lecture 8 - MVPA Flashcards
Multivariate pattern analysis (MVPA)
- set of computational techniques to analyse patterns of brain activity by combining information across multiple voxels
- uses the distribution of activity across voxels, rather than just activity level of individual voxels
univariate vs multivariate
- univariate: analyzing each voxel separately
–> e.g., GLM
- multivariate: combine information across multiple voxels
–> e.g., decoding
spatial average vs MVPA
- spatial averaging could potentially miss subtle differences in brain activity patterns across different conditions
- MVPA, in contrast, can differentiate such conditions effectively, showing a “huge effect”
–> suggests that MVPA is sensitive to distributed patterns of activity, not just the amplitude of activation in individual voxels
thought experiment where oranges evoke low activity, and all other types of fruit evoke high activity
(univariate vs multivariate result interpretation)
- univariate:
1. region responds to all fruit but oranges
2. region more active = more involved in task
- multivariate:
1. region carries more information about oranges
2. pattern more distinct = more involved in task
‘MVPA is more than a technique, it’s a mindset’
- MVPA asks about the presence of information
- MVPA can be useful even if we don’t fully understand the underlying information
- prediction does not equal explanation
why use MVPA
- increased sensitivity compared to GLM approaches
- allows researchers to abstract and generalize findings beyond simple activation levels
–> e.g., representational similarity
–> allows comparisons between species, data modalities, and computational models
2 main MVPA techniques
- representational similarity analysis (RSA)
- decoding
MVPA critical logic
if category matters, then within-category similarity > across-category similarity
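A minimal sketch of this logic on toy numbers (all values invented for illustration, not from the lecture):

```python
# Toy illustration of within- vs across-category pattern similarity.
# Rows are trials (2 of category A, 2 of category B), columns are voxels.
import numpy as np

patterns = np.array([
    [1.0, 0.2, 0.9, 0.1],   # A, trial 1
    [0.9, 0.3, 1.0, 0.2],   # A, trial 2
    [0.1, 1.0, 0.2, 0.9],   # B, trial 1
    [0.2, 0.9, 0.1, 1.0],   # B, trial 2
])
corr = np.corrcoef(patterns)              # pairwise correlations between trial patterns
within = (corr[0, 1] + corr[2, 3]) / 2    # same-category pairs
across = corr[:2, 2:].mean()              # different-category pairs
print(within > across)                    # True if the category structure is present
```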
representational similarity analysis (RSA)
- each stimulus elicits a certain pattern of activity in the brain or in a computational model
- dissimilarity matrices are used to quantify and visualize the dissimilarity between the activity patterns elicited by each pair of stimuli.
–> red = higher dissimilarity
–> blue = more similarity
- representational dissimilarity matrices (RDMs) are compared, NOT the BOLD signal
–> “abstraction away from measurement”: RSA compares the structure of activity patterns (via RDMs) rather than comparing raw signals, such as the BOLD response, directly
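A hedged sketch of how such an RDM might be computed from condition-wise activity patterns (random data, purely illustrative; correlation distance = 1 − Pearson r):

```python
# Build an RDM from (stimuli x voxels) activity patterns using correlation distance.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
patterns = rng.normal(size=(8, 100))       # 8 stimuli x 100 voxels (fake data)

rdm = squareform(pdist(patterns, metric="correlation"))   # 8 x 8 matrix, diagonal = 0
```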
RSA - comparisons across species
though there are different BOLD responses in monkeys and humans, their RDMs are similar
RSA - multidimensional scaling
representational spaces: multidimensional scaling (MDS) takes the dissimilarities among stimuli and represents them in a 2D space such that distances in this space correspond to the dissimilarities
–> intuitive visualization of representational dissimilarities
–> by comparing the object space inferred from behavioral judgements with the one inferred from cortical activity, researchers can assess the extent to which neural activity patterns align with human perceptual experience
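A sketch of multidimensional scaling on a precomputed RDM, assuming scikit-learn's `MDS` and random illustrative data:

```python
# Project an RDM into 2D so that inter-point distances approximate the dissimilarities.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
patterns = rng.normal(size=(8, 100))                      # fake activity patterns
rdm = squareform(pdist(patterns, metric="correlation"))   # 8 x 8 dissimilarity matrix

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(rdm)                           # (8, 2) coordinates, one per stimulus
```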
RSA - simple hypothesis testing
RSA can be used to compare representational dissimilarity matrices (model RDMs to reflect hypotheses) to fMRI data RDMs.
–> more complex models are also possible (e.g., comparing convolutional neural networks layers to fMRI data)
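A sketch of this kind of hypothesis test: correlate the off-diagonal entries of a hypothetical category-model RDM with a (here simulated) data RDM.

```python
# Compare a model RDM (0 = same category, 1 = different category) to a data RDM.
import numpy as np
from scipy.stats import spearmanr

labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])                       # hypothetical categories
model_rdm = (labels[:, None] != labels[None, :]).astype(float)

rng = np.random.default_rng(1)
noise = rng.normal(scale=0.3, size=model_rdm.shape)
data_rdm = model_rdm + (noise + noise.T) / 2                      # fake, noisy, symmetric data RDM

iu = np.triu_indices(len(labels), k=1)                            # upper triangle, no diagonal
rho, p = spearmanr(model_rdm[iu], data_rdm[iu])
print(rho, p)
```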
RSA - comparing computational models to fMRI data
models can be directly compared to brain activity based on representational similarity
RSA - comparing different recording techniques
RSA enables combining different techniques, such as those with high spatial (fMRI) and temporal (e.g., MEG) resolution
RDMs
versatile hubs for relating different representations
- distance metric matters: pearson, spearman, euclidean distance
- data normalization matters: z-scoring, multivariate noise normalization
- dataset size matters: more is better
- always check diagonal of RDMs: diagonal reflects SNR
- cross-validate results: distance estimates can be corrupted by noise
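Two of these points in a small sketch (fake data): z-score each voxel before computing distances, and check how reproducible the RDM is across independent data halves.

```python
# Normalize patterns, then correlate RDMs from two independent runs as a noise check.
import numpy as np
from scipy.stats import zscore, spearmanr
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
run1 = zscore(rng.normal(size=(8, 100)), axis=0)   # 8 stimuli x 100 voxels, z-scored per voxel
run2 = zscore(rng.normal(size=(8, 100)), axis=0)   # same stimuli, independent run

rdm1 = pdist(run1, metric="correlation")           # condensed RDMs (upper triangles)
rdm2 = pdist(run2, metric="correlation")
rho, _ = spearmanr(rdm1, rdm2)                     # low rho -> distance estimates dominated by noise
print(rho)
```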
encoding
predicting brain activity using stimulus features or behavioral features
–> e.g., GLM
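A minimal encoding sketch in this spirit, assuming made-up stimulus features and voxel responses:

```python
# Predict each voxel's response from stimulus features with a linear model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
features = rng.normal(size=(40, 5))       # 40 trials x 5 stimulus/behavioral features
responses = rng.normal(size=(40, 100))    # 40 trials x 100 voxels

enc = LinearRegression().fit(features, responses)   # one weight vector per voxel
predicted = enc.predict(features)                   # predicted activity, trials x voxels
```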
decoding
reconstructing stimulus features or behavioral features from brain activity
two basic types of decoders
- continuous outcomes are predicted by regression models
- categorical outcomes are predicted by pattern classifiers
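A sketch of both decoder types on fabricated data (Ridge and LinearSVC are stand-ins; any regression model or classifier would do):

```python
# Continuous outcome -> regression; categorical outcome -> pattern classifier.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 100))               # 60 trials x 100 voxels

y_continuous = rng.normal(size=60)           # e.g., a rating or reaction time per trial
reg = Ridge().fit(X, y_continuous)

y_categorical = rng.integers(0, 2, size=60)  # e.g., faces (0) vs houses (1)
clf = LinearSVC(max_iter=10000).fit(X, y_categorical)
```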
two types of pattern classifiers
- non-linear classifiers
- linear classifiers
–> most common decoders in fMRI
decoding: searchlight-based approaches
- center a sphere on each voxel and extract multivoxel pattern
- train & test the decoder for sphere
- assign decoding performance to center voxel
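A hedged sketch with nilearn's `SearchLight` (argument and attribute names may differ across nilearn versions; the Nifti images here are random and purely illustrative):

```python
# Searchlight decoding: fit a classifier in a sphere around every voxel and map its accuracy.
import numpy as np
import nibabel as nib
from nilearn.decoding import SearchLight

rng = np.random.default_rng(5)
affine = np.eye(4)
imgs = nib.Nifti1Image(rng.normal(size=(10, 10, 10, 40)), affine)   # 40 fake volumes
mask = nib.Nifti1Image(np.ones((10, 10, 10), dtype=np.int8), affine)
y = rng.integers(0, 2, size=40)                                     # fake condition labels

sl = SearchLight(mask_img=mask, radius=6.0, estimator="svc", cv=5)
sl.fit(imgs, y)
accuracy_map = sl.scores_   # per-voxel decoding performance (attribute name may vary by version)
```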
decoding: support vector machine (SVM)
- solves problem: how do we decide on a decision boundary that separates classes
- SVM solution:
1. find support vectors
2. the decision boundary maximizes the margin between them
- in fMRI, SVMs are trained on more than two voxels
- they find high-dimensional decision boundaries (= HYPERPLANES) rather than 1D lines
- commonly used since SVMs are robust and versatile in high-dimensional data
- computationally expensive
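A minimal sketch of cross-validated SVM decoding on random data (accuracy should sit near the 50% chance level here; in real data, above-chance accuracy is the evidence of interest):

```python
# Linear SVM decoding of two conditions from multivoxel patterns, with 5-fold cross-validation.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(80, 500))             # 80 trials x 500 voxels
y = rng.integers(0, 2, size=80)            # condition labels

scores = cross_val_score(LinearSVC(C=1.0, max_iter=10000), X, y, cv=5)
print(scores.mean())                       # mean decoding accuracy across folds
```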
decoding: common classifiers in fMRI
- support vector machine
- gaussian naive bayes
- linear discriminant analysis
gaussian naive bayes
- computes bayesian probability of belonging to a specific class
- high accuracy
- fast to compute
- assumes normality and independence of voxels
linear discriminant analysis
- maximizes ratio of between-class variance to within-class variance
- high accuracy even in small samples
- assumes normality and equal covariance across classes
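A sketch running the three classifiers above on the same fabricated dataset; with random data, all should hover near chance (50%).

```python
# Compare SVM, Gaussian naive Bayes, and LDA with identical cross-validation.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(80, 50))              # 80 trials x 50 voxels
y = rng.integers(0, 2, size=80)

for name, clf in [("SVM", LinearSVC(max_iter=10000)),
                  ("GNB", GaussianNB()),
                  ("LDA", LinearDiscriminantAnalysis())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(name, round(acc, 2))
```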
patterns
when we talk about patterns, we mean the distribution of voxel intensities, not their spatial organization
- different distance metrics are sensitive to different patterns
- reverse inference problem remains (e.g., what drives decoding)
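A tiny illustration of the metric point: correlation distance ignores a uniform shift in activation level, Euclidean distance does not (toy numbers).

```python
# Correlation distance vs Euclidean distance on two patterns with the same shape.
import numpy as np
from scipy.spatial.distance import correlation, euclidean

a = np.array([1.0, 2.0, 3.0, 4.0])
b = a + 10.0                      # identical pattern shape, higher overall activation

print(correlation(a, b))          # ~0: the pattern (shape) is the same
print(euclidean(a, b))            # 20.0: the overall level differs a lot
```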
How does representational similarity analysis (RSA) allow bridging different domains (e.g., data types, species, models)?
By comparing dissimilarity matrices obtained for each domain
Support vector machines are an example of which MVPA technique?
Linear pattern classifiers
What is the primary goal of multidimensional scaling?
To visualize the similarity between items of a dataset (e.g., stimuli) in a 2D space
True or false: MVPA is always better than classical univariate analyses (e.g., the GLM)
Not true, which method is optimal depends on your research question