Lecture 8 - MVPA Flashcards

1
Q

Multivariate pattern analysis (MVPA)

A
  • set of computational techniques to analyse patterns of brain activity by combining information across multiple voxels
  • uses the distribution of activity across voxels, rather than just the activity level of individual voxels
2
Q

univariate vs multivariate

A
  • univariate: analyze each voxel separately
    –> e.g., GLM
  • multivariate: combine information across multiple voxels
    –> e.g., decoding
3
Q

spatial average vs MVPA

A
  • spatial averaging can miss subtle differences in brain activity patterns across conditions
  • MVPA can differentiate the same conditions effectively, showing a “huge effect”
    –> suggests that MVPA is sensitive to distributed patterns of activity, not just the amplitude of activation in individual voxels
4
Q

thought experiment where oranges evoke low activity, and all other types of fruit evoke high activity

(univariate vs multivariate result interpretation)

A
  • univariate:
    1. region responds to all fruit but oranges
    2. region more active = more involved in task
  • multivariate:
    1. region carries more information about oranges
    2. pattern more distinct = more involved in task
5
Q

‘MVPA is more than a technique, it’s a mindset’

A
  1. MVPA asks about the presence of information
  2. MVPA can be useful even if we don’t fully understand the underlying information
  3. prediction does not equal explanation
6
Q

why use MVPA

A
  1. increased sensitivity compared to GLM approaches
  2. allows researchers to abstract and generalize findings beyond simple activation levels
    –> e.g., representational similarity
    –> allows comparisons between species, data modalities, and computational models
7
Q

2 main MVPA techniques

A
  1. representational similarity analysis (RSA)
  2. decoding
8
Q

MVPA critical logic

A

if category matters, then within-category similarity > across-category similarity
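
A minimal sketch of this logic on simulated data (all patterns and numbers below are made up for illustration): correlate activity patterns within and across two categories and compare the averages.

# Minimal sketch: within- vs across-category pattern similarity on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 100

# Hypothetical category "prototypes" plus noisy repetitions (4 runs per category).
proto_a, proto_b = rng.normal(size=(2, n_voxels))
runs_a = proto_a + rng.normal(scale=1.0, size=(4, n_voxels))
runs_b = proto_b + rng.normal(scale=1.0, size=(4, n_voxels))

def mean_corr(x, y, exclude_self=False):
    # average Pearson correlation between all pattern pairs from x and y
    vals = []
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            if exclude_self and i == j:
                continue  # skip correlating a pattern with itself
            vals.append(np.corrcoef(xi, yj)[0, 1])
    return np.mean(vals)

within = (mean_corr(runs_a, runs_a, exclude_self=True)
          + mean_corr(runs_b, runs_b, exclude_self=True)) / 2
across = mean_corr(runs_a, runs_b)
print(f"within-category r = {within:.2f}, across-category r = {across:.2f}")
# if category information is present, within-category similarity exceeds across-category similarity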

9
Q

representational similarity analysis (RSA)

A
  1. each stimulus elicits a certain pattern of activity in the brain or in a computational model
  2. representational dissimilarity matrices (RDMs) quantify and visualize the dissimilarity between the activity patterns elicited by each pair of stimuli (see the sketch below)
    –> red = higher dissimilarity
    –> blue = lower dissimilarity
  • the RDMs are compared, NOT the BOLD signal itself
    –> “abstraction away from measurement”: RSA compares these dissimilarity structures rather than raw signals such as BOLD responses directly
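
A minimal sketch of building an RDM from condition-wise activity patterns, assuming a (conditions x voxels) array; the data here are simulated, but in practice the rows would be, e.g., GLM beta patterns per stimulus.

# Minimal sketch: build a representational dissimilarity matrix (RDM) from activity patterns.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
n_conditions, n_voxels = 8, 200
patterns = rng.normal(size=(n_conditions, n_voxels))  # stand-in for one beta pattern per stimulus

# Correlation distance (1 - Pearson r) between every pair of condition patterns.
rdm = squareform(pdist(patterns, metric="correlation"))
print(rdm.shape)  # (8, 8): symmetric, zeros on the diagonal
# higher values = more dissimilar patterns (often plotted red); lower values = more similar (blue)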
10
Q

RSA - comparisons across species

A

though there are different BOLD responses in monkeys and humans, their RDMs are similar

11
Q

RSA - multidimensional scaling

A

REPRESENTATIONAL SPACES: a method that takes the dissimilarities among stimuli and arranges the stimuli in a 2D space such that distances in this space correspond to the dissimilarities
–> intuitive visualization of representational dissimilarities
–> by comparing the object space inferred from behavioral judgements with the one inferred from cortical activity, researchers can assess the extent to which patterns of neural activity align with human perceptual experience
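
A minimal sketch of multidimensional scaling applied to a precomputed RDM, using scikit-learn's MDS; the patterns are simulated for illustration.

# Minimal sketch: project an RDM into 2D with multidimensional scaling (MDS).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
patterns = rng.normal(size=(8, 200))                     # simulated condition patterns
rdm = squareform(pdist(patterns, metric="correlation"))  # representational dissimilarity matrix

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(rdm)  # one 2D point per condition
# Distances between points approximate the representational dissimilarities,
# so conditions that evoke similar patterns end up close together in the plot.
print(coords.shape)  # (8, 2)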

12
Q

RSA - simple hypothesis testing

A

RSA can be used to compare model RDMs (constructed to reflect hypotheses) with RDMs derived from fMRI data.
–> more complex models are also possible (e.g., comparing convolutional neural network layers to fMRI data)
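
A minimal sketch of such a hypothesis test, assuming a simple two-category model RDM and a simulated data RDM; only the upper triangle is compared, here with a Spearman correlation.

# Minimal sketch: compare a hypothesis (model) RDM to a data RDM.
import numpy as np
from scipy.stats import spearmanr

n = 8
# Model RDM for a categorical hypothesis: conditions 0-3 vs 4-7 form two categories.
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
model_rdm = (labels[:, None] != labels[None, :]).astype(float)  # 1 = different category

# Stand-in data RDM (in practice: e.g., a correlation-distance RDM from fMRI patterns).
rng = np.random.default_rng(3)
data_rdm = model_rdm + rng.normal(scale=0.5, size=(n, n))
data_rdm = (data_rdm + data_rdm.T) / 2  # keep it symmetric
np.fill_diagonal(data_rdm, 0)

# Compare only the upper triangles (each condition pair counted once, diagonal excluded).
iu = np.triu_indices(n, k=1)
rho, p = spearmanr(model_rdm[iu], data_rdm[iu])
print(f"model-data RDM correlation: rho = {rho:.2f}, p = {p:.3f}")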

13
Q

RSA - comparing computational models to fMRI data

A

models can be directly compared to brain activity based on representational similarity

14
Q

RSA - comparing different recording techniques

A

RSA enables combining different techniques, such as those with high spatial (fMRI) and temporal (e.g., MEG) resolution

15
Q

RDMs

A

versatile hubs for relating different representations

  1. distance metric matters: Pearson, Spearman, Euclidean distance (compared in the sketch after this list)
  2. data normalization matters: z-scoring, multivariate noise normalization
  3. dataset size matters: more is better
  4. always check diagonal of RDMs: diagonal reflects SNR
  5. cross-validate results: distance estimates can be corrupted by noise
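
A minimal sketch of point 1 on simulated patterns: the same data yield different RDMs depending on the distance metric.

# Minimal sketch: the choice of distance metric changes the RDM.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
patterns = rng.normal(size=(8, 200))  # simulated condition patterns

rdm_corr = squareform(pdist(patterns, metric="correlation"))  # 1 - Pearson r
rdm_eucl = squareform(pdist(patterns, metric="euclidean"))    # Euclidean distance

# Euclidean distance is sensitive to overall pattern amplitude, correlation distance is not,
# so the two RDMs can rank condition pairs differently.
iu = np.triu_indices(8, k=1)
rho, _ = spearmanr(rdm_corr[iu], rdm_eucl[iu])
print(f"agreement between the two RDMs: rho = {rho:.2f}")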
16
Q

encoding

A

predicting brain activity using stimulus features or behavioral features
–> e.g., GLM
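
A minimal sketch of the encoding direction on simulated data: a design matrix of stimulus features predicts one voxel's time course via ordinary least squares (a bare-bones GLM).

# Minimal sketch: encoding = predict brain activity from stimulus features (GLM-style).
import numpy as np

rng = np.random.default_rng(5)
n_timepoints, n_features = 200, 3

X = rng.normal(size=(n_timepoints, n_features))  # design matrix: stimulus/behavioral features
true_betas = np.array([2.0, -1.0, 0.0])
y = X @ true_betas + rng.normal(scale=1.0, size=n_timepoints)  # simulated voxel time course

# Ordinary least squares: estimate how strongly each feature drives this voxel.
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(betas, 2))  # should be close to [2, -1, 0]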

17
Q

decoding

A

reconstructing stimulus features or behavioral features from brain activity

18
Q

two basic types of decoders

A
  1. continuous outcomes are predicted by regression models
  2. categorical outcomes are predicted by pattern classifiers
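
A minimal sketch of both decoder types on simulated multivoxel patterns, using scikit-learn (Ridge regression for the continuous outcome, logistic regression for the categorical one); the data and signal are made up for illustration.

# Minimal sketch: regression decoder (continuous outcome) vs pattern classifier (categorical outcome).
import numpy as np
from sklearn.linear_model import Ridge, LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_trials, n_voxels = 100, 20
X = rng.normal(size=(n_trials, n_voxels))  # trial-wise multivoxel patterns

# Continuous outcome (e.g., stimulus orientation): decode with a regression model.
y_continuous = X[:, 0] * 2 + rng.normal(scale=0.5, size=n_trials)
print(cross_val_score(Ridge(alpha=1.0), X, y_continuous, cv=5).mean())  # R^2

# Categorical outcome (e.g., faces vs houses): decode with a linear classifier.
y_categorical = (X[:, 1] > 0).astype(int)
print(cross_val_score(LogisticRegression(max_iter=1000), X, y_categorical, cv=5).mean())  # accuracy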
19
Q

two types of pattern classifiers

A
  1. non-linear classifiers
  2. linear classifiers
    –> most common decoders in fMRI
20
Q

decoding: searchlight-based approaches

A
  1. center a sphere on each voxel and extract the multivoxel pattern within it
  2. train & test the decoder on that sphere's pattern
  3. assign the decoding performance to the sphere's center voxel (sketched below)
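
A minimal sketch of the searchlight logic on a simulated 1D "brain"; real searchlights use a 3D sphere in the volume (e.g., via nilearn), but the three steps above are the same.

# Minimal sketch of a searchlight: decode locally around each voxel, assign the score to the center.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_voxels, radius = 60, 100, 3
y = rng.integers(0, 2, size=n_trials)       # two conditions
X = rng.normal(size=(n_trials, n_voxels))
X[:, 40:50] += y[:, None] * 1.5             # voxels 40-49 carry condition information

scores = np.zeros(n_voxels)
for center in range(n_voxels):
    # 1. "sphere": local neighborhood of voxels around the center voxel
    lo, hi = max(0, center - radius), min(n_voxels, center + radius + 1)
    sphere = X[:, lo:hi]
    # 2. train & test a decoder on this local pattern (cross-validated accuracy)
    # 3. assign the performance to the center voxel
    scores[center] = cross_val_score(SVC(kernel="linear"), sphere, y, cv=5).mean()

print(np.argmax(scores), scores.max())  # the informative voxels should score highest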
21
Q

decoding: support vector machine (SVM)

A
  • solves the problem of how to choose a decision boundary that separates the classes
  • SVM solution:
    1. find the support vectors
    2. place the decision boundary so that it maximizes the margin between them
  • in fMRI, SVMs are trained on many more than two voxels
  • they find high-dimensional decision boundaries (= HYPERPLANES) rather than 1D lines
  • commonly used since SVMs are robust and versatile with high-dimensional data
  • but computationally expensive
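
A minimal sketch of linear SVM decoding on simulated ROI patterns with scikit-learn; the fitted coef_ is the normal vector of the separating hyperplane in voxel space.

# Minimal sketch: linear SVM decoding of two conditions from simulated ROI patterns.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_trials, n_voxels = 80, 200
y = np.repeat([0, 1], n_trials // 2)      # condition labels (e.g., faces vs houses)
X = rng.normal(size=(n_trials, n_voxels))
X[y == 1, :20] += 0.8                     # weak, distributed signal in 20 voxels

svm = SVC(kernel="linear", C=1.0)
acc = cross_val_score(svm, X, y, cv=5).mean()  # chance level = 0.5
print(f"cross-validated decoding accuracy: {acc:.2f}")

svm.fit(X, y)
print(svm.coef_.shape)  # (1, n_voxels): the hyperplane's normal vector, one weight per voxel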
22
Q

decoding: common classifiers in fMRI

A
  1. support vector machine (SVM)
  2. Gaussian naive Bayes
  3. linear discriminant analysis (LDA)
23
Q

Gaussian naive Bayes

A
  • computes the Bayesian probability of belonging to a specific class
  • high accuracy
  • fast to compute
  • assumes normality and independence of voxels
24
Q

linear discriminant analysis

A
  • maximizes ratio of between-class variance to within-class variance
  • high accuracy even in small samples
  • assumes normality and equal covariance across classes
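
A minimal sketch comparing the three classifiers from the previous cards on the same simulated patterns (scikit-learn classes: SVC, GaussianNB, LinearDiscriminantAnalysis); the accuracies are for illustration only.

# Minimal sketch: compare common fMRI classifiers on the same simulated patterns.
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n_trials, n_voxels = 80, 30
y = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_voxels))
X[y == 1, :10] += 0.7                     # some informative voxels

classifiers = {
    "linear SVM": SVC(kernel="linear"),
    "Gaussian naive Bayes": GaussianNB(),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")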
25
Q

patterns

A

when we talk about patterns, we mean the distribution of voxel intensities, not their spatial organization

  • different distance metrics are sensitive to different aspects of the patterns
  • the reverse-inference problem remains (i.e., what exactly drives decoding is often unclear)
26
Q

How does representational similarity analysis (RSA) allow bridging different domains (e.g., data types, species, models)?

A

By comparing dissimilarity matrices obtained for each domain

27
Q

Support vector machines are an example of which MVPA technique?

A

Linear pattern classifiers

28
Q

What is the primary goal of multidimensional scaling?

A

To visualize the similarity between items of a dataset (e.g., stimuli) in a 2D space

29
Q

True or false: MVPA is always better than classical univariate analyses (e.g., the GLM)

A

Not true; which method is optimal depends on your research question