Lecture 11: Group Level Analyses and Statistics Flashcards
In most studies, we want to be able to say something general about our whole sample of participants
group level analyses in sensor space
to do this we can
average across individuals
In most studies, we want to be able to say something general about our whole sample of participants
group level analyses in sensor space
To do this we average across individuals
This helps us to
reduce noise and get a clearer picture of the brain’s response, and visualise our effects
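As a rough illustration (not lecture code), a minimal NumPy sketch of averaging across individuals, assuming each participant's evoked response is stored as a hypothetical (participants × sensors × time points) array:

```python
import numpy as np

# Hypothetical data: one evoked response per participant,
# shape (n_participants, n_sensors, n_timepoints)
rng = np.random.default_rng(0)
evoked_per_participant = rng.normal(size=(20, 64, 300))

# Grand average: mean across the participant axis.
# Noise that is random across individuals tends to cancel,
# giving a clearer picture of the shared brain response.
grand_average = evoked_per_participant.mean(axis=0)  # shape (64, 300)
print(grand_average.shape)
```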
In most studies, we want to be able to say something general about our whole sample of participants
To do this we average across individuals
This helps us to reduce noise and get a clearer picture of the brain’s response, and visualise our effects
In group level analyses in sensor space, we also conduct
statistical tests to make comparisons, e.g., between conditions of an experiment.
We can do this with the different types of results we have, both in sensor space and in source space
Group level analyses in sensor space are easier than group level analyses in source space because
sensor 1 is the same across participants
We can do group level analyses in (2)
- Sensor space
- Source space
We can do group level analyses in source space
Diagram of group level analyses in source space across participants - (3)
- Activity map at a single frequency, e.g., alpha
- Activity map at a single timepoint
- ROI time course
We can do group analyses in source space, where we can also
pull out ROI (Scout) time courses and do statistics on these values
For group level analyses, source data must be transformed into
a shared MNI space
Transforming MEG source data into group space is useful as
we are able to average source localised data across participants in a common coordinate space (e.g., MNI or a group-averaged brain)
How does transforming MEG source data into group space work?
Works by inflating each hemisphere and then aligning them to a template (easier to align spheres than folding patterns)
Transforming MEG source data into group space allows us to do
group-level visualisation and statistics in source space
For group level statistics, our statistical tests might be - (2)
- Parametric
- Non-parametric
In parametric tests, in terms of assumptions, there are
stronger assumptions (including normally distributed data)
Parametric tests for group level statistics are - (2)
Usually t-tests
Statistical significance is then calculated based on the distribution of the test statistic
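For example (a sketch with hypothetical values, not lecture code), a paired t-test across participants on one summary value per condition, using SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical per-participant values (e.g., mean amplitude in a time window)
cond_a = rng.normal(loc=1.0, scale=0.5, size=20)
cond_b = rng.normal(loc=0.8, scale=0.5, size=20)

# Paired (within-participant) t-test; the p-value comes from the
# parametric t distribution, which assumes normally distributed data.
t_stat, p_val = stats.ttest_rel(cond_a, cond_b)
print(t_stat, p_val)
```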
Group level statistics that are non-parametric have fewer assumptions - (3)
Including non-parametric versions of standard tests, e.g., the Mann-Whitney U test
Safer as neuroimaging data may not be normally distributed (esp when doing many tests)
But less power
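A non-parametric equivalent, again only as an illustrative sketch with made-up numbers: SciPy's Mann-Whitney U test for two independent groups of participants.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical values, one per participant in each group
group_1 = rng.normal(size=15)
group_2 = rng.normal(size=15) + 0.3

# Rank-based test: no assumption that the data are normally distributed,
# but typically less power than the parametric t-test.
u_stat, p_val = stats.mannwhitneyu(group_1, group_2)
print(u_stat, p_val)
```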
For group level statistics, instead of standard parametric or non-parametric tests, we can do a resampling-based approach - (2)
Includes permutation and bootstrapping methods
Avoids assumptions about the form of the data, and is therefore non-parametric and quite robust
In resampling-based approaches for group-level statistics - (7)
Say we want to compare values for conditions A and B
We calculate our test statistic as normal, e.g., a t-statistic
But then we build a null distribution using our data
On each resampling iteration, we scramble the group membership of the data, and recalculate the test
The distribution of these resampled statistics will still center on 0 (i.e., no difference) but doesn't have to be parametric (normally distributed)
We calculate a p-value by comparing our original t-statistic to the distribution of the resampled t-statistics
Increasingly popular but much slower to run
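A minimal permutation sketch (my own illustration, with hypothetical data and group sizes), scrambling group membership on each resampling iteration as described above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical values for two independent groups of participants
group_a = rng.normal(loc=0.4, size=18)
group_b = rng.normal(loc=0.0, size=18)

# Observed test statistic, calculated as normal
t_obs, _ = stats.ttest_ind(group_a, group_b)

# Build a null distribution by scrambling group membership
pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
null_t = np.empty(5000)
for i in range(null_t.size):
    shuffled = rng.permutation(pooled)
    null_t[i], _ = stats.ttest_ind(shuffled[:n_a], shuffled[n_a:])

# p-value: how often the resampled statistics are at least as extreme
# as the observed one (two-sided)
p_val = np.mean(np.abs(null_t) >= abs(t_obs))
print(t_obs, p_val)
```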
How do we apply the group-level statistics?
- The test is repeated independently at each time point/sensor/location/frequency
How to apply group-level statistics
The test is repeated independently at each time point/sensor/location/frequency
E.g., when comparing the time course of activity in two conditions, or a single condition - (3)
In each participant, subtract the time courses for each condition
Compare the difference to 0 with a t-test at each time point at the group level
For a single condition, simply compare to 0 without a subtraction
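As a sketch of that step (assuming hypothetical participants × time points arrays for a single sensor or ROI), the difference time course is compared to 0 at every time point:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_participants, n_timepoints = 20, 300
# Hypothetical time courses for two conditions
cond_a = rng.normal(size=(n_participants, n_timepoints))
cond_b = rng.normal(size=(n_participants, n_timepoints))

# Subtract the time courses within each participant...
difference = cond_a - cond_b

# ...then run a one-sample t-test against 0 at every time point
t_vals, p_vals = stats.ttest_1samp(difference, popmean=0, axis=0)
print(t_vals.shape)  # one t-value per time point

# For a single condition, compare it to 0 directly (no subtraction)
t_single, p_single = stats.ttest_1samp(cond_a, popmean=0, axis=0)
```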
How to apply group-level statistics
The test is repeated independently at each time point/sensor/location/frequency
E.g., for sensors - (3), the same as at each time point
Same for topographies/current density maps across the whole head
Calculate difference and compare to 0 or compare single condition to 0
Do t-tests across the group at each sensor/vertex
How do we apply group level statistics for multivariate analyses - (2)
For multivariate analyses, decoding accuracy over time would be tested with a t-test vs. chance at each time point
Time-frequency plots in two conditions would be compared with a t-test at each time and frequency pair
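A brief sketch of the decoding case (hypothetical accuracies): per-participant decoding accuracy at each time point is tested against chance (0.5 for two classes):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Hypothetical decoding accuracies, shape (participants, timepoints)
accuracy = rng.normal(loc=0.55, scale=0.05, size=(20, 100))

# One-sample t-test against chance level at each time point
chance = 0.5
t_vals, p_vals = stats.ttest_1samp(accuracy, popmean=chance, axis=0)
print(t_vals.shape)  # one t-value per time point
```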