The General Linear Model and Single-Subject & Group Analyses Flashcards
When are the data ready for statistical analysis?
After reconstruction, realignment, stereotactic normalisation and possibly smoothing
What are the two steps for statistical analysis?
- Statistics indicating evidence against a null hypothesis of no effect at each voxel are computed - this results in an image of statistics
- The statistic image must be assessed, reliably locating voxels where an effect is exhibited whilst limiting the possibility of false positives
What is used for analysis of functional mapping experiments?
SPM
What is an auditory block-design experiment?
• One session, one subject in the scanner
• Passive word listening versus rest
• 7 cycles of rest and listening
• Blocks of 6 scans with a 7-second TR
• Look for voxel time series in the brain that correspond to the stimulus function
• There is a greater response in the brain when listening to words than when resting
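The block design above can be sketched as a boxcar stimulus function. This is an illustrative assumption that each cycle is one rest block followed by one listening block of 6 scans each:

```python
import numpy as np

# Sketch of the block-design stimulus function (assumption: each cycle =
# one rest block then one listening block, 6 scans each, TR = 7 s)
n_cycles, block_len = 7, 6
TR = 7.0  # seconds per scan

# 0 = rest, 1 = listening, repeated for each of the 7 cycles
stimulus = np.tile(np.r_[np.zeros(block_len), np.ones(block_len)], n_cycles)
scan_onsets = np.arange(stimulus.size) * TR  # onset time of each scan in seconds
```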
Why do we model the measured data?
To make inferences about effects of interest
How is the measured data modelled?
- Decompose the data into effects and error
- Form a statistic using estimates of effects and error
- Partition the data into two components: signal and noise
- The partition is obtained using a linear model
Why is a statistic computed?
To form the ratio of the effect estimate to the error estimate
When do you get a significant effect?
Even a very small effect size can be significant if it is very consistent (i.e. the error estimate is small)
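A minimal simulated sketch of this ratio: a small effect (0.3) with low noise (0.1) still produces a large t-statistic. All values here are illustrative assumptions, not SPM's implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated single-voxel data: small but consistent effect (values assumed)
x1 = np.tile(np.r_[np.zeros(6), np.ones(6)], 7)   # stimulus function
X = np.column_stack([x1, np.ones_like(x1)])       # design matrix [stimulus, constant]
y = X @ np.array([0.3, 100.0]) + rng.normal(scale=0.1, size=X.shape[0])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # effect estimates
resid = y - X @ beta
dof = X.shape[0] - X.shape[1]                     # residual degrees of freedom
sigma2 = resid @ resid / dof                      # error variance estimate
c = np.array([1.0, 0.0])                          # contrast: listening > rest
t = (c @ beta) / np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)  # effect / error
```

Because the noise is small relative to the effect, `t` comes out large even though the effect itself is tiny compared with the baseline signal.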
What happens after the stimulus function?
- Data
- Linear model
- Effect estimate
- Error estimate
- Statistic
(Mass univariate) voxel-wise time series analysis:
• As the data have been corrected for movement and normalised, look at a single voxel over time
• This corresponds to one time series per voxel
• Do modelling of the time series and describe how we expect it to evolve over time
- Model specification
- Parameter estimation
- Hypothesis
- Statistic
• Repeat the same procedure for every voxel in the brain
• The same voxel-wise analysis is applied to all the voxels in the brain, one at a time
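The mass-univariate idea can be sketched as one least-squares fit applied to every voxel's time series at once; the sizes and random data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_scans, n_voxels = 84, 1000                     # illustrative sizes

x1 = np.tile(np.r_[np.zeros(6), np.ones(6)], 7)  # stimulus function
X = np.column_stack([x1, np.ones(n_scans)])      # same design matrix for every voxel
Y = rng.normal(size=(n_scans, n_voxels))         # one column of data per voxel

# Mass-univariate: fit the same model to every voxel's time series
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)    # one pair of estimates per voxel
```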
What is a single voxel regression model?
• Take a single voxel – the signal intensity over time at that voxel
• Express it as a linear combination of two things
- Stimulus function e.g. listening to words and resting [e.g. rest = 0, 1= listening]
- A constant regressor – equal to one at every time point – models the average level of the time series
• The observed data can be expressed as a linear combination of these two terms, weighted by B1 and B2 [the unknown parameters]
• Error – noise sitting on top of the measurements
• The data y [time series] are expressed as a linear combination of two vectors – the stimulus function and the constant (average signal) – weighted by B1 and B2, plus an error term: a linear model
What is the equation for the single voxel regression model?
y = x1B1 + x2B2 + e
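The single-voxel regression model can be simulated directly; the parameter values and noise level below are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Regressors for one voxel (block structure as in the auditory experiment)
x1 = np.tile(np.r_[np.zeros(6), np.ones(6)], 7)  # stimulus function (0 = rest, 1 = listening)
x2 = np.ones_like(x1)                            # constant regressor (session mean)

beta1, beta2 = 2.0, 100.0                        # "unknown" parameters, assumed for simulation
e = rng.normal(scale=0.5, size=x1.size)          # noise on top of the measurements

y = x1 * beta1 + x2 * beta2 + e                  # y = x1*B1 + x2*B2 + e
```

Listening scans sit roughly `beta1` units above the baseline level `beta2`, which is what the model then has to recover from the data.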
What is the General Linear Model (GLM) described by?
data y – an N x 1 vector of data points expressed as a linear combination of X and B
• X contains all the columns – everything that you expect to see in the data
• B is the vector of unknown parameters – unknown quantities that we want to estimate – how much of the response is observed in a particular voxel
• Error term is the time series the same dimension as the data
What is the GLM specified by?
- Design matrix X
- Assumptions about the error term e
How is the time series visualised?
As an image – dark regions mean small values and light regions mean higher values (fMRI time series across voxels)
What is the time series expressed as?
A linear combination of the stimulus function and the constant term, plus error
Why do the parameters need to be estimated?
To minimise the error – choose the parameters so that the residuals (the difference between the data and the model fit) are as close to 0 as possible
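Ordinary least squares makes this concrete: pick B to minimise the squared residuals. A sketch with simulated data (the "true" parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data under assumed true parameters (illustrative only)
x1 = np.tile(np.r_[np.zeros(6), np.ones(6)], 7)
X = np.column_stack([x1, np.ones_like(x1)])      # design matrix [stimulus, constant]
y = X @ np.array([2.0, 100.0]) + rng.normal(scale=0.5, size=X.shape[0])

# Ordinary least squares: choose beta to minimise ||y - X @ beta||^2
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat                     # error estimate
```

With a constant regressor in the model, the residuals sum to (numerically) zero – the fit has absorbed the average level of the series.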
What are problems of this model with fMRI time series?
- The BOLD response has a delayed and dispersed shape
- The BOLD signal includes substantial amount of low-frequency noise (e.g. due to scanner drift)
- Due to breathing, heartbeat and unmodelled neuronal activity, the errors are serially correlated. This violates the assumptions of the noise model in the GLM
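The first problem – the delayed, dispersed BOLD shape – is usually handled by convolving the stimulus function with a haemodynamic response function before fitting. A rough double-gamma sketch; the parameter values are common defaults assumed here, not taken from the text:

```python
import numpy as np
from math import gamma as gamma_fn

def hrf(t, peak=6.0, under=16.0, ratio=1.0 / 6.0):
    """Rough double-gamma HRF; parameter values are assumed defaults."""
    g = lambda t, a: t ** (a - 1) * np.exp(-t) / gamma_fn(a)
    return g(t, peak) - ratio * g(t, under)

TR = 7.0
boxcar = np.tile(np.r_[np.zeros(6), np.ones(6)], 7)  # stimulus function
t = np.arange(0.0, 32.0, TR)                         # HRF sampled at the scan times
regressor = np.convolve(boxcar, hrf(t))[: boxcar.size]
```

The convolved regressor rises after stimulus onset rather than at it, capturing the delayed and dispersed response the raw boxcar misses.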