Investigating cognition with structural MRI Flashcards
What is the link between measuring brain volume and cognition?
Linking regional brain volume change to cognitive performance is a powerful way of providing evidence that a cognitive function depends on that region
Grey matter is often the focus of structural MRI investigations, since it is home to functionally specific ROIs
How would we measure the hippocampus using structural MRI in a cognition study?
- Brain extraction (to make the scan easier to analyse in subsequent steps)
- Brain vs. skull size estimation (for our normalising factor)
- Automatic hippocampus segmentation (FSL’s “FIRST”) - the left and right hippocampi have the values 17 and 53 respectively in the resulting label image. We can then use tools such as fslstats to count the voxels with a value of 17 or 53 and multiply by the voxel volume, thus calculating the size of each hippocampus
- Perform these steps on every scan in the dataset (“for loops” are your best friend!) and normalise all hippocampal volumes by the brain-size scaling factor; we can then compare everyone’s hippocampal volume while controlling for natural variation in brain size
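The volume calculation in the last two steps can be sketched in Python. In practice fslstats does the counting directly on the NIfTI image; here a flat list of label values stands in for the FIRST output, and the scaling factor is a hypothetical value of the kind SIENAX produces.

```python
# Toy sketch of the hippocampal volume calculation performed by tools
# such as fslstats. Label values follow FIRST's convention:
# 17 = left hippocampus, 53 = right hippocampus.
LEFT_HIPPO, RIGHT_HIPPO = 17, 53

def hippocampal_volume(labels, voxel_volume_mm3):
    """Count labelled voxels and convert the counts to mm^3."""
    left = sum(1 for v in labels if v == LEFT_HIPPO)
    right = sum(1 for v in labels if v == RIGHT_HIPPO)
    return left * voxel_volume_mm3, right * voxel_volume_mm3

# A tiny fake label image (flattened): 3 left and 2 right hippocampal
# voxels, 1 mm isotropic voxels.
labels = [0, 17, 17, 53, 0, 17, 53, 0]
left_mm3, right_mm3 = hippocampal_volume(labels, 1.0)

# Normalise by a head-size scaling factor to control for natural
# brain-size variation (value here is purely illustrative).
scaling_factor = 1.2
normalised_left = left_mm3 * scaling_factor
```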
Explain the process of brain extraction
With all images organised and assembled in the same directory, a single loop in FSL will run BET (the Brain Extraction Tool) on every image
These outputs must be checked to make sure there are no major failures
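Batch brain extraction is usually a shell for-loop over `bet`; the sketch below builds the equivalent command line per scan in Python, with hypothetical filenames. The `-f` flag is BET's fractional intensity threshold (0.5 is its default), and `_brain` is the conventional output suffix.

```python
def bet_commands(scan_names, frac=0.5):
    """Build one FSL BET command per T1 scan filename.

    -f sets BET's fractional intensity threshold; outputs get a
    _brain suffix so originals are never overwritten.
    """
    cmds = []
    for scan in scan_names:
        out = scan.replace(".nii.gz", "_brain.nii.gz")
        cmds.append(f"bet {scan} {out} -f {frac}")
    return cmds

# One command per scan in the study directory (filenames hypothetical):
cmds = bet_commands(["sub01_T1.nii.gz", "sub02_T1.nii.gz"])
```

Each resulting string can then be run in the shell; the outputs still need visual checking for major failures.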
Explain tissue segmentation
The goal of brain extraction is to prepare the images for FSL’s tissue-segmentation tool, FAST
FAST outputs a grey matter (GM) tissue probability map (TPM), which can be overlaid on a participant’s skull-stripped brain
The TPM is a quantitative image - each voxel value now has meaning, conveying the fraction of that voxel which contains GM
Explain templates and registrations
A VBM experiment requires all our images to share the same space
Template images like the MNI152 are very useful for facilitating this
But every registration introduces error and noise into our data, and the more the target image differs from the scan being registered, the more noise and error the result will contain
We therefore try to make a template image from the data we are using in the analysis
This means our target is as geometrically similar to our scans as possible, minimising registration error
What is a VBM study?
VBM is a “global” kind of volumetric brain analysis.
It is essentially a single experiment which enables you to identify local grey matter changes and other associations across the whole brain.
It does not need an a priori hypothesis, but you should only use it if you really can’t generate one
Is it ok to have uneven group sizes in VBM?
It is generally ok to have uneven group sizes, but we still need to consider how this will affect the template image
If we are testing different groups, we are probably hypothesising that one group has consistently different brains to the other
We need to be sure that any such differences are evenly weighted in our template image
Otherwise our template image would look disproportionately more like one group than the other
This would favour one set of registrations leading to an unbalanced spread of noise across our experiment
What does FSL VBM require for template and registration?
Requires a file which lists the names of all the scans which will contribute to our own template image
If we are testing a group design AND our group sizes are unequal, we need to randomly remove some scan names from this list to make it balanced
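The random cut can be sketched with the stdlib `random` module; the scan lists below are hypothetical, and a fixed seed makes the subsample reproducible.

```python
import random

def balance_groups(group_a, group_b, seed=0):
    """Randomly subsample so both groups contribute equally
    to the study-specific template list."""
    rng = random.Random(seed)  # fixed seed => reproducible cut
    n = min(len(group_a), len(group_b))
    return rng.sample(group_a, n), rng.sample(group_b, n)

# Hypothetical scan lists: patients outnumber controls.
patients = ["p01.nii.gz", "p02.nii.gz", "p03.nii.gz", "p04.nii.gz"]
controls = ["c01.nii.gz", "c02.nii.gz", "c03.nii.gz"]
bal_patients, bal_controls = balance_groups(patients, controls)
```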
How is the MNI152 brain used to make our own template brain?
Used as the starting point for making a template brain
The initial registrations first perform a 12-DOF linear registration to approximate the shape of the MNI152, then finish with a non-linear registration on top of that transformation
With the steps of that transformation from a participant’s native space to MNI152 space calculated, we apply the same transformation to their GM TPM (coregistration)
Running this for every participant gives us a set of GM TPMs which are in MNI152 space
The TPMs are then averaged to create a new template image
With this template created, we go back to our original native-space GM TPMs and perform a fresh linear and non-linear registration between those and this GM template
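The averaging step is simply a voxelwise mean across the registered TPMs. A minimal sketch with toy three-voxel images:

```python
def average_tpms(tpms):
    """Voxelwise mean of registered GM TPMs -> study-specific template."""
    n = len(tpms)
    return [sum(vals) / n for vals in zip(*tpms)]

# Three participants' registered GM TPMs (toy 3-voxel images):
template = average_tpms([[0.9, 0.3, 0.0],
                         [0.7, 0.5, 0.0],
                         [0.8, 0.4, 0.0]])
```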
What is the basis of voxelwise analysis?
Every GM TPM should now have a reasonably high quality transformation into the same shared space
This means that if we examine the same voxel in each transformed image, the registered TPM values reflect each participant’s quantity of GM at the same anatomical point
This is the basis of performing voxelwise analysis - running a statistical test for every voxel in the brain
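FSL normally does this via the GLM with permutation-based inference (randomise); the sketch below shows only the bare per-voxel idea, using a Welch t-test and toy two-voxel "images" (all data hypothetical).

```python
from statistics import mean, variance

def voxelwise_t(group_a, group_b):
    """Run an independent-samples (Welch) t-test at every voxel.

    Each group is a list of images; each image is a flat list of
    registered GM TPM values, one per voxel in the shared space.
    """
    n_vox = len(group_a[0])
    t_map = []
    for v in range(n_vox):
        a = [img[v] for img in group_a]
        b = [img[v] for img in group_b]
        se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
        t_map.append((mean(a) - mean(b)) / se if se > 0 else 0.0)
    return t_map

t_map = voxelwise_t(
    [[0.9, 0.2], [0.8, 0.3], [1.0, 0.25]],   # group A's registered TPMs
    [[0.4, 0.2], [0.5, 0.3], [0.3, 0.25]])   # group B's registered TPMs
# t_map[0] is positive (group A has more GM at voxel 0);
# t_map[1] is ~0 (no group difference at voxel 1).
```

Real analyses must also correct for the enormous number of tests, which is why permutation methods are preferred.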
What is a big disadvantage of ROI analysis?
You need an a priori hypothesis (one based on assumed principles and deductions from the conclusions of previous research, generated before the new study takes place)
== You need to know what to look at
What happens if we do not know the exact reason to base our study on?
Voxel-based morphometry (VBM)
A global kind of volumetric brain analysis
It is essentially a single experiment which enables you to identify local grey matter changes and other associations across the whole brain
It does not need an a priori hypothesis, but you should only use it if you really can’t generate one
What processing is needed to complete VBM?
- Brain extraction
- GM tissue segmentation
- Templates and registrations (linear and non-linear)
- Smoothing
- Statistical testing
What is a problem with VBM when trying to detect atrophy?
GM atrophy primarily presents as a thinning of the cortex
If we register everyone’s GM TPM to the same target space, this will make thin cortices thicker and thick cortices thinner
All cortices will end up the same thickness in order to match the template image
For voxelwise analysis it is essential everyone shares the same space, but we also need some way of preserving information about how thick or thin someone’s cortex was pre-transformation
How do we overcome this problem in VBM (everyone’s cortex being the same size after registration)?
When we perform our registrations we can save the deformation (Jacobian) map - an image where voxel values represent how much the original scan was expanded or contracted at that point to match the template space
Multiplying the spatially transformed (‘raw’) GM TPM by the deformation map gives a version of the TPM in template space where the cortex thickness has been spatially matched, but the GM values have been increased or decreased by the degree that each region was deflated or inflated to achieve this
It is these modulated TPMs which are used in our statistical analyses
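Modulation itself is a voxelwise multiplication. A minimal 1-D sketch (conventions for the deformation map vary between packages; here its values are chosen so that multiplying restores the participant's original total GM): a thin, two-voxel cortex was stretched to four voxels to match the template, so each warped voxel carries a volume-change factor of 0.5.

```python
def modulate(warped_tpm, jacobian):
    """Scale warped GM values by the local volume change recorded
    during registration, preserving each participant's total GM."""
    return [g * j for g, j in zip(warped_tpm, jacobian)]

# Toy 1-D example (all values hypothetical):
native_total = 0.8 + 0.8                 # GM in the native-space cortex
warped_tpm = [0.8, 0.8, 0.8, 0.8]        # stretched 2x to match template
jacobian = [0.5, 0.5, 0.5, 0.5]          # 2x expansion => factor 0.5
modulated = modulate(warped_tpm, jacobian)
# sum(modulated) equals native_total: the pre-transformation thickness
# information survives in the modulated TPM.
```

This is why the statistics are run on the modulated TPMs: spatial correspondence comes from the warp, while the original GM quantity is retained in the values.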