Investigating cognition with structural MRI Flashcards

1
Q

What is the link between measuring brain volume and cognition?

A

Linking regional brain volume change to cognitive performance is an excellent way of showing that a cognitive function depends on that region

Grey matter is often the focus of structural MRI investigations, as it is home to the computationally specific ROIs

2
Q

How would we measure the hippocampus using structural MRI in a cognition study?

A
  1. Brain extraction (to make the scan easier to analyse in subsequent steps)
  2. Brain vs. skull size estimation (for our normalising factor)
  3. Automatic hippocampus identification (FSL’s “FIRST”) - the left and right hippocampi have the values 17 and 53 in the resulting mask image. We can then use other tools to calculate the volume of the voxels with a value of 17 or 53, thus calculating the size of each hippocampus
  4. Perform these steps on all scans in the dataset (“for loops” are your best friend!) and normalise all hippocampal volumes; we can now compare everyone’s hippocampus size, controlling for natural brain size variation (a minimal command-line sketch follows below)
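A minimal shell sketch of these steps, assuming T1 scans named like sub01_T1.nii.gz and standard FSL command-line tools (bet, run_first_all, fslstats). File names are illustrative, FIRST’s output naming may differ between FSL versions, and the head-size normalising factor (step 2) would come from a separate tool such as sienax:

  # Loop over every T1 scan in the dataset (hypothetical naming scheme)
  for t1 in sub*_T1.nii.gz ; do
    sub=$(basename $t1 _T1.nii.gz)

    # Step 1: brain extraction with BET
    bet $t1 ${sub}_brain

    # Step 3: automatic segmentation of the left and right hippocampus with FIRST
    run_first_all -i $t1 -o ${sub}_first -s L_Hipp,R_Hipp

    # Volume (voxel count and mm^3) of each hippocampus from the output labels
    # (label 17 = left hippocampus, 53 = right; thresholds bracket each label)
    fslstats ${sub}_first_all_fast_firstseg -l 16.5 -u 17.5 -V
    fslstats ${sub}_first_all_fast_firstseg -l 52.5 -u 53.5 -V
  done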
3
Q

Explain the process of brain extraction

A

With all images organised in the same directory, a single FSL command (or loop) will run BET on every image
These outputs must be checked to make sure there are no major failures
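A minimal sketch, assuming the T1 images sit together in one directory (names are illustrative); FSL’s slicesdir then builds a quick web page of slices for checking the extractions:

  # Run BET on every T1 image in the directory
  for img in *_T1.nii.gz ; do
    bet $img ${img%.nii.gz}_brain
  done

  # Generate a web report of the extracted brains for a quick visual failure check
  slicesdir *_T1_brain.nii.gz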

4
Q

Explain tissue segmentation

A

The goal of brain extraction is to prepare the images for FSL’s tissue segmentation tool, FAST
FAST produces a grey matter (GM) tissue probability map (TPM), which can be displayed overlaid on the skull-stripped brain of a participant
A TPM is a quantitative image - the voxel values now have meaning and convey the fraction of that voxel which contains GM
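A minimal sketch of running FAST on one brain-extracted image (assumed file names); by default the _pve_1 output is the GM partial volume / probability map:

  # Segment the skull-stripped T1 into CSF, GM and WM
  # -t 1 : T1-weighted input, -n 3 : three tissue classes, -o : output basename
  fast -t 1 -n 3 -o sub01 sub01_brain.nii.gz

  # sub01_pve_1.nii.gz now holds the GM tissue probability map (values 0-1 per voxel)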

5
Q

Explain templates and registrations

A

A VBM experiment requires all our images to share the same space
Template images like the MNI152 are very useful for facilitating this
But every registration introduces error and noise into our data, and the more the target image differs from the scan we are registering, the more noise and error we will get in the result
We therefore try to make a template image from the data we are using in the analysis
This means our target is as geometrically similar to our scans as possible, minimising registration error

6
Q

What is a VBM study?

A

VBM is a “global” kind of volumetric brain analysis.

It is essentially a single experiment which enables you to identify local grey matter changes and other associations across the whole brain.

It does not need an a-priori hypothesis, but you should only use this if you really can’t generate one

7
Q

Is it ok to have uneven group sizes in VBM?

A

It is generally ok to have uneven group sizes, but we still need to consider how this will affect the template image
If we are testing different groups, we are probably hypothesising that one group has consistently different brains to the other
We need to be sure that any such differences are evenly weighted in our template image
Otherwise our template image would look disproportionately more like one group than the other
This would favour one set of registrations leading to an unbalanced spread of noise across our experiment

8
Q

What does FSL VBM require for template and registration?

A

Requires a file which lists the names of all the scans which will contribute to our own template image
If we are testing a group design AND our group sizes are not the same, we need to randomly cut some scan names out of this list to make it balanced
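A minimal sketch of building a balanced list (FSL-VBM expects a file called template_list; the per-group file naming and the use of shuf to subsample are assumptions for illustration):

  # Keep every control, then randomly pick an equal number of patient scans
  n_controls=$(ls control_*_T1.nii.gz | wc -l)

  ls control_*_T1.nii.gz                       >  template_list
  ls patient_*_T1.nii.gz | shuf -n $n_controls >> template_list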

9
Q

How is the MNI152 brain used to make our own template brain?

A

Used as the starting point for making a template brain
The initial registrations first perform a 12-DOF linear (affine) registration to approximate the shape of the MNI152, and then a non-linear registration refines that transformation to finish

With the steps of that transformation from a participant’s native space to MNI152 space calculated, we apply the same transformation to their GM TPM (coregistration)

Running this for every participant gives us a set of GM TPMs which are in MNI152 space

The TPMs are then averaged to create a new template image

With this template created, we then go back to our original native-space GM TPMs and perform a fresh linear and non-linear registration between those and this GM template
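A minimal sketch of the first pass for one participant, using flirt, fnirt, applywarp, fslmerge and fslmaths (file names are illustrative; in practice the fslvbm_2_template script wraps these steps):

  std=$FSLDIR/data/standard/MNI152_T1_2mm

  # 12-DOF linear (affine) registration of the brain-extracted T1 to the MNI152 brain
  flirt -in sub01_brain -ref ${std}_brain -omat sub01_aff.mat -dof 12

  # Non-linear registration, initialised with that affine transform
  fnirt --in=sub01_T1 --ref=$std --aff=sub01_aff.mat --cout=sub01_warp

  # Apply the same warp to the participant's GM TPM (coregistration)
  applywarp --in=sub01_GM_pve --ref=$std --warp=sub01_warp --out=sub01_GM_to_std

  # Once every participant is warped, average the registered TPMs into a study template
  fslmerge -t GM_all sub*_GM_to_std.nii.gz
  fslmaths GM_all -Tmean GM_template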

10
Q

What is the basis of voxelwise analysis?

A

Every GM TPM should now have a reasonably high quality transformation into the same shared space
This means that if we examine the same voxel in each transformed image, the registered TPM value reflects the quantity of GM at that anatomical point for each participant
This is the basis of voxelwise analysis - running a statistical test for every voxel in the brain
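As a small illustration (assumed file names and an arbitrary voxel coordinate), fslmeants can print the registered GM value at one voxel for every participant once the TPMs are stacked into a 4D file:

  # Stack the registered GM TPMs into one 4D image (one volume per participant)
  fslmerge -t GM_merged sub*_GM_to_std.nii.gz

  # Print the GM value at voxel (45, 60, 32) for each participant - one number per line
  fslmeants -i GM_merged -c 45 60 32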

11
Q

What is a big disadvantage of ROI analysis?

A

You need an a-priori hypothesis (based on assumed principles and deductions from the conclusions of previous research, and are generated prior to a new study taking place)

== You need to know what to look at

12
Q

What happens if we do not know the exact reason to base our study on?

A

Voxel-based morphometry (VBM)
A global kind of volumetric brain analysis
It is essentially a single experiment which enables you to identify local grey matter changes and other associations across the whole brain
It does not need an a-priori hypothesis but you should only use this if you really can’t generate one

13
Q

What processing is needed to complete VBM?

A
  1. Brain extraction
  2. GM tissue segmentation
  3. Templates and registrations (linear and non linear)
  4. Smoothing
  5. Statistical testing (a sketch of the corresponding FSL-VBM commands follows below)
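A minimal sketch of the FSL-VBM commands, run from the directory containing the raw images and template_list (options are the documented defaults but may vary between FSL versions):

  # 1. Brain-extract every image in the directory
  fslvbm_1_bet -b

  # 2-3. GM segmentation and study-specific template creation (non-linear registrations)
  fslvbm_2_template -n

  # 3-4. Register all GM images to the template, modulate and smooth them
  fslvbm_3_proc

  # 5. Statistical testing is then run on the smoothed, modulated output with randomise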
14
Q

What is a problem with VBM when trying to detect atrophy?

A

GM atrophy primarily presents as a thinning of the cortex
If we register everyone’s GM TPM to the same target space, this will make thin cortexes thicker and thick cortexes thinner
All cortexes will end up the same thickness in order to match the template image
For voxelwise analysis it is essential everyone shares the same space, but we also need some way of preserving information about how thick or thin someone’s cortex was pre-transformation

15
Q

How do we overcome this problem in VBM (everyone’s cortex being the same size after registration)?

A

When we perform our registrations we can save the deformation map (the Jacobian determinant of the warp) - an image where each voxel value represents how much the original scan was expanded or contracted at that point to match the template space

Multiplying the ‘raw’ transformed GM TPM by the deformation map gives us a version of the GM TPM in template space where the cortex thickness has been spatially matched, but the GM values have been increased or decreased by the degree to which it was inflated or deflated to achieve this
It is these modulated TPMs which are used in our statistical analyses
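A minimal sketch of modulation with FSL tools, assuming the warp to the GM template is estimated with fnirt and its Jacobian determinant saved via --jout (file names are illustrative; fslvbm_3_proc performs this automatically):

  # Non-linear registration of the native-space GM TPM to the study GM template,
  # saving the Jacobian determinant (the deformation / expansion-contraction map)
  fnirt --in=sub01_GM_pve --ref=GM_template --aff=sub01_aff2.mat \
        --cout=sub01_warp2 --jout=sub01_jac

  # Apply the warp, then multiply by the Jacobian to produce the modulated GM image
  applywarp --in=sub01_GM_pve --ref=GM_template --warp=sub01_warp2 --out=sub01_GM_to_template
  fslmaths sub01_GM_to_template -mul sub01_jac sub01_GM_mod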

16
Q

Why do we smooth?

A

To boost the signal-to-noise ratio

Signal = patterns in our GM values which will underpin a statistical relationship - this is a numerical change in a consistent direction across all relevant voxels

Noise = from image transformations, scanner/sequence imperfections - this is a numerical change on top of the true GM values which is random

Because any signal can be relied on to be consistent across a region of a scan, by averaging voxel values with their neighbouring values we aim to preserve the effect of brain atrophy while smoothing random noise out of the data
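A minimal sketch, assuming the modulated GM images have been merged into a single 4D file (FSL-VBM names this GM_mod_merg); fslmaths -s applies a Gaussian kernel with the given sigma in mm:

  # Smooth the merged, modulated GM data with a Gaussian kernel of sigma = 3 mm
  fslmaths GM_mod_merg -s 3 GM_mod_merg_s3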

17
Q

How is smoothing achieved?

A

FSL VBM automatically smooths your data to 3 different degrees
It measures these in sigma values (sigmas of 2, 3 and 4 mm)
These hold a relationship to the full width at half maximum (FWHM), which is how smoothing is conventionally measured in image analyses
Smoothing uses a Gaussian curve to weight the averaging of voxel values differently, depending on how far away the neighbouring voxels are
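For a Gaussian kernel the two measures are related by FWHM = 2 * sqrt(2 * ln 2) * sigma ≈ 2.355 * sigma, so sigmas of 2, 3 and 4 mm correspond to roughly 4.7, 7.1 and 9.4 mm FWHM.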

18
Q

What does a higher sigma value mean?

A

A higher sigma means a higher FWHM = smoothing over a greater distance

19
Q

What is FWHM?

A

Full width at half maximum
A commonly used measure of the width of a peaked function (such as a spectral line or slice profile) at half its maximum value; it is an important measure of the quality of an imaging device and its spatial resolution

20
Q

What is the analysis involved with VBM?

A

FSL runs statistics on images using a tool called RANDOMISE
This can implement any desired statistical design via permutation testing
More permutations = more accurate statistical values
After all this processing you end up with a final image in template space where each voxel value is now the p value of the statistical test for that location
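A minimal sketch, assuming a design matrix and contrast file have already been created (e.g. with FSL’s Glm or design_ttest2) and using the smoothed, modulated 4D GM image and GM mask produced by FSL-VBM; 5000 permutations with TFCE is a common choice:

  # Voxelwise permutation testing with threshold-free cluster enhancement (TFCE)
  randomise -i GM_mod_merg_s3 -m GM_mask -d design.mat -t design.con \
            -o vbm_stats -n 5000 -T

  # vbm_stats_tfce_corrp_tstat1.nii.gz stores 1 - p (corrected), so threshold at 0.95 for p < 0.05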

21
Q

How do you assume significance in this analysis?

A

We can assume that voxels which become significant by chance alone will be positioned randomly, while ones which are significant because of real effect will neighbour one another in clusters

A final cluster-based correction is made to suppress lone significant voxels and to enhance areas where voxels are consistently significant

22
Q

What are the pros and cons of ROI analysis?

A

PROS
- Higher measurement accuracy and therefore statistical sensitivity for regions examined

CONS
- Very restricted in regional scope (needs a good a-priori hypothesis)

23
Q

What are the pros and cons of global analysis?

A

PROS
- Considers the whole brain and therefore nothing is missed
- Don’t need an a priori hypothesis

CONS
- The elaborate processing and transformation steps create a relatively high amount of noise in our final data - for a given region our statistical power will not be as high as an ROI study of the same place
- The multiple-comparisons burden is still not ideal, even after cluster-based correction