Voxel-Based Morphometry Flashcards

1
Q

What is morphometry?

A

The study of local shape and size of regions of the brain

2
Q

What is computational neuroanatomy?

A

Refers to the use of automated tools to perform morphometric analyses

3
Q

What are the aims of computational neuroanatomy?

A

Compare the shape or local size of regions of the brain
Distinguish between groups
Characterise the changes seen over time in development and aging
Find structural correlates of brain changes
Understand plasticity – does the brain change as we develop new skills?

4
Q

What are the aims of computational neuroanatomy?

A

Exploratory analyses of whole-brain structures can facilitate our understanding of a multitude of topics
Clinical and non-clinical questions
Sometimes region of interest is not enough!

5
Q

What can the results of computational neuroanatomy be used to adjust?

A

fMRI data to control for differences in anatomy

6
Q

What are the different morphometric analyses?

A

Manual volumetry
Landmark-based shape analysis (Bookstein)
Surface-based analysis / shape analysis (Styner)
Deformation based morphometry
Tensor based morphometry
Surface-based vertex-wise analysis (Fischl)
Voxel-Based Morphometry (Ashburner)

7
Q

What is deformation-based morphometry used to describe?

A

Methods of studying the positions of structures within the brain

8
Q

What is tensor-based morphometry used for?

A

Methods that look at local shapes

9
Q

What is VBM?

A

Statistical parametric mapping of segmented tissue volume or density

10
Q

What does VBM aim to align?

A

Structural data from a group of participants so that the data can be analysed for group differences, or associations between volume and cognitive performances

11
Q

What are the 3 main processing steps for VBM?

A
  1. Spatial normalisation
  2. Segmentation
  3. Smoothing

Followed by statistical analysis

12
Q

Why is brain morphometry one of the most studied modalities in brain imaging?

A

Morphometry is the study of size and shape of the brain and its structures. The brain changes as it grows into adulthood, decays with age, and undergoes disease processes. The shape of the brain is highly dependent on genetic factors

13
Q

What are the several metrics that one can use to test a morphometry-related hypothesis?

A
  1. Grey matter volume
  2. White matter volume
  3. Cortical thickness
  4. Cortical curvature
14
Q

What are the two techniques for analysing brain morphometry?

A
  1. Voxel-based morphometry (VBM)

2. Surface-based analysis (SBA)

15
Q

How is VBM implemented?

A
  1. VBM starts by spatially normalising the T1-weighted image of an individual to a group template, in order to establish voxel-for-voxel correspondence across subjects
  2. This is a non-linear registration, which allows local areas to stretch and compress with respect to each other
  3. This process creates a deformation field: a map of how far each voxel in the input image must move to land at the matching point in the template image
  4. This deformation is applied to the input image to create an image that is in voxel-for-voxel registration with the template (see the sketch after this list)
  5. The deformed image is then segmented into tissue classes (GM, WM, CSF), based on the intensity in the image as well as tissue class priors, which indicate the likelihood of finding a given tissue class at a given location
  6. The segmented images have values that indicate the probability of a given tissue class
  7. The segmented image is then spatially smoothed
  8. The concentration images from different subjects are then combined in a voxel-wise statistical analysis
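A minimal sketch of step 4, assuming the deformation field is stored as the subject-space coordinates to sample for every template voxel (the array names, field convention and toy data are illustrative assumptions, not the SPM implementation):

```python
# Minimal sketch of applying a deformation field to an image (step 4 above).
# Assumes the field gives, for every template voxel, the (x, y, z) position to
# pull from the input image; the nearest-neighbour padding is also an assumption.
import numpy as np
from scipy.ndimage import map_coordinates

def apply_deformation(input_image, deformation):
    """Resample input_image at the coordinates given by `deformation`.

    input_image : 3D array (subject T1 or tissue segment)
    deformation : array of shape (3, X, Y, Z) of coordinates to sample
    Returns an image on the template grid, in voxel-for-voxel register
    with the template.
    """
    return map_coordinates(input_image, deformation, order=1, mode='nearest')

# Toy usage: an identity field returns the original image unchanged.
img = np.random.rand(16, 16, 16)
identity = np.stack(np.meshgrid(np.arange(16), np.arange(16), np.arange(16),
                                indexing='ij')).astype(float)
warped = apply_deformation(img, identity)
assert np.allclose(warped, img)
```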
16
Q

What is modulation in VBM?

A

An operation where the voxel concentration is scaled based on the amount of stretching or compression that was applied to that voxel in the process of applying the deformation field

17
Q

How is modulation in VBM done?

A

By computing the Jacobian determinant of the deformation field

The Jacobian map is red/yellow in places that needed to be compressed

The Jacobian map is blue in areas that needed to be stretched
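A hedged sketch of how modulation could be computed from a dense deformation field using numpy (array shapes, names and the sign convention are assumptions; SPM performs this step internally during "modulated" normalisation):

```python
# Sketch of modulation: scale a grey-matter probability map by the Jacobian
# determinant of the deformation field at every voxel.
import numpy as np

def jacobian_determinant(deformation):
    """deformation: (3, X, Y, Z) array of pulled coordinates (template -> subject)."""
    # Partial derivatives of each coordinate component along each spatial axis.
    grads = [np.gradient(deformation[i], axis=(0, 1, 2)) for i in range(3)]
    # Assemble the 3x3 Jacobian matrix at every voxel and take its determinant.
    J = np.empty(deformation.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j]
    return np.linalg.det(J)

def modulate(gm_probability, deformation):
    # In this pull-coordinate convention, a determinant above 1 marks a region
    # that had to be compressed to fit the template, so its probability is
    # scaled up; stretched regions are scaled down. Total GM is preserved.
    return gm_probability * jacobian_determinant(deformation)
```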

18
Q

What is the value at a voxel in the modulated image interpreted as?

A

The volume of grey matter at that location

19
Q

What are the two quantities used to measure morphometric properties?

A
  1. Volume

2. Concentration

20
Q

What is an example of a concentration study?

A

The amount of gray matter per unit of intracranial volume

21
Q

What is an example of a volume study?

A

Collecting an MRI on males and females, counting the number of gray matter voxels in the brain, and then comparing the gray matter volume across the genders
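As an illustration of such a volume study, a minimal sketch (toy random arrays stand in for segmented GM probability maps; the 1 mm³ voxel size and group sizes are assumptions):

```python
# Sketch of a simple volume comparison: total grey-matter volume per subject
# from a GM probability map, then a two-sample t-test between groups.
import numpy as np
from scipy import stats

def grey_matter_volume(gm_prob, voxel_volume_mm3=1.0):
    # Summing tissue probabilities (rather than hard-thresholding) counts
    # partial-volume voxels fractionally.
    return gm_prob.sum() * voxel_volume_mm3

# Toy data standing in for each subject's segmented GM probability map.
rng = np.random.default_rng(0)
males   = [grey_matter_volume(rng.random((32, 32, 32))) for _ in range(10)]
females = [grey_matter_volume(rng.random((32, 32, 32))) for _ in range(10)]
t, p = stats.ttest_ind(males, females)
print(f"t = {t:.2f}, p = {p:.3f}")
```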

22
Q

Why is spatial normalisation needed in VBM?

A

Every brain is different
This is not just an overall scaling issue; each component differs between people
There is still enough consistency between brains to compare them

23
Q

What is affine vs non-linear registration in VBM?

A

Different ways to align scans:
Affine registration uses 12 degrees of freedom (3 translations, 3 rotations, 3 zooms, 3 shears) to roughly align scans (see the sketch below)
Non-linear registration uses thousands of degrees of freedom to more precisely align scans
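A sketch of how the 12 affine degrees of freedom compose into a single 4x4 homogeneous matrix (the parameter ordering and the example values are illustrative assumptions, not SPM's internal parameterisation):

```python
# 12 degrees of freedom of an affine registration: 3 translations, 3 rotations,
# 3 zooms and 3 shears, composed into one 4x4 homogeneous matrix.
import numpy as np

def affine_matrix(tx, ty, tz, rx, ry, rz, zx, zy, zz, sxy, sxz, syz):
    T = np.eye(4); T[:3, 3] = [tx, ty, tz]                      # translations
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0, 0], [0, cx, -sx, 0], [0, sx, cx, 0], [0, 0, 0, 1]])
    Ry = np.array([[cy, 0, sy, 0], [0, 1, 0, 0], [-sy, 0, cy, 0], [0, 0, 0, 1]])
    Rz = np.array([[cz, -sz, 0, 0], [sz, cz, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    Z = np.diag([zx, zy, zz, 1.0])                               # zooms (scaling)
    S = np.eye(4); S[0, 1], S[0, 2], S[1, 2] = sxy, sxz, syz     # shears
    return T @ Rx @ Ry @ Rz @ Z @ S

# Example: a small translation, slight rotations and near-unit zooms.
A = affine_matrix(5, -2, 0, 0.01, 0.0, 0.02, 1.05, 0.98, 1.0, 0.001, 0.0, 0.0)
```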

24
Q

What is the consequence of using too many degrees of freedom?

A

The brain begins to distort into unnatural shapes

25
High resolution MRI reveals fine structural detail in the brain, but also exhibits several challenges
Noise/intensity is different between individuals
MRI intensity from T1 scans is not quantitatively meaningful
26
What does segmentation use?
Different intensities within the images to output regions
27
Why is spatial normalisation of segmented brain tissue more robust and precise than registration of the original structural image?
Intensity differs across scans, impeding registration. Using GM segmentations instead should improve registration. To obtain the tissue segmentations, it is beneficial to register the images to standard space and use tissue priors that tell us what to expect.
28
What is the tissue segmentation: unified model?
SPM12 implements a generative model (a principled Bayesian probabilistic formulation): Gaussian mixture model segmentation with deformable tissue probability maps (priors). The inverse of the transformation that aligns the TPMs can be used to normalise the original image. The model also accounts for intensity non-uniformity.
29
What is tissue segmentation with a Gaussian mixture model?
A mixture of Gaussians with a specific number of components, parameterised by means, variances, and mixing proportions; the image intensities are fitted to the model to aid segmentation
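A minimal sketch of intensity-only tissue segmentation with a three-component Gaussian mixture, using scikit-learn (the unified model in SPM additionally uses deformable tissue priors and bias correction; this simplified, prior-free version and its toy intensities are illustrative only):

```python
# Gaussian mixture model segmentation of voxel intensities into three classes
# (CSF-, GM- and WM-like), fitted by scikit-learn.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy "T1 intensities": three overlapping intensity distributions.
intensities = np.concatenate([rng.normal(0.3, 0.05, 5000),   # CSF-like
                              rng.normal(0.6, 0.05, 5000),   # GM-like
                              rng.normal(0.9, 0.05, 5000)])  # WM-like

gmm = GaussianMixture(n_components=3, random_state=0)
gmm.fit(intensities.reshape(-1, 1))
posteriors = gmm.predict_proba(intensities.reshape(-1, 1))  # per-voxel class probabilities
print(gmm.means_.ravel(), gmm.weights_)                     # estimated means and mixing proportions
```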
30
How are MRI images corrupted?
By smoothly varying intensity inhomogeneity, which does not reflect real tissue differences but rather coil geometry and magnetic field non-uniformity
31
What is surface-based analysis (SBA)?
One derives morphometric measures from geometric models of the cortical surface
32
What is the process of SBA?
1. Extraction of the cortical surface
2. The cortex is the outer layer of the brain and has an inherent 2-dimensional structure
33
What are the two surfaces displayed on a coronal slice?
1. The yellow line is the boundary between cortical white matter and cortical gray matter, known as the white surface; it represents the inner boundary of the cortex
2. The red line is the boundary between grey matter and dura and/or CSF, known as the pial surface
3. The cortex is modelled as a surface model, which is a mesh of triangles; each triangle is known as a face
4. The place where the corners of triangles meet is a vertex
5. The parameters of the model are the coordinates (i.e. the X, Y, Z) at each vertex
6. These coordinates are determined from the MRI during the extraction process
7. Once the coordinates of each vertex are known, the surface can be rendered as a surface embedded in 3D (see the sketch below)
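A sketch of the surface representation just described: an array of vertex coordinates plus an array of triangular faces indexing into it (the toy tetrahedron and the constant thickness vector are illustrative stand-ins for a real cortical mesh with many thousands of vertices):

```python
# A surface mesh is just vertex coordinates plus triangles that index into them.
import numpy as np

vertices = np.array([[0.0, 0.0, 0.0],    # X, Y, Z of each vertex
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
faces = np.array([[0, 1, 2],             # each face = indices of its 3 vertices
                  [0, 1, 3],
                  [0, 2, 3],
                  [1, 2, 3]])

# A per-vertex measure (e.g. cortical thickness) is simply one value per vertex,
# which is what vertex-wise statistics operate on.
thickness = np.full(len(vertices), 2.5)
print(len(vertices), "vertices,", len(faces), "faces")
```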
34
What is thickness a direct measure of?
The amount of gray matter present at each point on the surface
35
What can thickness be used for?
Group analysis to track changes associated with age and disease processes
36
What is another metric for SBA?
Curvature, which is a measure of how sharply the cortex is folded at each point; it is a direct measure of the folding pattern of the cortex
37
What can the surface be morphed into?
A sphere. This type of display clearly shows the 2D nature of the surface, as each vertex can be localised with only two spherical coordinates.
38
VBM
Voxel-based morphometry (VBM) is one such automated technique that has grown in popularity since its introduction (Wright et al., 1995; Ashburner and Friston, 2000), largely because it is relatively easy to use and has provided biologically plausible results.
39
What does VBM use?
Statistics to identify differences in brain anatomy between groups of subjects, which in turn can be used to infer the presence of atrophy, or less commonly tissue expansion, in subjects with disease
40
What does the VBM technique typically use?
T1-weighted volumetric MRI scans; it essentially performs statistical tests across all voxels in the image to identify volume differences between groups. For example, to identify differences in patterns of regional anatomy between groups of subjects, a series of t tests can be performed at every voxel in the image.
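A sketch of such a mass-univariate comparison: a two-sample t-test at every voxel, here along the subject axis of toy arrays (the group sizes, grid size and random data are placeholders):

```python
# Two-sample t-test at every voxel of the smoothed, normalised GM images.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
patients = rng.random((12, 20, 20, 20))   # 12 subjects x voxel grid
controls = rng.random((15, 20, 20, 20))   # 15 subjects x voxel grid

t_map, p_map = stats.ttest_ind(patients, controls, axis=0)
print(t_map.shape)   # one t statistic per voxel -> (20, 20, 20)
```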
41
Why is regression analysis performed across voxels?
To assess neuroanatomical correlates of cognitive or behavioural deficits. The technique has been applied to a number of different disorders, including neurodegenerative diseases (Whitwell and Jack, 2005), movement disorders (Whitwell and Josephs, 2007), epilepsy (Keller and Roberts, 2008), multiple sclerosis (Prinster et al., 2006; Sepulcre et al., 2006), and schizophrenia (Williams, 2008).
42
What have VBM analyses been compared to?
Manual and visual measurements of particular structures, showing relatively good correspondence between techniques (Good et al., 2002; Giuliani et al., 2005; Whitwell et al., 2005; Davies et al., 2009) and providing confidence in the biological validity of VBM
43
What is essential to be done in order to perform statistical analyses?
The MRI scans need to be matched together spatially (i.e. registered) so that a location in one subject corresponds to the same location in another subject's MRI; this is spatial normalisation. It is needed because anatomy varies across subjects and heads will be in different positions in the scanner.
44
How is spatial normalisation achieved?
By registering all of the images from a study to the same template image so that they are all in the same space. Different algorithms can be used to perform this registration (Ashburner and Friston, 2000; Davatzikos et al., 2001), but they typically include a nonlinear transformation (Ashburner and Friston, 2000).
45
What is the most commonly applied algorithm available in SPM software?
Performing a 12-parameter affine transformation followed by a nonlinear registration, using a mean squared difference matching function (Ashburner and Friston, 2000)
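The matching function itself is simple; a minimal sketch of a mean squared difference cost that an optimiser would minimise while adjusting the transformation parameters (the function name and toy arrays are illustrative; the optimisation itself is omitted):

```python
# Mean squared difference matching (cost) function between a warped image and
# the template: lower values mean a better match.
import numpy as np

def mean_squared_difference(warped_image, template):
    return np.mean((warped_image - template) ** 2)

# Toy usage.
template = np.zeros((8, 8, 8))
candidate = np.ones((8, 8, 8))
cost = mean_squared_difference(candidate, template)   # 1.0 for this toy pair
```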
46
What forms can the template image used for spatial normalisation take?
1. It could be one specific MRI scan
2. It could be created by averaging across a number of different MRI scans that have been put in the same space
47
What is recommended for registrations that use a mean squared difference matching function, to improve the normalisation between each subject in the study cohort and the template (Good et al., 2001; Senjem et al., 2005)?
Customized templates that are created using the study cohort or a cohort that is matched to the study cohort in terms of age, disease status, scanner field strength, and scanning parameters
48
What is one of the ways to perform segmentation?
Using prior probability maps as well as voxel intensity to guide segmentation (SPM)
49
Why is modulation applied?
Aims to correct for volume change during the spatial normalisation step (Good et al., 2001)
50
How are image intensities scaled?
By the amount of contraction that has occurred during spatial normalization so that the total amount of grey matter remains the same as in the original image
51
What will the analysis compare?
Volumetric differences between scans
52
What happens if the spatial normalization was precise and all the segmented images appeared identical?
No significant differences would be detected in unmodulated data; in practice, therefore, an unmodulated analysis reflects residual registration error rather than true volume differences
53
What does RAVENS (a different normalisation procedure) use?
A high-dimensional elastic transformation using point correspondence (Davatzikos, 1998; Davatzikos et al., 2001); this preserves the volume of the different tissues and so does not require a separate modulation step
54
Images are smoothed (Ashburner and Friston, 2000; Good et al., 2001)
The intensity of each voxel is replaced by the weighted average of surrounding voxels, in essence blurring the segmented image
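A minimal sketch of this smoothing step with scipy; kernel size is usually quoted as a FWHM in mm, while `gaussian_filter` expects a standard deviation in voxels, hence the conversion (the 8 mm FWHM and 1.5 mm voxel size are illustrative assumptions):

```python
# Gaussian smoothing of a segmented image, converting FWHM (mm) to sigma (voxels).
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth(segmented, fwhm_mm=8.0, voxel_size_mm=1.5):
    # FWHM = sqrt(8 * ln 2) * sigma  ~  2.355 * sigma
    sigma_voxels = fwhm_mm / (np.sqrt(8 * np.log(2)) * voxel_size_mm)
    return gaussian_filter(segmented, sigma=sigma_voxels)

gm = np.random.rand(64, 64, 64)        # toy segmented GM image
gm_smoothed = smooth(gm)
```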
55
What is the number of voxels averaged at each point determined by?
The size of the smoothing kernel, which can vary across studies (Rosen et al., 2002; Karas et al., 2003; Whitwell et al., 2009)
56
What does smoothing make the data?
Conform more closely to the Gaussian random field model, which is an important assumption of VBM. Smoothing also renders the data more normally distributed, increasing the validity of parametric tests, and reduces intersubject variability (Ashburner and Friston, 2000; Salmond et al., 2002).
57
What does smoothing increase?
The sensitivity to detect changes, by reducing the variance across subjects, although excessive smoothing will diminish the ability to accurately localise the changes in the brain
58
What can VBM not differentiate?
Real changes in tissue volume from local mis-registration of images (Ashburner and Friston, 2001; Bookstein, 2001)
59
What will the accuracy of segmentation depend on?
Quality of the normalisation
60
Why can segmentation errors occur?
Because of displacement of tissues and partial volume effects between gray matter and CSF, which occur in atrophic brains. The use of customized templates can help to minimize some of these potential errors (Good et al., 2001).
61
What can the statistical analysis of smoothed segmented images be performed with?
parametric statistics using the general linear model and the theory of Gaussian random fields to ascertain significance (Ashburner and Friston, 2000)
62
What is the null hypothesis?
That there is no difference in tissue volume between the groups in question. These analyses generate statistical maps showing all voxels of the brain that refute the null and show significance at a certain, user-selected, p value.
63
What is a statistical map?
A colour map with the scale representing the t statistic; it can be shown as a three-dimensional surface render of the brain or as what is known as the "glass-brain" display, in which all significant voxels are displayed on an essentially transparent render
64
What is important to consider when doing statistical tests?
Studies need to correct for multiple comparisons to prevent the occurrence of false positives, e.g. using the family-wise error (FWE) correction (Friston et al., 1993) or the more lenient false discovery rate (FDR) correction (Genovese et al., 2002), both of which reduce the chance of false-positive results
65
What is FWE and FDR?
The FWE correction controls the chance of any false positives (as in Bonferroni methods) across the entire volume, whereas the FDR correction controls the expected proportion of false positives among suprathreshold voxels.
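A sketch contrasting the two corrections on a vector of voxel-wise p values, implemented directly with numpy for illustration (Bonferroni-style FWE control vs Benjamini-Hochberg FDR control; the toy p values, with a block of artificially small ones, are placeholders):

```python
# FWE (Bonferroni) vs FDR (Benjamini-Hochberg) thresholding of voxel p values.
import numpy as np

def fwe_bonferroni(p, alpha=0.05):
    # Controls the chance of ANY false positive across the whole volume.
    return p < alpha / p.size

def fdr_benjamini_hochberg(p, alpha=0.05):
    # Controls the expected proportion of false positives among
    # suprathreshold voxels; more lenient than FWE.
    order = np.argsort(p)
    ranked = p[order]
    thresh = alpha * np.arange(1, p.size + 1) / p.size
    passed = ranked <= thresh
    significant = np.zeros(p.size, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()       # largest rank meeting the criterion
        significant[order[:k + 1]] = True
    return significant

rng = np.random.default_rng(0)
p_values = np.concatenate([rng.uniform(size=99000),          # "null" voxels
                           rng.uniform(0, 1e-6, size=1000)]) # strong effects
print(fwe_bonferroni(p_values).sum(), fdr_benjamini_hochberg(p_values).sum())
```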
66
Interpreting VBM results
First, the processing steps often vary across studies (Whitwell and Jack, 2005), with studies using different degrees of smoothing and different registration and segmentation algorithms. Second, as well as there being different options for correction for multiple comparisons, there are no standard conventions for what p value to apply to each statistical analysis, leading to variability across studies. Finally, the larger the sample size, the greater the power to detect differences, although differences can be observed with smaller cohorts if the effect size is large.
67
What does VBM compare?
Different brains on a voxel-by-voxel basis after the deformation fields have been used to spatially normalise the images
68
What is one shared aspect of VBM?
The entire brain, rather than a particular structure, can be examined in an unbiased and objective manner
69
What does the appropriate approach for VBM depend on?
Depends on the types of structural difference that are expected among the images
70
What is VBM likely to provide?
Greater sensitivity for localising small-scale, regional differences in gray or white matter
71
What does VBM require?
Estimation of a smooth, low-frequency deformation field only; it is therefore a simple and pragmatic approach within the capabilities of most research units
72
What is the aim of VBM and how is it achieved?
To identify differences in the local composition of brain tissue, while discounting large-scale differences in gross anatomy and position. This is achieved by spatially normalising all the structural images to the same stereotactic space, segmenting the normalised images into grey and white matter, smoothing the grey and white matter images, and finally performing statistical analysis to localise significant differences between two or more experimental groups.
73
What is spatial normalisation?
Registering the individual MRI images to the same template image. An ideal template consists of the average of a large number of MR images that have been registered in the same stereotactic space.
74
What are the 2 steps of spatial normalisation?
1. Estimating the optimum 12-parameter affine transformation that maps the individual MRI images to the template; a Bayesian framework is used to compute the maximum a posteriori estimate of the spatial transformation, based on a priori knowledge of normal brain size variability
2. Accounting for global non-linear shape differences, which are modelled by a linear combination of smooth spatial basis functions (see the sketch below)
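The "smooth spatial basis functions" of step 2 are, in SPM's classic implementation, low-frequency discrete cosine transform (DCT) basis functions; a 1-D sketch of a deformation built from such a basis (the number of basis functions and the coefficient values are arbitrary illustrative choices):

```python
# A smooth displacement modelled as a linear combination of low-frequency
# cosine (DCT-like) basis functions, shown in 1-D for clarity.
import numpy as np

n_points, n_basis = 128, 6
x = np.arange(n_points)
# Column k of the basis is a cosine of spatial frequency k.
basis = np.stack([np.cos(np.pi * k * (x + 0.5) / n_points) for k in range(n_basis)],
                 axis=1)

coefficients = np.array([0.0, 2.0, -1.5, 0.5, 0.0, 0.3])   # estimated during normalisation
displacement = basis @ coefficients                         # smooth displacement at each point
warped_positions = x + displacement
```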
75
What does spatial normalisation correct for?
Global brain shape differences
76
What does VBM try to detect?
Differences in the local concentration or volume of grey and white matter, having discounted global shape differences
77
How is segmentation achieved?
By combining a priori probability maps, or "Bayesian priors", which encode knowledge of the spatial distribution of particular tissue types, with the intensity information in the image
78
What does the segmentation step also incorporate?
Image intensity non-uniformity correction to account for smooth intensity variations caused by different positions of cranial structures within the MRI coil
79
How are the segmented grey and white matter images smoothed?
By convolving with an isotropic Gaussian kernel
80
What is the motivation for smoothing the images before the statistical analysis?
1. Smoothing ensures that each voxel in the images contains the average amount of grey or white matter from around the voxel
2. The smoothing step has the effect of rendering the data more normally distributed, by the central limit theorem, increasing the validity of the parametric statistical tests
3. Smoothing helps compensate for the inexact nature of the spatial normalisation
81
What does smoothing have the effect of reducing?
The effective number of statistical comparisons thus making the correction for multiple comparisons less severe
82
What does statistical analysis employ?
The general linear model (GLM): a flexible framework that allows a variety of different statistical tests, such as group comparisons and correlations with covariates of interest
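A minimal sketch of this voxel-wise GLM: a design matrix with an intercept, a group indicator and a covariate of interest, fitted at every voxel, with a t statistic for the group contrast (the subject counts, covariate name and toy data are illustrative assumptions, not SPM's implementation):

```python
# Voxel-wise GLM fitted with ordinary least squares; one t value per voxel.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 30, 5000
group = np.repeat([0, 1], n_subjects // 2)                  # group indicator
score = rng.normal(size=n_subjects)                         # covariate of interest
X = np.column_stack([np.ones(n_subjects), group, score])    # design matrix
Y = rng.normal(size=(n_subjects, n_voxels))                 # smoothed GM values, subjects x voxels

beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)           # parameter estimates per voxel
residuals = Y - X @ beta
dof = n_subjects - X.shape[1]
sigma2 = (residuals ** 2).sum(axis=0) / dof                 # residual variance per voxel
c = np.array([0.0, 1.0, 0.0])                               # contrast: group difference
var_c = c @ np.linalg.inv(X.T @ X) @ c
t_map = (c @ beta) / np.sqrt(sigma2 * var_c)                # one t statistic per voxel
```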