Pipelines for different techniques Flashcards
at a basic level, how is fMRI showing brain function
seeing which brain regions ‘light-up’ during a particular task e.g. visual cortex lights up during visual task
draw the flow chart from stimulus to BOLD response
see diagram
who first experimentally indicated relationship between blood flow and brain function?
Angelo Mosso (his 'balance' experiments, 19th century)
overall fMRI set up
person lying in MRI magnet with RF head coil, headphones, mirror to see screen, holding response box for output
is the BOLD signal directly measuring blood flow?
no
if BOLD signal is not directly measuring blood flow, what is it measuring?
it is sensitive to the magnetic properties of oxygenated and deoxygenated haemoglobin in RBCs
outline the oxygenated properties of RBCs
diamagnetic- slightly (indistinguishably) reduces magnetic field
outline deoxygenated properties of RBCs
paramagnetic- slightly increases magnetic field (distinguishable)
if magnetic field increases, precession…
increases (precession frequency is proportional to the magnetic field strength)
so if magnetic field increases where deoxygenated RBCs are present, what happens
it creates a local magnetic field gradient, so adjacent water molecules precess at different frequencies and dephase (spread out in the transverse xy plane), causing signal loss - the result is a distinguishable signal loss around deoxygenated blood compared with oxygenated blood
outline activation contrast occurring in BOLD (comparing rest to activation)
at rest: oxygenated blood travels through arteries, exchanges oxygen in the capillary bed, and deoxygenated blood leaves via the veins. Deoxygenated blood increases dephasing, hence reduces the T2* signal.
When active: excess oxygenated blood flows into the activated region, washing out deoxygenated blood; this reduces dephasing, hence increases the T2* signal
The small change in T2* signal between rest and activity can be expressed mathematically to give an optimum echo time (TE) - see the sketch below
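A hedged sketch of that optimisation (standard BOLD reasoning rather than anything taken from the notes): model the signal as mono-exponential T2* decay and maximise the signal change produced by a small shift in R2* = 1/T2*:

S(TE) = S_0 \, e^{-TE/T_2^*}, \qquad \Delta S \propto TE \, e^{-TE/T_2^*}, \qquad \frac{d(\Delta S)}{d(TE)} = 0 \;\Rightarrow\; TE_{opt} = T_2^*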
draw diagrams to represent the different techniques used for structural MRI data acquisition in K-space vs fMRI (echo planar)
see notes
why does fMRI use echo planar imaging
it’s much faster- can acquire a single slice in less than 100ms
what pattern does fMRI echo planar use
interleaved
what’s the disadvantage of interleaved echo planar
it can lead to stripe artefacts on the image, caused by signal changes or subject motion during acquisition
in fMRI how is signal intensity time course looked at?
the signal intensity time course is examined voxel by voxel across the rest/task/rest/task cycle, to see which voxels show a difference in signal between rest and activation
what is the method of fMRI data analysis focussed on?
model based linear regression
outline the development of models to fit voxel signal activity- with images
we have our voxel signal activity going up and down between rest and task- see notes
a simple (boxcar) model is a poor fit because the measured data are shifted in time owing to the delayed response to the stimulus
an improved model is time-shifted to fit this better
then physiology is taken into account: the haemodynamic response function (HRF), with its delayed rise and slow decay
this is convolved with the time-shifted simple model
finally, because the signal drifts over time, a linear ramp term is added - this gives the best fit (see the sketch below)
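A minimal sketch of building such a regressor (everything here is an assumption for illustration: a 30 s on/off block paradigm, TR = 2 s, and a commonly used double-gamma approximation to the HRF):

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0                       # assumed repetition time (s)
n_vols = 120                   # assumed number of volumes
t = np.arange(n_vols) * TR

# Boxcar: alternating 30 s rest / 30 s task blocks (assumed paradigm)
boxcar = ((t // 30) % 2).astype(float)

# Canonical double-gamma HRF (parameter values are a common convention, not from the notes)
hrf_t = np.arange(0, 32, TR)
hrf = gamma.pdf(hrf_t, 6) - 0.35 * gamma.pdf(hrf_t, 16)
hrf /= hrf.sum()

# Convolve the boxcar with the HRF to model the delayed, smeared-out response
task_reg = np.convolve(boxcar, hrf)[:n_vols]

# Linear ramp models slow signal drift over the run
drift_reg = np.linspace(0, 1, n_vols)

# Design matrix: task regressor, drift term, and a constant (baseline) column
X = np.column_stack([task_reg, drift_reg, np.ones(n_vols)])
```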
outline how a study like the 10 year f/u on hippocampal volume using MRI in early dementia and cognitive decline would be run
MRI baseline of volume and repeat at defined intervals for each participant
hippocampal segmentation using a regularised probabilistic atlas applied to registered brains
normalised volumes are compared over time to see hippocampal volume change and whether it aligns with dementia/cognitive decline
(overall: raw volume calculation -> normalisation -> percentage loss over time calculated)
what is the downside of ROI analysis?
you need to know what you want to look at (ROI)- an a-priori hypothesis
what do you do if you don’t have a ROI or an a-priori hypothesis
an exploratory study where you don’t have to work within pre-defined areas- this is usually Voxel Based Morphometry (VBM)
what is VBM?
a global volumetric brain analysis where a single experiment allows identification of GM changes and other associations across brain
does VBM use the same or different tools of image analysis as ROI studies?
the same
what are the steps/tools used in VBM image processing?
Brain extraction, GM tissue segmentation, templates, registration, smoothing, stats testing
outline Brain extraction (BET) in VBM
all images are organised and assembled in the same directory; BET in FSL then sets all non-brain voxels to 0
these outputs need to be checked to ensure there’s no major issues
how is GM tissue segmentation done in VBM
a GM tissue probability map (TPM) is made and overlaid on the participant's extracted brain
the TPM is quantitative, so each voxel value gives the fraction of that voxel containing GM (0-1)
discuss uneven group sizes in VBM
it's fine to have uneven group sizes in the statistical analysis, but the effect on the template image has to be considered: we want the template to be equally representative of all participants, so the average morphology isn't biased towards one group, and so that more noise isn't introduced into one group's images than the other's (because they would need larger deformations to fit the template).
This is prevented by randomly discarding the excess scans from the larger group so the group sizes are equal (N = N) - see the sketch below
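A small illustrative sketch of that balancing step (filenames and group sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)                            # fixed seed for reproducibility
group_a = [f"subA_{i:02d}.nii.gz" for i in range(40)]     # hypothetical filenames
group_b = [f"subB_{i:02d}.nii.gz" for i in range(25)]

# Randomly discard excess scans from the larger group so N matches for template building
n = min(len(group_a), len(group_b))
group_a_balanced = list(rng.choice(group_a, size=n, replace=False))
template_inputs = group_a_balanced + group_b
```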
Outline the steps of VBM templates and registrations
- overall, a VBM experiment requires all images to share the same space, but registration introduces noise, so we build the template from the data used in the analysis, making it more geometrically similar to our scans
- firstly, all scans are listed in FSL-VBM (ensuring balanced numbers)
- the MNI152 brain is used as the starting point for the template: first a 12-DOF linear (affine) registration to approximate MNI152 space, then a non-linear registration; these transformations are then applied to each subject's GM TPM (co-registration)
- this gives us an average of all images in MNI152 space
- we then go back to our native-space GM TPMs and perform new linear and non-linear registrations between these and this new GM template, instead of to the MNI152 template - giving a less noisy template
- now every GM TPM should have a reasonably high-quality transformation into the shared space
after VBM registration, what can we do?
voxelwise analysis
what is voxelwise analysis?
running a stats test per every voxel in the brain
what does voxelwise analysis after the aforementioned registration process overlook?
the ability to detect GM atrophy: atrophy primarily presents as thinning of the cortex, but the non-linear registration warps each brain to match the template, so these individual differences in GM are removed from the registered images
how do we process images to ensure we can do voxelwise analysis to see GM loss for indiviual participants
when we perform the registration we save a deformation map (the Jacobian of the transformation) - an image whose voxel values record how much each voxel was expanded or contracted during registration - and multiply the registered GM TPM by it (modulation), so the original GM volume/cortical thickness is preserved
what is the purpose of smoothing?
to boost the signal to noise ratio
signal definition
patterns in our GM values which will underpin a statistical relationship. This is a numerical change in a consistent direction across all relevant voxels
noise definition
caused by image transformations and scanner/sequence imperfections; it is a random numerical change in voxel value on top of the true values
what is the mechanism of smoothing?
signal can be relied on to be consistent across a region of the scan, so averaging each voxel with its neighbours preserves the atrophy signal while smoothing out random noise - but a balance is needed, because small regions of atrophy will have their signal smoothed away if too much smoothing is applied
FSL VBM smooths the data by 3 different amounts, measured in sigma (2, 3, 4), which is related to the full width at half maximum (FWHM)
after smoothing at the 3 levels, FSL VBM runs pilot statistics to see which gives the strongest statistical values
higher sigma value=
higher FWHM = smoothing over a greater distance (see the sketch below for the sigma-FWHM relation)
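A hedged sketch of Gaussian smoothing and the sigma-to-FWHM conversion (scipy's gaussian_filter stands in for FSL's smoothing; the 'image' is random data):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# For a Gaussian kernel: FWHM = 2 * sqrt(2 * ln 2) * sigma ≈ 2.355 * sigma
for sigma in (2, 3, 4):                      # the three FSL-VBM sigma values
    print(sigma, round(2.355 * sigma, 1))    # -> 4.7, 7.1, 9.4 (FWHM)

# Smoothing = averaging each voxel with its neighbours, weighted by the Gaussian
gm_image = np.random.default_rng(0).random((91, 109, 91))   # stand-in GM TPM
smoothed = gaussian_filter(gm_image, sigma=3)
```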
outline statistical testing/analysis done with VBM data once processed
voxelwise analysis performs the statistical test separately for every voxel coordinate across all participants
end up with a final image in template space where each voxel value= p-value for chosen stats test
there will be some random significant voxels, however, if data is truly significant it will be seen in a cluster
con of ROI analysis
very restricted in regional scope (needs a good a-priori hypothesis)
pro of ROI analysis
higher measurement accuracy and therefore statistical sensitivity for regions examined
pro of global analysis (VBM)
considers the whole brain, therefore nothing is ‘missed’
cons of global analysis (VBM)
elaborate processing and transformation steps create noise in the final data - for a given region, statistical power will not be as high as it would be in an ROI study of the same area
the multiple-comparisons burden remains a limitation even after cluster-based correction
what is the linear regression model?
the voxel time series is modelled by the time-shifted, HRF-convolved regressor combined with a linear ramp (drift) term
what is HRF
haemodynamic response function
how do we work out how well our model fits the data?
the data are a series of discrete signal measurements taken every few seconds; residual noise is calculated from the gap between the model and the actual data points
what is the equation for the general linear model (GLM)?
y = Xβ + e
what do the terms of y = Xβ + e stand for?
y = voxel time series data
x= design matrix
beta= regression parameters
e = gaussian noise
t-statistic definition
the ratio of the departure of the estimated value of a parameter from its hypothesis value to its standard error
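A minimal numpy sketch of fitting the GLM at a single voxel and forming that ratio (the design matrix X and contrast vector are assumed, e.g. from the regressor-building sketch earlier):

```python
import numpy as np

def glm_tstat(y, X, contrast):
    """OLS fit of y = X @ beta + e and the t-statistic for a contrast vector."""
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - np.linalg.matrix_rank(X)
    sigma2 = resid @ resid / dof                          # residual (noise) variance
    se = np.sqrt(contrast @ np.linalg.pinv(X.T @ X) @ contrast * sigma2)
    return (contrast @ beta) / se                         # departure / standard error

# e.g. t-statistic for the task regressor (first column of X):
# t = glm_tstat(voxel_timeseries, X, np.array([1.0, 0.0, 0.0]))
```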
what gives activation map?
the t-statistics are thresholded: wherever a voxel's t-statistic exceeds the threshold it 'lights up', showing activation in that area
is fMRI quantitative?
no (we can't get the subject to perform a task and measure activation in meaningful physical units), therefore fMRI relies on the signal contrast generated between states (e.g. task vs rest)
task definition
a set of activities usually involving stimuli and responses
run definition
a continuous period of data acquisition that has the same acquisition parameters and task
event definition
an isolated occurrence of stimulus being presented or a response being made
epoch definition
a period of sustained neural activity
trial definition
stimulus presentation followed by a response- can be treated as an event if less than 2 seconds
ISI definition
inter-stimulus- interval
ITI definition
inter-trial-interval
SOA definition
stimulus onset asynchrony
outline block designs
- earliest fMRI design mirroring that used in PET
- closely spaced successive trials over a short interval of time
- utilises blocks of identical trial types to establish task-specific conditions
draw diagram of block design
see notes
draw the block design to GLM regressor
see notes
advantages of block design
- most efficient design for detecting BOLD signal amplitude differences between conditions
- fairly robust
- can acquire more trials in less time than other designs because you don’t have to worry about spacing out individual trials to get an estimate of an individual event
disadvantages of block design
- predictable stimuli (subjects may know what is coming and alter strategies accordingly)
- inflexible for more complex tasks
- doesn’t account for any transient responses at the end of the block
- it can be difficult to determine an appropriate baseline condition
outline slow event related (ER) design
consists of short stimuli separated by fairly long ISI to enable HRF enough time to fallback to baseline before next trial
how do you workout the ideal ISI length
ISI = 8 s + 2 × stimulus duration (e.g. a 2 s stimulus gives 8 + 2 × 2 = 12 s)
draw a diagram of slow ER design
see notes
draw slow ER convolving with HRF to generate GLM regressor
see notes
outline fast ER design
- short stimuli separated by a varying ISI with events truly randomised
- every combination of trial sequences must be used
does fast ER design allow HRF decay back to baseline?
only sometimes- depends on how long the ISI is for each trial (varies)
draw a diagram of fast ER design
see notes
draw a picture of fast ER design convolved with HRF to generate GLM
see notes
list the key points of good practice in fMRI experimental design
- the design evokes the cognitive process/ other process of interest
- collect as much fMRI data as possible on as many subjects as possible
- choose stimuli and timings that create the maximal change in the cognitive process of interest
- try to time the stimuli presentation of different conditions in order to minimise signal overlap
- use software to optimise design efficiency for ER designs
- get a measure of subject behaviour in scanner (ideally related to task e.g. finger tapping)
if we have 3 different event types in fMRI trial, how many regressors do we have?
3 (event type n = regressor n)
if we have more regressors do we make more or less statistical comparisons?
more
what does COPEs stand for
Contrast Of Parameter Estimates
what are COPEs
they describe the statistical comparisons we want to make, entered into the software as contrast weight vectors
if we imagine we have 2 stimuli given to the subject in fMRI, one corresponding to faces (β1) and one to places (β2), draw the table of contrast vector, COPE and description
see notes
is the GLM fitted for every voxel?
yes
in early fMRI, how did we infer which of our voxels showed statistically significant activation?
used a slider to threshold t- value until our maps looked ‘right’
what does a small p value imply?
our null hypothesis is unlikely
what does P-value = 0.05 mean?
a false-positive rate of at most 5% is allowed for the outcome to be considered statistically significant
what does a probability transform do?
transforms t-values to z-values
z > 1.64 =
p < 0.05
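A small scipy sketch of the probability transform (the t-value and degrees of freedom are made up):

```python
from scipy import stats

tval, df = 3.2, 117                     # hypothetical t-value and degrees of freedom
p = stats.t.sf(tval, df)                # one-tailed p-value from the t-statistic
z = stats.norm.isf(p)                   # probability transform: equivalent z-value
print(round(stats.norm.isf(0.05), 2))   # 1.64 -> z > 1.64 corresponds to p < 0.05 (one-tailed)
```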
what is the issue with p < 0.05 paramters?
it allows up to 5% false positives, so in an fMRI acquisition of hundreds of thousands of voxels it may falsely implicate thousands of voxels as activated by the task, and some of these false positives may cluster together
how do we prevent false-positive fMRI findings from altering the correct inference of data
using multiple comparison correction methods
what is the purpose of group analysis?
to look at inter-subject differences- rather than just fMRI activations of a single subject
what is another term used to describe the analysis of fMRI data in individual subjects?
first level analysis
what is another term to describe group analysis?
second level analysis (or higher level analysis)
how would you go about analysing data from a group of 5 participants to look at how the group activates on average?
- firstly need to register all images from each subject into standard space (so you can make voxel-by-voxel comparisons e.g. using MNI152 template)
- each subject (k) will have a βk value (regression parameter) and an ek value (error value) calculated from 1st-level analysis
- this formulates a group-level GLM:
βk = XG βG + eG (first-level parameter estimates = group design matrix × group parameters + between-subject error)
what are the 2 group analysis models?
fixed effects and mixed effects
outline fixed effects:
βk = XG βG
ignores group variance, only considering variance in original MRI data
this makes outcome only applicable to the specific subjects studied
there are few occasions this model would be used
outline mixed effects:
βk = XG βG + eG
- considers group variance (between subject) in addition to MRI data variance
- findings can be generalised to whole population
- usually used
formulating GLM enables us to test whether group mean is…
> 0
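A minimal sketch of that group-mean test at one voxel, treating the subjects' first-level betas as the data (the beta values are invented):

```python
import numpy as np
from scipy import stats

# First-level beta estimates for one voxel across 5 subjects (invented values)
betas_k = np.array([0.8, 1.1, 0.6, 1.4, 0.9])

# With a group design matrix that is just a column of ones, testing beta_G > 0
# amounts to a one-sample t-test of the betas against zero; between-subject
# variance enters via the sample variance of the betas (mixed-effects style)
tval, p_two_sided = stats.ttest_1samp(betas_k, 0.0)
p_one_sided = p_two_sided / 2 if tval > 0 else 1 - p_two_sided / 2
```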
outline multilevel analysis
this is where we are looking at, say, the difference between 2 groups - we need to build up to this:
first level analysis- comparing each session
second level- looking at each subject to get a group average
third level: group analysis where they are grouped depending on condition
fourth level- group difference- is there a difference between groups?
outline the example of a visual checkerboard design in multilevel analysis
- we start with a paradigm of 30 second blocks alternating between rest and visual checkerboard stimuli- hence the COPEs are visual vs rest to see the difference in activation
if we wanted to look at a higher level, we can split subject groups to go through the same paradigm but with different visual contrast in visual trials- able to do more complex analysis between groups
outline brain extraction in fMRI
where we remove the non-brain tissue, allowing for more accurate registration and mask making so we are processing only in-brain voxels
outline spatial smoothing in fMRI
purpose is to improve the signal:noise ratio (SNR), however this comes at the expense of spatial resolution
outline motion correction in fMRI
rigid body transform ensures consistent image space between images
outline slice timing in fMRI
an fMRI volume may take 2-3 seconds to acquire, so slices are acquired at slightly different times during the brain's response to the stimulus. It is assumed the data in each slice were acquired at the middle of the acquisition - rarely corrected
overview temporal filtering in fMRI
high-pass filtering used to remove slow signal drift (which often appears over time)
overview unwarping in fMRI
EPI images can be warped, with the distortion depending on the polarity/direction of the phase-encode blips (y direction), so by looking at the difference between images acquired with negative and positive blips the distortion can be corrected
overview registration in fMRI
registers image to standard space for group analysis
the simplest approach is to register the fMRI volume directly to standard space, but this is generally not very accurate, so it is better to use a high-resolution structural image from the same subject as an intermediary (before registering to a template such as MNI152)
what are the 3 main presentations of fMRI results?
- orthographic (an image in each orientation- sag, cor and ax)
- lightbox (many images showing each slice for each orientation)
- 3D - showing 3D image of brain, usually just cortical surface visible
discuss orthographic presentation of fMRI
it's suitable if there are only a few activation clusters, but with many clusters you need more separate views
discuss lightbox presentation of fMRI
best as it gives overview of slices to give a good idea of the area
discuss 3D presentation of fMRI
looks nice but is less useful scientifically - usually only cortical surface activation is shown, though you can cut deeper into the brain, add colours, etc.
by what proportion does the BOLD-signal change in task related activation
only 1-2% but still significant
what does the subject do in RS fMRI
nothing- usually looks at a cross on a screen
why study the brain at rest?
- to compare task-based localisation with resting state connectivity
- gives an inherent understanding of functional brain organisation
- objective clinical biomarker (e.g. BD DMN)
- quick, with little set up/expertise needed
- can be done in any population
when/how was functional connectivity discovered?
in 1995 by Biswal et al., when finger-tapping activation was compared with resting-state data: similar regions were correlated at rest, showing that networks are active during the resting state
What are the 3 types of connectivity?
functional, structural, effective
what is functional connectivity?
the temporal correlation between the activity (BOLD signal) of 2+ regions
what is structural connectivity?
the anatomical (white matter) connections between 2+ regions
what is effective connectivity?
the directed (causal) influence that one region exerts on another
what are the typical RS networks that come up in RS research (name 3)
3 from:
- DMN
- right/left fronto-parietal attention
- executive control
- medial and lateral visual cortical areas
- auditory system
- sensorimotor cortex
what did Cordes et al. 2001 find
evidence that resting-state connectivity is neuronal in origin: the resting-state fluctuations correlated with fluctuations in EEG power
what’s the typical RS acquisition set up?
- usually 6-10 mins of RS
- T2*- weighted pulse sequence
- TE (echo time)= 35ms
- TR (repetition time, the time between successive excitations/volumes) = 2000ms
- in-plane pixel dimensions- 1.8mm x 1.8mm
- transaxial slices of 4mm
what is the overall/basic resting state analysis pipeline?
fMRI data –> Preprocessing –> Denoising –> Postprocessing
what steps are involved in fMRI preprocessing
the same as other imaging types:
- registration (alignment to shared space)
- slice timing correction
- outlier detection/removal
- functional normalisation
- smoothing
what is denoising?
removing noise, either by filtering it out of the data before analysis or by including noise regressors in the analysis to explain additional variance
what are the 2 overall types of denoising?
- Independent Component Analysis (ICA)
- Mean- signal based Analysis
what are the 3 main subcategories of ICA-based denoising?
- ICA-AROMA (fully automatic, only removes head-motion)
- ICA-FIX (semi-automatic, requires training data for the software)
- Manual (labour intensive; maybe the best, but prone to human error)
what are the mean-signal based denoising techniques?
- global-signal
- mean GM/WM/CSF
what are the 2 overall methods of post-processing resting fMRI?
voxel-based methods
node-based methods
what are the 5 voxel-based methods of post-processing fMRI data?
- seed-based correlation analysis (SBC)
- ICA
- Amplitude of low frequency fluctuation (ALFF)
- regional homogeneity (ReHo)
- group MVPA
what are the 5 node-based methods of post-processing fMRI data?
- network modelling analysis
- graph theory analysis
- dynamic causal modelling (DCM)
- non-stationary methods
- ROI to ROI
is ICA a network measure or predefined measure?
network
outline ICA
- multivariate, voxel-based approach
- data-driven: finds 'interesting' structure in the data
- exploratory (global/whole-brain), model-free method
- spatial approach
- separates the mixed signal into statistically independent components
X = A * S (fMRI data = component time courses * independent spatial maps) - see the sketch below
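A hedged sketch of the X = A * S decomposition using scikit-learn's FastICA on random stand-in data (dimensions and component count are arbitrary; real pipelines typically use dedicated tools such as FSL MELODIC):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical fMRI data: 200 time points x 5000 voxels (random stand-in)
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5000))

# Spatial ICA: the spatial maps are treated as the independent sources,
# so the data are passed voxels-first (X.T).  Then X ≈ A @ S with
#   A = mixing matrix of component time courses (time x components)
#   S = independent spatial maps (components x voxels)
ica = FastICA(n_components=20, random_state=0, max_iter=500)
S = ica.fit_transform(X.T).T     # spatial maps, (components x voxels)
A = ica.mixing_                  # time courses, (time x components)
```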
what is the ICA pipeline for both single subject and group?
see in notes
outline groupwise ICA (GICA)
instead of individually analysing each subject, it can all be done at once in software, giving spatial maps for each participant with statistical values per component indicating correspondence to specific functional networks (e.g. the DMN)
from here you can do second level analysis
outline seed-based connectivity (SBC)
calculate correlation coefficient between 2 regions
it is a localised approach where user specifies seed/ROI
what is the pipeline for SBC and ROI?
define ROI or seed region –> extract time series –> extract connectivity measures –> group/condition comparison
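A minimal numpy sketch of the SBC step of that pipeline (random stand-in data; the 'seed' is just the mean time series of an arbitrary block of voxels):

```python
import numpy as np

# Hypothetical preprocessed data: 180 time points x 5000 voxels
rng = np.random.default_rng(0)
data = rng.standard_normal((180, 5000))
seed_ts = data[:, :10].mean(axis=1)         # mean time series of a seed ROI

# Seed-based connectivity: Pearson correlation of the seed with every voxel
data_z = (data - data.mean(0)) / data.std(0)
seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
sbc_map = data_z.T @ seed_z / len(seed_z)   # correlation coefficient per voxel
```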
what is diffusion?
random Brownian motion of molecules due to thermal processes
what does diffusion imaging measure
the diffusion coefficient (mobility) of molecules, particularly water
what’s the difference between a standard echo MRI sequence and that used in diffusion weighted (use image to help)
see notes for image
a standard spin-echo MRI sequence has a 90 degree RF pulse followed by a 180 degree refocusing pulse to give the echo readout,
whereas diffusion weighting adds strong diffusion-sensitising gradients before and after the 180 degree pulse, so the signal is attenuated in proportion to diffusion
is there more or less diffusion signal attenuation in CSF than WM/GM?
more
what is the equation for diffusion weighted image?
see notes
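The notes keep the equation in the diagrams; the commonly quoted mono-exponential (Stejskal-Tanner) form is:

S = S_0 \, e^{-bD}

where b is the diffusion weighting (b-value) and D is the apparent diffusion coefficient (ADC).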
How much is signal attenuated by in WM with a b-value of 1000smm-2?
~60%
how much is signal attenuated by in CSF with a b-value of 1000smm-2?
~95%
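A quick sanity check of those figures using S/S0 = exp(-bD) with assumed textbook-ballpark ADC values (not taken from the notes):

```python
import numpy as np

b = 1000.0                               # b-value in s/mm^2
adc = {"WM": 0.9e-3, "CSF": 3.0e-3}      # assumed typical ADCs in mm^2/s

for tissue, D in adc.items():
    attenuation = 1 - np.exp(-b * D)     # fraction of signal lost, since S/S0 = exp(-bD)
    print(f"{tissue}: ~{attenuation:.0%} signal attenuation")
# -> WM: ~59%   CSF: ~95%
```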
why does diffusion signal have greater attenuation in CSF than WM/GM?
because water is more mobile (diffuses more freely) in CSF, so the signal attenuation is greater
how do we get a number of diffusion coefficients for diffusion imaging?
we take several measurements of diffusion in different directions - at least 3 (x,y,z)
why is diffusion image in corpus callosum brighter in one direction than another?
because diffusion mobility differs along different directions, depending on the structure of the myelinated fibres
do we use a baseline in Diffusion imaging?
yes, a T2 weighted image with no diffusion gradient
after we calculate a diffusion gradient in each direction, what do we do?
calculate the mean diffusivity by averaging the diffusion coefficients across directions
how many directions to most diffusion MR studies use?
around 30-60
outline how human tissue effects diffusion imaging
it restricts diffusion because of its complex microstructural environment (which is why we measure several directions). In some regions, e.g. CSF, diffusion spreads out equally in every direction (spherical/isotropic), whereas in some voxels, particularly in myelinated white matter, diffusion is ellipsoid/anisotropic: it happens more freely in certain directions (normally along the myelinated fibres) - this is described by the diffusion tensor
what are scalar diffusion images?
images calculated from diffusion properties to provide a visual summary
these can be created using mean diffusivity
or
using fractional anisotropy (FA), which reflects how much diffusivity deviates from the mean across directions - the image appears mostly dark (where diffusion is isotropic/close to the mean) and lighter where WM occurs, because of its ellipsoid (anisotropic) diffusivity
summarise scalar diffusion images:
- calculated from T2 and directionally averaged DWI images
- if diffusion is measured in multiple directions, a measure of directional diffusion variability (anisotropy) can be calculated
- many human neuroimaging studies have shown changes in ⟨D⟩ and anisotropy in neurodegenerative diseases
what is ⟨D⟩?
the mean diffusivity (the directionally averaged diffusion coefficient)
outline directional colour encoded images
using fractional anisotropy (FA) with colour coding for the direction of diffusion (e.g. S-I blue, A-P green, L-R red)
outline diffusion imaging’s application to stroke
able to show stroke in images earlier than standard T2
- early in stroke diffusion is reduced (first few hours) as water moves into cells (cytotoxic oedema- intracellular swelling)
- over time, diffusion recovers towards normal, then rises above normal due to vasogenic oedema (extracellular oedema after cell death)
- this ability to see earlier on has enabled better stroke management due to improved visualisation
what are 2 key types of data correction used in DWI processing?
motion correction - to improve image quality
susceptibility-induced distortion correction - EPI images are distorted, often at tissue boundaries, due to the phase-encode gradient blips; images acquired with positive and negative blips are combined to estimate and remove the distortion
what is tractography?
where eigenvectors can be plotted at each voxel to give an indication of major WM fibre pathways/orientations
overview streamline tractography
a tracking algorithm (e.g. FACT) follows the direction of the principal eigenvector from voxel to voxel, starting from a seed point; the tract propagates until termination criteria are met - FA drops below a certain value and/or the angle between principal eigenvectors exceeds a threshold
what is good about streamline tractography
able to identify pathways within the vector field from a specific seed point
what’s a weakness of streamline tractography?
the algorithms used only identify a single pathway, which doesn’t take branching fibres into account
they also give little indication of whether the pathway is a true anatomical pathway
how are these weaknesses of streamline tractography overcome?
diffusion ellipsoids and profiles where orientation distribution functions (ODFs) are considered and probabilistic tractography can be performed
what is probabilistic tractography?
streamline tractography is performed many times, with the ODF used to sample the diffusion direction in each voxel; the number of times a streamline passes through a voxel is counted to give a connection probability (see the sketch below)
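A toy 2D sketch of that counting idea (a uniform vector field plus Gaussian directional jitter stands in for sampling a real ODF; this is illustrative only, not an actual tractography implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D "principal direction" field: every voxel points along +x (purely illustrative)
shape = (50, 50)
principal_dir = np.zeros(shape + (2,))
principal_dir[..., 0] = 1.0

seed = np.array([5.0, 25.0])
visit_count = np.zeros(shape, dtype=int)
n_streamlines = 1000
step = 0.5

for _ in range(n_streamlines):
    pos = seed.copy()
    visited = set()
    for _ in range(200):                      # propagate until leaving the field
        i, j = int(pos[0]), int(pos[1])
        if not (0 <= i < shape[0] and 0 <= j < shape[1]):
            break
        visited.add((i, j))
        # Sample a direction around the local principal direction
        # (a stand-in for drawing from the orientation distribution function)
        direction = principal_dir[i, j] + rng.normal(0.0, 0.2, size=2)
        direction /= np.linalg.norm(direction)
        pos = pos + step * direction
    for i, j in visited:
        visit_count[i, j] += 1

# Fraction of streamlines passing through each voxel ≈ connection probability to the seed
connection_prob = visit_count / n_streamlines
```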
what’s the main limitation of diffusion tensor model?
it is not good at modelling voxels with crossing fibres
outline diffusion imaging ROI analysis
WM pathways can be identified using probabilistic tractography and ROI masks can be generated- useful to see if connectivity is breaking down by looking at correlations with clinical/cognitive performance
outline using DTI to look at cortical connectivity
streamline-based algorithms perform poorly because the cortex is mostly GM, where diffusion is largely isotropic, so tracts cannot be followed easily
what helps overcome issues with using DTI to look at cortical connectivity?
global tractography allows you to see the ‘best connections’ throughout the brain where connections are scored to give a connectivity value
overview voxel-based morphometry using fractional anisotropy
VBM is traditionally used to look at GM density; this approach instead uses fractional anisotropy to look at WM tracts. However, the results are highly dependent on registration and it is difficult to draw strong inferences from them