lecture 6 - experiment meets analysis Flashcards

1
Q

GLM assumptions

A
  1. linearity: any change in the regressor is associated with a proportional change in the data
    –> i.e., there is a linear relationship between regressor and data
  2. normality: residuals are normally distributed
  3. no multicollinearity: regressors are independent of each other (this assumption is often violated)
  4. independence: observations and residuals are independent of each other (e.g., different time points)
  5. homoscedasticity: the variance of the residuals is constant across all levels of the data (e.g., all time points)
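
A minimal sketch of checking some of these assumptions in Python (assuming NumPy, SciPy, and statsmodels are available; the time series and design matrix below are simulated stand-ins for real data):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
X = sm.add_constant(rng.standard_normal((200, 2)))              # hypothetical design: intercept + 2 regressors
y = X @ np.array([1.0, 0.5, -0.3]) + rng.standard_normal(200)   # simulated voxel time series

model = sm.OLS(y, X).fit()
resid = model.resid

print("normality (Shapiro-Wilk p):", stats.shapiro(resid).pvalue)                   # assumption 2
print("regressor correlation:", np.corrcoef(X[:, 1], X[:, 2])[0, 1])                # assumption 3
print("lag-1 residual autocorrelation:", np.corrcoef(resid[:-1], resid[1:])[0, 1])  # assumption 4
```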
2
Q

multicollinearity: definition + what is the problem

A

when two or more predictors in the model are highly correlated, or predictors are linear combinations of other predictors

  1. correlated regressors explain overlapping variance in the signal
  2. model coefficients (β) become unstable
    –> i.e., small changes in the data lead to large changes in the coefficients
  3. in case of perfect collinearity, there are infinite solutions to the regression
    –> meaning the model can’t uniquely determine the individual contributions of the correlated predictors.
  4. bouncing beta effect: model coefficients for the same regressor can be strongly positive or strongly negative depending on the coefficients of other regressors
    –> the sign and size of the coefficient for a regressor can change dramatically depending on the presence of other correlated regressors in the model.
  5. coefficients are not reliable, and the resulting model does not generalize to new data
    –> most important problem
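
A small simulation (a sketch, assuming only NumPy) of this instability: two nearly identical regressors split the shared variance differently for every noise realization, so their coefficients 'bounce':

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.standard_normal(n)
x2 = 0.95 * x1 + 0.05 * rng.standard_normal(n)   # x2 is almost a copy of x1 (near-perfect collinearity)

def betas(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

for i in range(3):
    y = 1.0 * x1 + rng.standard_normal(n)        # same true signal (only x1 matters), new noise
    print(f"realization {i}: betas =", np.round(betas(y, np.column_stack([x1, x2])), 2))
# the two coefficients vary strongly (and can flip sign) even though the true effect never changes
```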
3
Q

multicollinearity: how can we quantify the problem

A
  1. look at the data: are stimulus features/behavioral variables correlated?
    –> if yes, this will cause problems for EVERY VOXEL in the brain
  2. look at the covariance structure of the design matrix: check for high correlations among predictors after HRF convolution; unnecessary predictors that are highly correlated and affect important comparisons could be removed (see the sketch after this list)
  3. compute variance inflation factors (VIF): quantifies how much the variance of a regression coefficient increases due to multicollinearity
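
A sketch of point 2, assuming NumPy and a hypothetical (time points x regressors) design matrix after HRF convolution:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 4))                          # stand-in design matrix (4 regressors)
X[:, 3] = 0.8 * X[:, 0] + 0.2 * rng.standard_normal(300)   # regressor 3 largely duplicates regressor 0

corr = np.corrcoef(X, rowvar=False)     # regressor-by-regressor correlation matrix
print(np.round(corr, 2))                # large off-diagonal values flag problematic predictor pairs
```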
4
Q

VIF

A

quantifies how much variance of a regression coefficient increases due to multicollinearity

  • R^2 = variance explained in a predictor by all other predictors in the model
  • VIF = 1/(1-R^2)
  • VIF = 1, no collinearity
  • VIF = 5-10, you are in trouble: 80-90% of the variance in that predictor is explained by the other predictors
  • VIF > 20 = close laptop (severe collinearity: over 95% of the predictor’s variance is explained by the others)
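
A sketch of computing VIFs directly from the formula above, assuming NumPy and a design matrix X whose columns are the predictors (statsmodels also ships a variance_inflation_factor helper that does the same):

```python
import numpy as np

def vif(X):
    """VIF per column: regress each predictor on all others, then VIF = 1 / (1 - R^2)."""
    out = []
    for j in range(X.shape[1]):
        target, others = X[:, j], np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, target, rcond=None)
        r2 = 1 - np.sum((target - others @ beta) ** 2) / np.sum((target - target.mean()) ** 2)
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# example with a simulated design matrix containing one correlated pair of regressors
rng = np.random.default_rng(2)
X = rng.standard_normal((300, 4))
X[:, 3] = 0.8 * X[:, 0] + 0.2 * rng.standard_normal(300)
print(np.round(vif(X), 1))   # the correlated pair shows clearly inflated VIFs
```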
5
Q

‘solving’ multicollinearity

A

not possible, but you can
1. avoid the problem before it occurs through the experimental design
2. compensate for the problem through analytical strategies

6
Q

multicollinearity: experimental considerations

A
  1. think of the analysis before designing the experiment: determine a priori which factors need to be independent
  2. orthogonal task designs: e.g., vary each experimental component independently from all others, balance their combination
  3. separate conditions in time: add inter-trial intervals with jitter, separate task phases (e.g., stimuli & button clicks)
  4. counterbalance trial order: ensure that each condition precedes each other condition equally often (at least randomize the order; see the sketch after this list)
  5. block designs: group together trials of a certain condition to separate them from trials of another condition (unlike event-related designs)
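
A minimal sketch of points 3-4, assuming NumPy (condition labels, trial counts, and the ITI range are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
conditions = ["faces", "houses", "objects"]        # hypothetical conditions
trials = np.repeat(conditions, 20)                 # 20 trials per condition
rng.shuffle(trials)                                # at least randomize the trial order

itis = rng.uniform(2.0, 6.0, size=trials.size)     # jittered inter-trial intervals (seconds)
onsets = np.cumsum(itis)                           # resulting trial onset times
print(list(zip(trials[:5], np.round(onsets[:5], 1))))
```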
7
Q

multicollinearity: analytical considerations

A
  1. reduce model complexity: remove predictors that are not needed
    –> rule of thumb: n_regressors < n_datapoints/20
  2. orthogonalization of regressors: decide which predictor gets credit for explaining overlapping variance
  3. regularized regressions (e.g., Ridge regression): penalty term (λ) added to the GLM shrinks coefficients, with larger coefficients being compressed more.
    –> λ value needs to be estimated through cross-validation (CV); see the sketch after this list
    –> model fits the training data less well, but it generalizes better to new data
  4. dimensionality reduction: find principal components of design matrix and fit those to the data
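
A sketch of point 3, assuming scikit-learn is available: RidgeCV picks λ (called alpha in scikit-learn) by cross-validation, and the ridge coefficients are shrunk relative to OLS:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RidgeCV

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 5))
X[:, 4] = 0.9 * X[:, 0] + 0.1 * rng.standard_normal(300)   # a collinear pair of regressors
y = X[:, 0] - 0.5 * X[:, 1] + rng.standard_normal(300)

ols = LinearRegression().fit(X, y)
ridge = RidgeCV(alphas=np.logspace(-3, 3, 13), cv=5).fit(X, y)   # lambda estimated through CV

print("OLS betas:  ", np.round(ols.coef_, 2))
print("ridge betas:", np.round(ridge.coef_, 2), "| chosen lambda:", ridge.alpha_)
```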
8
Q

pros and cons of orthogonalization

A

pro: can be appropriate for covariate regressors that accompany a main regressor (the shared variance is then credited to the main regressor)

con: can be misleading
–> e.g., difference between model coefficients is ‘not real’ but rather reflects your decision
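
A minimal sketch of orthogonalizing a covariate regressor with respect to a main regressor (assuming NumPy; the main regressor then keeps all of the shared variance, which is exactly the decision the con above warns about):

```python
import numpy as np

def orthogonalize(covariate, main):
    """Remove from `covariate` the variance it shares with `main` (OLS projection residual)."""
    M = np.column_stack([np.ones_like(main), main])       # intercept + main regressor
    beta, *_ = np.linalg.lstsq(M, covariate, rcond=None)
    return covariate - M @ beta

rng = np.random.default_rng(5)
main_reg = rng.standard_normal(200)
covariate = 0.6 * main_reg + rng.standard_normal(200)

cov_orth = orthogonalize(covariate, main_reg)
print("correlation before:", round(np.corrcoef(main_reg, covariate)[0, 1], 2))
print("correlation after: ", round(np.corrcoef(main_reg, cov_orth)[0, 1], 2))   # ~0 after orthogonalization
```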

9
Q

regularization

A
  • Regularized regression is a statistical method that modifies traditional regression to prevent overfitting, which can occur when a model is too complex. It introduces a penalty term to the loss function that the optimization algorithm seeks to minimize.
  • This penalty term typically increases as the absolute value of the coefficients increases, leading to a preference for smaller coefficients overall, which can lead to simpler models that generalize better to new data.
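  • written out in the notation used elsewhere in these cards, the ridge-regularized loss is the ordinary least-squares loss plus a penalty on the squared coefficients: loss(β) = Σ(y − Xβ)^2 + λ·Σβ^2
  • λ = 0 gives back the ordinary GLM/OLS solution; larger λ shrinks the coefficients more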
10
Q

temporal autocorrelation

A

the signal is correlated with a delayed version of itself, meaning that each value in the time series can be predicted based on the values that came before
–> also known as serial dependence

11
Q

problem with temporal autocorrelation

A

observations are not independent

  1. samples acquired close in time are very similar (e.g., because of the HRF)
  2. the amount of independent information in the data is reduced
  3. degrees of freedom are overestimated, leading standard errors to be underestimated
  4. autocorrelation leads to inflated t-statistics and thus to an increase in false-positive results
12
Q

Temporal autocorrelation – How can we quantify the problem?

A
  1. compute autocorrelogram
  2. prewhitening
13
Q

compute autocorrelogram

A

An autocorrelogram is a plot that shows the correlation of the time series with itself at different lags.

  1. correlate time series with a delayed version of itself
  2. do this for all possible delays
  3. inspect the resulting curve (i.e., the autocorrelogram for all delays)
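
A sketch of steps 1-3 with NumPy (statsmodels' acf function gives the same values); the time series here is simulated and smoothed to induce autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(6)
ts = np.convolve(rng.standard_normal(300), np.ones(5) / 5, mode="same")   # smoothing -> autocorrelation

max_lag = 10
autocorr = [1.0] + [np.corrcoef(ts[:-lag], ts[lag:])[0, 1] for lag in range(1, max_lag + 1)]
for lag, r in enumerate(autocorr):
    print(f"lag {lag:2d}: r = {r:+.2f}")   # plotting r against lag gives the autocorrelogram
```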
14
Q

prewhitening

A

remove autocorrelation by transforming the data such that the residuals resemble white noise

  1. fit a GLM model
  2. compute residual autocorrelation
  3. correct residual autocorrelation (e.g., through filtering)
  4. add the corrected residuals to the ‘explained (fitted) signal’
  5. re-run the GLM on corrected data

this improves fMRI reliability
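
A minimal AR(1) prewhitening sketch, assuming NumPy. This is the common Cochrane-Orcutt-style variant that filters both data and design with the estimated lag-1 autocorrelation, rather than literally reconstructing a corrected signal as in steps 4-5 above:

```python
import numpy as np

def ar1_prewhiten_glm(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # 1. fit the GLM
    resid = y - X @ beta

    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]      # 2. residual lag-1 autocorrelation

    y_w = y[1:] - rho * y[:-1]                          # 3.-4. filter data and design to remove it
    X_w = X[1:] - rho * X[:-1]

    beta_w, *_ = np.linalg.lstsq(X_w, y_w, rcond=None)  # 5. re-run the GLM on the corrected data
    return beta_w, rho
```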

15
Q

Temporal autocorrelation as a feature, not a bug

A

Check for autocorrelations in your data: they might be informative for your research question, or they might cause problems (e.g., by violating model assumptions)

16
Q

heteroscedasticity

A

physiological or thermal noise can vary over scan duration (e.g., head motion), leading to variations in the variance of residuals

17
Q

what is the problem with heteroscedasticity

A
  1. variance in residuals may change over time
  2. standard errors differ between conditions, time points, etc., leading to biased t-statistics
18
Q

how can we detect heteroscedasticity

A

plot residuals over the predicted values of the regression model
–> e.g., plot(predict(mod), residuals(mod))

  • In a homoscedastic situation, the residuals will be randomly dispersed around the horizontal axis, with no clear pattern.
    –> flat, patternless band around zero (good)
  • In contrast, a heteroscedastic pattern might show a funnel shape where residuals spread out with larger predicted values, indicating increasing variability in the residuals.
    –> funnel shape (bad)
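
A sketch of this diagnostic in Python (assuming NumPy, Matplotlib, and statsmodels; the data are simulated so that the noise grows with the predicted value), with statsmodels' Breusch-Pagan test as a numeric check:

```python
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 300)
y = 2 * x + rng.standard_normal(300) * (0.5 + 0.3 * x)   # residual variance grows with x

X = sm.add_constant(x)
mod = sm.OLS(y, X).fit()

plt.scatter(mod.fittedvalues, mod.resid, s=8)             # funnel shape = heteroscedasticity
plt.axhline(0, color="k")
plt.xlabel("predicted values"); plt.ylabel("residuals")
plt.show()

_, bp_pvalue, _, _ = het_breuschpagan(mod.resid, X)       # small p-value -> heteroscedasticity
print("Breusch-Pagan p =", bp_pvalue)
```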
19
Q

how can we solve heteroscedasticity

A

use estimation methods that correct for non-constant residual variance
–> e.g., weighted least squares, robust standard errors
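
A sketch with statsmodels of both options named above: heteroscedasticity-robust (HC3) standard errors, and weighted least squares with weights that are assumed known here for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
x = rng.uniform(0, 10, 300)
y = 2 * x + rng.standard_normal(300) * (0.5 + 0.3 * x)   # heteroscedastic noise
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()
robust = sm.OLS(y, X).fit(cov_type="HC3")                # robust standard errors
print("naive SEs: ", np.round(ols.bse, 3))
print("robust SEs:", np.round(robust.bse, 3))

weights = 1.0 / (0.5 + 0.3 * x) ** 2                     # weights ~ 1 / residual variance
wls = sm.WLS(y, X, weights=weights).fit()                # weighted least squares
print("WLS betas: ", np.round(wls.params, 3))
```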

20
Q

smoothing of hemodynamic response

A
  • the hemodynamic response ‘smooths’ the time series, blurring the lines between trials and making regressors similar
    –> The HRF is a slow and gradual process, taking several seconds to rise and fall again after a brief neural event. This means that if two cognitive events occur close together in time, the slow HRFs that result from these events can overlap and summate.
    –> The overlapping responses mean that the regressors, which are meant to be distinct, will appear more similar to each other because the blood flow responses they are trying to model are not well separated in time.
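
A sketch (assuming NumPy and SciPy, with a simplified single-gamma HRF) showing how convolution makes the regressors for two events only a few seconds apart highly similar:

```python
import numpy as np
from scipy.stats import gamma

t = np.arange(0, 30, 1.0)                 # time in seconds, 1-s sampling (hypothetical TR)
hrf = gamma.pdf(t, a=6)                   # simplified single-gamma HRF peaking around 5 s

n = 120
ev1, ev2 = np.zeros(n), np.zeros(n)
ev1[20], ev2[23] = 1, 1                   # two events only 3 s apart

reg1 = np.convolve(ev1, hrf)[:n]          # HRF-convolved regressors
reg2 = np.convolve(ev2, hrf)[:n]

print("correlation of raw onset vectors:  ", round(np.corrcoef(ev1, ev2)[0, 1], 2))
print("correlation after HRF convolution: ", round(np.corrcoef(reg1, reg2)[0, 1], 2))
```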
21
Q

why do fMRI researchers need to be extra-aware of multicollinearity and autocorrelation compared to those using other techniques (e.g., EEG)?

A
  1. because HRF-convolution of regressors can increase the correlation among them
  2. because the hemodynamic response results in ‘smooth’ time series (i.e., time points are not independent)
22
Q

which GLM assumption is related to the proportional change in data with changes in the regressor

A

linearity

23
Q

why can multicollinearity of your design matrix be a problem

A
  1. coefficients can invert their sign depending on other coefficients
  2. affected model coefficients are difficult to interpret
  3. multicollinearity increases the variance of model coefficients
24
Q

Prewhitening is a technique to correct for:

A

autocorrelation

25
Q

What are reasons to use ridge regression for fMRI analysis?

A
  1. it can improve the generalization performance of the model
  2. it can compensate for the problem of multicollinearity