Week 2: GLM part 1 Flashcards
Once we have collected the data, we want to know which brain areas are active. How do we do this?
Through model-based techniques (e.g., linear regression) and model-free approaches (e.g., PCA)
What general steps do we follow to perform model-based analysis of fMRI data?
1) Model the predictors; 2) fit the resulting models (one per predictor) to the data (with more than one predictor, the individual models are summed and this sum is fit to the actual data); 3) assess how well the model fits (e.g., via a t-statistic). See the sketch below.
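A minimal sketch of these three steps for a single voxel, using NumPy; the data, design, and true betas are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200                                   # number of time points (TRs)

    # Step 1: model the predictor (a boxcar stimulus regressor plus an intercept).
    x = np.zeros(n)
    x[20:40] = x[80:100] = x[140:160] = 1.0   # stimulus "on" periods
    X = np.column_stack([np.ones(n), x])      # design matrix [intercept, regressor]

    # Simulated voxel time course: true betas are (10, 2), plus Gaussian noise.
    y = X @ np.array([10.0, 2.0]) + rng.normal(0, 1, n)

    # Step 2: fit the model to the data by ordinary least squares.
    beta_hat, rss, _, _ = np.linalg.lstsq(X, y, rcond=None)

    # Step 3: assess the fit via a t-statistic for the stimulus regressor (H0: beta1 = 0).
    sigma2 = rss[0] / (n - X.shape[1])        # residual variance estimate
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    print(beta_hat[1] / se)                   # large t => the voxel tracks the stimulus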
Univariate analysis
We treat each voxel’s time course independently
Hemodynamic response function
A function that represents the change in blood-oxygen levels in response to neural activity
T-statistic definition and formula
Signal-to-noise measure. Formula: t = (beta_hat - beta_0) / SE(beta_hat), where beta_0 is the value of beta under the null hypothesis (often 0).
Contrast testing
Conducting hypothesis testing about our betas (for which we have one model each). Example: we have b1 and b2; we create one model for each, sum these models, and fit the sum to the actual data. Now we want to know whether the amplitude of the b1 model is the same as the amplitude of the b2 model. Here H0 would be: b1 = b2, and HA would be: b1 != b2. Bring everything to the left of the "=" sign (giving b1 - b2 = 0 in this case) and find the contrast vector that answers the hypothesis: solving c * [b0, b1, b2, b3] = b1 - b2 gives c = [0, 1, -1, 0].
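A minimal sketch of this contrast test in NumPy; the regressors, data, and true betas are hypothetical:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    x1, x2, x3 = rng.normal(size=(3, n))             # three hypothetical regressors
    X = np.column_stack([np.ones(n), x1, x2, x3])    # columns correspond to b0, b1, b2, b3
    y = X @ np.array([1.0, 2.0, 2.0, 0.5]) + rng.normal(0, 1, n)

    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    c = np.array([0.0, 1.0, -1.0, 0.0])              # contrast vector: picks out b1 - b2

    resid = y - X @ beta_hat
    sigma2 = resid @ resid / (n - X.shape[1])        # residual variance estimate
    se = np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
    print((c @ beta_hat) / se)                       # t-statistic for H0: b1 - b2 = 0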
In which of the following steps do we get the regressors that we will fit to the actual data?
- creating stimulus vector for each stimulus/task
- convolving the vector with the HRF
- fitting the model to the data
At step 2: convolving the initial stimulus vector with the HRF gives us the regressors (the Xs), which we then fit to the actual data (see the sketch below).
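A minimal sketch of step 2 in NumPy/SciPy, using a double-gamma HRF; the shape parameters, TR, and onsets are common defaults assumed here, not from the course:

    import numpy as np
    from scipy.stats import gamma

    tr = 1.0                                  # repetition time in seconds (assumed)
    t = np.arange(0, 32, tr)                  # HRF support: 0-32 s
    hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)   # peak ~5 s, late undershoot
    hrf /= hrf.sum()                          # normalize

    stim = np.zeros(100)                      # step 1: stimulus vector (one per task)
    stim[[10, 40, 70]] = 1.0                  # hypothetical event onsets

    regressor = np.convolve(stim, hrf)[:len(stim)]    # step 2: the regressor X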
Probability
Expected relative frequency of a particular outcome
Random variable
A variable whose value is determined by the outcome of a random experiment
Expected value (mean)
The mean of a random variable: the average of its possible values, weighted by their probabilities
Variance (of a random variable)
How the values of the random variable are dispersed around the mean
Covariance
How much two random variables vary together
Bias, variance, estimator
Bias: how far, on average, the estimate is from the true parameter value; variance: the reliability (spread) of the estimate across repeated samples; estimator: a statistic that estimates a parameter
Regression equals…
…association (not causality!!)
Simple linear regression model
Yi = beta0 + beta1*Xi + ei
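A minimal sketch of fitting this model with the closed-form OLS estimates (slope = sample covariance / sample variance); the data and the true values b0 = 1, b1 = 0.5 are assumptions for the demo:

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 10, 50)
    y = 1.0 + 0.5 * x + rng.normal(0, 1, 50)    # Yi = b0 + b1*Xi + ei

    b1_hat = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)   # cov(X, Y) / var(X)
    b0_hat = y.mean() - b1_hat * x.mean()
    print(b0_hat, b1_hat)                       # should be close to 1 and 0.5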
The error in linear regression is assumed to have a mean of ….
0 (E[ei] = 0: the errors do not systematically push Y above or below the regression line; averaging all the error terms e1, e2, ..., eN would give approximately 0)
The variance of Yi (Var(Yi)) equals…
sigma^2, the error variance (treating Xi as fixed, all of the variability in Yi comes from the error term ei).
The formula for sigma^2 is …
sigma_hat^2 = sum(ei^2) / (N - p), where N is the number of independent pieces of information (observations) and p is the number of parameters in the model, including b0 (see the worked sketch below)
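A worked sketch of this estimate for a simple regression; the data and the true error variance (sigma^2 = 4) are assumptions for the demo:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 50
    x = rng.uniform(0, 10, n)
    y = 1.0 + 0.5 * x + rng.normal(0, 2.0, n)   # true error sd = 2, so sigma^2 = 4

    X = np.column_stack([np.ones(n), x])
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta_hat
    print(resid @ resid / (n - 2))              # p = 2 parameters (b0 and b1); ~4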
The most used loss function for linear regression
The least-squares loss: the sum of squared errors, sum((Yi - Yi_hat)^2)
Gauss Markov theorem states that…
…assuming the following assumptions of the GLM (linear regression model) hold:
1. Linearity: The relationship between X and the mean of Y is linear.
2. Homoscedasticity: The variance of the errors is the same for any value of X
3. Independence: Observations are independent of each other (hence randomly sampled)
4. Errors have mean of 0
…then the OLS estimators b0_hat and b1_hat are the Best Linear Unbiased Estimators (BLUE) of b0 and b1. The OLS method is therefore used to estimate the parameters of a linear regression model (e.g., the GLM).
Note: "Best Linear Unbiased" means 1) the estimator is unbiased (its results will, on average, hit the bull's eye) and 2) it has the lowest variance among all linear unbiased estimators. See the simulation sketch below.
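A Monte Carlo sketch of both properties; the design and the comparison estimator (the two-endpoint slope, which is also linear and unbiased) are illustrative choices, not from the course:

    import numpy as np

    rng = np.random.default_rng(4)
    x = np.linspace(0, 10, 30)
    b0, b1 = 1.0, 0.5

    ols, endpoint = [], []
    for _ in range(5000):
        y = b0 + b1 * x + rng.normal(0, 1, x.size)
        ols.append(np.polyfit(x, y, 1)[0])                 # OLS slope estimate
        endpoint.append((y[-1] - y[0]) / (x[-1] - x[0]))   # another linear unbiased estimator

    print(np.mean(ols), np.mean(endpoint))   # both ~0.5: unbiased
    print(np.var(ols) < np.var(endpoint))    # True: OLS has the smaller variance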