Week 3: GLM part 2 + Experimental designs Flashcards
What we can control in experimental designs
What we present and when
Main effects
effects of a single condition, collapsing over the other(s). For example, testing whether red stimuli lead to different activity levels than green stimuli (regardless of shape) would be a test of a main effect
Factorial design
Factorial designs are designs in which each event (e.g., stimulus) may be represented by a combination of different conditions. For example, you could show images of squares and circles (condition 1: shape) which may be either green or red (condition 2: color)
In neuroscience, most hypotheses are…
directional (e.g., red > green)
Parametric design
So far, we have discussed only designs with conditions that are categorical, such as “male vs. female faces” and “circles vs. squares”. The independent variables in your experimental design, however, do not have to be categorical! They can be continuous or ordinal, meaning that a particular variable might have different values (or “weights”) across trials. Designs involving continuously varying properties are often called parametric designs or parametric modulation. One hypothesis you might be interested in is whether there are voxels/brain regions whose response is modulated by the reward magnitude (e.g., higher activity for larger rewards, or vice versa). In parametric designs, we create two regressors for every parametric modulation: one for the unmodulated response and one for the modulated response.
To obtain large t-values (i.e., reliable effects), we need three things (see the formula below)
- a large response/effect (i.e., a large beta)
- an efficient design or, in other words, low design variance (this can be optimized prior to collecting data)
- low noise/unexplained variance (this can be dealt with during preprocessing)
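These three ingredients map directly onto the standard GLM t-statistic for a contrast c (a textbook formula, not specific to this course):

$$
t = \frac{c\hat{\beta}}{\sqrt{\hat{\sigma}^2 \, c (X^\top X)^{-1} c^\top}}
$$

Here $c\hat{\beta}$ is the effect, $\hat{\sigma}^2$ is the noise (MSE), and $c (X^\top X)^{-1} c^\top$ is the design variance: a bigger numerator or a smaller denominator yields a bigger t-value.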
Efficiency is the inverse of…
the design variance (i.e., high efficiency = low design variance)
Design variance is…
the part of the beta’s standard error that is caused by the design matrix (X)
Researchers do not need to acquire (fMRI) data to calculate the efficiency of their design (X). Why?
We do not need to acquire data to calculate the efficiency because the formula for the efficiency only relies on the design matrix X, which is fully determined by the stimulus onsets and their ordering. In other words, the efficiency formula only relies on X (the design matrix), and not on y (the actual signal); see the sketch below.
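A minimal numpy sketch of this point (the helper name design_efficiency and the toy predictor are illustrative, not from the course materials):

```python
import numpy as np

def design_efficiency(X, c):
    """Efficiency = 1 / design variance = 1 / (c (X'X)^-1 c')."""
    X = np.asarray(X, dtype=float)
    c = np.atleast_2d(c).astype(float)
    design_variance = (c @ np.linalg.inv(X.T @ X) @ c.T).item()
    return 1.0 / design_variance

# Toy design matrix: an intercept plus one (hypothetical) condition predictor.
# Note that no measured signal (y) appears anywhere in the calculation.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.standard_normal(100)])
print(design_efficiency(X, [0, 1]))
```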
We want high variance within our predictors because…
…we want to base our model on a wide range of values that cover the full range of the variable; the higher the variance of a predictor, the lower the design variance, and thus the more precise the estimated beta!
Reason about this
By now you probably understand the culprit: the design variance! Given that the effect (the IQ beta) is about the same for the two models and the MSE is higher for the high-variance model, the logical conclusion is that the design variance of the high-variance model must be much lower.
Detection
A blocked design groups trials of the same condition together in blocks, while an event-related design is completely random in the sequence of trials. Note that designs can of course also be a “mixture” between blocked and event-related (e.g., largely random with some “blocks” in between).
So, if we’re interested in detection (i.e., the amplitude of the response), what should we choose? Well, the answer is simple: blocked designs.
This is because blocked designs (almost always) have lower design variance, due to:
- lower covariance (“correlation”) between the predictors
- higher variance (“spread”) within the predictors
Block designs
- Bigger response difference between baseline and experimental blocks > but we are “flooding” the brain with sustained stimulation, so the shape of the prediction is weaker (less informative)
- Similar events are grouped together
- A two-condition block design with 16-20 sec blocks maximizes power > but not always applicable!
- If we are interested in detection (which we almost always are), blocked designs are better because:
> Lower covariance (no risk of overlap like in event designs)
> Higher predictor variance = more “spread”
Event-related designs
- Better!! (ignore what it says in the notebook!)
- Events are mixed together
- Good signal-to-noise ratio
- jitter (semi-random ISI) = good for statistical efficiency; randomising the ISI reduces the overlap/correlation between successive predictors, which raises efficiency (see the sketch below)
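A rough numpy/scipy simulation of the detection trade-off, under simplifying assumptions (1 s temporal resolution, a toy double-gamma HRF, ~200 trials per design; all names and numbers are illustrative):

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Toy double-gamma HRF: positive peak around 5 s plus a small undershoot."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

n = 400                            # run length in seconds (1 s resolution)
h = hrf(np.arange(32))             # 32 s HRF kernel
rng = np.random.default_rng(42)

def efficiency(onsets_a, onsets_b, c=np.array([[0.0, 1.0, -1.0]])):
    """Efficiency of the A-vs-B contrast for the given stimulus onsets."""
    X = np.zeros((n, 2))
    X[onsets_a, 0] = 1.0
    X[onsets_b, 1] = 1.0
    X = np.column_stack([np.convolve(X[:, j], h)[:n] for j in range(2)])
    X = np.column_stack([np.ones(n), X])          # add an intercept
    return 1.0 / (c @ np.linalg.inv(X.T @ X) @ c.T).item()

# Blocked: alternating 20 s blocks of A and B, one trial every 2 s
blocks = np.arange(0, n, 40)
block_a = np.concatenate([np.arange(s, s + 20, 2) for s in blocks])
block_b = np.concatenate([np.arange(s + 20, s + 40, 2) for s in blocks])

# Event-related: same trial rate, random condition order, jittered ISI (1-3 s)
onsets = np.cumsum(rng.integers(1, 4, size=200))
onsets = onsets[onsets < n]
labels = rng.permutation(len(onsets)) % 2
print("blocked:      ", efficiency(block_a, block_b))
print("event-related:", efficiency(onsets[labels == 0], onsets[labels == 1]))
```

With these settings the blocked design should come out considerably more efficient for the A-vs-B contrast, in line with the notes above.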
Psychological principles (1)
Stimulus predictability
- Influences psychological state
- e.g., go/no-go task: the predictability of the no-go stimulus determines how hard it is to withhold the response (event-related works better than blocked in this case)
Psychological principles (2)
Time on task
- We can only image what subjects are doing, so they should be doing what we want them to do as much as possible
- Recognition time ~ 250 ms!! > remember that we recognise objects very fast, so the stimulus should be presented for <= 250 ms
Psychological principles (3)
Participant strategy
- Different stimulus configurations afford different strategies (e.g., the Stroop task)
- Compatible trials vs. incompatible trials = compare these two to study cognitive control (e.g., people respond more slowly and less accurately on incompatible trials)
- For the Stroop task, it makes more sense to use a blocked design
Psychological principles (4)
Temporal precision of psychological manipulation
- What we expect from subjects should fit with what subjects can do
- e.g., sad vs. happy memories > blocked/event-related designs are hard here: people cannot switch between emotions that fast
- Solution: single epoch design (e.g., fixation baseline > emotion induction > emotional state > recovery induction > fixation baseline): this is viable because it does not present an emotion twice; emotions will not mean the same thing twice in a row!
Psychological principles (5)
Unintended brain activity
- Brain imaging can capture all kinds of mental processes
- e.g., spatial attention shifting
Overview
Experimental design
- Design: how many independent variables will be manipulated
- Trials: how are events organized (event-related, blocked, rapid, etc.)
- The design is all handled by the GLM! (BUT there are other, novel analyses that cannot be fully handled by the GLM: e.g., mediation, connectivity, classification/prediction, RSA)
Kinds of designs (1)
Subtraction/pure insertion
- we can compare more complex conditions to simpler ones by subtracting the activity of the simpler condition; what is left over must be due to the complex condition.
- the goal is to isolate a single neural process
- assumes that neural processes can be summed linearly
- assumes that the neural processes associated with each task do NOT interact with each other
Issues: interaction with context & the pure-insertion assumption is often violated
Kinds of designs (2)
Multiple subtraction
- e.g., (task A – task B – task C)
- Can avoid issues with pure insertion (context issue)
- are useful for increasing specificity of the conclusions you can draw from your results.
Kinds of designs (3)
Factorial
- an approach that characterizes interactions between processes
- interaction effect = the effect of one factor depends on the level of the other (main effect 1 x main effect 2, e.g., gender x expression); see the contrast sketch below
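A tiny sketch of how main-effect and interaction contrasts are typically coded for a 2x2 (gender x expression) design; the beta values here are made up for illustration:

```python
import numpy as np

# Hypothetical cell betas, one per condition:
# [male/happy, male/sad, female/happy, female/sad]
betas = np.array([2.0, 1.0, 3.0, 0.5])

c_gender      = np.array([ 1,  1, -1, -1])  # main effect of gender
c_expression  = np.array([ 1, -1,  1, -1])  # main effect of expression
c_interaction = np.array([ 1, -1, -1,  1])  # gender x expression interaction

for name, c in [("gender", c_gender),
                ("expression", c_expression),
                ("interaction", c_interaction)]:
    print(f"{name}: {c @ betas}")
```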
Kinds of designs (4)
Parametric
- Designs involving continuously varying properties (different parametric values) > continuous independent variables (vs. discrete as in factorial)
- Key takeaway: does the BOLD signal seem to increase as the intensity of the stimulus increases? > if so, this increases our confidence that there is a relationship
- Two levels (we assume that our design affects the voxel response in two ways):
> Unmodulated response > a response to the task independent of the parametric value
> PREDICTOR: stick predictor (1s and 0s)
> Modulated response > a response to the task dependent on the parametric value
> PREDICTOR: sticks weighted by the mean-subtracted parametric value (e.g., reward magnitude); we subtract the mean to decorrelate the modulated from the unmodulated predictor (see the sketch below)
Very likely, the beta for the modulated predictor will have a greater effect on the voxel activity than the beta for the unmodulated predictor
Example: One hypothesis you might be interested in is whether there are voxels/brain regions whose response is modulated by the reward magnitude (e.g., higher activity for larger rewards, or vice versa)
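A minimal sketch of building these two regressors, under simplifying assumptions (1 s grid, toy HRF, made-up onsets and reward magnitudes):

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Toy double-gamma HRF."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

n = 200                                  # run length in seconds (1 s grid)
onsets = np.arange(10, 190, 20)          # 9 hypothetical reward trials
rewards = np.array([1., 5., 2., 8., 3., 9., 4., 7., 6.])  # magnitudes

sticks = np.zeros(n)                     # unmodulated: 1 at every onset
sticks[onsets] = 1.0

mod = np.zeros(n)                        # modulated: mean-subtracted magnitude
mod[onsets] = rewards - rewards.mean()

h = hrf(np.arange(32))
X = np.column_stack([np.ones(n),
                     np.convolve(sticks, h)[:n],
                     np.convolve(mod, h)[:n]])

# Mean-centering makes the two task regressors (nearly) uncorrelated
print(np.corrcoef(X[:, 1], X[:, 2])[0, 1])
```

Because the modulated sticks are mean-centered, the two task regressors end up (nearly) uncorrelated, so the GLM can attribute variance to each of them separately.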
IMPORTANT trade-off
- Only one comparison/condition
> More power, but less generalizability
> Stick with this one at the beginning of studying a new field
- Many comparisons/conditions
> Low power, but more generalizability