Lecture 9 - Encoding & Decoding Flashcards

1
Q

Hubel & Wiesel

A
  • experiments on how the visual cortex processes visual information
  • demonstrated that certain neurons respond maximally to lines of specific orientations at specific locations
2
Q

Structured single-voxel BOLD time course

A

we can fit population receptive field (pRF) parameters to it

3
Q

population receptive fields (pRF) + model

A
  • collective receptive field of a group of neurons within a given area or volume
    –> the receptive field of a voxel
    –> describes how groups of neurons in the visual cortex respond to specific aspects of a visual stimulus, such as its orientation
  • the best-fitting parameters of the pRF model (x₀, y₀, σ) are estimated so that the model's prediction matches the observed BOLD signal changes
  • the function models the receptive field's position and size in visual space (= the retinotopic location of a voxel's population receptive field)
  • a huge number of parameters are estimated using penalised regression
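The bullets above can be sketched in code. This is a minimal, hypothetical illustration of how the (x₀, y₀, σ) parameters generate a predicted voxel time course; the ±10° visual-field extent is an assumption, and HRF convolution is omitted for brevity.

```python
import numpy as np

def prf_prediction(x0, y0, sigma, stimulus):
    """Predicted voxel time course: overlap of each stimulus frame
    with a 2D Gaussian receptive field centred at (x0, y0), size sigma.

    stimulus: binary apertures, shape (n_timepoints, n_y, n_x).
    """
    n_t, n_y, n_x = stimulus.shape
    # Assumed visual-field coordinates in degrees (illustrative choice).
    xs, ys = np.meshgrid(np.linspace(-10, 10, n_x),
                         np.linspace(-10, 10, n_y))
    rf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    # Response at each timepoint = stimulus/receptive-field overlap.
    return stimulus.reshape(n_t, -1) @ rf.ravel()
```

In an actual pRF fit, (x₀, y₀, σ) would be adjusted until this prediction (convolved with an HRF) best matches the voxel's BOLD time course.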
4
Q

more complex receptive fields

A
  • a huge number of parameters are estimated using penalised regression
  • multiple models may be used to predict how a voxel will respond to a set of stimuli
  • unlike in simple models, we can add ‘orientation’ and other visual characteristics to our receptive field model
5
Q

penalised regression

A
  • ordinary least squares is only stable when nr_regressors < nr_timepoints
  • ridge (Tikhonov, or L2-norm) regression addresses this instability by adding a penalty term λ to the OLS cost function
    –> penalises the beta weights, shrinking them towards 0
    –> particularly useful when there are many regressors, as it helps prevent overfitting and can deal with multicollinearity
  • essentially, a way to fit more complex models
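A minimal sketch of the idea, using the closed-form ridge solution β = (XᵀX + λI)⁻¹Xᵀy:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge (Tikhonov / L2) regression via the closed-form solution
    beta = (X'X + lam*I)^{-1} X'y.  lam = 0 recovers OLS."""
    n_regressors = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_regressors), X.T @ y)
```

Increasing `lam` shrinks the norm of the estimated betas towards 0, which is what stabilises the fit when there are many (correlated) regressors.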
6
Q

how to fit encoding models

A
  1. linear (penalised) regression with a very large design matrix
    –> more data-driven, less constrained
    –> no strict mathematical model
  2. finding the best-fitting parameters of a mathematical model
    –> more model-driven; works only in specific, controlled situations
    –> requires an explicit mathematical model
7
Q

λ

A
  • penalty term in ridge regression
  • the higher λ, the stronger the penalisation
  • λ = 0 reduces to OLS
  • λ must be chosen by cross-validation
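A hedged sketch of the last bullet: pick λ by k-fold cross-validation, keeping the value with the lowest out-of-fold squared prediction error (the closed-form ridge solution is used inside each fold).

```python
import numpy as np

def cv_lambda(X, y, lambdas, n_folds=5):
    """Choose lambda by k-fold cross-validated prediction error."""
    n, p = X.shape
    folds = np.array_split(np.arange(n), n_folds)
    errors = []
    for lam in lambdas:
        err = 0.0
        for test_idx in folds:
            train_idx = np.setdiff1d(np.arange(n), test_idx)
            # Ridge fit on the training folds only.
            beta = np.linalg.solve(
                X[train_idx].T @ X[train_idx] + lam * np.eye(p),
                X[train_idx].T @ y[train_idx])
            # Squared error on the held-out fold.
            err += np.sum((X[test_idx] @ beta - y[test_idx]) ** 2)
        errors.append(err)
    return lambdas[int(np.argmin(errors))]
```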
8
Q

encoding models

A
  • statistical models that aim to predict neural responses/brain activity based on stimulus attributes
  • if we can find what a neuron/sensor/voxel responds to, we will have a model of how this unit encodes information: an encoding model
  • stimuli –> brain activity
9
Q

dimensions of receptive fields - higher spaces

A
  • a voxel’s receptive field doesn’t have to be retinotopic (visual); it can live in a more abstract space, such as numerosity
  • the same logic extends to auditory feature spaces
10
Q

decoding

A
  • ENCODING-MODEL-BASED DECODING: when we have an encoding model for voxels (populations of neurons) that describes neural population responses, we can reverse the process to see what a pattern of activations tells us about conditions, stimuli, etc.
  • reconstruct stimuli, or determine the information content of a specific region of interest, from observed brain activity
  • rather than the mean signal intensity, the pattern of activity across a region of interest is used to classify the brain state
11
Q

performance of decoding algorithm

A
  • detecting the relevant features is left to a machine learning algorithm
  • performance is expressed as accuracy (percentage correct)
12
Q

encoding vs decoding

A
  • Encoding models go from stimuli to brain activity, aiming to understand how sensory information is represented in the brain.
  • Decoding models go from brain activity to stimuli or mental states, aiming to read out what information is being processed or represented in the brain at a given time.
13
Q

decoding debates

A
  1. how do these patterns relate to neural mechanisms?
  2. is fMRI sampling sub-voxel (finer-scale) information, or merely the global structure within a region?
  • for accurate interpretation, we need to be explicit about which patterns the decoder/underlying model is supposed to pick up on
    –> encoding models help us do this
14
Q

decoding process

A

stimulus –> population receptive field –> single neurons tuned to stimulus orientation –> cortical columns –> hemodynamic coupling changes the magnetic properties of the blood –> BOLD signal –> MRI voxels

15
Q

computational cognitive neuroscience

A
  1. input to the computational model: parameters and stimulus
  2. model prediction and fitting:
    - the “fit model to data” step adjusts the model parameters to closely match the observed data
    - the model makes predictions about brain activity in response to stimuli; these predictions are compared against actual brain data
  3. brain-wide single-voxel timecourses: the model’s predictions are compared to the timecourses of brain activity recorded from individual voxels across the brain
  4. outcome: best-fitting parameters and cross-validated (CV) prediction performance
16
Q

computational model output: best fitting parameters

A

used for probing computation and representation: examining, via the model, how information is encoded and processed in the brain

17
Q

computational model output: CV prediction performance

A

used for comparing computational models: to see which one best explains the observed data

18
Q

inverted encoding model (decoding)

A
  1. build an encoding model
    –> describes how neural channels in the brain respond to various stimuli
  2. fit the encoding model to training data
    –> the encoding model is fit to actual brain activity data (training data): for each voxel, we estimate weights on the modelled channels from its responses to the presented stimuli
  3. use the fitted encoding model to reconstruct channel responses from testing data
    –> take new data (testing data) and use the fitted encoding model to decode it; the model attempts to infer which stimulus was presented based on the voxel responses
    –> to do this, the encoding model is inverted: for each observed voxel activity pattern in the testing data, the model estimates the most likely combination of channel activities that could have given rise to that pattern
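Steps 2 and 3 can be sketched with simulated data. This is a hypothetical illustration, not a full IEM pipeline: `C_train` holds the modelled channel responses to the training stimuli, `B_train` the measured voxel patterns, and the linear-algebra inversion assumes fewer channels than voxels.

```python
import numpy as np

def fit_iem(C_train, B_train):
    """Step 2: estimate the (n_channels, n_voxels) weight matrix W
    by least squares, so that B ~ C @ W."""
    W, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)
    return W

def invert_iem(W, B_test):
    """Step 3: invert the fitted model to reconstruct channel
    responses from new voxel patterns: C_test = B_test W' (W W')^{-1}."""
    return B_test @ W.T @ np.linalg.inv(W @ W.T)
```

The reconstructed channel responses can then be compared against each candidate stimulus's expected channel profile to infer which stimulus was shown.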
19
Q

bayesian decoding from an encoding model

A
  • finding p(stimulus | BOLD response)
  • advantage: you get a continuous read-out of what the pattern of voxel responses represents: you ‘reconstruct’ the stimulus, instead of just getting a correct/incorrect outcome from a classification algorithm’s prediction
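A hedged sketch of the idea: evaluate the posterior over a grid of candidate stimuli, assuming independent Gaussian noise around the encoding model's prediction and a flat prior. Here `predict` stands for any fitted encoding model mapping a stimulus value to a voxel pattern (a placeholder, not a specific method from the lecture).

```python
import numpy as np

def bayesian_decode(bold, predict, stim_grid, noise_sd):
    """Posterior p(stimulus | BOLD) over a grid of candidate stimuli,
    assuming i.i.d. Gaussian noise and a flat prior."""
    # Gaussian log-likelihood of the observed pattern under each stimulus.
    log_lik = np.array([
        -np.sum((bold - predict(s)) ** 2) / (2 * noise_sd ** 2)
        for s in stim_grid])
    # Subtract the max before exponentiating, for numerical stability.
    posterior = np.exp(log_lik - log_lik.max())
    return posterior / posterior.sum()
```

The full posterior is the continuous read-out: its peak is the reconstructed stimulus, and its width expresses the decoder's uncertainty.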
20
Q

stimulus reconstruction

A

using diffusion models for reconstruction
–> a diffusion model goes from a noise pattern to an image

  • done by combining semantic (high-level) and visual (low-level) information processing streams
21
Q

two main ways for fitting receptive field like encoding models

A
  1. iterative parameter search based on a mathematical model
  2. penalised regression using a very large design matrix
22
Q

difference between MVPA and encoding models

A
  1. encoding models focus on the single-voxel level and try to explain single-voxel responses with computational models of information processing
  2. by working at the single-voxel level, encoding models make the pattern across voxels interpretable
  3. MVPA doesn’t care about the single-voxel level, and treats voxels only as elements of vectors that contain the eponymous patterns