lecture 6: receptive field models Flashcards

1
Q

encoding

A
  • finding out what a neuron, sensor, or voxel (a unit in brain imaging) responds to
  • this way we can have a model of how it encodes information
2
Q

decoding

A
  • the reverse process of encoding
  • find out what the pattern of activations of a group of neurons tells us about conditions, stimuli, etc. (what they are representing)
3
Q

encoding vs decoding

A
  • Encoding helps us understand how systems represent information.
    • Understand what each unit responds to (representation).
  • Decoding lets us interpret those representations
    • Use patterns of activations to infer what the system is perceiving or processing.
4
Q

progression of processing

A

hierarchical processing from simple to complex tasks

5
Q

up the hierarchy, neurons have

A
  1. increased abstraction
  2. increased invariance
  3. increased specialization
  4. increased multi-sensory integration
  5. increased temporal integration
  6. increased action-perception integration
6
Q

hubel & wiesel

A
  • research on how the brain processes visual information
  • discovered how neurons in the visual cortex respond to specific visual features, such as edges, orientation, and motion
  • by stimulating different parts of the visual field, they showed how neurons are “tuned” to certain spatial or temporal properties of stimuli
  • not a mathematical model
7
Q

receptive field

A
  • the specific area of sensory input (e.g., visual, auditory, or tactile stimuli) to which a particular neuron responds
  • In the visual system, it might be a portion of the retina that a neuron “cares about.”
8
Q

types of visual receptive fields

A
  1. LGN: circular receptive fields (center-surround antagonism)
  2. V1: oriented receptive fields (and sometimes flickering)
  3. MT: receptive fields tuned to motion
9
Q

retinotopic mapping

A

as an object passes through successive receptive fields, a corresponding wave of activation passes across the cortex

10
Q

population receptive field (pRF)

A
  • joint/average receptive field of a population of neurons
  • typically modeled as a 2D Gaussian over visual space
11
Q

what are pRF parameters and what do they represent

A
  • position x and y
  • size σ
  • what portion of visual space is represented by this location on the cortex
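A minimal sketch of how these parameters produce a predicted response, assuming a 2D Gaussian pRF and a toy bar stimulus (all names and values here are illustrative, not from the lecture):

```python
import numpy as np

def prf_response(stim, xs, ys, x0, y0, sigma):
    """Predicted pRF response: overlap of the stimulus aperture
    with a 2D Gaussian centered at (x0, y0) with width sigma."""
    g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    return np.sum(stim * g)

# toy 21x21 visual field, in degrees of visual angle
coords = np.linspace(-10, 10, 21)
xs, ys = np.meshgrid(coords, coords)

# bar stimulus covering the left half of the visual field
stim = (xs < 0).astype(float)

# a pRF centered in the left field overlaps the bar more than one on the right
left = prf_response(stim, xs, ys, x0=-5, y0=0, sigma=2)
right = prf_response(stim, xs, ys, x0=5, y0=0, sigma=2)
```

The key point: the three parameters (x, y, σ) fully determine which stimuli drive the response, which is what makes them interpretable.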
12
Q

What can the x and y parameters in retinotopic mapping be translated into?

A

Polar angle and eccentricity, showing which part of the visual field each cortical location prefers.
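As an illustrative sketch, the Cartesian pRF center (x, y) converts to polar coordinates in the usual way (the example values are hypothetical):

```python
import numpy as np

# hypothetical pRF center (x, y) in degrees of visual angle
x, y = 3.0, 4.0

ecc = np.hypot(x, y)                   # eccentricity: distance from fixation
angle = np.degrees(np.arctan2(y, x))   # polar angle around fixation
```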

13
Q

gaussian model: interpretable parameters

A
  • the parameters allow us to probe information processing in different visual areas (V1, V2, etc.)
  • this way, the model allows us to say something about differences between brain areas
  • such as the location and size of their RFs, but also preferred orientation
14
Q

computational cognitive neuroscience

A
  1. parameters + stimulus make up the computational model
  2. from here we can make a computational model prediction
  3. then we can fit the model to the data
  4. we can find brain-wide single-voxel timecourses
  5. this results in
    1. best-fitting parameters: with this, we can probe computation and representation
    2. CV prediction performance: with this, we can compare computational models
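This pipeline can be sketched in miniature: a toy 1D Gaussian receptive-field model is fit to a simulated single-voxel timecourse by grid search over the parameters (everything here is a hypothetical illustration, not the lecture's actual fitting procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(stim_positions, x0, sigma):
    # toy 1D pRF model: predicted response to a bar at each position
    return np.exp(-(stim_positions - x0) ** 2 / (2 * sigma ** 2))

positions = np.linspace(-10, 10, 40)            # stimulus sequence
true = predict(positions, x0=2.0, sigma=1.5)    # "ground truth" voxel model
voxel = true + rng.normal(0, 0.05, size=true.shape)  # noisy voxel timecourse

# fit the model to the data: grid search for the best-fitting parameters
grid_x = np.linspace(-10, 10, 81)
grid_s = np.linspace(0.5, 4, 15)
errs = [(np.sum((voxel - predict(positions, x0, s)) ** 2), x0, s)
        for x0 in grid_x for s in grid_s]
_, best_x, best_s = min(errs)
```

Evaluating `predict(..., best_x, best_s)` against a held-out stimulus run would give the cross-validated prediction performance used to compare models.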
15
Q

bayesian decoding analysis: goal

A
  • p(s|b)
  • to decode or infer the representation of a stimulus (e.g., its orientation, value) based on BOLD response.
16
Q

p(s|b)

A
  • posterior probability of stimulus dimensions s given BOLD pattern b
  • tells us how probable each stimulus value (like an orientation) is, given the observed neural responses.
17
Q

What does the peak and spread of the posterior p(s|b) indicate

A
  • peak: most likely orientation of stimulus
  • spread: uncertainty
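A toy numeric sketch of reading both quantities off a posterior, assuming a discretized Gaussian posterior over orientation (all values are hypothetical):

```python
import numpy as np

orientations = np.arange(0, 180)                     # stimulus dimension s, degrees
posterior = np.exp(-(orientations - 45) ** 2 / (2 * 10 ** 2))
posterior /= posterior.sum()                         # normalize p(s|b)

peak = orientations[np.argmax(posterior)]            # most likely orientation
mean = np.sum(orientations * posterior)
spread = np.sqrt(np.sum((orientations - mean) ** 2 * posterior))  # uncertainty
```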
18
Q

tractability of acquiring full posterior

A
  • tractable for single dimension
    • e.g., orientation/value
  • intractable for an entire image space
    • i.e., decoding an entire image
    • instead, find the Maximum A Posteriori (MAP) estimate in (or sample from) the space of natural images
19
Q

What is the purpose of a voxel noise covariance matrix in Bayesian decoding?

A

To model the correlated noise between voxels and prevent decoding errors caused by random fluctuations.
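A minimal sketch of why the covariance matters, assuming a multivariate Gaussian noise model for the likelihood p(b|s) (function and variable names are mine, not from the lecture): a deviation that is shared across voxels is exactly the kind of fluctuation correlated noise produces, so a covariance-aware likelihood penalizes it less than an independent-noise model would.

```python
import numpy as np

def log_likelihood(b, pred, cov):
    """log p(b | s) under Gaussian noise: observed pattern b,
    model prediction pred, voxel noise covariance cov."""
    d = b - pred
    _, logdet = np.linalg.slogdet(cov)
    k = len(b)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + k * np.log(2 * np.pi))

b = np.array([1.0, 1.0])      # the same deviation in both voxels
pred = np.zeros(2)
cov_ind = np.eye(2)                              # independent noise
cov_corr = np.array([[1.0, 0.9], [0.9, 1.0]])    # correlated noise

ll_ind = log_likelihood(b, pred, cov_ind)
ll_corr = log_likelihood(b, pred, cov_corr)      # shared deviation is more plausible
```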

20
Q

bayesian decoding: method

A
  1. encoding-model based on likelihood p(b|s): fitting a model that links stimuli to observed BOLD signals across voxels
  2. decoding: ‘reconstruction’ yields the posterior: using the encoding model p(b|s), infer the stimulus s from the observed BOLD activity b via Bayes’ rule
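The two steps above can be sketched end to end with a hypothetical orientation-tuned encoding model and a flat prior (all tuning functions, noise levels, and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# step 1 — encoding model p(b|s): each voxel has a preferred orientation
prefs = np.linspace(0, 180, 8, endpoint=False)

def predict(s):
    # predicted BOLD pattern for orientation s (circular tuning, 180° period)
    return np.exp(np.cos(np.deg2rad(2 * (s - prefs))))

s_true = 60.0
b = predict(s_true) + rng.normal(0, 0.1, size=prefs.shape)  # observed pattern

# step 2 — decoding: posterior over candidate orientations (flat prior),
# assuming isotropic Gaussian noise with sd 0.1
grid = np.arange(0, 180)
log_like = np.array([-0.5 * np.sum((b - predict(s)) ** 2) / 0.1 ** 2
                     for s in grid])
post = np.exp(log_like - log_like.max())
post /= post.sum()                       # normalized p(s|b)
s_hat = grid[np.argmax(post)]            # MAP estimate of the stimulus
```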
21
Q

what can we learn by analysing posterior probabilities obtained from the decoding process

A
  1. ground truth correlations
  2. uncertainty of the decoder
22
Q

bayesian decoding result: ground truth correlations

A
  • correlate expected p(s|b) (peak of posterior) with known stimulus (ground truth)
  • this way we check the accuracy of the decoder
23
Q

bayesian decoding result: uncertainty of the decoder

A
  • dispersion of p(s|b) reflects the uncertainty of the decoder
  • helps us test for prioritization: high uncertainty might indicate noise in the data or ambiguity in the brain’s representation.
24
Q

limitations of linear models

A
  1. linear responses tend to go to infinity: A linear model’s response is directly proportional to the strength of its input. If the input doubles, the output doubles. While this is mathematically straightforward, it’s not biologically realistic.
  2. negative-response issue
25
Q

constraints of real neurons

A
  1. Maximum Firing Rate: Neurons cannot fire infinitely fast due to physical limits. Real neurons have a maximum firing rate, beyond which they saturate.
  2. No Negative Firing: Neurons cannot “fire negatively.” Negative responses in a linear model (as you might observe when shifting the phase of the Gabor filter by π) are biologically meaningless since neurons can only be silent or have a positive firing rate.
26
Q

how are nonlinear transformations applied to simulate realistic neural responses

A

compute what the linear response of a neuron would be, and then

  1. set all below-zero responses to zero (ReLU): fixes the negative-response issue
  2. raise the result to some sub-unity power (between 0 and 1): fixes the issue where responses grow without bound
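The two steps can be sketched directly (a toy illustration; the exponent 0.5 is just an example of a sub-unity power):

```python
import numpy as np

def neural_response(linear, power=0.5):
    """Turn a linear filter output into a realistic response:
    1) ReLU: clip negative responses to zero
    2) sub-unity power: compress large responses"""
    rect = np.maximum(linear, 0.0)
    return rect ** power

lin = np.array([-2.0, 0.0, 1.0, 100.0])   # hypothetical linear responses
out = neural_response(lin)                # negatives removed, large values compressed
```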
27
Q

types of nonlinearities

A
  1. expansive
  2. compressive
28
Q

expansive nonlinearity

A
  • when the exponent of the output is above 1
  • higher input values (above 1) expand into higher and higher output levels.
  • lower input values (below 1) stay low for longer.
  • this way expansive nonlinearities emphasize strong inputs while suppressing weak inputs
29
Q

advantage/disadvantage of expansive nonlinearity

A

Neurons with expansive responses might prioritize highly salient or strong stimuli, but this comes at the cost of ignoring weaker signals.

30
Q

compressive nonlinearity

A
  • when the exponent of the output is below 1
  • higher input values get ‘compressed’, to smaller output values than a linear model would have produced.
  • lower input values rise faster
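The contrast between the two exponent regimes can be seen on a few toy input values (illustrative numbers only): the expansive exponent suppresses the weak input and amplifies the strong one, while the compressive exponent does the reverse.

```python
inputs = [0.25, 1.0, 4.0]

expansive = [x ** 2 for x in inputs]      # exponent > 1
compressive = [x ** 0.5 for x in inputs]  # exponent < 1 (sub-unity)
```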
31
Q

advantage/disadvantage of compressive nonlinearity

A

Neurons with compressive responses might be more sensitive to weak signals but may saturate for stronger inputs, limiting their ability to differentiate high-intensity stimuli.

32
Q

adding ‘compressive’ nonlinearity

A
  • goal: take a simple model (linear) and progressively more complex (add nonlinearity)
  • this can account for real-world visual phenomena like position invariance (the ability to recognize objects regardless of their location).
33
Q

Why Negative Responses Occur

A
  • In a linear filter, such as the Gabor filter used in this task, the output is computed as the dot product of the filter and the stimulus. This can lead to negative values in the response
  • The filter oscillates between positive and negative weights (e.g., in a sinusoidal grating pattern).
  • When a stimulus patch aligns with the negative weights of the filter, the response becomes negative.
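A small numeric demo of this, using a 1D Gabor-like filter (a sinusoid under a Gaussian envelope; all values are illustrative): the same filter gives a positive response to an aligned grating and a negative response to the same grating shifted in phase by π.

```python
import numpy as np

# 1D Gabor-like filter: sinusoid with positive and negative weights
x = np.linspace(-np.pi, np.pi, 64)
gabor = np.sin(x) * np.exp(-x ** 2 / 2)

stim_aligned = np.sin(x)            # matches the filter's positive lobes
stim_opposed = np.sin(x + np.pi)    # phase-shifted by pi

resp_aligned = gabor @ stim_aligned   # dot product: positive response
resp_opposed = gabor @ stim_opposed   # dot product: negative response
```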
34
Q

process of visual field reconstruction

A
  1. encoding: estimate the receptive field model of each voxel; the output predicts how strongly each voxel will respond to different stimuli
  2. decoding (image identification): predict brain activity for a set of candidate images using the receptive-field models
  3. compare the predicted voxel activity patterns to the observed activity; the stimulus whose predicted pattern best matches the observed brain activity is chosen as the most likely stimulus
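The identification step can be sketched as pattern matching between predicted and observed voxel activity (all data here is synthetic and the matching rule, maximum correlation, is one common choice):

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical predicted voxel patterns for 5 candidate images (5 x 20 voxels)
predicted = rng.normal(size=(5, 20))

# observed activity: a noisy version of image 3's predicted pattern
observed = predicted[3] + rng.normal(0, 0.2, size=20)

# pick the stimulus whose prediction best matches the observation
corrs = [np.corrcoef(p, observed)[0, 1] for p in predicted]
best = int(np.argmax(corrs))   # identified image index
```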
35
Q

encoding: estimating the receptive field of a voxel

A
  1. show images to subject
  2. make receptive field model for a voxel
  3. output for each voxel predicts how strongly it will respond to different stimuli
36
Q

added spatiotemporal receptive field structure (motion-direction preference of neurons)

A

helps us reconstruct videoclips from brain activity

37
Q

additional areas decoding is possible in

A
  1. sounds: neurons in the auditory cortex are organized in a tonotopic map based on the sound frequencies they respond to; this RF structure allows the same type of reconstruction for sound
  2. abstract dimensions: receptive fields for numerosity, helps us decode how the brain processes mathematical reasoning or quantity judgments
38
Q

encoding in higher spaces

A
  • a voxel’s receptive field doesn’t have to be retinotopic/tonotopic, but can also live in a more abstract (semantic) space
  • captures semantic processing in the brain
  • so it’s a receptive field in semantic space
  • i.e., voxels are not limited to encoding low-level features like orientation or retinotopic position. They can represent abstract concepts, tuned to high-level semantics like categories of objects or scenes.
39
Q

encoding in higher spaces: methods

A
  • single-voxel tuning curve in semantic space: map out activation in regions of the brain to see what they respond to and how the receptive field is tuned to this cortical location
  • this way you can create atlases based on information processing: what type of mental content is processed in which location
40
Q

attention shifts can modulate voxel tuning

A
  • Attention dynamically reallocates the brain’s resources to focus on specific semantic regions in the space
  • attention shifts the activity of brain regions to enhance responses for categories aligned with the attentional goal
41
Q

CNN as encoding model

A
  • feature maps in CNN layers can be directly compared with neural responses in different brain regions (e.g., primary visual cortex vs. higher visual areas like V4 or IT).
  • we can use this as a receptive field model of certain brain locations
42
Q

neural networks as encoding model

A
  • successive DNN layers (e.g., layers 1–8) show similarity to successive stages of the visual cortex
  • they resemble how information is processed across different regions of the brain
  • we can use this as a receptive field model of certain brain locations
43
Q

use of encoding models for neuroscientists

A
  1. compare different encoding models: see which ANN architecture resembles the brain the most. helps refine ANNs for scientific use.
  2. explore model behavior under different conditions: test how neural or ANN models behave in various contexts, like attention, memory, or task complexity
  3. use fMRI voxels like electrodes to perform neural measurements in humans: encoding models make sense of fMRI data, which allows you to build neural models of the human brain