lecture 8 - normalization Flashcards

1
Q

What was the early focus of neural circuit research?

A

Researchers sought canonical circuits—specific physical arrangements of neurons and their interactions.

2
Q

How did the focus shift in neural circuit research?

A

The focus shifted to computations, where the same computation can be implemented by different circuits across brain areas.

3
Q

What did Marcus propose regarding canonical neural computations?

A

Marcus proposed a diverse set of computationally distinct building blocks that implement a broad range of elementary, reusable computations.

4
Q

What did Carandini and Heeger define as canonical neural computations?

A

Carandini and Heeger described canonical neural computations as standard computational modules that apply the same fundamental operations across different contexts.

5
Q

What is the primary goal of studying canonical neural computations?

A

To identify a computational “core” that underlies diverse functions of the neocortex.

6
Q

What is the consensus about canonical computations?

A

The emphasis is on identifying fundamental computational principles that apply broadly across brain regions, rather than focusing on specific circuits.

7
Q

What are the three candidates for canonical computations?

A
  1. Receptive fields: Weighted linear summation combining inputs with specific weights.
  2. Predictive processing: Bayesian prediction-error propagation.
  3. Divisive normalization: Ratio of input-driven and contextual activity.
8
Q

What is the function of receptive fields in canonical computations?

A

Receptive fields combine inputs with specific weights using weighted linear summation.
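A minimal sketch of the idea (the weights and inputs are hypothetical, not from the lecture): the receptive field is a weight vector, and the response is the weighted sum (dot product) of the inputs.

```python
import numpy as np

# A receptive field as a weight vector (hypothetical values):
# larger weights where input drives the neuron more strongly.
rf_weights = np.array([0.2, 0.5, 1.0, 0.5, 0.2])

# A hypothetical stimulus falling on those input locations.
inputs = np.array([0.0, 1.0, 2.0, 1.0, 0.0])

# Weighted linear summation: the response is the dot product.
response = np.dot(rf_weights, inputs)
print(response)  # 3.0
```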

9
Q

What is predictive processing in the context of neural computations?

A

Predictive processing involves Bayesian prediction-error propagation.

10
Q

What is divisive normalization in the context of neural computations?

A

Divisive normalization is a canonical neural computation for contextual processing: responses are divided by the pooled activity of other neurons.

11
Q

What are some phenomena explained by canonical circuits?

A
  1. Persistence of excitation and inhibition: Excitation and inhibition last longer than synaptic delays.
  2. Recurrent intracortical amplification: Thalamic input does not provide the major excitation; intracortical excitatory connections amplify input.
12
Q

What role do computational accounts play in neural circuits?

A

Computational accounts explain the functional purpose of microcircuits, providing insights into their role in complex brain computations.

13
Q

How does divisive normalization work computationally?

A
  • Neuronal responses are the ratio of input-driven activity (the activation pool) to the contextual activity of other neurons (the normalization pool).
14
Q

What role does context play in divisive normalization?

A
  • Context consists of the other inputs, which are summed and used to divide the original input.
  • This way, the input → output mapping is influenced by other inputs.
15
Q

What nonlinear phenomena in V1 were first explained by divisive normalization?

A
  1. Contrast saturation
  2. Surround suppression
16
Q

In which other situations has divisive normalization been applied?

A
  1. Olfaction
  2. Retina
  3. Attention
  4. Multisensory integration
  5. pRFs (population receptive fields).
17
Q

How is divisive normalization conceptually linked to artificial neural networks (ANNs)?

A
  1. Max-pooling: Selecting the maximum value in a neighborhood.
  2. Softmax: Scaling outputs to probabilities in classification tasks.
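A brief sketch of both operations (illustrative values; standard NumPy, not lecture code). Note that softmax has the same ratio form as divisive normalization: each input-driven term is divided by a pool over all inputs.

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0])  # hypothetical activations

# Max-pooling: keep only the largest value in the neighborhood.
pooled = x.max()  # 4.0

# Softmax: each exponentiated input is divided by the pooled sum of all
# exponentiated inputs -- a divisive-normalization-like ratio.
softmax = np.exp(x) / np.exp(x).sum()

print(pooled)
print(softmax, softmax.sum())  # probabilities summing to 1
```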
18
Q

What is the generic formulation of divisive normalization?

A
  • The response y is computed as y(x) = [y1(x) + b] / [y2(x) + d]
  • Here, y1 is the stimulus-driven activation, y2 is the normalization pool, and b and d are constants (see the sketch below).
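A minimal sketch of this generic formulation (the constants b and d follow the lecture's notation; the numbers are illustrative):

```python
def divisive_normalization(y1, y2, b=1.0, d=1.0):
    """Generic DN: response = (activation + b) / (normalization + d)."""
    return (y1 + b) / (y2 + d)

# The same input-driven activity y1 yields a smaller response when the
# contextual (normalization-pool) activity y2 is larger.
print(divisive_normalization(y1=4.0, y2=1.0))  # 2.5
print(divisive_normalization(y1=4.0, y2=9.0))  # 0.5
```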
19
Q

What does divisive normalization suggest about neural activations?

A

Neural activations are not just input-driven but reflect a ratio of input-driven and contextual activations.

20
Q

What is contrast saturation in V1 neurons?

A

Neuronal responses increase with contrast but eventually saturate, because both y1 (activation) and y2 (normalization) grow with contrast. This prevents firing rates from rising without bound as input strength increases (see the sketch below).
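A sketch of how the ratio produces saturation, assuming (as in standard formulations) that both pools grow with contrast, e.g. y1 ∝ c^n and y2 ∝ c^n + σ^n; the exponent and constant here are illustrative.

```python
def contrast_response(c, sigma=0.2, n=2.0):
    # Numerator and denominator both grow with contrast c, so the ratio
    # rises steeply at low contrast and saturates toward 1.
    return c**n / (c**n + sigma**n)

for c in [0.05, 0.1, 0.2, 0.4, 0.8]:
    print(f"contrast {c:.2f} -> response {contrast_response(c):.2f}")
# 0.06, 0.20, 0.50, 0.80, 0.94 -- saturating growth
```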

21
Q

How does center-surround suppression in V1 work?

A
  • Neurons’ responses decrease when stimulation in the surround increases.
  1. The normalization pool (the neighborhood) is activated more than the activation pool.
  2. This increases the denominator y2, reducing the center’s response.
22
Q

How does responsiveness to the center get modulated in surround suppression?

A

The center’s responsiveness is modulated by the surround, meaning stimulation in the surround influences the response of the center neuron.

23
Q

Why do we want to describe attention through divisive normalization?

A

Attention modulates how strongly a neuron responds to a stimulus by amplifying or diminishing the response depending on the focus of attention.

24
Q

What are the two primary effects of attention in the brain?

A
  1. Contrast gain effect
  2. Response gain effect
25
Q

What is the contrast gain effect?

A
  • A large attention/normalization pool increases contrast gain.
  • Attention shifts the neuron’s sensitivity to lower contrast levels.
  • Neurons start responding at lower contrasts but saturate at the same maximum response as without attention.
26
Q

What is the response gain effect?

A
  • A small attention/normalization pool increases response gain by amplifying the attended stimulus.
  • The neuron’s overall response is scaled up: the attentional boost grows with contrast, and the response saturates at a higher level.
27
Q

How does the DN (Divisive Normalization) model of attention explain contrast and response gain?

A
  • By implementing attention as a modulator of the size of the normalization pool (see the sketch below):
  1. Large area of attention: creates a large normalization pool that distributes attention across multiple stimuli, leading to contrast gain.
  2. Small area of attention: creates a small normalization pool that amplifies the attended stimulus, leading to response gain.
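A reduced sketch of this mechanism (simplified from normalization models of attention such as Reynolds & Heeger's; all gains and constants are illustrative). A large attention field scales the normalization pool along with the activation, which is equivalent to lowering the semisaturation constant (contrast gain); a small attention field scales mainly the activation, multiplying the response (response gain).

```python
import numpy as np

def dn_response(c, g_act=1.0, g_norm=1.0, sigma=0.2, n=2.0):
    # Attentional gain enters the activation pool (g_act) and, when the
    # attention field covers the normalization pool, also g_norm.
    return (g_act * c**n) / (g_norm * c**n + sigma**n)

contrasts = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
baseline      = dn_response(contrasts)                     # no attention
contrast_gain = dn_response(contrasts, g_act=4, g_norm=4)  # large pool
response_gain = dn_response(contrasts, g_act=4, g_norm=1)  # small pool

# Contrast gain: the curve shifts to lower contrasts, same saturation level.
# Response gain: responses are scaled up, saturating at a higher level.
print(np.round(baseline, 2))
print(np.round(contrast_gain, 2))
print(np.round(response_gain, 2))
```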
28
Q

What is population response in attention and divisive normalization?

A

Population response is the collective neural activity across multiple neurons, accounting for:

  1. Stimulus activation
  2. Attention gain
  3. Divisive normalization effects
29
Q

What mechanism prevents overexcitation of neural responses?

A

Normalization factors regulate neural responses to prevent overexcitation, representing competition between stimuli in the neural network.

30
Q

What is suppression in signal time-courses?

A
  • Suppression refers to the reduction in the response of a neuron due to inhibitory inputs, often from the surrounding context of the receptive field.
31
Q

How does the Gaussian model explain suppression?

A

A single Gaussian pRF model captures the positive activations but explains only about half the variance.

The negative activations it fails to capture indicate suppression.

32
Q

What is the Difference of Gaussians (DoG) model?

A

The DoG model adds a larger surround receptive field whose response is subtracted from the center RF. It explains the negative responses that single-Gaussian models miss (see the sketch below).
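A sketch of a 1D DoG profile (widths and amplitudes are illustrative): subtracting a broader surround Gaussian from a narrower center Gaussian produces negative flanks around the center.

```python
import numpy as np

def gauss(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2))

x = np.linspace(-5, 5, 11)
center   = gauss(x, sigma=1.0)         # narrow center RF
surround = 0.5 * gauss(x, sigma=2.5)   # broader, weaker surround RF

dog = center - surround                # Difference of Gaussians
print(np.round(dog, 2))                # negative flanks = suppression
```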

33
Q

What is compression in signal time-courses?

A
  • Compression describes the sublinear scaling of neuronal responses as input intensity or size increases.
  • Output is compressed at higher inputs
34
Q

How does compression manifest with stimuli?

A

The response to the full stimulus is smaller than the sum of the responses to its parts: responses grow sublinearly with stimulus size.

35
Q

Which model explains response compression?

A

The Compressive Spatial Summation (CSS) pRF model explains response compression: it raises the linear response to a power n < 1 (see the sketch below).
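A minimal sketch (the exponent is illustrative): the linear pRF response is passed through a static power-law nonlinearity with n < 1, so doubling the linear drive less than doubles the output.

```python
def css_response(linear_response, n=0.5):
    # Compressive Spatial Summation: power-law output nonlinearity, n < 1.
    return linear_response ** n

# Doubling the linear drive less than doubles the output: compression.
print(css_response(1.0), css_response(2.0), css_response(4.0))
# 1.0  1.41  2.0
```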

36
Q

Why should suppression and compression be combined into one model?

A
  • The DoG and CSS models don’t explain each other’s effects: DoG captures suppression but not compression, while CSS captures compression but not suppression.
37
Q

Which model explains suppression and compression?

A
  • Suppression and compression are better explained together by a divisive normalization (DN) model, which:
  1. Divides the center activation by the surround activation.
  2. Outperforms the Gaussian, DoG, and CSS models.
38
Q

How are suppression and compression implemented in the pRF model?

A
  1. Activation term: driven by the stimulus.
  2. Normalization term: averages the surrounding activity.
39
Q

What ensures the response is zero when there is no stimulus?

A

The term -b/d: with no stimulus, both the activation and normalization terms are 0, the ratio equals (0 + b)/(0 + d) = b/d, and subtracting b/d makes the response exactly 0 (see the sketch below).
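A sketch of the full response expression under these definitions (b and d follow the lecture's notation; the numbers are illustrative):

```python
def dn_prf_response(activation, normalization, b=1.0, d=2.0):
    # Response = (activation + b) / (normalization + d) - b/d.
    # With no stimulus both terms are 0, the ratio equals b/d, and
    # subtracting b/d returns exactly 0.
    return (activation + b) / (normalization + d) - b / d

print(dn_prf_response(0.0, 0.0))  # 0.0  -> baseline when there is no stimulus
print(dn_prf_response(4.0, 2.0))  # 0.75 -> positive response to a stimulus
```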

40
Q

What is the role of the activation constant b?

A
  • It modulates suppression.
  • When b = 0, there is nothing to be suppressed, and the model reduces to a linear model.
  • As b increases, suppression occurs, and responses can go below baseline (negative flanks).
41
Q

What is the role of the normalization constant d?

A
  • It modulates compression.
  • Large d: dividing by a large value yields a nearly linear response.
  • Small d: the division has a stronger effect, producing a more compressive response.
42
Q

What do activation and normalization constants do?

A

Simulations suggest that activation (b) and normalization (d) constants jointly modulate suppression and compression.
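A small simulation sketch of these joint effects (all numbers illustrative). A surround-only stimulus drives the normalization pool but not the activation pool; with b > 0 this pushes the response below baseline (suppression), and a small d makes responses grow sublinearly with drive (compression).

```python
def dn_prf_response(activation, normalization, b, d):
    return (activation + b) / (normalization + d) - b / d

# Suppression: surround-only stimulation (activation 0, normalization 2).
print(dn_prf_response(0, 2, b=0.0, d=1.0))  # 0.0   -> no suppression when b = 0
print(dn_prf_response(0, 2, b=1.0, d=1.0))  # -0.67 -> below baseline when b > 0

# Compression: effect of doubling the drive in both pools.
for d in (100.0, 0.5):
    r1 = dn_prf_response(1, 1, b=0.0, d=d)
    r2 = dn_prf_response(2, 2, b=0.0, d=d)
    print(d, round(r2 / r1, 2))  # ~2.0 for large d (linear), 1.2 for small d
```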

43
Q

How do suppression and compression change from lower to higher levels of the visual cortex?

A

As you go from lower to higher levels of the visual cortex:

  1. b-constant decreases → less suppression.
  2. d-constant decreases → more compression.
44
Q

How do the DoG model and CSS model perform across visual cortex levels?

A
  1. The DoG model better explains V1 and V2 (lower areas).
  2. The CSS model better explains higher-level visual regions.
45
Q

What is a key free parameter of the normalization pool?

A

The size of the normalization pool.

46
Q

What is spatial oversaturation in size-response curves?

A
  • Spatial oversaturation occurs when neuron responses increase, peak, and then decrease as stimulus size increases.
  • The normalization pool becomes more stimulated than the activation pool, increasing the denominator and decreasing the response.
47
Q

What are the two nonlinearities in the size-response curve?

A
  1. Rise to the peak (standard compression): controlled by the d-parameter.
  2. Decline after the peak (oversaturation): as stimulus size increases further, the response decreases; controlled by the size-ratio ϕ-parameter.
  • Only the DN model can describe both nonlinearities present in size-response curves (see the sketch below).
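A sketch of how ϕ produces oversaturation (1D toy model; all parameters are illustrative). The activation and normalization pools are Gaussians of width σ and ϕ·σ. As a centered stimulus grows, the small activation pool fills up first; further growth mainly drives the larger normalization pool, so the response peaks and then declines.

```python
import numpy as np

def size_response(stim_radius, sigma=1.0, phi=2.0, b=0.1, d=0.3):
    x = np.linspace(-10, 10, 2001)   # grid spacing dx = 0.01
    stim = np.abs(x) <= stim_radius  # centered stimulus of a given size
    act  = np.sum(stim * np.exp(-x**2 / (2 * sigma**2))) * 0.01          # activation pool
    norm = np.sum(stim * np.exp(-x**2 / (2 * (phi * sigma)**2))) * 0.01  # normalization pool
    return (act + b) / (norm + d) - b / d

for r in [0.2, 0.5, 1.0, 2.0, 4.0, 8.0]:
    print(f"size {r} -> response {size_response(r):.2f}")
# Response rises, peaks around size 0.5, then declines: oversaturation.
# Rerunning with a smaller ratio (e.g. phi=1.2) weakens the decline.
```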
48
Q

What does the size ratio ϕ represent?

A
  • The ratio of the size (σ) of the normalization pool to the size of the activation pool.
  • It indexes the strength of spatial oversaturation in visual cortex.
  • Lower ϕ means less oversaturation.
49
Q

How does ϕ change from lower to higher cortical regions?

A

ϕ decreases, so there is less oversaturation.

50
Q

Which response properties are described by the DN pRF model?

A

The DN pRF model describes the different responses to visual stimuli observed throughout the human cortex.

51
Q

What underlies these response properties?

A

The activation/normalization constants and the pRF size ratio underlie the response properties (suppression, compression, oversaturation).