Multi-sensory integration Flashcards

1
Q

Give an example of a function of multisensory integration (cat, dog example)

A

Detecting approaching danger. For a cat, the sight of a dog alone may elicit only one action potential in a multisensory neuron, and the sound of a dog alone may likewise elicit only one. Presented together, however, they may produce a strong peak response of many action potentials and elicit the appropriate behaviour.

2
Q

Give other generalised functions of multi-sensory integration

A

Perceptual binding: combining different percepts into a single representation of a concept. According to certain theories, multisensory integration may also play a role in consciousness, though this is more debated.

3
Q

Give an example of one early brain area which integrates sensory information

A

The superior colliculus (the tectum in other vertebrates) integrates sensory modalities quite early. It receives visual input directly from the eyes via a route that bypasses the LGN, making it one of the earliest stages of visual processing.

4
Q

How is SC different to many of the other areas associated with integration?

A

The other regions associated with integration typically lie in evolutionarily more recent (cortical) areas. This could suggest that associative processing was once confined to more ancient structures such as the SC and has since shifted largely to these newer areas. Consistent with this, the tectum of smaller vertebrates takes up a much larger proportion of the brain than it does in primates such as humans.

5
Q

Describe some cortical regions for multi-sensory integration (5)

A

PRR (parietal reach region; MIP): visual/auditory/tactile information
LIP (lateral intraparietal area): visual/auditory
STS (superior temporal sulcus): vocalisation/audiovisual congruence
VIP (ventral intraparietal area): visual/auditory/tactile/vestibular
VLPFC (ventrolateral prefrontal cortex): audiovisual congruence/vocalisation

Look at docs for map

6
Q

What three senses are integration studies most focused on?

A

Visual, auditory and tactile; gustatory and olfactory integration of course also occur but are less studied.

7
Q

Briefly describe the recent history of visual/ auditory integration history

A

It was known that multisensory integration occurs among higher-order areas. Some evidence was found for direct connections between the primary areas, although this remains debated. Activity was also found in visual cortex following sound, but this initially appeared to reflect a behavioural reaction to the sound (e.g. changes in vision following movement) rather than integration itself. Since then, however, more evidence has been found for integration of auditory information that cannot be explained by such reactions. Moreover, axonal tracing studies indicate that numerous direct connections exist between primary auditory and visual cortex, suggesting that auditory-visual cross-talk is present before the associative stage.

8
Q

Apart from the general understanding of how we build our percept and possible roles it plays in consciousness, why else is multi-sensory integration research important?

A

Important for understanding and treating brain damage and PTSD, for restoring lost functions, for plasticity research, etc., given the role our senses play in these conditions and phenomena.

9
Q

How is the relationship between primary areas and association areas different in primates and rodents? What implications does this have?

A

Primary areas take up most of the mouse cortex, whereas in humans they are more peripheral: association areas have expanded greatly in our brains. The underlying mechanisms are likely conserved, but we cannot make claims about individual association areas based on animal models.

10
Q

How have certain ‘streams’ been quite conserved across rodents and primates?

A

There is still quite a clear dorsal and ventral stream in rodents.

11
Q

What is the spatial relationship between unimodal and multisensory neurons in rodents?

A

There are higher concentrations of multisensory neurons around the borders of the primary areas, with more unimodal neurons inside them. (In the notes, pink areas = the mouse dorsal stream; blue = the mouse ventral stream.)

12
Q

What are the markers of multisensory neurons?

A

Multisensory enhancement: two different sensory stimuli presented together elicit a stronger response than the most effective individual stimulus. This is the typical marker.

Response depression can also be seen: the two sensory stimuli together elicit weaker activation than that evoked by the most effective of the two stimuli on its own. This is not often observed.
(see docs)

13
Q

How is the effect of response enhancement further divided into two effects

A

The activations elicited by the auditory (A) and visual (V) stimuli are summed (A + V). If the response to both stimuli together is higher than the strongest individual response but not higher than this summation, the effect is described as sub-additive. If the increase in activation is higher than the summation, it is described as super-additive.

14
Q

Assignment

What is meant by the double flash illusion and what does it demonstrate?

A

A brief sequence of two sounds played during a single visual flash leads to the perception of two flashes, demonstrating that audition and vision are strongly perceptually bound.

15
Q

Are the auditory to visual or visual to auditory connections stronger?

A

Recent studies indicate that auditory-to-visual connections are much stronger than their reciprocals, and that they provide inputs to visual cortex that can modulate visually driven activity.

16
Q

Mouse auditory cortex encodes a wide variety of acoustic features, what are the most well studied?

A

Spectral content and temporal features such as modulations of frequency or intensity are the most well-studied; intensity variations occurring at sound onsets and offsets are particularly salient auditory features.

17
Q

How might you measure how auditory information is mapped onto the visual cortex in mice?

A

The question of how sound frequency information would map onto visual cortex is a difficult one, because of the lack of perceptual and ethological data on the particular frequency cues that could potentially be associated with particular visual stimuli in mice.

In contrast, temporal coincidence is known to be used for perceptually assigning auditory and visual stimuli to the same object and is implicated in the double flash and ventriloquist illusions. Detection of temporal coincidence involves determining when sounds begin and end, and therefore might implicate neurons that encode particular intensity envelope features such as onsets and offsets.

Also, covariations of the size of a visual input and sound intensity envelope are important for binding looming and
receding auditory-visual stimuli. This suggests that there could be preferential cross-talk between some intensity envelope features and visual information.

18
Q

Describe the kind of features different neurons in the auditory cortices respond to

A

Envelope features such as onsets, offsets and sustained temporal dynamics are encoded in separate cells with further selectivity for different sound amplitudes. Some neurons respond only to high amplitude, ‘loud’ sound onsets, whereas others only respond to low amplitude, ‘quiet’ onsets, and neurons responding to offsets and sustained phases are also tuned to precise intensity ranges. Some neurons also encode combinations of these features.

19
Q

Back to lecture

How can we mathematically compare the activations elicited by the two sensory modalities? (2)

A

Interactive index (or multisensory enhancement index):
ii = ( ( AV − max(A, V) ) / max(A, V) ) × 100

Mean statistical contrast:
msc = Σi [ AVi − (Ai + Vi) ] / n

When the combined response differs from the individual components, there is an effect.
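These two indices can be sketched directly in code. A minimal Python illustration with made-up spike counts (the function names and numbers are assumptions for illustration, not from the lecture):

```python
import statistics

def interactive_index(av, a, v):
    """Multisensory enhancement index: percentage change of the combined
    response (AV) relative to the best unisensory response, max(A, V)."""
    best = max(a, v)
    return (av - best) / best * 100

def mean_statistical_contrast(av_trials, a_trials, v_trials):
    """Mean over trials of AV_i - (A_i + V_i); positive values indicate a
    super-additive effect, negative values a sub-additive one."""
    return statistics.mean(av - (a + v)
                           for av, a, v in zip(av_trials, a_trials, v_trials))

# Weak unisensory responses, strongly enhanced combined response:
print(interactive_index(av=12, a=3, v=4))                   # 200.0 (enhancement)
print(mean_statistical_contrast([10, 14], [3, 3], [4, 4]))  # 5 (super-additive)
```

A negative interactive index would correspond to response depression (combined response below the best unisensory response).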

20
Q

What relationship is observed between unisensory stimuli and multisensory integration?

A

Weak unisensory stimuli => large multisensory integration

Strong unisensory stimuli => small multisensory integration

E.g. consider unisensory neurons providing input to an integration neuron. When the dog is far away and there is only weak auditory and olfactory information, the multisensory neuron may show a four-fold increase over the small unisensory activations. When the dog is right there, the unisensory activations are already strong; there is still some enhancement, but it might only be around two-fold.

21
Q

What name is given to this relationship between stimuli strength and multisensory activation?

A

Inverse effectiveness: enhancement is higher for weaker stimuli.

22
Q

What is observed from the receptive fields of multisensory neurons?

A

Multisensory neurons have similar receptive fields in all modalities; a sensory cue falling in the overlapping portion of the receptive fields usually leads to stronger multisensory enhancement.

23
Q

Why could this overlapping of fields be important in multisensory integration?

A

Important in orientating your body towards the given stimulus.

24
Q

Is multisensory integration different from unisensory integration?

A

Yes. In multisensory neurons, weakly effective visual and auditory stimuli (represented by the electronic traces) are integrated to produce multisensory enhancement. In this case, the enhanced combined response exceeded the sum of the component responses (see docs) and was therefore superadditive. It is important to note that, in principle and in practice, enhanced responses can be superadditive, additive or subadditive.

In unisensory visual neurons with overlapping receptive fields, by contrast, pairing the visual stimulus with a second visual stimulus yields a subadditive interaction that fails to meet the criterion for enhancement, owing to suppressive receptive-field surrounds. These representative samples illustrate the characteristic differences between the neural computations that underlie multisensory and unisensory integration.

25
Q

Describe the methods of a study attempting to measure multisensory computations at the level of neuronal populations

A

The subject reports whether a stimulus was located to the left or right of a reference location (marked ‘0’; see docs). The stimulus can be a visual cue (represented by the red circle) and/or an auditory cue (represented by the blue circle) presented at one of several possible locations in front of the subject. The two cues are presented either at the same location or separated by some distance (the cue-conflict), and the reliability of one or both cues is often manipulated experimentally (in the docs it is denoted by the width of the circles, where wider is less reliable).

The researchers manipulated how easily each stimulus could be localised: the wider the circle, the harder it is to say where the cue came from. Y-axis: the proportion of times subjects report ‘right’, i.e. the psychometric response. People typically weight visual information more; if it is made less reliable, they pay more attention to the auditory information, and manipulating the reliability changes their reports accordingly. When the cues are equally reliable, each can be tested individually or together; when both are presented together the psychometric curve becomes steeper, so a small change in position produces a quicker change in the perceived location.

26
Q

Describe the results of the study on multi-sensory computations at a population level if the reliability is heterogeneous

A

The brain performs a weighted integration of the different sensory modalities. Consider cue-conflict trials in which the visual cue is more reliable and displaced to the right, while the auditory cue is less reliable and displaced to the left. The pair of stimuli is jointly moved to the left or right on different trials, generating a sigmoidal choice curve known as the psychometric function (purple line, shown on the right-hand graph in the notes; X-axis: actual stimulus position, Y-axis: proportion of rightward choices), which is plotted relative to the midpoint between the two stimuli.

If subjects weight the cues according to their reliability, they will make more rightward choices for a given position of the paired stimuli (relative to non-conflict conditions), and the psychometric curve will be shifted to the left of centre. The stimulus position at which the curve reaches 50% rightward choices (that is, the point of subjective equality (PSE), indicated by dashed lines) maps onto a particular set of perceptual weights (w.aud and w.vis), which in this case would have the relationship w.aud < w.vis, as the visual cue is more reliable. In trials with the same cue-conflict but reversed reliability (the auditory cue more reliable than the visual cue), the subject should make more leftward choices, shifting the curve to the right (w.aud > w.vis).
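A minimal sketch of such a sigmoidal psychometric function, modelled as a cumulative Gaussian; the parameter names (`pse`, `sigma`) and values are illustrative assumptions, not the study's notation:

```python
import math

def p_rightward(x, pse, sigma):
    """Probability of reporting 'right' for a stimulus at position x,
    given the point of subjective equality (pse) and slope parameter
    sigma (smaller sigma = steeper curve = more reliable percept)."""
    return 0.5 * (1 + math.erf((x - pse) / (sigma * math.sqrt(2))))

# At the PSE the subject is at chance (50% rightward choices):
print(p_rightward(-1.0, pse=-1.0, sigma=2.0))  # 0.5

# A leftward-shifted PSE (pse < 0) means more rightward choices at the
# physical midpoint x = 0, the signature of a reliable rightward-displaced cue:
print(p_rightward(0.0, pse=-1.0, sigma=2.0) > 0.5)  # True
```

A steeper combined-cue curve corresponds to a smaller `sigma`, which is the increased-reliability prediction tested in the next card.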

27
Q

Describe the results of the study on multi-sensory computations at a population level in regards to combining multisensory input

A

In addition to measuring shifts of the psychometric function, performance with combined visual and auditory stimuli (purple curve) can be compared with the single-cue conditions (red and blue curves), testing the prediction of increased reliability. The combined curve becomes steeper, i.e. more informative: a small shift of the stimulus around the midpoint makes subjects much more likely to switch between ‘left’ and ‘right’. The reliability of the combined stimulus is thus improved, making it easier to detect where the stimulus is.

28
Q

How does the brain estimate the position (or any other feature) of a sensory stimulus? (4)

A

Bayesian (optimal) cue integration:

1. Each stimulus s leads to a characteristic neuronal response r, with a certain probability distribution p(r | s).
2. The brain can then estimate which stimulus was observed by applying Bayes’ theorem:
   p(s | r) = p(r | s) p(s) / p(r)
3. A stimulus s will yield different responses (r1, r2) in the different sensory modalities.
4. It has been shown that multisensory cortical areas can perform an optimal integration of the different responses and improve the estimate of stimulus s.
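Under the usual Gaussian-likelihood assumption, this optimal integration reduces to reliability-weighted averaging. A minimal sketch; the function name and the numbers are illustrative assumptions:

```python
def combine_cues(s_vis, sigma_vis, s_aud, sigma_aud):
    """Bayesian-optimal combination of two Gaussian cues: each cue is
    weighted by its reliability (inverse variance), and the combined
    uncertainty is smaller than either unisensory sigma."""
    r_vis, r_aud = 1 / sigma_vis**2, 1 / sigma_aud**2   # reliabilities
    w_vis = r_vis / (r_vis + r_aud)                      # perceptual weight
    s_hat = w_vis * s_vis + (1 - w_vis) * s_aud          # weighted estimate
    sigma_comb = (1 / (r_vis + r_aud)) ** 0.5            # combined uncertainty
    return s_hat, sigma_comb

# Reliable visual cue (sigma=1) at +2, unreliable auditory cue (sigma=2) at -2:
s_hat, sigma_comb = combine_cues(2.0, 1.0, -2.0, 2.0)
# The estimate (1.2) is pulled towards the reliable visual cue, and the
# combined uncertainty (~0.89) is below both unisensory sigmas.
```

The weights w_vis and w_aud here play the same role as the perceptual weights recovered from the PSE shifts in the cue-conflict experiments.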
29
Q

Give an example of bayesian cue integration at the level of a unisensory visual stimulus

A

Bayesian inference is used to model how the brain encodes bar orientation (see docs). Consider the activation of cells with different preferred orientations: the x-axis represents all possible orientations of a moving bar, and the y-axis is the firing rate of the neurons (i.e. the y-position of a dot is the response of a neuron during presentation of a given visual stimulus). Neurons whose preferred orientation is closer to the stimulus obviously fire more. The brain has to do the opposite job: infer the probability of a stimulus given the neuronal activation. Given a certain stimulus s, the probability of the responses r can be modelled as a Gaussian distribution, p(r | s). The information from the different neurons updates the beliefs, narrowing the distribution and increasing reliability.

Therefore the weighted linear summation of unisensory responses can lead to optimal multisensory integration, i.e. the smallest possible detection uncertainty.

30
Q

Describe the process of obtaining a psychophysical estimate given inputs from two modalities, r1 and r2, in primary sensory neurons according to Fetsch et al., 2013

A

The input to the multisensory neurons from the primary sensory neurons is modelled as
d1·r1 + d2·r2,
where the di are synaptic weights.

The output of a single multisensory neuron is then modelled as
Rc = A1·R1 + A2·R2,
where the Ai are neural output weights.

The psychophysical estimate is then modelled as
Sc = w1·S1 + w2·S2,
where the wi are perceptual weights.

This last step combines the individual stimulus estimates into a multisensory estimate that is closer to the actual stimulus position and less noisy (smaller standard deviation) than either unisensory estimate.
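A small simulation illustrating this last claim, with made-up stimulus positions and noise levels (all numbers are assumptions): the weighted combination Sc = w1·S1 + w2·S2 has a smaller standard deviation than either unisensory estimate.

```python
import random

random.seed(0)
true_pos = 5.0
sigma1, sigma2 = 1.0, 2.0  # modality 1 is the more reliable one

# Reliability-based perceptual weights w1, w2 (w1 + w2 = 1):
w1 = (1 / sigma1**2) / (1 / sigma1**2 + 1 / sigma2**2)
w2 = 1 - w1

# Noisy unisensory estimates S1, S2 over many trials, and their combination Sc:
s1 = [random.gauss(true_pos, sigma1) for _ in range(10000)]
s2 = [random.gauss(true_pos, sigma2) for _ in range(10000)]
sc = [w1 * a + w2 * b for a, b in zip(s1, s2)]

def sd(xs):
    """Standard deviation of a list of samples."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# The multisensory estimate is the least noisy:
print(sd(sc) < sd(s1) < sd(s2))  # True
```

With these weights the theoretical standard deviation of Sc is sqrt(0.8) ≈ 0.89, below both unisensory sigmas, matching the optimal-integration prediction.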

31
Q
Homework questions

What was the main research question of the article?

A

There is quite a bit of evidence for direct ‘communication’ between the primary auditory and visual cortices, with stronger connections running from the auditory cortex towards the visual cortex than in the reverse direction. However, it is not yet clear which auditory features are communicated, or which are most informative for this integration. The authors therefore set out to determine which auditory features are channelled from the auditory cortex to the primary visual cortex to interact with the processing of visual information, and whether some auditory features are more relevant than others when integrating the two modalities.

32
Q

Which type of auditory information is primarily sent from AC to V1? Which technique(s) do the authors use to investigate this?

A

The authors use intersectional genetics combined with two-photon calcium imaging. To restrict imaging to the AC neurons that project to V1, they injected a retrogradely transported canine adenovirus expressing Cre into V1, where the axons of these neurons terminate, and a Cre-dependent adeno-associated virus (which travels in an anterograde manner) expressing GCaMP6s, a very sensitive protein calcium sensor, into the auditory cortex, where the cell bodies lie. Only neurons infected by both viruses therefore expressed GCaMP6s, which made it possible to image solely the neurons projecting from AC to V1. These were contrasted with two control conditions: mice in which GCaMP6s was expressed regardless of cell type in layer 5 (the layer in which the V1-projecting neurons were observed) and in layer 2/3 (where no projections were observed).

Calcium imaging was then carried out at the single-cell level to record calcium responses to auditory stimuli, and neurons were grouped by their (functional) response profiles via hierarchical clustering. Based on this clustering, the researchers assigned 9 response types, varying along a number of attributes. They found that neurons encoding the onset of loud auditory signals dominate the projection to V1 (48% of projecting neurons), an over-representation relative to layer 5 neurons in general (36%) and even more so relative to layer 2/3 (21%). Neurons encoding other auditory features also project to the primary visual cortex, but to a lesser extent.
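As an illustration of the clustering step only (not the paper's actual pipeline, which works on real GCaMP6s traces and yields 9 response types), a toy agglomerative clustering of response profiles might look like this; the profiles and parameters are invented:

```python
def dist(a, b):
    """Euclidean distance between two response profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def agglomerate(profiles, n_clusters):
    """Simple agglomerative (hierarchical) clustering with single linkage:
    repeatedly merge the two clusters whose closest members are nearest."""
    clusters = [[i] for i in range(len(profiles))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(profiles[a], profiles[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

# Two 'onset-like' and two 'offset-like' toy response profiles over time:
profiles = [[1, 0, 0], [0.9, 0.1, 0], [0, 0, 1], [0, 0.1, 0.9]]
print(agglomerate(profiles, 2))  # [[0, 1], [2, 3]]
```

Cutting the merge hierarchy at a chosen number of clusters is what yields discrete response types from continuous response profiles.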

33
Q

What is the circuit-level mechanism via which AC modulation of V1 is context-dependent?

A

A supragranular subpopulation of GABAergic interneurons in V1 receives input from the auditory cortex together with negative (inhibitory) visual input. [The majority of L1 neurons in the region where the AC-to-V1 projections are concentrated do not show increased responses in the dark; however, a subpopulation (~5%) responded significantly more to sounds in the dark than in the light.]

When these interneurons receive both modalities (auditory stimuli in the light condition), the inhibitory effect of the visual input means they produce little output and have little effect. When they receive AC input without visual input (auditory stimuli in the dark condition), this modulation is absent and they inhibit the excitatory neurons. These excitatory V1 neurons are activated by visual information (and to a lesser extent by auditory information) and are modulated by the interneurons (and therefore by the presence of sound without light); they are consequently most activated when there is both light and an auditory stimulus. [AC modulation of V1 was inhibitory in supragranular neurons when the mice were in the dark and excitatory when the mice were in (dim) light. This effect almost disappeared when area AC was inhibited via GABA injections, and significantly decreased when the projections were deactivated chemogenetically.]

These neurons likely played a part in a follow-up experiment, in which 11% of them showed a supra-additive response when loud-onset, down-ramping auditory stimuli were combined with visual information.