Hearing Flashcards

1
Q

What is sound

A

Pressure waves in the air

It is a sequence of pressure waves that propagates through the air, which behaves as an elastic medium.

2
Q

What are the properties of air

A

Air is ‘elastic’: if you try to compress it, it pushes back.
Air has both elasticity and mass, so we can imagine air as being made up of little ‘lumps of air’, where each lump is connected to the next by an elastic spring.

3
Q

How is the spectrum of the sound determined

A

Any complex sound wave can be produced by adding together sinusoidal sound waves (pure tones)
The amplitudes and frequencies of these components determine the spectrum of the sound.
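This decomposition can be demonstrated numerically. The sketch below (with illustrative frequencies and amplitudes, not taken from the cards) builds a complex tone by summing three pure tones, then recovers exactly those components' frequencies and amplitudes from the Fourier spectrum.

```python
import numpy as np

fs = 8000                          # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)      # 1 s of signal

# Hypothetical components: frequency (Hz) -> amplitude
components = {200: 1.0, 400: 0.5, 600: 0.25}
wave = sum(a * np.sin(2 * np.pi * f * t) for f, a in components.items())

# Amplitude spectrum via the real FFT (scaled so peaks equal component amplitudes)
spectrum = np.abs(np.fft.rfft(wave)) * 2 / len(wave)
freqs = np.fft.rfftfreq(len(wave), 1 / fs)

# The spectrum peaks exactly at the component frequencies
peaks = freqs[spectrum > 0.1]
print(peaks)   # [200. 400. 600.]
```

Because the signal lasts exactly 1 s, the FFT bins fall exactly on the component frequencies and there is no spectral leakage.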

4
Q

What is the head-related transfer function (HRTF)

A

The transformation of free-field sound by the external ears (pinnae) into the sound at the ear drum.
The HRTF adds localisation information to the signal (this is why the ears are shaped as they are).

5
Q

What happens in the middle ear

A

It contains a chain of three interconnected bones (the ossicles): the malleus, incus and stapes.
These amplify the motion of the ear drum into pressure waves transmitted via the oval window into the fluid-filled cochlea (in the inner ear)

6
Q

What happens in the inner ear (cochlea)

A

The basilar membrane inside the cochlea vibrates according to the frequency structure of the sound.
Hair cells attached to the basilar membrane then transduce this mechanical signal into action potentials in the auditory nerve.

7
Q

What is the tonotopic map in the cochlea

A

It is a map of sound frequency that is created by the basilar membrane through its vibrations.
This is achieved through its structure: narrow and stiff at the base, wide and loose at the apex.
The apex vibrates for low frequencies while the base vibrates for high frequencies.
The cochlea separates out the different frequency components of a complex sound.
Thus, different auditory nerve fibres represent different frequencies = place code of frequency
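The place code can be made quantitative with Greenwood's (1990) frequency-position function, a standard empirical fit for the human cochlea (not part of the cards); here x is the fractional distance along the basilar membrane from the apex.

```python
# Greenwood's frequency-position function with standard human parameters:
# f(x) = A * (10**(a*x) - k), x = fractional distance from the apex (0..1)
def greenwood_freq(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at relative cochlear position x."""
    return A * (10 ** (a * x) - k)

print(round(greenwood_freq(0.0)))   # apex: ~20 Hz (low frequencies)
print(round(greenwood_freq(1.0)))   # base: ~20,700 Hz (high frequencies)
```

The function spans roughly the human hearing range, consistent with the card: low frequencies at the apex, high frequencies at the base.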

8
Q

What is phase locking

A

When auditory nerve fibres produce spikes that are phase-locked (synchronised) to the vibration of the basilar membrane.
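A toy simulation can illustrate what phase locking means. In the sketch below (an idealised threshold-crossing model, not a biophysical one), a 'spike' is emitted whenever a 100 Hz waveform crosses a threshold in the upward direction, so every spike lands at the same phase of the cycle.

```python
import numpy as np

fs = 10000                               # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)            # 100 ms of signal
f0 = 100.0                               # stimulus frequency (Hz)
wave = np.sin(2 * np.pi * f0 * t)        # stand-in for basilar-membrane vibration

# Idealised fibre: fire at each upward crossing of a fixed threshold
threshold = 0.5
crossings = np.where((wave[:-1] < threshold) & (wave[1:] >= threshold))[0]
spike_times = t[crossings + 1]

# All spikes fall at (nearly) the same phase of the cycle -> phase locking
phases = (spike_times * f0) % 1.0        # phase in cycles
print(phases)
```

One spike per cycle at a fixed phase is the extreme case; real fibres fire probabilistically, but preferentially near one phase.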

9
Q

What are the two ways that frequency components of sound are represented

A

Place of activity - tonotopic mapping or place coding
Timing of activity - temporal coding, in which firing is synchronised (phase-locked) to the stimulus waveform, so the rate of synchronised firing follows the stimulus frequency up to the phase-locking limit of a few kHz.

10
Q

What is the main task of subcortical processing

A

Sound source localisation utilising cross-ear differences in:
Sound wave amplitude
Sound wave arrival time
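The arrival-time cue can be quantified with Woodworth's classic spherical-head approximation, ITD = (r/c)(θ + sin θ); the head radius and speed of sound below are typical textbook values, not taken from the cards.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Interaural time difference via Woodworth's spherical-head model."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source straight ahead gives zero ITD; a source directly to one side
# (90 degrees) gives the maximum ITD, roughly 0.66 ms.
print(f"{itd_seconds(90) * 1e6:.0f} us")
```

Sub-millisecond ITDs of this size are what the subcortical localisation circuits must resolve.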

11
Q

What are the properties of the optic nerve

A

The retina is 2-dimensional
Provides shape information
Provides colour information
The task for the visual system to separate out visual objects is relatively easy

12
Q

What are the properties of the auditory nerve

A

The basilar membrane provides information along one dimension only:
the frequency content of the incoming sound wave.
This sound wave is the sum of all sound signals from the surroundings.
The task for the auditory system to separate out auditory objects is therefore huge.
The subcortical pathway performs (at least) source localisation as part of the solution.

13
Q

How is the auditory cortex organised

A

Core (C): primary fields (Heschl's gyrus, HG) receiving input from the thalamus
Belt (B): secondary fields surrounding the core
Parabelt (PB): secondary fields next to the belt (superior temporal gyrus, STG)
Hierarchical organisation: activity progresses from C to B to PB
Parallel organisation: activity propagates along multiple C-B-PB paths

14
Q

What happens in the cortical visual areas

A

Processing is distributed and specialised: each area represents a specific type of information: orientation, colour, motion, etc.
‘What’ and ‘where’ processing is achieved through two processing streams.

15
Q

What do the auditory cortical fields do

A

Core fields respond to pure tones and complex sounds.
Belt and parabelt fields respond to complex sounds and noise, and less vigorously to pure tones.
No tonotopic organisation in parabelt.
However, there is little evidence of the kind of specialisation seen in visual cortex.

16
Q

Which features do cells in the auditory cortex show selectivity for

A

Intensity
Bandwidth
Sound source location

17
Q

What are the ‘what’ and ‘where’ processing streams

A

Anterior ‘what’ stream: specialisation in sound source identity (e.g. semantic meaning)
Posterior ‘where’ stream: specialisation in sound source location.
But this is a debated issue: ‘what’ and ‘where’ information is mixed

18
Q

What are the properties of spectrotemporal receptive fields

A

Both sharp and wide tuning to frequency
Tuning to sounds with complicated spectral structure
Tuning to sounds whose spectral structure changes

19
Q

How is the auditory cortex modulated

A

By top-down effects, which can be seen in discrimination tasks:
• Attention modulates STRFs (Fritz et al., 2003)
• Working memory shows up as sustained activity in auditory cortex (Huang et al., 2016)
• Perceptual learning changes mappings in auditory cortex (Polley et al., 2006)

20
Q

Describe perceptual learning in the auditory cortex

A

Polley et al. (2006): rats were trained to discriminate stimuli according to either pitch or loudness.
Pitch discrimination training leads to expansion of frequency representation
Loudness discrimination training leads to expanded loudness representations
Thus, representations are modulated by top-down effects of learning

21
Q

Describe surface recordings from cortex

A

Mesgarani & Chang (2012, Nature) presented sentences to human epileptic patients.
Surface recordings were taken from auditory cortex.
Method: reconstruction of the stimulus spectrogram from neural population responses, plus a machine-learning classifier to decode the stimulus from the neural response.
In the mixed condition (two sentences presented simultaneously), auditory cortex responds as if the attended sentence were being presented alone.

22
Q

What are some of the subcomponents of speech processing

A
  • Basic auditory processing of complex sound
  • Distinguishing speech from non-speech
  • Identifying speech sounds (e.g. vowels)
  • Semantics at word, sentence, and discourse level
  • Recognising prosodic aspects of speech
  • Recognising individual voices, accents, & style of speech
  • Recognising emotions
  • Merging auditory and visual information
23
Q

What is the traditional model of speech processing

A

Speech production occurs in Broca’s area in the inferior frontal gyrus (IFG).
Speech comprehension occurs in Wernicke’s area in the posterior superior temporal gyrus (pSTG).
But this model no longer holds.

24
Q

Speech intelligibility

A

Scott et al. (2000) did a PET study:
They contrasted responses to intelligible vs. unintelligible speech stimuli.
Larger responses to intelligible speech were found in left anterior STG (so not Wernicke’s area).

25
Q

Does context support speech perception

A

Yes.
Obleser et al. (2007) did an fMRI study looking at the processing of degraded speech (~speech in noise).
Semantic predictability was found to enhance intelligibility.
This enhancement in intelligibility is associated with activation of a widely distributed, left-lateralised system.
This system is associated with semantic processing.

26
Q

What is the problem of studying comprehension

A

Contrasts are needed to study the processing of intelligible speech.
Unintelligible stimuli tend to differ from intelligible stimuli in terms of acoustic complexity or structure.
We can’t be sure that differences in brain activation reflect intelligibility only.

27
Q

How is intelligibility reflected in increased activations

A

Hakonen et al. (2017) did an fMRI study and found:
Intelligibility leads to increased activity in the Anterior Cingulate Cortex (ACC), Frontal Pole (FP), and right Frontal Operculum.
These areas are associated with the retrieval mode of episodic memory.
Episodic memory is needed in the comprehension of speech in noise.

28
Q

How is intelligibility reflected in decreased activations

A

Intelligible speech leads to decreased activation in auditory cortex.
This might be due to predictive coding:
Top-down signals dampen predictable bottom-up signals in sensory cortex.

29
Q

What is visual binding

A

The process of integration in which all information is put together to generate a unified percept. This is needed because the visual cortex has many retinotopic maps, each representing a specific feature: colour, orientation, motion, form.

30
Q

How is auditory processing serial

A

Because the temporal ordering of sound events is crucial for speech to make sense.

31
Q

What are the challenges of visual perception

A

The information arriving in the cortex comes in short, discontinuous bursts of up to 200 ms, with no information coming in between the bursts.

32
Q

What are the challenges of auditory perception

A

Information arrives in a continuous stream.
Information from different sound sources is mixed, but the sources are perceived as unmixed.
In this kind of temporal binding, the time order of incoming information is essential.

33
Q

How do you test for memory in sensory systems

A

Tested by changing past stimulation: if this changes the response to the current stimulus, the effect is called context sensitivity.

34
Q

What is adaptation/ forward masking

A

Repeating the same stimulus within hundreds of milliseconds leads to response attenuation in primary auditory cortex

35
Q

What is forward facilitation

A

Playing one stimulus and then another can lead to increased responses to the second (probe) stimulus.

36
Q

What are event-related responses

A

Brain responses time-locked to stimulus presentation and averaged over many stimulus presentations.
They reflect sensory processing.
They can be measured in EEG: event-related potential (ERP)
Or they can be measured in MEG: event-related field (ERF)
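The averaging step can be sketched as follows: a fixed stimulus-locked waveform buried in large trial-to-trial noise emerges cleanly once enough epochs are averaged. All waveforms, noise levels, and latencies below are synthetic illustrations, not real data.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                      # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)                 # 500 ms epochs

# Hypothetical evoked response: a Gaussian bump peaking at 100 ms
erp = 2.0 * np.exp(-((t - 0.1) ** 2) / 0.001)

# Each trial = identical evoked response + large independent background noise
n_trials = 2000
trials = erp + rng.normal(0, 5.0, size=(n_trials, len(t)))

# Averaging over trials suppresses the noise but keeps the time-locked response
average = trials.mean(axis=0)
peak_latency = t[np.argmax(average)]
print(peak_latency)   # close to 0.1 s
```

A single trial here has a signal-to-noise ratio well below 1; the average recovers the latency and amplitude of the evoked response because the noise shrinks by the square root of the number of trials.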

37
Q

What is the oddball paradigm

A

When a rare ‘deviant’ stimulus is randomly interspersed in a sequence of repeated ‘standard’ stimuli, to test whether the brain detects the change.

38
Q

What stimulus dimensions can the standard and deviant differ along

A
• Frequency
• Intensity
• Sound duration
• Sound source location
• Speech sound identity
• Tone sequences
And more
39
Q

What are some psychophysical effects of deviants

A
Infrequent stimuli cause surprise. 
Orienting response/orienting reflex:
• Skin conductance response
• Heart rate changes
• Eye movements
• Halting of ongoing activity
• Involuntary reorienting of attention
40
Q

What is the Mismatch Negativity (MMN)

A

The difference between the ERP elicited by the deviant and that elicited by the standard.
It is elicited by any perceptible difference between the standard and deviant.
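The deviant-minus-standard subtraction can be sketched with synthetic ERPs (shapes, amplitudes, and latencies below are illustrative only, not real data):

```python
import numpy as np

fs = 250
t = np.arange(0, 0.4, 1 / fs)                 # 400 ms post-stimulus

# Hypothetical standard ERP: an obligatory response peaking at 100 ms
standard_erp = np.exp(-((t - 0.1) ** 2) / 0.001)

# The deviant evokes the same response plus an extra negativity around 150 ms
deviant_erp = standard_erp - 1.5 * np.exp(-((t - 0.15) ** 2) / 0.002)

# Subtraction cancels what is shared and isolates the deviance response (MMN)
mmn = deviant_erp - standard_erp
mmn_latency = t[np.argmin(mmn)]
print(mmn_latency)   # close to 0.15 s
```

Everything common to standard and deviant cancels in the subtraction, which is why the difference wave isolates the change-detection response.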

41
Q

What does MMN reflect

A

Adaptation to standards, response recovery to deviants. Clear index of context sensitivity, memory, and temporal binding.
Clear index of change detection, which might subserve the orienting response (OR).
The auditory cortex keeps track of stimulus probabilities.
Linked to auditory sensory memory.
If we understood how MMN is generated, we might learn something about temporal binding more generally.