Quiz 1 Review (Part 2) Flashcards

1
Q

3 parameters allowing the brain to recognize sound?

A

pitch, intensity, and time

2
Q

what is pitch

A

subjective or perceptual attribute that corresponds closely to the physical attribute of frequency

3
Q

A change in frequency is heard as a change in

A

pitch

4
Q

can we measure pitch

A

not directly
measured by matching the pitch in question

5
Q

what is pitch related to

A

the physical repetition rate of the waveform of sound
Increasing repetition rate = sensation of increasing pitch

6
Q

How do we know if something is high or low pitch

A

More vibrations in a given time = high pitch
Fewer vibrations in a given time = low pitch

7
Q

what the audiometer is testing

A

frequency

8
Q

perception we hear

A

pitch

9
Q

what is frequency discrimination

A

ability to detect changes in frequency
Normal-hearing humans can differentiate a difference as small as about 3 Hz
Discriminate between 2 sinusoids presented successively with a brief interval between them
Ex → a 1000 Hz sinusoid can just be differentiated (just noticeable difference, jnd) from a 1003 Hz sinusoid with a silent interval between them
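A minimal sketch of the two-tone discrimination stimulus described above, assuming Python with NumPy; the sample rate, tone durations, and interval length are illustrative assumptions, not values from the course.

```python
import numpy as np

# Two successive sinusoids (1000 Hz and 1003 Hz) separated by a brief
# silent interval, roughly one jnd apart in frequency.
fs = 16000                              # sample rate (Hz), assumed
t = np.arange(0, 0.3, 1 / fs)           # 300 ms tone duration, assumed

tone_a = np.sin(2 * np.pi * 1000 * t)   # reference tone
tone_b = np.sin(2 * np.pi * 1003 * t)   # 3 Hz higher: about one jnd
silence = np.zeros(int(0.2 * fs))       # 200 ms silent interval, assumed

trial = np.concatenate([tone_a, silence, tone_b])
```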

10
Q

what is frequency selectivity

A

ability to resolve a complex sound into its component frequencies
Not the same as discrimination
Complex sounds: speech, music, etc.

The cochlea achieves frequency selectivity through its structure, where different parts respond to different frequencies (higher frequencies at the base, lower frequencies at the apex).
“tuning” of the cochlea.
Example → When listening to a conversation in a noisy environment, frequency selectivity helps us focus on the frequencies of the speaker’s voice while ignoring background noise

11
Q

ability to separate or filter out specific frequencies from complex sounds.

A

selectivity

12
Q

ability to notice small differences between two sound frequencies.

A

discrimination

13
Q

what are the pitch perception theories

A

place theory
temporal/volley theory

14
Q

what is the place theory

A

explains how we perceive different pitches (the highness or lowness of a sound) based on where sound waves stimulate the cochlea

Both discrimination and selectivity are closely connected here

15
Q

what is frequency place mapping

A

explains how specific sound frequencies are linked to precise locations along the cochlea → helps us understand how our brain decodes different pitches based on where the cochlea is activated

16
Q

A high-pitched sound causes maximum vibration at the _____ of the cochlea

A

base

17
Q

A low-pitched sound causes maximum vibration at the

A

apex

18
Q

suggests that our perception of pitch is linked to where in the cochlea the sound waves create the most activity and that specific places correspond to specific pitches.

A

place theory

19
Q

how does the place theory work

A

Specific Regions of the Cochlea → The cochlea is “tonotopically organized”: different parts of it are sensitive to different frequencies (pitches). The base of the cochlea (closest to the outer ear) responds to high-frequency sounds, while the apex (the inner tip) responds to low-frequency sounds.

Pitch Perception → the pitch we hear is determined by the specific location (or “place”) along the cochlea where the sound waves cause the strongest vibrations, the point of maximum displacement in the traveling wave

20
Q

Place Theory suggests that our perception of pitch is linked to where in the cochlea the sound waves create the most activity and that specific places correspond to specific pitches.

A

true

21
Q

what is the temporal/volley theory

A

explains how we perceive pitch, especially at lower frequencies, based on the timing of neural firing rather than the specific location of activation along the cochlea
Auditory neurons phase lock to vibrations of the BM
Pitch assigned to a signal is determined by the timing pattern of neural impulses evoked by a stimulus

When LFs are heard, neurons fire at a particular phase of the waveform so that the neural spikes are at or close to the integer multiples of the period of the pure-tone
Different frequencies produce different patterns of neural spikes across time

22
Q

how do you determine the period (timing) of a tone

A

T (ms) = 1/f × 1000, where f is the frequency in Hz
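A quick check of the formula as a minimal Python sketch (no values beyond the formula itself and the examples on the next two cards):

```python
# T (ms) = 1/f * 1000: period of a pure tone in milliseconds.
def period_ms(frequency_hz: float) -> float:
    return (1.0 / frequency_hz) * 1000.0

print(period_ms(500))   # 2.0 ms (next card)
print(period_ms(550))   # ~1.82 ms, rounded to 1.8 ms (card after that)
```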

23
Q

500 Hz pure tone

A

T = 1/500 = 0.002 s; 0.002 × 1000 = 2 ms

24
Q

550 Hz pure tone

A

T = 1/550 × 1000 ≈ 1.8 ms

25
Q

Information from timing cues breaks down at around

A

5 kHz

26
Q

how is pitch
determined in complex signals

A

There are many different frequencies but only one dominant pitch
The pitch in the example is matched to a 100 Hz tone = the fundamental (f0)

27
Q

Pitch perception of harmonic complexes are explained by either ____ or ______

A

place or timing

28
Q

how is pitch perception explained by place

A

maximum energy at 100 Hz causes excitation at the place corresponding to 100 Hz

29
Q

how is pitch perception explained by timing

A

the time-domain waveform is periodic with a period equal to 1/f0 (T = 1/f)
to find the period: T = 1/f
to find the frequency: f = 1/T

30
Q

other terms for missing fundamental

A

virtual pitch or residue pitch

31
Q

what is the phenomenon of the missing fundamental

A

The first tone heard contains all of the frequencies; the second tone has the fundamental removed but keeps all of the higher harmonics; each tone after that removes the next-lowest harmonic. Although each tone changes, the pitch remains the same.
This happens because the remaining components are harmonics of the complex signal, so they carry the timing consistent with the fundamental even without it being present in the signal.

Even though the fundamental frequency was removed from the signal, the pitch percept stays the same because the brain interprets the repetition pattern (harmonic periodicity) that is still present
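A minimal sketch of the missing-fundamental idea, assuming Python with NumPy and illustrative parameters (a 100 Hz fundamental represented only by its 200-600 Hz harmonics): the 100 Hz component is absent, yet the waveform still repeats with the fundamental's 10 ms period.

```python
import numpy as np

fs = 16000                              # assumed sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)           # 0.5 s time axis
harmonics = [200, 300, 400, 500, 600]   # harmonics of 100 Hz, fundamental removed

complex_tone = sum(np.sin(2 * np.pi * f * t) for f in harmonics)

# The waveform still repeats every 1/100 s = 10 ms, the periodicity the
# brain uses to assign a 100 Hz (virtual/residue) pitch.
period_samples = fs // 100
print(np.allclose(complex_tone[:-period_samples],
                  complex_tone[period_samples:]))   # True
```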

32
Q

what could or could not explain the missing fundamental

A

Cannot be explained by place theory: place theory relies on the BM being excited at the place of the fundamental, and once the fundamental is gone it cannot explain how we still pick up that pitch, so the temporal/volley theory explains it
Can be explained by temporal theory: even though the fundamental is absent, the temporal pattern of neural activity is still related to its period, and that periodicity is still detected

33
Q

what is cochlear HL associated with

A

reduced frequency selectivity (broad auditory filters)

34
Q

When a sound contains multiple tones it is harder to tell them apart when there are a moderate number of them, making it more challenging to understand speech clearly or appreciate music

A

true

35
Q

why do people who have had HL for a while think they are fine and can understand?

A

they have memory, context, etc.
but out of context they will have a hard time

36
Q

what do PTs with cochlear HL rely on

A

they depend more on temporal cues and less on spectral information, due to reduced frequency selectivity from broadened auditory filters

37
Q

Those with cochlear loss have variable results, even with similar audiometric results, due to

A

individual differences in auditory filter size
neural synchrony

38
Q

when do cochlear loss PTs have good pitch discrimination

A

Well preserved neural synchrony

39
Q

when do cochlear loss PTs have poor pitch discrimination

A

When neural synchrony is poorly preserved, regardless of the degree of cochlear loss (broadening of the auditory filters)

40
Q

Pitch is important in order to understand

A

language

41
Q

Pitch perception is important to

A

Distinguish the most important utterances in speech
This is why those with HL have issues understanding unless there is context
Indicate the structure of sentences or phrases, especially for tonal languages (e.g., Mandarin Chinese, Thai)
Convey nonlinguistic information
Gender, age, and emotional status
Supplement speech reading
Voicing information is helpful (e.g., seeing the speaker's mouth)

42
Q

what is temporal resolution/acuity

A

how the auditory system (AS) processes time-varying information (changes over time)

43
Q

what is gap detection

A

Ability to detect a change over time (e.g., a silent gap) between two brief stimuli

44
Q

two main processes of temporal resolution

A

Within-channel gap detection threshold

Across-channels gap detection threshold

45
Q

what is within channel GDT

A

minimum time needed to detect a gap between sounds that have the SAME spectrum

Within each frequency filter you can detect the timing changes (close frequencies)

46
Q

what are channels

A

filters in the cochlea

47
Q

if there is a larger gap detection threshold

A

harder time understanding speech

48
Q

if you have two sounds in the same spectrum and they are separated, how far apart in time do they need to be for the brain to recognize they are different?

A

3 ms

49
Q

what is across-channel GDT

A

minimum time needed to detect gap between sounds presented to two ears

sounds that are spectrally dissimilar (e.g., tone and noise)

able to detect that sounds are two different ones

50
Q

what are the differences between within and across

A

w/in: relies on detecting a temporal gap in a single frequency channel
Across: requires integrating information from multiple channels making it more complex and usually yielding higher thresholds

51
Q

what is temporal integration/summation

A

Ability of the auditory system (AS) to add up information over time

if the sound is heard for a short time it will be harder to hear than one heard for a longer time

52
Q

auditory system appears to integrate pure tone signal over

A

200-300 ms period

53
Q

Auditory thresholds do NOT improve if the signal duration >300ms

A

true

54
Q

If stim duration is too long (e.g., 2 min) threshold may become worse due to

A

adaptation
neurons will stop firing when the tone has been on for too long

55
Q

Sounds are characterized by pressure variations over time

A

true

56
Q

intensity and frequency are constant over time

A

steady sound

57
Q

what processes are used to measure temporal resolution

A

GDT
Temporal masking
amplitude modulation detection

58
Q

determine the smallest detectable time gap between two stimuli

A

GDT

59
Q

what signals are used in GDT

A

sinusoids, BBN, or NBN

60
Q

GDT in humans for clearly audible noise bursts is

A

about 2 to 3 ms
GDT increases for frequencies < 200 Hz

61
Q

three types of GDT

A

Simple paradigm → a variable silent time gap between two signals (e.g., Random GDT)

2nd test → two or more signal pairs, with one signal containing a variable silent gap

3rd test → a series of BBN segments with 0-3 gaps per segment, varying in duration (e.g., GIN → Gaps-In-Noise test)
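A minimal sketch of the simple gap-detection paradigm, assuming Python with NumPy; the burst duration, sample rate, and gap values are illustrative assumptions, not test specifications.

```python
import numpy as np

def gap_stimulus(gap_ms: float, burst_ms: float = 300, fs: int = 16000) -> np.ndarray:
    """Two broadband noise bursts separated by a variable silent gap."""
    rng = np.random.default_rng(0)
    burst = rng.standard_normal(int(fs * burst_ms / 1000))   # BBN burst
    gap = np.zeros(int(fs * gap_ms / 1000))                  # silent gap
    return np.concatenate([burst, gap, burst])

# The listener compares a no-gap stimulus with a gapped one; for clearly
# audible noise the within-channel GDT is about 2-3 ms (see card 60).
standard = gap_stimulus(gap_ms=0)
target = gap_stimulus(gap_ms=3)
```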

62
Q

what is the amplitude modulation detection threshold

A

determine the smallest amount of variation needed to detect that a sound is fluctuating in level → smallest amount of variation to recognize that it is varying and not steady

63
Q

closer the cycles are together =

A

faster amplitude modulation rate

64
Q

deep amplitude modulation depth =

A

far from baseline, large modulation depth

65
Q

the farther apart the cycles are =

A

slower amplitude modulation rate

66
Q

shallow amplitude modulation depth

A

close to baseline; small modulation depth, so the brain cannot grasp it and the modulation has to be greater (louder) to be detected

67
Q

Temporal modulation refers to a

A

recurring change (amp or frequency change over time)

68
Q

temporal modulation is important because

A

speech is a modulating signal

69
Q

degree of change determines _________ of signal

A

modulation depth

70
Q

The depth of modulation is dependent on the ______ of the stimulus, which is the frequency at which the modulation changes over time

A

rate

71
Q

what is rate of a stimulus

A

frequency at which the modulation changes over time

72
Q

Ability to detect modulation worsens at ______ & ________ as modulation rate increases. What is then needed?

A

low SLs and higher frequencies
Greater modulation depth is then needed for detection
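A minimal sketch of an amplitude-modulated tone, assuming Python with NumPy; the carrier frequency, modulation rates, and depths are illustrative assumptions. Depth 0 is a steady (unmodulated) sound, a larger depth moves the envelope farther from baseline, and a higher rate packs the envelope cycles closer together.

```python
import numpy as np

fs = 16000
t = np.arange(0, 1.0, 1 / fs)

def am_tone(carrier_hz: float = 1000, rate_hz: float = 8, depth: float = 0.5) -> np.ndarray:
    """Sinusoidally amplitude-modulated tone: (1 + m*sin(2*pi*fm*t)) * carrier."""
    envelope = 1 + depth * np.sin(2 * np.pi * rate_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

steady = am_tone(depth=0.0)    # unmodulated: constant level
shallow = am_tone(depth=0.1)   # small depth: fluctuation is hard to detect
deep = am_tone(depth=0.9)      # large depth: clear waxing and waning
fast = am_tone(rate_hz=64)     # cycles closer together = faster modulation rate
```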

73
Q

what is the difference between simultaneous and non-simultaneous masking

A

simultaneous masking → masking occurs when the signal and masker overlap in time (this is the masking used in audiometry)

non-simultaneous masking → forward and backward masking (masker and signal separated in time)

74
Q

what is meant by temporal masking

A

masker and signal are separated in time

75
Q

what is forward masking

A

masker comes first, then the signal
Short duration signal masked by louder sound closely preceding it

For signal detection, separation between masker and signal needs to be >100-200 ms

Masking occurs forward in time
Signal duration is short
Masker duration is longer

76
Q

what is the theory behind forward masking

A

When a loud masker is present, the neurons fire, and after firing they enter a refractory period, so it takes time before they can fire again. Because the masker is so loud and large, the neurons fire and are still in this refractory (resting) phase when the tiny signal comes in, so the signal is not enough to re-stimulate them.
No matter how good your hearing sensitivity is, you will not be able to hear the signal.
This effect disappears within 100-200 ms: if a signal is presented right after the masker turns off it will not be heard, but after this time period it will be heard.

77
Q

if you have a longer and bigger signal than the masker you

A

will hear it even within the 100-200ms

78
Q

if you have a long masker and a short signal and wait 100-200 ms,

A

you will hear it

79
Q

if you have a long masker and a short signal and do not wait 100-200 ms,

A

you will not hear it

80
Q

what is backward masking

A

signal comes first then the masker

Short duration signal masked by a sound rapidly following it

For signal detection, separation between masker and signal has to be >25-50ms

Masking that occurs is backwards in time
Signal duration is short
Masker duration is longer

81
Q

in true forward and backward masking you will not hear the signal

A

true

82
Q

why is backward masking backward

A

Backwards because the signal is already there when the masker comes in
same criteria as forward but different timing

83
Q

binaural

A

sound reaches both ears

84
Q

diotic

A

identical stimuli presented to both ears
One signal in both ears
E.g., speech and speech or noise and noise etc.

85
Q

dichotic

A

sound presented to two ears is different
Two different signals in both ears
E.g., tone and noise or speech and noise etc.

86
Q

localization

A

judgement of sound position outside the head

87
Q

lateralization

A

judgement of sound position within the head (under headphones)
Listening to sound under headphones

88
Q

what is the duplex theory of sound localization

A

explains how we locate sounds in space using two main auditory cues: interaural time differences (ITDs) and interaural level differences (ILDs). these cues are used differently depending on the frequency of the sound

There are two sound cues used for localization
ITD or IPD (phase)
Provides localization for LF stimuli
ILD
Provides localization information for HF stimuli
Localization is better for complex stimuli than for pure tones

89
Q

Wavelength is inversely proportional to frequency

A

true

90
Q

difference between ILD and ITD

A

ILD
HF cue
HFs have shorter wavelengths, around the size of the head or smaller, producing a sound shadow (a good cue for sound localization)
Cue can be as large as 15-20 dB
Head shadow effect
HF waves are partially blocked by the head, creating a difference in loudness between the ears

ITD
LF cue
The sound wave's longer wavelength reaches both ears at slightly different times and can travel around the head
Comparing the phase differences of the wave arriving at each ear

91
Q

When wavelength of sound wave is smaller than the diameter of the head (> 1500 Hz), an ITD is

A

greater than one period of the wave
Here, comparing the phase differences between the waves arriving at each ear doesn't provide a unique ITD
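A quick arithmetic check of the wavelength argument, assuming a speed of sound of about 343 m/s (an assumption, not a value from the deck): around 1500 Hz the wavelength becomes comparable to the width of the head (roughly 20 cm), which is why phase/ITD comparisons stop giving a unique answer and ILD cues take over.

```python
# wavelength = speed of sound / frequency
speed_of_sound = 343.0   # m/s, assumed value at room temperature

for f in (500, 900, 1500, 4000):
    wavelength_cm = speed_of_sound / f * 100
    print(f"{f} Hz: wavelength ≈ {wavelength_cm:.0f} cm")
# 500 Hz ≈ 69 cm, 900 Hz ≈ 38 cm, 1500 Hz ≈ 23 cm, 4000 Hz ≈ 9 cm
```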

92
Q

Sound stimuli in front of a listener (0° azimuth and elevation) produce no interaural differences. Why? Which neurons are the most sensitive here?

A

There are no interaural differences because the sound is not on either side
Midline neurons are the most sensitive here, because you have to pick up the signal from the front when the left and right ears are not getting cues (no interaural differences)

93
Q

ITD Cues & MAA

A

MAA is smallest when sounds come from directly in front
When the reference ITD is at 0° azimuth
≤ 900 Hz: MAA ≈ 3°
Up to 900 Hz, if the sound source moves only 3° you can detect that it changed, using ITD cues
For < 900 Hz (LFs) you can notice a change in angle of 3° with the help of ITD cues
900-1500 Hz: MAA thresholds increase dramatically
ITD doesn't do a good job here; not as sensitive
> 1500 Hz: MAA is undetectable; ITDs cannot be used to detect the MAA for sinusoids
After 1500 Hz it is hard and you lose these cues, because ITD is a LF cue

94
Q

for < 900 Hz (LFs) you can notice a change in angle of 3° with the help of ITD cues

A

true

95
Q

above 1500 Hz it is hard to use ITD cues and you lose them, because ITD is a LF cue

A

true

96
Q

ILD cues and MAA

A

MAA is smallest when sounds come from directly in front
When the reference ILD is at 0° azimuth
ILD changes can be detected across frequencies when the azimuth is > 0°, but practically they are sufficiently large only at high frequencies
Performance worsens around 1500-1800 Hz
Due to the small wavelengths compared to the head diameter

97
Q

we are not good at localizing above 1500 Hz with either ILD or ITD cues
we can still do it, we just need more separation

A

true

98
Q

why are binaural beats important

A

they add another cue and richness to the sound for localization and signal detection

99
Q

When tones are in phase

A

add

100
Q

When they are out of phase

A

subtract

101
Q

When two signals with frequencies very close to each other are presented, what are produced

A

beats

102
Q

what are beats

A

waxing and waning of a signal

103
Q

what are binaural beats

A

auditory illusion that occurs when two slightly different frequencies are played separately into each ear. The brain perceives a third, “phantom” beat frequency that is the difference between the two tones. For example, if a tone of 300 Hz is played in the left ear and a tone of 310 Hz in the right, the brain perceives a 10 Hz beat (310 - 300 = 10 Hz).

104
Q

equation for beats

A

Number of beats per second = |f2 − f1| (the absolute difference between the two signal frequencies)
If the difference between the frequencies is 3 Hz, the waxing and waning will occur 3 times per second
If the difference between the frequencies is 5 Hz, the waxing and waning will occur 5 times per second
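A minimal sketch of the beat-rate formula in plain Python; the 300/310 Hz pair comes from the binaural-beat card above, and the other frequency pairs are just illustrations.

```python
def beats_per_second(f1_hz: float, f2_hz: float) -> float:
    """Number of beats per second = |f2 - f1|."""
    return abs(f2_hz - f1_hz)

print(beats_per_second(440, 443))   # 3 beats/s: waxing and waning 3 times per second
print(beats_per_second(440, 445))   # 5 beats/s
print(beats_per_second(300, 310))   # 10 beats/s, as in the binaural-beat example
```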

105
Q

amplitude vs. time

A

beats

106
Q

when will beats not be heard

A

when the signals are the same frequency, because they stay in phase with each other, so there is no alternating addition and cancellation of the signals (no beats)
when the difference between the signal frequencies is > 50-100 Hz
the frequency separation here allows the brain to detect the two signals as separate pure tones with different frequencies (no beats)

107
Q

what is the importance of sound source determination

A

We want to be able to do this when, for example, we want to focus on one person talking while others are talking in the background

108
Q

sound source determination uses

A

Perceptual coherence
Precedence effect
Modulation Detection Interference (MDI)
Comodulation Masking
Binaural Masking Level Difference (MLD)

109
Q

what is perceptual coherence

A

Components of speech are grouped together and perceived as one auditory event
How does our brain pull together the same sounds

110
Q

when will perceptual coherence more likely occur

A

if the components have similar acoustic properties, like:

Common fundamental frequency (f0)

The same voice onset time (VOT) - VOT applies when the vocal tract (VT) is blocked for a stop consonant and a vowel follows
VOT measures the time between the release of the stop and the start of voicing
VOT is only applicable to stops and no other sounds

111
Q

Sound from a single source generally sounds the same regardless of whether presented in isolation or presented with other sounds

A

true

112
Q

what is the Precedence Effect/Law of the First Wavefront

A

Illusion produced when 2 similar sounds are delivered in quick succession from sound sources at different locations but only a single sound is perceived
This is why we do not hear echoes even though they are present

113
Q

echo threshold

A

30 to 50 ms for complex sounds
before we hear the echoes

114
Q

Law of the first wavefront states

A

We localize based on the signal that reaches our ears first
whichever wavefront hits your ears first determines the location the brain says the sound is coming from, because the brain suppresses the sounds that arrive in quick succession afterward

115
Q

what is echo suppression? how do we perceive it in forward and backward recordings

A

Binaural auditory systems tend to suppress later-arriving sounds (echoes) and emphasize the first wave front (indicating sound source location)

We can hear the echoes a microphone picks up when we play recordings backwards
We do not hear the echoes in real time because of echo suppression
When recordings are played forward, the reflections are not heard as echoes; they are noticed only as subtle changes in voice quality

116
Q

Localization is better for pure tones than complex tones

A

FALSE
it is better for complex stimuli than for pure tones

117
Q

do pure tones cross filters

A

no

118
Q

do complex signals cross filters

A

yes

119
Q

modulated vs unmodulated signals

A

modulated: the loudness of the sound fluctuates periodically. This modulation can make the signal sound like it’s “pulsing” or “wavering”; it will cause an increase in threshold by 10-15 dB; complex sounds

unmodulated: has a constant amplitude, frequency, or phase throughout its duration. this makes it sound steady and unchanging to the listener; pure tones

120
Q

When masker and signal modulator frequency are similar, threshold for detection of the amplitude modulated signal increases (worsens)

A

true

121
Q

Similar masker and signal modulator frequency (even in different filters) =

A

threshold for detection of the amplitude modulated signal increases (gets worse)
This is because the masker masked the signal

122
Q

Different masker and signal modulator frequency =

A

threshold for detection of the amplitude modulated signal decreases (improves)
They are in different filters so you will hear both

123
Q

what is comodulation masking release

A

Phenomenon in which the detection of a tone centered in a modulated band of noise is improved by the addition of another band of modulated noise
Detection of a tone masked by a modulated noise will improve significantly if another band of noise with the same temporal characteristics is added

124
Q

We added a modulated masker and detection of the signal got worse (as is supposed to happen), but when another modulated noise band is added along with this masker, the threshold gets better and comes down.
What is this phenomenon?

A

Comodulation Masking Release

125
Q

why does tone detection improve with two added maskers that are similar?

A

Detection of the tone improves because the two modulated bands of noise are perceptually grouped by the CANS while the signal is detected as a separate auditory event - perceptual coherence

126
Q

what is the difference between MDI and CMR

A

MDI - one masker and one signal, causes poorer threshold because it masks
CMR - one signal, two maskers, causes better thresholds because the brain lumps the two together
Perceptual coherence causes CMR to happen

127
Q

what is the Auditory Scene Analysis/Cocktail Party Effect

A

Ability to focus one’s listening attention on a single talker among a myriad of voices and background noise

The AS breaks down sound waves into different frequency components using the spectral analysis of the cochlea; after separating sounds this way, it can assign the different components to different sound sources

128
Q

Factors aiding listening in a complex listening situation

A

Spatial separation
Voices coming from different directions
Where are the voices coming from
azimuth

Different pitches of different speakers
Men, women, children etc.

Different f0 of speakers (male vs. female differences)
We all have different ones based on how the VFs vibrate and this is why our voices are so distinct

Different accents

Different speeds of sound reaching the ears
Localization: how long does it take for the sound to hit you
Farther away - softer voice
Closer - louder voice

129
Q

Masker and signal coming from the same side

A

difficult listening environment

130
Q

Masker and signal coming from separate sides

A

easier listening environment

131
Q

what is auditory figure ground

A

ability to focus on a specific sound or “figure” in a noisy background, which is the “ground”
Figure = sound you are paying attention to
Ground = any other sound

132
Q

what is binaural masking level differences

A

Phenomenon where listener’s ability to detect a sound signal in the presence of background noise improves when using both ears rather than one
observed when there’s a phase or time difference between the signal (e.g., a tone) and the noise in each ear

133
Q

allows us to hear in noisy environments and localize based on phase changes

A

Binaural Masking Level Differences (MLDs)

134
Q

release from masking

A

The listener listens to a tone and noise identical in both ears; the level is adjusted until the tone is no longer heard (masked); then the signal is phase-shifted 180° (out of phase) in one ear, which makes the tone audible again
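A minimal sketch of the two listening conditions described above, assuming Python with NumPy and illustrative frequencies and levels: the noise is identical (diotic) in both ears, and the tone is either identical in both ears or phase-shifted 180° in one ear (multiplied by −1), the condition that produces the release from masking.

```python
import numpy as np

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(0)

noise = rng.standard_normal(t.size)        # same noise to both ears (diotic)
tone = 0.1 * np.sin(2 * np.pi * 500 * t)   # 500 Hz tone near its masked level (assumed)

# Columns are [left ear, right ear].
tone_in_phase = np.column_stack([noise + tone, noise + tone])   # tone masked
tone_inverted = np.column_stack([noise + tone, noise - tone])   # 180° shift: audible again
```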