localization Flashcards

1
Q

sound localization

A

Identification of the position of a sound source.

There's a mid-frequency range where sound localization is poor, but it's good at high frequencies and at low frequencies.

2
Q

minimum audible angle

A

The smallest detectable change in sound-source position.

Below 1000 Hz, a change of about 1 degree of arc can be detected.

3
Q

duplex theory of sound localization

A

Interaural Time Differences (ITDs): the dominant cue at low frequencies

Interaural Level Differences (ILDs): the dominant cue at high frequencies

4
Q

Interaural time difference

A

A sound off to one side takes longer to reach the far ear than the near ear. It's amazing that your auditory system can pay attention to that time difference, which is on the order of microseconds; the maximum, for a source directly to one side, is about 660 microseconds.
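A minimal sketch, not from the card, of where a figure like 660 microseconds comes from; it uses the Woodworth spherical-head approximation, and the head radius and speed of sound are assumed typical values:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C
HEAD_RADIUS = 0.0875     # m; an assumed typical adult head radius

def itd_woodworth(azimuth_deg: float) -> float:
    """Approximate ITD (in seconds) for a distant source at a given azimuth,
    using the Woodworth spherical-head model: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source directly to one side (90 degrees) gives roughly 650-660 microseconds.
print(f"{itd_woodworth(90) * 1e6:.0f} microseconds")
```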

5
Q

ITD: Jeffress delay line

A

Here's the idea: a delay line. A sound source in front of you reaches each ear, and each ear sends information to a relay station. Each relay station feeds its own side of a circuit of neurons called the delay line. Along the delay line, the inputs from the two sides meet at coincidence detectors (CDs). A CD fires only if the signals from both sides reach it at the same time; each one is checking whether its two inputs arrive together. Eventually the two signals meet at some position along the line, and that CD fires because it sees information from both sides at once.

The CDs are in the medial superior olive (MSO). This is how the auditory system encodes this information in the brain; it's called a computation, and it is an unbelievably precise neural computation carried out in the MSO.

The farther the source is to the right, the farther the compensating delay (and the coincidence point) lies toward the left side of the line.
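A toy sketch of the coincidence-detector idea, not from the card: each candidate internal delay stands in for one detector, and the delay with the strongest coincidence (a cross-correlation peak) is the ITD estimate. The sample rate, search range, and 500 Hz demo tone are all assumptions:

```python
import numpy as np

FS = 100_000          # sample rate in Hz (assumed for this toy example)
MAX_ITD = 700e-6      # search range, slightly beyond the ~660 microsecond physiological maximum

def estimate_itd(left: np.ndarray, right: np.ndarray) -> float:
    """Return how much later (in seconds) the signal arrives at the right ear,
    picked as the internal delay whose 'coincidence detector' responds most strongly."""
    max_lag = int(MAX_ITD * FS)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        # One coincidence detector: undo a candidate delay of `lag` samples on the
        # right-ear signal and measure how well it then coincides with the left-ear signal.
        shifted = np.roll(right, -lag)
        score = np.sum(left[max_lag:-max_lag] * shifted[max_lag:-max_lag])
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / FS

# Demo: a 500 Hz tone that reaches the right ear 400 microseconds later.
t = np.arange(0, 0.05, 1 / FS)
left = np.sin(2 * np.pi * 500 * t)
right = np.sin(2 * np.pi * 500 * (t - 400e-6))
print(f"estimated ITD: {estimate_itd(left, right) * 1e6:.0f} microseconds")
```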

6
Q

limits of ITD

A

Low frequencies: when ITDs work. Imagine wavefronts coming in: A, B, and then X. Wavefront X reaches one ear and then moves across the head to the other ear before any other wavefront arrives, without other waves disturbing it, so the system can tell that the X at the far ear is the same X it saw at the near ear and can calculate an ITD.

High frequencies: many more wavefronts are coming by. Can you figure out the time difference for wavefront X? No, because you can't tell which wavefront is the one you want; you don't recognize X anymore, since they all look the same.

Head size matters for ITD-based sound localization: the ITD becomes useless once the wavelength gets smaller than the width of your head. Above about 1600 Hz you can't use the ITD of a pure tone because it's ambiguous; you can't keep track of which wavefront is X.
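A quick back-of-the-envelope check, with an assumed head width, of where that roughly 1600 Hz limit comes from:

```python
SPEED_OF_SOUND = 343.0   # m/s
HEAD_WIDTH = 0.215       # m; an assumed distance across the head between the ears

# The ITD cue turns ambiguous roughly where the wavelength shrinks below the head width:
#   wavelength = c / f < head width   =>   f > c / head_width
cutoff_hz = SPEED_OF_SOUND / HEAD_WIDTH
print(f"{cutoff_hz:.0f} Hz")   # about 1600 Hz
```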

7
Q

interaural level difference ILD

A

At high frequencies, your head is a big object compared to the wavelength; at low frequencies the head is effectively invisible to the wavefronts because the wavelength is so large. When the wavelengths are short (high frequencies), the head casts an acoustic shadow that reduces the amplitude of the stimulus at the far ear, so the amplitude is higher at the ear closer to the sound source than at the ear on the other side.

ILDs depend on both the azimuth of the sound source and the frequency of the sound.

8
Q

ILD: wavelength > obstruction shadow/no shadow

A

If the wavelength is larger than the obstruction, the wave passes around it and casts essentially no shadow (large water waves don't care about a log of wood). If the wavelength is smaller than the obstruction, the obstruction blocks the wavefront: the wave is smooth up to the block and disrupted and attenuated behind it.

9
Q

cone of confusion

A

Often described as a reversal. Example: the sound was coming from 45° in front but was perceived at 135° toward the back, still on the RIGHT side, so the listener is confused a large fraction of the time.

No matter at what point a sound originates on a conical surface like the one illustrated, it is always the same distance farther from one ear than from the other.

Accordingly, though the hearer can tell from which side the sound comes, he cannot discriminate among the many possible locations on that cone.
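A small sketch, using a simple far-field path-difference model and an assumed ear spacing (not anything specified on the card), showing why 45° in front and 135° behind give the same ITD:

```python
import math

SPEED_OF_SOUND = 343.0
EAR_SPACING = 0.18   # m; assumed straight-line distance between the two ears

def itd_simple(azimuth_deg: float) -> float:
    """ITD (s) under a simple far-field path-difference model:
    ITD = (d / c) * sin(azimuth), with azimuth measured from straight ahead."""
    return (EAR_SPACING / SPEED_OF_SOUND) * math.sin(math.radians(azimuth_deg))

# 45 degrees front-right and 135 degrees back-right produce identical ITDs,
# so this binaural cue alone cannot tell front from back: a reversal.
print(itd_simple(45.0), itd_simple(135.0))
```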

10
Q

ITD discrimination at high frequencies

A

Listeners are sensitive to ITDs in low-frequency pure tones.
Listeners are not sensitive to ITDs in high-frequency pure tones: the high-frequency wavefronts are too hard to keep track of, so the system gives up even though the time differences can be quite large.
Listeners are sensitive to ITDs in high-frequency sounds that are amplitude modulated at low frequencies, because the slowly varying envelope can be tracked.
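A minimal sketch, with assumed sample rate, carrier, and modulation rate, of the kind of stimulus meant here: a high-frequency carrier whose low-rate amplitude envelope carries the interaural delay:

```python
import numpy as np

FS = 48_000                       # sample rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / FS)     # 100 ms of signal

def am_tone(delay_s: float, carrier_hz: float = 4000.0, mod_hz: float = 125.0) -> np.ndarray:
    """A high-frequency carrier amplitude-modulated at a low rate, delayed by `delay_s`.
    The slow envelope (mod_hz) is what carries the usable timing information."""
    td = t - delay_s
    return (1.0 + np.cos(2 * np.pi * mod_hz * td)) * np.sin(2 * np.pi * carrier_hz * td)

left = am_tone(0.0)
right = am_tone(500e-6)   # the envelope reaches the right ear 500 microseconds later
```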

11
Q

ILD Discrimination at low frequencies

A

ILDs at low frequencies are:
small for distant sources
large when the sound source is close to the head

Listeners are sensitive to low-frequency ILDs.

12
Q

sound elevation (spectral cues)

A

Sound localization in the vertical plane (elevation), which relies on spectral cues.

13
Q

head related transfer function (HRTF)

A

The input is broadband noise with a flat spectrum, but by the time the sound actually reaches the eardrum the spectrum is no longer flat, because of resonances in the system: the resonances amplify certain frequencies and attenuate others.
Measured from 0° elevation and 0° azimuth, the function does not look flat. How can you use this information to generate a virtual sense of space?
These are called spectral cues because the graph is a spectrum.
The HRTF changes with changes in azimuth, and it changes with changes in elevation.
Localization with these cues is accurate in a free field.
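A minimal sketch of how an HRTF can be used to generate that virtual sense of space: convolve a dry mono sound with left- and right-ear head-related impulse responses (HRIRs) for the desired direction and play the result over headphones. The HRIRs below are made-up placeholders, not real measurements:

```python
import numpy as np

FS = 44_100
mono = np.random.randn(FS)        # one second of broadband noise as the dry source

# Placeholder HRIRs (head-related impulse responses); real ones come from
# measurements for a specific azimuth and elevation, e.g. a public HRTF database.
hrir_left = np.zeros(256)
hrir_left[0] = 1.0                # near ear: earlier and stronger
hrir_right = np.zeros(256)
hrir_right[30] = 0.6              # far ear: later and weaker (source off to the left)

# Convolving the dry sound with each ear's HRIR imposes the direction-specific
# spectral (and timing/level) cues; the two outputs form a binaural signal.
left_ear = np.convolve(mono, hrir_left)
right_ear = np.convolve(mono, hrir_right)
binaural = np.stack([left_ear, right_ear], axis=1)   # stereo signal, one column per ear
```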

14
Q

relearning sound localizations with “new ears”

A
Stages: 1. pre-adaptation; 2. immediately after inserting molds; 3. during the adaptation period; 4. near the end of the adaptation period; 5. immediately after removal of the molds.

Individuals face a grid of speakers. Pre-adaptation they're not perfect, but their responses essentially reproduce the grid.
Then ear molds are put in and the experiment is redone. Look what happens: horizontal localization stays relatively correct, but vertical localization no longer works.
With the molds still in, they are tested on different days, and vertical space begins to open up again; toward the end of the adaptation period they can localize vertically once more. We know something happened initially, because performance was poor right after the molds went in, so they are clearly learning something.
Tested without the molds, they do well again. They could localize with the molds, but that ability decreased with time.

What's cool about this? You can adapt to new ears, so you're learning something: in development, you learn to interpret your own ears, i.e., your own spectral cues/your own HRTF. What are they learning to do? To associate spectral cues with locations and form a new 'map'. BUT you can take out the ear molds and you go right back, so in this example these people hold two different 'maps'. The expectation had been that, once you formed a new map, you would need time to get over it and recreate the old one.

15
Q

precedence effect

A

Also known as echo suppression (the law of the first wavefront): it keeps us from mistaking sound that arrives directly from the source for the delayed copies reflected off the walls.
If the Δt between the leading and lagging sounds is smaller than the echo threshold, the listener hears one fused sound; if it is greater than the threshold, two different sounds are heard.
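A small sketch, with an assumed echo-threshold value, of the lead/lag stimulus idea behind the precedence effect:

```python
import numpy as np

FS = 44_100
ECHO_THRESHOLD_S = 0.03   # assumed echo threshold (tens of ms; it varies with the sound)

def click_pair(delta_t_s: float, duration_s: float = 0.1) -> np.ndarray:
    """A leading click plus a lagging copy delta_t later: a standard precedence-effect stimulus."""
    x = np.zeros(int(duration_s * FS))
    x[0] = 1.0                        # direct (leading) sound
    x[int(delta_t_s * FS)] += 0.8     # delayed (lagging) 'reflection'
    return x

for dt in (0.002, 0.050):
    percept = "one fused sound" if dt < ECHO_THRESHOLD_S else "two separate sounds"
    print(f"delta t = {dt * 1e3:.0f} ms -> expected percept: {percept}")
```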

16
Q

masking level difference (MLD)

A

It is easier to hear a tone when the tone is out of phase at one ear while the noise is the same at both ears.
Counterintuitively, you can improve signal detection by adding noise (the same noise, to the other ear).
The perceived sound-source position differs across the different sound configurations.
The MLD is largest at low frequencies (below about 1600 Hz), the same range over which ITDs are effective.

17
Q

masking level difference implications

A

The masked threshold of a signal sometimes can be markedly lower when listening with two ears than when listening with one.
Signal detection will be better when the signal and masker are not coincident in space.