localization Flashcards
sound localization
ID of the position of a sound source
There's a mid-frequency range where sound localization is poor, but it's good at both the high and the low frequencies.
minimum audible angle
smallest detectable change in sound-source position.
Below 1000 Hz you can detect about a 1-degree change in angle.
duplex theory of sound localization
Interaural Time Differences (ITDs)
Interaural Level Differences (ILDs)
Interaural time difference
Sound takes extra time to reach the farther ear. It's amazing because your auditory system can register that tiny time difference (microseconds).
Maximum ITD is about 660 microseconds (source directly off to one side).
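The ~660-microsecond figure follows from the travel-time difference across the head. A quick back-of-the-envelope check; the ear spacing and speed of sound below are assumed round numbers, not values from the lecture:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
EAR_SPACING = 0.23       # m, rough acoustic path between the ears (assumed)

# The largest ITD occurs when the source is directly to one side (90 deg azimuth):
max_itd_us = EAR_SPACING / SPEED_OF_SOUND * 1e6
print(f"{max_itd_us:.0f} microseconds")  # roughly 670, close to the ~660 quoted
```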
ITD: Jeffress delay line
Here’s the idea: what you have is a delay line. A sound source right in front of you reaches each ear, and each ear sends its information to a relay station. The trick is that the relay stations send their signals into a circuit of neurons called the delay line, each entering from its own side. Along the delay line sit coincidence detectors (CDs). A CD fires only if the signals from both sides reach it at the same time; each one is looking to see whether its two inputs arrive together. Eventually the two signals meet at some position, and that CD fires because it sees the information from both sides. The position of the firing CD thus encodes the arrival-time difference.
The CDs are in the medial superior olive (MSO). This is how the auditory system encodes ITD information in the brain; it's called a computation, and it's an unbelievably complicated neural computation carried out in the MSO.
The farther the sound source is to the right, the farther toward the left side of the delay line the coincidence point falls.
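Computationally, the delay line behaves like a bank of cross-correlators: one "coincidence detector" per candidate internal delay, and the detector whose delay exactly compensates the interaural difference responds most. A minimal simulation sketch; the sample rate, ITD value, and noise signal are all made up for illustration:

```python
import numpy as np

FS = 100_000           # samples per second (assumed)
ITD_SAMPLES = 30       # simulated ITD: 30 samples = 300 microseconds at FS

rng = np.random.default_rng(0)
left = rng.standard_normal(2000)        # signal at the left ear
right = np.roll(left, ITD_SAMPLES)      # same signal, arriving later at the right ear

# One coincidence detector per internal delay; each scores how well the two
# inputs line up when the right input is advanced by that delay.
MAX_LAG = 100
scores = [np.dot(left, np.roll(right, -lag)) for lag in range(-MAX_LAG, MAX_LAG + 1)]
estimated_itd = int(np.argmax(scores)) - MAX_LAG
print(estimated_itd)   # 30 -- the detector matching the true ITD fires most
```

The winning detector's position is the estimate of where the sound came from, which is the core of the Jeffress idea.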
limits of ITD
Low freq: wavefront X reaches the left ear before any other wavefront arrives, so the system can calculate an ITD.
When ITDs work:
Imagine wavefronts coming in: A, B, and then X. X moves from the right ear to the left ear. This is a good case because X crosses the head without other wavefronts disturbing it.
High freq:
At higher frequencies, more wavefronts pass the head in the same amount of time.
Can you figure out what the time difference is for wavefront X? No, because you can't figure out which wavefront is the one you want. You don't recognize X anymore because they all look the same. Head size matters for ITD-based sound localization.
ITD becomes useless once the wavelength gets smaller than the width of your head.
Above about 1600 Hz you can't use ITD for pure tones because it's ambiguous.
You can't keep track of which wavefront is X!
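The ~1600 Hz cutoff is just where the wavelength (λ = c / f) drops below the width of the head. A tiny check; the head-width value is assumed, picked so the cutoff lands near the quoted frequency:

```python
SPEED_OF_SOUND = 343.0   # m/s
HEAD_WIDTH = 0.215       # m; assumed, chosen so the cutoff falls near 1600 Hz

def itd_is_ambiguous(freq_hz):
    """ITD for a pure tone becomes ambiguous once its wavelength is
    smaller than the head, i.e. several wavefronts fit across it."""
    wavelength = SPEED_OF_SOUND / freq_hz
    return wavelength < HEAD_WIDTH

print(itd_is_ambiguous(500))    # False: ~69 cm wavelength spans the head
print(itd_is_ambiguous(1600))   # True: ~21 cm wavelength is under head width
```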
interaural level difference (ILD)
At high frequencies, your head becomes a big object compared to the wavelength. The head is essentially invisible to low-frequency wavefronts because their wavelength is so large.
If the wavelengths are smaller (high frequencies), the head casts a shadow that reduces the amplitude of the stimulus on the other side. So the amplitude of the stimulus is higher on the side closer to the sound source than on the far side.
ILDs depend on both the azimuth of the sound source and the frequency of the sound
ILD: if the wavelength is larger than the obstruction, there is no shadow; if it is smaller, a shadow forms.
Picture an obstruction in the wavefront's path: the wave is smooth up to the block and disturbed behind it. Large waves don't care about a log of wood; small ripples are blocked by it.
cone of confusion
This is also called a reversal. Example: the sound was coming from 45 degrees in the front but was perceived at 135 degrees toward the back, still on the RIGHT side. So a big fraction of the time the listener is confused about front versus back.
No matter at what point a sound originates on a conical surface like the one illustrated, it is always the same distance farther from one ear than from the other.
Accordingly, though the hearer can tell from which side the sound comes, he cannot discriminate among the many possible locations.
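The geometry shows up even in the simplest ITD approximation, where ITD depends only on the sine of the azimuth angle (a standard textbook simplification, not a formula from the lecture): a source at 45 degrees front and its mirror at 135 degrees back give identical ITDs.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
EAR_SPACING = 0.23       # m, assumed

def itd_seconds(azimuth_deg):
    """Sine-law approximation: ITD depends only on how far the source
    sits to the side, not on whether it is in front of or behind you."""
    return EAR_SPACING / SPEED_OF_SOUND * math.sin(math.radians(azimuth_deg))

# 45 deg front-right and 135 deg back-right: same lateral offset, same ITD.
print(math.isclose(itd_seconds(45), itd_seconds(135)))  # True
```

That equality is exactly the front/back reversal: the time cue cannot tell the two positions apart.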
ITD discrimination at high frequencies
Listeners are sensitive to ITDs in low-frequency pure tones. Listeners are not sensitive to ITDs in high-frequency pure tones.
The high-frequency wavefronts are too hard to keep track of, so the system gives up even though the time differences can be quite large.
Listeners are sensitive to ITDs in high-frequency sounds that are amplitude modulated at low frequencies
ILD Discrimination at low frequencies
ILDs at low frequencies are:
small for distant sources
large when the sound source is close to the head
Listeners are sensitive to low-frequency ILDs.
sound elevation (spectral cues)
Sound localization on the vertical plane
head related transfer function (HRTF)
The input is broadband noise with a flat spectrum, but by the time the sound actually reaches the eardrum the spectrum is no longer flat, because of resonances in the system. The resonances amplify certain frequencies and attenuate others.
Measured at 0 degrees elevation and 0 degrees azimuth, the spectrum at the eardrum doesn't look flat. How can you use this information to generate a virtual-reality sense of space?
These are called spectral cues because the graph is a spectrum.
The HRTF changes with changes in azimuth. The HRTF changes with changes in elevation.
Localization using these cues is accurate in a free field.
relearning sound localizations with “new ears”
1. pre-adaptation; 2. immediately after inserting molds; 3. during the adaptation period; 4. near the end of the adaptation period; 5. immediately after removal of the molds.
Individuals are in front of a grid of speakers. Pre-adaptation, their responses aren't perfect, but essentially they still reproduce the grid.
Then they put ear molds in people and had them redo the experiment. Look what happens! They still get horizontal position relatively correct, but vertical localization isn't working.
Now, with people still wearing the ear molds, they test on different days. You can begin to see vertical space open up; toward the end of the adaptation period they can localize vertically again. We know something happened initially because performance dropped when the molds went in, so they're learning something. Tested without the molds present, they do well again right away. They could still localize with the molds, but that ability decreased with time.
What's cool about this?
You can adapt to new ears, so you're learning something. In development, then, you learn to understand your own ears: you have to learn your own spectral cues, your own HRTF. What are they learning to do? Associate spectral cues with locations, forming a new 'map'. BUT you can take out the ear molds and you immediately go back, so these people hold two different 'maps' at once. The expectation had been that once you made a new map, you would need time to get over it and recreate the old one.
precedence effect
a.k.a. echo suppression (law of the first wavefront): it keeps us from mistaking sounds reflected off walls for sounds coming from the direct source.
If the Δt between the leading sound and its lagging copy is smaller than the echo threshold, the listener hears one fused sound; if Δt is greater than the threshold, the listener hears two different sounds.
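The fused-versus-separate rule is a simple threshold comparison. A sketch; the echo-threshold value varies with the type of sound (roughly milliseconds to tens of milliseconds), so the number below is an assumed placeholder:

```python
ECHO_THRESHOLD_S = 0.030   # assumed; real thresholds depend on the stimulus

def percept(delta_t_s):
    """Precedence effect: a lagging copy within the echo threshold fuses
    with the leading sound; beyond it, it is heard as a separate echo."""
    if delta_t_s < ECHO_THRESHOLD_S:
        return "one fused sound"
    return "two separate sounds"

print(percept(0.005))   # one fused sound
print(percept(0.080))   # two separate sounds
```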