Localization Flashcards
Azimuth
- Horizontal Plane - Sound Localization
- identification of the position of a sound source
- localization accuracy is poor in the middle frequencies, very good at the high and low ends
What is the smallest change in position of a sound source that someone can detect? (the minimum audible angle)
- better (smaller) below 1000 Hz
- gets bigger as the source moves off to one side or the other
- a wideband stimulus (noise) is easier: you use whatever information is most useful, and lower frequencies are easiest to use, so you pick those out
- better at the frequency extremes; about 1 degree when the source is right in front of your face (roughly the width of your thumb at arm's length)
Duplex Theory of Sound Localization
- one cue is doing the work at low frequencies, a different one at high frequencies
- interaural time differences (ITDs)
- interaural level differences (ILDs)
- ITDs dominate at low frequencies, ILDs at high frequencies; the combination of the two covers the whole range
Interaural time differences
- microsecond is 1/1,000,000 second
- the two ears are not in the same place, so a sound off to one side hits each ear at a different time
- at 0 deg azimuth the sound reaches both ears at exactly the same time; at 90 deg the source is as close to one ear as it can get, and the time difference is greatest
- the extra travel distance around your head corresponds to about .66 milliseconds (660 microseconds) for a sound coming from 90 deg right to left, or the other way around (worked example below)
- with training you can hear a difference of 10-15 microseconds
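A worked example, assuming the Woodworth spherical-head approximation ITD = (r/c)(theta + sin theta) with an 8.75 cm head radius (both the formula and the radius are assumptions, not from these notes); it reproduces the ~660 microsecond maximum quoted above:

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate ITD in seconds for a distant source, using the
    Woodworth spherical-head formula: ITD = (r/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

print(itd_woodworth(0) * 1e6)   # 0.0 -- both ears equidistant at 0 deg
print(itd_woodworth(90) * 1e6)  # ~656 -- the ~660 microseconds in the notes
```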
Jeffress Delay Line
- sound hits you and travels along a neural delay line from each ear
- the two sides meet at a coincidence detector, which fires only if both inputs arrive there together; if the information from one side gets there but the other doesn't, it won't fire
- medial superior olive - coincidence detector
- a neural computation for localization (toy sketch below)
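A toy sketch of the Jeffress idea, treating the bank of coincidence detectors as a cross-correlation over candidate internal delays (function and variable names are mine, not a standard API):

```python
import numpy as np

def jeffress_best_delay(left, right, fs, max_itd_s=660e-6):
    """Each 'coincidence detector' tests one internal delay and responds
    most strongly when that delay cancels the external ITD."""
    max_lag = int(round(max_itd_s * fs))
    lags = np.arange(-max_lag, max_lag + 1)
    scores = [np.sum(left[max_lag:-max_lag] *
                     np.roll(right, -lag)[max_lag:-max_lag]) for lag in lags]
    return lags[int(np.argmax(scores))] / fs  # estimated ITD in seconds

fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 500 * t)                  # 500 Hz: ITD unambiguous
right = np.concatenate([np.zeros(24), tone[:-24]])  # delay right ear by 500 us
print(jeffress_best_delay(tone, right, fs) * 1e6)   # ~500 microseconds
```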
Limits of ITDs
- low frequency: a wavefront reaches the non-reference ear before the next wavefront arrives at the reference ear, so the two can be matched to calculate an ITD
- high frequency: a wavefront reaches the other ear only after many later wavefronts have already arrived there, so the matching fails and ITDs don't work
- ITD becomes useless as a cue around 1600 Hz, because that is where the wavelength equals the average width of the human head (checked below)
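A quick check of that figure, assuming c = 343 m/s and a head width of about 21.5 cm (assumed average):

```python
c = 343.0              # speed of sound in air, m/s
head_width = 0.215     # assumed average human head width, m
print(c / head_width)  # ~1595 Hz: above this, a full wavelength fits within
                       # the head width and the ITD phase cue becomes ambiguous
```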
Interaural Level Difference (ILD)
- high frequency sounds give rise to ILDs, due to the acoustic shadow of the listener’s head
- the head is a big object in the path of the sound
- it casts an acoustic shadow that reduces the amplitude of the stimulation on the far side of the head (the side that is in the shadow)
- ILDs depend on both the azimuth of the sound source and the frequency of the sound
- difference between the two ears is about 20 dB at 6000 Hz
- trained listeners can detect about a 1 dB difference in ILD; untrained listeners need about 6-8 dB (minimal computation below)
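ILD is simply the between-ear level difference expressed in decibels. A minimal sketch of that computation (names and numbers are illustrative; the 20 dB toy example mirrors the 6000 Hz figure above):

```python
import numpy as np

def ild_db(near, far):
    """ILD in dB: level at the near ear relative to the far, shadowed ear."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(near) / rms(far))

near_ear = np.random.randn(48000)   # signal at the ear facing the source
far_ear = near_ear * 0.1            # head shadow: amplitude down by 10x
print(ild_db(near_ear, far_ear))    # 20.0 dB, like 6000 Hz at the side
```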
Cone of Confusion
- no matter at what point a sound originates on a conical surface it is always the same distance farther from one ear than from the other
- accordingly, though the hearer can tell which side the sound comes from, they cannot tell which of the possible locations on the cone it is
- the level differences and time differences are identical, so one location is hard to tell from another; you then rely on other cues: vision, head movements, reflections off the pinna, etc. (spectral cues)
- you don't confuse right with left, but you can confuse two locations that are the same distance from the midline, one in front and one behind (geometric check below)
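A tiny geometric check of the front/back confusion, assuming two ears on the interaural axis: a source and its front/back mirror image are the same amount farther from one ear than the other, so the ITD and ILD cues match exactly.

```python
import numpy as np

ear_left = np.array([-0.0875, 0.0])   # assumed ear positions, meters
ear_right = np.array([+0.0875, 0.0])  # y axis points toward the front

def path_difference(src):
    """Extra distance the sound travels to the left ear vs the right ear."""
    return np.linalg.norm(src - ear_left) - np.linalg.norm(src - ear_right)

front = np.array([1.0, 1.0])    # ahead and to the right
back = np.array([1.0, -1.0])    # its mirror image behind the listener
print(path_difference(front), path_difference(back))  # identical -> confusable
```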
Exceptions to the rule: ITD discrimination at high frequencies
- listeners are sensitive to ITDs in low-frequency pure tones, but not in high-frequency PURE TONES
- they are sensitive to ITDs in high-frequency sounds that are AMPLITUDE MODULATED at low frequencies
- you are using the high-frequency carrier, but latching on to low-frequency information
- with amplitude modulation you pay attention to the envelope, and the envelope changes at a low frequency (sketch below)
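A minimal sketch of such a stimulus (parameter values are illustrative): a 4000 Hz carrier amplitude-modulated at 100 Hz. Delaying the whole waveform at one ear delays the slow envelope, and it is the envelope's timing that listeners can use.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
carrier = np.sin(2 * np.pi * 4000 * t)               # high-frequency carrier:
                                                     # fine-structure ITD is unusable
envelope = 0.5 * (1 + np.sin(2 * np.pi * 100 * t))   # slow 100 Hz envelope
am_tone = envelope * carrier

# A 500 us interaural delay shifts the envelope too; listeners latch onto
# that low-frequency envelope timing even though the carrier is 4 kHz.
left = am_tone
right = np.concatenate([np.zeros(24), am_tone[:-24]])
```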
Exceptions to the rule: ILD discrimination at low frequencies
- ILDs at low frequencies are small for distant sources, and large when the sound source is close to the head
- listeners are sensitive to these low-frequency ILDs
Sound Elevation (spectral cues)
- pinna reflection
- better with narrow-band noise; a pure tone can't be localized on the vertical plane
- the shape of the whole system helps: your neck, head, ears, shoulders, etc. all add information
Head Related Transfer Function
- roughly flat until 1500 Hz or so, then amplified and attenuated depending on your body/system
- amplified at 2000-4000 Hz because of the ear-canal resonance, which boosts frequencies in that region at the TM
- the spectrum (amplitude vs. frequency) is different all around the body, because the sound interacts with different parts as the source moves around
- this gives the spectrum a certain shape by the time it reaches your TM
- used to create virtual reality audio: shape a sound for a certain person and then play them that sound (sketch below)
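A hedged sketch of the virtual-audio idea: convolve a dry (anechoic) source with a left/right pair of head-related impulse responses, the time-domain form of the HRTF. The hrir_left/hrir_right arrays here are random placeholders standing in for responses measured at a particular person's eardrums.

```python
import numpy as np
from scipy.signal import fftconvolve

# Placeholder HRIRs for one source direction; real ones would be measured
# with microphones at the listener's eardrums.
hrir_left = np.random.randn(256) * np.hanning(256)
hrir_right = np.random.randn(256) * np.hanning(256)

mono = np.random.randn(48000)   # dry source signal

# Filtering with the HRIR pair imposes the spectral shape each eardrum
# would receive from that direction, so the sound appears to come from there.
left = fftconvolve(mono, hrir_left)
right = fftconvolve(mono, hrir_right)
```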
Sound localization learning
- with ear molds in place you can learn to localize again after a while, forming a new map by hearing sounds and seeing where each sound is coming from
Precedence effect (aka echo suppression, law of the first wavefront)
- when a recording is played backwards you can hear echoes that you were actually suppressing; echo suppression is a mechanism for telling where sounds are coming from
- lead and lag: the two wavefronts are compared to one another against an "echo threshold"
- the lag is the echo off something else (think of a canyon: you can hear your own voice from elsewhere, though obviously you aren't there)
- if delta t > the "echo threshold" you hear sound from two different locations; if delta t < the "echo threshold" you hear one sound from the lead location, i.e. you are suppressing the echo (sketch below)
- lead and lag trade in "disparity": one small and one large can overcome one another
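A minimal sketch of the echo-threshold rule described above, plus a lead/lag click pair; the 5 ms threshold is illustrative, since real echo thresholds vary with the stimulus.

```python
import numpy as np

def lead_lag_pair(fs=48000, delta_t_s=0.004, lag_gain=1.0, dur_s=0.05):
    """A lead click followed by a delayed lag click (a simulated echo)."""
    out = np.zeros(int(dur_s * fs))
    out[0] = 1.0                           # lead wavefront
    out[int(delta_t_s * fs)] += lag_gain   # lagging reflection
    return out

def percept(delta_t_s, echo_threshold_s=0.005):
    """Apply the rule from the notes (threshold value is illustrative)."""
    if delta_t_s < echo_threshold_s:
        return "one fused sound, heard at the lead location"
    return "two sounds from two different locations (echo audible)"

print(percept(0.002))   # below threshold: fused, the lead wins
print(percept(0.030))   # above threshold: the echo is heard separately
```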
Masking Level Difference
- sometimes you can improve the ability to detect a signal by adding noise
- tested different combinations of noise and signal across the two ears (reproduced in the sketch below):
- 1. noise and signal identical in both ears: not helpful when trying to detect the signal
- 2. noise the same in both ears but the signal out of phase between the two ears: helpful in detecting the signal
- 3. noise and signal in one ear only: not helpful
- 4. noise the same in both ears, signal in only one ear: helpful for signal detection
- when the sounds are in different places and not overlapping with each other, they are easier to separate from one another
- the MLD is largest at low frequencies
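A sketch reproducing the four conditions above; these are conventionally labeled NoSo, NoSpi, NmSm, and NoSm ("o" = same in both ears, "pi" = phase-inverted, "m" = monaural). Tone and noise parameters are illustrative.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 500 * t)   # low-frequency tone to detect
noise = np.random.randn(fs)            # masking noise

conditions = {
    # 1. NoSo: identical noise and signal in both ears -> no benefit
    "NoSo": (noise + signal, noise + signal),
    # 2. NoSpi: same noise, signal inverted in one ear -> largest benefit
    "NoSpi": (noise + signal, noise - signal),
    # 3. NmSm: noise and signal in one ear only -> no benefit
    "NmSm": (noise + signal, np.zeros(fs)),
    # 4. NoSm: same noise in both ears, signal in one ear -> benefit
    "NoSm": (noise + signal, noise),
}
```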