L14 - Localisation Flashcards
What are the two components of sound direction?
- Azimuth: 0° is directly in front, 180° directly behind, -90° on the left and +90° on the right; judging left vs right is also known as lateralisation
- Elevation: -90° is directly below you, +90° directly above
- Externalisation and distance: sound is usually heard as externalised at some distance, but headphone-presented sound tends to be internalised (heard within the head), much like your own voice when you speak
What are Binaural cues?
- Cues that depend on the two ears working together
- Interaural Level Differences (ILDs): work best for high frequencies/short wavelengths
- If a sound comes from one side of your head: 1) the sound at the far ear is attenuated because the head is in the way (the head shadow); 2) the sound wave takes time to travel from one ear to the other, producing an Interaural Time Delay (ITD)
What are the different perspectives of interaural level differences: (the way sound travels around head)
- Sound comes from a speaker: the head reflects some of the sound (creating an acoustic shadow on the far side), while some of it diffracts (curves around the edge of the head) to reach the far ear
- Sounds that diffract best are those with wavelengths longer than the head
- Waves travelling around both sides of the head can meet and interfere with each other
- Reflection off the near side of the head adds to the incoming wave, increasing the intensity of the sound at the near ear
- Waves that meet after travelling around the head can reinforce or cancel each other depending on frequency, so at low frequencies the ILD does not vary in a systematic way
- ILDs get larger with increasing frequency: for a sound source off to one side with a flat spectrum (no variation in level across frequency), there is little interaural difference at low frequencies because the sound diffracts around the head, but as frequency increases the near ear gets a boost from reflection while the far ear is increasingly shadowed
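The diffraction point above can be sketched numerically: comparing the wavelength of a sound with the size of the head shows which frequencies curve around it (small ILD) and which are shadowed (large ILD). The head diameter and speed of sound below are assumed typical values, not figures from the notes.

```python
# Rough sketch: which frequencies diffract around the head vs. cast a shadow.
# Assumptions: head treated as ~17.5 cm wide; speed of sound 343 m/s in air.
SPEED_OF_SOUND = 343.0   # m/s
HEAD_DIAMETER = 0.175    # m (assumed typical adult head width)

def wavelength(frequency_hz):
    """Wavelength in metres for a given frequency."""
    return SPEED_OF_SOUND / frequency_hz

for f in [250, 500, 1000, 2000, 4000, 8000]:
    lam = wavelength(f)
    # Waves much longer than the head diffract around it (small ILD);
    # waves shorter than the head are shadowed (large ILD).
    regime = "diffracts (small ILD)" if lam > HEAD_DIAMETER else "shadowed (large ILD)"
    print(f"{f:5d} Hz: wavelength {lam:.3f} m -> {regime}")
```

At 250 Hz the wavelength (~1.4 m) is much longer than the head, so there is almost no shadow; by 4 kHz the wavelength (~8.6 cm) is shorter than the head and the shadow is strong.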
Why are interaural level differences not the complete answer?
- The ILD is close to zero at low frequencies, yet people can still tell which side a low-frequency sound is coming from, so there must be another cue
- The new cue is the time delay as sound travels from one side of the head to the other: up to about 700 microseconds to cover those few centimetres
- The smallest detectable ITD is about 10 microseconds
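The ~700 µs figure can be sanity-checked with the classic spherical-head (Woodworth) approximation. The head radius and speed of sound below are assumed typical values; the formula is a textbook approximation, not something stated in the notes.

```python
import math

# Woodworth spherical-head approximation for the interaural time delay:
#   ITD = (r / c) * (theta + sin(theta))
# where r is head radius, c the speed of sound, theta the azimuth in radians.
HEAD_RADIUS = 0.0875     # m (assumed ~17.5 cm head diameter)
SPEED_OF_SOUND = 343.0   # m/s

def itd_seconds(azimuth_deg):
    """ITD in seconds for a source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# Maximum ITD: sound directly to one side (azimuth 90 degrees).
print(f"max ITD ~ {itd_seconds(90) * 1e6:.0f} microseconds")
```

This gives roughly 650 µs, in the same ballpark as the ~700 µs quoted above; a source straight ahead gives an ITD of zero.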
What is the model of sound lateralisation? (medial superior olive)
- In the medial superior olive, the auditory system exploits axonal transmission delays
- A ladder-like network of converging axons arrives from both ears, so the axons are of different lengths by the time they reach a given nucleus in the medial superior olive
- These neurons are coincidence detectors: they only fire if they receive two action potentials within a narrow time window; the detector that fires tends to be on the side away from the sound source, so the array can be read as a map of ITD
- Seen in barn owl but maybe not applicable for humans
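The coincidence-detector idea above (the Jeffress model) can be sketched as a toy: an array of detectors, each with a different internal delay between its left- and right-ear inputs, where the detector whose internal delay cancels the external ITD receives coincident spikes and fires most. The delay values here are illustrative, not physiological.

```python
# Toy sketch of the Jeffress coincidence-detector array.
# A detector with internal delay d fires when d cancels the external ITD,
# i.e. when d ~ -itd; the winning detector's position encodes the source side.

def best_detector(itd_us, internal_delays_us):
    """Return the internal delay (in microseconds) that best cancels the ITD."""
    return min(internal_delays_us, key=lambda d: abs(d + itd_us))

# Internal delays spanning the physiological range of human ITDs (assumed).
detectors = list(range(-700, 701, 100))

# A sound leading at one ear by 300 us activates the detector at -300 us,
# i.e. the one on the side away from the sound source.
print(best_detector(300, detectors))
```

The readout is a labelled line: which detector fires, not how strongly, encodes where the sound is.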
What was the model for mammals?
- Two populations of coincidence detectors, one coding left and one coding right, which compete with each other via their activity levels
- Problem: the axons do not run directly from ear to medial superior olive; there are several nuclei along the way, e.g. the cochlear nucleus
- Communicating precisely phase-locked action potentials to the CNS requires extraordinary synapses in the brainstem
- The mammalian auditory system has adapted to make this work via the endbulb of Held and calyx of Held: giant synapses that envelop the cell they communicate with, giving such precise input-to-output timing that an incoming action potential produces another almost instantly, preserving the timing of the waveform
What is the duplex theory?
- Interaural time delays work well at low frequencies and interaural level differences work well at high frequencies
- Localisation combines the two cues across low and high frequencies
- Mid-frequency pure tones are hard to localise because the ILD is small and the ITD is ambiguous
STUDY: - Experimenters went up to a roof with a listener in an elevated chair, listening to pure tones from a loudspeaker that could be moved
- The ability to localise the sound was good at low frequencies, dropped at mid frequencies, and improved again at high frequencies
- This does NOT happen with most natural sounds, because they contain many frequency components, are not long continuous tones, vary up and down in level, and have distinct onsets
What are Pinna Cues (monaural)?
- ITD and ILD are the same at all points on a conical surface around the interaural axis, so they only tell you left vs right
- A sound directly in front or directly behind gives zero ITD and ILD, so you cannot tell which it is: this is the cone of confusion
What stops the cone of confusion?
- Sound reflects from the corrugations of the pinna, particularly the concha, and interferes with the sound entering the meatus directly
- This interference changes the sound spectrum, producing a direction-dependent colouration
- It only works at high frequencies: sound reflects off different parts of the ear, enters the canal, and mixes with the direct sound, reinforcing some frequencies and making them more intense
- This produces a characteristic spectrum for each direction the sound comes from, important for encoding elevation and resolving front/back
What is the frequency response of the human pinna as a function of sound elevation?
- You learn the cues from your own pinna
- You can learn to use someone else's pinna over time
- Pinnae are broadly similar in shape, but the spectral cues differ because of individual differences
What is the evidence that the shape of pinna helps?
- A rubber compound was used to fill in different parts of the ear
- Listeners made many errors when the ear was completely filled in, and performance improved as more of the ear was unveiled, showing that the shape of the ear and its different parts are involved in localisation
How does pinna help identify if sound is coming from front or behind?
- Most localisation errors are front/back confusions, e.g. between 135° and 45°, because of the cone of confusion
- Turning the head is a good way to resolve where a sound is coming from: it turns the problem into a binaural one, using which ear leads and which lags for ITD and ILD, PROVIDED THE SOUND LASTS LONG ENOUGH
What are the effects of room reverberation? (Localising sounds in a room not pure tones in a chamber where reflection cannot happen)
- Sound in a room reflects off the walls, and the pattern of reflections is complicated: many waveforms arrive from different directions
- Some sound travels straight to the ear; other sound bounces off surfaces and, taken alone, would seem to come from another source in another direction
- Although the reflections arrive quickly, the auditory system listens to the first-arriving sound and ignores what comes just after: the precedence effect, or the law of the first wavefront
STUDY: - Investigated the precedence effect by varying the distance of two loudspeakers producing the same sound; when one loudspeaker was closer, listeners heard the sound exclusively from that loudspeaker (even when the sound was swapped between the two), ignoring the location of echoes arriving just milliseconds after the direct sound
- One ambiguity: moving a loudspeaker closer also makes it louder
- 2nd experiment: varied the intensity of the sounds from the loudspeakers; even when the sound from the distant speaker was louder, listeners still heard it as coming from the nearer speaker
- 3rd experiment: filled the room with furniture and absorbing material to reduce reflections, and varied the timing of clicks from the two speakers; the speaker that clicked first was where the sound was perceived to come from
- Even when the sound is panned across the two speakers, you still hear it as coming from the speaker that sounded first
How do we know about externalisation and distance?
- We are not very good at judging it; three cues help:
1) Realistic pinna cues: when people listen over VR/binaural presentation with realistic pinna filtering, they hear sounds as outside the head
2) Reverberation (different at each ear): sounds inside the head carry no reverberation, so reverberant sounds seem more likely to be externalised
3) Source stability during a head turn: when you turn your head and the sound stays in the same place in the world, it seems externalised