Final Flashcards
Sound localization
The ability to identify the location of a sound source in the sound field.
Precedence effect
When a sound is followed by another sound separated by a sufficiently short time delay (below the listener’s echo threshold), listeners perceive a single auditory event.
Auditory stream analysis
The ability to perceptually separate the individual sound sources in the environment and locate each one in space.
Perceptual grouping
Putting parts together into a whole.
Auditory Space
Surrounds an observer and exists wherever there is sound. Tones with the same frequency activate the cochlea (hair cells) in the same way regardless of where they are coming from.
Localization cues
Researchers study how sounds are localized in space by using:
Azimuth coordinate: position left to right
Elevation coordinate: position up and down
Distance coordinate: position relative to the observer (most difficult to judge).
On average, people can localize sounds:
Directly in front of them most accurately, and to the sides and behind their heads least accurately. Unlike vision, where location is encoded directly by the receptor's position on the retina, location cues are not contained in the receptor cells; the location of a sound must be calculated from the signal.
Binaural cues
Location cues based on the comparison of the signals received by the left and right ears. There are two binaural cues: interaural time difference and interaural level difference.
Interaural time difference (ITD)
Difference between the times at which a sound reaches the two ears. When the distance to each ear is the same (source straight ahead), there is no time difference. When the source is off to one side of the observer, the arrival times differ.
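The ITD can be approximated with Woodworth's spherical-head formula, ITD = (r/c)(θ + sin θ), which is a standard textbook approximation and not part of the card itself; the head radius and speed of sound below are typical assumed values.

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate ITD in seconds for a rigid spherical head
    (Woodworth's formula): ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# Source straight ahead: distance to each ear is the same, so ITD = 0.
print(itd_woodworth(0))    # 0.0
# Source directly to one side (90 degrees): maximum ITD,
# roughly 650 microseconds for an average head.
print(itd_woodworth(90))
```

This illustrates the card's point: the ITD grows from zero (straight ahead) to its maximum as the source moves to the side.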
Interaural level difference (ILD)
Difference in sound pressure level reaching the two ears. A reduction in intensity occurs at the far ear for high-frequency sounds, because the head casts an acoustic shadow. This effect does not occur for low-frequency sounds, whose wavelengths are large relative to the head.
Cone of confusion
The “cone of confusion” describes a specific region where the auditory system has difficulty accurately determining the source of a sound. This occurs because certain cues used for sound localization, such as interaural time differences (ITDs) and interaural level differences (ILDs), become ambiguous within this region.
Monaural Cue for Sound Location
ILD and ITD are not effective for judgments of elevation, since at many elevations they may both be zero. The primary monaural cue for localization is called a spectral cue, because the information for localization is contained in differences in the distribution (spectrum) of frequencies that reach the ear from different locations, shaped by the pinna.
Experiment investigating spectral cues
Listeners' performance at locating sounds differing in elevation was measured. They were then fitted with a mold that changed the shape of their pinnae. Right after the molds were inserted, performance was poor for elevation but was unaffected for azimuth. After 19 days, performance for elevation was close to original levels. Once the molds were removed, performance stayed high rather than dropping. This suggests that there may be two different sets of neurons, one for each set of cues.
Jeffress Neural Coincidence Model
There is a series of neurons, each of which responds best to a specific ITD. These neurons are wired so that each receives signals from both ears. Signals from the left ear arrive along one axon (blue in the standard diagram), and signals from the right ear arrive along the other (red). If the sound source is directly in front of the listener, the sound reaches the left and right ears simultaneously, so the signals from the two ears start out together. As each signal travels along its axon, it stimulates each neuron in turn. At first, neurons receive signals from only the left ear or only the right ear. When both signals reach neuron 5 together, that neuron fires. This neuron and the others in the circuit are called coincidence detectors, because they fire only when both signals arrive simultaneously.
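The delay-line logic above can be sketched as a toy simulation. This is a minimal discrete model: the neuron count, step-based delays, and numbering are illustrative assumptions, not the biological parameters.

```python
def jeffress_best_neuron(itd_steps, n_neurons=9):
    """Toy Jeffress coincidence-detector array.

    Neuron i sits i delay steps from the left-ear input and
    (n_neurons - 1 - i) steps from the right-ear input.
    itd_steps: how many steps *later* the sound reaches the right ear
    (0 = source straight ahead). Returns the 0-based index of the
    neuron at which both signals arrive simultaneously (i.e., fires).
    """
    for i in range(n_neurons):
        left_arrival = i                                  # left signal travels i steps
        right_arrival = itd_steps + (n_neurons - 1 - i)   # right signal enters late, travels the rest
        if left_arrival == right_arrival:
            return i
    return None  # ITD outside the range this array can encode

print(jeffress_best_neuron(0))   # 4 -> the middle neuron ("neuron 5") fires for straight ahead
print(jeffress_best_neuron(2))   # 5 -> the firing neuron shifts as the ITD grows
```

The key property is that each ITD is encoded by *which* coincidence detector fires, i.e., a place code for interaural time difference.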
Broadly tuned ITD Neurons
These neurons are specialized for processing interaural time differences (ITDs), which are differences in the time of arrival of a sound at each ear. ITDs are a cue used to localize sounds in the horizontal plane, particularly for low-frequency sounds. Coding for localization based on broadly tuned neurons: in the right hemisphere that respond when sound is coming from the left, in the left hemisphere that respond when sound is coming from the right. The location of a sound is indicated by relative responses of these two types of broadly tuned neurons.
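The "relative responses of two broadly tuned channels" idea can be sketched numerically. This is a toy opponent-channel model: the sigmoid tuning curves and the decoding formula are illustrative assumptions, not the actual neural code.

```python
import math

def channel_responses(azimuth_deg, slope=0.05):
    """Toy broadly tuned responses of the two hemispheric channels.
    Right-hemisphere channel prefers sounds on the left (negative azimuth);
    left-hemisphere channel prefers sounds on the right (positive azimuth)."""
    right_hem = 1.0 / (1.0 + math.exp(slope * azimuth_deg))
    left_hem = 1.0 / (1.0 + math.exp(-slope * azimuth_deg))
    return right_hem, left_hem

def decode_azimuth(right_hem, left_hem, slope=0.05):
    """Read out location from the *relative* response of the two channels."""
    return math.log(left_hem / right_hem) / slope

r, l = channel_responses(-30)   # a sound 30 degrees to the listener's left
print(r > l)                    # True: the right-hemisphere channel responds more
print(decode_azimuth(r, l))     # ~ -30: location recovered from the two responses
```

The point the card makes is captured here: no single neuron signals the location; the comparison between the two broad channels does.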
Cortical Mechanisms of Location
Area A1 is involved in locating sound (Neff's research on cats). The posterior belt area is involved in locating sound (Recanzone's research on monkey neurons). The anterior belt is involved in perceiving sound.
What and Where Auditory Pathways
What, or ventral stream, starts in the anterior portion of the core and belt and extends to the prefrontal cortex - used to identify sounds. Where, or dorsal stream, starts in the posterior core and belt and extends to the parietal and prefrontal cortices - used to locate sounds. Evidence from neural recordings, brain damage, and brain scanning support these findings.
Hearing Inside Rooms (Direct/indirect sound)
Direct sound: sound that reaches the listener’s ears straight from the source.
Indirect sound: sound that is reflected off of environmental surfaces and then to the listener.
When a listener is outside, most sound is direct; however, inside a building, there is direct and indirect sound.
Perceiving Two Sounds that Reach the Ears at Different Times - Experiment by Litovsky et al.
Listeners sat between two speakers: a lead speaker and a lag speaker. When a sound comes from the lead speaker followed by the lag speaker after a long delay, listeners hear two separate sounds. When the delay is decreased to about 5-20 msec, listeners hear the sound as coming only from the lead speaker: the precedence effect.
Architectural Acoustics
The study of how sounds are reflected in rooms. Factor that affects perception in concert halls - Reverberation time
Reverberation time
The time it takes for a sound to decrease to 1/1000th of its original pressure (a 60 dB drop). If it is too long, sounds are "muddled"; if it is too short, sounds are "dead." Ideal times are around two seconds.
Intimacy time
Time between when sound leaves its source and when the first reflection arrives. The best time is around 20 ms.
Bass ratio
Ratio of low to middle frequencies reflected from surfaces. High bass ratios are best.
Spaciousness factor
Fraction of all the sound received by listener that is indirect. High spaciousness factors are best.