Final Flashcards
Sound localization
The ability to identify the location of a sound source in the sound field.
Precedence effect
When a sound is followed by another sound separated by a sufficiently short time delay (below the listener’s echo threshold), listeners perceive a single auditory event.
Auditory stream analysis
The ability to perceptually separate the individual sound sources in the environment and to locate them in space.
Perceptual grouping
Putting parts together into a whole.
Auditory Space
Surrounds an observer and exists wherever there is sound. Tones with the same frequency activate the cochlea (hair cells) in the same way regardless of where they are coming from.
Localization cues
Researchers study how sounds are localized in space by using:
Azimuth coordinate: position left to right.
Elevation coordinate: position up and down.
Distance coordinate: position relative to the observer (the most difficult to judge).
On average, people can localize sounds:
Directly in front of them most accurately, and to the sides and behind their heads least accurately. Unlike vision, where location is encoded directly by the position of receptor cells on the retina, location cues are not contained in the auditory receptor cells; the location of a sound must therefore be calculated from the signals reaching the two ears.
Binaural cues
Location cues based on a comparison of the signals received by the left and right ears. There are two binaural cues: interaural time difference and interaural level difference.
Interaural time difference (ITD)
Difference between the times that sounds reach the two ears. When distance to each ear is the same, there are no differences in time. When the source is to the side of the observer, the times will differ.
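The flashcard gives no formula, but a standard textbook approximation for the ITD is Woodworth's spherical-head model, ITD ≈ (r/c)(θ + sin θ), where θ is the azimuth in radians, r is an assumed head radius (about 8.75 cm), and c is the speed of sound. A minimal sketch under those assumptions:

```python
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Approximate interaural time difference (in seconds) for a
    spherical head (Woodworth's formula). azimuth_deg: 0 = straight
    ahead, 90 = directly to one side. head_radius and c are assumed
    typical values, not measurements from the flashcard."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))

# Straight ahead: distance to each ear is the same, so ITD is zero.
print(itd_woodworth(0))                       # 0.0
# Directly to the side: the ITD is largest, roughly 0.66 ms.
print(round(itd_woodworth(90) * 1e3, 3))      # 0.656
```

This matches the flashcard's two cases: equal distances give no time difference, and a source off to the side gives the maximum difference.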
Interaural level difference (ILD)
Difference in sound pressure level between the two ears. For high-frequency sounds, the head casts an acoustic shadow that reduces the intensity reaching the far ear. This effect does not occur for low-frequency sounds, whose long wavelengths bend around the head.
Cone of confusion
The “cone of confusion” describes a specific region where the auditory system has difficulty accurately determining the source of a sound. This occurs because certain cues used for sound localization, such as interaural time differences (ITDs) and interaural level differences (ILDs), become ambiguous within this region.
Monaural Cue for Sound Location
ILD and ITD are not effective for judging elevation, since at many elevations they may be zero. The primary monaural cue for localization is called a spectral cue, because the information for localization is contained in differences in the distribution (or spectrum) of frequencies that reach the ear from different locations.
Experiment investigating spectral cues
Listeners' performance at locating sounds differing in elevation was measured. They were then fitted with molds that changed the shape of their pinnae. Right after the molds were inserted, performance was poor for elevation but unaffected for azimuth. After 19 days of wearing the molds, elevation performance was close to the original level. Once the molds were removed, performance stayed high rather than dropping. This suggests that there might be two different sets of neurons, one for each set of spectral cues.
Jeffress Neural Coincidence Model
There is a series of neurons, each of which responds best to a specific ITD. These neurons are wired so that each receives signals from both ears: signals from the left ear arrive along one axon, and signals from the right ear arrive along the other. If the sound source is directly in front of the listener, the sound reaches the left and right ears simultaneously, so the signals from the two ears start out together. As each signal travels along its axon, it stimulates each neuron in turn. At first, neurons receive a signal from only the left ear or only the right ear. When the two signals reach neuron 5 together, that neuron fires. This neuron and the others in the circuit are called coincidence detectors, because they fire only when both signals arrive at the neuron simultaneously.
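The delay-line mechanism described above can be sketched as a toy model. This is only an illustration of the principle, not neural data: the 9-neuron layout and the step-sized delays are assumptions chosen to mirror the flashcard's example, where a sound straight ahead (ITD = 0) makes the middle neuron fire.

```python
def firing_neuron(itd_steps, n_neurons=9):
    """Toy Jeffress delay line. The left-ear signal travels along the
    axon past neurons 1..n; the right-ear signal travels the opposite
    direction. Neuron i receives the left signal after i-1 conduction
    steps and the right signal after n-i steps. As a coincidence
    detector, it fires only if both signals arrive at the same time.
    itd_steps: how many steps later the right-ear signal starts
    (negative means the right ear was reached first)."""
    for i in range(1, n_neurons + 1):
        left_arrival = i - 1                           # left signal starts at t=0
        right_arrival = itd_steps + (n_neurons - i)    # right signal starts itd later
        if left_arrival == right_arrival:
            return i
    return None  # ITD outside the range this delay line can encode

# Sound directly ahead (ITD = 0): the signals meet at the middle
# neuron, neuron 5 of 9, as in the flashcard's description.
print(firing_neuron(0))    # 5
# Sound nearer the right ear reaches it 2 steps earlier (ITD = -2):
# the coincidence point shifts along the line, so a different neuron
# fires. Which neuron fires thus encodes the sound's location.
print(firing_neuron(-2))   # 4
```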
Broadly tuned ITD Neurons
These neurons are specialized for processing interaural time differences (ITDs), the differences in a sound's arrival time at each ear. ITDs are a cue used to localize sounds in the horizontal plane, particularly for low-frequency sounds. Coding for localization is based on broadly tuned neurons: those in the right hemisphere respond when sound comes from the left, and those in the left hemisphere respond when sound comes from the right. The location of a sound is indicated by the relative responses of these two types of broadly tuned neurons.
Cortical Mechanisms of Location
Area A1 is involved in locating sound: Neff's research on cats. The posterior belt area is involved in locating sound: Recanzone's research on monkey neurons. The anterior belt is involved in perceiving sound.
What and Where Auditory Pathways
What, or ventral stream, starts in the anterior portion of the core and belt and extends to the prefrontal cortex - used to identify sounds. Where, or dorsal stream, starts in the posterior core and belt and extends to the parietal and prefrontal cortices - used to locate sounds. Evidence from neural recordings, brain damage, and brain scanning support these findings.
Hearing Inside Rooms (Direct/indirect sound)
Direct sound: sound that reaches the listener’s ears straight from the source.
Indirect sound: sound that is reflected off of environmental surfaces and then to the listener.
When a listener is outside, most sound is direct; however, inside a building, there is direct and indirect sound.
Perceiving Two Sounds that Reach the Ears at Different Times - Experiment by Litovsky et al.
Listeners sat between two speakers: a lead speaker and a lag speaker. When sound comes from the lead speaker followed by the lag speaker after a long delay, listeners hear two separate sounds. When the delay is decreased to about 5-20 msec, listeners hear the sound as coming only from the lead speaker: the precedence effect.
Architectural Acoustics
The study of how sounds are reflected in rooms. A major factor affecting perception in concert halls is reverberation time.
Reverberation time
The time it takes for a sound to decrease to 1/1000th of its original pressure (a 60 dB drop). If it is too long, sounds are "muddled"; if it is too short, sounds are "dead". Ideal times are around two seconds.
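A 1/1000 drop in pressure is a 60 dB drop (20·log10(1000) = 60), so this is the quantity acousticians call RT60. The flashcard gives no formula, but the classic estimate in architectural acoustics is Sabine's equation, RT60 ≈ 0.161·V/A; the room dimensions and absorption coefficients below are made-up illustrative numbers.

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine's reverberation-time estimate: RT60 = 0.161 * V / A,
    where V is room volume (m^3) and A is the total absorption,
    i.e. the sum over surfaces of area (m^2) times absorption
    coefficient. Reasonable for fairly reverberant, diffuse rooms."""
    absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / absorption

# Hypothetical hall: 12000 m^3, surfaces as (area, coefficient) pairs
# for walls, seating, and ceiling. All values are invented.
hall = [(1500, 0.3), (800, 0.6), (1200, 0.4)]
print(round(rt60_sabine(12000, hall), 2))   # 1.37
```

For these invented numbers the estimate lands near the flashcard's "ideal" range of about two seconds; more absorption (larger A) shortens the reverberation toward a "dead" room, less absorption lengthens it toward a "muddled" one.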
Intimacy time
Time between when sound leaves its source and when the first reflection arrives. Best time is around 20ms.
Bass ratio
Ratio of low to middle frequencies reflected from surfaces. High bass ratios are best.
Spaciousness factor
Fraction of all the sound received by listener that is indirect. High spaciousness factors are best.
Auditory scene
The array of all sound sources in the environment.
Auditory scene analysis
Process by which sound sources in the auditory scene are separated into individual perceptions. This does not happen at the cochlea, since simultaneous sounds are combined in the pattern of vibration of the basilar membrane.
Separating Sound Sources: Simultaneous grouping (Onset time, location, timbre and pitch)
Onset time: sounds that start at different times are likely to come from different sources.
Location: a single sound source tends to come from one location and to move continuously.
Similarity of timbre and pitch: similar sounds are grouped together.
Separating Sound Sources: sequential grouping - Experiment by Bregman and Campbell
A compound melodic line in music is an example of auditory stream segregation, the ability to separate different sound sources. The stimuli were alternating high and low tones. When the stimuli are played slowly, the perception is of high and low tones alternating. When they are played quickly, the listener hears two separate streams, one high and one low.
Separating Sound Sources: Experiment by Deutsch: the scale illusion or melodic channeling
Stimuli were two sequences alternating between the right and left ears. Listeners perceive two smooth sequences by grouping the sounds by similarity in pitch. This demonstrates the perceptual heuristic that sounds with similar frequencies come from the same source, which is usually true in the environment.
Separating Sound Sources: Experiment by Warren et al. - A demonstration of auditory continuity, using tones.
Tones were presented interrupted by gaps of silence or by noise. In the silence condition, listeners perceived that the sound stopped during the gaps. In the noise condition, the perception was that the sound continued behind the noise.
Separating Sound Sources: Effect of past experience - Experiment by Dowling
Melody “Three Blind Mice” is played with notes alternating between octaves. Listeners find it difficult to identify the song. However, after they hear the normal melody, they can hear it in the modified version using melody schema.
Connections Between Hearing and Vision - Visual capture
Visual capture, or the ventriloquist effect: an observer perceives a sound as coming from its apparent visual location rather than from its actual source. Two-flash illusion: when a single flash is accompanied by two beeps, observers perceive two flashes, showing that sound can also alter visual perception.
Hearing and Vision: Physiology - Thaler et al. (2011)
The interaction between vision and hearing is multisensory in nature. Thaler et al. had expert blind echolocators produce clicking sounds and found that the returning echoes activated brain areas normally associated with vision, not just auditory areas.
Ian Waterman Story
Worked as an apprentice in a butcher shop. He had previously gotten a small cut on his finger, and most likely the cut had developed into an infection. What started out as a common cold proved to be much worse: he gradually lost control over his limbs and ended up lying in bed without conscious control over any part of his body from the neck down. His muscles still worked, and his brain was receiving signals from his body conveying sensations such as pain and differences in temperature. But the brain seemed to have lost the notion of where the different parts it was supposed to move were located.
Nerve fibre (motor fibre and sensory fibres)
Can be either a sensory fibre or a motor fibre. Motor fibres send signals to our muscle fibres telling them to contract. Sensory fibres start either in the skin or in the muscle and come in different sizes. The largest ones convey information concerning touch, muscle sensitivity, or sense of movement; the smallest convey information concerning muscle fatigue, temperature, and certain forms of pain.
The Somatosensory System 3 parts
Cutaneous senses: perception of touch and pain from stimulation of the skin.
Proprioception: ability to sense position of the body and limbs.
Kinesthesis: ability to sense movement of body and limbs.
Skin
Protects the organism by keeping damaging agents from penetrating the body. Epidermis is the outer layer of the skin, which is made up of dead skin cells. Dermis is below the epidermis and contains mechanoreceptors that respond to stimuli such as pressure, stretching, and vibration.
Two types of mechanoreceptors located close to surface of the skin (Merkel and Meissner)
Merkel receptor fires continuously while stimulus is present - responsible for sensing fine details.
Meissner corpuscle fires only when a stimulus is first applied and when it is removed - responsible for controlling hand-grip.
Two types of mechanoreceptors located deeper in the skin (Ruffini and Pacinian)
Ruffini cylinder fires continuously to stimulation - associated with perceiving stretching of the skin. Pacinian corpuscle fires only when a stimulus is first applied and when it is removed - associated with sensing rapid vibrations and fine texture.
Pathways from Skin to Cortex
Nerve fibers travel in bundles (peripheral nerves) to the spinal cord. Two major pathways in the spinal cord: Medial lemniscal pathway and spinothalamic pathway. These cross over to the opposite side of the body and synapse in the thalamus.
Medial lemniscal pathway
Consists of large fibers that convey tactile, proprioceptive, and vibratory sensory information from the body to the brain. It plays a critical role in the transmission of fine touch and proprioceptive sensations, allowing individuals to perceive the position, movement, and texture of objects, as well as their own body position and movements.