Lecture 22 - Finishing Sounds in Space Flashcards
how we organize sounds in space: how we attribute sounds to particular locations and objects
binaural and monaural cues
narrowly tuned ITD neurons
kind of a specificity code for determining localization
these neurons look for a particular difference in the time at which sound reached one ear vs. the other ear
if one or only a very few neurons respond to that difference = a specificity code
broadly tuned ITD neurons
these neurons look for a range of time differences, with many neurons responding to each = a distributed code
Jeffress Model
for narrowly tuned ITD neurons
main idea: a small set of neurons looks for the difference in when a signal reaches one ear vs. the other ear
the system has a number of coincidence detectors that fire only when they receive signals from both ears
if you have a set of coincidence detectors (1-9) and a signal reaches both ears at the same time, then the coincidence detector that codes for a zero time difference will respond
different neurons respond to different time differences (a minimal simulation is sketched below)
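A minimal sketch of the Jeffress delay-line idea in Python (the tone frequency, sample rate, and detector spacing are made-up illustration values, not from the lecture): each coincidence detector carries its own internal delay, and the detector whose delay exactly cancels the real ITD sees the two ear signals line up and responds most.

import numpy as np

FS = 100_000                                  # sample rate in Hz (assumed)
DETECTOR_ITDS_US = np.arange(-400, 401, 100)  # detectors tuned to -400..+400 microseconds

def best_detector(itd_us: float) -> int:
    """Return the tuned ITD of the coincidence detector that responds most."""
    t = np.arange(0, 0.01, 1 / FS)                         # 10 ms of signal
    right = np.sin(2 * np.pi * 500 * (t - itd_us * 1e-6))  # 500 Hz tone, delayed at the right ear
    responses = []
    for d_us in DETECTOR_ITDS_US:
        # each detector internally delays the left-ear copy by its tuned ITD
        delayed_left = np.sin(2 * np.pi * 500 * (t - d_us * 1e-6))
        # "coincidence" = how well the two inputs line up (correlation at zero lag)
        responses.append(np.dot(delayed_left, right))
    return int(DETECTOR_ITDS_US[np.argmax(responses)])

print(best_detector(200))   # -> 200: the detector tuned to a 200 us ITD wins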
Coincidence detectors fire only when…
signals arrive from both ears simultaneously
If a signal reaches both ears at the same time, then the ITD is…
zero
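For a sense of scale, a rough worked example using the standard Woodworth path-length approximation (the head radius is an assumed textbook value, not from the lecture): the largest ITD a human head produces is well under a millisecond.

import math

a = 0.0875           # assumed head radius in meters
c = 343.0            # speed of sound in air, m/s
theta = math.pi / 2  # source directly off to one side (90 degrees azimuth)

# Woodworth approximation: extra path around the head = a * (theta + sin(theta))
itd = (a / c) * (theta + math.sin(theta))
print(f"max ITD ~ {itd * 1e6:.0f} microseconds")   # roughly 650 us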
What is the first place where information from both ears is combined?
superior olivary nucleus
Physiological support for narrowly tuned ITD neurons
– Neurons in the inferior colliculus and superior olivary nuclei respond to a narrow range of interaural time differences (ITDs).
– Single-cell recordings in the barn owl show neurons that respond to specific ITDs between the left and right ears.
– This is a specificity code.
physiological evidence for Broadly-tuned ITD neurons
– Research on gerbils indicates that neurons in the left hemisphere respond best to sound from the right, and vice versa.
– The location of a sound is indicated by the ratio of responding across two types of ITD neurons: left-sensitive and right-sensitive (see the sketch after this card).
– This is a distributed coding system.
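A minimal sketch of this ratio readout in Python (the sigmoid tuning curve and its 100-microsecond slope are invented for illustration; only the two-channel ratio idea comes from the lecture): location is decoded from the relative activity of the two broad populations rather than from any single neuron.

import numpy as np

def channel(itd_us: float, sign: int) -> float:
    """Broad sigmoidal ITD tuning; sign=+1 is the right-sensitive
    population, sign=-1 the left-sensitive one."""
    return 1.0 / (1.0 + np.exp(-sign * itd_us / 100.0))

def decode(itd_us: float) -> float:
    """Read location out of the ratio of the two populations' responses,
    not out of any single narrowly tuned neuron (a distributed code)."""
    r, l = channel(itd_us, +1), channel(itd_us, -1)
    return (r - l) / (r + l)   # -1 = far left, 0 = straight ahead, +1 = far right

for itd in (-400, 0, 400):     # ITD in microseconds
    print(itd, round(decode(itd), 2))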
opposite hemisphere advantage
when a signal comes into the right ear, it gets to the left hemisphere first (and vice versa)
Physiological support: mammals vs. birds
Mammals (distributed coding) and birds (sparse coding) seem to use different encoding schemes.
Human echolocation (flash sonar)
can we see with sound?
individuals who have been without sight for a very long time can use sound to help navigate
they don't just listen to passive sounds: with flash sonar they produce a sound and listen to the frequency differences as the echo comes back, using auditory cues for a phenomenal sense of location
– Some blind individuals can train themselves to detect objects in the environment by producing clicking sounds and listening to the echo = they RE-MAP THEIR VISUAL AREAS!! This is cross-sensory mapping: an area of cortex that is no longer getting its usual input through vision uses auditory localization to build a spatial map instead of a retinotopic one.
– They don’t see, but they can sense the location (and sometimes an identity) of an object.
– fMRI evidence suggests that echolocation experts who have been blind since very young use striate cortex (V1) to help represent the space.
– An extreme case of experience-dependent plasticity!!!
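A tiny sketch of the physical cue flash sonar exploits (the numbers are assumptions for illustration, not from the lecture): the echo's round-trip delay encodes how far away the reflecting object is.

c = 343.0   # speed of sound in air, m/s

def echo_distance(delay_s: float) -> float:
    """Distance implied by a click-echo delay: the sound travels out
    and back, so divide the round trip by two."""
    return c * delay_s / 2.0

print(round(echo_distance(0.006), 2))   # a 6 ms echo -> object about 1 m away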