CH4 Flashcards
The dichotic listening paradigm:
two separate sources of auditory information are presented
to the two ears of the listener. Almost no one notices when the unattended message
changes from English to German, or when English speech is played backward. However,
when a 400-Hz tone is presented on the unattended channel, listeners almost always notice it. This shows
that more information may be processed than we are normally aware of. When
information receives just a bit more activation - either because it has recently been heard or
because it is highly pertinent - it attracts attention and we become aware of it.
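As an illustration (an assumed sketch, not the procedure from any cited study), a dichotic stimulus can be thought of as a stereo signal whose two channels carry different messages; below, short tone sequences stand in for speech:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def tone_sequence(freqs, dur=0.3):
    """Concatenate short pure tones; a stand-in for a spoken message."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

left = tone_sequence([220, 330, 440])   # message to be shadowed (left ear)
right = tone_sequence([550, 660, 770])  # unattended message (right ear)

# One column per ear: playing this stereo array presents a different
# "message" to each ear, the core manipulation of dichotic listening.
stereo = np.column_stack([left, right]).astype(np.float32)
```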
Auditory stream segregation:
separating sound elements into different auditory objects.
Factors affecting the ease of selection:
adding a third stream with equal intensity to both ears impairs selection; to shadow
one message, you need to be able to separate the two streams of information on the
basis of temporal cues. So it is not just the physical localisation of the voices that
makes selection easy in dichotic listening: the differences in onset times of the words
in the messages also play an important role. Pitch is probably also used as a filter,
which explains why you notice a change from a man's voice to a woman's.
Cherry:
dichotic listening; participants shadow one of the messages and can report almost nothing from the unattended ear. Detailed aspects go unnoticed. When the same message is played to both ears, listeners notice it as long as the lag between the two does not exceed a few seconds. More information may be processed than we think.
Processing of unselected information:
people are much more likely to hear their own
name on the unattended channel than other names or information. This shows that
information on the unattended channel is analysed to the level of meaning, even
though this analysis need not be conscious. However, this does not imply that all
information on the unattended channel is fully processed.
Treisman: information on the unattended channel is analysed to the level of meaning,
such as words that fit the sentence context of the shadowed message.
Corteen and Wood: words are processed to the semantic level; after conditioning,
words in the unattended message can elicit a physiological response.
Dawson and Schell: but such responses are often accompanied by evidence of attention
lapses. Even when semantic information breaks through, attention is needed to process it further.
Negative priming:
the finding that information that had to be ignored at one point in time is responded
to more slowly when it must be attended at a later point in time. When information
from the unattended channel was subsequently presented in the to-be-shadowed ear,
shadowing performance was worse than for words that had never been presented before.
The Split-Span technique:
memory span with the list of items to be remembered split into two shorter lists for presentation in the two ears. Listeners tend to report all the
items presented to one ear first, followed by the items presented to the other ear.
This suggests that selection of information occurs quite early in the course of
information processing, at the level of the physical properties of the stimuli.
Broadbent: listeners report all items presented to one ear first, suggesting early
selection of information. But when instructed to report in a different order, they can do so easily.
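A minimal sketch (an assumption for illustration, not from the text) of the two report orders in a split-span task, with digit pairs presented one digit per ear:

```python
# Digit pairs arrive simultaneously, one digit per ear.
pairs = [(7, 3), (1, 9), (5, 2)]  # (left-ear digit, right-ear digit)

# Typical report: all of one ear first, then the other (Broadbent's finding).
ear_by_ear = [left for left, _ in pairs] + [right for _, right in pairs]

# Alternative order: pair by pair, in order of arrival.
pair_by_pair = [digit for pair in pairs for digit in pair]

print("ear by ear:  ", ear_by_ear)    # [7, 1, 5, 3, 9, 2]
print("pair by pair:", pair_by_pair)  # [7, 3, 1, 9, 5, 2]
```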
Auditory Monitoring:
participants listen to streams of auditory stimuli and indicate when they have heard a target. Attention can be directed to multiple possible stimuli, but a bottleneck occurs at the point where multiple identifications are required. Just as in visual search, dividing attention across different channels (the two ears) is more difficult than attending to just one source of information.
We can't change the locus of attention as we can in the visual system, but …
there is some built-in attentional control in the ear. The cochlea, where the sensory receptors are
located, receives input from the brain that may control the direction of attention. This might
help tune the sensory receptors to favour one sound over another, to protect us from
distraction. However, changes in auditory stimulation typically break through any such
cognitive control of the receptors.
Probe-signal paradigm:
used to study auditory detection. The target signal is first played several times,
loudly enough to be clearly audible, to familiarise the participant with it. Then a
two-interval forced-choice task is used: the participant listens to two short intervals
of noise, one of which also contains the target, and reports whether the first or the
second interval contained it. When the target tone was 1100 Hz, only tones between
1000 and 1200 Hz were detected, so there must have been some “attentional filter”.
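As a hedged illustration of that filter idea (a simulation sketch I am assuming, not the study's method), the snippet below models a listener who detects probe tones only within a 1000-1200 Hz band around the expected 1100-Hz target and guesses otherwise, so out-of-band detection falls to the 50% chance level of the two-interval task:

```python
import random

EXPECTED_HZ, HALF_BAND_HZ = 1100, 100  # assumed filter centre and half-width

def trial(probe_hz):
    """One two-interval forced-choice trial; True if the interval is named correctly."""
    target_interval = random.choice([1, 2])               # interval holding the probe
    heard = abs(probe_hz - EXPECTED_HZ) <= HALF_BAND_HZ   # inside the attentional filter?
    answer = target_interval if heard else random.choice([1, 2])  # otherwise guess
    return answer == target_interval

for probe in (900, 1000, 1100, 1200, 1300):
    correct = sum(trial(probe) for _ in range(2000))
    print(f"{probe} Hz probe: {correct / 2000:.0%} correct")
```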
The “heard but not heeded” hypothesis:
some researchers argued that tones outside the immediate range of the target are
heard, but misidentified as belonging to the noise. However, this has been disproven,
because even highly trained listeners cannot make this discrimination. Also, people
can indeed set an attentional filter for multiple target frequencies.
There is evidence that advance knowledge of location might …
speed detection - at least when the targets are relatively far from the ears of the
observers. However, frequency might be more important than spatial position in
orienting auditory attention, and spatial attention does not seem to play a central
role in auditory processing. (Scharf)
Several studies have shown that it is possible to attend to information in two
separate modalities without apparent cost relative to attending to just one, and
there can even be an advantage for cross-modal over unimodal presentation. Also,
attentional set can influence the early processing of auditory stimuli.
- In one such experiment, a cue predicted the target with 64% validity. For
auditory targets, the cuing effect is largely a cost when the cue is invalid
rather than a benefit when the cue is valid, whereas for visual stimuli a
benefit for valid cues is present. An explanation is that the use of a visual cue
may have had a direct priming effect for visual targets but not for auditory
ones.
Visual dominance:
when visual stimulation competes with stimulation in other modalities, the visual
information captures perception.
Effects of visual information on auditory localisation:
as seen with ventriloquism, visual information influences the localisation of sound.
The effect is strongest when the actual source of the sound is difficult to localise,
and sounds are harder to localise vertically than horizontally. Although a visual cue
normally does not attract auditory attention to its location, it does do so when the
sound is hard to localise.