speech perception - exam 2 Flashcards
bottom up processing
data-driven
using sensory info of incoming signal
small details
the actual sounds
top down processing
hypothesis driven
using the knowledge of our own language to understand speech
big picture
brain will expect a word more than a non word when given an ambiguous signal
Ganong Effect
play /d/ & /t/ on a continuum (make one end a word & one a non word - deach & teach)
we tend to favor the word over the non word at the category boundary
results in shifting the category boundary so %word takes up more area on the graph than %nonword
top down processing
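the boundary shift can be sketched with a toy logistic identification curve - a minimal illustration, all numbers hypothetical (step count, boundary positions & slope are made up, not from the original study):

```python
import math

def p_identify_t(step, boundary, slope=1.5):
    # logistic identification curve: probability of labeling a continuum
    # step as /t/; the boundary is the step where p = 0.5
    return 1.0 / (1.0 + math.exp(-slope * (step - boundary)))

steps = range(1, 8)  # hypothetical 7-step /d/-/t/ continuum

# neutral boundary at the midpoint vs a Ganong-shifted boundary:
# "teach" is a word, so the boundary shifts toward the /d/ end and
# more of the continuum gets labeled /t/ (the word-forming end)
neutral = [p_identify_t(s, boundary=4.0) for s in steps]
ganong = [p_identify_t(s, boundary=3.4) for s in steps]
```

with the shifted boundary, %word ("teach") responses are higher at every step, which is the larger %word area on the graph.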
sine wave speech
created by replacing formant freqs w/ sine waves
initially unintelligible
becomes understandable once listeners know what the person is saying
top down processing
pop out effect
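the synthesis idea (one sine wave per formant track, summed) can be sketched like this - a minimal toy, assuming the formant contours are already extracted; the steady F1/F2/F3 values below are hypothetical vowel-like numbers:

```python
import math

def synthesize_sine_wave_speech(formant_tracks, sr=16000):
    # replace each formant track with a pure sine wave that follows
    # that formant's frequency contour, then sum the sines
    n = len(formant_tracks[0])
    phases = [0.0] * len(formant_tracks)
    out = []
    for i in range(n):
        sample = 0.0
        for k, track in enumerate(formant_tracks):
            phases[k] += 2 * math.pi * track[i] / sr  # accumulate phase
            sample += math.sin(phases[k])
        out.append(sample / len(formant_tracks))  # keep within [-1, 1]
    return out

# toy example: 10 ms of a steady vowel-like F1/F2/F3 (hypothetical values)
n = 160
signal = synthesize_sine_wave_speech(
    [[700.0] * n, [1200.0] * n, [2600.0] * n]
)
```

the result keeps the formant movement but none of the voice quality, which is why it sounds like whistles until you know what's being said.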
phoneme restoration effect
listeners “fill in” missing phonemes in a word, relying on context & expectations
top down effect allows continuity in perception even w/ absent sounds - noisy environments
priming
exposure to one stimulus influences a response to a subsequent stimulus
just seeing options yanny & laurel primes you to hear one or the other (& not some secret 3rd thing)
laurel/yanny
your brain chooses which freqs to pay attention to
the laurel/yanny signal is ambiguous, so paying attention to lower freqs = laurel & higher freqs = yanny
attention changes perception of sound –> top down
low quality recording & noise at high freqs make it plausible to mix up F3 & F2
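emphasizing one frequency region over the other can be mimicked by filtering - a minimal sketch using a one-pole low-pass filter on a synthetic tone (cutoff & tone frequencies are hypothetical, just to show the attenuation):

```python
import math

def low_pass(signal, sr, cutoff):
    # simple one-pole low-pass filter: keeps low freqs (the "laurel" cues)
    # and attenuates high freqs (the "yanny" cues)
    rc = 1.0 / (2 * math.pi * cutoff)
    dt = 1.0 / sr
    alpha = dt / (rc + dt)
    out, prev = [], 0.0
    for x in signal:
        prev += alpha * (x - prev)  # smooth toward the input
        out.append(prev)
    return out

sr = 16000
tone = [math.sin(2 * math.pi * 4000 * i / sr) for i in range(800)]  # high-freq tone
filtered = low_pass(tone, sr, cutoff=500)  # high-freq content strongly attenuated
```

boosting lows this way pushes listeners toward "laurel"; a high-pass version (or just turning up treble) pushes toward "yanny".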
plausible masker
playing sound over a sentence w/ gaps
easier to understand the sentence w/ the sound than w/out it
w/ masker – people couldn’t tell where the masker was & thought all phonemes were present
what are whistled languages
whistled versions of spoken language - must speak the language to understand
can overcome ambient noise & distance much better than speech
higher freqs make the whistle harder to mask
useful in mountainous regions & w/ shepherds
pitch based whistling
used in tonal languages
whistles emulate pitch contours
speech is stripped of articulation
leaves only suprasegmental features like duration & tone
formant based whistling
used in non-tonal languages
whistles emulate articulatory features
timbral variations are transformed into pitch variations
Lombard effect
auditory feedback causes compensatory changes in speech output
involuntary (& usually unknown to speaker) increase in volume & clarity when speaking in noisy environments
static played louder in headphones
she spoke louder (she didn't know)
she was receiving less feedback from her own voice, so she increased volume until she was receiving feedback again
disproves the idea that you raise your volume only for your communication partner's benefit
how do we compensate for loud environments
volume
increasing pitch
increasing vowel duration
prolonging duration of content words (vs function words)
larger facial movements
sensorimotor adaptation
oppose feedback changes
learned over time
feedback loop
when speakers hear altered feedback, they adjust their speech in response
demonstrates feedback loop between production & perception
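the oppose-the-feedback loop can be sketched as a toy trial-by-trial model - all numbers hypothetical (target pitch, shift size & learning rate are made up for illustration):

```python
# toy sensorimotor adaptation: auditory feedback is shifted upward,
# and the speaker gradually adjusts production to oppose the shift
target = 200.0   # intended pitch (Hz), hypothetical
shift = 30.0     # upward shift applied to the feedback, hypothetical
rate = 0.3       # learning rate, hypothetical
motor = 0.0      # learned compensatory adjustment

for trial in range(50):
    produced = target + motor
    heard = produced + shift      # altered feedback reaching the ear
    error = heard - target        # perceived mismatch
    motor -= rate * error         # adapt opposite to the feedback change

# after many trials, motor ~= -shift: production moves down so that
# the (shifted) feedback lands back on the intended target
```

this captures the two flashcard points above: the adjustment opposes the feedback change, and it is learned over repeated trials rather than instantly.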