speech perception - final exam Flashcards
bottom up processing
data driven
using sensory info of incoming signal
small details
the actual sounds
top down processing
hypothesis driven
using the knowledge of our own language to understand speech
big picture
brain will expect a word more than a non word when given an ambiguous signal
Ganong effect
play /d/ & /t/ on a continuum (make one end a word & one a non word – deach & teach)
we tend to favor the word over the non word at the category boundary
results in shifting the category boundary so %word takes up more area on the graph than %nonword
top down processing
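The boundary shift can be sketched with two hypothetical logistic identification curves; the boundary values and slope below are illustrative, not from the experiment:

```python
import numpy as np

def identification_curve(vot_ms, boundary_ms, slope=0.5):
    """P(hear /t/) as a logistic function of voice onset time (VOT, ms)."""
    return 1 / (1 + np.exp(-slope * (vot_ms - boundary_ms)))

vot = np.linspace(0, 60, 13)                           # /d/-/t/ continuum
p_neutral = identification_curve(vot, boundary_ms=30)  # no lexical bias
p_teach = identification_curve(vot, boundary_ms=25)    # "teach" is a word:
# boundary pulled toward the /d/ end, so %word covers more of the continuum
```

with the shifted boundary, a larger share of the continuum is labeled "teach" (the word), matching the %word > %nonword area on the graph.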
sine wave speech
created by replacing formant freqs w/ sine waves
initially unintelligible
becomes understandable once listener knows what the person is saying
top down processing
pop out effect
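A minimal sketch of how sine wave speech could be synthesized, assuming you already have formant frequency & amplitude tracks (the function name and tracks are hypothetical):

```python
import numpy as np

def sine_wave_speech(formant_tracks, amp_tracks, sr=16000):
    """Sum one time-varying sinusoid per formant track."""
    n = len(formant_tracks[0])
    out = np.zeros(n)
    for freqs, amps in zip(formant_tracks, amp_tracks):
        # integrate instantaneous frequency to get a continuous phase
        phase = 2 * np.pi * np.cumsum(freqs) / sr
        out += np.asarray(amps) * np.sin(phase)
    return out / np.max(np.abs(out))  # normalize to +/-1
```

the result keeps the formant trajectories but strips the rest of the speech signal, which is why it sounds like whistles until you know the sentence.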
phoneme restoration effect
listeners “fill in” missing phonemes in a word, relying on context & expectations
top down effect allows continuity in perception even w/ absent sounds - noisy environments
laurel/yanny
your brain chooses which freqs to pay attention to
laurel/yanny signal ambiguous so if you pay attention to lower freqs = laurel & high freqs = yanny
attention changes perception of sound –> top down
low quality recording & noise at high freq makes it plausible to mix up F3 & F2
what are whistled languages
whistled versions of spoken language - must speak the language to understand
can overcome ambient noise & distance much better than speech
higher freqs makes it harder to mask
useful in mountainous regions & w/ shepherds
pitch based whistling
used in tonal languages
whistles emulate pitch contours
speech is stripped of articulation
leaves only suprasegmental features like duration & tone
formant based whistling
used in non tonal languages
whistles emulate articulatory features
timbral variations are transformed into pitch variations
Lombard effect
auditory feedback causes compensatory changes in speech output
involuntary (& usually unknown to speaker) increase in volume & clarity when speaking in noisy environments
static played louder in headphones
she spoke louder without realizing it
receiving less feedback from her own voice so increased volume until she was receiving feedback again
disproves the idea that you raise your volume for your communication partner's benefit
feedback loop
when speakers hear altered feedback –> they adjust their speech in response
demonstrates feedback loop between production & perception
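The production-perception loop can be caricatured as a toy control loop; the masking model and all constants below are made up for illustration:

```python
def lombard_level(noise_db, target_db=60.0, masking=0.5, gain=0.3, steps=50):
    """Raise vocal output until the self-heard (masked) level hits target."""
    voice = target_db
    for _ in range(steps):
        heard = voice - masking * noise_db   # toy model: noise masks self-feedback
        voice += gain * (target_db - heard)  # compensate for the lost feedback
    return voice
```

in quiet the level stays put, but added noise drives the voice up until feedback is restored, with no step where the speaker chooses a level for the listener.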
acuity relationships
how well you discriminate sounds predicts how differently you produce sounds
adaptive dispersion
hypothesis suggesting that vowel sounds in a language spread out within the F1-F2 space to maximize distinctiveness
maximize perceptual distance between them
vowels tend to spread out around the edges in all languages
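Dispersion can be quantified as the minimum pairwise distance between vowels in F1-F2 space; the formant values below are rough illustrative numbers, not measurements:

```python
from itertools import combinations
import math

# rough illustrative (F1, F2) targets in Hz for a 5-vowel inventory
vowels = {"i": (280, 2250), "e": (450, 1950), "a": (750, 1300),
          "o": (500, 900), "u": (320, 850)}

def min_pairwise_distance(inventory):
    """Smallest Euclidean distance between any two vowels in F1-F2 space."""
    return min(math.dist(a, b) for a, b in combinations(inventory.values(), 2))
```

under the adaptive dispersion hypothesis, attested inventories score higher on this kind of measure than inventories crowded into one region of the space.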
cocktail party effect
ability to focus on one speaker in a noisy environment
auditory attention enhancing the neural representation of the target speech stream
article 2
play a sound where 2 speakers are saying different things at the same time
underlying signal stays the same but brain representation (multi-electrode surface recordings from the cortex) changes depending on who you are listening for
the representation of when you are attending to one speaker was very similar to if you heard that speaker alone
attention can be trained