LING330: Quiz #3 Flashcards
Mapping from acoustics to perception: linear or non-linear?
Non-linear: equal changes in frequency/amplitude do not produce equal changes in the perceptual response
What’s the jnd?
Just noticeable difference
A young, healthy ear can detect a difference of just a few hertz at low frequencies, but above about 1000 Hz a much larger change in hertz is needed before a difference is heard
Scientists often transform raw frequency values onto logarithmic scales that better match perceptual sensitivity. What are some of these scales?
The semitone scale
The Bark scale
The Mel scale
The ERB scale
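A minimal sketch of how some of these hertz-to-perceptual-scale conversions work. The particular formula variants (and the 100 Hz semitone reference) are common choices from the phonetics literature, assumed here for illustration:

```python
import math

def hz_to_semitones(f, ref=100.0):
    """Semitone scale: 12 semitones per doubling of frequency,
    relative to a reference frequency (ref=100 Hz is an arbitrary choice)."""
    return 12 * math.log2(f / ref)

def hz_to_mel(f):
    """One common Mel-scale formula (the O'Shaughnessy/HTK variant)."""
    return 2595 * math.log10(1 + f / 700)

def hz_to_erb(f):
    """ERB-number formula of Glasberg & Moore (1990)."""
    return 21.4 * math.log10(1 + 0.00437 * f)

# Equal 100 Hz steps shrink on the Mel scale as frequency rises,
# matching the shrinking perceptual sensitivity described above:
for f in (200, 300, 1000, 1100, 4000, 4100):
    print(f, round(hz_to_mel(f), 1))
```

Running this shows the 200→300 Hz step covering far more mels than the 4000→4100 Hz step, even though both are 100 Hz in raw frequency.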
The Bark scale
Divides frequency range of human hearing into 24 bands (each step from one band to the next sounds about equal)
Logarithmic part: the difference in hertz between successive Bark bands grows as frequency increases, but below 500 Hz the scale is nearly linear
At higher frequencies, we need a bigger difference in values to perceive a change
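The Bark behavior described above can be sketched with Traunmüller's (1990) approximation; the formula is one of several published Bark approximations, assumed here for illustration:

```python
def hz_to_bark(f):
    """Traunmüller's (1990) approximation to the Bark scale."""
    return 26.81 * f / (1960 + f) - 0.53

# Below ~500 Hz the mapping is nearly linear; above that, each
# doubling of frequency covers fewer and fewer Bark:
for f in (100, 200, 500, 1000, 2000, 4000, 8000):
    print(f, round(hz_to_bark(f), 2))
```

The printed values stay well under the 24-band ceiling of human hearing, and the step sizes shrink as frequency rises.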
Normalization
We perceive all speech frequency/timing values as relative
Our brains use info in other parts of signal to normalize the stimulus (interpreting values relative to other values not to an external standard)
Same item can be heard differently when in different contexts (depends on the sounds around it)
We adjust our expectations of what sounds should sound like by extracting social and personal information (e.g., about the speaker) from the signal
Explains why we can perceive consonants/vowels as the same in diff articulatory contexts
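One concrete normalization technique that interprets values relative to other values rather than an external standard is Lobanov normalization (z-scoring a speaker's formant values against that speaker's own mean and spread). The formant values below are made up for illustration:

```python
from statistics import mean, stdev

def lobanov(formants):
    """Lobanov (1971) speaker normalization: z-score each formant
    within one speaker's data, so values are interpreted relative
    to that speaker rather than to an absolute Hz standard."""
    m, s = mean(formants), stdev(formants)
    return [(f - m) / s for f in formants]

# Hypothetical F1 values (Hz) for the same three vowels from two
# speakers with different vocal tract sizes:
tall_speaker = [300, 500, 700]
small_speaker = [390, 650, 910]  # same vowels, scaled up

print(lobanov(tall_speaker))
print(lobanov(small_speaker))
```

Both speakers come out identical after normalization, which is the point: the "same vowel" is a relative position within a speaker's system, not an absolute frequency.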
What is VOT?
Voice onset time: the interval between the release of a stop closure and the onset of vocal fold vibration
In a categorical perception experiment involving VOT, what was discovered?
Listeners are good at distinguishing sounds in DIFFERENT categories but bad at distinguishing sounds in the SAME category (even when the acoustic difference is the same size)
What affects category boundary perception?
Language experience (depends on the language you speak)
What’s special about human speech perception?
Processed by diff cognitive system than other acoustic data like music and non-linguistic sounds
**This claim was weakened when it was found that other domains (e.g., visual perception) also use categorization, and that non-human animals show categorical perception too
Cue integration
Brain takes multiple pieces of info that it receives and uses its perception of them to create a single object (ex: phoneme)
**pieces of info separated in time and extracted at diff points in auditory pathway when listening to words or sentences
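A minimal sketch of the idea that multiple cues are weighed together into a single category decision. The cues (VOT and onset F0, both real voicing cues), their weights, and the normalization ranges are all illustrative assumptions, not measured values:

```python
# Toy cue integration: combine two voicing cues into one decision.
def voiceless_evidence(vot_ms, f0_hz, w_vot=0.7, w_f0=0.3):
    """Map each cue to a 0-1 'evidence for voiceless' score and
    combine them with assumed weights (VOT weighted more heavily)."""
    vot_score = min(max(vot_ms / 60, 0), 1)         # longer VOT -> voiceless
    f0_score = min(max((f0_hz - 100) / 100, 0), 1)  # higher onset F0 -> voiceless
    return w_vot * vot_score + w_f0 * f0_score

def categorize(vot_ms, f0_hz):
    return "p" if voiceless_evidence(vot_ms, f0_hz) > 0.5 else "b"

# An ambiguous VOT can be pushed across the boundary by the F0 cue:
print(categorize(40, 110))  # low onset F0  -> "b"
print(categorize(40, 190))  # high onset F0 -> "p"
```

The point of the sketch is that no single cue decides the percept; the combined evidence does, which is why cues extracted at different points in the auditory pathway must be reassembled.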
What role does the auditory cortex play?
Puts pieces of phonetic info back together again after they’ve been extracted by other parts of the brain
Describe top-down processing
Acoustic cues come into brain from the BOTTOM UP but
Predictions made in brain for speech processing are TOP DOWN
Brain is PRIMED to hear certain sounds/words in a specific context
Responsible for PHONEME RESTORATION
**evidence that word-level prototypes are stored
Phoneme restoration
Experiment in which a speech segment is replaced by a non-speech sound (e.g., a cough); listeners still perceive the missing segment because context makes it obvious
**evidence that word-level prototypes are stored
What are prototype models?
Traditional answers to the question "To what linguistic representation is the speech signal mapped?" are based on this
Assumes that ideal version of a linguistic unit is stored in memory
Accounts for categorical perception
Differences in detail go unnoticed unless they cause the signal to map to a different stored representation
An abstract/idealized version of each linguistic unit is stored in memory, and the incoming signal is checked against it
Detailed info discarded
How do feature detectors work? (Prototype models)
Pick up distinct acoustic events and match them to segment characteristics (these differ across languages)
Segments identified -> assembled into words
Coarticulatory cues before certain segments help us predict what is coming
Proposition that the units of perception are DEMISYLLABLES (CV or VC combinations) or WHOLE SYLLABLES
Motor theory of speech perception
Against view of stored word-level prototypes
Argue that units of perception are articulatory gestures themselves
Sound patterns are the medium through which we hear, but the articulatory movements that caused the sounds are the OBJECTS HEARD
Learnability = mirror neurons (the same neurons fire when watching someone perform an action and when performing it yourself)
How do the motor theory of speech perception theorists explain how it is learned?
Babies listen to their own utterances which teaches them what ACOUSTIC PATTERNS go with what SPEECH GESTURES
NEURAL LINK between auditory and motor cortex