LING330: Quiz #3 Flashcards
Mapping from acoustics to perception: linear or non-linear?
Non-linear: equal changes in frequency or amplitude do not produce equal changes in the perceptual response
What’s the jnd?
Just noticeable difference
A young, healthy ear can detect a difference of just a few hertz at lower frequencies, but above about 1000 Hz a much larger change in frequency is needed before a difference is noticed
Scientists often transform raw frequency values onto a logarithmic scale that better matches perceptual sensitivity. What are some of these scales?
The semitone scale
The Bark scale
The Mel scale
The ERB scale
(A rough conversion sketch for some of these follows below.)
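Not from the course notes: a minimal Python sketch of two of these conversions, assuming the common O'Shaughnessy mel formula and the standard semitone definition (exact constants vary across sources). It shows how equal steps in hertz become unequal steps on a perceptual scale.

```python
import math

def hz_to_mel(f_hz):
    # Common mel-scale formula (O'Shaughnessy 1987): roughly linear below
    # ~1000 Hz and logarithmic above, mirroring perceptual pitch spacing.
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def semitones_between(f1_hz, f2_hz):
    # Semitone scale: 12 equal log-spaced steps per doubling of frequency.
    return 12.0 * math.log2(f2_hz / f1_hz)

# Equal 100 Hz steps are perceptually unequal: a 100 Hz change matters much
# more around 200 Hz than around 5000 Hz.
print(hz_to_mel(300) - hz_to_mel(200))    # ~119 mel
print(hz_to_mel(5100) - hz_to_mel(5000))  # ~19 mel
print(semitones_between(200, 300))        # ~7 semitones
print(semitones_between(5000, 5100))      # ~0.3 semitones
```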
The Bark scale
Divides frequency range of human hearing into 24 bands (each step from one band to the next sounds about equal)
Logarithmic part: the difference in hertz between successive Bark bands gets larger as frequency increases, but below 500 Hz the scale is nearly linear
At higher frequencies, a bigger difference in hertz is needed to perceive a change (see the approximation sketch below)
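A minimal sketch of a Hz-to-Bark conversion using Traunmüller's (1990) approximation (the notes don't specify a formula); it illustrates the near-linear behaviour below ~500 Hz and the compression at higher frequencies.

```python
def hz_to_bark(f_hz):
    # Traunmüller (1990) approximation of the Bark scale.
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

# Below ~500 Hz the scale is roughly linear in Hz...
print(hz_to_bark(200), hz_to_bark(400))    # ~1.95, ~4.01: ~2 Bark per 200 Hz
# ...but at higher frequencies a much larger Hz change is needed per Bark.
print(hz_to_bark(4000), hz_to_bark(4200))  # ~17.46, ~17.75: same 200 Hz, ~0.3 Bark
```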
Normalization
We perceive all speech frequency/timing values as relative
Our brains use info in other parts of signal to normalize the stimulus (interpreting values relative to other values not to an external standard)
Same item can be heard differently when in different contexts (depends on the sounds around it)
Our brains extract social and personal information about the speaker and use it to adjust expectations of what sounds should sound like
Explains why we can perceive consonants/vowels as the same in diff articulatory contexts
What is VOT?
Voice onset time: the interval between the release of a stop closure and the onset of voicing
In a categorical perception experiment involving VOT, what was discovered?
Listeners are good at distinguishing sounds that fall in DIFFERENT categories but poor at distinguishing sounds within the SAME category, even when the acoustic difference is the same size (illustrated in the sketch below)
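A toy Python illustration (not the experiment from the notes) of a categorical identification function along a VOT continuum; the 25 ms boundary and the slope are made-up parameters, but they show why a 10 ms step that crosses the boundary is easy to hear while the same 10 ms step within a category is not.

```python
import math

def prob_voiceless(vot_ms, boundary_ms=25.0, slope=0.5):
    # Hypothetical logistic identification function: probability of labelling
    # a stop as voiceless (/p/) given its VOT. Boundary and slope are
    # illustrative values, not measured data.
    return 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary_ms)))

# Same 10 ms acoustic step, very different perceptual consequences:
print(prob_voiceless(20), prob_voiceless(30))  # straddles the boundary -> labels flip
print(prob_voiceless(50), prob_voiceless(60))  # both clearly /p/ -> hard to discriminate
```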
What affects category boundary perception?
Language experience (depends on the language you speak)
What’s special about human speech perception?
Claimed to be processed by a different cognitive system than other acoustic input, such as music and non-linguistic sounds
**This claim is weakened by findings that other domains (e.g., visual perception) also use categorization, and that animals can do it too
Cue integration
Brain takes multiple pieces of info that it receives and uses its perception of them to create a single object (ex: phoneme)
**pieces of info separated in time and extracted at diff points in auditory pathway when listening to words or sentences
What role does the auditory cortex play?
Puts pieces of phonetic info back together again after they’ve been extracted by other parts of the brain
Describe top-down processing
Acoustic cues come into brain from the BOTTOM UP but
Predictions made in brain for speech processing are TOP DOWN
Brain is PRIMED to hear certain sounds/words in a specific context
Responsible for PHONEME RESTORATION
**Taken as evidence that word-level prototypes are stored
Phoneme restoration
Experiment in which a speech segment is replaced by a non-speech sound (e.g., a cough); listeners still perceive the missing segment because context makes it obvious
**Taken as evidence that word-level prototypes are stored
What are prototype models?
Traditional answers to the question “to what linguistic representation is the speech signal mapped?” are based on this
Assumes that ideal version of a linguistic unit is stored in memory
Accounts for categorical perception
Differences in detail are not noticed unless they cause a mapping to a different stored representation
An abstract/idealized version of a linguistic unit is stored in memory, and the incoming stimulus is checked against it
Detailed info discarded
How do feature detectors work? (Prototype models)
Pick up distinct acoustic events and match them to segment characteristics (different between languages)
Segments identified -> assembled into words
Cues that occur before certain segments help us predict what is coming
Some versions propose that the units of perception are DEMI-SYLLABLES (CV or VC combos) or WHOLE SYLLABLES
Motor theory of speech perception
Against view of stored word-level prototypes
Argue that units of perception are articulatory gestures themselves
Sound patterns=medium by which we hear but articulatory movements that caused the sounds=OBJECTS HEARD
Learnability is explained by mirror neurons (the same neurons fire when seeing someone do something and when doing it yourself)
How do motor theory proponents explain how speech perception is learned?
Babies listen to their own utterances which teaches them what ACOUSTIC PATTERNS go with what SPEECH GESTURES
NEURAL LINK between auditory and motor cortex
Arguments against motor theory of speech perception
- we can perceive speech that no vocal tract could create (sine wave speech)
- brain injury affects production and perception separately
Prototype models vs exemplar models
Prototype: assume that match to linguistic unit is made -> detailed info in the signal=thrown out and speech processing continues on abstract level
Exemplar models: very detailed memory traces are retained and referenced
Episodic traces=linked together at multiple levels into multi-dimensional groupings
Detailed and specific, not abstract like prototype model
Categorization=label + details of specific instances
New tokens are compared against stored tokens, not an abstract ideal
How categorization differs in exemplar models
Stimulus SIMILAR to exemplar: faster and more accurate categorization; category label is reinforced + boundaries sharpen
Stimulus NOT similar: category may shift -> language change
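A minimal, hypothetical Python sketch of exemplar-style categorization (loosely in the spirit of exemplar models, not anything specified in the notes): a new token is compared against every stored token, and the category whose exemplars are most similar overall wins. The acoustic dimension, values, and labels are made up for illustration.

```python
import math

# Stored exemplars: each remembered token keeps its detail plus a category
# label (here a single made-up acoustic dimension for simplicity).
exemplars = [
    (10.0, "b"), (12.0, "b"), (15.0, "b"),   # hypothetical /b/-like tokens
    (45.0, "p"), (50.0, "p"), (55.0, "p"),   # hypothetical /p/-like tokens
]

def categorize(new_token, sensitivity=0.2):
    # Similarity decays exponentially with distance from each stored token;
    # evidence for a category is the summed similarity of its exemplars.
    evidence = {}
    for value, label in exemplars:
        sim = math.exp(-sensitivity * abs(new_token - value))
        evidence[label] = evidence.get(label, 0.0) + sim
    return max(evidence, key=evidence.get)

print(categorize(14.0))  # close to the stored /b/ tokens -> "b"
print(categorize(48.0))  # close to the stored /p/ tokens -> "p"
```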
Support for exemplar models
- supported by psycholinguistic research that says details do matter + connections exist at multiple levels
- consistent with frequency effects (how often you hear a sound makes a difference)
- priming
Commonly used method for conducting a discrimination experiment
ABX (or oddball) task
Stimuli: A, B, and X
A and B are always different stimuli
X always matches either A or B
The listener has to indicate whether X matches A or B
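A small, hypothetical Python sketch of how ABX trials could be assembled and scored (the stimulus names are placeholders; the notes don't specify an implementation): each trial pairs two different stimuli, randomly picks one to repeat as X, and a response is correct if it names the stimulus X actually matched.

```python
import random

def make_abx_trial(stim_a, stim_b):
    # X is randomly chosen to match either A or B.
    answer = random.choice(["A", "B"])
    x = stim_a if answer == "A" else stim_b
    return {"A": stim_a, "B": stim_b, "X": x, "answer": answer}

def score(trial, response):
    # A response ("A" or "B") is correct if it names where X came from.
    return response == trial["answer"]

# Example with placeholder stimulus labels along a hypothetical VOT continuum.
trial = make_abx_trial("VOT_10ms", "VOT_40ms")
print(trial)
print(score(trial, "A"))
```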
What are humans better at: discrimination or identification?
Neither; performance is about the same (for categorically perceived sounds, listeners discriminate only about as well as they identify)
Perception differences between plosives and vowels
Plosives: categorically perceived (plosives=short/dynamic)
Vowels: perceived on a continuum (vowels are relatively long, but when shortened they are perceived categorically, like plosives)