Lecture 3: The Mental Lexicon II Flashcards
1
Q
perceptual invariance (3)
A
- When acoustically different stimuli are perceived as examples of the same phoneme or word.
- Speech sounds are not serial, i.e. there’s extensive overlap between them, meaning we often can’t tell where one sound ends and the next begins → speakers produce more sounds in less time.
- It’s much easier to recognize words in context than words spoken in isolation.
2
Q
motor theory of speech perception (6)
A
- States that the perception of speech sounds involves accessing representations of the articulatory gestures required to make those sounds.
- i.e. Perception of speech is based on our ability to produce it:
- Speech signal is represented in articulatory gestures.
- The mechanism that allows us to process it is specialized for speech and speech alone (assumes innateness).
- However, chinchillas are also able to perceive discrete contrasts in human speech signals, suggesting that the perceptual mechanism may not be innate or specialized for speech.
- Perception of an utterance constitutes perception of a specific pattern of vocal gestures, which are represented as motor commands.
3
Q
McGurk effect (2)
A
- A mismatch between auditory information and visual info pertaining to a sound’s articulation → altered perception of that sound.
- This effect seems to support the motor theory quite nicely.
4
Q
auditory theory of speech perception (3)
A
- Proposes that there is no link between perception and production (i.e. language sounds aren’t innate).
- The human auditory system is fine-tuned for the perception of sounds.
- Doesn’t predict that speech perception would be affected by speech production disorders (which does happen).
5
Q
Ganong effect (4)
A
- The identity of a word can affect the perception of individual sounds within that word.
- When people hear a sound that’s acoustically ambiguous between two sounds, their identification of that sound can be shifted in one direction or another depending on which of the possible sounds results in an actual word.
- Explains why we’re not bothered by the inconsistency of sounds in word recognition: the same sound is perceived differently depending on the word it’s embedded in.
- Context affects the selection of candidates, while the speech signal affects the generation of candidates (see the sketch below).
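To make the lexical-bias idea concrete, here is a minimal Python sketch; the toy lexicon and the "?iss"/"?ift" frames are hypothetical illustrations, not materials from the lecture.

```python
# Minimal sketch of the Ganong effect's lexical bias (illustrative assumptions:
# toy lexicon, spelled words instead of phoneme transcriptions).
LEXICON = {"kiss", "gift", "candle", "candy", "candid", "beaker", "beetle", "speaker"}

def resolve_ambiguous_segment(frame: str, candidates: tuple[str, str]) -> str:
    """Return the candidate phoneme that turns `frame` ('?' marks the ambiguous
    segment) into a real word; if both or neither do, fall back to the first
    candidate, standing in for a purely acoustic decision."""
    winners = [c for c in candidates if frame.replace("?", c) in LEXICON]
    return winners[0] if len(winners) == 1 else candidates[0]

# An ambiguous sound midway between /g/ and /k/ is heard as /k/ in "?iss" (kiss)
# but as /g/ in "?ift" (gift), because only those choices yield real words.
print(resolve_ambiguous_segment("?iss", ("g", "k")))  # -> k
print(resolve_ambiguous_segment("?ift", ("g", "k")))  # -> g
```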
6
Q
cohort model (2)
A
- Proposes that a word is recognized incrementally through left-to-right activation of phonemes.
- Initial phonetic cues take priority because they come first, meaning that the most important part of a word is its onset.
7
Q
cohort competitors (1)
A
- Words with overlapping onsets (e.g. candle, candy, candid, etc.).
8
Q
uniqueness point (1)
A
- The point at which there’s enough information in the incoming speech stream to allow the hearer to differentiate a single word candidate from its cohort competitors (see the sketch below).
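A minimal Python sketch of cohort competitors and the uniqueness point, assuming a toy lexicon of spelled words in place of real phoneme transcriptions (an illustrative simplification, not from the lecture):

```python
# Toy lexicon; spelled words stand in for phoneme strings.
LEXICON = ["candle", "candy", "candid", "cane", "beaker", "beetle", "speaker"]

def cohort(prefix: str) -> list[str]:
    """Words whose onsets overlap with the input heard so far."""
    return [w for w in LEXICON if w.startswith(prefix)]

def uniqueness_point(word: str) -> int:
    """Number of segments after which `word` is the only remaining cohort member."""
    for i in range(1, len(word) + 1):
        if cohort(word[:i]) == [word]:
            return i
    return len(word)  # never uniquely identified within the word itself

print(cohort("can"))               # -> ['candle', 'candy', 'candid', 'cane']
print(uniqueness_point("candle"))  # -> 5: 'candl' rules out candy, candid, cane
```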
9
Q
TRACE model (4)
A
- Main competitor to the cohort model.
- Doesn’t rely on the left edge of words when accounting for lexical retrieval.
- Focuses on larger-scale competition effects: strings of sounds that aren’t (necessarily) anchored to the left edge.
- e.g. If you heard asdan, no English word begins with this sound sequence (so the cohort model couldn’t deal with it), but it could be part of the string last dance.
10
Q
differences between cohort and TRACE models (4)
A
- Continuous Mapping
  - TRACE: All phonologically similar words should be activated.
  - Cohort: Cohort competitors will be activated early, whereas rhyming words will be activated later, when the rhyming information becomes available.
- Top-Down Influences
  - Cohort: Top-down information only affects selection, not initial activation.
  - TRACE: Top-down (lexical) information can feed back and influence the activation of sounds.
- To distinguish between the two → need evidence that onset activation (cohort) is more important than rhyme activation (TRACE); see the sketch below.
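A rough sketch of that contrast, again with a toy spelled-word lexicon and an arbitrary two-letter overlap threshold (both illustrative assumptions): the cohort model only considers onset-overlapping words, while TRACE also lets rhymes and other non-initial overlaps compete.

```python
LEXICON = ["beaker", "beetle", "speaker", "beak", "candle", "candy"]

def cohort_competitors(word: str, overlap: int = 2) -> list[str]:
    """Cohort model: competitors share the word's onset (first few segments)."""
    return [w for w in LEXICON if w != word and w.startswith(word[:overlap])]

def trace_competitors(word: str, overlap: int = 2) -> list[str]:
    """TRACE-style: any word sharing a stretch of sounds competes, including
    rhymes, i.e. the match need not be anchored at the left edge."""
    chunks = {word[i:i + overlap] for i in range(len(word) - overlap + 1)}
    return [w for w in LEXICON
            if w != word and any(chunk in w for chunk in chunks)]

print(cohort_competitors("beaker"))  # -> ['beetle', 'beak']            (onset overlap only)
print(trace_competitors("beaker"))   # -> ['beetle', 'speaker', 'beak'] (rhyme competes too)
```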
11
Q
Allopenna et al. (1998) (7)
A
- Found that overlap at the beginnings of words results in greater competition than overlap at the ends of words.
- In an eye tracking study, participants had to choose between a target word and a cohort or rhyme competitor.
- Rhyme competitors (e.g. beaker and speaker) will be at a disadvantage because the activation of speaker will be pushed down relative to beaker or beetle, based solely on its mismatch with the first few sounds.
- As the word unfolds in time, there will be some overlap between speaker and beaker, which will boost the activation of speaker, but this new surge of activation will have to overcome the initial dampening of that word based on the early mismatch of sounds.
- This study provided evidence for both the cohort model and TRACE model.
- However, it does tell us that the left edge is more influential (though recognition isn’t based entirely on the left edge).
12
Q
evidence for cohort model (6)
A
- Cross modal lexical decision task: participants heard a priming word, and a visual target was presented at three different time points into the priming word.
- If the target was presented right before the prime word (e.g. before beaker began), no priming effects occurred (as we would expect; this condition acted as a control).
- If the target was presented at the onset (bea- in beaker), beaker primed both glass (semantically related to beaker) and insect (semantically related to beetle).
- If the target was presented at the offset (-ker in beaker), beaker primed glass.
- Beaker did not prime stereo (semantically related to speaker) at any point of activation.
- But perhaps the lexical decision task isn’t a sensitive enough measure of rhyme priming, as rhyme competitor activation is more time-sensitive.