Lec 3/ TB Ch 6 Flashcards
1
Q
- 5 Types of phonological errors
- Give ex
- Phoneme anticipation
- Anticipatory phoneme addition
- Phoneme shift
- Rhyme exchange
- Consonant cluster exchange
- How are they organized?
A
Types of phonological errors (Fromkin, 1973) – correct/error
- 1 Phoneme anticipation: a reading list -> a leading list
- 2 Anticipatory phoneme addition: an early period -> a pearly period
- 3 Phoneme shift: black boxes -> back bloxes
- 4 Rhyme exchange: a heap of junk -> a hunk of jeep
- 5 Consonant cluster exchange: squeaky floor -> fleaky squoor
- These errors involve single phonemes and groups of phonemes (ex. consonant clusters, rhymes); these phonological units are organized hierarchically (see the toy sketch below)
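- A minimal, hypothetical Python sketch of unit exchange (orthographic splits and onset lengths are simplifications, not real phonological transcriptions):

```python
# Toy illustration only: words are split into onset + rhyme over their letters,
# where the onset may be a single consonant or a whole consonant cluster.
def split(word, onset_len):
    return word[:onset_len], word[onset_len:]

def exchange(word1, len1, word2, len2, unit="onset"):
    """Swap the chosen sub-lexical unit between two words."""
    o1, r1 = split(word1, len1)
    o2, r2 = split(word2, len2)
    if unit == "onset":                      # phoneme or consonant-cluster exchange
        return o2 + r1, o1 + r2
    return o1 + r2, o2 + r1                  # rhyme exchange: onsets stay put

# Consonant-cluster exchange: "squeaky floor" -> "fleaky squoor"
print(exchange("squeaky", 3, "floor", 2, unit="onset"))
# Rhyme exchange: "heap of junk" -> "hunk of jeap" (sounds like "jeep")
print(exchange("heap", 1, "junk", 1, unit="rhyme"))
```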
2
Q
- Lemma model for word production
- 2 subsystems
- System 1: 2 parts; 5 steps
- System 2: 3 parts; 7 steps
A
- Lemma model for word production
- 2 main subsystems
- Lexical selection: identify the best word in the mental lexicon (library)
- Form encoding: prepare the word’s articulatory shape
- Big picture of the model (an end-to-end sketch follows at the end of this card)
- 2 parts in Lexical selection subsystem
- Conceptual prep
- Lemma selection
- 3 parts in the form encoding subsystem
- Retrieving morphemic phonological codes
- Prosodification/syllabification
- Phonetic encoding
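- An end-to-end sketch of how the stages hand off to one another (all functions and entries are invented placeholders, not the model's actual representations):

```python
# Lexical selection subsystem
def conceptual_preparation(idea):
    return "HORSE(plural)"                    # the winning lexical concept

def lemma_selection(concept):
    return {"lemma": "horse", "category": "noun", "number": "plural"}

# Form encoding subsystem
def retrieve_morphemic_phonological_codes(lemma):
    return list("horse") + list("s")          # stem <horse> + plural <-s>, spelled out

def prosodify(segments):
    return ["hor", "ses"]                     # syllabified on the fly

def phonetic_encode(syllables):
    return [f"articulatory-routine-{s}" for s in syllables]   # via the syllabary

# idea -> lexical concept -> lemma -> segments -> syllables -> articulatory score
lemma = lemma_selection(conceptual_preparation("four-legged animals you ride"))
score = phonetic_encode(prosodify(retrieve_morphemic_phonological_codes(lemma)))
print(score)
```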
3
Q
- Conceptual Focusing and Perspective-Taking
- Step 1
- Lexical concept
- 3 Factors that influence selecting lexical concepts
- define each factor
- 2 types of perspectives
- theory of mind
- What happens during word production? (other concepts?)
A
Conceptual Focusing and Perspective-Taking
- 1 put your idea into lexical concepts
- Lexical concept: integrates various semantic features for the particular word
- Factors influencing lexical selection
- Cross-linguistic variation
- POV
- Subjective construal
- I Cross-linguistic variation: diff languages → don’t have equivalent words; have more specific words
- Words/message are specific/tuned to the language
- II: POV/ perspective taking
- 1 Deictic perspective taking: speaker’s perspective
- I see a chair w/ a ball to the left of it
- 2 Intrinsic perspective taking: a nearby object’s perspective (ex. chair)
- I see a ball w/ a chair right of it
- Mentalizing/theory of mind: the ability to imagine what others are thinking
- III: Subjective construal
- Language depends on the speaker’s goal/interpretation
- Ex. the same object can be called “dog” / “animal” / “pet” / “pest”
- NOTE: during word production, multiple lexical concepts are activated in parallel; only the target is activated more
- Ex. target word = horse; other concepts (ex monkey) less active
4
Q
- Lemma
- What does it contain
- 3 step process
- How to determine how likely a lemma is activated?
- What is the major rift in the lemma model
- What phenomenon can it explain?
A
- Lemma selection (lower lv nodes)
- Lemma: abstract word node (similar to arbitrary #)
- b/w semantics and phonology
- Contains morphosyntactic features (grammar category, gender class etc)
- Process
- 1 In conceptual preparation phase, the target lexical concept is activated the most; related lexical concepts are activated less
- 2 This activation pattern propagates to the lemma lv
- So, the lemma that matches the target lexical concept is activated the most; lemmas that match related concepts = less
- 3 then the correct lemma is selected
- During processing, the likelihood of a lemma being selected = its degree of activation / the total activation of all engaged lemmas (a Luce choice ratio; see the sketch below)
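- A minimal sketch of that selection ratio (activation values below are invented for illustration):

```python
# Likelihood of selecting a lemma = its activation / total activation of all
# currently engaged lemmas (activation values here are made up).
activations = {"horse": 0.8, "pony": 0.3, "donkey": 0.2, "monkey": 0.1}

total = sum(activations.values())
selection_prob = {lemma: act / total for lemma, act in activations.items()}

print(selection_prob)   # "horse" wins most often, but selection is probabilistic
```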
- x
- Interlude: Crossing the Rift
- “major rift”: gap b/w the 2 subsystems (lexical selection, form encoding)
- Tip of the tongue: the sound structure of the desired word is partially/wholly unavailable, even though you know the meaning and morphosyntactic feature
- This rift is magnified in brain-damaged patients w/ anomia
- There is chronic blocking when retrieving the phonological form of words
5
Q
Form encoding subsystem
- morpheme
- phoneme
- lexicon
- lexical
-
Retrieving Morphemic and Phonological Codes
- What is extracted? (2 things)
- 3 main assumptions
- Evidence for assumption 2 - Study: word f effect is in phonological lv
- Hypothesis: lemma lv vs phonological lv
- 3 conditions
- Results
- Age-of-acquisition alternative explanation
- Evidence for assumption 3: banana
A
Retrieving Morphemic and Phonological Codes
- morpheme: smallest unit of meaning (ex un / break/ able)
- Phoneme: smallest unit of sound
- Lexicon = word library
- Lexical = words
- Stage 1: retrieve morphemic phonological code of the target word
- access the morphemic representation → “spelling out” its segmental phonemic content
- Ex. “horseS’ example
- Selected lemma = horse
- Activates 2 morphemes: the stem <horse> and the plural suffix <-s>, plus their sound structures
- MP 1 only the selected lemma goes through the “rift” b/w the lexical selection subsystem and form encoding subsystem
- MP 2 retrieval of morphemic phonological codes is influenced by word frequency
- Word frequency effect: phonological form of high freq word (ex dog) is retrieved faster than that of low freq word (ex. broom)
- Study:
- Homophones: diff words that sound the same
- Ex. high f adj “more” vs low f adj “moor”
- H1: if the word-f effect is at the lemma lv, the time to produce more will be sig faster than that of moor
- H2: If the effect is at the phonological lv, the time to produce 2 words is the same
- Reason: they sound the same, so low f “moor” is accessed fast
- 3 types of target words
- Low-f homophones (ex. “(to) moor”, whose high-f twin is “more”)
- Low-f non-homophone controls matched to the homophones’ own frequency (ex. “(to) march”, matched to “moor”)
- High-f controls matched to the frequency of the homophone’s high-f twin (ex. “much”, matched to “more”)
- “much” has a similar frequency to “more”
- Results: Low-f homophones (moor) were produced just as fast as high-f controls (much)
- Supports H2; per the model, the word-frequency effect arises after the word “crosses the rift”, at phonological code retrieval (see the toy simulation below)
- Alt explanation:
- Age of acquisition: some studies show it is easier to access words that are learnt early in life
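- A toy simulation of the two hypotheses’ predictions (the log-frequency/latency relation and all counts are invented, purely to show the logic):

```python
import math

# Invented assumption: retrieval time shrinks with the log frequency of
# whichever representation carries the word-frequency effect.
def retrieval_time(freq, base=800, slope=60):
    return base - slope * math.log10(freq)

freq = {"more": 1000, "moor": 10, "much": 1000, "march": 10}   # made-up counts

# H1 (lemma level): each lemma has its own frequency, so "moor" is slow.
h1_moor = retrieval_time(freq["moor"])

# H2 (phonological level): homophones share one phonological code, so "moor"
# inherits the combined frequency of /mor/ ("more" + "moor") and is fast.
h2_moor = retrieval_time(freq["more"] + freq["moor"])

print(round(h1_moor), round(h2_moor), round(retrieval_time(freq["much"])))
# H1 predicts "moor" patterns with "march"; H2 predicts it patterns with
# high-frequency controls like "much" -- which is what was found.
```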
- MP 3: morphemic phonological codes are retrieved incrementally, starting from the word’s initial segment
- Ppl were faster at naming a banana when they knew beforehand the target word began w/ “ba”
- But not when they knew beforehand that the target word ended w/ “na”
6
Q
- Form encoding subsystem
- Stage 2: Prosodification and Syllabification - meaning
- input
- process
- output
- Syllable boundaries differ from morpheme boundaries
- Ex horses (describe)
- Syllable boundaries can overlap into word boundaries
- Ex. escort us (describe)
- Is syllabification stored in LTM or computed on the fly?
- Stage 3: Phonetic Encoding and Articulation
- mental syllabary
- Whats the advantage using this?
- 4 step process
- mental syllabary
- Self-monitoring stage
- What it does
- external vs internal feedback loop (ex. yellow; vs ye)
A
Prosodification and Syllabification → bundle sounds into syllables
- Input: morphemic phonological codes
- Process: sounds are bundled into syllables
- Output: phonological word
- There are many cases where syllable boundaries differ from morpheme boundaries
- Ex. “horses”
- Bimorphemic & bisyllabic
- Morphemes: horse + -s (green)
- Syllables: /hor/, /ses/ (pink)
- Syllabification can transcend word boundaries
- Ex. He’ll escort us
- Morpheme: “escort” “us”
- Syllable: /e/, /scor/, /tus/
- MP: syllabification is not stored in LTM; it is computed on the fly in real time (see the toy sketch below)
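- A toy sketch of on-the-fly resyllabification (over letters, not IPA; the onset list is a tiny invented placeholder):

```python
VOWELS = set("aeiou")
LEGAL_ONSETS = {"", "s", "c", "t", "r", "h", "sc", "st"}   # tiny invented list

def resyllabify(phrase):
    """Run the words together, then give each vowel (syllable nucleus) the
    longest permissible onset -- a rough 'maximal onset' over letters."""
    letters = phrase.lower().replace(" ", "")
    nuclei = [i for i, ch in enumerate(letters) if ch in VOWELS]
    boundaries = [0]
    for prev, cur in zip(nuclei, nuclei[1:]):
        cluster = letters[prev + 1:cur]            # consonants between nuclei
        onset = ""
        for k in range(len(cluster) + 1):          # longest legal onset wins
            if cluster[k:] in LEGAL_ONSETS:
                onset = cluster[k:]
                break
        boundaries.append(cur - len(onset))
    boundaries.append(len(letters))
    return [letters[a:b] for a, b in zip(boundaries, boundaries[1:])]

print(resyllabify("horses"))      # ['hor', 'ses']  -- crosses the morpheme boundary
print(resyllabify("escort us"))   # ['e', 'scor', 'tus'] -- crosses the word boundary
```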
- Phonetic Encoding and Articulation
- In this process, we use “mental syllabary”
- Mental syllabary: highly practiced syllables
- These overlearnt syllables are stored; don’t need to be recomputed each time
- Ex. to produce the word “horses”
- 1 Input: phonological code w/ syllable boundaries
- 2: phonetic encoding: matches each unit w/ the corresponding node in the syllabary
- 3: Articulatory score: instructions to combine the syllables (articulate everything smoothly)
- 4: output sent to the motor system → speech (a minimal syllabary-lookup sketch follows below)
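- A minimal sketch of the syllabary-lookup idea (syllable entries and “routines” are invented placeholders):

```python
# Hypothetical mental syllabary: frequent, overlearnt syllables map to stored
# articulatory routines, so they never need to be recomputed from scratch.
SYLLABARY = {"hor": "routine-/hor/", "ses": "routine-/ses/"}

def phonetic_encode(syllables):
    score = []
    for syl in syllables:
        routine = SYLLABARY.get(syl)
        if routine is None:                         # rare syllable: assemble it
            routine = "assembled:" + "-".join(syl)
        score.append(routine)
    return score                                    # the articulatory score

print(phonetic_encode(["hor", "ses"]))              # sent on to the motor system
```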
- Self-Monitoring: detect and correct our own speech errors
- 2 self-monitoring feedback loops: external & internal
- Ex. speech error “yellow”
- Case 1: entrance to yellow…er, to gray
- Case 2: we can go straight to the ye-…to the orange dot
- Case 1: external FL
- said the whole word “yellow” -> paused to repair error
- hear the mistake → correct it
- Case 2: internal FL
- Speaker produced the first syllable “ye” -> paused to repair error
- internally identify mistake b4/during articulation
7
Q
- Indefrey and Levelt 2004 – brain mapping studies
- 4 spoken word production tasks
- Lead-in process for each of the 4 tasks
- There are 15 brain regions that are jointly activated by both tasks → what does this suggest?
A
- Neurobiological Evidence for the Model
- Indefrey and Levelt 2004 – brain mapping studies
- Meta-analysis: examine brain activation in diff areas
- spoken word production tasks
- Picture naming
- Associative word generation: Ex. say diff types of animals
- word reading
- pseudoword reading (ex. neem)
- These tasks have distinct lead-in processes that must be completed b4 the core processes of the word production system can proceed
- Pic naming: needs visual object recognition
- Associative word generation: needs recognition of visually or auditorily presented stimulus, and strategic mem search
- Word reading: need visual recog
- Pseudoword reading: need grapheme recog and conversion of graphemic to phonological rep
- The tasks differ in which core stages of word production they recruit (based on the Lemma model)
- Pic naming and associative word generation share the core processes of word production
- Authors identified 15 regions that are jointly activated by both tasks (effectively an intersection of each task’s activation map; see the schematic below)
- This neuroanatomical overlap suggests that speech production and perception share some representations
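- A schematic sketch of the “jointly activated” logic (the region sets below are tiny invented subsets, not the actual meta-analysis results):

```python
# Each task -> set of reliably activated regions (invented, partial lists).
picture_naming  = {"visual object recognition areas", "left mid MTG",
                   "left pSTG/pSTS", "left posterior IFG"}
word_generation = {"strategic memory search areas", "left mid MTG",
                   "left pSTG/pSTS", "left posterior IFG"}

core_word_production = picture_naming & word_generation   # shared core regions
lead_in_only = picture_naming - word_generation            # task-specific lead-in

print(core_word_production)
```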
8
Q
- brain regions involved in each stage of the Lemma model
- Stage 1 lexical selection
- what do lexical concepts do?
- Location?
- Why do they show low BOLD?
- Competition b/w lexical concepts
- location that resolves concepts?
- Competition lv among semantically related vs unrelated objects
- severe left IFG lesion → result?
- Perspective taking
- spatial Perspective taking → location?
- social Perspective taking/ theory of mind → location?
A
Conceptual Focusing and Perspective-Taking
- brain areas involved in each stage of the Lemma model
- Stage 1: Lexical selection (our thoughts)
- 1 The various semantic (word-meaning) features of the target word are distributed across the brain
- Lexical concepts: connect those features
- Location: ATLs (nearby air-filled sinuses cause signal dropout → low BOLD)
- 2 competition b/w lexical concepts
- left pIFG (Broca’s area) resolves these conflicts
- Schnur et al
- semantically related objects (ex. truck, car, bike) → high competition (at Broca’s area)
- semantically unrelated objects (ex. truck, food, dog) → low competition
- Study: severe left IFG lesion → most interference/competition → word production deficits
- 3 perspective taking
- spatial perspective taking → left inferior parietal lobe
- social perspective taking/mentalizing/theory of mind → many regions
9
Q
- brain regions involved in each stage of the Lemma model
- Lemma selection
- Location
- Fx
- Left posterior MTG
- Where was it mentioned
- fx
- Studies
- Stimulation left mid/posterior MTG → ?
- left mid/posterior MTG lesion → ?
- Left TP lesions → ?
- Lesion in Anterior area of left IT region → ?
- Lesion in Posterior area of left IT region (aka IT+) → ?
- Verb retrieval location → ?
A
Lemma Selection
- 2 tasks activate it: pic naming and associative word generation
- Location: left mid MTG
- activated 200 ms post-stimulus onset
- Left mid MTG: map meaning to sound during production
- Left posterior MTG: (opp) “lexical interface” in ventral pathway in the Dual Stream Model of speech perception: maps sound to meaning during comprehension
- Boatman et al 2000
- Stimulation left mid/posterior MTG
- no effect on mapping semantics to phonology (production)
- impairs mapping phonology to semantics (comprehension)
- Other studies - noun retrieval
- left mid/posterior MTG lesion → poor lexical retrieval
- temporal pole (TP); inferotemporal (IT) cortex
- Left TP lesions: can’t access nouns for persons (ex. Obama)
- Lesion in Anterior area of left IT region: can’t access nouns for animals (ex. horse)
- Lesion in Posterior area of left IT region (aka IT+): can’t access nouns for tools (ex. hammer)
- Ex. Patient can’t name a skunk, but can describe it “it’s smelly, black and white”
- Verb retrieval location
- left IFG, left inferior parietal lobe, mid/posterior MTG
10
Q
- Happy vs neutral faces → which do we name faster
- Gallegos and Tranel 2005 - naming happy vs neutral celebs
- 3 groups of ppl
- Results:
- naming accuracy
- RT data
- Amygdala’s role
A
Box 6.3: Happy Faces Are Named Faster than Neutral Faces
- Gallegos and Tranel 2005
- Select 60 famous celebrities
- Obtained 2 images of each person’s face: 1 happy, 1 neutral
- Gave pic naming task to 3 gps of participants:
- A: normal ppl
- B: left anterior temporal lobectomy (LTL)
- C: right anterior temporal lobectomy (RTL)
- Results
- Naming accuracy: no effect on emo expression
- Patients did worse than normal ppl
- RT data
- Happy faces are named faster than neutral faces across 3 gps
- Potential neural mechanism
- Amygdala (emo processing) → enhances top-down processing
11
Q
- brain regions involved in each stage of the Lemma model
- Retrieving Morphemic and Phonological Codes
- Phonological codes; Indefrey and Levelt (2004) – Meta-analysis
- subtraction method
- results: 4 commonly activated areas
- when was it engaged
- pSTG/pSTS corresponds to ? in dual stream model
- Morphemic phonological codes
- Location
- What is it regulated by?
- Lesion → ?
- Corina et al 2010 - Neurostimulation on patients while naming objects
- 6 errors
- semantic paraphasia
- circumlocutions
- phonological paraphasia
- neologisms
- performance errors
- no-response errors
- Results
- left mid-to-posterior STG/STS and MTG stimulated → ?
- 2 regions for phonological retrieval
A
- Retrieving Morphemic and Phonological Codes
- Indefrey and Levelt (2004) – Meta-analysis
- Subtraction: “data from tasks w/ phonological code retrieval” - “data from tasks w/o phonological code retrieval”
- Results: Common activation areas
- left pSTG/pSTS (Wernicke’s area)
- left posterior MTG
- left anterior insula
- right SMA
- engaged from 200-400 ms post-stimulus onset
- This is also the “phonological network” in the Dual Stream Model (pSTG/pSTS)
- x
- Morphemic phonological codes
- Location: left posterior ST region
- modulated by word frequency; not by concept familiarity or word length
- Lesion in this region -> Wernicke’s aphasia
- Corina et al 2010
- Neurostimulation on patients while naming objects
- 6 categories of errors
- 1 semantic paraphasia (ex. saw tiger -> said lion)
- 2 circumlocutions (ex. saw chair -> said sit down)
- 3 phonological paraphasia (ex. saw wagon -> say ragon)
- 4 neologisms (ex. saw fish -> say herp)
- 5 performance errors (ex. slurred, stutter, articulate imprecisely)
- 6 no-response errors ( = no utterance)
- Result:
- Phonological paraphasia and neologism
- least common
- Happen when left mid-to-posterior STG/STS and MTG stimulated
- FP regions and area Spt → support phonological retrieval
12
Q
brain regions involved in each stage of the Lemma model
- Prosodification and Syllabification
- Location
- Phonetic Encoding and Articulation
- What activity do these brain regions also engage in?
- Self monitoring
- 2 feedback loops
- Location for both?
A
Prosodification and Syllabification
- Syllabification is engaged in all 4 task types, whether responses are overt or covert
- Phonetic encoding is only engaged when tasks are done overtly
- Authors examined brain regions activated in word production experiments w/ overt vs covert responses
- Left posterior IFG (Broca’s area) was activated in both → linked to syllabification
- Time: 400-600 ms post-stimulus onset
Phonetic Encoding and Articulation
- Plausible regions linked w/ phonetic encoding and articulation
- These brain regions are also activated in speech production that lacks phonemic content
- Self-Monitoring
- 2 feedback loops
- External & internal loop: posterior ST region; bilateral
- Internal loop: input = output of syllabification
13
Q
- Challenges of Lemma model
- Main idea of lemma model
- 2 major challenges
A
- Main idea – Lemma is a bridge b/w semantic and phonological structures of words
- Challenges
- # 1 cannot explain why some patients make errors in written word production but not in spoken word production
- Possibility 1: deficit at the lemma lv
- Ruled out → a lemma-lv deficit would impair both writing and speaking
- Possibility 2: they can access lemmas, but not modality-specific lexical representations (words)
- impossible both are related to semantics
- # 2 Lemma model = discrete processing: processing is feedforward, w/o feedback
- Alternative: processing is interactive (feed forward + feedback)
- Evidence for interactive processing
- Mixed errors: error words that are both semantically and phonologically related to the target (see the toy sketch below)
- Ex. say skirt instead of shirt
- Phonological errors are more likely to result in real words than in pseudowords (lexical bias)
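- A toy interactive-activation sketch showing why feedback predicts mixed errors (weights and the letter-overlap measure are invented):

```python
# Target = "shirt". Semantic neighbours get some lemma-level activation; with
# feedback, phoneme nodes shared with the target send activation back up, so a
# competitor that is BOTH semantically and phonologically related ("skirt")
# gets an extra boost -- mixed errors become more likely than chance.
semantic_input = {"shirt": 1.0, "skirt": 0.4, "coat": 0.4}

def shared_sounds(word, target="shirt"):
    return len(set(word) & set(target))        # crude letter-overlap stand-in

def lemma_activation(word, feedback, w=0.05):
    act = semantic_input[word]
    if feedback:                               # phonology -> lemma feedback
        act += w * shared_sounds(word)
    return round(act, 2)

for fb in (False, True):
    print("feedback" if fb else "feedforward only",
          {word: lemma_activation(word, fb) for word in semantic_input})
# Feedforward only: "skirt" and "coat" tie; with feedback, "skirt" pulls ahead.
```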
14
Q
- DIVA computer model: main goal
- 2 main components
- fx
- 5 steps of how DIVA learns
- somatosensory target representations fx
A
The DIVA Model of Speech Motor Control
- DIVA computer model: Detects speech errors -> correct motor articulation
- 2 main component
- Feedforward control subsystem: produce speech in normal situations
- Feedback control subsystem: use feedback to produce speech in odd situations (ex. pencil in teeth)
- How it learns (see the minimal sketch after step 5)
- 1 present speech to the model (i.e. Auditory target rep = what it should sound like)
- 2 speech sound representation
- Sends motor instructions → produce speech
- Predict what it should sound like (i.e. Auditory target rep)
- 3 produce speech, compare it w/ Auditory target rep
- 4 auditory feedback system detects errors and adjusts the motor instructions
- 5 repeat until perfect
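- A minimal sketch of that learning loop (scalar “sounds” and the learning rate are invented simplifications):

```python
# Toy DIVA-style learning: adjust the feedforward motor command until the
# produced "sound" matches the auditory target (all numbers are made up).
auditory_target = 0.7        # what the syllable should sound like
motor_command = 0.0          # initial feedforward command
learning_rate = 0.5

def produce(command):
    return command           # toy vocal tract: output sound == motor command

for attempt in range(20):
    sound = produce(motor_command)
    error = auditory_target - sound             # auditory feedback detects mismatch
    if abs(error) < 1e-3:                       # "perfect enough" -- stop practicing
        break
    motor_command += learning_rate * error      # correct the feedforward command

print(attempt, round(motor_command, 3))
# After practice the feedforward command alone hits the target, so feedback is
# no longer needed under normal speaking conditions.
```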
- NOTE When practicing the speech, model acquires “somatosensory target representations”
- Somatosensory target rep: how an utterance is expected to feel (ex. in vocal tract)
- Detects error → correct motor system
15
Q
DIVA model cont
- modules/ maps = ?
- 2 large subsystems
- Name
- Location (1st one only)
- Feedback control subsystem
- 2 loops name
- location
- x
- Feedforward control
- fx
- What does it ignore?
- 2 exceptions
- Step 1: activate speech sound map
- Speech sound map
- location
- lesion → ?
- Input = ?
- Step 2: Articulatory velocity and position maps
- Input
- fx
- articulatory score
- Location
- Lesion = ?
- Output =?
- Step 3: speech sound map & AVP map’s branching route
- Location:
- fx:
- Lesion:
- Step 4 Initiation map
- Input:
- Output:
- Location:
- Regulated by?
- Lesion?
- bilateral damage?
- Parkinson’s disease
- cause
- result
- Locked in syndrome:
- lesion location
- result
- How computer helped restore vowel production
A
- There are many modules (i.e. maps)
- maps = gp of neurons
- 2 large subsystems
- Feedforward control subsystem
- Location: FL
- Feedback control subsystem – has 2 loops
- Auditory feedback loop
- Location: temporal lobe
- Somatosensory feedback loop
- Location: parietal lobe
- Feedforward Control
- Feedforward control subsystem: produces speech sound under normal circumstances
- Ignores auditory/somatosensory feedback (unless there is white noise, numbed mouth)
- # 1 Activate “speech sound map”
- Speech sound map: a library of learnt speech sounds/syllables
- Location: left pIFG, and ventral premotor cortex
- (the speech sound map corresponds to the Lemma model’s mental syllabary)
- Lesion = speech apraxia
- Speech apraxia: says “chookun” instead of cushion
- Input pathway (not in figure)
- phonological rep in pSTG/STS → selects a specific speech sound
- # 2 Articulatory velocity and position maps
- input from speech sound map
- has vocal tract representations (larynx, lips, jaw, tongue, palate)
- creates the “articulatory score”: a series of vocal tract gestures for speech
- Location: PMC bilaterally
- PMC lesion → Spastic dysarthria, Speech arrest
- speech production and speech perception activate speech sound map and AVP map
- Outputs from these maps → computer → produce speech
- # 3 speech sound map & AVP map’s branching route
- Location: cerebellum and thalamus
- fx: timing of articulation
- Lesion: Ataxic dysarthria
- # 4 Initiation map
- Input: specific motor commands from speech sound map & AVPM
- Output: “go” signal for speech
- Location: supplementary motor area (SMA)
- Regulated by basal ganglia
- Lesion: speech arrest/ say random consonants
- Bilateral lesion: akinetic mutism → no self-initiated movement or speech
- Ex. Parkinson’s disease
- less Output from BG to SMA
- hypokinetic dysarthria
- Brain–Machine Interface Restores Rudimentary Speech in a Patient with Locked-In Syndrome
- Locked-in syndrome
- Lesion location: Brain stem damaged
- Result: consciousness and cog intact; no motor control except eye movement
- Guenther et al 2009
- Implanted an electrode in the precentral gyrus of a locked-in patient
- Signals sent to computer → speech → sound feedback
- Training improved vowel production