Lec 3/ TB Ch 6 Flashcards

1
Q
  • 5 Types of phonological errors
  • Give ex
    • Phoneme anticipation
    • Anticipatory phoneme addition
    • Phoneme shift
    • Rhyme exchange
    • Consonant cluster exchange
  • How are they organized?
A

Types of phonological errors (Fromkin, 1973) – examples given as correct -> error

  • 1 Phoneme anticipation: a reading list -> a leading list
  • 2 Anticipatory phoneme addition: an early period -> a pearly period
  • 3 Phoneme shift: black boxes -> back bloxes
  • 4 Rhyme exchange: help of junk -> hunk of jeep
  • 5 Consonant cluster exchange: squeaky floor -> fleaky squoor
  • These errors involve single phonemes and gps of phonemes (ex. consonant clusters, rhymes); this suggests words are organized hierarchically
2
Q
  • Lemma model for word production
    • 2 subsystems
    • System 1: 2 parts; 5 steps
    • System 2: 3 parts; 7 steps
A
  • Lemma model for word production
  • 2 main subsystems
    • Lexical selection: identify the best word in the mental lexicon (library)
    • Form encoding: prepare the word’s articulatory shape
  • Big picture of model
    • 2 parts in Lexical selection subsystem
      • Conceptual prep
      • Lemma selection
    • 3 parts in form encoding
      • Retrieving morphemic phonological codes
      • Prosodification/syllabification
      • Phonetic encoding
3
Q
  • Conceptual Focusing and Perspective-Taking
  • Step 1
  • Lexical concept
  • 3 Factors that influence selecting lexical concepts
    • define each factor
    • 2 types of perspectives
    • theory of mind
  • What happens during word production? (other concepts?)
A

Conceptual Focusing and Perspective-Taking

  • 1 put your idea into lexical concepts
    • Lexical concept: integrates various semantic features for the particular word
    • Factors influencing lexical selection
      • Cross-linguistic variation
      • POV
      • Subjective construal
    • I Cross-linguistic variation: diff languages don’t always have equivalent words; some have more specific words
      • Words/message are specific/tuned to the language
    • II: POV/ perspective taking
      • 1 Deictic perspective taking: speaker’s perspective
        • I see a chair w/ a ball to the left of it
      • 2 Intrinsic perspective taking: a nearby object’s perspective (ex. chair)
        • I see a ball w/ a chair right of it
      • Mentalizing/theory of mind: the ability to imagine what others are thinking
    • III: Subjective construal -
      • Language depends on the speaker’s goal/interpretation
        • same object can be “dog/animal/pet/pest”
    • NOTE: during word production, multiple lexical concepts are activated in parallel; only the target is activated more
      • Ex. target word = horse; other concepts (ex monkey) less active
4
Q
  • Lemma
  • What does it contain
  • 3 step process
  • How to determine how likely a lemma is to be selected?
  • What is the major rift in the lemma model
  • What phenomenon can it explain?
A
  • Lemma selection (lower lv nodes)
  • Lemma: abstract word node (similar to arbitrary #)
    • b/w semantics and phonology
    • Contains morphosyntactic features (grammar category, gender class etc)
  • Process
    • 1 In conceptual preparation phase, the target lexical concept is activated the most; related lexical concepts are activated less
    • 2 This activation pattern propagates to the lemma lv
      • So, the lemma that matches the target lexical concept is activated the most; lemmas that match related concepts = less
    • 3 then the correct lemma is selected
  • During processing, likelihood of a lemma being selected = its degree of activation / total activation of all engaged lemmas (worked ratio below)
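  • A worked version of this ratio (a Luce-style choice rule; the activation values are invented for illustration):

        P(\text{select } l_i) = \frac{a(l_i)}{\sum_j a(l_j)}

    • Ex. a(horse) = 6, a(pony) = 2, a(donkey) = 2 → P(select horse) = 6 / (6 + 2 + 2) = 0.6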
  • x
  • Interlude: Crossing the Rift
  • “major rift”: gap b/w the 2 subsystems (lexical selection, form encoding)
    • Tip of the tongue: the sound structure of the desired word is partially/wholly unavailable, even though you know the meaning and morphosyntactic feature
    • This rift is magnified in brain-damaged patients w/ anomia
      • There is chronic blocking when retrieving the phonological form of words
5
Q

Form encoding subsystem

  • morpheme
  • phoneme
  • lexicon
  • lexical
  • Retrieving Morphemic and Phonological Codes
    • What is extracted? (2 things)
    • 3 main assumptions
    • Evidence for assumption 2 - Study: word-f effect is at the phonological lv
      • Hypothesis: lemma lv vs phonological lv
      • 3 conditions
      • Results
      • Age of acquisition alternative explanation
  • Evidence for assumption 3: banana
A

Retrieving Morphemic and Phonological Codes

  • morpheme: smallest unit of meaning (ex un / break/ able)
  • Phoneme: smallest unit of sound
  • Lexicon = word library
  • Lexical = words
  • Stage 1: retrieve morphemic phonological code of the target word
    • access the morphemic representation → “spelling out” its segmental phonemic content
    • Ex. “horses”
      • Selected lemma = horse
      • Activates 2 morphemes, <horse> and the plural suffix <-s>, and their sound structures
  • MP 1: only the selected lemma goes through the “rift” b/w the lexical selection subsystem and form encoding subsystem
  • MP 2: retrieval of morphemic phonological codes is influenced by word frequency
    • Word frequency effect: phonological form of a high-f word (ex. dog) is retrieved faster than that of a low-f word (ex. broom)
  • Study:
    • Homophones: diff words that sound the same
      • Ex. high f adj “more” vs low f adj “moor”
    • H1: if the word-f effect is at the lemma lv, the time to produce more will be sig faster than that of moor
    • H2: If the effect is at the phonological lv, the time to produce 2 words is the same
      • Reason: they sound the same, so low-f “moor” is accessed just as fast as “more”
  • 3 types of target words
    • Low f homophones (ex. ~to moor vs more)
    • Low-f non-homophone controls, matched in f to the low-f homophone itself (ex. ~to march vs moor)
    • Non-homophone controls matched in f to the high-f twin (ex. much vs moor)
      • Much has similar f to more
  • Results: Low-f homophones (moor) were produced just as fast as high-f controls (much)
    • Supports H2: the word-f effect arises after the “rift”, at the phonological lv of the model
  • Alt explanation:
    • Age of acquisition: some studies show it is easier to access words that are learnt early in life
  • MP 3: morphemic phonological codes are retrieved incrementally, starting w/ the initial segment
    • Ppl were faster at naming a banana when they knew beforehand the target word began w/ “ba”
    • But not when they knew beforehand that the target word ended w/ “na”
6
Q
  • Form encoding subsystem
  • Stage 2: Prosodification and Syllabification - meaning
    • input
    • process
    • output
    • Syllable boundaries differ from morpheme boundaries
      • Ex horses (describe)
    • Syllable boundaries can overlap into word boundaries
      • Ex. escort us (describe)
    • Is syllabification stored in LTM or computed on the fly?
  • Stage 3: Phonetic Encoding and Articulation
    • mental syllabary
      • What’s the advantage of using this?
    • 4 step process
  • Self-monitoring stage
    • What it does
    • external vs internal feedback loop (ex. yellow; vs ye)
A

Prosodification and Syllabification → bundle sounds into syllables

  • Input: morphemic phonological codes
  • Process: sounds are bundled into syllables
  • Output: phonological word
    • There are many cases where syllable boundaries differ from morpheme boundaries
      • Ex. “horses”
        • Bimorphemic & bisyllabic
        • Morphemes: <horse>, <-s> (green)
        • Syllables: /hor/, /ses/ (pink)
    • Syllabification can transcend word boundaries
      • Ex. “He’ll escort us”
        • Morpheme: “escort” “us”
        • Syllable: /e/, /scor/, /tus/
  • MP: syllabification is not stored in LTM; it is computed in real time (see the toy sketch below)
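  • A toy Python sketch of such on-the-fly syllabification (hypothetical code and a tiny invented onset inventory, not the textbook’s algorithm), reproducing the “escort us” → /e/ /scor/ /tus/ example:

        import re

        VOWELS = "aeiou"
        ONSETS = {"s", "sc", "scr", "st", "t", "r", "c"}  # invented inventory of legal onsets

        def split_cluster(cluster):
            """Split a between-vowel consonant cluster into (coda, onset),
            giving the next syllable the longest legal onset (maximal onset)."""
            for i in range(len(cluster)):
                if cluster[i:] in ONSETS:
                    return cluster[:i], cluster[i:]
            return cluster, ""                        # no legal onset -> all coda

        def syllabify(phonemes):
            tokens = re.findall(f"[{VOWELS}]+|[^{VOWELS}]+", phonemes)
            sylls, onset = [], ""
            for i, tok in enumerate(tokens):
                if tok[0] in VOWELS:
                    sylls.append(onset + tok)         # a vowel opens a new syllable
                    onset = ""
                elif i == 0:
                    onset = tok                       # utterance-initial cluster = onset
                elif i == len(tokens) - 1:
                    sylls[-1] += tok                  # utterance-final cluster = coda
                else:
                    coda, onset = split_cluster(tok)  # split a medial cluster
                    sylls[-1] += coda
            return sylls

        print(syllabify("escortus"))  # ['e', 'scor', 'tus'] -- crosses the word boundary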
  • Phonetic Encoding and Articulation
  • In this process, we use “mental syllabary”
    • Mental syllabary: highly practiced syllables
  • These overlearnt syllables are stored; don’t need to be recomputed each time
  • Ex. to produce the word “horses”
    • 1 Input: phonological code w/ syllable boundaries
    • 2: phonetic encoding: matches each unit w/ the corresponding node in the syllabary
    • 3: Articulatory score: instructions to combine the syllables (articulate everything smoothly)
    • 4: sent to motor system → speech
  • Self-Monitoring: detect and correct our own speech errors
  • 2 self-monitoring feedback loops: external & internal
    • Ex. speech error “yellow”
      • Case 1: entrance to yellow…er, to gray
      • Case 2: we can go straight to the ye-…to the orange dot
    • Case 1: external FL
      • said the whole word “yellow” -> paused to repair error
      • hear the mistake → correct it
    • Case 2: internal FL
      • Speaker produced the first syllable “ye” -> paused to repair error
      • internally identify mistake b4/during articulation
7
Q
  • Indefrey and Levelt 2004 – brain mapping studies
    • 4 spoken word production tasks
    • Lead-in process for each of the 4 tasks
    • There are 15 brain regions that are jointly activated by both tasks → what does this suggest?
A
  • Neurobiological Evidence for the Model
  • Indefrey and Levelt 2004 – brain mapping studies
  • Meta-analysis: examine brain activation in diff areas
  • 4 spoken word production tasks
    • Picture naming
    • Associative word generation: Ex. say diff types of animals
    • word reading
    • pseudoword reading (ex. neem)
  • These tasks have distinct lead-in processes that must be completed b4 the core processes of the word production system can proceed
    • Pic naming: needs visual object recognition
    • Associative word generation: needs recognition of visually or auditorily presented stimulus, and strategic mem search
    • Word reading: need visual recog
    • Pseudoword reading: need grapheme recog and conversion of graphemic to phonological rep
  • The tasks differ in which core stages of word production they recruit (based on Lemma model)
  • Pic naming and associative word generation share the core processes of word production
  • Authors identified 15 regions that are jointly activated by both tasks
    • The neuroanatomical overlap suggests these 15 regions support the core word production processes shared by the 2 tasks
8
Q
  • brain regions involved in each stage of the Lemma model
    • Stage 1 lexical selection
      • what do lexical concepts do?
        • Location?
        • Why do they show low BOLD?
      • Competition b/w lexical concepts
        • location that resolves concepts?
        • Competition lv among semantically related vs unrelated objects
        • severe left IFG lesion → result?
      • Perspective taking
        • spatial Perspective taking → location?
        • social Perspective taking/ theory of mind → location?
A

Conceptual Focusing and Perspective-Taking

  • brain areas involved in each stage of the Lemma model
  • Stage 1: Lexical selection (our thoughts)
    • 1 The various semantic (word meaning) features in the target word are located across brain
      • Lexical concepts: connect those features
        • Location: ATLs (near air-filled cavities → susceptibility artifacts reduce the BOLD signal)
    • 2 competition b/w lexical concepts
      • left pIFG (Broca’s area) resolves these conflicts
      • Schnur et al
        • semantically related objects (ex. truck, car, bike) → high competition (at Broca’s area)
        • semantically unrelated objects (ex. truck, food, dog) → low competition
      • Study: severe left IFG lesion → most interference/competition → word production deficits
    • 3 perspective taking
      • spatial perspective taking → left inferior parietal lobe
      • social perspective taking/mentalizing/theory of mind → many regions
9
Q
  • brain regions involved in each stage of the Lemma model
  • Lemma selection
    • Location
    • Fx
    • Left posterior MTG
      • Where was it mentioned
      • fx
    • Studies
      • Stimulation left mid/posterior MTG → ?
      • left mid/posterior MTG lesion → ?
      • Left TP lesions → ?
      • Lesion in Anterior area of left IT region → ?
      • Lesion in Posterior area of left IT region (aka IT+) → ?
      • Verb retrieval location → ?
A

Lemma Selection

  • 2 tasks activate it: pic naming and associative word generation
  • Location: left mid MTG
  • activated 200 ms post-stimulus onset
  • Left mid MTG: map meaning to sound during production
  • Left posterior MTG (the opposite mapping): “lexical interface” in the ventral pathway of the Dual Stream Model of speech perception: maps sound to meaning during comprehension
    • Boatman et al 2000
      • Stimulation of left mid/posterior MTG
        • no effect on mapping semantics to phonology (production)
        • impairs mapping phonology to semantics (comprehension)
  • Other studies - noun retrieval
    • left mid/posterior MTG lesion → poor lexical retrieval
    • temporal pole (TP); inferotemporal (IT) cortex
      • Left TP lesions: can’t access nouns for persons (ex. Obama)
      • Lesion in Anterior area of left IT region: can’t access nouns for animals (ex. horse)
      • Lesion in Posterior area of left IT region (aka IT+): can’t access nouns for tools (ex. hammer)
        • Ex. Patient can’t name a skunk, but can describe it “it’s smelly, black and white”
  • Verb retrieval location
    • left IFG, left inferior parietal lobe, mid/posterior MTG
10
Q
  • Happy vs neutral faces → which do we name faster
  • Gallegos and Tranel 2005 - naming happy vs neutral celebs
    • 3 groups of ppl
    • Results:
      • naming accuracy
      • RT data
  • Amygdala’s role
A

Box 6.3: Happy Faces Are Named Faster than Neutral Faces

  • Gallegos and Tranel 2005
    • Select 60 famous celebrities
      • Obtained 2 images of each person’s face: 1 happy, 1 neutral
      • Gave pic naming task to 3 gps of participants:
        • A: normal ppl
        • B: left anterior temporal lobectomy (LTL)
        • C: right anterior temporal lobectomy (RTL)
    • Results
      • Naming accuracy: no effect of emo expression
        • Patients did worse than normal ppl
      • RT data
        • Happy faces are named faster than neutral faces across 3 gps
  • Potential neural mechanism
    • Amygdala – emo processing → enhance top-down processing
11
Q
  • brain regions involved in each stage of the Lemma model
  • Retrieving Morphemic and Phonological Codes
    • Phonological codes; Indefrey and Levelt (2004) – Meta-analysis
      • subtraction method
      • results: 4 commonly activated areas
      • when was it engaged
      • pSTG/pSTS corresponds to ? in dual stream model
    • Morphemic phonological codes
      • Location
      • What is it regulated by?
      • Lesion → ?
      • Corina et al 2010 - Neurostimulation on patients while naming objects
        • 6 errors
          • semantic paraphasia
          • circumlocutions
          • phonological paraphasia
          • neologisms
          • performance errors
          • no-response errors
        • Results
          • left mid-to-posterior STG/STS and MTG stimulated → ?
          • 2 regions for phonological retrieval
A
  • Retrieving Morphemic and Phonological Codes
  • Indefrey and Levelt (2004) – Meta-analysis
    • Subtraction: “data w/ phonological code retrieval” − “data w/o phonological code retrieval” (schematic at the end of this list)
    • Results: Common activation areas
      • left pSTG/pSTS (Wernicke’s area)
      • left posterior MTG
      • left anterior insula
      • right SMA
    • engaged from 200-400 ms post-stimulus onset
  • This is also the “phonological network” in the Dual Stream Model (pSTG/pSTS)
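  • Schematic of the subtraction logic (my paraphrase, not the authors’ exact contrast):

        R_{\text{phon. code retrieval}} \approx \text{Active}(\text{tasks w/ code retrieval}) \setminus \text{Active}(\text{tasks w/o code retrieval})

    • i.e. keep the areas activated by tasks that require phonological code retrieval, minus those also activated when no retrieval is needed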
  • x
  • Morphemic phonological codes
    • Location: left posterior ST region
      • modulated by word frequency; not by concept familiarity or word length
    • Lesion in this region -> Wernicke’s aphasia
    • Corina et al 2010
      • Neurostimulation on patients while naming objects
      • 6 categories of errors
        • 1 semantic paraphasia (ex. saw tiger -> said lion)
        • 2 circumlocutions (ex. saw chair -> said sit down)
        • 3 phonological paraphasia (ex. saw wagon -> say ragon)
        • 4 neologisms (ex. saw fish -> say herp)
        • 5 performance errors (ex. slurred, stutter, articulate imprecisely)
        • 6 no-response errors ( = no utterance)
      • Result:
        • Phonological paraphasia and neologism
          • least common
          • Happen when left mid-to-posterior STG/STS and MTG stimulated
        • FP regions, area Spt → phonological retrieval
12
Q

brain regions involved in each stage of the Lemma model

  • Prosodification and Syllabification
    • Location
  • Phonetic Encoding and Articulation
    • What activity do these brain regions also engage in?
  • Self monitoring
    • 2 feedback loops
    • Location for both?
A

Prosodification and Syllabification

  • Syllabification is engaged in all 4 types of tasks, whether performed overtly or covertly
    • Phonetic encoding is only engaged when tasks are done overtly
  • Authors examined brain regions activated in word production experiments w/ overt and covert responses
    • Only left posterior IFG (Broca’s area) was activated in both → linked to syllabification
    • Time: 400-600 ms post-stimulus onset

Phonetic Encoding and Articulation

  • Plausible regions linked w/ phonetic encoding and articulation
    • These brain regions are also activated in speech production that lacks phonemic content
  • Self-Monitoring
  • 2 feedback loops
    • External & internal loop: posterior ST region; bilateral
    • Internal loop: input = output of syllabification
13
Q
  • Challenges of Lemma model
    • Main idea of lemma model
    • 2 major challenges
A
  • Main idea – Lemma is a bridge b/w semantic and phonological structures of words
  • Challenges
    • # 1 cannot explain why some patients have written word production errors but not spoken word production errors
      • Possibility 1: Deficits at the lemma lv
        • Impossible → a lemma-lv deficit would mean the person can neither write nor speak
      • Possibility 2: they can access lemmas, but not modality-specific lexical representations (words)
        • Impossible → both are related to semantics
    • # 2 Lemma model = discrete processing: processing is feedforward, w/o feedback
    • Alternative: processing is interactive (feed forward + feedback)
    • Evidence for interactive processing
      • Mixed errors: words are semantically and phonologically related to the target
        • Ex. say skirt instead of shirt
      • Speech errors produce real words more often than pseudowords (lexical bias), as in the toy sketch below
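  • A toy spreading-activation sketch of interactive processing in Python (hand-set similarity numbers; a crude illustration, not Dell’s actual model):

        # Feedback from shared phonemes back to word nodes lets a "mixed" neighbor
        # like skirt (semantically AND phonologically related to the target shirt)
        # pile up extra activation, making mixed errors disproportionately likely.
        SEMANTIC_SIM = {"shirt": 1.0, "skirt": 0.6, "sweater": 0.6, "shark": 0.0}
        PHONO_SIM = {"shirt": 1.0, "skirt": 0.6, "sweater": 0.0, "shark": 0.6}

        act = dict(SEMANTIC_SIM)            # feedforward pass: semantics -> word nodes
        for _ in range(3):                  # a few interactive feedback sweeps
            act = {w: act[w] + 0.5 * PHONO_SIM[w] * act["shirt"] for w in act}

        print(sorted(act.items(), key=lambda kv: -kv[1]))
        # shirt wins, but skirt now beats sweater and shark -- a purely feedforward
        # (discrete) model would leave skirt and sweater tied at their semantic level.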
14
Q
  • DIVA computer model: main goal
  • 2 main components
    • fx
  • 5 steps of how DIVA learns
  • somatosensory target representations fx
A

The DIVA Model of Speech Motor Control

  • DIVA (Directions Into Velocities of Articulators) computer model: detects speech errors -> corrects motor articulation
    • 2 main component
      • Feedforward control subsystem: produce speech in normal situations
      • Feedback control subsystem: use feedback to produce speech in odd situations (ex. pencil in teeth)
    • How it learns process
      • 1 present speech to the model (i.e. Auditory target rep = what it should sound like)
      • 2 speech sound representation
        • Sends motor instructions → produce speech
        • Predict what it should sound like (i.e. Auditory target rep)
      • 3 produce speech, compare it w/ Auditory target rep
      • 4 auditory feedback system detects errors and adjusts motor instructions
      • 5 repeat until perfect (see the toy loop at the end of this card)
  • NOTE: when practicing speech, the model acquires “somatosensory target representations”
    • Somatosensory target rep: how an utterance is expected to feel (ex. in vocal tract)
    • Detects error → correct motor system
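  • A minimal Python sketch of this learning loop (the numbers and the vocal-tract stand-in are invented; DIVA itself is a detailed neural network model):

        # Toy DIVA-style learning: tune a feedforward motor command until the
        # produced speech matches the auditory target representation.
        auditory_target = [500.0, 1500.0]   # e.g., target formant frequencies (made up)
        motor_command = [0.0, 0.0]          # feedforward command, learned over trials
        LEARNING_RATE = 0.5

        def vocal_tract(command):
            """Stand-in for articulating and then hearing your own speech."""
            return [c + 100.0 for c in command]   # assumed motor-to-sound mapping

        for trial in range(30):
            produced = vocal_tract(motor_command)                        # step 3: speak
            errors = [t - p for t, p in zip(auditory_target, produced)]  # step 4: detect errors
            motor_command = [c + LEARNING_RATE * e                       # step 4: adjust commands
                             for c, e in zip(motor_command, errors)]
            if max(abs(e) for e in errors) < 1.0:                        # step 5: repeat until (near) perfect
                print("converged after", trial + 1, "trials:", produced)
                break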
15
Q

DIVA model cont

  • modules/ maps = ?
  • 2 large subsystems
    • Name
    • Location (1st one only)
  • Feedback control subsystem
    • 2 loops name
    • location
  • x
  • Feedforward control
    • fx
    • What does it ignore?
    • 2 exceptions
    • Step 1: activate speech sound map
      • Speech sound map
      • location
      • lesion → ?
      • Input = ?
    • Step 2: Articulatory velocity and position maps
      • Input
      • fx
      • articulatory score
      • Location
      • Lesion = ?
      • Output =?
    • Step 3: speech sound map & AVP map’s branching route
      • Location:
      • fx:
      • Lesion:
    • Step 4 Initiation map
      • Input:
      • Output:
      • Location:
        • Regulated by?
      • Lesion?
        • bilateral damage?
  • Parkinson’s disease
    • cause
    • result
  • Locked in syndrome:
    • lesion location
    • result
  • How computer helped restore vowel production
A
  • There are many modules (i.e. maps)
    • maps = gp of neurons
  • 2 large subsystems
    • Feedforward control subsystem
      • Location: frontal lobe
    • Feedback control subsystem – has 2 loops
      • Auditory feedback loop
        • Location: temporal lobe
      • Somatosensory feedback loop
        • Location: parietal lobe
  • Feedforward Control
  • Feedforward control subsystem: produces speech sound under normal circumstances
  • Ignores auditory/somatosensory feedback (unless there is white noise, numbed mouth)
  • # 1 Activate “speech sound map”
    • Speech sound map: a library of learnt speech sounds/syllables
      • Location: left pIFG, and ventral premotor cortex
      • (corresponds to the lemma model’s mental syllabary)
      • Lesion = speech apraxia
        • Speech apraxia: says “chookun” instead of cushion
    • Input Pathway (not in figure)
      • phonological rep in pSTG/STS assigns a specific speech sound
  • # 2 Articulatory velocity and position maps
    • input from speech sound map
    • has vocal tract representations (larynx, lips, jaw, tongue, palate)
    • creates “articulatory score” = series of vocal tract gestures for speech
    • Location: PMC bilaterally
    • PMC lesion → Spastic dysarthria, Speech arrest
    • speech production and speech perception activate speech sound map and AVP map
      • Outputs from these maps → articulatory synthesizer (computer) → produce speech
  • # 3 speech sound map & AVP map’s branching route
    • Location: cerebellum and thalamus
    • fx: timing of articulation
    • Lesion: Ataxic dysarthria
  • # 4 Initiation map
    • Input: specific motor commands from speech sound map & AVPM
    • Output: “go” signal for speech
    • Location: supplementary motor area (SMA)
      • Regulated by basal ganglia
    • Lesion: speech arrest/ say random consonants
    • Bilateral lesion: Akinetic mutism → no self-initiated speech or action (“no free will”)
  • Ex. Parkinson’s disease
    • less Output from BG to SMA
    • hypokinetic dysarthria
  • Brain–Machine Interface Restores Rudimentary Speech in a Patient with Locked-In Syndrome
  • Locked-in syndrome
    • Lesion location: Brain stem damaged
    • Result: consciousness and cog intact; no motor control except eye movement
      • Guenther et al 2009
        • Implanted an electrode in the precentral gyrus of a locked-in patient
        • Signals sent to computer → speech → sound feedback
        • Training improved vowel production
16
Q
  • Spoonerism
  • How does it happen: 3 steps

DIVA Model; Feedforward control (cont) - easy version

  • Feedforward control fx
  • speech sound map
    • fx
    • location
  • AVP maps
    • fx
    • location
  • Initiation map
    • fx
    • location
A
  • Box 6.5: What the Brain Does Before the Tongue Slips
  • Spoonerism: speech errors where initial consonants of 2 words are exchanged
    • Ex. normal: you have missed all my history lec
    • Ex. Spoonerism: you have hissed all my mystery lectures
  • Spoonerism what happens:
    • 2 speech motor programs competing
    • The wrong one wins
    • In the DIVA model, initiation map sends out incorrect program
  • Feedforward control subsystem of DIVA model underlies producing well-learnt speech sounds under normal circumstances
    • Speech sound map: library of syllables and phonemes
      • Location: left pIFG and ventral premotor cortex
    • Articulatory velocity and position maps: set up vocal tract motor commands
      • Location: ventral PMC bilaterally
    • Initiation map: “go” signal that releases the motor commands
      • Location: SMA bilaterally
      • Regulated by basal ganglia
17
Q
  • DIVA model cont
  • Forward model
  • Inverse model
  • the 2 feedback circuits
  • x
  • Auditory feedback control
  • speech sound map → 2 places
  • # 1 Auditory target map
    • fx
    • Location:
  • # 2 “speech sound map” to auditory target map branching route
    • Location:
  • # 3 Auditory state map: fx
    • fx
    • Location:
  • # 4 Auditory error map:
    • fx
    • Location:
    • Top-down: how is it inhibited?
    • Bottom-up: how is it excited?
  • # 5 Feedback control map
    • fx
    • Similar to an inverse/ forward model?
    • Location:
A
  • Forward and Inverse Models in DIVA model
  • Forward model: motor commands → sensations
    • speech sound map to “auditory and somatosensory parts”
  • Inverse model: Sensations → motor commands
    • auditory/somatosensory parts to “articulatory velocity and position maps”
  • 2 feedback circuits
    • 1 auditory feedback circuit
    • 2 somatosensory feedback circuit
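  • A toy forward/inverse model pair in Python (illustrative only; DIVA’s real maps are learned neural mappings, and the “2 * command” vocal tract is invented):

        def forward_model(command):
            """Forward model: motor command -> predicted sensation."""
            return 2.0 * command              # assumed vocal-tract mapping

        def inverse_model(sensory_error):
            """Inverse model: sensory error -> corrective motor command."""
            return sensory_error / 2.0        # inverts the assumed mapping

        command = 10.0
        predicted = forward_model(command)    # auditory target: expect 20.0
        actual = 17.0                         # what the auditory state map reports
        error = predicted - actual            # auditory error map: mismatch of 3.0
        command += inverse_model(error)       # feedback control map corrects the command
        # if the same -3.0 perturbation persists, the next utterance comes out at
        # 2 * 11.5 - 3.0 = 20.0, i.e. back on the auditory target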
  • Auditory Feedback Control
  • speech sound map → AVP maps + auditory target map
  • # 1 Auditory target map: send auditory predictions (aka auditory target representations)
    • Location: posterior STG and PT bilaterally
      • Same place as Dual stream model’s “phonological network”
  • # 2 “speech sound map” to auditory target map branching route
    • Location: cerebellum
  • # 3 Auditory state map: auditory input
    • Location: PAC and PT bilaterally
  • # 4 Auditory error map: detect discrepancies b/w anticipated and actual sounds
    • Location: pSTG and PT bilaterally
    • Top-down: Auditory target map inhibits auditory error map
    • Bottom-up: Auditory state map activates auditory error map
      • Fits w/ Lemma Model’s “external” self-monitoring loop (hear own voice)
    • Fits the Dual Stream Model’s sensorimotor interface location
  • # 5 Feedback control map
    • Updates articulatory commands in AVP maps
    • Similar to an inverse model
    • Location: right ventral premotor cortex
18
Q

DIVA model cont

Somatosensory Feedback Control

  • # 1 somatosensory target map
    • input
    • location
    • fx
  • # 2 Somatosensory state map:
    • fx
    • Location:
  • # 3: Somatosensory error map:
    • Fx:
    • Location:
      • top-down process:
    • bottom up process:
    • Errors sent to ?? → AVP maps
  • DIVA model vs Lemma monitoring/feedback system difference
  • If the impaired DIVA model stutters → 2 ways to improve stuttering
  • Insula fx
A

Somatosensory Feedback Control

  • # 1 somatosensory target map:
    • Speech Sound Map activates somatosensory target rep
    • Location: ventral somatosensory cortex, anterior supramarginal gyrus
    • fx: Predict sensations to be felt in vocal tract
  • # 2 Somatosensory state map:
    • Receive what sensations are felt in vocal tract
    • Location: ventral somatosensory cortex
  • # 3: Somatosensory error map:
    • Fx: determine if utterance was produced correctly
    • Location: ventral somatosensory cortex, anterior supramarginal gyrus
    • top-down process: somatosensory target map inhibits somatosensory error map
    • bottom-up process: somatosensory state map excites somatosensory error map
    • Errors sent to feedback control map → AVP maps
  • NOTE
    • DIVA model: auditory and somatosensory feedback control
      • can repair both tiny and big errors
    • lemma model: only auditory self-monitoring
      • can only repair big errors
  • Box 6.7: Using the DIVA Model to Simulate Stuttering
    • The model’s fluency improved when it was given more time to generate output, and when background noise prevented error detection
  • Insula → speech motor control
19
Q
  • auditory and somatosensory feedback circuits send info to ???
  • DIVA model final output: AVP maps → 2 steps
A

Are the Auditory and Somatosensory Feedback Circuits Integrated in the Planum Temporale?

  • auditory and somatosensory feedback circuits converge in the feedback control map
  • the DIVA model otherwise keeps these 2 components separate
  • but auditory and somatosensory feedback signals may be integrated in the PT

Peripheral Mechanisms of Speech Production

  • DIVA model final output: AVP maps send motor commands → subcortical nuclei & CN → articulatory muscles
  • The subcortical (brainstem) nuclei give rise to the cranial nerves
20
Q

Lec

  • Is speech or writing more dominant/evolved?
  • Info flow: high order can feedback to low lv
    • Example “abolish, demolish” versus “embarrass, malpractice” → explain
A

What about flow from orthography/phonology back down to the perceptual system?

  • Both speech and writing are equally evolved
  • View 1: speech is predominant over writing: we hear and speak before we write
  • View 2: opp; vision is used in writing, and a lot of brain tissue is devoted to vision -> writing is quite evolved too
  • X
  • Info flow: higher-order orthography (spelling)/phonology (incl lexical and semantic rep) can feed back to the low-lv perceptual system
  • E.g., perceiving the words “abolish, demolish” versus “embarrass, malpractice”
    • word-final /sh/ vs /s/ sounds
    • Abolish and demolish = both have the –ish, and it sounds the same
    • Embarrass and malpractice = ending sounds the same but they are spelt differently -ss vs. –ice
21
Q

Lec

  • Samuel (2001)
  • 3 conditions
    • no consonant → result?
    • neutral consonant (linguistic context) → result?
    • mispronounced consonant → result
  • Conclusion
A

Samuel (2001) shows that there is also top-down flow to perceptual processing

  • 1 Presented words on a /s/ to /sh/ continuum
    • Algo blends the /s/ and /sh/ sounds in equal steps (see the blending sketch at the end of this card)
  • 2 there are 3 conditions
    • A: No consonant condition (baseline)
      • Presented /s/ and /sh/ continuum items one at a time
      • Show categorical perception curve
    • B: Neutral consonant
      • Present the /s/ and /sh/ continuum w/ neutral consonant (i.e. prev consonant is not predictive of whether the next sound is /s/ or /sh/)
      • For the /s/ context, the curve is higher than for /sh/
        • This small offset doesn’t matter; the continuum is not perfectly balanced
    • C: Mispronounced vs neutral consonant
      • Curve collapsed closer together (less difference compared to B)
    • Perceivers hear sounds in a more categorical way when they occur in the context of a neutral consonant
    • This suggests the more language-like the stimulus, the more apparent the S-shaped curve, and the more apparent the pre-existing biases in the stimuli
      • This effect goes down in condition C
        • The S-shaped curve is flatter in the mispronounced/neutral consonant panel compared to the other 2
      • Linguistic context can:
        • 1 Shape how you hear the same sound
        • 2 Boost the categorical perception effect beyond hearing the same sound in isolation
        • x
    • Phonological representations differ b/w the semantic system and the motor system
      • Motor: articulation
      • Semantic: understanding the knowledge carried by the sound
  • Stored in diff brain regions
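  • Sketch of building such a continuum by linear interpolation of two waveforms (my guess at “blends in equal steps”; Samuel’s actual stimulus construction was more sophisticated):

        import numpy as np

        s_wave = np.random.randn(8000)    # placeholder waveform for /s/
        sh_wave = np.random.randn(8000)   # placeholder waveform for /sh/

        N = 7                             # number of continuum steps
        continuum = [(1 - k / (N - 1)) * s_wave + (k / (N - 1)) * sh_wave
                     for k in range(N)]   # step 0 = pure /s/, step N-1 = pure /sh/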
22
Q

Lec

  • causal relationship b/w sound and perception
    • TMS study method/ result

Spoken word production circuitry

  • 8 steps
  • 9th step
A

sound & motor

Perception and Production are Intimately Intertwined (causal relationship)

  • Use TMS to stimulate 2 diff parts of brain, each for diff motor representations
    • Ex. labial (lips) vs dental (tongue behind teeth)
    • 1 Target M1 w/ TMS
    • 2 Manipulation: ppl perceive 2 diff speech sounds
      • /ba, pa/ (lips) vs /da, ta/ (tongue)
    • 3 They need to press a button indicating whether it is /ba/ /pa/ vs. /da/ /ta/
      • If there’s noise -> lowers accuracy
  • Ex. if you stimulate the lips’ M1 area -> affects what you perceive
  • This shows ability to perceive speech is influenced by ability to produce speech
  • IOW: perception and production are tied together

Perception and understanding -> more distinct

  • They interact, but are distinct
  • Their knowledge is stored in diff regions
  • The diff kinds of knowledge relate to one another
  • Differences -> more cooperation

Spoken word production circuitry

  • A lot of brain is processing language
  • many brain regions are involved; info flows rapidly
  • 1 See picture
  • 2 Identify word associated w/ pic
  • 3 Retrieve word
  • 4 Identify a spoken target to emit
  • 5 Convert phonological code into output
  • 6 Produce the right segments, syllables
  • 7 Code for phonetics -> send to motor system
  • 8 Produce sound
  • Many steps; they are related to one another, but vary as well
  • 9 Self-monitoring
    • Key in language processes
    • Know when you make a mistake, rectify it
    • Ex. talk in noisy env -> you adjust how loud and clear you speak so others can understand
23
Q
  • Lec
  • Lemma
  • morphosyntactic feature
A

Lemma (very important)

  • Lemma: an abstract word representation that help us map b/w various representations (ex. semantics, phonology, morphosyntactic features)
  • morphosyntactic features
    • Ex. grammar
    • Ex. combine morphemes to produce new word
      • Ex. farmer = farm + er
        • “er” usually refers to the person
  • The lemma is the node that links a particular meaning to the word’s form
  • Selecting the Lemma is one of the earliest stages in speech production

Are there independent neural representations for producing different grammatical categories?

  • There is (noisy) evidence that some patients may suffer from selective damage to their ability to produce either nouns or verbs.
  • How might the neural code be organized?
    • One account: formal “grammar-specific” brain regions.
    • Issue: Grammatical knowledge impairments tend to correlate with semantic category impairments.
      • Grammar and semantics are distinct knowledge
      • But they are also related
    • Stay tuned for an alternative view next class!