L&C, W2 Flashcards
Spoken word production
• There are three stages involved in spoken word production (Griffin & Ferreira, 2006)
1. Conceptualisation (pre-verbal): This stage involves determining what you would like to say > this is a pre-verbal stage so we are not using any words here (What you want to express)
2. Formulation: Involves translating the pre-verbal message into a linguistic form (how to express it) > formulation involves several steps
i. Lexicalisation: Choosing the words you want to say (lemmas > meaning of the words + lexemes > sounds of the words)
ii. Syntactic planning: putting words together to create a sentence
iii. Phonological encoding: turning words into actual sounds
iv. Phonetic planning: Refers to how you will actually pronounce the words
3. Articulation: this is where you actually speak and say something (actually expressing it)
The WEAVER++ model adds self-monitoring to these stages:
4. Self-monitoring: Can listen to yourself while you speak to help you self-correct if you say the wrong thing > involves internal monitoring (before speaking you monitor what you plan to say) + external monitoring (during speech)
- Evidence supporting this model comes from speech errors, picture-naming + picture-word interference and tip of the tongue states
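The staged architecture above can be sketched as a toy pipeline. Everything below (the mini-lexicon, the phoneme strings, the function names) is invented for illustration; real models such as WEAVER++ are far more detailed:

```python
# Toy sketch of the production stages plus self-monitoring.
# The lexicon entries and "phoneme" strings are invented.

LEXICON = {
    "CAT": ("cat", "k-ae-t"),        # concept -> (word, phonological form)
    "SLEEP": ("sleeps", "s-l-iy-p-s"),
}

def conceptualise():
    """Stage 1: a pre-verbal message -- concepts, not words."""
    return ["CAT", "SLEEP"]          # roughly, "the cat sleeps"

def formulate(message):
    """Stage 2: lexicalise each concept and encode its phonology."""
    return [LEXICON[concept][1] for concept in message]

def articulate(phonetic_plan):
    """Stage 3: produce the overt speech (here, just a string)."""
    return " / ".join(phonetic_plan)

def self_monitor(planned, produced):
    """Self-monitoring: does the output match the internal plan?"""
    return planned == produced

plan = formulate(conceptualise())
print(articulate(plan))  # k-ae-t / s-l-iy-p-s
```

Self-monitoring here is just a comparison of plan and output, standing in for both internal (pre-speech) and external (during-speech) monitoring.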
Speech errors
General points
• About 15 speech sounds per second (2-3 words per second, 150 words a minute).
• Automatic, “impossible to think in the middle of a word shall I say ‘t’ or ‘d’” (Levelt).
• Less attention to speech production than comprehension.
• About 1 or 2 errors for every 1,000 words, 7-22 errors a day.
• What do they tell us? Freud: our repressed thoughts > Gary Dell: shows a person’s capacity for using language and its components
• Most think when you make a speech error “We just swap words or sounds and that’s it” – but this is not the case because errors do not occur at random!
Freud argues that speech errors are due to conflict of current intentions > repressed ideas intrude + cause speech errors
Speech error types
• There are 8 major types of speech errors which can occur at any level of word production (e.g. at the phoneme, morpheme or word level)
- Shift: One speech segment disappears from its appropriate location and appears somewhere else.
- Exchange: Exchanges are double shifts: two linguistic units (sounds, words or parts of words) exchange, i.e. change places > e.g. model and nose swap places
- Anticipation: Where people anticipate a sound which will come later on > e.g. you are already thinking of the bike so instead of saying take you say bake because you took b from the thought of bike
- Perseveration: A sound which you already used earlier in the sentence perseveres aka comes back and replaces the sound from another word (like the opposite of anticipation)
- Addition: Linguistic material is added to a word when it shouldn’t be there
- Deletion: Opposite of addition > linguistic material is removed from a word
- Substitution: One segment is replaced by an intruder (a different linguistic unit/word). The intruding word does not appear elsewhere in the sentence > e.g. saying this suitcase is not light enough when you mean this suitcase is too heavy
- Blend: Blends are a subcategory of lexical selection errors. More than one item is being considered during speech production. Consequently, the two intended items fuse together.
Common properties of speech errors:
• Exchange of phonemes in similar positions > e.g. You have hissed all my mystery lessons (missed all my history lessons), a burst of beaden (beast of burden) or nife lite (night life) – not fight line
○ When phonemes exchange, they usually exchange from the same kind of position, so the first sound of one word swaps with the first sound of the other – it is unlikely to swap with the final sound of the other word
• Consonants tend to swap with consonants while vowels tend to swap with vowels
• Novel words will still follow the phonological rules of the language > if you blend person + people, you are more likely to create perple (pronounceable, like purple) than peorslpe, because the latter violates English phonological rules
- Evidence comes from databases of spontaneous errors + errors can also be experimentally induced (refer to live session)
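The positional constraint above (onsets swap with onsets, not with word endings) can be illustrated with a toy function. Splitting on written vowels is a deliberate simplification of real phonology:

```python
# Toy illustration that phoneme exchanges pair like positions with like
# positions: this function swaps only the word-initial consonant cluster
# (the onset), never an onset with a word-final sound.

VOWELS = set("aeiou")

def onset_split(word):
    """Split a word into its initial consonant cluster and the remainder."""
    for i, ch in enumerate(word):
        if ch in VOWELS:
            return word[:i], word[i:]
    return word, ""  # no vowel found: treat the whole word as onset

def exchange_onsets(word1, word2):
    """Produce a spoonerism-style exchange error from two words."""
    onset1, rest1 = onset_split(word1)
    onset2, rest2 = onset_split(word2)
    return onset2 + rest1, onset1 + rest2

print(exchange_onsets("missed", "history"))  # ('hissed', 'mistory')
```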
Interpreting the above: Speech Errors
• There are 2 different processes happening: 1. retrieving the word itself and 2. constructing a syntactic frame for where the words are slotted
• Plural endings + other grammatical elements like past tense are part of the frame, so the words are just slotted in > this is a separate process from retrieving the word itself
• Evidence for this comes from two types of errors > word errors + sound errors
• Word errors: not restricted by distance + always of the same type > e.g. a noun swaps with a noun, and distance doesn’t matter, so a word at the beginning of a sentence can swap with one at the end > happen early in planning
- Sound errors: occur much closer together + can occur across word-types
Garrett’s model of speech production
• Similar to the WEAVER++ model, but the formulation stage is divided into 3 levels (highest to lowest)
○ Functional level: select the word itself but not the word form > attach the word to the function you want it to have in the sentence > e.g. choose the word horse (attach it to actor), kick (action) and man (object) > a word error may occur here where horse and man swap (the functions do not swap), giving man kicks horse
○ Positional level: where you put words in the order you want them said > happens later
○ Sound level: retrieve the sound form > sound errors occur here > happens later still
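A minimal sketch of a functional-level word exchange, with the dictionary representation invented for illustration: the lemmas swap, but the syntactic functions they were attached to stay put, yielding man kicks horse rather than a scrambled sentence:

```python
# Sketch of Garrett's functional level (representation invented):
# words are bound to functions (actor, action, object); a word
# error exchanges the fillers, never the functions themselves.

intended = {"actor": "horse", "action": "kick", "object": "man"}

def exchange(frame, role_a, role_b):
    """Word exchange error: the two fillers swap; the roles stay fixed."""
    out = dict(frame)
    out[role_a], out[role_b] = frame[role_b], frame[role_a]
    return out

def positional_level(frame):
    """Later stage: slot the words into word order. Grammatical elements
    (here the -s agreement) belong to the frame, not to the words."""
    return f"the {frame['actor']} {frame['action']}s the {frame['object']}"

print(positional_level(exchange(intended, "actor", "object")))
# -> the man kicks the horse
```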
Lexicalisation - lexical retrieval
Lexicalisation = the process of turning thoughts into sounds > a two-stage retrieval process
Lexicalisation: stage 1
• 1st stage involves retrieving a representation of the lexical meaning (word meaning) and syntax, called the LEMMA (the lemma stage). Each word is represented by a lemma – lemmas are syntactic and semantic but not phonological, i.e. the word’s meaning without the word form itself > (you do not retrieve everything you know about the concept of cats, but a representation of the word CAT and the fact that it is a noun)
○ So you retrieve that there is a feline animal and is a noun but do not retrieve everything about feline animals + not yet the word cat
Lexicalisation: stage 2
• 2nd stage is the lexeme stage, where you activate the word form itself > here you retrieve the phonological form associated with the lemma > cuh/ah/tuh > cat
• Evidence for this comes from the fact there are two different types of substitution errors:
○ Semantic errors: saying glass instead of cup
○ Phonological errors: a large phonological overlap between what you want to say and what you actually say > e.g. saying cap instead of cat
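The two retrieval stages, and where each substitution error type arises, can be sketched as a toy two-step lookup (the entries and concept names below are invented):

```python
# Stage 1 retrieves a lemma (meaning + syntax, no sound); stage 2
# retrieves its lexeme (the phonological form). Entries invented.

LEMMAS = {
    "FELINE_PET": {"word": "cat", "syntax": "noun"},
    "DRINKING_VESSEL": {"word": "cup", "syntax": "noun"},
}
LEXEMES = {
    "cat": ["k", "ae", "t"],
    "cup": ["k", "ah", "p"],
    "cap": ["k", "ae", "p"],  # a phonological neighbour of "cat"
}

def retrieve_lemma(concept):
    """Stage 1 (lemma): meaning and word class, but no phonology."""
    return LEMMAS[concept]

def retrieve_lexeme(lemma):
    """Stage 2 (lexeme): the sound form associated with the lemma."""
    return LEXEMES[lemma["word"]]

sounds = retrieve_lexeme(retrieve_lemma("FELINE_PET"))
print("".join(sounds))  # kaet
# A semantic substitution picks the wrong lemma at stage 1 (cup for cat);
# a phonological substitution picks the wrong lexeme at stage 2 (cap for cat).
```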
Evidence: Picture-naming - (Wheeldon & Monsell, 1992; Monsell et al., 1992)
• Ppts are shown a picture and asked to name it, and how quickly they respond (RT) is recorded
• First part of the study: people finish a sentence such as “man’s best friend is his…” and often say dog > when they are later asked to name an image of a dog, they respond quicker
• Does this happen because you activated the sound dog or the meaning dog?
• Boot is a homophone: it has two different meanings but the same sound (car boot, shoe boot) > in part 1 ppts completed “storage place at the back of a car is…” > in part 2 they were shown a shoe boot, to see if they respond as fast as when shown the dog > there was no priming facilitation, which must be because the meaning differed, so a shared phonological sound alone does not produce a faster RT > priming matters at a semantic level because you activate things with the same meaning
• Third part used English–Welsh ppts, who generated the word in “Tom is a cat and Jerry is a …” > they were then asked to name a picture of a mouse in Welsh, to see if RT would be faster when only the meaning was the same > there was no facilitation, indicating that activating something with the same meaning is not enough – it was having both the meaning and the word form that produced facilitation
• Priming effect must be localised somewhere in the connection between the word meaning + word phonological form
This evidence supports the 2 stages in lexicalisation
Evidence: Picture-word interference - (Schriefers et al., 1990)
- Here you name a picture while headphones play a different word which you have to ignore > will this auditory distractor impact how you name the picture?
- Early stage when semantic candidates are active (= LEMMA stage), later stage when phonological forms are active (LEXEME stage)
- Shown a picture of a dog which they have to name, while the word played in the headphones is either semantically related (e.g. fox), phonologically related (doll) or unrelated (roof) > the distractor was presented either before, at the same time as, or after the picture
Evidence: Picture-word interference - (Schriefers et al., 1990): results
• If the semantic distractor was presented just before the picture, there was interference > you were slower at saying the picture was of a dog
• At this early point there was no difference between the phonologically related and baseline (unrelated) conditions > playing the distractors doll or roof did not affect how quickly dog was named
• If the word was presented at the same time as the picture (at 0), there was facilitation from the phonological distractor > hearing doll while about to say dog made you faster at saying dog > no difference between the semantic + unrelated conditions
• Means there is a particular time course from the lemma (meaning of the word) to the lexeme (word form)
○ First there is a period where you are retrieving meaning, so distractors related to that meaning have a negative (interference) effect
○ Later, you have a process retrieving sound form so when you hear something phonologically similar, this helps you say this word faster
• Looks at how the 2 stages of lexicalisation relate to one another in time (are they independent or is there overlap – that is, does the second stage of phonological specification only begin when the first stage of lemma retrieval is complete, or does it begin while the first stage is still occurring?)
This supports the two stages of lexical activation
Evidence: Anomic Aphasia
• More mild form of Aphasia where people have difficulty finding words (mainly nouns + verbs) > they tend to use more vague words like “thing” or they make circumlocutions like “you can pick things up with it” when they mean tongs
• No clear area of damage
• 2 types of anomic aphasia:
○ Lexical-semantic anomia: meaning of words is lost (sometimes category-specific, e.g. mainly inanimate objects) > lemma level > caused by damage to the angular gyrus. If you showed array of objects + asked them to pick out a car they would not be able to do that
○ Phonological anomia: knows the meaning of the words + the use of it but selects the wrong phonology > lexeme level. Caused by damage to the posterior inferior temporal area. If you gave them an array of objects + told to pick out a car they would be able to do this.
• Supports the 2-stage model of lexicalisation.
Test of semantic knowledge: pyramid + palm-tree test
• Can test conceptual semantics by giving 3 pictures + saying which of the lower two matches that of the pyramid > if they can still process it at a conceptual level they should be able to choose the right answer
• Lexical semantics can be tested (to see if they actually have the words available to say it) by replacing the pictures with words + asking which matches the pyramid
Is Lexicalisation discrete or interactive?
• The discrete model says that the semantic stage must be completed before the phonological stage begins > a particular lemma has to be selected before you look for the word form (lexeme)
• The interactive model suggests that one level of processing will impact the operation of the next level
○ For instance, the cascading element of the interactive model suggests that you begin activating phonological word forms while you are still selecting between different semantic candidates > info from the semantic level is passed to the phonological level
○ E.g. you are thinking of the word dog but also cat and mouse > this causes you to partially activate the lexemes of these words too, before you decide which word to say
- The feedback model within the interactive account suggests that activation from the semantic level cascades down, but there is also feedback from the phonological level back to the semantic level > related word forms at the phonological level (e.g. dog and doll) feed back, so doll becomes partially activated
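The discrete/cascaded contrast can be sketched as a toy two-layer activation scheme (the activation values and lemma-to-lexeme links below are invented): under discrete processing only the selected lemma activates its word form; under cascading, every active lemma leaks activation downwards before selection:

```python
# Toy contrast between discrete and cascaded lexicalisation.
# Activation values and the lemma -> lexeme links are invented.

SEMANTIC = {"dog": 1.0, "cat": 0.6}              # pre-selection lemma activations
LEXEME_LINKS = {"dog": ["dog"], "cat": ["cat"]}  # lemma -> word forms

def lexeme_activation(model):
    if model == "discrete":
        # selection completes first: only the winning lemma passes down
        active_lemmas = [max(SEMANTIC, key=SEMANTIC.get)]
    else:
        # "cascaded": selection not yet made, everything leaks down
        active_lemmas = list(SEMANTIC)
    acts = {}
    for lemma in active_lemmas:
        for lexeme in LEXEME_LINKS[lemma]:
            acts[lexeme] = acts.get(lexeme, 0.0) + SEMANTIC[lemma]
    return acts

# A feedback model would additionally send activation from these
# lexemes back up to the semantic level.

print(lexeme_activation("discrete"))   # {'dog': 1.0}
print(lexeme_activation("cascaded"))   # {'dog': 1.0, 'cat': 0.6}
```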
Cascaded processing - how to test it
- A mediated-priming task was used (Levelt et al., 1991): the ppt names a picture of a sheep, but between seeing the picture and saying it, they do a lexical decision task – a word is played through headphones and they must decide whether it is a real word or not
- Distractor words like goat, goal and sheet were used > we want to know whether goat (and, via phonological overlap, goal) is active during the naming of sheep
- According to the discrete model, at a semantic level you activate goat + sheep but choose sheep so you then go to the phonological level where you say sheep but you activate sheet to some extent due to the phonological overlap (Sounds similar)
- The cascaded model suggests you activate goat and sheep, but before a selection is made this info passes to the phonological level, where sheep and goat are partially activated > then, because of phonological overlap, sheep activates sheet and goat activates goal
- Is goal active during the same time as naming sheep?
- Levelt found that when goat was presented in the headphones while the sheep was presented, there was inhibition very early on > so inhibition must have been at the lemma level (meaning) because goat semantically relates to sheep
- If sheet was played instead, you got priming but this was late priming > this happens because priming occurs at a lower phonological level > priming at a phonological and lexeme level
- There was no priming of goal > this is evidence against cascading/feedback > but other evidence supports cascading, e.g. priming from near-synonyms (something closely related) > still unresolved
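The contrasting predictions about goal can be sketched as a toy activation chain (the neighbour lists below are invented stand-ins for semantic and phonological relatedness):

```python
# Mediated-priming logic of the Levelt et al. (1991) test, sketched:
# is "goal" among the active word forms while naming "sheep"?
# Only the cascaded model predicts that it should be.

SEMANTIC_NEIGHBOURS = {"sheep": ["goat"]}                         # lemma level
PHONOLOGICAL_NEIGHBOURS = {"sheep": ["sheet"], "goat": ["goal"]}  # form level

def active_word_forms(target, model):
    lemmas = [target] + SEMANTIC_NEIGHBOURS.get(target, [])
    if model == "discrete":
        lemmas = [target]  # lemma selection completes before phonology starts
    forms = set()
    for lemma in lemmas:   # each active lemma activates its own form
        forms.add(lemma)   # ...and that form's phonological neighbours
        forms.update(PHONOLOGICAL_NEIGHBOURS.get(lemma, []))
    return forms

print(sorted(active_word_forms("sheep", "discrete")))       # ['sheep', 'sheet']
print("goal" in active_word_forms("sheep", "cascaded"))     # True
```

The observed absence of goal priming matches the discrete branch here, while the synonym evidence mentioned above keeps the cascaded branch in play.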