Lecture 14: Language Flashcards
Mental lexicon
Mental store of information that includes semantic (word meaning), syntactic (word combinations), and word form (visual or sound patterns, spelling) information.
The average adult speaker knows about 10,000-20,000 words and can easily recognize or produce about 3 words/sec, so an alphabetically ordered, dictionary-type organization could not work.
Other differences:
Words can be forgotten and added
More frequently used words are more easily accessed
Symbol grounding problem
If words are defined only by other words, one must already know the meaning of some words in advance.
How is the mental lexicon organized?
In contrast to the standard dictionary model, access to a word in our mental
lexicon is affected by its relation to other words
Way around the symbol grounding problem
One way around this problem: some concepts are not defined by other words, but are "grounded" by interactions with the environment; e.g., the meaning of "pull" or "kick" could be grounded by motor action.
The Hub-and-Spoke Model
Amodal semantic ‘hub’ (anterior temporal lobe)
Grounded/embodied semantics (sensory/motor systems)
The model stores semantic information in various regions involved in sensory and bodily processes (the spokes), and these connect to a central amodal semantic system (the hub).
Two contrasting models for the organization of the mental lexicon in relation to producing words:
Levelt’s Discrete Stages Model
Dell’s Interactive Stages Model
Levelt’s Discrete Stages Model
Model can account for the basic tip-of-the-tongue (TOT) phenomenon: the lemma is activated, but activation fails to spread to the next stage, lexeme retrieval.
Studies of TOT (Caramazza) show that people have access to purely
syntactical information (e.g., grammatical gender in Italian speakers)
without phonology
BUT: The same studies show that it is also possible to access phonology (the first phoneme) without grammatical gender.
Interactive models of language processing reject the unidirectional flow of information in the Levelt stage model. Why? Activation at later stages is able to influence what happens at earlier stages, because there is some parallel processing.
Dell’s Interactive Stages Model
Similar stages to Levelt’s but includes bidirectional or
interactive activation
(Figure reprinted from Levelt, 1999.)
Lemma selection influenced by both
phonological and semantic information
Can account for “mixed” speech errors, e.g., saying “oyster”
for “lobster” – error reflects both phonological and semantic
information
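The competition behind mixed errors can be sketched in a few lines. This is a toy illustration with made-up numbers, not Dell's actual model: each candidate's activation sums its semantic and phonological overlap with the target, so a word sharing both kinds of overlap can out-activate words sharing only one.

```python
target = "lobster"
candidates = {
    # (semantic overlap, phonological overlap) with "lobster" -- hypothetical values
    "oyster":  (0.7, 0.6),  # mixed: semantically related AND similar-sounding
    "crab":    (0.8, 0.1),  # purely semantic neighbor
    "mobster": (0.0, 0.8),  # purely phonological neighbor
}

def activation(word):
    """Total activation = semantic + phonological support (toy rule)."""
    sem, phon = candidates[word]
    return sem + phon

# The mixed candidate wins the competition for selection.
most_active = max(candidates, key=activation)
print(most_active)  # "oyster"
```

In a discrete-stage model, by contrast, phonological overlap could not feed back into lemma selection, so mixed errors would be no more likely than purely semantic ones.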
In interactive models like Dell's, access to a word in the mental lexicon is affected by its relation to other words on a number of dimensions.
Auditory neighborhood of a word
Number of similar-sounding words; more specifically, the number of words that differ from the target by only a single phoneme.
Phoneme
Smallest unit of sound that makes a difference for meaning. E.g., "L" and "R" in English: Late and Rate have different meanings (L and R are represented by a single phoneme in Japanese).
Neighborhood effect
We are slower to identify words with a large than a small auditory neighborhood, i.e., when more words differ from the target by only a single phoneme.
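Counting a word's auditory neighborhood is easy to make concrete. A minimal sketch, using the narrow definition above (exactly one substituted phoneme; broader definitions also allow additions and deletions), with toy phoneme transcriptions:

```python
def is_neighbor(a, b):
    """True if phoneme sequences a and b differ by exactly one
    substituted phoneme (the single-substitution definition of an
    auditory neighbor used above)."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def neighborhood_size(target, lexicon):
    """Count the words in the lexicon that are one-phoneme
    neighbors of the target."""
    return sum(is_neighbor(target, word) for word in lexicon)

# Words as tuples of phoneme symbols (toy transcriptions):
lexicon = [
    ("L", "EY", "T"),       # late
    ("R", "EY", "T"),       # rate
    ("G", "EY", "T"),       # gate
    ("L", "AY", "T"),       # light
    ("K", "L", "AW", "D"),  # cloud
]

print(neighborhood_size(("L", "EY", "T"), lexicon))  # 3 (rate, gate, light)
```

On the neighborhood-effect account, "late" (many neighbors) would be identified more slowly than a word like "cloud" with few or no neighbors.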
Semantic/associative relations especially important.
Models of Knowledge as Semantic Networks
Words that have strong associative or semantic relations are closer
together in the network (e.g., car and truck) than are words that have
no such relation (e.g., car and clouds), as shown in studies of
semantic priming (e.g., car primes truck but not clouds).
(Semantically related words are colored similarly in the figure, and
associatively related words (e.g., firetruck–fire) are closely
connected)
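The priming behavior of such a network can be sketched with a one-step spreading-activation rule. The network, edge weights, and decay value below are all hypothetical, chosen only to illustrate why "car" primes "truck" but not "clouds":

```python
# Toy semantic network; edge weights are hypothetical associative strengths.
NETWORK = {
    "car":       {"truck": 0.8, "road": 0.5},
    "truck":     {"car": 0.8, "firetruck": 0.7},
    "firetruck": {"truck": 0.7, "fire": 0.9},
    "cloud":     {"sky": 0.8, "rain": 0.6},
}

def spread(prime, decay=0.5):
    """One step of spreading activation from a prime word: each
    directly connected word receives activation proportional to
    its link strength, scaled by a decay factor."""
    return {word: strength * decay
            for word, strength in NETWORK.get(prime, {}).items()}

activated = spread("car")
# "truck" is pre-activated and so recognized faster (semantic priming);
# "cloud" receives no activation from "car".
print(activated)
```

Words closer in the network receive more activation from the prime, which is the mechanism usually invoked to explain faster responses to related targets in priming experiments.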
-Semantic network models often include categorical organization
-Could brain damage destroy a particular category within the mental
lexicon?
-Is it possible to lose your ability to name specific categories of
objects?
Elizabeth Warrington and colleagues in London reported that some patients showed category-specific deficits
e.g., had little or no difficulty pointing to
or naming pictures of living things, but
had great difficulty pointing to or naming
man made objects such as tools
Other patients showed the opposite
pattern (i.e., double dissociation).
These patients had category-specific
deficits in conceptual/semantic
knowledge, but others have category-
specific naming deficits with intact
conceptual knowledge
Locations of brain lesions that are correlated with selective deficits in naming persons, animals, or tools. Actual averaged lesion data are displayed for patients who had person-naming (top), animal-naming (middle), or tool-naming (bottom) deficits.
The colors indicate the percentage of patients
with a given deficit whose lesion is located in the
indicated area. Red indicates that most patients
had a lesion in that area; purple indicates that
few had a lesion in that area.
Damasio et al. (1996,
Nature)
Similar Pattern of Results from Healthy
Subjects in an Early Neuroimaging Study (PET)
Naming persons activated
mostly the temporal pole,
naming animals activated
mostly the middle portion of
the inferior temporal gyri, and
naming tools activated mostly
the posterior portions of the
inferior temporal gyrus
Why are category-specific deficits in comprehension and naming observed?
Warrington & Shallice (1984): differences in processing of sensory/perceptual information (most relevant for distinguishing among living things) vs. functional information (most relevant for distinguishing nonliving things, e.g., manmade objects such as tools).
Problems (see Ward):
Patients with selective deficits for living things don’t have more
difficulty answering sensory vs. functional questions about animals
or objects.
Some patients who have difficulties with sensory properties don't show the expected category-specific impairments.
Others have argued for the organization of brain into distinct
categories, but not a strict sensory-functional dichotomy –
e.g., Caramazza & Shelton (1998) argued that there may be
hardwired categories.
Still being debated, but evidence for some type of category-specific
organization is strong.
Evidence toward categorical organization of semantic knowledge
Evidence from category-specific naming disorders and neuroimaging studies points toward categorical organization of semantic knowledge, and indicates that the anterior temporal lobe is associated with impairments in naming living things, and more posterior regions with impairments in naming manmade objects.
Sound
Pressure waves caused by vibration. Sound waves vary in frequency and amplitude.
Two Special Problems of Speech Perception
Lack of sharp boundaries
Segmentation
Lack of sharp boundaries
Written words/sentences have sharp physical boundaries, but spoken words/sentences don't. E.g., the speech waveform of a single word can appear like two words because of embedded silence.
Segmentation
Spoken sentences often lack clear boundaries between words because they are frequently coarticulated (i.e., the ends and beginnings of words are united). E.g., the speech waveform of "What do you mean?"
The speech perception system attempts to solve these problems by relying on cues from:
* Prosody (tone of voice)
* Syllable stress
* Formant frequencies: complex sound waveforms that carry the most critical information about speech
* Different phonemes/sounds differ in 2 critical formant frequencies (F1 and F2)
* By putting together different combinations of F1 and F2 formants, you can create understandable synthetic speech!
Formant frequencies
Formants are frequency peaks in the spectrum that have a high degree of energy. They are especially prominent in vowels. Each formant corresponds to a resonance in the vocal tract (roughly speaking, the spectrum has a formant about every 1000 Hz). Formants can be considered as filters.
Even simple sounds are composed of complex waveforms containing formant frequencies.
Gunnar Fant (1919-2009)
Pioneer in the development of synthesized speech using formants.
"How are you?… I love you…": speech synthesis with just two formant frequencies.
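The idea that two formants carry much of a vowel's identity can be sketched very crudely. The function below (an illustration, not a real formant synthesizer, which would filter a glottal pulse train through vocal-tract resonances) just sums two sinusoids at F1 and F2; the formant values are rough textbook approximations:

```python
import math

SAMPLE_RATE = 16000  # samples per second

def synthesize_vowel(f1, f2, duration=0.3, rate=SAMPLE_RATE):
    """Crudely sketch a steady vowel as the sum of two sinusoids at
    the first two formant frequencies (F1, F2), returned as a list
    of samples in [-1.0, 1.0]."""
    n = int(duration * rate)
    return [0.6 * math.sin(2 * math.pi * f1 * t / rate)
            + 0.4 * math.sin(2 * math.pi * f2 * t / rate)
            for t in range(n)]

# Approximate formant values in Hz (illustrative):
uh = synthesize_vowel(640, 1190)  # "uh" as in "hut"
oo = synthesize_vowel(300, 870)   # "oo" as in "hoot"
```

Changing only the (F1, F2) pair is enough to move between vowel qualities, which is the property Fant's two-formant synthesizer exploited.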
Erik Ramsey: Locked-In
-Car accident in 1999, stroke in brain stem, "locked in"
-Can't move any part of his body, except his eyes; moving his eyes is exhausting
-Awake and intelligent, can feel
-Hadn't spoken since 1999: his stroke disconnected "motor plans" formulated in cortex from the subcortical motoneurons necessary to produce speech.
-Can think of/imagine speech sounds, just can't produce them
A Neural Implant Designer
Phillip Kennedy
-Neurologist who designs
electrodes to use as neural
implants for brain-machine
interface
-Implanted an electrode in
Ramsey’s brain: left premotor
cortex (speech planning area;
localized in Erik via
fMRI)
-Electrode could wirelessly
transmit information from
surrounding neurons.
-Collected extensive neural
data, gathered when
Kennedy’s team asked
Ramsey to imagine speaking
specific words.
-But they could not decode the
data.
-In 2006, Kennedy contacted BU researcher Frank Guenther
Cognitive Neuroscientist Working on a
Computational Model of Speech Processing
and Production
Key idea in model: speech output areas
represent intended speech sounds in
terms of formant frequencies.
Used his computational model to generate design of decoder software that
could translate information about neural
activity from the electrode in the
premotor speech planning area into
formant frequencies.
Output of the decoder drives a speech synthesizer; would this allow Erik Ramsey to learn to control the synthesizer?
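The decoding step described above can be sketched as a simple linear mapping from recorded firing rates to a formant pair. This is a hypothetical illustration under that assumption; the function name, weights, and numbers are made up and are not Guenther's actual decoder:

```python
def decode_formants(firing_rates, weights, baseline):
    """Return (F1, F2) in Hz as a baseline plus a weighted sum of
    neural firing rates (a toy linear decoder)."""
    f1 = baseline[0] + sum(w * r for w, r in zip(weights[0], firing_rates))
    f2 = baseline[1] + sum(w * r for w, r in zip(weights[1], firing_rates))
    return f1, f2

# Toy example: two recorded units, baseline near a neutral vowel.
weights = [(4.0, -2.0), (10.0, 6.0)]
baseline = (500.0, 1500.0)
f1, f2 = decode_formants((20.0, 5.0), weights, baseline)
print(f1, f2)  # 570.0 1730.0
```

In the real system, the decoded (F1, F2) pair would be updated continuously and fed to the formant synthesizer, closing the loop through auditory feedback.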
Schematic of the Brain-Machine Interface
Erik Ramsey: Not Totally Locked-In
After he learned to imitate, Guenther et al. examined
whether Erik could alter a synthesized sound (by
changing the neural signals that drive the BMI) to a
slightly different vowel than what he heard (e.g., hears
“UH” as in “hut”, instructed to produce “OO” as in “hoot”).
Across 25 sessions with real-time feedback, Erik
showed significant improvement:
Average hit rate in producing 3 target vowel sounds
increased from 45% in first session to 70% in final
session, including 89% in final block of final session
Anatomy/Neuropsychology of
Language
Left lateralization of language
Language is extremely complex; we don’t know how many psychologically defined functions of
language map onto the brain
No animal model
Early clues from brain damage and disease, but these support an overly simplified model of language.