midterm 2 Flashcards

1
Q

Dense phonological neighborhood: what is it, and how does it affect word recognition in noise?

A
A dense phonological neighborhood is a group of words that sound very similar to one another (e.g., “cat” has neighbors like “cap,” “cad,” “mat,” etc.). These similar-sounding words are called phonological neighbors.

Effect on word recognition?
- Words from dense neighborhoods are harder to identify, especially in noise, because the neighbors compete for recognition (lexical competition).
- High-frequency words in sparse neighborhoods are the easiest to recognize.
2
Q

Homophones vs. Homographs

A

Homophones: Same sound, different meanings (e.g., “watch” = a timepiece or to observe).

Homographs: Same spelling, different meanings and often different pronunciations (e.g., “bass” the fish, /bæs/, vs. “bass” the low musical range, /beɪs/).

3
Q

Parallel Access Effect for Homophones (Tanenhaus et al., 1979)

A

Cross-modal priming task: Participants hear a sentence ending in a homophone (e.g., “watch”) and immediately see a letter string on a screen (e.g., “TIME” or “DART”), deciding whether it is a real word (lexical decision).

Result: Both meanings (noun and verb) are activated initially — showing parallel access.

🕰 Context filters meaning later (~250 ms), suggesting bottom-up activation followed by contextual selection.

4
Q

The Cohort Model (Marslen-Wilson): What words might be activated by “element”?

A

Words activated early on share the same initial phonemes.

For “element”: Possible cohort includes “elephant,” “elegant,” “elevate,” etc.

These are narrowed as more sounds are heard, until the uniqueness point, where only one candidate remains (e.g., by “elem…” the cohort no longer includes “elephant,” “elegant,” or “elevate”).
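The narrowing process can be sketched in code. This is only a toy illustration, not the actual model: spelling stands in for phonemes, and the five-word lexicon is invented for the example.

```python
# Toy sketch of Cohort-style narrowing: candidates must match the input
# word-initially, and the set shrinks as more segments arrive.
# Spelling stands in for phonology; the lexicon is illustrative only.
LEXICON = ["element", "elephant", "elegant", "elevate", "cement"]

def cohort(heard_so_far, lexicon=LEXICON):
    """Return the words still consistent with the initial segments heard."""
    return [w for w in lexicon if w.startswith(heard_so_far)]

print(cohort("ele"))   # element, elephant, elegant, elevate all remain
print(cohort("elem"))  # only "element" survives (uniqueness point)
```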

5
Q

The TRACE Model (McClelland & Elman): What words might be activated by “element”?

A

Words are activated based on any overlapping phonemes, not just the beginning.

So for “element,” words like “cement,” “ornament,” or even “elephant” could be weakly activated.

TRACE allows bidirectional activation (phoneme ↔ word levels).
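The contrast with the Cohort model can be made concrete with a crude overlap score. This is a sketch under strong simplifications: letters stand in for phonemes, shared-letter counting stands in for interactive activation, and the lexicon is invented.

```python
# Toy sketch of TRACE-style activation: ANY shared segments contribute,
# not just word-initial ones. Counting shared letters is a stand-in for
# phoneme overlap; real TRACE uses interactive activation, not a count.
LEXICON = ["element", "elephant", "cement", "ornament", "banana"]

def overlap_score(input_word, candidate):
    """Crude overlap: number of segment types shared with the input."""
    return len(set(input_word) & set(candidate))

scores = {w: overlap_score("element", w) for w in LEXICON}
# "cement" and "ornament" get some activation despite no shared onset,
# which a strictly word-initial (Cohort) account would not predict.
```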

6
Q

Cohort vs. TRACE Models

A

Activation:

Cohort: Words sharing initial sounds
TRACE: Words sharing any overlapping sounds

Processing:

Cohort: Strictly feedforward
TRACE: Interactive (bottom-up & top-down)

Handling of noise:

Cohort: Struggles with noisy input
TRACE: More robust to ambiguity/noise

7
Q

Allopenna et al. (1998): Visual World Eyetracking & the TRACE Model

A

Task: “Put the beaker above the square.”

Objects on screen: beaker (target), beetle (cohort competitor), speaker (rhyme competitor), and an unrelated item.

👀 Eyetracking showed: Participants looked at both “beaker” and “beetle” early on, and later also fixated the rhyme competitor “speaker” — consistent with TRACE’s prediction that any overlapping phonemes produce activation.

🧪 Cohort predicts the early “beetle” looks (shared onset) but no fixations on “speaker,” which shares no onset and so never enters the cohort.

📌 Thus, TRACE is better supported.

8
Q

What does it mean to say that sentences are comprehended in “real time”?

A

Sentence comprehension is incremental and dynamic — we interpret meaning as each word is heard, not after the sentence ends.

This is evident from garden-path sentences, where we’re misled by initial interpretations.

9
Q

What is a garden-path?

A

A garden-path sentence leads the reader/listener toward one interpretation, which later proves incorrect:
“The horse raced past the barn fell.”
The parser assumes “raced” is the main verb, then has to backtrack when “fell” appears and reanalyze “raced past the barn” as a reduced relative clause (“the horse that was raced past the barn”).

10
Q

syntactic ambiguity

A

A sentence has syntactic ambiguity when it can be parsed in more than one way.

Example:
“The boy put the book on the shelf into his backpack.”
Is “on the shelf” a modifier of “the book” (the book that was on the shelf), or, temporarily, the destination of “put”? The ambiguity is resolved only when “into his backpack” arrives.

11
Q

Modular Syntactic Parser vs. Cascading Interactive Parser

A

Modular Parser:
- Uses only syntax at first
- Fast, automatic, rigid
- Explains garden-paths via structure

Cascading Interactive Parser:
- Integrates syntax, semantics, pragmatics
- Flexible, context-sensitive
- Predicts garden-paths can be avoided

12
Q

Minimal Attachment & Parsing Ambiguity

A

Minimal Attachment: The parser defaults to the syntactically simplest structure.

For example, in:

“The athlete accepted the prize would not go to him.” We parse “the prize” as the direct object of “accepted” (the simpler structure) rather than as the subject of an embedded clause (“accepted [that] the prize would not go to him”), leading to a garden path.

13
Q

Tanenhaus Visual World Study: “Put the shoe on the towel…”

A

Sentence: “Put the shoe on the towel into the bucket.”

Ambiguous unless there’s visual context.

🧠 With only one shoe in the display, participants initially misinterpret “on the towel” as the destination.

In unambiguous control (e.g., two shoes: one on towel), they resolve correctly.

📷 Key finding: Visual context affects parsing in real time — evidence for interactive comprehension.

14
Q

Kamide & Altmann: Predictive Eye Movements

A

Sentences: “The man/girl will ride/taste the…” (e.g., beer, candy)

Participants look at the appropriate object before it’s mentioned.

⏱ Shows real-time prediction using syntax + semantics.

15
Q

How Neural Networks Learn Output Patterns

A

Start: Random output.

Learning: Adjust connection weights using error correction (backpropagation): compare the output to the target, then nudge each weight in the direction that reduces the error.

Gradually: Output patterns move closer to the target.
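These steps can be sketched with the simplest error-driven learner: a single linear unit trained with the delta rule (the one-layer core of backpropagation). The training data and learning rate are arbitrary choices for the demo.

```python
# Minimal sketch of error-driven weight adjustment (delta rule):
# start with arbitrary weights, compare output to target, and nudge
# each weight in the direction that shrinks the error.
def train_unit(inputs, targets, lr=0.1, epochs=500):
    w, b = 0.0, 0.0                  # start: outputs are wrong
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = w * x + b            # forward pass (linear unit)
            error = t - y            # how far from the target?
            w += lr * error * x      # error-correcting weight update
            b += lr * error
    return w, b

# Learn the pattern y = 2x: outputs gradually approach the targets.
w, b = train_unit([0, 1, 2, 3], [0, 2, 4, 6])
```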

16
Q

Why Feedforward Word Predictors Fail with Context

A

Simple model: Predicts based on just the previous word.

Fails on:

“The man will ride the…” vs. “The girl will ride the…”
It can’t distinguish these because it lacks memory: given only the previous word (“the”), it produces the same prediction regardless of earlier context.
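The limitation can be shown with a minimal previous-word-only (bigram-style) predictor; the counts below are an invented toy training set, not real data.

```python
# Sketch of why a previous-word-only predictor fails: by the time the
# model sees "the", the subject ("man" vs. "girl") is already gone.
from collections import Counter

# Invented toy counts: continuations observed after the word "the".
bigram = {"the": Counter({"beer": 3, "candy": 3})}

def predict(prev_word):
    """The prediction depends ONLY on the previous word; no memory."""
    return bigram[prev_word].most_common()

# "The man will ride the ..." and "The girl will ride the ..." both
# reach the model as the identical query predict("the").
```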

17
Q

Can a Feedforward Digit Predictor Learn a Next-Digit Task?

A

Yes, it can eventually learn the pattern (e.g., 0→1, 9→0), because the mapping is:
- Consistent
- Deterministic

But: It lacks flexibility — it memorizes input–output pairs rather than an “if input is X, output X+1” rule, so it can’t generalize to untrained inputs without retraining.
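A minimal way to see both points is to treat “training” as memorizing a consistent, deterministic table. Everything here is an illustrative stand-in for an actual trained network.

```python
# Sketch: a consistent, deterministic next-digit mapping is learnable
# (here, trivially memorized), but no "+1 rule" is abstracted, so an
# untrained input gets no sensible response.
def learn_table(pairs):
    """'Training' as memorization of input -> output associations."""
    return dict(pairs)

# Train on digits 0..8 only (the pair 9 -> 0 is withheld).
table = learn_table([(d, (d + 1) % 10) for d in range(9)])

table[4]        # learned pair: 4 -> 5
table.get(9)    # untrained input: no generalization (returns None)
```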

18
Q

Simple Recurrent Network (SRN)

A

Like a feedforward net, but it stores memory in a context layer (a copy of the previous hidden state fed back as input).

This context allows it to:

-Keep track of prior inputs.
-Learn sequential structure.
-Predict better over time.
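A single SRN step can be sketched as follows. The weights are fixed toy numbers (a real SRN learns them), and tanh is a common choice of activation:

```python
import math

# Minimal sketch of an Elman-style SRN step: the hidden state is copied
# into a context layer and fed back on the next step, so the network's
# response depends on prior inputs. Weights are illustrative, not trained.
W_IN, W_CTX = 1.0, 0.5   # input->hidden and context->hidden weights

def srn_step(x, context):
    hidden = math.tanh(W_IN * x + W_CTX * context)  # combine input + memory
    return hidden, hidden    # output, and new context (copy of hidden)

ctx = 0.0
out1, ctx = srn_step(1.0, ctx)  # same input twice...
out2, ctx = srn_step(1.0, ctx)  # ...different output: context changed
```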

19
Q

SRN vs. Large Language Model (LLM)

A

Architecture:

SRN: Small-scale, few layers
LLM: Deep Transformer models

Memory:

SRN: Fixed, short-term
LLM: Long context via attention

Learning:

SRN: Trained on small datasets
LLM: Trained on billions of tokens

20
Q

Are LLMs Good Models of Human Language?

A

Not quite.
LLMs:
Use surface-level statistical regularities.

Don’t “understand” meaning or have grounded experience.

Don’t acquire language like children (no embodied learning or social cues).

Humans:
Learn through interaction, attention, grounding in perception/action.

Readings (Blank, 2023; Cai et al., 2024) agree: LLMs are impressive but cognitively impoverished compared to human language use.

21
Q

Classic View — Language as a Mirror of Thought

A

Supported by Chomsky and Fodor

Language labels pre-existing concepts

Thought = universal; Language = expression

🧠 Example: Infants and non-linguistic primates have similar conceptual structures

22
Q

Linguistic Relativity (Whorfian View)

A

The idea that language shapes thought: language may alter the concepts themselves.

Labels change how we see, think, or remember

🧪 Evidence:
Words may cause conceptual merging or splitting

New labels → new mental categories

Ex: “Wug”, “dax”, “zif” might lead to category distortions over time

Linguistic Relativity:
“Observers are not led by the same evidence to the same conclusions unless their language backgrounds are the same.” (Whorf)

Examples from Event & Spatial Language:
Manner languages (English) → verbs describe how

Path languages (Spanish, Greek) → verbs describe where

Visual World Experiments:
Speakers of different languages attend to different parts of a scene depending on their verb bias (manner vs. path).

23
Q

Space Language

A

Egocentric (left/right) vs. Allocentric (north/south)

Tzeltal (Mayan): uses only allocentric

Early studies: Tzeltal speakers performed poorly on egocentric tasks

BUT…
👁 Li & Gleitman showed that Penn students behaved similarly depending on context — suggesting environmental factors matter, not just language

Language shapes what we attend to, especially during communication — not necessarily how we can think.

24
Q

How Common is Bilingualism? is one language turned off?

A

60%+ of world population speaks more than one language

Monolingualism is rare globally

❌ NO.

Bilingual minds show co-activation of both languages

Even in single-language contexts, non-target language activates subtly

25
Q

Competition-Inhibition Model

A

Both languages active → require inhibition

Cognitive control is engaged to select one language

🧪 Marian & Spivey (2003):
Russian–English bilinguals heard the Russian word for “stamp” (марка)

Fixations showed the English competitor (“marker”) was also activated

26
Q

Code-Switching vs. Code-Blending

A

Code-switching: Alternating between two languages within or across utterances (spoken)

Code-blending: Producing words and signs from both languages simultaneously

Bimodal bilingual: Person fluent in one spoken and one signed language

🧪 Pyers & Emmorey (2008):
ASL-English bilinguals prefer code-blending, especially in narratives — because it’s less cognitively costly

TAKEAWAY:
Bilingualism shows that language systems are never fully inactive; both languages are activated in parallel, which may foster cognitive flexibility

Multilingual minds are constantly managing interference and cross-talk

The context and demands of the task influence whether language shapes cognition

27
Q

Writing Systems and What They Represent

A

Alphabetic:
- Encodes phonemes (e.g., English, Spanish)

Syllabary:
- Encodes syllables (e.g., Cherokee; Japanese kana)

Logographic:
- Encodes morphemes/words (e.g., Chinese, ancient Mayan)

Pictographic:
- Encodes meanings/images (e.g., early Sumerian cuneiform)

Writing systems expose meta-linguistic awareness — we become aware of sounds, syllables, and morphemes because writing forces us to dissect speech.

Writing did not evolve biologically — it recycles existing brain architecture (visual object and face recognition regions) for symbol processing.

28
Q

Learning to Read – Developmental Pathways

A

Reading is unnatural and must be explicitly learned — even though spoken language is acquired effortlessly.

Pictographic stage – treats words like images (e.g., “dog” = picture of dog)

Semantic stage – recognizes words as holistic units (e.g., “shoes” = “dress” due to visual similarity)

Sound-based decoding – maps written forms to spoken language

Perfetti & Dunlap’s Universal Phonology Principle (UPP):
“Regardless of the system, skilled reading involves activating the sound structure of words.”
💡 Even in logographic systems (like Chinese), phonology is accessed.

29
Q

Eye Movements and Visual Word Recognition

A

👁 How do we read silently?
Reading involves a series of saccades (eye jumps) and fixations (pauses to gather info).

Fixations = ~250 ms; Saccades = ~30 ms

🔬 Rayner’s Moving Window Method (1980):
Showed text only around fixation; the rest was masked.

Readers need ~3–4 letters left, ~14–15 right to read fluently in English.

🧠 Asymmetry reflects:
Attentional spotlight shaped by both visual and linguistic systems.

In Hebrew (right-to-left), spotlight shifts leftward instead.

30
Q

Neural Basis of Reading

A

🔍 Visual Word Form Area (VWFA):
Found in left occipito-temporal sulcus

Specializes in recognizing written words regardless of:

Font

Letter case

Location on screen

🧠 It responds selectively to orthography (written letter strings and spelling patterns), not to objects or spoken words.
🧬 Neuronal Recycling Hypothesis (Dehaene):
Reading “recycles” regions of the brain previously used for object recognition (especially faces).
🧪 Dehaene et al. (2002–2005):
fMRI shows VWFA activation even when:

Words are in lowercase vs. UPPERCASE

Words appear on different parts of the screen

Shows: VWFA encodes abstract letter identity, not low-level visual features

🧠 VWFA connectivity:
Inputs: Primary visual cortex (V1)

Outputs:

To temporal lobe for meaning

To frontal areas for phonological output (Broca’s area)

31
Q

DYSLEXIA & DEVELOPMENT

A

🔬 Bradley & Bryant (1983): Landmark Study
Tested 400+ 4-year-olds for phoneme awareness

Found strong predictive link to reading at age 7

Intervention groups trained on sound categorization with letters had best outcomes

✅ Showed that phonemic awareness is causally linked to later reading success

🧠 Dyslexia:
Genetic, developmental disorder affecting phonological processing

Children show:

Trouble identifying rhymes

Difficulty breaking words into phonemes

Delayed decoding skill

🧪 Brain imaging: underactivation of left temporo-parietal regions and VWFA in dyslexic children
✅ Best intervention: explicit phonics-based instruction targeting sound–letter mapping