Language Integration Flashcards

1
Q

What is meant by modularity of mind? How would you characterize those modules?

A

= the relative independence of different components of a system
- modules are domain-specific (e.g. visual) and informationally encapsulated (information is initially available only to one module)
- processing is mandatory, fast, and automatic, e.g. V1 cannot choose not to process its input

2
Q

What does the strict modular view claim? Does it really hold?

A

= the language module can be seen as distinct from the memory (or emotion) module
BUT language is not a singular system, e.g. semantics, lexemes, LTM vs. STM

=> we want to know how these components interact (picture)

3
Q

What is meant by morpho-syntax? Give an example of what could be acquired this way.

A
  • We must have stored the rules of syntax in LTM
    • the first two-word combinations babies produce respect word order
  • Morpho-syntax turns out to be a combination of acquiring rules and expectations
    - e.g. adding -ed to form the past tense (plus exceptions)
4
Q

How does phonology relate to memory?

A
  • We acquire and store phoneme inventories in LTM
  • Elements of prosody appear to be universal
    - e.g. phrasal prosody can be recognized by infants just a few months old
    - not much research has been done on specific patterns, e.g. questions
5
Q

What else do we need to store in LTM?

A
  • Storing lexical items in LTM
  • Plus acquiring the semantic and associative organization of the lexicon
6
Q

Which two language processes could benefit from WM, and why?

A

Speech perception and speech production
- In both cases we are either listening to or producing a continuous stream of language, so there is a need for a buffer that can keep up with such information

7
Q

Define speech segmentation.

A

= process of extracting words from a fluent utterance

8
Q

Recall the research in the picture

A

Procedure: They presented continuous speech composed of syllables whose combinations had different probabilities
- e.g. pu+le+ki tended to occur together, whereas ki+da had lower predictability

Results: 8-month-olds preferred listening to high-probability words as opposed to low-probability words

9
Q

What is the name of the phenomenon from the previous study (i.e. different probabilities for syllables)?
What does it help us with? Is it specific to language processing?

A

Transitional probability (TP) = how predictable the transition between units is (can we predict what the next syllable will be?)
- If I cannot predict it, that suggests a word boundary

Domain-general:
- non-linguistic stimuli
- motor sequences (tapping, e.g. Guitar Hero)
- other modalities (vision)
=> we can create simulations of what it is like for babies to extract information (a minimal sketch of the computation follows)
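
A minimal sketch of how TP-based segmentation could work (a toy Python illustration of my own; the syllables, stream, and threshold are invented, not the study's materials):

```python
from collections import Counter

def transitional_probabilities(stream):
    """TP(a -> b) = count(a followed by b) / count(a), over adjacent syllables."""
    pair_counts = Counter(zip(stream, stream[1:]))
    unit_counts = Counter(stream[:-1])
    return {(a, b): n / unit_counts[a] for (a, b), n in pair_counts.items()}

def segment(stream, threshold=0.9):
    """Posit a word boundary wherever the TP between adjacent syllables dips."""
    tps = transitional_probabilities(stream)
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tps[(a, b)] < threshold:  # unpredictable transition -> boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Hypothetical stream: made-up words "puleki", "dago", "bimu" in varied order
word_list = ["puleki", "dago", "bimu", "dago", "puleki",
             "bimu", "puleki", "dago", "bimu", "puleki"]
stream = [w[i:i + 2] for w in word_list for i in range(0, len(w), 2)]
print(segment(stream))  # within-word TPs stay at 1.0, so the words come back out
```

In this toy stream every within-word transition has TP = 1.0 while between-word transitions dip well below it, so thresholding the TPs recovers the word boundaries.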

10
Q

What was the new paradigm on transitional probabilities?

A

Nonsense words were embedded in equally frequent but very low-TP "noise" syllables

=> BUT the words were still extracted

11
Q

What could be the underlying memory model of transitional-probability learning?

A

We cannot hold an entire sequence in our WM/phonological loop (it exceeds the capacity)
-> If the utterance is processed within a limited WM buffer
=> then high-TP words that occur only once per WM buffer would NOT be extracted (see the toy simulation below)
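
A toy simulation of this buffer idea (my own sketch: the buffer size, filler syllables, and streams are assumed and do not reproduce all of the study's matching constraints): a word whose occurrences are clumped can repeat within a single WM-sized window, while an evenly spaced word cannot.

```python
def max_repeats_in_window(stream, word, window):
    """Most times `word` occurs fully inside any single window of `window` syllables."""
    n = len(word)
    best = 0
    for start in range(len(stream) - window + 1):
        chunk = stream[start:start + window]
        best = max(best, sum(chunk[i:i + n] == word for i in range(window - n + 1)))
    return best

# Hypothetical 3-syllable word "A B C" amid filler syllables "x";
# both streams contain the word exactly 4 times
word = ["A", "B", "C"]
clumped = ["x"] * 4 + word * 2 + ["x"] * 20 + word * 2 + ["x"] * 4  # close-words
spaced = (["x"] * 4 + word) * 4 + ["x"] * 4                         # far-words

for name, stream in [("clumped", clumped), ("spaced", spaced)]:
    print(name, max_repeats_in_window(stream, word, window=8))
# clumped -> 2 (the word repeats inside one buffer-sized window)
# spaced  -> 1 (no window ever holds two tokens, so nothing recurs in the buffer)
```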

12
Q

Explain the procedure of the research concerned with TP and the capacity of WM. How did the stimuli differ, and how were they the same? What would you predict based on statistical learning?

A

Creation of a random utterance with
- Close-words (C) = occurrences clumped together
- Far-words (F) = occurrences evenly spaced
=> BOTH have
- the same number of occurrences
- the same average repeat distance
- the same TPs

Prediction according to statistical learning: both word types will be preferred over the rest

13
Q

What’s the result of the research on TP and WM? What general conclusion does it hold for statistical learning?

A

=> Participants chose C (close) words, but NOT F (far) words, over other words

Statistical learning can be a helpful tool for understanding linguistic segmentation
- BUT it does seem to be limited by WM capacity
- AND it is affected by the serial position effect within prosodic chunks

14
Q

How does prosody fit into the mix? (research on adding prosody to statistical learning)

A

Procedure: Creating sentences out of nonsense syllables, just as already described
-> by adding prosody we get 2 conditions:
- Straddling words = those with a prosodic boundary in their middle (weird)
- Internal words = those with prosody placed as it normally would be

Results:
=> Without prosody, both word types were recognized
=> With prosody, ONLY internal words were recognized

15
Q

What memory-related phenomenon may also explain this pattern of prosodic segmentation?

A

Serial position effect = remembering the first and last items better than the middle ones

=> Words are better recovered (preferred over part-words) from prosodic edges compared to the middle

16
Q

Why do we need WM for sentence processing?

A
  • To keep previous material “active” e.g. names of the main characters in a book
  • To determine the correct antecedents for pronouns
17
Q

How do we investigate online parsing? Example? Measures?

A

We give people a sentence and ask questions about it:
- e.g. Joe figured the puzzle that Sue took out
-> Who took out the puzzle?
- a) Joe
- b) Sue
=> Measure how long people spend on a word within a sentence to infer how difficult it was for them to integrate it into the previous context

=> Integration cost is measured as the distance to the attachment site (toy illustration below)
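
A toy illustration of that distance measure (the dependency heads below are hand-assigned and simplified, not the output of a real parser):

```python
# Hypothetical dependency parse of the example sentence: each word is paired
# with the 0-based index of the word it attaches to (None = root)
sentence = [("Joe", 1), ("figured", None), ("the", 3), ("puzzle", 1),
            ("that", 6), ("Sue", 6), ("took", 3), ("out", 6)]

for i, (word, head) in enumerate(sentence):
    cost = 0 if head is None else abs(i - head)  # distance to attachment site
    print(f"{word:8} integration cost = {cost}")
# "took" must attach back to "puzzle" three words earlier -> higher cost,
# predicted to show up as a longer reading time on that word
```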

18
Q

Which one of the selected words will take longer to integrate and why?

A

Integration cost affects reading time -> higher integration cost implies holding more items in WM

19
Q

Could WM also interact with syntactic parsing?

A

Yes. The further the target word is from the word it agrees with, the more often people misjudge ungrammatical sentences as correct

E.g. The key to the cabinets are rusty

20
Q

Explain the research on (structurally ambiguous) sentences, WM, and executive functions. What were the results?

A

Different verbs prefer a direct-object (DO) or sentence-complement (SC) continuation, and we automatically infer which one it is

Procedure: researchers manipulated:
- consistency of the final phrase with the initial verb (DO vs. SC)
- WM load of the final phrase
=> measured activity with fMRI

Results:
=> Greater activity for NON-consistent continuations in dlPFC (it has to work extra hard)
=> Greater WM load = more activity in the inferior parietal cortex (IPC)
(mainly for NON-consistent continuations)

21
Q

What is the bottom line of the study on WM and executive-function usage?

A
  • dlPFC is involved in strategic manipulations of WM content (even in other tasks e.g. Stroop)
  • IPC is involved in storing phonological info

=> Parsing of sentences recruits WM and executive functions (particularly if structurally ambiguous)

22
Q

How are spoonerisms/lexical bias related to WM?

A

If we make such errors (e.g. joints and bones -> boints and jones), it implies that when we're trying to say the first word (joints), the second word (bones):
- must have already been selected
- must have been placed into the WM buffer

23
Q

Fill in: If the memory buffer is the same for ——- and ——— => then errors in ——- should mimic errors in ———

A
  • Spontaneous speech production
  • Repeating a list of syllables in STM
24
Q

Describe the research on WM and speech production.

A

They examined the effect of verbal and spatial WM load on speech production
- Verbal condition = remembering a letter sequence
- Spatial condition = remembering locations within a grid
-> before reproducing the memory set, participants had to read a sentence aloud
-> Dependent variable = number of errors in the sentence

Results:
=> Load = more errors than in controls
=> NO difference between the verbal and spatial conditions

=> verbal WM is not relevant to speech specifically; the effect is more about attention (e.g. how we perform when multitasking)

25
Q

How, in general, may language be connected to emotions?

A
  1. Prosody may carry emotion as a paralinguistic feature
  2. Lexical items can convey affective semantics, e.g. sad, happy, frustrated
  3. Language can be used for many purposes in which emotions play a crucial role
26
Q

What kind of research was conducted on the connection between prosody and emotions?

A

Participants from the USA and India listened to recordings and judged their affective features
=> a cross-cultural correlation was found for emotional categories

In the picture: prosody seems to show similar categorization and shape across cultures

Also, emotional categories are better predictors of emotion recognition than affective features like valence

27
Q

What kind of impact can emotions have on syntax (procedure)?

A

In a Spanish study, participants were presented with sentences:
- with different affective content: positive, negative, neutral
- correct or incorrect (regarding the number agreement of the adjective)

28
Q

What kind of impact can emotions have on syntax (results+why)?

A

Results:
=> Syntactic violations resulted in an increased left anterior negativity (LAN) around 400 ms
=> The LAN showed different patterns depending on the affective content
(note: the picture shows a difference wave)

Conclusion:
=> the emotional valence of a word changes its syntactic integration
- negative = harder to integrate (hence the larger LAN)