Language Integration Flashcards
What is meant by modularity of mind? How would you characterize those modules?
= the relative independence of different components of a system
- modules are domain specific (e.g. visual) and informationally encapsulated (info is initially available only to one module)
- processing is mandatory, fast, and automatic - we cannot choose not to process, e.g. V1 cannot not process visual input
What is the strict modular view like? Is it really that way?
= the language module can be seen as distinct from the memory (or emotion) module
BUT language is not a singular system, e.g. semantics, lexemes, LTM vs. STM
=> we want to know how they interact (picture)
What is meant by Morpho-syntax? Give example of what could be acquired this way.
- We must have stored the rules of syntax in LTM
- the first two-word combinations babies make respect word order
- Morpho-syntax turns out to be a combination of acquiring rules and expectations
- E.g. adding -ed to form the past tense (+ exceptions)
How is it with phonology and memory?
- We acquire and store phoneme inventories in LTM
- Elements of prosody appear to be universal
- e.g. phrasal prosody can be recognized by infants a few months old
- not much research done on specific patterns, e.g. questions
What else do we need to store in LTM?
- Storing lexical items in LTM
- Plus acquiring the semantic and associative organization of the lexicon
What two language processes could benefit from WM and why?
Speech perception and speech production
- In both cases we are either listening to or producing a stream of language - there is a need for a buffer that can keep up with such info
Define speech segmentation.
= process of extracting words from a fluent utterance
Recall the research in the picture
Procedure: They presented continuous speech composed of random syllables -> syllable combinations had different probabilities
- E.g. pu+le+ki tended to occur together, whereas ki+da had lower predictability
Results: 8-month-olds distinguished high-probability words from low-probability part-words (their listening times differed)
What is the name of the phenomenon from the previous study (i.e. different probabilities for syllable transitions)?
What does it help us with? Is it specific to language processing?
Transitional probability - the conditional probability of one syllable following another (can we predict what the next syllable will be?)
- If I cannot predict it => word boundary
Domain-general:
- non-linguistic stimuli
- motor sequences (tapping, e.g. Guitar Hero)
- other modalities (vision)
=> we can create simulations of what it's like for babies to extract info
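Such a simulation can be sketched in a few lines of Python. This is a minimal illustration, not the original stimuli: the four nonsense words (built from the pu+le+ki example on this card plus invented syllables) are concatenated into a continuous stream, syllable bigrams are counted, and TPs come out high inside words and low at word boundaries.

```python
import random
from collections import Counter

# Hypothetical nonsense words (illustrative; only pu+le+ki comes from the card).
# Each word is three unique CV syllables.
words = [["pu", "le", "ki"], ["da", "vo", "mu"],
         ["go", "ta", "fe"], ["ro", "bi", "so"]]

# Build a continuous "utterance": random word order, no pauses between words.
random.seed(0)
stream = []
for _ in range(300):
    stream.extend(random.choice(words))

# Count syllable bigrams and unigrams over the stream.
pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])

def tp(a, b):
    """Transitional probability P(b | a) = count(a, b) / count(a)."""
    return pair_counts[(a, b)] / syll_counts[a]

# Within a word, the next syllable is fully predictable -> TP = 1.0.
print(tp("pu", "le"))  # 1.0
# Across a word boundary, any of the 4 word onsets may follow -> TP ~ 0.25,
# i.e. an unpredictable transition, which signals a word boundary.
print(tp("ki", "da"))
```

A drop in TP is thus the segmentation cue: a learner tracking these counts can posit boundaries wherever predictability collapses.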
What was the new paradigm on transition probabilities?
Nonsense words were embedded in equally frequent but very low-TP "noise" syllables
=> BUT the words were still extracted
What could be the possible underlying model of transitional probabilities?
We cannot hold an entire sequence in our WM/phonological loop (it exceeds the capacity)
-> If the whole utterance had to be processed in WM,
=> then high-TP words that occur only once per WM buffer would NOT be extracted
Explain the procedure of the research concerned with TP and the capacity of WM. How did the stimuli differ - how were they the same? What would you predict based on statistical learning?
Creation of a random utterance with
- Close-words (C) = occurrences clumped together
- Far-words (F) = occurrences evenly spaced
=> BOTH have
- Same number of occurrences
- Same average repeat distances
- Same TPs
Prediction according to statistical learning = both words will be preferred over the rest
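The close/far design can be illustrated with a toy stimulus generator - a hedged Python sketch (the syllables, slot positions, and filler are invented placeholders, not the study's materials). Both target words occur 9 times with the same average repeat distance (10 slots) and perfect within-word TPs; only clumped vs. even spacing differs.

```python
import random

random.seed(1)

# Placeholder syllables (invented for illustration).
C = ["pu", "le", "ki"]                         # close-word: repetitions clumped
F = ["da", "vo", "mu"]                         # far-word: repetitions evenly spaced
FILLER = ["xa", "ne", "du", "bo", "gi", "fu"]  # background syllables

# 90 word-sized slots. Each target word fills 9 slots, and both layouts
# have the same average repeat distance (10 slots between occurrences):
close_slots = {1, 2, 3, 40, 41, 42, 79, 80, 81}  # three clumps of three
far_slots = {5, 15, 25, 35, 45, 55, 65, 75, 85}  # one every 10 slots

utterance = []
for slot in range(90):
    if slot in close_slots:
        utterance.extend(C)
    elif slot in far_slots:
        utterance.extend(F)
    else:
        utterance.extend(random.sample(FILLER, 3))  # random filler "word"

# Equal frequency: each target word surfaces exactly 9 times.
count_C = sum(utterance[i:i + 3] == C for i in range(len(utterance)))
count_F = sum(utterance[i:i + 3] == F for i in range(len(utterance)))
print(count_C, count_F)  # 9 9
```

Because frequency, average spacing, and TPs are matched, any difference in how well C vs. F words are learned has to come from the spacing pattern itself - which is what lets the study probe WM constraints on statistical learning.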
What’s the result of the research on TP and WM? What general conclusion does it hold for statistical learning?
=> Participants chose C words, but NOT F words, over other words
Statistical learning can be a helpful tool for understanding linguistic segmentation
- BUT it does seem to be impacted by WM capacity
- BUT it is affected by serial position effect in prosodic chunks
How does prosody fit into the mix? (research on adding prosody to statistical learning)
Procedure: Creating sentences out of nonsense syllables just like those already described
-> by adding prosody we get 2 conditions:
- Straddling words = those with a prosodic boundary in their middle (weird)
- Internal words = those fully contained within a prosodic phrase (prosody added as we normally would)
Results:
=> Without prosody, both word types were recognized
=> With prosody, ONLY internal words were recognized
What may also be the reason prosodic segmentation works this way (memory related)?
Serial position effect = remembering the first and last items better than the middle
=> Words are better recovered (preferred over part-words) from prosodic edges compared to the middle