Language and Brain 3.4c: Article Flashcards
W2: Color naming across languages reflects color use.
- Analysed the results of the World Color Survey to show that communication is more efficient for warmer colors across languages.
- Tests the hypothesis that differences in color categorization between languages are due to differences in how useful a color is to a culture.
W2: Russian blues reveal effects of language on color discrimination.
RQ: does the difference between Russian and English in the classification of shades of blue lead to differences in how people discriminate these colors?
- subjects shown one target color and two reference colors, asked which one matched the target.
RESULTS: Russian speakers showed a category advantage when the reference colors were in different linguistic categories.
- critical diff is not that English speakers cannot distinguish the colors, but that Russian speakers can't help but discriminate them categorically.
W2: Language can boost otherwise unseen objects into visual awareness.
RQ: can language based activation of visual representations affect the ability to detect the presence of an object.
Method: binocular rivalry, one eye picture other eye mask. verbal cue is presented, either valid or invalid.
Results: detection was faster after a valid verbal cue, supporting weak determinism
W2: Turning the tables: language and spatial reasoning.
Goal: challenge Levinson and the Whorf hypothesis
RQ: do linguistic differences in the mapping of space onto language impact the ways that speakers come to conceptualize the world? (Whorf hypo)
(odd, because they only test speakers of one language)
-2 ways of conceptualizing space in language. Relative vs absolute.
- Li and Gleitman showed that environmental factors (e.g., blinds up or down, a duck landmark) can bias speakers of the same language toward different frames of reference
W2: Returning the tables: language affects spatial reasoning.
Goal: response to Li and gleitman. refute criticism and show robustness
- Li and Gleitman made the task too easy; with more cognitive load, people revert to the frame of reference (FoR) consistent with their language.
- Redid the Li and Gleitman experiments with more cognitive load and found no effects of environment.
W3: The activation of modality-specific representations during discourse processing.
- This study re-analyses data from two previous studies in which participants read extended narratives while in an fMRI scanner
- Study 1: do readers activate modality specific brain regions during discourse comprehension, without explicit judgement tasks?
-Sentences with auditory imagery: activation found in auditory areas of the brain
-Sentences with motor imagery: activation found in motor areas of brain
-No significant activation in visual brain areas when reading sentences with visual imagery
Study 2
- Used scrambled sentences (scrambled condition) alongside a story condition (global coherence)
- Results/Discussion
Motor and auditory imagery clauses activated brain regions only in the story condition (with global coherence) → situational model made
No significant activation for the scrambled condition
Discussion
- readers activate sensorimotor regions relevant to the perceptual information implied in the text.
W3: Language comprehenders mentally represent the shapes of objects.
RQ: do language comprehenders mentally represent shapes of objects?
paradigm: if they do, then a picture presented after the sentence should be recognized less accurately (and more slowly) when its shape mismatches the shape implied by the sentence (an eagle flying vs. an eagle in its nest)
-Results: Reaction times were faster in the matching condition as compared to mismatch
also more accurate in matching condition
Exp 2: Neutral condition added, in which the sentence implied nothing about the shape of the object.
Instead of judging whether the object had been mentioned, participants now had to name the object
If reaction times for neutral and mismatch are equal → facilitation effect (match speeds responses up)
If reaction times for neutral and match are equal → inhibition effect (mismatch slows responses down)
- mismatch effect found again
We do not only comprehend a sentence; we build a situation model that adds more to our understanding
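The Exp 2 decision rule above can be sketched as a toy classifier. All reaction times and the `classify_effect` helper below are hypothetical, purely to illustrate the facilitation-vs-inhibition logic:

```python
# Hypothetical illustration of the Exp 2 logic (all RT values invented, in ms)
def classify_effect(rt_match, rt_neutral, rt_mismatch, tol=10):
    """Compare the neutral baseline against the match and mismatch conditions."""
    # Neutral ≈ mismatch, and match is faster -> matching picture speeds responses up
    if abs(rt_neutral - rt_mismatch) <= tol and rt_match < rt_neutral:
        return "facilitation"
    # Neutral ≈ match, and mismatch is slower -> mismatching picture slows responses down
    if abs(rt_neutral - rt_match) <= tol and rt_mismatch > rt_neutral:
        return "inhibition"
    return "mixed/unclear"

print(classify_effect(rt_match=620, rt_neutral=700, rt_mismatch=705))  # → facilitation
```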
W3: when peanuts fall in love: N400 evidence for the power of discourse.
Hypotheses
- Two-step model: we process local (sentence-level) semantics first, and only afterwards global discourse coherence
- Single-step model: both are processed at the same time
- N400 effect: indexes semantic processing; the bigger the effect, the harder a word is to integrate with the context
Experiment 1
- Procedure
Participants listened to six-sentence stories: 30 with an animate subject (e.g., a man) in an animate context, and 30 with an inanimate subject in an animate context (e.g., a whole story about a peanut falling in love)
- Looked at N400 effect in 1st, 3rd and 5th sentence
Results
- N400 effect differed significantly in the 1st sentence for inanimate vs. animate subjects
- No significant difference in the N400 effect in the 3rd and 5th sentences between the two
As global coherence built up, participants came to read the inanimate-condition story as "cartoonlike"
Experiment 2
-All stories were cartoonlike, but the last sentence was either context-appropriate but animacy-violating (the peanut was in love), or context-inappropriate but plausible (the peanut was salted)
Results
The context-appropriate condition had a lower N400 (so more easily comprehended) than the logically plausible condition
SUPPORTS SINGLE-STEP MODEL
W3: Same story, different story: the neural representation of interpretive frameworks.
RQ: how are neural responses to a narrative altered when two groups are given opposing views and beliefs about the depicted situation?
Procedure
- Participants heard the same story, but got one of two preceding stories which implied a completely different context (either cheating or paranoia) → manipulated prior knowledge of the participants
Results/Discussion
- Both conditions comprehended the story in a coherent way, but with a different interpretation
- Difference in brain activation in the default-mode network, mirror neuron system and ventrolateral prefrontal cortex (vlPFC) for the two conditions
W4: Linguistic signs in action: The neuropragmatics of speech acts.
- Speech act theory: utterances are linguistic actions, speech acts
-Locutionary act: the content of the utterance, e.g., "give me an apple"
-Illocutionary act: the social act performed by the utterance, e.g., requesting an apple
-Perlocutionary act: the effect achieved by performing the illocutionary act, e.g., A gives B the apple
-Action prediction theory of communicative functions (APC)
-Neuromechanistic theory: speech acts engage action-related circuits across brain regions
-When does the processing happen? 2 hypotheses:
-Cascade: Phonological>lexical>semantic>pragmatic
-Instant/parallel: processing happens simultaneously
Concluding remarks
Only indirect insights into neural mechanisms in human brain
converging evidence for ultra-rapid processing of pragmatic info occurring in parallel
Supporting parallel models
Immediate activation of cortical motor regions related to the partner's expected actions following directive speech acts
Articulatory motor regions activated for the question function, mirroring preparation of a vocal response
Initial evidence: expectation of the partner's action following speech is part of the mental representation
W4: Emoji can facilitate recognition of conveyed indirect meaning.
RQ: what is the role of emoji in the processing of potentially face threatening indirect replies?
exp1:
Participants were more likely to endorse an indirect, face-threatening interpretation of a reply, and did so more quickly, when the reply contained an emoji.
exp2: same but with the emoji at the start of the reply
no surprising results
emojis can facilitate the recognition of the meaning of indirect replies, especially for opinions & disclosure
W4: Automatic intention recognition in conversation processing.
RQ: Do speech acts play a role in language comprehension?
Experiment 1:
-Method: written & auditory materials
-Claim tested: comprehension of an indirect remark entails online activation of the speech act it performs
-Hypothesis: if the speech act is activated, responses to a probe word naming that act should be slower and more error-prone (the probe never literally appeared)
-Results: reaction times were slower after speech-act remarks, and participants were more likely to incorrectly indicate that the probe had occurred
Experiment 2:
-Purpose: replicate the results with a different task & examine evidence for speech act activation: the probe is present, but with its letters scrambled
Tested whether the probe is recognised faster in the speech act condition
Is speech act activation automatic or controlled? If automatic, activation should occur quickly, with a priming effect only at short intervals
-Results: replicated experiment 1 + a significant effect in the speech act condition at 250 ms, indicating automatic speech act activation
Speech act activation is not a controlled process but an automatic one
Experiment 3:
-Similar to experiment 1, but no context
-Hypothesis: speech act activation happens without conversational context => speech acts behave like generalized implicatures
-Procedure: same probes as Experiment 1
Generalized implicatures - doesn’t need any context
Particularised implicatures - context dependent
-Results: context was not necessary => supports the generalized-implicature account
The default interpretation does not require context
Experiment 4:
-Conversation context, computer-mediated communication
Probe questions from the system: "Was this word said?"
-Procedure: measure the reaction times
-Results: reaction times differed reliably; participants were slower to verify that the target word was absent when the remark performed the speech act (2500 ms) than when it did not (2279 ms)
Results consistent with speech act theory
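As a sanity check on the Experiment 4 means reported above (2500 ms vs. 2279 ms), the size of the interference effect is a one-line computation; the variable names are mine:

```python
# Mean probe-rejection RTs reported for Experiment 4 (ms)
rt_speech_act = 2500      # remark performed the speech act
rt_no_speech_act = 2279   # remark did not perform the speech act

# Slowdown attributed to automatic speech act activation
interference_ms = rt_speech_act - rt_no_speech_act
print(f"Speech act activation slowed rejection by {interference_ms} ms")  # → 221 ms
```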
W4: Pragmatics in action: indirect requests engage theory of mind areas and the cortical motor network
RQ: are explicit word forms a necessary condition for cortical motor activation during lang comprehension?
-Hypothesis: implied meaning should induce motor cortex activation
-4 conditions: IR (indirect request), PC, UC & PUC
-Localiser task: used to identify the relevant regions, whose activation was compared with activation in the main task
-Results: participants responded faster in the IR condition than to PC
-Indirect requests activated motor areas significantly more than the other 3 control conditions
-fMRI results: ToM areas active while inferring what the person means by the statement, during comprehension of IR sentences
Motor cortex activated due to mental simulation of the action needed due to inferences drawn from the sentence
W5: Converging evidence for differential specialization and plasticity of language systems.
2 fMRI experiments, one longitudinal and one cross-sectional. Goal: to see how lateralization changes when a second language is learned.
In both cases, comprehension of L1 speech and text became more right-lateralized, while production stayed left-lateralized (sick)
W5: An investigation across 45 languages and 12 language families reveals a universal language network.
Speakers of 45 languages were assessed in an fMRI scanner to test the universalist claim.
Results
- All langs had similar brain activation
Average activation in native language vs degraded lang in left hemisphere
-Order of brain activation in ROI
1.Native
2.Degraded
3.Unfamiliar lang
-Non-linguistic tasks elicited no activation in the language network -> shows functional selectivity
Lang networks are universal, integrated and selective