Language Flashcards

1
Q

Language in the brain

A

Commonly activated regions include the superior, middle and inferior temporal gyri in both hemispheres, and the left inferior frontal gyrus (Broca's area). White matter tracts, especially the arcuate fasciculus and the extreme capsule, also play a major role.

2
Q

Language building blocks

A
  • Words: The representations of words in the mental lexicon (ML) contain information about their spelling, pronunciation, meaning, grammatical category, etc. Some of these features are explained in more detail below (this list is not intended to be exhaustive; several others have also been discussed in the literature).
  • Phonemes: Phonemes are the smallest units of speech sound that allow discrimination between words in a given language. Speech errors like spoonerisms show us that words are not monolithic blocks of information, and that phonemes are cognitively real. Phonological overlap between words has also been shown to affect their identification, further suggesting that phonemes contribute to the structure of the ML.
  • Morphemes: Morphemes are the smallest units in the language that have meaning (e.g., dog, -s, -ness). They combine to form more complex words (dog+s, brave+ly, dark+ness). Errors such as 'a geek for books' instead of 'a book for geeks' point to the presence of morphemes, since the morpheme -s and its position alter the meaning of this phrase (this error is known as 'stranding'). Morphological information also contributes to the structure of the mental lexicon.
  • Syllables and stress: Information about syllables and stress is also likely to be represented in the mental lexicon. At least two types of evidence support this conclusion: (a) expletives can only be inserted into words with an appropriate syllabic and stress pattern (McCarthy, 1982); (b) stress can alter the meaning of a word, and some brain-damaged patients show selective stress errors (e.g., Cappa et al., 1997 documented the case of CV, an Italian patient who produced speech errors in which the stress fell on the wrong syllable even though the phonemes were properly selected).
3
Q

The components of
written word recognition are

A

extracting information from text; letter identification; access to the orthographic lexicon; grapheme to phoneme conversion; retrieval of word meanings.

4
Q

Extracting information from text

A

While we read, our eyes make a series of rapid movements (saccades), which are separated by fixations. The role of fixations is to bring text into foveal vision. The average fixation lasts 200-250 ms; the average saccade spans 8 letters. Readers move their eyes backwards (regressions) 10-15% of the time. The span of effective vision is about 14-15 letters to the right of fixation, and 3-4 letters to the left.

5
Q

Letter identification and the visual word form area (VWFA)

A

Letters are identified following the initial analysis in the visual cortex. Letter recognition involves two stages: letters are first recognised based on their physical characteristics, and then identified irrespective of shape (A = a).

Alexia results from lesions in the left posterior temporo-occipital regions, particularly the so-called visual word form area (VWFA) in the left fusiform gyrus.

6
Q

Alexia

A

Alexia is a selective impairment in identifying written words and letters. Alexic patients have problems in determining letter identities, but they are still able to make discriminations based on the physical characteristics of letters (e.g., whether the letters are normally oriented, or whether they are real letters). Their oral spelling is intact, suggesting that they retain information about orthography. Alexia results from lesions in the left posterior temporo-occipital regions, particularly the so-called visual word form area (VWFA) in the left fusiform gyrus.

7
Q

Orthographic lexicon

A

The orthographic lexicon stores representations of spelling for familiar words. Access to information stored at this level is especially needed for words with irregular spelling (yacht, colonel, aisle); these cannot be named reliably unless readers have previously learnt their pronunciations and orthographies.

8
Q

Grapheme to phoneme conversion

A

In addition to reading previously known words, we can also read out loud newly encountered
letter strings (novel words or pseudowords). They cannot be retrieved from the orthographic
lexicon, suggesting that we have a mechanism to transcode letters into sounds.

9
Q

Dual-route cascaded model (DRC; Coltheart et al., 2001)

A

The DRC postulates two reading routes: a lexical route (dictionary lookup) and a non-lexical route (grapheme-to-phoneme conversion). The lexical route is faster for reading words, and is also faster for reading regular than irregular words. The non-lexical route is faster for reading pseudowords. When the lexical route is damaged, processing must go through grapheme-to-phoneme conversion, which is inadequate for words with irregular spelling. This is the pattern seen in patients with surface dyslexia.
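
To make the two-route logic concrete, here is a minimal Python sketch. It is not the actual DRC implementation: the toy lexicon, the rule table and the pseudo-phonemic codes are all invented for illustration. Disabling the lexical route forces every string through the rules, which regularises irregular words, mirroring the surface dyslexia pattern described above.

```python
# Toy illustration of the dual-route idea (NOT the real DRC model).
# Lexical route: stored pronunciations, including irregular words.
LEXICON = {
    "mint": "mInt",   # regular: the rules below give the same answer
    "pint": "paInt",  # irregular: the rules would make it rhyme with "mint"
}

# Non-lexical route: grapheme-to-phoneme correspondence rules (grossly simplified).
GPC_RULES = {"m": "m", "i": "I", "n": "n", "t": "t", "p": "p"}

def read_aloud(letter_string, lexical_route_intact=True):
    """Return a pseudo-phonemic code, preferring the lexical route for known words."""
    if lexical_route_intact and letter_string in LEXICON:
        return LEXICON[letter_string]
    # Fall back on grapheme-to-phoneme conversion: handles pseudowords,
    # but regularises irregular words (the surface dyslexia pattern).
    return "".join(GPC_RULES.get(letter, "?") for letter in letter_string)

print(read_aloud("pint"))                              # 'paInt' (correct irregular form)
print(read_aloud("nint"))                              # 'nInt' (pseudoword, read via rules)
print(read_aloud("pint", lexical_route_intact=False))  # 'pInt' (regularisation error)
```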

10
Q

Triangle model (Seidenberg & McClelland, 1989; Harm & Seidenberg, 2004)

A

This model postulates three components: semantic units that encode the meaning of words; phonological units that specify word sounds (phonemes); and orthographic units that represent word orthography (letters). This is a completely interactive model: the components are interconnected and contribute jointly to the recognition of words and pseudowords. Words are represented as patterns of activated semantic, phonological, and orthographic units. Despite this massive interactivity, differences naturally emerge among regular words, irregular words and pseudowords, since a different activation pattern corresponds to each of these classes. Phonological units and their connections to orthographic units are particularly relevant for pseudowords (pseudowords have no meanings, so the contribution from semantic units is necessarily limited). On the other hand, the orthographic and semantic units and the connections between them are critical in the recognition of words with irregular spelling.

11
Q

Visual word processing in the brain

A

Reading activates regions in the occipital, temporal and frontal brain areas, predominantly in the left hemisphere. Information is passed from the occipital regions on to the VWFA, where the letter strings are identified. It is then distributed to numerous brain regions that encode word meaning, sound and articulation. These more anterior regions are 'amodal', as they support the processing of both written and spoken language.

12
Q

The major issues addressed in the spoken word recognition literature are

A

the mechanisms for segmentation of the spoken stream,
lexical selection, access to meaning, and the effects of context.

13
Q

Word segmentation (metrical segmentation strategy (MSS))

A

The first problem that listeners encounter is the identification of word boundaries (the segmentation problem). Spectrograms make the problem visible: they contain no pauses corresponding to word boundaries. The problem is also apparent while listening to foreign speech. Cutler and Norris (1988) proposed that English speakers use a metrical segmentation strategy (MSS) to segment speech. This is based on syllabic stress, reflecting the fact that the rhythmic structure of English is stress-timed (i.e., some syllables are more stressed than others; e.g., the first syllable car in carpet is more stressed than the second). Stressed syllables are perceived as having greater loudness, increased duration, and a change in pitch. They contain full vowels, while unstressed syllables have reduced vowels (usually a schwa, as in the initial syllables of behind and averse). Many grammatical words (pronouns, articles, prepositions) are typically unstressed in continuous speech (e.g., him, it, of). The MSS theory proposes that stressed syllables are taken as likely onsets of content words (nouns, adjectives, verbs, adverbs) and that continuous speech is segmented at stressed syllables.
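
A minimal sketch of the MSS heuristic in Python, assuming syllables arrive pre-marked for stress (a real listener must of course extract stress from the acoustics) and using an invented example phrase:

```python
def mss_segment(syllables):
    """Posit a word boundary at every stressed syllable (marked with a leading ')."""
    words, current = [], []
    for syllable in syllables:
        if syllable.startswith("'") and current:
            words.append("".join(current))  # stressed syllable -> likely new word onset
            current = []
        current.append(syllable.lstrip("'"))
    if current:
        words.append("".join(current))
    return words

# "the CARpet in the GARden": boundaries are posited before the stressed syllables.
print(mss_segment(["the", "'car", "pet", "in", "the", "'gar", "den"]))
# ['the', 'carpetinthe', 'garden']
```

The sketch recovers the onsets of carpet and garden but glues the unstressed grammatical words to the preceding content word, the kind of rough approximation such a heuristic trades for speed.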

14
Q

Lexical selection

A

The goal of this rapid searching process is to identify the stored representations that match the input information. The lexicon contains word representations that encode various features of word sounds. For English words, these include phonemes and stress; in other languages, other features may be encoded (e.g., tones in Chinese).

Words are represented in the lexicon in an abstract format that does not encode low-level acoustic characteristics (e.g., differences between male and female voices). Lexical search starts as soon as the word onset has been identified. One of the clearest demonstrations comes from Marslen-Wilson's (1975) shadowing paradigm study. Participants listened to spoken passages and repeated back what they were hearing. Words in the input were occasionally incorrect, but participants corrected them, and this often occurred before the incorrect word had been presented in full. Further studies estimated that words in context can be recognised within 175-200 ms of their onset, or when only half or less than half of their acoustic content has been presented.

15
Q

Lexical selection: the cohort model of spoken word recognition

A

Many words can be uniquely identified even when they are incomplete. For example, alligator is the only English word corresponding to the sequence allig, so we do not need to listen to the whole word in order to recognise it. The point at which a word can be reliably recognised is called its uniqueness point.
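
A minimal sketch of the uniqueness point computation, assuming a toy lexicon; real uniqueness points depend on the full English lexicon, so the values below are only illustrative:

```python
TOY_LEXICON = ["alligator", "allegation", "alley", "also", "beaker", "beetle"]

def uniqueness_point(word, lexicon):
    """Return how many initial segments are needed before `word` is the only
    surviving cohort member (len(word) if it never becomes unique)."""
    for i in range(1, len(word) + 1):
        cohort = [w for w in lexicon if w.startswith(word[:i])]
        if cohort == [word]:
            return i
    return len(word)

print(uniqueness_point("alligator", TOY_LEXICON))  # 4: 'alli' already rules out the rest
print(uniqueness_point("beaker", TOY_LEXICON))     # 3: 'bea' excludes 'beetle'
```

Against the full English lexicon the card's example needs the longer prefix allig; the small toy lexicon above makes alligator unique a segment earlier.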

16
Q

Lexical selection: Allopenna et al. (1998) provided evidence that information coming at later points can also activate lexical entries

A

The participants saw a display containing several objects and were instructed to grasp one of them (e.g., beaker). The distractor objects had names that shared either the initial or the final part of the target word (e.g., beetle, speaker). Eye movements revealed which objects were fixated at each point in time. The results showed that onset-related words competed early on, whereas end-related words competed only at a later point in time. This shows that the word onset is not the only portion that triggers lexical activation. Information coming at later points can also activate lexical entries, such that the word speaker can interfere with the recognition of the word beaker. This is desirable since (a) word boundaries are not reliably detected, and (b) noise may prevent listeners from hearing word onsets.

17
Q

Access to meaning

A

Swinney (1979) provided data for understanding access to meaning in speech comprehension.

Participants listened to a story containing words with ambiguous meanings (e.g., bug = insect vs. spying device). At the same time, a letter string (ant, spy or sew) appeared on the screen and they were asked to decide whether these strings corresponded to existing English words.

Early on, responses were faster for both ant and spy relative to sew, suggesting activation of both meanings of the word bug. However, activation of the non-compatible meaning (spy) faded after 200 ms; after that time, responses were faster only for words with the compatible meaning (i.e., when the paragraph was about insects, priming only appeared with ant).

These results suggest that (a) different meanings are initially activated, and contextual information is not used to determine which words are considered for recognition, but that (b) contextual information is critical for the selection of the appropriate word meaning from the multiple activated alternatives.

18
Q

Context effects

A

Data show that, when presented in noise, words are recognised at a higher rate in sentence context than in isolation (Warren, 2008). Context therefore plays an important role in speech recognition. A classic demonstration is Warren's phoneme restoration effect (Warren, 2008).

The advantage for contextual presentation appears even if words are presented in a good acoustic environment: when spoken words were sliced out of recorded conversations and presented alone, word recognition dropped to about 50%. However, the inclusion of one or two neighbouring words from the original conversation was sufficient to increase word recognition dramatically (Pollack & Pickett, 1964). Evidence from a gating study by Tyler (1984) further shows that context does not affect the set of initially activated candidates, but rather the process of selecting and narrowing down the initial set of candidates activated by the sensory input.

19
Q

Speech processing in the brain

A

Marinkovic et al (2003) used magnetoencephalography (MEG) to investigate the precise timing of spoken word recognition in the brain. Around 50 ms after a word is heard, activation appears in the early auditory regions in the left and right temporal lobes. This is where the acoustic properties of words are processed. The activation then spreads out to middle and inferior temporal regions, as well as to inferior frontal regions. These are the amodal language regions, hypothesized to process the meaning and the grammatical properties of words.


20
Q

Sentence processing: syntactic rules

A

The rules that govern how words in the language can be combined.

21
Q

The major theories of sentence parsing

A

The garden-path model (Frazier & Rayner, 1982) argues that the initial parsing is purely syntactic, with meaning not involved in the selection of the initial syntactic structure.

Constraint-based theories (MacDonald et al., 1994) suggest that the initial interpretation depends on all available sources of information (syntactic, semantic, general world knowledge).

The unrestricted race model (Van Gompel et al., 2000) argues that all sources of information are used to identify the initial syntactic structure,

while the theory of 'good-enough' representations (Ferreira et al., 2002) argues that the depth of processing and the type of information used depend on the task.

22
Q

Sentence parsing cues

A

Experimental evidence shows that people use a variety of information in order to understand
and interpret sentences: structural syntactic principles, statistical regularities, grammatical
categories, prosodic cues etc.

23
Q

Syntactic principles:

A

Frazier and collaborators (1987) suggested that listeners initially rely primarily on principles such as late closure and minimal attachment to organise words into preferred sentence structures.

For instance, experiments using reading times and eye-tracking have shown that English-speaking participants prefer the interpretation of the girl holding the book in the sentence John hit the girl with the book.

24
Q

Syntactic principles: the late closure principle

A

For instance, experiments using reading times and eye-tracking have
shown that English-speaking participants prefer the interpretation of the girl holding the book in
the sentence John hit the girl with the book.

The late closure principle states that this is because
new items are attached to the phrase or clause that was most recently processed, if
grammatically possible. This allows the listeners to overcome the limitations of the cognitive
system, which is under pressure to extract the sentence meanings quickly and reliably. The
cognitive system can hold a limited number of words in short-term memory. As words arrive,
memory traces of previous words are lost. Late closure and minimal attachment make it
possible to accommodate incoming words within partially formed syntactic structures and
minimize the loss of information.
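
A minimal sketch of the late closure heuristic, assuming for illustration that every attachment is grammatically possible; a real parser would check grammatical constraints before attaching:

```python
def parse_with_late_closure(phrases):
    """Attach each incoming phrase to the most recently processed phrase."""
    attachments = []
    for i, phrase in enumerate(phrases[1:], start=1):
        attachments.append((phrase, phrases[i - 1]))  # most recent host wins
    return attachments

# "John hit the girl with the book": the PP "with the book" attaches to the
# most recently processed constituent, "the girl" -> the girl holds the book.
for new, host in parse_with_late_closure(["John", "hit", "the girl", "with the book"]):
    print(f"'{new}' attaches to '{host}'")
```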

25
Q

Statistical regularities

A

Some syntactic structures occur regularly: e.g., in English, active sentences have a canonical Subject-Verb-Object sequence. These regular sequences make it easier to predict the subsequent words in a sentence. Slobin (1966) showed that expectations about word order can guide sentence comprehension. Participants heard four types of sentences and chose the picture that best depicted the sentence meaning:
  • active irreversible: The man ate the cake
  • active reversible: The man pushed the woman
  • passive irreversible: The cake was eaten by the man
  • passive reversible: The woman was pushed by the man
The responses were faster for active sentences than for passive sentences, which can (at least in part) be attributed to the canonical order of active sentences.
26
Q

Grammatical categories

A

Knowing the typical function of a grammatical category can provide valid and reliable cues for sentence interpretation. This is particularly true for articles (a, an, the), prepositions (on, to, in), and pronouns (me, you, that). For example, when you hear the sentence John hit the girl with the book, you most likely expect that the will be followed by a noun.
27
Q

Prosodic cues

A

Prosody involves variations in syllable length, loudness and pitch that jointly contribute to the rhythmic and intonational qualities of spoken language. Prosody varies depending on the sentence structure. Beach (1991) investigated the extent to which listeners use prosodic cues in sentence comprehension by testing verbs like argue, which can take two different sentence structures:
  • Direct object (DO; verb + object): The city council argued the mayor's position forcefully
  • Complement (C; verb + complement sentence): The city council argued the mayor's position was incorrect
The verb argued is produced differently in these two contexts: it has a shorter vowel with a stable pitch contour in DO, and a longer vowel with a fall-rise pitch contour in C. Participants heard sentence fragments and were asked to complete the sentence. When the verb had the prosody of DO sentences, participants usually produced DO sentence completions. This confirms that listeners are sensitive to prosodic cues and use them to understand the syntactic structure of sentences. The result was obtained for both long and short sentential fragments, suggesting very fast processing of prosodic information.
28
Q

Semantic information

A

Many experiments demonstrated the influence of semantic information on sentence comprehension. In Trueswell et al (1994), participants saw sentences like these:
  • The witness examined by the lawyer was useless (animate/animate nouns)
  • The evidence examined by the lawyer was useless (inanimate/animate nouns)
Reading times were slower in the first case. This is because both of the animate nouns can be the subject of the verb to examine, so it takes slightly longer to understand this sentence.
29
Q

World knowledge

A

In addition to semantic information, the processing of sentences is also influenced by our conceptual knowledge of the world. For instance, even if the sentence 'The weather in Cambridge is nicest in February' is semantically perfectly fine, our knowledge of the world tells us that it is unlikely to be true. The effects of violations of world knowledge are rapid and parallel to those for semantic violations. Hagoort et al (2004) presented three types of sentences to Dutch participants:
  • The Dutch trains are yellow and very crowded (correct)
  • The Dutch trains are white and very crowded (false: world knowledge violation)
  • The Dutch trains are sour and very crowded (false: semantic violation)
Both types of violation produced a similar pattern of brain responses, which was different from the pattern triggered by the correct sentences. The difference occurred very early on, about 400 ms after the critical word had been presented, suggesting rapid effects of world knowledge on sentence processing.
30
Q

Conclusions about sentence parsing cues

A

a. Sentence parsing involves a variety of processes that detect different types of information, from prosody to word order, semantics and world knowledge.
b. Listeners and readers rely on heuristics: quick, 'rough and ready' procedures that are far from perfect but nevertheless provide convenient shortcuts for fast approximations.
c. Experimental evidence (particularly ERPs) is generally most supportive of the constraint-based / unrestricted race models.
31
Q

The neurobiology of syntax

A

Research shows the involvement of an extended network of primarily left fronto-temporal areas, including inferior frontal regions and anterior, middle and superior temporal regions. Syntactic processing also requires intact left fronto-temporal white matter connections. A prominent model by Friederici and colleagues argues for a functional distinction between dorsal and ventral routes in the left hemisphere, with the ventral pathway supporting the processing of simple syntactic structures (The cat sat on the mat), and the dorsal pathway supporting the processing of complex syntax (The juice that the child spilled stained the rug).
32
Q

Negative consequences of bilingualism

A

Studies have also consistently demonstrated a disadvantage in the receptive vocabulary of bilingual children in the preschool and sometimes early school years. This is likely to be due to the fewer opportunities that bilinguals have to be exposed to the words in each of their languages (a frequency effect). Gollan et al (2005) also demonstrated that proficient adult bilinguals are more vulnerable than monolinguals to temporary word-finding failures (tip-of-the-tongue states, TOTs). Bilinguals produce more TOTs when naming objects, which can also be attributed to frequency effects. TOTs for proper names are equally experienced by monolinguals and bilinguals, which suggests that bilinguals' word retrieval is comparable for words that are encountered in both languages.

Historically, far stronger claims were made; Jespersen (1922), for instance, asserted: 'First of all the child in question hardly learns either of the two languages as perfectly as he would have done if he had limited himself to one… Secondly, the brain effort required to master the two languages instead of one certainly diminishes the child's power of learning other things.'
33
Q

Positive consequences of bilingualism

A

The turning point was Peal and Lambert's (1962) finding of a positive correlation between bilingualism and intelligence when other factors have been controlled. Subsequent studies also suggested that bilinguals might have greater mental flexibility, the ability to think more abstractly, easier concept formation, positive transfer between languages benefiting verbal IQ, etc. (Bialystok, 2001).
34
Q

Lexical representations in bilingualism

A

Numerous imaging studies have investigated whether L1 and L2 recruit the same brain regions. The results varied, but the bilingual groups also varied in the onset of L2 acquisition, proficiency and exposure. In addition, different studies used different tasks: picture naming, word generation, semantic decision, sentence reading, etc.
35
Q

Indefrey et al (2006)

A

Indefrey (2006) performed a meta-analysis, which allows identifying brain regions reliably activated across studies. It emerged that in picture naming, and similarly for other tasks, L1 and L2 activate the same areas, with differences found primarily when L2 speakers had a late learning onset or lower proficiency. This suggests that these differences probably reflect the degree of difficulty of L2 processing, such that the increased activation for speakers with lower L2 efficiency may reflect increased cognitive demands.

Indefrey et al (2006) tested the effects of L2 acquisition over time. Chinese L1 speakers were tested 3, 6, and 9 months into their Dutch learning programme. The test involved listening to sentences and deciding whether the sentences described a visual scene. Identical regions in the left frontal area were activated when Chinese speakers and Dutch speakers listened to sentences in their native language. For Chinese speakers listening to Dutch, a similar left frontal activation was first observed after 6 months of exposure to Dutch.
36
Q

Revised Hierarchical Model (Kroll & Stewart, 1994)

A

Kroll & Stewart (1994) modelled the process of L2 acquisition. Their Revised Hierarchical Model suggests that during the initial stages of L2 learning, the meaning of L2 words is accessed through the corresponding L1 translation, i.e., the L2 word is translated into its L1 analogue, which mediates access to meaning. As L2 learning progresses, a link between L2 words and their meaning is established and becomes increasingly stronger, allowing direct access to meaning without depending on L1.

The predictions of this model have been tested and confirmed in several studies. For example, the model anticipates that at later stages of L2 acquisition, semantic effects should appear in L1→L2 translation but not in L2→L1 translation. This was confirmed with non-fluent English-Dutch bilinguals who translated word lists (Kroll & Stewart, 1994). The words in the list were either semantically related (e.g., all animal words) or semantically unrelated. Responses were slower with lists of semantically related words, but only in L1→L2 translation. This is interpreted as a consequence of L1→L2 translation being mediated by meaning, i.e., selection and competition between similar representations at the conceptual level.
37
Q

Spivey & Marian (1999): finding the right word in the right language

A

A study by Spivey & Marian (1999) showed that even when Russian-English bilinguals are in a putatively monolingual mode (i.e., all the instructions in the experiment were in only one language), the two languages are still activated simultaneously. In their experiment, highly proficient Russian-English bilinguals were asked to move objects according to instructions spoken in English. The objects included a target (e.g., marker), a distractor whose Russian name was phonologically similar to the target word (e.g., 'marka', stamp), and two control distractor objects whose names were not phonologically similar to the target word. Eye movements showed that bilinguals fixated the phonological distractor more often than the other distractors, suggesting that native- and second-language form-based representations were activated in parallel and competed for recognition. Selecting between these competing candidates and retrieving the right word in the right language seems to involve suppression of the non-target language.
38
Q

Switching task: Meuter and Allport (1999)

A

Meuter and Allport (1999) provided evidence for the suppression hypothesis using a switching task. The task required naming digits in the language indicated by the digit colour (e.g., black = English; red = Spanish). There were two types of trials: in no-switch trials, the language remained the same between consecutive trials, whereas in switch trials the language changed. Response latencies were shorter for L1 than L2 in no-switch trials; seemingly paradoxically, however, switching from L2 to L1 was slower than switching from L1 to L2. The suppression hypothesis explains this as a 'spillover' effect of strong L1 suppression in the former case. Specifically, naming in the weaker language (L2) requires active inhibition or suppression of the stronger competitor language (L1), and this suppression persists into the following (switch) trial.
39
Q

Cognitive consequences of bilingualism

A

Bilingualism and metalinguistic knowledge; bilingualism and executive function.
40
Q

Bilingualism and metalinguistic knowledge: Ianco-Worrall (1972)

A

Young children assume that each object takes a single name, a belief that facilitates word acquisition (but can also be misleading, as in the case of synonyms). Bilingual children cannot make the same assumption, since they constantly encounter objects with more than one plausible name. Ianco-Worrall (1972) showed that at age 4-6, bilingual children accept that objects can have multiple names more often than monolingual children do. This demonstrates that bilingual acquisition can affect language learning and metalinguistic knowledge (the knowledge individuals have about language).
41
Q

Bilingualism and executive function: the Simon task (Bialystok et al, 2004)

A

There have been numerous suggestions that bilinguals may have enhanced executive control and attention, reflecting the fact that L1 and L2 are simultaneously active and competing for access, requiring constant inhibition. One way to test this is to compare the performance of monolinguals and bilinguals on the Simon task, where participants need to overcome conflicting cues to make a decision. Bialystok et al (2004) showed that the conflicting cues cause greater interference in monolinguals than bilinguals, suggesting an advantage for bilinguals when responses require the resolution of interference. Performance on the Simon task also declines with age, but the bilinguals' performance was less affected by ageing. The advantage observed for bilinguals was not due to general differences in response speed: in a control task in which there was no conflicting information, responses were equally fast for bilinguals and monolinguals. An advantage for bilinguals was also observed with the Simon task when the short-term memory load was more taxing. These results were interpreted as showing that bilingualism facilitates decision-making in conflicting situations, reflecting bilinguals' life-long practice with suppression and resolution of the L1/L2 competition. Hence, bilinguals might be better prepared to cope with conflicting information and also to offset age- or disease-related losses in executive processes (the hypothesis of a 'bilingual advantage' in executive functions).
42
Q

Bilingualism and neurodegeneration

A

Consistent with this, Craik et al (2010) showed that bilingualism correlates with a delayed onset of progressive neurodegeneration. They showed that bilingual Alzheimer's patients show the onset of symptoms 5.1 years later than monolingual patients, suggesting that language experience may contribute to cognitive reserve, compensating for the effects of neuropathology.
43
Q

Conclusion or introduction for bilingualism

A

In sum, evidence suggests that language membership is not an important factor in organising the bilingual lexicon. There are marked effects of age of acquisition (AoA), proficiency and usage, such that a 'weaker' L2 increases the activity in brain areas associated with cognitive control. Words from both languages are activated in parallel, triggering competition and suppression. Bilingualism is beneficial across many personal, economic, social, and cultural dimensions, and these benefits overwhelmingly outweigh the costs. While a 'bilingual advantage', defined as improved cognitive control and executive function across the board, remains controversial, there is evidence that bilingualism triggers neuroplastic adaptation of processing mechanisms to enable optimal behavioural performance.
44
Q

Four additional communication mechanisms that people use to complement language (some of these mechanisms are potentially informative for the discussion about language evolution)

A

Gestures, pauses and disfluencies, conversation convergence, and speech-vision integration.
45
Q

Major types of gestures

A

1. Beats: simple, brief and repetitive movements, coordinated with speech prosody, which bear no obvious relation to the meaning of the accompanying speech.
2. Pointing: simple movements referring to spatial locations or binding objects to spatial locations.
3. Symbolic gestures: gestures with specific, conventionalised meaning (e.g., OK, thumbs up).
4. Lexical gestures: hand movements of different shape and length, non-repetitive and changing in form, which appear to be related to the meaning of the accompanying speech.
46
Q

Functions of gesture

A

It has long been clear that gestures have a communicative and informative function. This is obvious for symbolic gestures and pointing. However, gestures are also often vague, meaningless without words, and performed even in the absence of visible interlocutors: for example, congenitally blind speakers gesture even while talking to other blind people, suggesting that speakers produce gestures even when they carry no information for the listener. Butterworth & Hadar (1989) and Krauss (1998) proposed that lexical gestures may facilitate word production, by facilitating the activation of semantic and lexical features. The finding that words and their lexical gestures tend to occur simultaneously makes this proposal plausible, and Rauscher et al (1996) supported it by showing that disfluencies increase when gestures are prevented. Prelinguistic infants and primates also use gestures. Arbib et al (2008) and Tomasello (2007) suggest that visual/manual communication played a crucial intervening role in the evolution of our current vocally dominated language system (the gestural protolanguage hypothesis).
47
Q

Pauses and disfluencies: functions in communication

A

Speech is peppered with pauses and disfluencies, but they do not occur randomly or simply introduce noise. Instead, 60-70% of pauses fall at the juncture between clauses, providing a potential cue to syntactic structure.
48
Q

Brennan & Schober (2001): disfluencies

A

Brennan & Schober (2001) demonstrated that listeners pay attention to disfluencies and can exploit their information value. Participants moved a mouse to a square of a specific colour, according to instructions ('Go to the yellow square'). The instructions were manipulated to introduce between-word interruptions ('yellow purple square'), mid-word interruptions ('yel purple square') or mid-word interruptions with fillers (i.e., disfluencies: 'yel uh purple square'). Response latencies (square choices) were fastest for mid-word interruptions with fillers ('yel uh purple square'), since this disfluency allows listeners to compensate for the disruption and signals a repair.
49
Q

Conversation convergence (inside joke)

A

Speakers and listeners make changes during the course of a conversation in order to adapt to the conversational context. In Metzing & Brennan (2003), the participants and confederate speakers collaborated on a task that required moving objects displayed on a grid, giving verbal instructions to each other. The confederate speakers and the words they used varied (new vs. old). As shown by their eye movements, participants were surprised when old speakers used new words, but not when new speakers used new words. The results indicate that participants in a conversation quickly adopt new meanings and expect their interlocutors to stick to these new meanings. This suggests that listeners keep track of 'who says what', which is necessary for rapid and successful communication. This also distributes the processing load between the interlocutors, as each reuses information computed by the other. Garrod and Pickering (2004) proposed that conversation convergence relies on the unconscious and automatic mechanisms of priming and imitation, which emphasizes the prominence of communicative intent in humans.
50
Q

Multimodal integration (sensory integration, the McGurk effect)

A

Visual context affects spoken word recognition. One example of this is the McGurk effect: when people see a mouth producing the syllable ga while being auditorily presented with the syllable ba, they report hearing something different, most often the syllable da. Speech-vision integration is informative: seeing the face of a speaker supplements a degraded speech signal, functionally raising the signal-to-noise ratio (SNR) by up to 20 dB (Sumby & Pollack, 1954). It can also enhance the comprehension of speech with complicated content, or speech produced with a heavy foreign accent (Reisberg et al, 1987). Speech-vision integration is observed in prelinguistic infants and nonhuman primates, suggesting a potentially important evolutionary role.
51
Q

The neural signature of successful communication

A

Illustrative evidence comes from Stephens et al (2010), who showed that there is temporal and spatial coupling of brain activity between the speaker and the listener during successful communication. fMRI was used to record the brain activity of a speaker telling an unrehearsed story, and the brain activity of a listener listening to a recording of the story. The speaker's spatiotemporal brain activity was used to model the listener's brain activity, and the results showed that the speaker's activity was spatially and temporally coupled with the listener's activity. This coupling vanished when participants failed to communicate, for instance when a recording of a Russian speaker was played to a monolingual English listener. In addition, brain regions in the frontal lobe exhibited predictive, anticipatory responses, the extent of which correlated with the level of comprehension.