Lecture 7 - Language Flashcards
Neuroscience: hyperscanning
Hyperscanning is when we measure brain signals of 2 or more people simultaneously (with EEG or MRI) to relate them to each other
Time course of the speaker's brain activity precedes almost the same activity in the listener (same location, same variation over time) by just a few seconds
Synchronicity between speaker and listener predicts the listener’s comprehension and memory performance (e.g. students who sync more with the teacher have better learning outcomes)
Human specialisation for language and rhythm
Is this what facilitates systems like language? That we synchronise our brain rhythms?
Compared to other species, a peculiar human feature is our capacity for vocal learning: the ability to imitate and learn vocalisations that do not belong to our innate repertoire
Humans are unusual in our environment of rhythmic patterns and our drive to synchronise to them
We dance and sing before we walk and speak
Both rhythm and vocal learning, on which music and speech are based, are an evolutionary mystery
Both abilities are rare in mammals and scattered across taxonomic groups
Despite the huge complexity, most children learn their native language almost effortlessly and do not need formal teaching to achieve a rich language repertoire
Before children can understand language, they already understand intonation and rhythm of conversations
What is language?
System of communication using sounds or symbols to express feelings, thoughts, ideas and experiences
Hierarchical system
Components that can be combined to form larger units
Governed by rules, specific ways components can be arranged
Inherently social and communicative, connected to social cognition
Language can happen in a range of ways
Verbal/auditory - speaking, music
Visual - sign language, written
Tactile - braille
Human language vs animal communication
Similarities:
- dialects and syntax
- signal modalities
- complex species-specific systems (e.g. birdsong, bee dances)
- regulating social structures
- genes that are linked with communication ability
Differences:
- animals only communicate about the ‘here and now’
- humans can communicate about past, present, future, ideas and hypothetical scenarios
- animal systems are not ‘productive’ (limited signs and ways of expression, no new symbols)
- humans can create new patterns of signs, and can understand and produce an indefinitely large number of utterances
Universality of language
Language is critical for quality of human life
E.g. deaf children invent sign language themselves if not instructed by others
Drive for communication is innate in typically developing children (and also many atypically developing children)
All humans with normal capacities develop a language and learn to follow its complex rules
Language development is similar across cultures
Languages are unique but the same
- different words, sounds and rules
- but all have nouns, verbs, negatives, questions, past/present/future
- all universally used for the same functions, e.g. speech acts, communication, thinking
History of studying language - Skinner vs Chomsky
Skinner believed children learn through operant conditioning
- they imitate speech that they hear, and repeat and correct it because it is reinforced
- language is learnt through reinforcement
However this was disproven by Noam Chomsky
The ability for verbal behaviour is innate
Children say sentences that they have never heard uttered or rewarded by parents - e.g. ‘I hate you, mum’
Children go through incorrect grammar phases, despite not being reinforced
Chomsky believed in universal grammar
Human language is encoded in the genes
Underlying basis of all language is similar
Children produce speech they have never heard and that has never been reinforced (challenging the conditioning hypothesis)
Heavily focused on syntax (hierarchical structure in language)
So, who was right?
Both and neither
- as usual in science things are a little more complicated
- humans indeed have genetically/biologically encoded ‘language readiness’
- people learn to speak via different strategies, some of which involve associative learning and conditioning (e.g. context-dependent register; avoiding certain types of colloquialisms is rewarded in certain contexts)
- register is a selected subpart of the mental lexicon suitable for certain situations
Comprehension (forming a semantic representation) requires…
1. Decoding phonemes: classifying sounds that distinguish words
2. Accessing the mental lexicon: contains all words a person understands
3. Lexical semantics: the meaning of words. Each word has one or more meanings
4. Syntactic processing: understanding the relations between words
5. Semantics: understanding the meaning relations between words
6. Discourse integration: relating and embedding meaning in context, understanding relations of sentences to each other
Sound -> phonemes -> words -> sentences
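The sound → phonemes → words → sentences hierarchy can be sketched as a toy pipeline. This is a minimal illustration with made-up phoneme labels and a three-word lexicon, not a real speech recogniser:

```python
# Illustrative sketch of the comprehension hierarchy (hypothetical data):
# each stage builds larger units from smaller ones.

# Stage 1: phonemes decoded from the acoustic signal (assumed already done)
phonemes = ["dh", "ah", "k", "ae", "t", "s", "ae", "t"]

# Stage 2: the mental lexicon, a toy mapping of phoneme strings to words
mental_lexicon = {"dh-ah": "the", "k-ae-t": "cat", "s-ae-t": "sat"}

def segment(phonemes):
    """Greedy segmentation: match the longest known phoneme sequence."""
    words, i = [], 0
    while i < len(phonemes):
        for length in range(len(phonemes) - i, 0, -1):
            key = "-".join(phonemes[i:i + length])
            if key in mental_lexicon:
                words.append(mental_lexicon[key])
                i += length
                break
        else:
            i += 1  # skip an unrecognised sound
    return words

# Stage 3+: words combine into a sentence (syntax/semantics not modelled here)
sentence = " ".join(segment(phonemes))
print(sentence)  # -> the cat sat
```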
Hierarchical processing in the brain
Auditory cortex: sound processing (phonetic)
Speech sound recognition: classifying relevant language sounds (phonological)
Recognising words and combining their meaning: retrieving word meaning from mental lexicon (semantic)
Processing word order and syntax: combinatorial & hierarchical processing (syntactic)
Putting the meaning of words and syntax into context: contextual meaning integration
Independence of representations
Triangle model of the lexicon
Each word has orthographic, phonological and semantic representations
You can read and understand a word without (silently) voicing it
Tip-of-the-tongue phenomenon - when you can remember what you want to say but can’t retrieve its phonological form
You can say or hear a word without processing its meaning (but able to recognise it)
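The triangle model's independent representations can be sketched as a data structure: each entry carries separate orthographic, phonological and semantic codes, and one can fail while the others succeed (as in tip-of-the-tongue). The entry below is invented for illustration:

```python
# Minimal sketch of the triangle model of the lexicon (hypothetical entry):
# each word has independent orthographic, phonological and semantic codes.
lexicon = {
    "dog": {
        "orthography": "d-o-g",
        "phonology": "/dɒg/",
        "semantics": "domesticated canine",
    },
}

def tip_of_the_tongue(word):
    """Semantic access succeeds while phonological retrieval fails."""
    entry = dict(lexicon[word])   # copy, so the lexicon itself is untouched
    entry["phonology"] = None     # the sound form we cannot retrieve
    return entry

state = tip_of_the_tongue("dog")
print(state["semantics"])  # meaning is available...
print(state["phonology"])  # ...but the phonological form is not (None)
```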
Recognising boundaries in speech
Speech segmentation - phonemes, words, transcript
Challenging if you don’t know the rules of a spoken language
Slang can be hard
You use context to understand words with unfamiliar pronunciation (top-down)
Interpolating incomplete signal
Phonemic restoration effect
- phonemes are perceived in speech even when the sound of the phoneme is covered up by an extraneous noise (e.g. a cough)
- affected by contextual processing: top down completion of missing sounds
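Top-down completion in phonemic restoration can be sketched as follows: a masked sound is filled in by finding the lexicon word consistent with the intact phonemes and the context. The two-word lexicon and the context preferences are invented for illustration:

```python
# Toy sketch of phonemic restoration: "*" marks a phoneme masked by a cough;
# top-down context selects among lexicon candidates (all data hypothetical).
lexicon = ["legislature", "legislation"]

def restore(masked_word, context_word):
    # Hypothetical contextual expectations, standing in for discourse context
    preference = {"state": "legislature", "new": "legislation"}
    candidates = [w for w in lexicon
                  if len(w) == len(masked_word)
                  and all(a == b or a == "*" for a, b in zip(masked_word, w))]
    preferred = preference.get(context_word)
    return preferred if preferred in candidates else candidates[0]

# One phoneme is masked, yet the listener reports hearing the full word:
print(restore("legi*lature", "state"))  # -> legislature
```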
Learning from patients
Broca’s aphasia aka production/expressive aphasia
- result of suffering stroke to the left inferior frontal cortex
- speech is slow and laboured
- jumbled sentence structure
- difficulty understanding syntactic variations (e.g. passive sentences)
Wernicke’s aphasia aka receptive aphasia
- result from suffering a stroke to the posterior left superior temporal cortex
- Speech is random and meaningless
- Inability to comprehend speech and writing
- General impairment in understanding meaning
Prediction plays a role in language processing
Word probability is based on lexical frequency and contextual expectations
This helps with:
- ambiguity
- words can have multiple meanings (some can be more dominant than others)
- interpolation in difficult conditions (distraction, noise)
- deciding on best candidate meaning
- frequent words are processed faster and more efficiently
Frequent words have faster reaction times
Words predicted by context are recognised faster (e.g. several fruit words in a row make the following fruit word easier to recognise)
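The frequency and context effects above can be mimicked with a toy model: recognition time shrinks with (log) lexical frequency and with a supportive context. All numbers and words are invented; the point is only the ordering of the predicted reaction times:

```python
# Toy model of lexical frequency and contextual prediction effects
# (all numbers invented): frequent and context-predicted words are
# recognised faster.
import math

frequency = {"apple": 500, "banana": 300, "quince": 5}  # per million, hypothetical

def recognition_time(word, context=()):
    base = 600                                  # hypothetical baseline in ms
    freq_benefit = 15 * math.log(frequency.get(word, 1))  # frequency speeds recognition
    context_benefit = 50 if any(c in frequency for c in context) else 0
    return round(base - freq_benefit - context_benefit)

print(recognition_time("apple"))    # frequent word: faster
print(recognition_time("quince"))   # rare word: slower
print(recognition_time("banana", context=("apple", "pear")))  # primed by fruit context
```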
Eye tracking
Less predictable words lead to longer fixation times and more regressions (looking back at previous words)