Cognitive Deck 2 Flashcards
Define phonological, semantic, and visual LTM.
Phonological - Supports our ability to identify spoken words. No meaning associated with phonological LTMs.
Semantic - Our knowledge of the meaning and function of words and objects. Semantic memory supports inferences, e.g., knowing that an ostrich breathes.
Visual - Supports our ability to identify visual information, including written words, objects, faces, etc.
Briefly describe episodic memory.
Links memories from various LTM systems (visual, semantic, phonological) to store a record of a personal event.
Procedural memory (knowledge of how to perform skills) is a separate LTM system.
Example of a LTM single dissociation (with semantic, visual or phonological).
Dyslexic patients have difficulty identifying words (Task 1) but are fine at recognizing faces (Task 2).
But this could just mean that one task is harder than the other.
Example of a LTM double dissociation (with semantic, visual or phonological).
Prosopagnosic patients show the reverse pattern: more difficulty identifying faces (Task 2) than words (Task 1).
This rules out the argument that one task is simply harder.
What do double dissociations of LTM stores show?
That the stores (semantic, visual, phonological) can be split further.
Such as: visual into words and faces, and phonological into language sounds and non-language sounds (e.g., "woof").
Give a double dissociation between episodic LTM and procedural LTM.
Amnesic patients = bad episodic, good procedural.
Parkinson's/Huntington's = good episodic, bad procedural.
What is the first thing you must do to encode information?
Pay attention.
Maintenance vs Elaborative rehearsal.
Maintenance rehearsal: Keeping information active in STM by relying on the phonological loop.
– That is, just repeating information without considering its meaning. Not enough to encode to LTM.
Elaborative rehearsal: Encoding the meaning of to-be-remembered information, which generally leads to better episodic memory.
Levels-of-processing: Memory is a by-product of perceptual and conceptual analyses.
* Perceptual → phonological → conceptual.
– Memory tends to be best for "deep" levels of encoding.
* Best when organizing new memories to fit with old memories.
* Intention to remember is irrelevant.
What did the museum example for encoding to LTM find?
People often remember quite different things depending on their background knowledge, their interests, and how they organise new information with old.
"painting with a smooth surface, an easy one to spot check. It is approximately five feet high and seven feet long." - security guard.
"film noir sort of feel, a mystery novel to it. The puzzle is there…" - art curator.
What is the picture superiority effect, with evidence?
We encode pictorial information better than verbal information.
– Participants studied lists of pictures and words.
– They were asked to attend either to the names at encoding (verbal instructions) or to the images (imagery instructions).
– They were then tested on pictures and words in a Yes/No recognition task; memory was better for pictures than for words.
What is the concreteness effect?
Concrete words like "car" and "house" are better remembered than abstract words like "truth" and "betrayal".
What do the concreteness effect and the picture superiority effect support?
Dual-code theory: information is stored in at least two forms, a verbal/linguistic code and a mental-image code.
What are mnemonic devices and methods of loci?
Mnemonic devices improve memory by improving the encoding of information.
– Deep levels of encoding; organizing and linking new information to old; visual imagery.
Method of Loci: Imagine a journey through a familiar route:
* e.g., bed, closet, bathroom, bedroom 2, stairs, lounge, kitchen, front door, etc.
* Then take the list of items you want to memorize and link them to the route through imagery.
* e.g., corn, potatoes, bread, flour, OJ, milk, coffee, etc.
What is consolidation?
The process of converting memories into a format resistant to forgetting. Consolidation occurs within the hippocampus, which binds information across the different systems located in different parts of the cortex.
– Two types: short-term and long-term consolidation.
* Short-term consolidation: converting short-term memories into a more enduring format. Occurs over seconds or minutes.
– It involves the hippocampus linking information from all the various LTM systems to form an episodic memory; over seconds/minutes, links develop from the hippocampus to the other memory systems.
What is long-term consolidation?
Long-term consolidation occurs over months or years. Its existence is inferred from extended, temporally graded retrograde amnesia (older memories are spared more than recent ones).
– Initially (due to short-term consolidation), memory in the hippocampus links all the various types of LTM in order to store a record of the episode.
– Over time, memories in the various LTM systems become linked directly (without requiring the hippocampus) to form an episodic memory.
– On this view, damage to the hippocampus does not erase old episodic memories because they have moved to the cortex.
What is the multiple-memory trace theory?
Older memories are better coded within the hippocampus because they have been rehearsed more often.
– Episodic memories always rely on the hippocampus and do not move to the cortex.
– This view denies that episodic memory undergoes long-term consolidation.
– The hippocampus is always involved in episodic memories, and the severity and extent of RA depends on the extent of the hippocampal lesion.
* Prediction: the hippocampus should be involved in the retrieval of both old and recent episodic memories.
Give some features of retrieval.
- Retrieval is less affected by divided attention than encoding, suggesting an automatic component to memory retrieval.
- Recall is more affected than recognition in Korsakoff amnesia (Korsakoff amnesia is associated with frontal lesions).
What are the two types of retrieval for episodic memory?
1) Automatic retrieval: the hippocampus can retrieve information relatively automatically given strong retrieval cues:
– In a cued recall task, part of each study item is repeated at test, allowing retrieval under divided attention.
– In a recognition task, the study word itself is presented at test, allowing Korsakoff patients to recognize some items.
* In automatic retrieval, memories often "pop out". These memories are sometimes correct, sometimes not, and the hippocampus cannot correct itself. Another system is needed to correct for false memories.
– These false memories are confabulations.
2) Effortful retrieval:
* If you are not given a strong retrieval cue (as in free recall), then the hippocampus cannot retrieve memories very well.
– Divided attention impairs free recall.
– Free recall is poor in Korsakoff patients.
* The frontal system can generate better retrieval cues that the hippocampus can use to generate a memory.
* Frontal systems can also monitor and eliminate errors in memory retrieval.
How does the frontal system work with the hippocampal memory system?
- The frontal system is the "boss" of the hippocampal memory system:
– It controls the information that is presented to the hippocampus at encoding (by directing attention).
– It initiates and guides retrieval.
– It monitors the information that is retrieved from the hippocampus.
What is retrieval and encoding specificity?
The effectiveness of a retrieval cue depends on how well it relates to the initial encoding.
– That is, the way we perceive and think about events at encoding determines what cues will later elicit episodic memories.
* Explains state-dependent and mood-dependent episodic memory.
What is the land versus underwater study and what can it explain?
Participants learned words either on land or under water, and were tested on land or under water. Better results when tested where they studied.
May help explain exceptional visual long-term memory when the same images are repeated at study and test.
Why do episodic memories fail?
– Poor encoding
– Poor retrieval cues
– Loss of storage (the acquisition of new memories can interfere with previously stored memories).
Describe a study showing that encoding is better for familiar faces.
An experiment carried out with Asian and Caucasian participants making perceptual decisions about Asian and Caucasian faces:
– Each trial consisted of a target face at the center of the screen for 250 ms; after a 1-second delay, two faces were presented side by side.
– Participants pressed one of two response buttons to indicate which picture matched the target.
– Matching performance was better for own-race faces.
Arguments that verbal language is innate.
– Language is universal across cultures.
* Brain damage can specifically impair language, e.g., Broca's aphasia.
– There is rarely a selective disorder of a general skill, e.g., chess.
* There is a critical period for language learning.
– Genie (isolated from language until age 13.5).
– Sign language.
– Phonology (the sounds of language).
* Language is unique to humans.
Arguments against language being innate.
- The fact that something is universal doesn't make it an instinct.
- If language evolved, should you not expect something related to human language in monkeys or other animals? BUT humans are not descendants of chimpanzees/monkeys; evolution is more like a bush.
- Language may be a by-product of increased intelligence.
– Evolution has played a general role in supporting language by selecting for greater intelligence.
Define the following effects of language:
* Frequency effects
* Regularity effects
* Frequency regularity interaction
Frequency effects:
– High frequency words are read more quickly than low frequency words.
* Regularity effects:
– Regular words are read more quickly than irregular words.
* Frequency*Regularity interaction:
– The regularity effect is only found for low-frequency words.
Define the following acquired neuropsychological disorders in reading:
Surface dyslexia, phonological dyslexia, deep dyslexia.
Surface Dyslexia: Difficulty in reading irregular words, but fine with nonwords and regular words.
* Phonological Dyslexia: Difficulty in reading nonwords, but fine at regular and irregular words.
– Double dissociation between irregular words and nonwords!
* Deep Dyslexia: Difficulty with nonwords, irregular words, and regular words, but better with highly imageable than low-imageable words. Patients often make semantic errors, e.g., reading the word PIG as ELEPHANT.
Define: orthographic, phonological and semantic knowledge.
- Orthographic knowledge – Visual knowledge of letters and words.
- Phonological knowledge – Knowledge of how letters and words sound.
- Semantic knowledge – Meaning of words.
Define lexical and sub-lexical.
- Lexical – Word level-knowledge – could be lexical-orthographic, lexical-phonological, or lexical-semantic
- Sub-lexical – sub-word information – e.g., individual letters or phonemes, or groups of letters (graphemes) or groups of phonemes (e.g., syllables).
Describe the dual route model.
When reading print, processing can go down three routes. A = the print-to-sound (letter-to-sound) route, which goes straight to pronunciation. B = the orthographic lexicon, which goes to the phonological lexicon and then pronunciation. C = the semantic system, which goes to the phonological lexicon and then pronunciation.
A is the sub-lexical route, B is the lexical-phonological route, and C is the lexical-semantic route.
- Regular words can be read by all three routes.
- Irregular words can only be read by the lexical routes (both of them).
- Nonwords can only be read by the sub-lexical (grapheme-phoneme) route.
- Naming speed and pronunciation are based on the route that finishes first.
How can the dual route model account for different types of dyslexia?
Surface Dyslexia: damage to both lexical routes, with the sub-lexical route spared.
– Poor at reading irregular words.
Phonological Dyslexia: selective difficulty in using the sub-lexical route.
– Poor at reading nonwords (e.g., blap).
Deep Dyslexia: patients can only read by the lexical-semantic route (and the semantic route is partly damaged).
– They make semantic errors, e.g., saying "table" in response to CHAIR.
Horse Race account of frequency & regularity effects.
- High frequency words are processed more quickly than low frequency words (within the lexical routes).
- Regularity effects are due to the conflicting pronunciations of irregular words derived from the lexical and sub-lexical routes.
- Conflict is avoided for high-frequency words, as the lexical route finishes before the sub-lexical route, producing the frequency by regularity interaction. (A toy simulation of this race is sketched below.)
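To make the race logic concrete, here is a minimal, hypothetical sketch (Python; not from the lecture notes). The millisecond values and the conflict cost are invented purely for illustration; only the qualitative pattern matters, namely a regularity effect for low-frequency words but not for high-frequency words.

def naming_time(frequency: str, regular: bool) -> int:
    """Toy naming latency (ms) when the lexical and sub-lexical routes race."""
    lexical = 450 if frequency == "high" else 600  # lexical route is faster for high-frequency words (assumed values)
    sublexical = 520                               # sub-lexical route ignores word frequency (assumed value)
    conflict_cost = 80                             # assumed time to resolve conflicting pronunciations

    if regular:
        # Both routes produce the same pronunciation, so naming starts
        # as soon as the first route finishes.
        return min(lexical, sublexical)

    # Irregular words: only the lexical route gives the correct pronunciation.
    latency = lexical
    if sublexical <= lexical:
        # The sub-lexical route finished first with a regularised (wrong)
        # pronunciation, so the conflict must be resolved before speaking.
        latency += conflict_cost
    return latency

for freq in ("high", "low"):
    for reg in (True, False):
        print(freq, "regular" if reg else "irregular", naming_time(freq, reg))

# Output: high-frequency regular 450 vs irregular 450 (no regularity effect);
# low-frequency regular 520 vs irregular 680 (regularity effect), reproducing the interaction.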
What is the impact of word length on reading?
Short words appear to be identified more quickly than long words, all else being equal.
What is the impact of visual similarity on identifying words?
Visual similarity to other words hurts identification: competition between similar-looking words takes time and slows things down.
What is the impact of age-of-acquisition (AoA) on reading times?
Early acquired words read more quickly than late.
What is linguistic determinism?
The claim that speakers of different languages are constrained to think and perceive in certain ways because of their specific language (e.g., Orwell).
* e.g., if your language has no term for blue, you can't see blue; if it has no term for "justice", there is no corresponding concept. This is the idea behind "Newspeak".
– Largely rejected: it seems doubtful that Newspeak would work.
What is linguistic relativity?
The claim that different languages shape or bias (rather than determine) the thoughts of its speakers.
* e.g., easier to perceive the difference between two colours if your language distinguishes between them.
* e.g., if your language marks the word bridge as grammatically feminine, then bridge has a feminine connotation.
– However, on this view, language does not fundamentally restrict our perceptual abilities, or prevent us from entertaining any thoughts. Rather, it biases thoughts (and in some cases improves them, to a limited degree).
What is thinking-for-speaking?
- The claim here is that different languages shape thoughts (perceptions) of speakers while speaking.
- This contrasts with linguistic relativity research which generally focuses on the impact of language on non-linguistic thinking.
– E.g., how language might impact on reasoning about space or time in a non-linguistic task.
Linguistic relativity with time.
Time:
* In English, we generally use front/back terms to talk about time.
– Good times ahead, hardships behind.
* In Mandarin, vertical metaphors are common.
– Earlier events are "shang" (up), later events are "xia" (down).
* Question: Do the different ways of talking about time lead to differences in how people think and reason about time?
Linguistic relativity with space.
Different languages use different spatial references:
* Relative terms specify directions and locations relative to the viewer (English, Dutch, Japanese)
– E.g., Left/right, front/back
* Intrinsic terms specify locations in terms of object-centred coordinates (Arrente, Australia).
– E.g., "the ball is at the foot of the hill"
* Absolute terms specify locations using a global reference frame, independent of the viewer and the object (Totonac, Mexico).
– E.g., "the ball is North of the hill"
Linguistic relativity for objects.
- Many languages include grammatical gender.
– Spanish, French, Italian mark objects as being masculine or feminine.
– E.g., toasters are masculine in some languages, feminine in others.
- There is no grammatical gender in English.
- Question: Does talking about inanimate objects as if they were masculine or feminine actually lead people to think of the objects differently?
Why do we show categorical perception in the domain of vision and speech?
One explanation: The language(s) we speak impact on our perception of colour and speech sounds.
– E.g., since English distinguishes between blue and green at a given wavelength, we reorganize colour space to make this contrast salient.
– E.g., if p/b or l/r contrasts are critical for our language, we reorganize our perception to improve perception of these sounds.
Another explanation: The physiology of the visual and auditory systems is such that we would show categorical perception independent of language.
– The colour terms we use are a by-product of what colours appear the most salient (independent of language).
– The phonemes selected for language are a by-product of which acoustic contrasts are most salient, given auditory physiology.
Evidence that categorical perception for speech is partly physiology and partly language experience.
Animals show similar categorical perception effects.
- Chinchillas were first trained to identify a /d/ (0 VOT) and a /t/ (+80 VOT) stimulus spoken by humans.
- They were then presented with ambiguous (synthetic) sounds that varied in VOT from 0 to 80; the task was to identify (label) the stimuli.
- Humans were given a similar task.
At about 6-8 months, babies are better at discriminating phonemes in all languages, but perception changes with more exposure to their native language, so that it becomes more difficult to distinguish phonemes from other languages.
Is the perception of colour biased by language?
A long history of addressing this question, with mixed conclusions.
– An early study compared memory for focal (high codability) and non-focal (low codability) colours in English speakers. Participants were shown four target chips that were either high or low codability. After a short delay, they were asked to select matches from an array of 240 colour chips. Participants were better able to remember colours that had high-codability colour names. This was taken to support the claim that language impacts colour perception and memory.
BUT the logic is flawed: codability effects might be due to the physiology of the colour system. The REASON why we have colour terms like RED and BLUE may be that these colours are more salient (e.g., due to the physiology of the cones in the fovea), and this in turn makes our memory better for these colours. It is difficult to determine whether language or visual physiology is responsible for these effects when studying a single language.
– A cross-linguistic follow-up: 12 monolingual Berinmo speakers took part.
– 8 focal and 8 non-focal ENGLISH colours were paired with pictures of familiar nuts.
* Berinmo participants did not remember English focal colours better.
* Rather, they remembered Berinmo focal colours better.
A better study supporting the claim that colour perception is biased by language.
- One colour is different from the rest; participants press a left or right button to indicate where it is.
- Colours are 1 step different from one another, either within or across a colour boundary.
- The target colour is presented to either the left or right visual field.
- The category advantage (faster detection of across-boundary targets) appears mainly when the target is in the right visual field (left, language-dominant hemisphere), suggesting language biases colour perception.