Process Models of Memory Flashcards
What is the task in the Eriksen dot-fusion paradigm?
- Fuse the two random-dot displays to see a nonsense syllable (the displays are separated by an ISI)
What is Visible persistence?
the continued apparent visibility of a stimulus beyond its physical duration.
What did the inverse duration effect cause problems for?
The notion that we have an iconic memory store
What was Eriksen and Collins' (1967) experiment to prove the visual 'icon' memory store theory (iconic store)?
- Stimuli: two patterns of dots that form a nonsense syllable when superimposed
- S1 is presented first, then S2 after a delay
- If you can still superimpose them after a delay, this shows you still have some visual percept of S1
- If you increase the delay, performance decreases, e.g. ISI = 50 gives better performance than ISI = 100
- ISI = inter-stimulus interval (in msec)
- Interpreted as evidence for icon decaying over around 300 msec
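A minimal illustrative sketch in Python (made-up arrays, not the actual Eriksen and Collins stimuli) of the dot-fusion logic above: a target dot pattern is split across two frames, and only their superimposition recovers the full pattern.

```python
# Illustrative sketch only: split a target dot pattern across two frames so
# that neither frame alone is readable, but their superimposition (logical OR)
# recovers the full pattern - the situation the observer is in if S1 persists
# long enough to be fused with S2.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 "letter" pattern standing in for part of a nonsense syllable.
target = rng.random((8, 8)) < 0.4           # True where a dot should appear

# Randomly assign each target dot to frame 1 or frame 2.
assignment = rng.random(target.shape) < 0.5
frame1 = target & assignment                # roughly half the dots
frame2 = target & ~assignment               # the other half

fused = frame1 | frame2                     # what is seen if S1 persists into S2
print("Fusion recovers the full pattern:", np.array_equal(fused, target))
```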
What did Di Lollo and Hogben say about the persistence of visual information?
- That it is a phenomenon of visual processing, not of a sensory memory store
- They said that the data from Sperling and Eriksen & Collins were ambiguous
- They suggested the results could be due to the inverse duration effect
How does the dot fusion experiment support the decay from memory theory?
- Decay starts from offset of S1
- Decline in performance as ISI (inter-stimulus interval) increases due to less effective overlap between memory traces of S1 and percept of S2
How does the dot fusion experiment support the visual processing theory?
Processing starts from onset of S1 and runs its course. Decline in performance as ISI increases due to less effective overlap between processing of S1 and processing of S2
Why, using the classic dot-fusion experiment, can't you tell whether the persistence of visual information is due to decay from memory or to visual processing?
Both theories predict the same pattern: declining performance as ISI increases
What is the key difference between the decay from memory and visual processing theory and so how can we test the difference? (Hogben and Di Lollo 1974)
- Decay from memory theory: a stimulus is shown, and once it disappears the information decays over a few hundred milliseconds
- Visual processing theory: processing starts at stimulus onset and runs for a set amount of time, after which the information is gone
Memory Decay starts from offset of S1
Visual processing starts from onset of S1 and runs its course
So if we manipulate the duration of S1 (SOA) and keep the gap (ISI) at 0, the theories make different predictions
What is the SOA?
Stimulus onset asynchrony – the time from the onset of S1 to the onset of S2; with an ISI of 0 it equals the duration of S1
What happened in dot-fusion experiment where the duration of S1 was manipulated?
- If decay from memory:
Decay starts from offset of S1
Predicts NO decline in performance as S1 duration increases, because the effective overlap doesn't change (ISI stays at 0 msec)
- If overlap of visual processing:
Processing starts from onset of S1
Predicts a decline in performance as S1 duration increases, because the effective overlap decreases (just as with increasing ISI)
- Result:
Inverse duration effect (performance decreases as S1 duration increases), supporting the visual-processing account – a toy model of the two predictions is sketched below
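The contrasting predictions can be made concrete with a toy model. This is only an illustrative sketch: the persistence duration and the idea that "overlap" maps directly onto performance are assumptions, not values taken from the experiments.

```python
# Toy model of the two accounts (illustrative assumptions only): persistence
# lasts PERSIST ms, timed either from S1 offset (decay-from-memory) or from
# S1 onset (visual processing). The "overlap" with S2, presented at ISI = 0
# (i.e. right at S1 offset), stands in for how well the two frames can be fused.
PERSIST = 120  # hypothetical persistence duration in ms

def overlap_decay(s1_duration_ms: int) -> int:
    # Decay theory: the trace lasts PERSIST ms after S1 offset, so the
    # overlap with S2 is the same whatever the duration of S1.
    return PERSIST

def overlap_processing(s1_duration_ms: int) -> int:
    # Processing theory: persistence runs from S1 onset, so the longer S1
    # lasts, the less persistence remains once S2 appears at S1 offset.
    return max(0, PERSIST - s1_duration_ms)

for dur in (20, 60, 100, 140):
    print(f"S1 = {dur:3d} ms | decay predicts {overlap_decay(dur):3d} ms overlap | "
          f"processing predicts {overlap_processing(dur):3d} ms overlap")
```

Only the processing account predicts the observed inverse duration effect: its predicted overlap shrinks as S1 duration grows, while the decay account's stays flat.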
How did Coltheart (1980) explain the decay from memory vs visual processing argument?
- he said there were three separate phenomena:
- neural persistence’ (overlap in neural processing; very brief)
- ‘Visible persistence’ (overlap in visual processing; Di Lollo <200 msec)
- ‘Informational Persistence’ (icon that decays’ Sperling -150-300 msec)
Who came up with the working memory (WM) model in 1974?
Baddeley and Hitch
How did ‘dual task paradigms’ give evidence to the working memory model?
- Participants would be given both a primary task (e.g. hold a few words in mind for some period of time) and a simultaneous secondary task (e.g. rehearse a sequence of digits)
- If both could be performed at the same time it was taken as evidence that the WM system was more complex than a simple short-term store
- And if the secondary task interfered with the primary task, that was taken as evidence that both relied on the same processing mechanism
Why did Baddeley come up with the WM theory?
He thought there was so much going on in the STS that it couldn’t just be a single box
Give an example of a dual task paradigm that gave evidence to the working memory model
- Remember and overtly (out-loud) rehearse sequences of 0-8 digits – secondary task
- At the same time perform a simple True/False reasoning task, e.g. 'A precedes B: AB' (true) – primary task
- Results: It is possible to carry out both tasks, despite both requiring STS
Error rate held constant (fairly low) when concurrent digit load changes
Reasoning time goes up as the concurrent digit load increases
Increase in reasoning time is significant but not large (35%) = speed-accuracy trade-off
- Shows there must be more than a single component to your short-term store
What are the three components of the working memory model?
- A phonological loop (inner voice)
- A central executive
- A visuo-spatial sketchpad (inner eye)
What’s the phonological loop?
- Originally called articulatory loop
- It’s your inner voice
- Made up of the phonological store and the articulatory control (rehearsal) process
How does the phonological store work?
- Auditory word presentation -> Phonological store -> articulatory control process
- Visual word presentation -> Articulatory control process -> phonological store
- If articulatory control process blocked by another task then visual words won’t be able to get to the phonological store whereas auditory words still will
- rehearsal process analogous to subvocal speech (inner voice)
How do dual task results provide evidence for the phonological loop?
Articulatory suppression: what happens if you prevent material from being articulated?
Articulate irrelevant items (overtly or covertly) while performing a verbal span task
Result: word length effect disappears (for written words only)
Explanation:
Articulation of irrelevant items dominates articulatory control process, so words cannot be rehearsed – word length has no influence
But spoken words go straight to the phonological store
Shows an interaction between control conditions and articulation:
Word length effect dependent on articulation
- The phonological confusability (similarity) effect provides further evidence: similar-sounding items are recalled less well, showing that the store uses a phonological code
How does the word span task provide evidence for the phonological loop and the word-length effect?
Greater span for short words than for long words, whether written or heard
And this seems to correspond well with the speed at which the words can be read aloud
Spoken Duration appears to be crucial:
Memory spans are greater for short-duration words (‘bishop’) than for long-duration words (‘harpoon’) even though they have the same number of syllables (Baddeley et al., 1975)
Shows that you articulate the words in the phonological loop
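One way to see the word-length logic is as a back-of-the-envelope calculation: if rehearsal can refresh roughly as much material as can be spoken in about two seconds (the estimate usually linked to Baddeley et al., 1975), span should fall as the spoken duration of each word rises. The durations below are made-up illustrative values, not measured ones.

```python
# Back-of-the-envelope sketch of the word-length logic. Assumption: rehearsal
# can refresh roughly what can be spoken in about 2 seconds. The spoken
# durations below are hypothetical illustrative values, not measurements.
REHEARSAL_WINDOW_S = 2.0

spoken_duration_s = {
    "short-duration words (e.g. 'bishop')": 0.4,  # hypothetical value
    "long-duration words (e.g. 'harpoon')": 0.7,  # hypothetical value
}

for label, duration in spoken_duration_s.items():
    predicted_span = REHEARSAL_WINDOW_S / duration
    print(f"{label}: predicted span of about {predicted_span:.1f} items")
```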
What are the two components of the phonological system?
- Phonological store:
Stores memory traces for a few seconds before they fade
- Articulatory (phonological) loop:
Rehearsal process analogous to subvocal speech (inner voice)
What is the phonological loop for?
- Learning to read:
Lower memory spans -> reduced reading ability
- Vocabulary acquisition:
Correlation between non-word repetition ability (which requires the phonological loop) and vocabulary size (Gathercole & Baddeley, 1989)
- Language comprehension:
Patients with STM impairments have difficulty comprehending complex sentences
What is the visuo-spatial sketch pad?
A workspace in which an image can be stored and manipulated to guide behaviour
What is the Brooks Matrix Task (1967) and what were the results?
Learn sequence of sentences to remember
Spatial (left/right, beneath/above):
‘In the starting square put a 1’
‘In the next square to the right put a 2’
Non-Spatial
‘In the starting square, put a 1’
‘In the next square to the quick put a 2’
Result: Recall – 8 spatial vs 6 non-spatial
Benefit of spatial imagery
Non-spatial condition: performance better with written instructions
Spatial condition: performance better with auditory instructions
Visual (written) instructions & spatial imagery interfere
What is Baddeley’s (1975) dual task to provide evidence that the sketchpad relies on spatial coding?
- 1. Brooks matrix task; 2. Pursuit rotor task (spatial distractor) – do these tasks at the same time
Spatial distractor – keep the stylus on the moving cursor; the experimenter can tell when the stylus is on it
Result: tracking disrupts spatial but not non-spatial (nonsense) condition
What is the Sketch-pad for?
- Not as well studied as the phonological loop
- Geographical location – learning our way around our environment
- Planning and performing spatial tasks
What is the central executive?
- Most complex and least understood component of WM
- A ‘workspace’ divided between storage and processing demands
- In some ways the central executive functions more like an attentional system than a memory store (Baddeley)
- Model suggests CE co-ordinates the activity of the 2 slave systems
- Other potential roles for the CE include coordinating retrieval strategies and selective attention
What are the problems with the working memory model?
- Articulatory suppression: this secondary task doesn't fully prevent registration of visually presented words (which should be recoded phonologically)
- STM-impaired patients: visual and verbal spans are usually similarly affected (e.g., digit span of 2, visual span of 4) – why would that be the case if independent sub-systems are responsible
- Rehearsal: how is non-verbal visual information rehearsed? How do pre-verbal children rehearse verbal information?
- Consciousness: how can consciousness 'bind' information from different modalities without a multimodal short-term store?
What did the levels of processing model bring into question and why?
- LoP brought into question the notion that 'rehearsal' was necessary to transfer items from STS to LTS
- Firstly it was clear from surprise memory tests that people can successfully encode items they’ve been exposed to even when they didn’t rehearse them
- The nature of the orienting task seemed to determine the likelihood of successful retrieval
Who is the levels of processing model associated with?
Fergus Craik
What is the levels of processing (LoP) framework?
- Stimulus processing (of words at least) can occur at a number of different levels:
- Types of processing of words:
Orthographic – to do with the letters that make up the words
Phonological - sounds
Semantic – meanings
- Level:
Orthographic is shallow
Phonological is a bit deeper
Semantic is deeper still (semantic information is the meaning of something; you can only understand it by finding associations with previously known knowledge)
Gradient from shallow to deep processing
- The deeper the processing, the better the retention – retention is best when material has been processed to a semantic level
How does evidence from incidental learning paradigms support the idea that encoding can occur without rehearsal?
Subjects are unaware that their memory will be tested
Therefore they weren't intentionally encoding (rehearsing)
The questions then become: what processes were active at encoding, and how does that processing affect retention?
Finding: the deeper the processing, the better the memory
What were Craik's (1977) different orienting tasks during encoding?
Orthographic: is the word upper or lower case?
Phonological: does the word rhyme with mat?
Semantic: does the word fit the end of the sentence?
Results:
The deeper the processing the better the retention
Semantic tasks have about the same retention as intentional encoding tasks
What two types of rehearsal does the LoP make a distinction between?
- Type I: Maintenance rehearsal
- Type II: Elaborative rehearsal
- Found that only type II rehearsal is associated with increased retention
What was Craik and Watkins (1973) experiment to test if maintenance rehearsal works?
- Ss listen to a list of 21 words and are told to recall the last word beginning with 'g'
Control how many g words appear
As Ss don’t know which ‘g’ word will be last, they will rehearse the first one until the next one appears
By varying the number of words between ‘g’ words, the amount of rehearsal can be systematically varied
Amount of rehearsal had no effect on subsequent recall (maintenance rehearsal)
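A small sketch of the design logic, using a made-up word list (not the actual Craik and Watkins materials): the number of non-'g' words between successive 'g'-words indexes how long each 'g'-word was held by maintenance rehearsal.

```python
# Illustrative sketch of the Craik & Watkins (1973) logic with a made-up list:
# participants rehearse the most recent 'g'-word until the next one appears,
# so the count of intervening non-'g' words indexes how long each 'g'-word
# was maintained by rote (maintenance) rehearsal.
words = ["daughter", "oil", "garden", "grain", "table", "football",
         "anchor", "giraffe", "pillow", "glove", "tree"]

rehearsal_counts = {}
current_g = None
for w in words:
    if w.startswith("g"):
        current_g = w
        rehearsal_counts[current_g] = 0
    elif current_g is not None:
        rehearsal_counts[current_g] += 1   # one more item spent rehearsing it

print(rehearsal_counts)  # {'garden': 0, 'grain': 3, 'giraffe': 1, 'glove': 1}
```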
How did Craik and Tulving use semantic orienting tasks to test for the elaboration effect (1975)?
Q: could the word 'watch' fit into the following sentence?
Low elaboration: ‘she dropped her…’
High elaboration: ‘the old man hobbled across the room and dropped his…in the jug’
Result: high elaboration led to better retention
Speculative Explanation: elaborate processing increases the number of associations between stimulus and context
What are the problems with LoP?
- Circularity of the definition of 'depth':
Better memory (semantic > orthographic)
Depth of processing (semantic = 'deep', orthographic = 'shallow')
In order to break the circularity, an independent measure of depth is required
- Concepts such as 'elaboration' are as slippery as 'depth'
- 'Shallow' orienting tasks almost certainly involve some automatic semantic processing (e.g., Stroop: when words are presented in different font colours and you only have to name the colour, the word's meaning is still processed automatically and interferes)
- It's possible that 'deep' processing only leads to better performance under certain testing conditions
How did Lockhart and Craik respond to the criticism that the theory is circular?
Qualitatively different domains of processing (e.g., semantic vs phonemic) can be defined as deep or shallow independently of their effects on memory performance
What was Morris, Bransford & Franks' (1977) orienting task experiment and what was the result?
- Orienting task:
1. Deep (semantic): 'The … had a silver engine' (train)
2. Shallow (rhyme): '… rhymes with legal' (eagle)
- Test task:
1. Standard recognition: did you see the word 'train'?
2. Rhyming recognition: did you see a word that rhymes with 'regal'?
- Result: LoP effect for the standard recognition test, but the opposite for the rhyming test
- For the standard test the proportion recognised was higher for the semantic task than the rhyme task
- For rhyming recognition the proportion recognised was higher for the rhyme task than the semantic task
- Deep processing does not always enhance memory
What was Morris, Bransford and Franks' (1977) transfer appropriate processing theory and what was Craik's response?
- Memory performance depends on the extent to which processes used at the time of learning are the same as those used when memory is tested
- ‘Deep’ orienting tasks often produce better results because the test task engages the same ‘deep’ processes
- A form of encoding which is 'shallow' for one purpose may be 'deep' (a better match) for another
- Craik responded: 'Deeper encoding processes result in traces that are potentially very memorable, provided that an appropriate retrieval cue is available at the time of retrieval.'