Process Models of Memory Flashcards
What was the Eriksen and Collins task?
- fuse two random-dot displays to see a nonsense syllable
What is Visible persistence?
the continued apparent visibility of a stimulus beyond its physical duration.
What did the inverse duration effect cause problems for?
The notion that we have an iconic memory store
What was Eriksen and Collins’ (1967) experiment to support the visual ‘icon’ memory store theory (iconic store)?
- Stimuli: two patterns of dots that form a nonsense syllable when superimposed
- S1 is presented first; S2 is presented after a delay
- If participants can still superimpose the two patterns after a delay, this shows some visual representation of S1 persists
- Increasing the delay decreases performance, e.g. performance at ISI = 50 msec is better than at ISI = 100 msec
- ISI = inter-stimulus interval (in msec)
- Interpreted as evidence for icon decaying over around 300 msec
What did Di Lollo and Hogben say about the persistence of visual information?
- that it is a phenomenon of visual processing, not of the sensory memory store
- they argued that the data from Sperling and from Eriksen & Collins were ambiguous
- they suggested the results could instead be due to the inverse duration effect
How does the dot fusion experiment support the decay from memory theory?
- Decay starts from offset of S1
- Decline in performance as ISI (inter-stimulus interval) increases, due to less effective overlap between the memory trace of S1 and the percept of S2
How does the dot fusion experiment support the visual processing theory?
Processing starts from the onset of S1 and runs its course. Performance declines as ISI increases due to less effective overlap between the processing of S1 and the processing of S2
Why can’t the classic dot fusion experiment tell you whether the persistence of visual information is due to decay from memory or to visual processing?
Both theories predict the same pattern: declining performance as ISI increases (see the sketch below)
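To make the ambiguity concrete, here is a minimal toy sketch in Python (the overlap rule and all durations are illustrative assumptions, not the authors’ actual model). It treats performance as proportional to the temporal overlap between the representation of S1 and the percept of S2:

```python
# Toy model: performance ~ temporal overlap between S1's representation
# and the percept of S2. All durations (msec) are illustrative guesses.

def decay_overlap(isi, trace=300):
    # Decay-from-memory: the trace starts at S1 OFFSET and lasts ~trace msec,
    # so only the ISI eats into it before S2 arrives.
    return max(0, trace - isi)

def processing_overlap(s1_dur, isi, window=150):
    # Visual-processing: processing starts at S1 ONSET and runs for a fixed
    # window; S2 arrives s1_dur + isi msec after onset.
    return max(0, window - (s1_dur + isi))

# Classic design: brief, fixed-duration S1 (10 msec); vary the ISI.
for isi in (0, 25, 50, 100):
    print(isi, decay_overlap(isi), processing_overlap(10, isi))
# Both overlaps shrink as ISI grows, so both theories predict the same
# declining performance curve - the classic design cannot separate them.
```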
What is the key difference between the decay from memory and visual processing theories, and how can we test it? (Hogben and Di Lollo, 1974)
- Decay from memory theory: a visual stimulus is shown and, after its offset, the information decays over about 300 msec
- Visual processing theory: an image starts being processed from its onset; processing runs for a set amount of time, after which the representation is gone
Memory decay starts from offset of S1
Visual processing starts from onset of S1 and runs its course
So if we manipulate the duration of S1 (SOA) and keep the gap (ISI) at 0, the theories make different predictions
What is the SOA?
stimulus onset asynchrony – the time from the onset of S1 to the onset of S2 (SOA = S1 duration + ISI; with ISI = 0, the SOA equals the duration of S1)
What happened in the dot-fusion experiment when the duration of S1 was manipulated?
- If decay from memory:
Decay starts from offset of S1
Predicts NO decline in performance as S1 duration increases, because effective overlap doesn’t change (ISI = 0 msec)
- If overlap of visual processing:
Processing starts from onset of S1
Predicts a decline in performance as S1 duration increases, because effective overlap decreases (just as with increasing ISI)
- Result:
Inverse duration effect (performance decreases as S1 duration increases), supporting the visual processing account (see the sketch below)
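The same toy model (restated here so it runs on its own, with the same illustrative assumptions as the sketch above) shows why Hogben and Di Lollo’s manipulation separates the two theories:

```python
# Toy model again, but now vary S1 duration while holding ISI at 0 msec.

def decay_overlap(isi, trace=300):
    # Decay begins at S1 OFFSET; with ISI = 0, S2 arrives right at offset,
    # so the overlap never changes, whatever S1's duration.
    return max(0, trace - isi)

def processing_overlap(s1_dur, isi, window=150):
    # Processing begins at S1 ONSET; a longer S1 eats into the fixed
    # window before S2 ever appears.
    return max(0, window - (s1_dur + isi))

for s1_dur in (10, 40, 80, 120):
    print(s1_dur, decay_overlap(0), processing_overlap(s1_dur, 0))
# Decay theory: constant overlap -> predicts flat performance.
# Processing theory: overlap shrinks -> predicts the inverse duration
# effect, which is what was actually observed.
```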
How did Coltheart (1980) explain the decay from memory vs visual processing argument?
- he said there were three separate phenomena:
- ‘Neural persistence’ (overlap in neural processing; very brief)
- ‘Visible persistence’ (overlap in visual processing; Di Lollo, <200 msec)
- ‘Informational persistence’ (an icon that decays; Sperling, ~150-300 msec)
Who came up with the working memory (WM) model in 1974?
Baddeley and Hitch
How did ‘dual task paradigms’ provide evidence for the working memory model?
- Participants would be given both a primary task (e.g. hold a few words in mind for some period of time) and a simultaneous secondary task (e.g. rehearse a sequence of digits)
- If both could be performed at the same time, it was taken as evidence that the WM system is more complex than a simple short-term store
- And if the secondary task interfered with the primary task, this was taken as evidence that both relied on the same processing mechanism
Why did Baddeley come up with the WM theory?
He thought there was so much going on in the short-term store (STS) that it couldn’t just be a single box
Give an example of a dual task paradigm that provided evidence for the working memory model
- Remember and overtly (out-loud) rehearse sequences of 0-8 digits – secondary task
- At the same time, perform a simple true/false reasoning task, e.g. ‘A precedes B’: AB (true) – primary task
- Results: it is possible to carry out both tasks, despite both requiring the STS
Error rate held constant (fairly low) as the concurrent digit load changed
Reasoning time went up as the concurrent digit load increased
The increase in reasoning time was significant but not large (~35%), suggesting a speed-accuracy trade-off
- Shows there must be more than a single component to the short-term store