Working Memory Flashcards
Origins of Working Memory
Atkinson and Shiffrin, 1968 - proposed the modal model: sensory store –(attention)→ STM –(rehearsal)→ LTM
Lost from STM by decay or displacement. Lost from LTM by decay, interference, failed retrieval.
Baddeley and Hitch, 1974 - based on dual-task experiments, replaced the unitary STM with WM, comprising the phonological loop (PL), visuo-spatial sketchpad (VSS) and central executive (CE).
STM is just a store, WM involves manipulation
Sensory store
Unlimited capacity, but very rapidly decaying
Stored in the primary sensory areas
Phonological loop, supporting observations
Baddeley 1966 - Phonological similarity effect - phonologically similar words reduce serial recall performance; semantically or visually similar words do not
Word length effect - longer words are remembered less well, because fewer can be rehearsed before the trace decays. Number of syllables matters, not number of letters. Digit span therefore varies with language (in Arabic, digit words are longer on average).
Articulatory suppression - repeating an irrelevant word like ‘the’ impairs performance, and abolishes word length effect. Prevents articulatory control process from rehearsing, or translating visual info into PL. Harder articulatory suppression tasks (e.g. spelling a word rather than just saying ‘the’) impair memory more, but this is more about attentional processes.
[Unattended speech effect - having irrelevant words or nonsense syllables played at the same time as encoding and retrieval impairs performance, but not as much as articulatory suppression. Music or noise has no effect. Interferes with but doesn’t entirely prevent rehearsal. Effect is additive to articulatory suppression, and to phonological similarity, but only when subjects are instructed to use articulatory rehearsal. Otherwise, irrelevant speech abolishes the phonological similarity effect, because people sometimes choose to use semantic rehearsal. Interferes with items stored in phonological store, rather than rehearsal.]
Neuropsychological evidence - Aphasic patients (central phonological deficit) are impaired; dysarthric patients (peripheral articulation deficit) are not - so rehearsal does not require overt articulation.
Phonological loop - characteristics
Best for serial recall
Consists of a phonological store and an articulatory control process. Thought to have evolved from the need to hold auditory stimuli online while processing a whole sentence, and the need to plan speech output.
Capacity about 7 (Miller, 1956). Equal across a) number of items that can be held online at once, in ‘immediate memory’ b) number of items that can be distinguished from one another, c) number of items that can be rapidly recalled. He said to beware of ‘magic numbers’ though, and there’s controversy. However, capacity was constant as complexity of items increased, e.g. numerals vs words - hence ‘chunking’
Forgetting is due to ‘proactive interference’ - Wickens, Born and Allen, 1963 - recall declines across successive trials of the same item type, but returns to near perfect immediately after items switch form: after digit trials (‘123’, ‘938’, ‘185’), a letter trial (‘FJS’) was recalled as well as a first-trial digit string (‘837’).
Changing state hypothesis
Jones et al 1993 - O-OER (object-oriented episodic record) model: any kind of stimulus is represented in the same abstract way, as an ‘object’, part of a modality-specific ‘stream’. Anything from the same source also enters that ‘stream’. Each object has a ‘pointer’, used to encode serial position. False recall occurs when pointers from different streams interfere with each other. If the same stimulus is presented repeatedly, it creates only one object. This theory predicts, correctly, that the irrelevant/unattended speech effect should be greater when the irrelevant speech consists of different items than one repeated item.
Also, because it’s pointers not objects that interfere with each other, this model predicts, correctly, that there is no impact of similarity between irrelevant speech and to-be-remembered speech on magnitude of irrelevant speech effect. PL model suggests there would be an effect.
Visuo-spatial sketchpad, evidence for
Luck and Vogel 1997 - Used two arrays of coloured dots, asked subjects which dot had changed colour. If our VSS had the same number of ‘slots’ as there were dots, memory would be perfect. If the dot that changed colour didn’t have a slot, subject would have to guess. They estimated number of slots at 3-4. Also, when testing memory of coloured, tilted bars, memory was the same no matter which characteristic was being tested. Hence ‘chunking’, storage as items rather than characteristics.
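The slot logic above can be turned into a capacity estimate. The formula below (often called Cowan's K) is not stated in the source; it just formalises the "guess if the changed dot has no slot" reasoning for a single-probe change-detection task, with illustrative numbers:

```python
def estimated_capacity(n_items, hit_rate, false_alarm_rate):
    """Slot-model capacity estimate (Cowan's K) for single-probe change
    detection: if K of the N items occupy slots, a change is detected
    whenever the probed item has a slot (prob K/N); otherwise the subject
    guesses "change" at some rate g, inflating hits and false alarms equally:
        hit = K/N + (1 - K/N) * g
        fa  = (1 - K/N) * g
    so hit - fa = K/N, giving K = N * (hit - fa).
    """
    return n_items * (hit_rate - false_alarm_rate)

# Illustrative numbers (not from the study): 8 dots, 75% hits,
# 25% false alarms -> K = 4, in line with the 3-4 slot estimate.
print(estimated_capacity(8, 0.75, 0.25))  # -> 4.0
```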
VSS, evidence against (2)
Wheeler and Treisman 2002 - subjects were poor at detecting changes in feature binding, i.e. a characteristic swapped between two items in the array. According to Luck and Vogel’s slot model, this shouldn’t happen. Instead, different features/characteristics may have separate WM stores.
Alvarez and Cavanagh 2004 - chunking cannot be happening, because stimulus change detection was worse for more complex stimuli - VSS capacity is dependent on ‘information load’. However, this study may not have kept the changes sufficiently large across different complexities of stimulus.
VSS vs transsaccadic memory
Irwin, 1991 - At fixation, we can integrate two briefly presented arrays to detect a missing dot. We’re much worse at this if there’s an intervening saccade. Whilst a small displacement in location will affect performance at fixation, it does not affect performance with the intervening saccade, because performance is so much worse already (i.e. there’s no integration to mess up). This suggests that transsaccadic memory is exactly the same as VSS WM, i.e. a limited-capacity, relatively long-lasting memory, not tied to spatial position.
So the fact we perceive vision as smooth, i.e. we don’t notice saccades, is not because we have this transsaccadic memory fusing together the images. It’s because we never check to see whether images line up, so we never notice that they don’t. It’s like a black hole.
Neuroimaging of WM (6)
PFC shows sustained activity in monkeys during the delay period of WM tasks
Courtney et al 1997 - fMRI shows PFC active during WM maintenance in humans (but representation may be too coarse to support detailed WM, suggesting a supervisory role of PFC)
Todd and Marois, 2004 +2005 - intraparietal and intraoccipital sulcus activity correlates with number of items in visual WM, and with individual performance at memory tasks
Vogel and Machizawa, 2004 - Contralateral delay activity from posterior EEG (i.e. subtracting activity on opposite side to attended info from ipsilateral side) also correlates with no. items in visual WM and individual performance.
Harrison and Tong 2009 - multivariate pattern analysis allowed decoding of orientation of bars from fMRI signals in primary visual cortex.
Bettencourt and Xu, 2015 - decoding from visual cortex is impaired by presence of visual distractors, but only when those distractors were predicted. Parietal cortex signal decoding is unaffected.
–hence maybe when the brain knows it’ll be distracted, it chooses not to use occipital cortex?–
Resource theory
Bays and Husain 2008 - the greater the number of visual stimuli, the less precise the recall. Proposed that WM is a limited representational resource distributed between the items to be remembered; items with more salience/visual attention are assigned more of the resource and stored at higher resolution.
Gorgoraptis et al 2011 - items more likely to be tested are stored at higher resolution - i.e. we remember more salient items better.
Murray et al 2013 - A ‘retro-cue’ can enhance recall for a specific item - give array of equally salient items, remove array, give cue suggesting which item would be tested for, then test for the cued item. Recall of the item was more likely, but not higher resolution. So attention can restore discrete items to VSS. Or maybe because cue protects item from interference?
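The resource account above can be sketched numerically. Proportional allocation and the salience weights below are illustrative assumptions, not the authors' fitted model:

```python
def allocate_precision(saliences, total_resource=1.0):
    """Resource-model sketch: a fixed pool of representational resource
    is divided across items in proportion to their salience/attentional
    weight, and each item's recall precision is taken to be proportional
    to its share.  (Proportional allocation is an illustrative choice.)
    """
    total = sum(saliences)
    return [total_resource * s / total for s in saliences]

# Equal salience: doubling set size halves each item's precision,
# matching the set-size effect of Bays and Husain.
print(allocate_precision([1, 1]))        # -> [0.5, 0.5]
print(allocate_precision([1, 1, 1, 1]))  # -> [0.25, 0.25, 0.25, 0.25]
# A cued/salient item claims a larger share, as in Gorgoraptis et al.
print(allocate_precision([3, 1, 1, 1]))
```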
Slot+averaging theory, for and against
Zhang and Luck 2008 - observed that as set size increases, the error distribution widens and gains a flat (guessing) component rather than remaining normal. Hypothesised that multiple copies of an item can be represented in different slots (so slots act as a ‘resource’, but there is still a finite limit on total number of items, which there isn’t in resource theory). Errors come from random guesses (uniform distribution) and noisy averaging across copies (normal distribution). Using this, capacity was estimated at 2-3 items
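A minimal generative sketch of this guess-plus-noise mixture (the capacity and noise values below are illustrative, not Zhang and Luck's fitted parameters):

```python
import math
import random

def sample_recall_error(set_size, capacity=2.5, noise_sd=0.3):
    """Slot+averaging mixture, generative form: with probability
    min(1, capacity/set_size) the probed item is in a slot and the report
    carries normally distributed noise; otherwise the response is a
    uniform random guess over the circular feature space.
    (capacity and noise_sd are illustrative assumptions.)
    """
    if random.random() < min(1.0, capacity / set_size):
        return random.gauss(0.0, noise_sd)    # noisy memory report
    return random.uniform(-math.pi, math.pi)  # uniform guess

# As set size rises past capacity, guesses dominate, so the error
# distribution widens and flattens, as observed.
random.seed(0)
mean_err = lambda n: sum(abs(sample_recall_error(n)) for _ in range(5000)) / 5000
print(mean_err(1), mean_err(6))
```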
Bays, Catalao and Husain 2009 - argued that some of the errors were actually misbindings: reports of the wrong item from the array. Modelling out these ‘swap’ errors removed any evidence of a finite item limit
Other, more modern theories
Two studies in 2012 - Memory precision may vary randomly between items and over time, perhaps due to variability in resource at the time of encoding.
Bays 2014 - argued errors aren’t necessarily normally-distributed, and used a model based on neural population coding that outperformed slot+averaging when fitting to data. Errors arise from decreasing signal strength in probabilistically spiking neurons. We have fine control over neural representations, and can pick the most accurate circuitry to store the most salient memories.
Brady and Alvarez 2011 - errors in recalling a circle’s size were biased towards the mean size of all presented circles of that colour. So items are not stored purely individually; ensemble statistics intrude. But maybe the average was only being computed at recall.
Brady and Alvarez 2015 - tested hundreds of people on many arrays and found strong consistency in which arrays were hardest and easiest to remember - so again, performance depends on display context, not just on items stored independently.