week 11: models in memory Flashcards

1
Q

threshold model of recognition

A
  • the idea that a memory trace's activation must exceed a threshold before the item is judged ‘old’ (i.e., recognized)
  • has separate recognition and rejection thresholds
2
Q

discrete vs continuous models for the threshold model

A

discrete: assumes features are retrieved from memory in an all-or-none manner
continuous: information is retrieved along a continuum (based on signal detection theory, i.e., hits, misses, false alarms, correct rejections); see the sketch below
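A minimal sketch of the continuous (signal detection) view, using made-up response counts; the z-transform via NormalDist is standard SDT arithmetic, but nothing here comes from the lecture itself:

```python
# Continuous view of recognition: old items yield higher memory strength on
# average, and a decision criterion converts strength into "old"/"new"
# responses, producing hits, misses, false alarms, and correct rejections.
from statistics import NormalDist

# Hypothetical counts from a recognition test
hits, misses = 40, 10                       # responses to genuinely old items
false_alarms, correct_rejections = 15, 35   # responses to genuinely new items

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)

z = NormalDist().inv_cdf                       # z-transform used in SDT
d_prime = z(hit_rate) - z(fa_rate)             # sensitivity: old/new separation
criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias (the "threshold")

print(f"hit rate={hit_rate:.2f}, FA rate={fa_rate:.2f}, "
      f"d'={d_prime:.2f}, c={criterion:.2f}")
```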

3
Q

generate-recognize model of recall

A
  • unlike recognition, recall involves searching memory
  • assumes recall is a two-stage process: first generate candidate memory traces (cues > memory traces > more connections), then recognize (toy example below)
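A toy illustration of the two stages (the cue-to-candidate dictionary and the studied list are invented for the example):

```python
# Stage 1 (GENERATE): follow cue -> item associations to produce candidates.
# Stage 2 (RECOGNIZE): keep only candidates that pass a recognition check
# against the studied list.
studied = {"apple", "doctor", "river"}

associations = {                       # hypothetical cue -> associated words
    "fruit":  ["apple", "banana", "pear"],
    "health": ["doctor", "nurse", "apple"],
}

def recall(cue):
    candidates = associations.get(cue, [])          # generate
    return [w for w in candidates if w in studied]  # recognize

print(recall("fruit"))   # ['apple']
print(recall("health"))  # ['doctor', 'apple']
```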
4
Q

recognition failure

A
  • sometimes items can be recalled but not recognized
5
Q

recall uses ____ to prompt retrieval, whereas recognition uses ____

A

cues, the item itself

6
Q

network models are about items (aka ___) and their associations (aka ____)

A

nodes, links

7
Q

spreading activation

A
  • activation spreads to related nodes through the strength of their associative links (sketch below)
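A minimal sketch of one step of spreading activation over a small, invented semantic network (link weights and the decay factor are arbitrary):

```python
# Activation starts at a source node and spreads to its neighbours; how much
# arrives depends on the strength of each associative link (and some decay).
links = {
    "doctor": {"nurse": 0.8, "hospital": 0.6, "bread": 0.05},
    "nurse":  {"doctor": 0.8, "hospital": 0.5},
}

def spread(source, activation=1.0, decay=0.5):
    """Return the activation reaching each neighbour after one step."""
    return {node: activation * weight * decay
            for node, weight in links.get(source, {}).items()}

print(spread("doctor"))
# {'nurse': 0.4, 'hospital': 0.3, 'bread': 0.025}: strongly linked nodes get more
```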
8
Q

adaptive control of thought (ACT) model

A
  • propositions are made up of idea units (often two nodes and a link)
  • type (general) vs token (specific) nodes
  • activation spreads through the network but is a limited resource
  • production memory (procedural if-then rules) vs declarative memory (propositions); see the sketch below
  • more practice = stronger productions
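A highly simplified sketch of the declarative/production split, using one invented fact set and a single hypothetical if-then production:

```python
# Declarative memory holds propositions (facts); production memory holds
# if-then rules that fire when their condition matches those facts.
declarative = {("dog", "isa", "animal"), ("animal", "has", "legs")}

productions = [
    # (condition, action) pair: "if dog is an animal, note that dogs have legs"
    (lambda facts: ("dog", "isa", "animal") in facts,
     lambda facts: facts.add(("dog", "has", "legs"))),
]

for condition, action in productions:
    if condition(declarative):   # condition matched, so the production fires
        action(declarative)

print(("dog", "has", "legs") in declarative)  # True
```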
9
Q

hierarchical vs relatedness

A
  • hierarchical: each item is connected, but there is a built-in, natural hierarchical relationship, with items having their own features
  • relatedness: still a structured network, but less concerned with hierarchy and more with how the concepts relate to one another
10
Q

latent semantic analysis

A
  • trained on a massive corpus of millions of texts; from this we can derive a statistic estimating co-occurrence (how often two words appear together); see the toy example below
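A toy co-occurrence count; this is only the intuition behind LSA (the real method builds a word-by-document matrix and applies dimensionality reduction), and the three-document corpus is invented:

```python
# Count how often each pair of words appears in the same document.
from collections import Counter
from itertools import combinations

corpus = [                           # stand-in for "millions of texts"
    "the doctor saw the nurse",
    "the nurse helped the doctor",
    "the chef cooked bread",
]

co_occurrence = Counter()
for doc in corpus:
    for pair in combinations(sorted(set(doc.split())), 2):
        co_occurrence[pair] += 1

print(co_occurrence[("doctor", "nurse")])  # 2: appear together often (related)
print(co_occurrence[("bread", "doctor")])  # 0: never co-occur (unrelated)
```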
11
Q

global matching models

A
  • memories are assessed through processes that consider the entire set of available traces
  • relation in memory occurs at retrieval
  • patterns of info imposed on a common framework
12
Q

multiple trace models

A
  • many records for each type of memory
  • memory traces are activated in parallel
13
Q

distributed storage models

A
  • many different memories are imposed on the same structure
  • emphasizes associative memory
  • common underlying framework and memories are connections between nodes
  • e.g., TODAM (theory of distributed associative memory) and CHARM (composite holographic associative recall model); convolution sketch below
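A hedged sketch of the convolution idea behind these models: an association between two item vectors is stored as their circular convolution, and cueing with one item (via circular correlation) returns a noisy version of its partner. This is a simplification of the published models; the vectors and dimensionality are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
apple = rng.normal(0, 1 / np.sqrt(n), n)    # random item vectors
doctor = rng.normal(0, 1 / np.sqrt(n), n)

def cconv(a, b):   # circular convolution: store an association
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):   # circular correlation: probe the trace with one item
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

trace = cconv(apple, doctor)       # composite distributed memory trace
retrieved = ccorr(apple, trace)    # cue with "apple"

print(np.dot(retrieved, doctor))                            # high: resembles "doctor"
print(np.dot(retrieved, rng.normal(0, 1 / np.sqrt(n), n)))  # near 0: unrelated vector
```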
14
Q

search of associative memory model

A
  • memories are stored in traces that contain content, associative, and contextual info
  • remembering = a cue overlaps with features in a trace (the trace is activated and can be retrieved)
  • stronger traces are sampled first
15
Q

SAM recognition process

A
  • use the test item and context as cues
  • determine a recognition threshold (bias)
  • is familiarity greater than the criterion? (yes/no judgement based on familiarity)
  • doesn't have to be just one strong memory; it can be many weak ones (sketch below)
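A hedged sketch of SAM-style global familiarity, with made-up cue-to-trace strengths: familiarity sums the match across all traces, so several weak matches can exceed the criterion just as one strong match can:

```python
# Familiarity = sum over traces of S(item, trace) * S(context, trace),
# compared against a recognition criterion (the bias/threshold).
traces = ["trace1", "trace2", "trace3"]

item_strength    = {"trace1": 0.9, "trace2": 0.1, "trace3": 0.1}  # S(item, trace)
context_strength = {"trace1": 0.8, "trace2": 0.7, "trace3": 0.6}  # S(context, trace)

familiarity = sum(item_strength[t] * context_strength[t] for t in traces)
criterion = 0.5

print(round(familiarity, 2))                         # 0.85
print("old" if familiarity > criterion else "new")   # "old"
```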
16
Q

SAM recall process

A
  • question
  • retrieval plan
  • assemble probe cues in STM
  • search (set selection)
  • sample a trace
  • retrieve the trace
  • evaluation
  • decide whether or not to produce a response
17
Q

MINERVA 2

A
  • assumes memories are strings of features
  • each trace can be conceptualized as a set of numbers, one per feature, indicating presence/absence
  • a probe activates many memory traces, each with its own set of numbers
18
Q

minerva 2 echo general def

A
  • weighted composite of all activated traces
  • sum total
  • can produce schemas
  • repeated echoes can narrow in on a specific trace
19
Q

minerva 2 echo intensity

A
  • the strength with which the traces are activated
  • weak echo = the item is not known
20
Q

minerva 2 echo content

A
  • weighted average of the features of all active traces (sketch below)
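A hedged sketch of the echo computation, simplified from MINERVA 2 (the traces and probe are invented, and similarity is normalized by the total number of features rather than only the relevant ones):

```python
# Traces and probe are feature vectors of +1/0/-1. Each trace's activation is
# its similarity to the probe cubed; echo intensity sums the activations, and
# echo content is the activation-weighted sum of the traces themselves.
import numpy as np

traces = np.array([            # rows = stored traces
    [ 1, -1,  1,  0,  1, -1],
    [ 1, -1,  1,  1,  1, -1],
    [-1,  1, -1,  1, -1,  1],
])
probe = np.array([1, -1, 1, 0, 1, -1])

similarity = traces @ probe / traces.shape[1]  # match to the probe, -1..1
activation = similarity ** 3                   # cubing downweights poor matches

echo_intensity = activation.sum()              # how familiar the probe feels
echo_content = activation @ traces             # what the echo "says"

print(round(float(echo_intensity), 2))         # 0.58
print(np.round(echo_content, 2))               # a blend of the two matching traces
```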
21
Q

parallel distributed processing model

A
  • input, hidden, and output units
  • translates the input memory through the hidden (middle) layer to the output, where it takes on the same form
  • learning alters the weights (the strength with which input and output units relate)
  • interconnected units, loosely corresponding to neurons and axons
  • each memory has a different pattern of activation across the units (sketch below)
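An illustrative sketch in the PDP spirit rather than the lecture's exact model: a tiny linear associator stores two memory patterns in one shared weight matrix by repeatedly nudging the weights (a delta rule):

```python
import numpy as np

patterns = np.array([[1., 0., 1., 0.],      # memory A (input pattern)
                     [0., 1., 0., 1.]])     # memory B
targets = patterns.copy()                   # auto-association: output = input

weights = np.zeros((4, 4))                  # one shared set of connections
learning_rate = 0.2

for _ in range(50):                         # repeated training passes
    for x, t in zip(patterns, targets):
        output = weights @ x                                 # input -> output
        weights += learning_rate * np.outer(t - output, x)   # adjust weights

print(np.round(weights @ patterns[0], 2))   # ~[1, 0, 1, 0]: memory A recovered
print(np.round(weights @ patterns[1], 2))   # ~[0, 1, 0, 1]: memory B recovered
```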
22
Q

dual process model

A
  • assumes there are two processes: familiarity and recollection
23
Q

in the dual process model, familiarity vs recollection

A
  • familiarity: faster, quantitative
  • recollection: more conscious/effortful, qualitative
24
Q

neural basis for remember and know processes

A
  • remember: the hippocampus is only activated at the highest confidence (the yes/no 'remember' judgement is hippocampus-driven)
  • know: more based on familiarity and shows a linear relationship in the medial temporal lobe cortex surrounding the hippocampus (a graded, linear scale)