Localist vs distributed coding Flashcards

1
Q

Localist (grandmother cell)

A

each word is represented by a single dedicated unit

2
Q

distributed

A

words are coded as patterns of activation across a set of units (each unit is involved in multiple codes)
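The contrast can be sketched in code. This is a toy illustration (the words and codes are made up, not from the source): localist codes are one-hot, so a single active unit identifies the word; in a distributed code each unit is shared across several words, so one unit alone is ambiguous.

```python
# Localist: each word gets its own dedicated unit (one-hot).
localist = {
    "cat": [1, 0, 0, 0],
    "dog": [0, 1, 0, 0],
    "cap": [0, 0, 1, 0],
    "cot": [0, 0, 0, 1],
}

# Distributed: each word is a pattern over all units, and each unit
# participates in the codes of several words.
distributed = {
    "cat": [1, 1, 0, 0],
    "dog": [0, 1, 1, 0],
    "cap": [0, 0, 1, 1],
    "cot": [1, 0, 0, 1],
}

# In the localist scheme, exactly one unit is active per word.
for word, code in localist.items():
    assert code.count(1) == 1

# In the distributed scheme, looking at unit 1 alone is ambiguous:
# it is active for both "cat" and "dog".
active_for_unit_1 = [w for w, code in distributed.items() if code[1] == 1]
print(active_for_unit_1)
```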

3
Q

Neural network models

A

reject localist word codes; the models learn to name words via distributed codes
a word activates a pattern of activation across a hidden layer of units, so no dedicated whole-word units can be found
knowledge is distributed

4
Q

Back-propagation

A

a learning algorithm that trains models by changing their connection weights so as to reduce output error.
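The idea can be shown with a minimal sketch (an assumption for illustration, not the course's model): a tiny network with one hidden layer is trained on a made-up mapping, and the error signal at the output is propagated back to adjust every connection weight, so total error falls over training.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set (hypothetical): 2 inputs -> 1 target output.
data = [([0.0, 1.0], 1.0), ([1.0, 0.0], 1.0), ([0.0, 0.0], 0.0)]

# Weights: 2 inputs -> 2 hidden units -> 1 output unit.
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w_ho = [random.uniform(-1, 1) for _ in range(2)]
lr = 0.5

def forward(x):
    h = [sigmoid(sum(w_ih[j][i] * x[i] for i in range(2))) for j in range(2)]
    o = sigmoid(sum(w_ho[j] * h[j] for j in range(2)))
    return h, o

def total_error():
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

err_before = total_error()

for _ in range(2000):
    for x, t in data:
        h, o = forward(x)
        # Output error term, then propagate it back through the weights.
        d_o = (o - t) * o * (1 - o)
        d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w_ho[j] -= lr * d_o * h[j]
            for i in range(2):
                w_ih[j][i] -= lr * d_h[j] * x[i]

err_after = total_error()
print(err_before, err_after)  # error shrinks as the weights are updated
```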

5
Q

localist Macaque monkey study

A

evidence of high selectivity for complex information in cortex
recorded 850 neurons in IT
one cell responded selectively to just 1 of 27 faces

6
Q

OJ Simpson cell

A

a cell recorded in monkey IT that responded selectively to images of O.J. Simpson

7
Q

limitations of localist studies

A
  • only a set number of images is ever presented
    - so the representations are shown to be highly selective, but this cannot establish that they are grandmother cells
8
Q

superposition catastrophe

A

it is difficult to co-activate multiple distributed patterns at once in STM, so cortical systems that support STM arguably need highly selective (localist) codes, which are unambiguous

9
Q

Example of superposition catastrophe

A

if you want to code for two names at the same time, you add the two codes together; the resulting blend is ambiguous because it could indicate several different combinations of names
so a set of distributed units cannot unambiguously code for more than one thing at the same time
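The blend problem can be demonstrated directly. This is a hypothetical sketch (the names and 4-unit codes are invented for illustration): superimposing the codes for two names produces the same blend that a completely different pair would produce, so the combination cannot be read back unambiguously.

```python
# Hypothetical distributed codes for four names over four units.
codes = {
    "JOHN": [1, 1, 0, 0],
    "MARY": [0, 0, 1, 1],
    "JANE": [1, 0, 1, 0],
    "PAUL": [0, 1, 0, 1],
}

def superimpose(a, b):
    # Co-activate two codes on the same units (element-wise OR).
    return [max(x, y) for x, y in zip(codes[a], codes[b])]

blend_1 = superimpose("JOHN", "MARY")
blend_2 = superimpose("JANE", "PAUL")
print(blend_1, blend_2)  # identical blends from different name pairs
```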

10
Q

neural system of STM (Botvinick and Plaut)

A

successfully co-activates multiple items
the network takes a pattern of activation at the input and reproduces it at the output
units in the hidden layer feed activation back to themselves, so activation persists over time
the distributed hidden layer allows names to be recalled in order (as a blend pattern)
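The self-activation idea can be sketched as follows. This is an assumption-laden toy, not the actual Botvinick and Plaut implementation: hidden units with recurrent self-connections re-activate themselves on every time step, so a presented pattern persists after the input is withdrawn, which is the basis for holding an item in STM.

```python
def step(hidden, inp, self_weight=1.0):
    # Each hidden unit re-activates itself via its self-connection and
    # adds any current input, clipped to the range [0, 1].
    return [min(1.0, max(0.0, self_weight * h + i)) for h, i in zip(hidden, inp)]

hidden = [0.0, 0.0, 0.0, 0.0]
hidden = step(hidden, [1.0, 0.0, 1.0, 0.0])   # present an item
hidden = step(hidden, [0.0, 0.0, 0.0, 0.0])   # input removed...
hidden = step(hidden, [0.0, 0.0, 0.0, 0.0])   # ...but the pattern persists
print(hidden)
```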

11
Q

evidence of hidden unit response

A

plot the activation of each hidden unit in response to each word.
a unit responds to a range of inputs with no selectivity: you cannot infer what was presented by looking at the activation of a single unit.

12
Q

simulations of Botvinick and Plaut model

A

trained on 300 words. when trained one syllable at a time there is no superposition catastrophe, and the coding is distributed: no single unit codes a word
when trained on multiple items at once, highly selective patterns occur: single units respond to single items, i.e. localist codes emerge in response to the superposition catastrophe
