Week 6: Competitive Learning model of place cells Flashcards

1
Q

Sharp model of place cells developed in

A

1991

2
Q

Sharp model of place cells uses

A

standard artificial neurons with no internal dynamics: each neuron's activity is updated at every time step

3
Q

Sharp model of place cells uses an interesting learning rule called

A

competitive learning rule

4
Q

Diagram of competitive learning (5)

A
    • We have an input pattern
    • Some input neurons are active, others are not
    • We have output neurons (i.e., receiver neurons)
    • There are initially random weights between each input and output neuron
    • There is strong inhibition between the output neurons
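The setup described above can be sketched in code (a minimal sketch; the network sizes, input pattern, and random seed are illustrative assumptions, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)            # seed chosen arbitrarily

n_inputs, n_outputs = 5, 3                # illustrative sizes
x = np.array([1.0, 0.0, 1.0, 0.0, 1.0])   # example input pattern (active or not)
W = rng.random((n_outputs, n_inputs))     # initially random input-to-output weights

o = W @ x   # each output (receiver) neuron's activity: weighted sum of its inputs
```

The inhibition between the output neurons is not simulated explicitly here; as the later cards describe, it is replaced by simply picking the maximally active output.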
5
Q

Competitive learning rule has winner-take-all dynamics

A

The output neuron with the highest activity exerts the strongest inhibition on its neighbours and slowly suppresses them, even though they try to suppress it as well, so it ends up as the lone survivor

6
Q

Competitive learning rule

For each input pattern X, with initial random connections, a particular output neuron Oi (at random) will be the most

A

active neuron

7
Q

Competitive learning rule

Assuming the output neurons O have

A

inhibitory connections among each other (lateral inhibition), but we don’t model them explicitly

8
Q

Competitive learning rule first step

We find the maximum Oi, set it to 1 (max activity) and set all other Ok

A

to 0

9
Q

Competitive learning rule second step

We then do Hebbian learning (2)

A

wij → wij + ε Oi Xj for all j; all other weights, where Ok = 0, don’t change

We do this for all connections where the output neuron is active (since if the activity of O is 0, the weight does not change) and update the weights between the neurons
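The two steps (winner selection, then Hebbian update) can be sketched as follows (a minimal sketch with made-up weights and input; ε = 0.1 is an illustrative value):

```python
import numpy as np

eps = 0.1                          # small learning rate epsilon
x = np.array([1.0, 0.0, 1.0])      # example input pattern X
W = np.array([[0.2, 0.5, 0.3],     # one row of weights per output neuron
              [0.6, 0.1, 0.3]])

# Step 1: find the maximum Oi, set it to 1, and set all other Ok to 0
winner = int(np.argmax(W @ x))
o = np.zeros(len(W))
o[winner] = 1.0

# Step 2: Hebbian learning, wij -> wij + eps * Oi * Xj.
# Rows where Ok = 0 are multiplied by zero, so those weights don't change.
W = W + eps * np.outer(o, x)
```

Only the winning row of W moves; the losing row is untouched, exactly as the card states.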

10
Q

epsilon

A

is a tiny number, as we want small incremental changes to the weights in Hebbian learning

11
Q

Oi is the most active neuron

A

set to 1

12
Q

Xj is

A

the activity of input neuron j, for each of the input neurons

13
Q

Hebbian learning equation in words

wij → wij + ε Oi Xj (2)

A

ε Oi Xj = the activities of the given output and input neuron multiplied by each other, made very small by epsilon (so we learn slowly)

This is added to the pre-existing weight (wij)

14
Q

Before updating the weights (doing Hebbian learning) between neurons in the competitive learning rule, we need to

A

normalise synaptic connections first

15
Q

We repeat process of Hebbian learning and normalising synaptic connections in competitive learning rule

A

with presentation of each input pattern

16
Q

What happens as a result of the competitive learning rule is that the output whose incoming weights

We have different..

(2)

A

are most similar to the pattern xn wins, and its weights then become even more similar as we continue learning

Different outputs find their own clusters in the input data

17
Q

Normalisation in competitive learning is where

A

For a given output neuron, the total sum of the connection (i.e., input) weights coming into it stays constant (e.g., sum(wi) = 1)

18
Q

If we did not have normalisation then… (3)

A
  • If we continue learning for a long time, the weights will become very strong under Hebbian learning
  • There must be a physiological limit in the brain on how strong a synaptic connection can be
  • An output neuron can only fire at a given maximum rate
19
Q

Diagram of normalisation

A
20
Q

sum(wi) = ….

A

can be any fixed constant (1 is just a convenient choice)

21
Q

Diagram of normalisation explained, with sum(wi) = 1

A

The sum of all the input weights going into output neuron O2 has to be 1

22
Q

How do we do normalisation and keep track of it since weights increase by learning?

A

We divide the weights after each learning step by the sum of the total weights (sum(wi)), so the sum will always stay 1

23
Q

Example of keeping track of normalisation (2)

A

E.g., weights after 1 learning step are 2, 1, 8, 3, 2

sum(wi) = 2 + 1 + 8 + 3 + 2 = 16 => update the weights to be 2/16, 1/16, 8/16, 3/16, 2/16, so the sum of all these weights is still 1
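The worked example above can be checked in code:

```python
import numpy as np

w = np.array([2.0, 1.0, 8.0, 3.0, 2.0])  # weights after one learning step
total = w.sum()                           # 2 + 1 + 8 + 3 + 2 = 16
w_norm = w / total                        # 2/16, 1/16, 8/16, 3/16, 2/16
```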

24
Q

Example of keeping track of normalisation where one of the weights now gets much stronger (3), continuing the previous example

A
  • The strongest weights get stronger: wij → wij + ε Oi Xj
  • For example, wij = 8/16 grows stronger faster, at the expense of wij = 1/16
  • The weakest weights get weaker
25
Q

Normalisation and tracking it repeats with

A

presentation of each input pattern

26
Q

Competitive learning rule in a nutshell (4)

A
  • Select the winning output neuron
  • Update its weights
  • Normalise the weights = squash weak inputs and boost strong inputs
  • The output neuron becomes selective to the strongest neurons in the input pattern
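The four steps in the nutshell above can be combined into one loop (a minimal sketch; the input patterns, sizes, learning rate, and epoch count are illustrative assumptions, not Sharp's actual parameters):

```python
import numpy as np

def competitive_learning(patterns, n_outputs, eps=0.05, n_epochs=50, seed=0):
    """Winner-take-all competitive learning with weight normalisation."""
    rng = np.random.default_rng(seed)
    W = rng.random((n_outputs, patterns.shape[1]))
    W /= W.sum(axis=1, keepdims=True)      # start with sum(wi) = 1 per output
    for _ in range(n_epochs):
        for x in patterns:
            winner = np.argmax(W @ x)      # 1. select the winning output neuron
            W[winner] += eps * x           # 2. Hebbian update (O = 1 for winner only)
            W[winner] /= W[winner].sum()   # 3. renormalise so sum(wi) stays 1
    return W

# Two non-overlapping input clusters; outputs should specialise on them.
patterns = np.array([[1.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 1.0]])
W = competitive_learning(patterns, n_outputs=2)
```

After training, each row of W still sums to 1, and the winner for a given pattern has most of its weight concentrated on that pattern's active inputs, which is exactly the selectivity the cards describe.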
27
Q

Normalisation (and keeping track of it) is done for

A

each output neuron in the network whenever they are active

28
Q

Normalisation over time - (3)

A

By presenting input 1 over time, over multiple learning steps, some input weights get stronger and some get weaker

This is what is meant by being selective for an input cluster

O2 becomes selective to X3 and X5 in this example

29
Q

Model Sharp proposes (3)

A
  • Maybe at a given location, certain sensory inputs are very strong, and place cells are selected and mapped onto these
  • Maybe at a different location, different sensory inputs are active, and those are mapped onto PCs
  • In this way we get location-specific input
30
Q

Diagram of Sharp (1991) place cell firing model simulation of rat (5)

A
  • Simulation of a rat (blue triangle) that moves around a circular box
  • There are cues around the box
  • At each location, the simulated rat “observes” the distance to the visual cues and “observes” the direction of the cues relative to itself
  • So we propagate the neural activity (i.e., updating the value the neuron equation gives for each neuron) and perform learning updates
  • Then the simulated rat “moves” a small distance and “rotates”, exploring the box
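One “observation” step of the simulated rat can be sketched as follows (a minimal sketch; the cue layout, rat position, and heading are made-up values, not Sharp's actual parameters):

```python
import numpy as np

cues = np.array([[1.0, 0.0],     # positions of cues around the box
                 [0.0, 1.0],
                 [-1.0, 0.0]])
rat = np.array([0.2, 0.1])       # current position of the simulated rat
heading = np.pi / 2              # rat currently facing "north"

vecs = cues - rat
distances = np.linalg.norm(vecs, axis=1)                   # distance to each cue
directions = np.arctan2(vecs[:, 1], vecs[:, 0]) - heading  # cue direction relative to the rat

# These distances and directions form the input pattern that is fed into
# the competitive learning network at this location.
```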
31
Q

What happens to the neurons as simulated rat moves around and “observes” distance to cue and “observes” direction of cues in Sharp’s 1991 place cell firing model? (3)

A

There are two stages of competitive learning

First stage: conjunctions of distance and direction to landmarks are learned

Second stage: conjunctions of these first-stage outputs yield place cells

32
Q

First stage: conjunctions of distance and direction to landmarks are learned

meaning… (2)

A

The input pattern is now a representation of the distance and direction of all cues

There will be a co-occurrence of distance and direction input patterns

33
Q

Output of Sharp’s (1991) model
Simulated place cell firing is resistant to cue removal

(4)
A
  • At each location, the activity of the PC output neurons is calculated
  • We get place fields at specific locations for cells 3, 9 and 13
  • If you remove some cues, the place field is still there (exactly as seen experimentally), as a subset of the remaining cues is sufficient to reactivate that cell once the correct connections have been learned
  • We also get directionality in linear tracks
34
Q

General results of simulation of Sharp’s 1991 place cell firing model (2)

A
  • Simulated place cell firing is resistant to cue-removal
  • Simulated place cells are omni-directional only after random exploration and not following directed exploration!
35
Q

Output of Sharp’s (1991) place cell firing model

Simulated place cells are omni-directional only after random exploration and not following directed exploration! (5)

A
  • We also get directionality in linear tracks
  • This is the PC in the model having its firing recorded as a function of the location and head direction of the “simulated rat”
  • It fires at roughly the same location independent of the head direction of the rat in an open field
  • On a simulated linear track, the rat takes certain routes
  • PCs are directional, as some fire when the “simulated rat” is moving east or northeast but not when moving in other directions in a starburst maze