Week 6: Competitive Learning model of place cells Flashcards
(35 cards)
Sharp model of place cells developed in
1991
Sharp model of place cells uses
standard artificial neurons with no dynamics = whether each unit fires is simply updated at every time step
Sharp model of place cells uses an interesting learning rule called
competitive learning rule
Diagram of competitive learning (5)
- We have an input pattern
- Some input neurons are active, others are not
- We have output neurons (i.e., receiver neurons)
- There are initially random weights between each input and output neuron
- We have strong inhibition between the output neurons
Competitive learning rule has winner-take-all dynamics
The output neuron with the highest activity (1) exerts the strongest inhibition on its neighbours and slowly suppresses them, even though they try to suppress it in return, so it ends up as the lone survivor
Competitive learning rule
For each input pattern x, with the initial random connections, some particular output neuron Oi will (by chance) be the most
active neuron
Competitive learning rule
We assume the output neurons O have
inhibitory connections among each other (lateral inhibition), but we don't model them explicitly
Competitive learning rule first step
We find the maximum Oi, set it to 1 (max activity) and set all other Ok
to 0
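A minimal sketch of this winner-take-all step in Python/NumPy (the array names and the weight-matrix layout w[i, j] = weight from input j to output i are illustrative assumptions, not from the lecture):

    import numpy as np

    def winner_take_all(x, w):
        # x: input pattern (vector of input activities xj)
        # w: weight matrix, w[i, j] = weight from input j to output i (assumed layout)
        o = w @ x                    # each output's activity = weighted sum of its inputs
        winner = int(np.argmax(o))   # index of the most active output neuron Oi
        o_wta = np.zeros_like(o)
        o_wta[winner] = 1.0          # winner set to 1 (max activity), all other Ok set to 0
        return o_wta, winner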
Competitive learning rule second step
We then do Hebbian learning (2)
wij -> wij + ε Oi xj for all inputs xj; the other weights, where Ok = 0, don't change
We do this for every connection whose output neuron is active (since if the activity of the output is 0 the weight does not change) and thereby update the weights between the neurons
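A minimal sketch of this Hebbian step, reusing the layout assumed above (only the winner's row of weights changes, because Ok = 0 for all the losing outputs):

    def hebbian_update(w, x, o_wta, eps=0.01):
        # wij -> wij + eps * Oi * xj; the outer product is zero for every
        # output with Ok = 0, so only the winner's incoming weights change
        return w + eps * np.outer(o_wta, x)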
epsilon
is a tiny number because we want small incremental changes to the weights in Hebbian learning
Oi is the most active neuron
set to 1
Xj is
the activity of each of the input neurons j (the rule is applied over all the input neurons)
Hebbian learning equation in words
wij -> wij + ε Oi xj (2)
ε Oi xj = the activities of the given output and input neuron multiplied together, made very small by epsilon (so we learn slowly)
This is added to the pre-existing weight (wij)
Before updating the weights (doing Hebbian learning) between neurons in the competitive learning rule we need to
normalise synaptic connections first
We repeat the process of Hebbian learning and normalising the synaptic connections in the competitive learning rule
with the presentation of each input pattern
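Putting the steps together, a hedged sketch of the whole loop over input patterns, reusing the winner_take_all and hebbian_update sketches above; normalising after each Hebbian step follows the "divide after each learning step" card below, and the epoch count and epsilon values are arbitrary choices:

    def competitive_learning(patterns, n_outputs, eps=0.01, n_epochs=20, seed=None):
        # patterns: array of shape (n_patterns, n_inputs), one input pattern per row
        rng = np.random.default_rng(seed)
        w = rng.random((n_outputs, patterns.shape[1]))   # initially random weights
        w /= w.sum(axis=1, keepdims=True)                # start with sum(wi) = 1 per output
        for _ in range(n_epochs):
            for x in patterns:
                o_wta, _ = winner_take_all(x, w)         # step 1: winner-take-all
                w = hebbian_update(w, x, o_wta, eps)     # step 2: Hebbian learning
                w /= w.sum(axis=1, keepdims=True)        # normalise so sum(wi) stays 1
        return w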
What happens as a result of the competitive learning rule (2)
- The output whose incoming weights are most similar to the pattern xn wins, and its weights then become even more similar as we do the learning
- Different outputs find their own clusters in the input data
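A hypothetical usage example of the loop above; the two-cluster toy patterns are made up for illustration:

    patterns = np.array([
        [1.0, 1.0, 0.0, 0.0],    # patterns resembling cluster A
        [1.0, 0.9, 0.0, 0.1],
        [0.0, 0.0, 1.0, 1.0],    # patterns resembling cluster B
        [0.1, 0.0, 0.9, 1.0],
    ])
    w = competitive_learning(patterns, n_outputs=2, eps=0.05, n_epochs=50, seed=0)
    print(w.round(2))   # each row tends to end up resembling one cluster's patterns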
Normalisation in competitive learning is where
for a given output neuron, the total sum of the connection (i.e., input) weights to that neuron stays constant (e.g., sum(wi) = 1)
If we did not have normalisation then… (3)
- If we continued learning for a long time, the weights would become very strong under Hebbian learning
- There must be a physiological limit in the brain on how strong a synaptic connection can be
- An output neuron can only fire at a given maximum rate
Diagram of normalisation
sum(wi) = ….
can be any arbitrary (fixed) number
Diagram of normalisation explained for sum(wi) = 1
The sum of all the input weights going into output neuron O2 has to equal 1
How do we do normalisation and keep track of it since weights increase by learning?
We divide the weights after each learning step by the sum of the total weights (sum[wi]) so the sum will always stay 1
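A one-line sketch of this normalisation for a single output neuron's incoming weight vector (assuming the target total is 1):

    def normalise(weights):
        # divide every incoming weight by the total so that sum(wi) returns to 1
        return weights / weights.sum()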
Example of keeping track of normalisation (2)
E.g., the weights after 1 learning step are 2, 1, 8, 3, 2
sum(wi) = 2+1+8+3+2 = 16 => update the weights to be 2/16, 1/16, 8/16, 3/16, 2/16 so the sum of all these weights is again 1
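Checking the card's arithmetic with the normalise sketch above:

    w = np.array([2.0, 1.0, 8.0, 3.0, 2.0])   # weights after one learning step
    w_norm = normalise(w)                      # 2/16, 1/16, 8/16, 3/16, 2/16
    print(w.sum(), w_norm.sum())               # 16.0 1.0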
Example of keeping track of normalisation where one of the weights now gets much stronger (3) (continuing the example above)
- The strongest weights get stronger: wij -> wij + ε Oi xj
- For example, wij = 8/16 grows stronger faster at the expense of wij = 1/16
- The weakest weights get weaker
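A small sketch of this "rich get richer" effect; the graded input pattern is a made-up one whose strongest component lines up with the already-strong 8/16 weight, which is what lets that weight keep growing while the 1/16 weight (whose input is silent) shrinks under normalisation:

    w = np.array([2.0, 1.0, 8.0, 3.0, 2.0]) / 16   # normalised weights from the card
    x = np.array([0.1, 0.0, 0.8, 0.05, 0.05])      # made-up input pattern
    for _ in range(100):
        w = normalise(w + 0.05 * 1.0 * x)          # Hebbian step for the winner (Oi = 1), then renormalise
    print(w.round(3))   # the 8/16 weight has grown towards 0.8, the 1/16 weight has shrunk towards 0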