Week 6: Competitive Learning model of place cells Flashcards
Sharp model of place cells developed in
1991
Sharp model of place cells uses
standard artificial neurons with no dynamics, i.e. the activity is simply recomputed at every time step
Sharp model of place cells uses an interesting learning rule called
competitive learning rule
Diagram of competitive learning (5)
- We have an input pattern
- Some input neurons are active and some are not
- We have output neurons (i.e., receiver neurons)
- There is initially a random weight between each input and output neuron
- We have strong inhibition between the output neurons
Competitive learning rule has winner take all dynamics
The output neuron with the highest activity (close to 1) exerts the strongest inhibition on its neighbours and slowly suppresses them, even though they try to suppress it as well, so it ends up as the lone survivor
Competitive learning rule
For each input pattern X, with initial random connections, a particular output neuron Oi (at random) will be the most active neuron
Competitive learning rule
We assume the output neurons O have inhibitory connections among each other (lateral inhibition), but we don't model them explicitly
Competitive learning rule first step
We find the maximum Oi, set it to 1 (max activity), and set all other Ok to 0
Competitive learning rule second step
We then do Hebbian learning (2)
wij → wij + ε Oi Xj; all other weights (those with Ok = 0) don't change
We do this for all connections where the output neuron is active (if the activity of Ok is 0 the weight does not change) and update the weights between neurons
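The two steps above can be sketched in NumPy; the sizes, names, and learning rate here are illustrative assumptions, not values from the model:

```python
import numpy as np

# Minimal sketch of one competitive-learning update step.
rng = np.random.default_rng(0)

n_inputs, n_outputs = 8, 4
eps = 0.05                                  # tiny learning rate (epsilon)
W = rng.random((n_outputs, n_inputs))       # initially random weights wij

x = rng.integers(0, 2, size=n_inputs).astype(float)   # binary input pattern X

# Step 1: winner-take-all -- the most active output is set to 1, all others to 0
o = np.zeros(n_outputs)
o[np.argmax(W @ x)] = 1.0

# Step 2: Hebbian update  wij -> wij + eps * Oi * Xj
# (only the winner's row of W changes, since Ok = 0 for every other output)
W += eps * np.outer(o, x)
```

Note that `np.outer(o, x)` is zero everywhere except the winner's row, which is exactly why the losers' weights stay unchanged.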
epsilon
is a tiny number as we want small incremental changes to weights in Hebbian learning
Oi is the most active neuron
set to 1
Xj is
the activity of input neuron j
Hebbian learning equation in words
wij → wij + ε Oi Xj (2)
ε Oi Xj = the activities of the given output and input neuron multiplied together, scaled down by epsilon (so we learn slowly)
This is added to the pre-existing weight (wij)
Before updating weights (doing Hebbian learning) in the competitive learning rule between neurons we need to
normalise synaptic connections first
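The cards don't specify which norm is used; a common choice (assumed here) is to scale each output neuron's incoming weights so they sum to 1:

```python
import numpy as np

# Hypothetical normalisation step: divide each output neuron's incoming
# weights by their sum, so every row of W sums to 1 (L1 normalisation).
W = np.array([[0.2, 0.6, 0.2],
              [0.1, 0.1, 0.3]])
W_norm = W / W.sum(axis=1, keepdims=True)
```

This stops the winner's weights from growing without bound: Hebbian learning only ever increases weights, so without normalisation one neuron would win every competition.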
We repeat the process of Hebbian learning and normalising synaptic connections in the competitive learning rule
with the presentation of each input pattern
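Putting the whole loop together, a minimal end-to-end sketch might look like this; the function name, epoch count, and toy patterns are assumptions for illustration:

```python
import numpy as np

def competitive_learning(patterns, n_outputs, eps=0.05, epochs=20, seed=0):
    """Repeat normalise -> winner-take-all -> Hebbian update per pattern."""
    rng = np.random.default_rng(seed)
    n_inputs = patterns.shape[1]
    W = rng.random((n_outputs, n_inputs))   # initially random weights wij
    for _ in range(epochs):
        for x in patterns:
            # normalise each output neuron's incoming weights first
            W /= W.sum(axis=1, keepdims=True)
            winner = np.argmax(W @ x)        # winner-take-all
            W[winner] += eps * x             # Hebbian step for the winner only
    return W

# Toy binary input patterns; over training, the output neurons tend to
# specialise on different clusters of patterns.
patterns = np.array([[1, 1, 0, 0],
                     [1, 1, 0, 0],
                     [0, 0, 1, 1]], dtype=float)
W = competitive_learning(patterns, n_outputs=2)
```

Because the winner's weights grow toward the patterns it wins, each output neuron gradually becomes tuned to a subset of inputs, which is how the model produces place-cell-like selectivity.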