Competitive Learning Flashcards
What is supervised learning?
Supervised learning is where you have input variables (X) and an output variable (Y) and you use an algorithm to learn the mapping function from the input to the output.
Y = f(X)
The goal is to approximate the mapping function so well that, when you have new input data (X), you can predict the output variables (Y) for that data.
3 examples of supervised learning
- Linear regression for regression problems
- Random forest for classification and regression problems
- Support vector machines for classification problems
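As a sketch of the first example above, here is a tiny simple-linear-regression fit in plain Python; the toy data and variable names are made up for illustration.

```python
# Supervised learning sketch: learn the mapping Y = f(X) from labelled
# examples, here by closed-form simple linear regression y = a*x + b.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]  # outputs roughly follow y = 2x (toy data)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Closed-form least-squares slope and intercept
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def predict(x):
    # The learned approximation of f(X)
    return a * x + b
```

Once fitted, `predict` can be applied to new inputs the model never saw during training, which is the point of approximating f(X).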
What is unsupervised learning?
where you only have input data (X) and no corresponding output variables. The goal for unsupervised learning is to model the underlying structure or distribution in the data in order to learn more about the data.
2 examples of unsupervised learning
- k-means for clustering problems
- Apriori algorithm for association rule learning problems
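As a sketch of the first example above, here is a minimal one-dimensional k-means (k = 2) in plain Python; the data and starting centroids are hand-picked for illustration.

```python
# Unsupervised learning sketch: k-means finds cluster structure in
# unlabelled data by alternating assignment and update steps.
data = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]
centroids = [1.0, 9.0]  # hand-picked initial guesses

for _ in range(10):  # a few refinement iterations
    # Assignment step: each point joins its nearest centroid's cluster
    clusters = [[], []]
    for x in data:
        nearest = min(range(2), key=lambda i: abs(x - centroids[i]))
        clusters[nearest].append(x)
    # Update step: move each centroid to the mean of its cluster
    centroids = [sum(c) / len(c) for c in clusters]

print(centroids)  # -> [1.5, 8.5]
```

Note there are no output labels anywhere: the algorithm discovers the two groups purely from the structure of the inputs.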
What 3 ideas are competitive learning built off of?
I. Hebbian learning principle: when pre-synaptic and post-synaptic units are co-active, the connection between them should increase.
II. Competition between different units for activation, through lateral inhibition / winner-take-all activation rule
III. Competition between incoming weights of a unit, to prevent all weights from saturating, by normalizing the weights to have fixed net size: if some incoming weights to a unit grow, the others will shrink.
What does competitive learning involve?
unsupervised training in which the output nodes compete with each other to represent the input pattern. There are also strong, fixed negative connections between the output nodes, and because of this lateral inhibition only one neurone wins.
How would you build a competitive learning algorithm?
- Beginning with random initial connection weights, we present a series of input patterns of activity to the network. Each pattern produces a different pattern of firing across the input neurones.
- This activity spreads through the connection weights to the output neurones, which also inhibit each other, and we change the connections between input and output with the Hebbian learning rule.
- After the weights have been changed many, many times, we hope the network ends up with a pattern of connection weights that makes it useful.
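The steps above can be sketched in plain Python. This is an illustrative toy, not a definitive implementation: the network size, learning rate, and input patterns are all assumptions.

```python
import random

def normalise(w):
    # Keep the incoming weight vector at fixed (unit) length, so if
    # some weights grow, the others shrink.
    length = sum(v * v for v in w) ** 0.5
    return [v / length for v in w]

random.seed(0)
# Random initial connection weights: 2 output neurones, 4 inputs
weights = [normalise([random.random() for _ in range(4)]) for _ in range(2)]

# Two families of input patterns the network should come to separate
patterns = [[1, 1, 0, 0], [0, 0, 1, 1]]

eta = 0.5  # learning rate
for step in range(100):
    x = patterns[step % 2]
    # Lateral inhibition reduces to winner-take-all: the output
    # neurone whose weights give the biggest input wins
    activations = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in weights]
    winner = activations.index(max(activations))
    # Hebbian update for the winner only, then renormalise
    weights[winner] = normalise(
        [w_i + eta * x_i for w_i, x_i in zip(weights[winner], x)])
```

After training, each output neurone has become most responsive to one of the two pattern families, which is exactly the "useful pattern of connection weights" the flashcard hopes for.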
Why is there a winning neuron in competitive learning?
Different output neurones have different connection strengths, so the effect of the inhibitory connections between them is simply that the output neurone with the biggest input is the winner. Only the connection strengths going to the winning neurone change, because only the winning neurone has a non-zero post-synaptic firing rate.
What is the connection strength of the winning neuron increased by?
connection strength to the winning neurone is increased by an amount proportional to the presynaptic firing rate
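This update rule can be written in one line. The learning rate and firing rates below are made-up values for illustration.

```python
# Hebbian update for the winner only: each incoming connection grows
# in proportion to its presynaptic firing rate.
eta = 0.1                      # learning rate (assumed value)
presynaptic = [0.8, 0.2, 0.0]  # input (presynaptic) firing rates
winner_w = [0.5, 0.5, 0.5]     # winner's incoming weights

winner_w = [w + eta * r for w, r in zip(winner_w, presynaptic)]
```

Note the connection from the silent input (rate 0.0) does not change at all, while the most active input's connection grows the most.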
What is the problem with one winning neuron?
If one of the output neurones starts to win, its connection strengths increase, so it is even more likely to win the next time as well. This is unsustainable because it won't lead to any variable behaviour: you end up with a network where the same output neurone is constitutively active.
What is normalisation?
the vector length of each neurone's incoming weights is kept the same. Every output neurone receives the same total amount of connection strength, but the pattern of connection strengths differs between neurones. In other words, the overall total amount of connection strength does not change even when the neurone fires and learns.
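Normalisation after a Hebbian update can be sketched in a few lines; the numbers here are illustrative.

```python
# Weight normalisation: after a Hebbian increase, rescale the winner's
# incoming weights back to fixed (unit) length -- if some weights
# grow, the others must shrink.
w = [0.6, 0.8, 0.0]                                    # unit-length weights
w = [v + 0.5 * r for v, r in zip(w, [1.0, 0.0, 0.0])]  # Hebbian growth
length = sum(v * v for v in w) ** 0.5
w = [v / length for v in w]                            # renormalise
```

After renormalisation the first weight has grown but the second has shrunk, even though only the first connection received a Hebbian increase.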
How does competitive learning relate to self-organising maps in the brain (topographical maps)?
Like the competitive learning algorithm, a SOM has connections between all the inputs and all the outputs. But instead of simple lateral inhibition, Willshaw and von der Malsburg (1976) suggested that the connection weights between the output neurones have a Mexican-hat relation: neurones nearby excite each other through short-range connections, while neurones far apart inhibit each other.
How is competitive learning different to self-organising maps in the brain (topographical maps)?
They work in a similar way, except that instead of a single winning neurone you tend to get a patch of local neurones that support each other's firing while inhibiting neurones further away through long-range connections.
Why can topographical maps be considered SOM’s?
Topographical maps are similar to SOMs because some areas of the brain develop structures with distinct regions, each of them with a high sensitivity to a specific input pattern.
What function describes the relation between neurones in SOMs?
The neighbourhood function F, which describes how close together neurones are in the sheet. F equals 1 at the winning neurone itself and decreases progressively with distance between neurones, going towards 0 for neurones far apart. This means that nearby neurones come to respond to similar kinds of inputs.
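A common concrete choice for such a neighbourhood function is a Gaussian of the distance between neurones; the width parameter `sigma` below is an assumption, not something the flashcards specify.

```python
import math

def neighbourhood(dist, sigma=2.0):
    # Gaussian neighbourhood function: 1 at the winning neurone
    # (dist = 0), falling smoothly towards 0 with distance.
    return math.exp(-dist ** 2 / (2 * sigma ** 2))

print(neighbourhood(0))  # winner itself -> 1.0
```

In a SOM update, each neurone's weight change is scaled by `neighbourhood(dist)` of its distance to the winner, so nearby neurones are pulled towards the same inputs as the winner and far-away neurones are barely affected.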