Competitive Learning Flashcards

1
Q

What is supervised learning?

A

Supervised learning is where you have input variables (X) and an output variable (Y) and you use an algorithm to learn the mapping function from the input to the output.

Y = f(X)

The goal is to approximate the mapping function so well that when you have new input data (X), you can predict the output variables (Y) for that data.
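A minimal sketch of the idea (not from the card): learn f from (X, Y) pairs, here assuming f is a one-parameter linear map fitted by least squares, then predict on new input.

```python
# Minimal supervised learning sketch: learn y = f(x) from (x, y) pairs.
# Here f is assumed linear through the origin, y = w * x, and w is the
# closed-form least-squares solution w = sum(x*y) / sum(x*x).

def fit(xs, ys):
    """Estimate the slope w from training pairs."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def predict(w, x):
    """Apply the learned mapping to new input data."""
    return w * x

# Training data generated by the true mapping y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = fit(xs, ys)
print(predict(w, 5.0))  # new input -> predicted output 10.0
```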

2
Q

3 examples of supervised learning

A
  1. Linear regression for regression problems
  2. Random forest for classification and regression problems
  3. Support vector machines for classification problems
3
Q

What is unsupervised learning?

A

Unsupervised learning is where you only have input data (X) and no corresponding output variables. The goal is to model the underlying structure or distribution in the data in order to learn more about it.

4
Q

2 examples of unsupervised learning

A
  1. k-means for clustering problems
  2. Apriori algorithm for association rule learning problems
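A toy sketch of the first example, k-means clustering (illustrative only; real use would call a library such as scikit-learn's KMeans). It runs Lloyd's algorithm on 1-D data with two clusters:

```python
import random

# Minimal k-means sketch for 1-D data: alternate between assigning each
# point to its nearest centroid and moving each centroid to its cluster
# mean. All data and parameters here are made up for illustration.

def kmeans(points, k=2, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[i].append(p)
        # Update step: move each centroid to its cluster mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]
print(kmeans(data))  # two centroids, one near 1.0 and one near 9.0
```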

5
Q

What 3 ideas are competitive learning built off of?

A

I. Hebbian learning principle: when pre-synaptic and post-synaptic units are co-active, the connection between them should increase.

II. Competition between different units for activation, through lateral inhibition / winner-take-all activation rule

III. Competition between incoming weights of a unit, to prevent all weights from saturating, by normalizing the weights to have fixed net size: if some incoming weights to a unit grow, the others will shrink.
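A hedged sketch of how the three ideas combine in a single update step (the function name, sizes, and learning rate are all illustrative assumptions, not from the card):

```python
# One competitive-learning update combining the three ideas above
# (illustrative sketch; learning rate and sizes are arbitrary).

def update(weights, x, lr=0.5):
    # II. Competition: winner-take-all -- the unit whose weights best
    # match the input (largest dot product) is the only active one.
    winner = max(range(len(weights)),
                 key=lambda i: sum(w * xi for w, xi in zip(weights[i], x)))
    # I. Hebbian principle: strengthen the winner's weights in proportion
    # to presynaptic activity x (the winner's postsynaptic activity is 1).
    w = [wi + lr * xi for wi, xi in zip(weights[winner], x)]
    # III. Weight competition: normalise so the winner's total incoming
    # weight stays fixed -- if some weights grow, the others must shrink.
    total = sum(w)
    weights[winner] = [wi / total for wi in w]
    return winner

weights = [[0.5, 0.5], [0.9, 0.1]]
win = update(weights, [1.0, 0.0])
print(win, weights[win])  # unit 1 wins and moves further toward the input
```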

6
Q

What does competitive learning involve?

A

Unsupervised training in which the output nodes compete with each other to represent the input pattern. There are also strong fixed negative (inhibitory) connections between them, so through lateral inhibition there is only one winner neurone.

7
Q

How would you build a competitive learning algorithm?

A
  1. Beginning with random initial connection weights, we present a series of examples of input patterns of activity to the network. This gives different patterns of firing across the input neurones.
  2. Their activity then spreads through the connection weights to the output neurones, which also inhibit each other; the relationship between input and output is changed with the Hebbian learning rule.
  3. We change the weights many, many times, and we hope the network ends up with a pattern of connection weights that makes it useful.
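The steps above might be sketched as the following toy network (sizes, learning rate, and input patterns are made-up assumptions; the weight normalisation that later cards discuss is included so the weights stay bounded):

```python
import random

# Toy competitive-learning network: random initial weights, repeated
# presentation of input patterns, winner-take-all plus Hebbian updates.

random.seed(0)
n_in, n_out, lr = 4, 2, 0.3

# 1. Random initial connection weights, normalised to unit length.
weights = [[random.random() for _ in range(n_in)] for _ in range(n_out)]
for w in weights:
    norm = sum(v * v for v in w) ** 0.5
    w[:] = [v / norm for v in w]

patterns = [[1, 1, 0, 0], [0, 0, 1, 1]]  # two clusters of input activity

# 2-3. Present patterns many times; lateral inhibition leaves one winner,
# whose weights move toward the current input (Hebb + normalisation).
for _ in range(100):
    x = random.choice(patterns)
    winner = max(range(n_out),
                 key=lambda i: sum(w * xi for w, xi in zip(weights[i], x)))
    w = [wi + lr * xi for wi, xi in zip(weights[winner], x)]
    norm = sum(v * v for v in w) ** 0.5
    weights[winner] = [v / norm for v in w]

# After training, different output neurones respond to different patterns.
winners = [max(range(n_out),
               key=lambda i: sum(w * xi for w, xi in zip(weights[i], x)))
           for x in patterns]
print(winners)
```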
8
Q

Why is there a winning neuron in competitive learning?

A

Different output neurones have different connection strengths. The effect of the inhibitory connections between them is simply that the output neurone with the biggest input is the winner. Only the connection strengths going to the winning neurone change, as only the winning neurone has a non-zero post-synaptic firing rate.

9
Q

What is the connection strength of the winning neuron increased by?

A

The connection strength to the winning neurone is increased by an amount proportional to the presynaptic firing rate.
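Written as an equation (the symbols are mine, not the card's: η is a learning rate, x_j the presynaptic firing rate of input j, and i* the winning output neurone):

```latex
\Delta w_{i^* j} = \eta \, x_j, \qquad \Delta w_{ij} = 0 \quad \text{for } i \neq i^*
```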

10
Q

What is the problem with one winning neuron?

A

If one of the output neurones starts to win, its connection strength increases and is more likely to win the next time as well. This is unsustainable because it won’t lead to any variable behaviour — you will end up with a network where the same output neurone will be constitutively active.

11
Q

What is normalisation?

A

The weight vector's length is kept the same. Every neurone then has the same total amount of connection strength, but the pattern of connection strengths differs between neurones. In other words, the overall total connection strength to a neurone doesn't change even when the neurone fires and learns.
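A small sketch of the idea, assuming we normalise each neurone's incoming weight vector to unit Euclidean length after every Hebbian update:

```python
# Normalisation sketch: after a Hebbian increase, rescale the weight
# vector so its length is unchanged -- growth in some weights forces
# the others to shrink.

def normalise(w):
    """Rescale w to unit Euclidean length (vector length kept the same)."""
    norm = sum(v * v for v in w) ** 0.5
    return [v / norm for v in w]

w = normalise([0.6, 0.8])                     # length 1.0
w = [wi + d for wi, d in zip(w, [0.5, 0.0])]  # Hebbian growth on weight 0
w = normalise(w)                              # back to length 1.0

print(w)  # the first weight grew, so the second shrank
```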

12
Q

How does competitive learning relate to self-organising maps in the brain (topographical maps)?

A

Like the competitive learning algorithm, in a SOM there is a connection between all the inputs and all the outputs. Instead of simple lateral inhibition, Willshaw and von der Malsburg (1976) suggested that the connection weights between the output neurones follow a Mexican-hat relation: neurones that are nearby excite each other over short-range connections, while neurones far apart inhibit each other.
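One common way to model a Mexican-hat profile (an illustrative choice, not necessarily the card's) is a difference of two Gaussians: narrow short-range excitation minus broad longer-range inhibition.

```python
import math

# Mexican-hat lateral connection profile as a difference of Gaussians.
# The widths and the inhibition scale are arbitrary illustrative values.

def mexican_hat(d, sigma_e=1.0, sigma_i=3.0):
    """Lateral weight between two output neurones at distance d."""
    excite = math.exp(-d * d / (2 * sigma_e ** 2))
    inhibit = 0.5 * math.exp(-d * d / (2 * sigma_i ** 2))
    return excite - inhibit

print(mexican_hat(0))   # positive: nearby neurones excite each other
print(mexican_hat(5))   # negative: distant neurones inhibit each other
```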

13
Q

How is competitive learning different to self-organising maps in the brain (topographical maps)?

A

They work in a similar way, except that you don't have only one winning neurone; you tend to have a patch of local neurones that support each other's firing and inhibit neurones over long-range connections.

14
Q

Why can topographical maps be considered SOM’s?

A

They are similar to the brain because some areas of the brain develop structures with distinct sub-areas, each with a high sensitivity for a specific input pattern.

15
Q

What function describes the relation between neurons in SOM’s?

A

The neighbourhood function F, which describes how close together neurones are in the sheet. F is 1 for the cell itself and slowly decreases as a function of how far apart two neurones are, progressively going to 0. This means that nearby neurones respond to similar kinds of inputs.
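A Gaussian is one common concrete choice for F (the width σ here is an assumption for illustration):

```python
import math

# Gaussian neighbourhood function: 1 at the winning cell, decaying
# towards 0 with distance in the sheet (sigma is illustrative).

def neighbourhood(dist, sigma=2.0):
    return math.exp(-dist ** 2 / (2 * sigma ** 2))

print(neighbourhood(0))   # 1.0 for the winner itself
print(neighbourhood(10))  # close to 0 for neurones far away
```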

16
Q

How would you build a topographical map?

A
  1. At the start of learning, all output neurones have connection strengths with small random values (a).
  2. As learning happens, each neurone changes its connection strengths so as to spread out in the space of input firing rates; each output neurone is likely to be the winner for a different portion of the input space (d).
  3. This differs from the competitive learning algorithm: all output neurones get to be the winner, because if a neurone happens to be near an output neurone that wins, its connection strengths will change so that it eventually gets to win too.
  4. If our neighbourhood function is big enough, eventually a neurone will have connection strengths similar to its neighbour's.
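The steps above might be sketched as a toy 1-D Kohonen map (sizes, rates, and the Gaussian neighbourhood width are all illustrative assumptions):

```python
import math
import random

# Toy 1-D Kohonen SOM: a line of output neurones self-organises so that
# nearby neurones win for similar inputs.

random.seed(1)
n_out = 10
# 1. Small random initial connection strengths.
w = [random.uniform(0.4, 0.6) for _ in range(n_out)]

def neighbourhood(i, j, sigma=1.5):
    """Gaussian neighbourhood function over positions in the sheet."""
    return math.exp(-((i - j) ** 2) / (2 * sigma ** 2))

# 2-3. Present inputs; the winner AND its neighbours move toward the
# input, so every neurone eventually wins for some part of input space.
for _ in range(2000):
    x = random.random()
    winner = min(range(n_out), key=lambda i: abs(x - w[i]))
    for i in range(n_out):
        w[i] += 0.1 * neighbourhood(i, winner) * (x - w[i])

# 4. Neighbouring neurones end up with similar connection strengths,
# spread across the input space.
print([round(v, 2) for v in w])
```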
17
Q

Why are KSOM’s useful?

A

In practice, these networks are used to visualise highly dimensional data. They capture the similarities between different kinds of inputs and the output activity responding to them. The most interesting aspect is that the complete final map contains the attractors for all patterns, organised to maximise the similarity between adjacent units. In this way, when a new pattern is presented, the area of neurons that maps the most similar shapes shows a higher response.

18
Q

Why are there different topographical maps?

A

Each input neurone represents some aspect of an animal. The network tries to find the best average similarity between different types of input.

19
Q

How are Kohonen SOM’s represented?

A

as a bi-dimensional map (for example, a square matrix m × m or any other rectangular shape)

20
Q

What are KSOM’s?

A

A map where each cell is a receptive neuron characterised by a synaptic weight w with the same dimensionality as the input patterns.

21
Q

Why are KSOM’s different to SOM’s?

A

Instead of the Mexican-hat profile of connections between neurones in Willshaw and von der Malsburg's model (a given neurone can release only one neurotransmitter, so it cannot both excite its near neighbours and inhibit distant ones), Kohonen changed the Hebbian learning rule to get the same topographical feature mapping, in which nearby neurones respond to similar kinds of inputs. He introduced a neighbourhood function F that describes how close together neurones are in the sheet: F is 1 for a cell itself and slowly decreases with distance, progressively going to 0. So if neurones i and k are close together, F is near 1; if they are far apart, it tends to 0.

22
Q

What is the solution for a single active neuron in competitive learning?

A

Normalisation