Hebbian learning Flashcards
1
Q
Hebbian learning inspiration
A
This learning method is based on Hebb's rule: neurons that fire together, wire together. Hebbian learning can be used as an online PCA, adaptively tracking the direction of largest variance (the first principal component).
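A common way to write the rule for a single linear neuron, as a minimal sketch (the symbols x, w, y and the learning rate η are assumed here, not given on the card):

```latex
% Hebbian update for a single linear neuron (notation assumed, not from the card):
% input vector x, weight vector w, output y, learning rate \eta.
\[
  y = \mathbf{w}^{\top}\mathbf{x},
  \qquad
  \Delta\mathbf{w} = \eta\, y\, \mathbf{x}
                   = \eta\,(\mathbf{w}^{\top}\mathbf{x})\,\mathbf{x}.
\]
% Averaged over zero-mean data: <Delta w> = eta * C * w, where
% C = <x x^T> is the covariance matrix, so w is pulled towards the
% eigenvector with the largest eigenvalue (the first PC).
```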
2
Q
Problem with pure Hebbian Learning
A
- Averaged over the data, the pure Hebbian update gives ⟨Δw⟩ = η C w. Since the covariance matrix C is positive semi-definite, the weight vector grows without bound, so it must be normalised at every step (see the sketch below).
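A minimal numerical sketch of the problem, assuming zero-mean Gaussian toy data and the update Δw = η·y·x from the first card (all names and values here are illustrative):

```python
# Pure Hebbian updates on toy data: the weight norm grows without bound.
import numpy as np

rng = np.random.default_rng(0)
# Zero-mean 2-D data; variance 9 along the first axis, 1 along the second.
X = rng.normal(size=(2000, 2)) * np.array([3.0, 1.0])

w = rng.normal(size=2)
eta = 0.01

for t, x in enumerate(X, start=1):
    y = w @ x                 # neuron output y = w^T x
    w += eta * y * x          # pure Hebbian update, no normalisation
    if t % 500 == 0:
        print(t, np.linalg.norm(w))   # norm explodes (roughly exponential growth)
```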
3
Q
HL normalisation methods
A
- Explicit normalisation - divide the weight vector by its norm after every update
- Implicit normalisation - Oja's rule, which normalises the weight vector towards unit length by introducing a decay term (see the sketch below)
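A minimal sketch of Oja's rule on the same kind of toy data (setup assumed, not from the card): the decay term −η·y²·w keeps the weight vector near unit length, and w converges to the first principal component.

```python
# Oja's rule: Hebbian term plus an implicit weight-decay term.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 2)) * np.array([3.0, 1.0])   # largest variance along axis 0

w = rng.normal(size=2)
eta = 0.001

for x in X:
    y = w @ x
    w += eta * y * (x - y * w)    # = eta*y*x (Hebb)  -  eta*y^2*w (decay)

print(w, np.linalg.norm(w))       # w ~ (+/-1, 0), i.e. the first PC, with ||w|| ~ 1
```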
4
Q
Anti-Hebbian Rule
A
The Anti-Hebbian learning rule flips the sign of the Hebbian update, so the weight vector converges to the direction of smallest variance (the smallest PC). A novelty filter projects the data onto this direction: familiar data produce small outputs, while an unexpected/novel data point produces a large output.
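A minimal sketch of the idea, using a sign-flipped Hebbian update with explicit renormalisation (one common way to realise a novelty filter; the setup and values are assumed, not from the card):

```python
# Anti-Hebbian update with explicit normalisation: w converges to the
# smallest-variance direction, so |w^T x| acts as a novelty score.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 2)) * np.array([3.0, 1.0])   # "familiar" data: little variance along axis 1

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.005

for x in X:
    y = w @ x
    w -= eta * y * x              # anti-Hebbian: sign-flipped Hebbian update
    w /= np.linalg.norm(w)        # explicit normalisation keeps ||w|| = 1

print(w)                          # ~ (0, +/-1): the smallest PC

familiar = np.array([3.0, 0.3])   # typical point along the high-variance axis
novel = np.array([0.5, 3.0])      # atypical point along the low-variance axis
print(abs(w @ familiar), abs(w @ novel))   # novelty score is much larger for the novel point
```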