Unsupervised Learning Flashcards
What does clustering do?
Reduces the number of examples by grouping similar ones together and representing each group with a “group prototype”.
What does feature extraction do?
Reduces the number of descriptors by keeping only the most informative ones or by creating more abstract features.
What happens in a competitive learning mechanism?
The input patterns are projected onto a pool of neurons that use lateral inhibitory connections and excitatory self-connections to implement competitive dynamics.
How is the activation of each hidden neuron computed in competitive learning?
With normalized input vectors, the activation of each hidden neuron is computed as the inner product between the neuron’s weights and the input pattern:
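For example, with unit-norm vectors this can be written as (notation chosen here for illustration, with $\mathbf{w}_j$ the weight vector of hidden neuron $j$ and $\mathbf{x}$ the input pattern):

$y_j = \mathbf{w}_j^{\top}\mathbf{x} = \sum_i w_{ji}\, x_i = \cos\theta_j$

so the neuron whose weight vector forms the smallest angle with the input has the highest activation and wins the competition.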
Explain how clustering methods rely on the computation of distances to establish similarities
By calculating the cosine distance between each data vector and all the weight vectors, and then reducing the angle between the input and the weight vector of the winner neuron.
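A minimal Python sketch of this winner-take-all step, assuming unit-norm inputs and a weight matrix whose rows are the neurons' weight vectors (the function name `competitive_step` and the learning rate `lr` are illustrative, not from the original):

```python
import numpy as np

def competitive_step(W, x, lr=0.1):
    """One winner-take-all update: find the closest prototype and pull it toward x.

    W  : (n_neurons, n_features) weight matrix, one row per neuron (rows kept unit-norm)
    x  : (n_features,) input pattern, assumed unit-norm
    lr : learning rate
    """
    activations = W @ x                     # inner products = cosine similarities
    winner = np.argmax(activations)         # the neuron with the smallest angle to x wins
    W[winner] += lr * (x - W[winner])       # move the winner's weights toward the input
    W[winner] /= np.linalg.norm(W[winner])  # re-normalize so the cosine interpretation holds
    return winner
```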
What happens in self-organizing maps (a.k.a. “Kohonen maps”)?
Imposing a topological structure on the competitive layer => each neuron forms “coalitions” with its neighbors, which allows the network to map the input space more accurately onto a lower-dimensional (2D) manifold.
This way, each neuron competes with distant neurons but cooperates with its close neighbors to represent each input pattern.
How can the neighbourhood function be described?
Each neuron has strong excitatory connections with its immediate neighbors, which become gradually weaker at longer ranges. Beyond some distance, the connections may become inhibitory.
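A minimal sketch of one SOM training step, assuming a 2D grid of units, Euclidean distance for the competition, and a Gaussian neighbourhood as a smooth stand-in for the excitatory/inhibitory profile described above (names such as `som_step` and the width `sigma` are illustrative assumptions):

```python
import numpy as np

def som_step(W, grid, x, lr=0.1, sigma=1.0):
    """One Kohonen/SOM update step.

    W    : (n_units, n_features) weight vectors, one per map unit
    grid : (n_units, 2) 2D coordinates of each unit on the map
    x    : (n_features,) input pattern
    """
    # 1. Competition: the best-matching unit has the closest weight vector to x
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))
    # 2. Cooperation: neighbourhood strength decays with grid distance from the BMU
    grid_dist = np.linalg.norm(grid - grid[bmu], axis=1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
    # 3. Adaptation: every unit moves toward x in proportion to its neighbourhood strength
    W += lr * h[:, None] * (x - W)
    return bmu
```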
How does a Hopfield Network work?
Hopfield networks are fully recurrent neural networks that can be used to store and retrieve data patterns
• The storage is performed by gradually changing the connection weights using a Hebbian-like learning rule
• The retrieval is performed in a dynamical way, by iteratively updating the state of the neurons until a stable state (attractor) is reached
–The dynamics can be described using an energy function that specifies which states of the network are more likely to occur (i.e., which configurations of the neurons’ activations are more probable)
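For reference, the standard Hopfield energy for bipolar states $s_i \in \{-1,+1\}$ and symmetric weights $w_{ij}$ (thresholds omitted) is

$E(\mathbf{s}) = -\tfrac{1}{2}\sum_{i \neq j} w_{ij}\, s_i\, s_j$

Asynchronous updates never increase this energy, so the dynamics settle into local minima, which act as the attractors reached during retrieval.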
What is the learning goal in Hopfield Networks?
The learning goal is to assign high probability to the configurations observed during training
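A minimal sketch of Hebbian storage and asynchronous retrieval, assuming bipolar {-1, +1} patterns (the function names, the averaging over patterns, and the fixed number of update steps are illustrative choices, not from the original):

```python
import numpy as np

def store(patterns):
    """Hebbian storage: W accumulates the outer products of the stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:          # each pattern is a vector of -1/+1 values
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)      # no self-connections
    return W / len(patterns)

def retrieve(W, s, n_steps=100, seed=0):
    """Asynchronous retrieval: update one randomly chosen neuron at a time."""
    rng = np.random.default_rng(seed)
    s = s.copy()
    for _ in range(n_steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1   # align with the sign of the local field
    return s
```

Starting from a corrupted version of a stored pattern, repeated updates drive the state toward the nearest attractor, ideally the stored pattern itself.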
What are the advantages and disadvantages of unsupervised learning?
Major advantages:
–Learning does not require any labeled data. This implies that the learner can exploit the huge amount of “raw” information contained in its environment
–Once an internal model of the environment has been learned (a.k.a. feature extraction or representation learning), it can be effectively reused to learn supervised tasks more easily
–From a psychological / cognitive standpoint, it seems quite plausible that children and animals massively exploit this learning modality during development
–Unsupervised learning can be implemented using biologically plausible learning rules and processing architectures
Major disadvantages:
–Computationally demanding
–Difficult to select a good representation
–We cannot infer causality