Week 10 Neural Networks (Variations on Neural Networks and Unsupervised Learning) Flashcards
For dynamic problems with a temporal or sequential component, such as stock market prediction or speech recognition, different approaches can be adopted:
- Shifting Time-Window Input
- Recurrent Neural Networks (RNNs)
Shifting Time-Window Input
This involves feeding the network a sequence of inputs over a certain time window, enabling it to capture temporal patterns and dependencies. The NN can then predict future outcomes from the recent-past information it has received.
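As a concrete illustration, here is a minimal sketch of building shifting time-window training examples from a univariate series; the toy sine series, the window size of 5, and the variable names are illustrative assumptions, not from the notes:

import numpy as np

# Toy univariate time series (an assumption for illustration).
series = np.sin(np.linspace(0, 10, 100))
window = 5  # size of the shifting time window

# Each training example pairs the last `window` values with the next value.
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

# X[k] holds the window ending at step k + window - 1; y[k] is the value
# to predict. X and y can now be fed to an ordinary feed-forward network.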
Recurrent Neural Networks
Neural networks designed to handle sequential data.
They have feedback connections (time delays) that allow information to persist across different time steps.
RNNs model temporal dependencies by using recurrent connections and context units to retain and use information from previous time steps in the network's computations.
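A minimal sketch of an Elman-style RNN forward pass, where the hidden state vector plays the role of the context units; the layer sizes, weight scales, and names here are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 5, 1                       # illustrative sizes
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden -> hidden (feedback)
W_hy = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output

def forward(xs):
    """Run the RNN over a sequence; h carries context between time steps."""
    h = np.zeros(n_hid)
    ys, hs = [], [h]
    for x in xs:
        # The new hidden state depends on the current input AND the
        # previous hidden state (the recurrent/feedback connection).
        h = np.tanh(W_xh @ x + W_hh @ h)
        hs.append(h)
        ys.append(W_hy @ h)
    return ys, hs

xs = [rng.normal(size=n_in) for _ in range(4)]     # toy 4-step sequence
ys, hs = forward(xs)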
Backpropagation Through Time (BPTT)
A learning algorithm used to train Recurrent Neural Networks (RNNs). It extends the backpropagation algorithm to the temporal dimension of sequential data by unrolling the network across time steps and propagating errors backwards through the unrolled copies (a sketch follows the issues list below).
BPTT algorithm issues:
May not converge and may develop chaotic behaviour
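A sketch of BPTT for the toy RNN above, assuming a squared-error loss: the sequence is processed in reverse, and at each step the local output error is combined with the gradient flowing back from later steps. The loss choice and all names are assumptions for illustration:

def bptt(xs, targets):
    """Gradients of 0.5 * sum_t ||y_t - target_t||^2 via unrolling in time."""
    ys, hs = forward(xs)
    dW_xh, dW_hh, dW_hy = (np.zeros_like(W) for W in (W_xh, W_hh, W_hy))
    dh_next = np.zeros(n_hid)              # gradient arriving from the future
    for t in reversed(range(len(xs))):
        dy = ys[t] - targets[t]            # output error at step t
        dW_hy += np.outer(dy, hs[t + 1])
        dh = W_hy.T @ dy + dh_next         # local + future contributions
        dz = dh * (1.0 - hs[t + 1] ** 2)   # back through the tanh
        dW_xh += np.outer(dz, xs[t])
        dW_hh += np.outer(dz, hs[t])       # hs[t] is the previous state
        dh_next = W_hh.T @ dz              # pass gradient one step back
    return dW_xh, dW_hh, dW_hy

targets = [rng.normal(size=n_out) for _ in range(len(xs))]
grads = bptt(xs, targets)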
Unsupervised Learning
Inputs are given BUT targets are not given.
Self-Organising Map (unsupervised learning)
Aim: to learn a mapping of points from a high-dimensional space to a low-dimensional (discrete) space (2D or 3D) in a way that preserves topological properties (spatial relations)
Self-organised: the map emerges from local interactions (competition & co-operation) among the map's nodes as they respond to the data points
Used for visualisation and for discovering regularities in data.
Self-Organising Map assumptions
– Input data that belong to the same class share some common features
– The SOM may be able to identify these key features across a number of data points
– The SOM will be able to organise/order the input data meaningfully according to a given 2D/3D structure
- Every input pattern is a point in a high-dimensional space
- Every input is made to correspond to a node in the output map via a competitive process among the nodes of the output space
- The winner is the node whose weight vector has the smallest (Euclidean) distance to the input pattern
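A minimal SOM training sketch showing the competitive step (the winner is the node with the smallest Euclidean distance to the input) and the co-operative step (the winner's grid neighbours are also pulled towards the input); the grid size and the learning-rate/neighbourhood decay schedules are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 10, 10, 3                  # 10x10 map, 3-D inputs
weights = rng.random((grid_h, grid_w, dim))      # one weight vector per node
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def train(data, epochs=20, lr0=0.5, sigma0=3.0):
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)          # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5
        for x in data:
            # Competition: winner = node whose weight vector has the
            # smallest Euclidean distance to the input pattern.
            d = np.linalg.norm(weights - x, axis=-1)
            win = np.unravel_index(np.argmin(d), d.shape)
            # Co-operation: neighbours move too, weighted by a Gaussian
            # of their grid distance to the winning node.
            gdist2 = ((coords - np.array(win)) ** 2).sum(axis=-1)
            h = np.exp(-gdist2 / (2 * sigma ** 2))[..., None]
            weights[:] += lr * h * (x - weights)

data = rng.random((200, dim))                    # toy high-dimensional inputs
train(data)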