Week 9 - Neural Networks Flashcards
What are the two types of learning a neural network can do?
Supervised and unsupervised
What is supervised learning?
The output of the neural network is compared against the correct (desired) output, and the network adjusts its weights based on the error. Overall, the data is labelled
What is unsupervised learning?
The data is not labelled. The network organises itself according to patterns in the data and no external “desired output” is provided
What does a perceptron consist of?
It consists of a set of weighted connections, the neuron (incorporating the activation function) and the output axon
How does a perceptron learn?
Initialise weights & threshold
Present the input and desired output
Calculate the actual output of the network:
For each input:
* Multiply the input data (xi) by its weight (wi).
* Sum the weighted inputs and pass through the activation function
Adapt the weights:
* If the output is correct: wi(t+1) = wi(t)
* If the output is 0 but should be 1: wi(t+1) = wi(t) + xi(t)
* If the output is 1 but should be 0: wi(t+1) = wi(t) - xi(t)
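The learning procedure above can be sketched in Python (a minimal illustration, not code from the lecture; the "bias trick" folds the threshold into an extra weight w[0], and the AND data set is an assumed example):

```python
import random

def predict(w, x):
    # Bias trick: the threshold becomes weight w[0] on a constant input of 1
    total = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1 if total > 0 else 0

def train_perceptron(patterns, epochs=50):
    random.seed(0)  # deterministic for the example
    n = len(patterns[0][0])
    # Initialise weights (including the bias/threshold) to small random values
    w = [random.uniform(-0.1, 0.1) for _ in range(n + 1)]
    for _ in range(epochs):
        for x, target in patterns:
            y = predict(w, x)
            if y == target:                   # correct: w(t+1) = w(t)
                continue
            xs = (1,) + tuple(x)              # constant 1 input for the bias
            if target == 1:                   # output 0, should be 1: add x
                w = [wi + xi for wi, xi in zip(w, xs)]
            else:                             # output 1, should be 0: subtract x
                w = [wi - xi for wi, xi in zip(w, xs)]
    return w

# Logical AND is linearly separable, so the rule converges
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(and_data)
```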
What can be added to the weight update function to slow learning?
A learning rate η, a decimal term between 0.0 and 1.0 that the weight change is multiplied by
What does the Widrow-Hoff Learning Rule give us?
Weight updates proportional to the error made, where the error Δ = desired output – actual output
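The Widrow-Hoff (delta) rule can be written as a one-line update (a hedged sketch; the function name and η value are assumptions, not from the deck):

```python
def widrow_hoff_update(w, x, desired, actual, eta=0.1):
    # Delta rule: each weight changes in proportion to the error Δ,
    # scaled by the learning rate η and by its own input signal
    delta = desired - actual          # Δ = desired output - actual output
    return [wi + eta * delta * xi for wi, xi in zip(w, x)]
```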
Name a limitation of the perceptron.
Only linearly separable problems can be solved
What does linearly separable mean?
A straight line can be drawn that separates the two classes
What are the three layers of an MLP (Multi-Layer Perceptron)?
Input, hidden, output
What are weights in terms of neural networks?
Variable strength connections between units that propagate signals from one unit to the next. They are the main component changed during learning
Describe the feedforward learning algorithm.
Initialise weights and thresholds to small random values.
Present input and desired output.
Calculate actual output by:
- Multiplying each incoming signal by its weight
- Passing the sum through the sigmoid activation function
- Passing the output on to units in the next layer
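The forward pass can be sketched like this (a minimal illustration assuming sigmoid units; the weight-matrix layout, with a bias weight last in each row, is an assumption):

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def forward(layers, x):
    # layers: list of weight matrices; layers[l][j] holds the weights
    # into unit j of the next layer (bias weight last in each row)
    activations = list(x)
    for weights in layers:
        # Each unit: multiply incoming signals by weights, sum,
        # pass through the sigmoid, and pass on to the next layer
        activations = [
            sigmoid(sum(wi * ai for wi, ai in zip(wj, activations + [1.0])))
            for wj in weights
        ]
    return activations
```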
Describe the backpropagation learning algorithm.
Adapt the weights.
Start from the output layer and work backwards:
- New weight: wij(t+1) = wij(t) + η × δpj × opi, i.e. the old weight plus (learning rate × error for pattern p on node j × output signal for pattern p from the sending node i)
Compute the error δpj as follows:
- For output units: δpj = sigmoid derivative × (target output − actual output) = opj(1 − opj)(tpj − opj)
- For hidden units: δpj = sigmoid derivative × (weighted error of the k units in the layer above) = opj(1 − opj) Σk δpk wkj
What are the two types of weight updating?
Batch and online
Describe batch weight updating.
All patterns are presented, errors are calculated, then the weights are updated
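The difference in update scheduling can be sketched with a simple linear unit trained by the delta rule (an assumed minimal example to show when the weights change, not the full backprop update):

```python
def batch_epoch(w, patterns, eta=0.1):
    # Batch: accumulate the weight changes over ALL patterns first...
    dw = [0.0] * len(w)
    for x, desired in patterns:
        actual = sum(wi * xi for wi, xi in zip(w, x))
        delta = desired - actual
        dw = [dwi + eta * delta * xi for dwi, xi in zip(dw, x)]
    # ...then apply them once at the end of the epoch
    return [wi + dwi for wi, dwi in zip(w, dw)]

def online_epoch(w, patterns, eta=0.1):
    # Online: the weights change immediately after every pattern
    for x, desired in patterns:
        actual = sum(wi * xi for wi, xi in zip(w, x))
        delta = desired - actual
        w = [wi + eta * delta * xi for wi, xi in zip(w, x)]
    return w
```

Presenting the same pattern twice shows the difference: online updating sees a smaller error on the second presentation because the weights have already moved.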