Artificial Neural Networks Flashcards
What does a perceptron model?
A brain neuron
How are perceptrons like neurons?
- They transmit info to other perceptrons
- They multiply each input by some predetermined weight
- They apply some function to the set of inputs at each node
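The three points above can be sketched as a single perceptron: inputs are multiplied by weights, summed with a bias, and passed through a step function. The weights here are hand-picked for illustration, not learned.

```python
# A minimal perceptron sketch: multiply each input by a weight,
# sum the products with a bias, and apply a step activation.
def perceptron(inputs, weights, bias):
    # Weighted sum of the inputs, as described in the card above.
    total = sum(x * w for x, w in zip(inputs, weights))
    # Step activation: "fire" (output 1) only if the sum clears the threshold.
    return 1 if total + bias > 0 else 0

# Example: hand-chosen weights that make this perceptron compute logical AND.
print(perceptron([1, 1], [0.5, 0.5], -0.7))  # both inputs on -> 1
print(perceptron([1, 0], [0.5, 0.5], -0.7))  # one input on -> 0
```

In practice the weights and bias would be learned rather than chosen by hand.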
Why are ANNs fault and noise tolerant?
Because computation is distributed across many nodes running in parallel, so no single node is critical.
What are recurrent neural networks good at?
Sequential data, where patterns recur over a time series, such as music prediction or speech recognition
What do ANNs learn?
They learn to recognise patterns in the data.
Why is the complexity of a network important?
It must be sufficiently complex to learn all the patterns in the data, but not so complex that it takes too long to learn
What is necessary to use an activation function?
The result of the function applied by the perceptron must be normalised
Why do we add a bias to the activation function?
It acts as a threshold for the activation function: the weighted sum must exceed this threshold before the unit fires.
What is back propagation?
It is the most common learning rule for training ANNs
How does back propagation work?
It propagates the error backwards through the network, layer by layer, using it to calculate how much the weight of each input should change
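A sketch of the core weight-update step for a single sigmoid unit, assuming gradient descent on squared error (the learning rate of 0.5 is an arbitrary illustrative choice):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def update_weights(inputs, weights, target, lr=0.5):
    # Forward pass: weighted sum, then sigmoid activation.
    output = sigmoid(sum(x * w for x, w in zip(inputs, weights)))
    # Error term for a sigmoid unit: (target - output) * sigmoid'(sum).
    delta = (target - output) * output * (1 - output)
    # Each weight changes in proportion to its own input and the error.
    return [w + lr * delta * x for w, x in zip(weights, inputs)]

weights = [0.2, -0.4]
new_weights = update_weights([1.0, 1.0], weights, target=1.0)
```

In a multi-layer network the same idea applies, with each layer's error computed from the errors of the layer after it.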
What do non-linearities allow a network to do?
Identify complex regions within the search space.
The more layers a network has, the more 'lines' (decision boundaries) it can draw to carve up the input space
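A classic illustration of the point above: XOR cannot be separated by one line, but a two-layer network of step units can draw two lines and combine them. The weights below are hand-picked for illustration.

```python
def step_unit(inputs, weights, bias):
    # A single perceptron with a step activation.
    return 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0

def xor(a, b):
    # Hidden layer: each unit draws one 'line' in the input plane.
    h_or = step_unit([a, b], [1.0, 1.0], -0.5)     # fires unless both inputs are 0
    h_nand = step_unit([a, b], [-1.0, -1.0], 1.5)  # fires unless both inputs are 1
    # Output layer: AND of the two lines picks out the XOR region.
    return step_unit([h_or, h_nand], [1.0, 1.0], -1.5)

print(xor(0, 1))  # 1
print(xor(1, 1))  # 0
```

A single-layer perceptron cannot compute this function at all, which is why depth matters.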
What needs to happen to the output of each perceptron?
It needs to be normalised to between 0 and 1
Why do outputs of perceptrons need to be normalised?
To prevent value growth through a network
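One common choice of normalising activation, assuming the sigmoid function, shows why values cannot blow up as they pass through successive layers:

```python
import math

def sigmoid(z):
    # Squashes any real-valued weighted sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))     # 0.5
print(sigmoid(100))   # very close to 1
print(sigmoid(-100))  # very close to 0
```

However large the weighted sum grows, the output stays bounded, so each layer receives inputs on the same scale.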
What are some termination conditions for back propagation?
- After a fixed number of iterations
- Once training error falls below some threshold
- Once validation error falls below some threshold
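The three conditions above can be sketched as the stopping logic of a training loop. Here `train_step` is a hypothetical placeholder that performs one pass and returns the current training and validation error; the thresholds are illustrative.

```python
# Sketch of back propagation's stopping conditions only.
def train(train_step, max_iters=1000, train_threshold=0.01, val_threshold=0.05):
    for i in range(max_iters):              # condition 1: fixed number of iterations
        train_err, val_err = train_step()
        if train_err < train_threshold:     # condition 2: training error low enough
            return i
        if val_err < val_threshold:         # condition 3: validation error low enough
            return i
    return max_iters
```

Stopping on validation error rather than training error helps avoid training for longer than the data justifies.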
Why does back propagation work?
- With small initial weights, the network represents an approximately linear function
- Early steps therefore move the weights closer to the global minimum of that simple function
- In later stages, as the weights grow, steps move towards a local minimum