Lecture 5: Artificial Neural Networks Flashcards
What is the behaviour of a neuron?
- A neuron receives inputs from its neighbors
- If enough inputs are received at the same time:
  - the neuron is activated
  - and fires an output to its neighbors
- Repeated firings across a synapse increase its sensitivity and the future likelihood of its firing
What is a perceptron?
A SINGLE computational neuron
What are the inputs of a perceptron?
Input signals xi, with a weight wi for each feature xi (the strength of the connection)
What is the output of the perceptron?
Output:
if the weighted sum of the inputs >= some threshold, the neuron fires (output = 1)
otherwise output = 0
If (w1 x1 + … + wn xn) >= threshold
Then output = 1
Else output = 0
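The decision rule above can be sketched in a few lines. This is a minimal illustration, not the lecture's code; the AND weights and the 1.5 threshold are example values chosen for the demo.

```python
# Minimal sketch of the perceptron decision rule:
# fire (output 1) if the weighted sum of inputs reaches the threshold.

def perceptron_output(x, w, threshold):
    """Return 1 if w1*x1 + ... + wn*xn >= threshold, else 0."""
    weighted_sum = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if weighted_sum >= threshold else 0

# Example: a 2-input perceptron computing logical AND,
# with illustrative weights (1, 1) and threshold 1.5.
print(perceptron_output([1, 1], [1, 1], 1.5))  # both inputs active -> 1
print(perceptron_output([1, 0], [1, 1], 1.5))  # only one active -> 0
```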
How is a perceptron trained?
- Step 1: Set weights to random values
- Step 2: Feed perceptron with a set of inputs
- Step 3: Compute the network outputs
- Step 4: Adjust the weights
- if output correct → weights stay the same
- if output = 0 but it should be 1 →
- increase weights on active connections (i.e. input xi=1)
- if output = 1 but should be 0 →
- decrease weights on active connections (i.e. input xi=1)
- Step 5: Repeat steps 2 to 4 a large number of times until the network
converges to the right results for the given training examples
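Steps 1 to 5 can be sketched as a training loop. Everything here beyond the update rule itself is an illustrative choice (the AND training set, the fixed threshold of 0.5, the learning rate, the epoch count, the random seed):

```python
import random

# Sketch of the perceptron training procedure, applied to logical AND.
# Threshold, learning rate, and epoch count are illustrative values.

def train_perceptron(examples, n_inputs, threshold=0.5, lr=0.1, epochs=50):
    random.seed(0)                      # Step 1: random initial weights
    w = [random.random() for _ in range(n_inputs)]
    for _ in range(epochs):             # Step 5: repeat many times
        for x, target in examples:      # Step 2: feed in a set of inputs
            # Step 3: compute the output
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= threshold else 0
            # Step 4: adjust weights on active connections (xi = 1)
            if out == 0 and target == 1:
                w = [wi + lr * xi for wi, xi in zip(w, x)]
            elif out == 1 and target == 0:
                w = [wi - lr * xi for wi, xi in zip(w, x)]
            # if the output is correct, the weights stay the same
    return w

and_examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w = train_perceptron(and_examples, n_inputs=2)
for x, target in and_examples:
    out = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0.5 else 0
    print(x, out == target)  # converges on the training examples
```

Because AND is linearly separable, the loop settles on weights that classify all four examples correctly.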
What is a bias?
- Useful to avoid having to figure out the threshold, by using a “bias”
- A bias is equivalent to a weight on an extra input feature that always has the value 1
- It is added to the weighted sum, so the firing test becomes (w1 x1 + … + wn xn + b) >= 0
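A small sketch of this equivalence, using the same illustrative AND weights as above: setting the bias to minus the threshold and comparing against 0 gives exactly the same outputs as the threshold formulation.

```python
# Sketch: a bias as a weight on an extra, always-on input.
# With b = -threshold, "sum >= threshold" becomes "sum + b >= 0".

def output_with_threshold(x, w, threshold):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= threshold else 0

def output_with_bias(x, w):
    # The last weight is the bias; its input feature is always 1.
    return 1 if sum(wi * xi for wi, xi in zip(w, x + [1])) >= 0 else 0

# Same AND perceptron in both formulations (illustrative weights):
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    assert output_with_threshold(x, [1, 1], 1.5) == output_with_bias(x, [1, 1, -1.5])
```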
Limit of perceptrons early on
Only linearly separable functions can be represented by a perceptron
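One way to see this limitation concretely: a brute-force search over a small (arbitrarily chosen) grid of weights and thresholds finds a linear separator for AND, but none for XOR, which is the classic non-linearly-separable example.

```python
# Illustrative check: AND has a linear separator, XOR does not.

def find_separator(targets):
    """Search for (w1, w2, threshold) so that a perceptron matches `targets`
    on inputs (0,0), (0,1), (1,0), (1,1). The grid values are arbitrary."""
    grid = [v / 2 for v in range(-4, 5)]  # -2.0, -1.5, ..., 2.0
    inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    for w1 in grid:
        for w2 in grid:
            for threshold in grid:
                outs = [1 if w1 * x1 + w2 * x2 >= threshold else 0
                        for x1, x2 in inputs]
                if outs == targets:
                    return (w1, w2, threshold)
    return None

print(find_separator([0, 0, 0, 1]))  # AND: a separator is found
print(find_separator([0, 1, 1, 0]))  # XOR: None, no separator exists
```

The grid search is only a demonstration, but the XOR result holds in general: no single line can put (0,1) and (1,0) on one side and (0,0) and (1,1) on the other.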
What are Multilayer Neural networks?
- to learn more complex functions (more complex decision boundaries), have hidden nodes
- and for non-binary decisions, have multiple output nodes
- use a non-linear activation function
What is “feed-forward”?
Feed-forward:
Input from the features is fed forward in the network
from input layer towards the output layer
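A sketch of one feed-forward pass through a 2-2-1 network, using the sigmoid as the non-linear activation. The hand-picked weights (chosen so this tiny network computes XOR, the function a single perceptron cannot represent) are illustrative, not from the lecture.

```python
import math

# Feed-forward: inputs flow from the input layer, through a hidden layer,
# to the output layer. Weights are hand-picked so the network computes XOR.

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights, biases):
    """One layer: each node takes a weighted sum of its inputs plus a bias,
    then applies the non-linear activation."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def feed_forward(x):
    hidden = layer(x, weights=[[20, 20], [-20, -20]], biases=[-10, 30])
    output = layer(hidden, weights=[[20, 20]], biases=[-30])
    return output[0]

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, round(feed_forward(x)))  # XOR: 0, 1, 1, 0
```

The two hidden nodes act roughly as OR and NAND detectors, and the output node ANDs them together, which is exactly the kind of decision boundary a single perceptron cannot draw.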
What is backpropagation?
The error flows backwards from the output layer to the input layer (to adjust the weights in order to minimize the output error)
- Essentially, this is how the errors for the hidden layers are computed
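A sketch of a single backpropagation step for a tiny 2-2-1 sigmoid network with squared error. All weights, the input, the target, and the learning rate are illustrative values; the point is that the output error is propagated backwards through the output weights to obtain the hidden-layer errors.

```python
import math

# One backpropagation step for a 2-2-1 network (no biases, to stay short).

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def backprop_step(x, target, w_hidden, w_out, lr=0.5):
    # Forward pass
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hidden]
    y = sigmoid(sum(w * hi for w, hi in zip(w_out, h)))

    # Backward pass: error term at the output layer...
    delta_out = (y - target) * y * (1 - y)
    # ...flows back through w_out to give each hidden node's error term
    delta_hidden = [delta_out * w_out[j] * h[j] * (1 - h[j]) for j in range(len(h))]

    # Gradient-descent weight updates to reduce the output error
    new_w_out = [w_out[j] - lr * delta_out * h[j] for j in range(len(h))]
    new_w_hidden = [[w_hidden[j][i] - lr * delta_hidden[j] * x[i]
                     for i in range(len(x))] for j in range(len(w_hidden))]
    return new_w_hidden, new_w_out

def loss(x, target, w_hidden, w_out):
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hidden]
    y = sigmoid(sum(w * hi for w, hi in zip(w_out, h)))
    return 0.5 * (y - target) ** 2

w_hidden, w_out = [[0.1, 0.2], [0.3, 0.4]], [0.5, 0.6]
before = loss([1.0, 1.0], 0.0, w_hidden, w_out)
w_hidden, w_out = backprop_step([1.0, 1.0], 0.0, w_hidden, w_out)
after = loss([1.0, 1.0], 0.0, w_hidden, w_out)
print(after < before)  # one step reduces the output error -> True
```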