Lecture 14 - Neural Networks Part 2 Flashcards
How is learning achieved in an MP neuron?
For each example in a training set:
After firing, adjust the weight of each input to try to get the desired output.
How can the threshold of an MP neuron be modified on-the-fly?
Set the threshold to 0 and add an adaptive input that acts as the threshold instead
The value of this threshold-input can be changed at the same time as the weights
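The threshold-as-adaptive-input trick can be sketched as follows; the function name and the AND example are illustrative, not from the lecture:

```python
# Sketch of the threshold-as-adaptive-input trick (illustrative example).
# Instead of comparing the weighted sum to a fixed threshold T, fix an extra
# input x0 = -1 whose weight w0 plays the role of T, and compare the sum to 0.
# w0 can then be adjusted at the same time as the other weights.

def mp_neuron(weights, inputs):
    """Fire (return 1) if the weighted sum, including the bias input, is >= 0."""
    bias_inputs = [-1.0] + list(inputs)   # x0 = -1 acts as the movable threshold
    total = sum(w * x for w, x in zip(weights, bias_inputs))
    return 1 if total >= 0 else 0

# weights[0] is the adaptive threshold; here it encodes "fire if x1 + x2 >= 1.5",
# i.e. logical AND.
and_weights = [1.5, 1.0, 1.0]
print(mp_neuron(and_weights, [1, 1]))  # 1
print(mp_neuron(and_weights, [1, 0]))  # 0
```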
What is the general formula for learning in MP Neurons?
New weight = old weight + change in weight
What is the Hebb rule?
Learning can take the form of strengthening the connections between any pair of neurons that are simultaneously active
What is the formula for the Hebb rule, as applied to MP Neurons?
wi(t+1) = wi(t) + a·xi·z
where a is a constant that determines the rate of learning (the learning rate)
z is the desired output
xi is the current value of input i
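The Hebb rule update above can be sketched directly; the function name is illustrative:

```python
# Minimal sketch of the Hebb rule for an MP neuron: each weight grows in
# proportion to its input and the desired output, wi(t+1) = wi(t) + a*xi*z.

def hebb_update(weights, inputs, z, a=0.1):
    """Apply one Hebbian step; a is the learning rate, z the desired output."""
    return [w + a * x * z for w, x in zip(weights, inputs)]

w = [0.0, 0.0]
w = hebb_update(w, inputs=[1, 0], z=1)   # only the active input is strengthened
print(w)  # [0.1, 0.0]
```

Note that the update fires regardless of what the unit currently outputs; that is what the perceptron rule changes.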
What is the perceptron rule?
Adjust weights like the Hebb rule, but only if the current weights would give the wrong answer
i.e. Hebb rule with error correction
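The error-gated update can be sketched as below; names are illustrative, and the threshold is folded in as weight w[0] on a constant -1 input, as in the earlier card:

```python
# Sketch of the perceptron rule: apply the Hebb-style update only when the
# unit's current output is wrong (Hebb rule with error correction).

def predict(weights, inputs):
    total = sum(w * x for w, x in zip(weights, [-1] + list(inputs)))
    return 1 if total >= 0 else 0

def perceptron_update(weights, inputs, z, a=1):
    y = predict(weights, inputs)
    if y == z:
        return weights                      # already correct: no change
    return [w + a * x * (z - y)             # Hebbian step, gated by the error
            for w, x in zip(weights, [-1] + list(inputs))]

# AND is linearly separable, so repeated passes converge
# (a = 1 keeps the arithmetic in integers for this sketch).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w = [0, 0, 0]
for _ in range(50):
    for x, z in data:
        w = perceptron_update(w, x, z)
print([predict(w, x) for x, _ in data])  # [0, 0, 0, 1]
```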
What is the advantage of the perceptron rule?
Provided the function determining z is linearly separable and a small a is chosen, the rule will converge on a set of weights that gives the correct output for every training example
What is the drawback of the perceptron rule?
No convergence will occur if the data is not linearly separable
What is the Delta Rule also known as?
Widrow-Hoff rule
What is the benefit of using the Delta rule over the perceptron rule?
It converges even if the training set is not linearly separable
What is the formula of the delta rule?
wi(t+1) = wi(t) + a·xi·(z − y)
where y is the actual output of the unit
Suppose z = f(x1, …, xn)
The weights of the MP neuron will adapt to values such that the unit provides a linear approximation to _______
the function f
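A minimal sketch of the delta rule driving a linear unit toward such an approximation; the target function f(x1, x2) = 2·x1 − x2 and all names are illustrative, not from the lecture:

```python
# Sketch of the delta (Widrow-Hoff) rule: wi(t+1) = wi(t) + a*xi*(z - y),
# where y is the unit's actual (here linear) output.
import random

def delta_update(weights, inputs, z, a=0.05):
    y = sum(w * x for w, x in zip(weights, inputs))   # actual output
    return [w + a * x * (z - y) for w, x in zip(weights, inputs)]

# Learn the illustrative target f(x1, x2) = 2*x1 - x2 from random examples;
# the weights drift toward [2.0, -1.0], the exact linear approximation.
random.seed(0)
w = [0.0, 0.0]
for _ in range(2000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    z = 2 * x[0] - x[1]
    w = delta_update(w, x, z)
```

Because the error term (z − y) shrinks as the approximation improves, the updates taper off instead of oscillating, which is why convergence does not require linear separability.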
For function approximation, it can be shown that the delta rule is an ______ strategy for changing the weights
Optimal
What is a feed forward neural network?
Signals flow in one direction through layers of neurons
When is a neural network “deep”?
When there is more than one hidden layer
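The two cards above can be sketched in one small example; layer sizes, weights, and names here are illustrative:

```python
# Minimal sketch of a feed-forward network: signals pass through the layers in
# one direction only. With two hidden layers, this counts as "deep" by the
# definition above.
import math

def layer(weights, inputs):
    """One fully connected layer of sigmoid units."""
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

def feed_forward(layers, inputs):
    signal = inputs
    for weights in layers:          # strictly forward: no connections loop back
        signal = layer(weights, signal)
    return signal

# 2 inputs -> hidden layer of 3 -> hidden layer of 2 -> 1 output unit.
net = [
    [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]],   # hidden layer 1 (3 units)
    [[0.2, -0.5, 0.7], [0.6, 0.1, -0.4]],     # hidden layer 2 (2 units)
    [[1.0, -1.0]],                            # output layer (1 unit)
]
out = feed_forward(net, [1.0, 0.0])
```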