14-backpropagation Flashcards
1
Q
Why can’t we use the perceptron rule for neural networks?
A
The perceptron rule requires the true target output for each unit, but in a multilayer network we only know the targets at the output layer; the hidden units have no known targets. As a result we use backpropagation instead.
2
Q
What is backpropagation?
A
Backpropagation provides a way to update the weights in an MLP. It works by computing the error at the output layer and propagating it backward through the hidden-layer nodes via the chain rule, so each weight receives a gradient of the overall error.
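The idea on this card can be sketched in code. Below is a minimal, illustrative backpropagation loop for a one-hidden-layer sigmoid MLP trained on XOR; the architecture, learning rate, and squared-error loss are assumptions chosen for the example, not prescribed by the cards.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR data: targets are known only at the output layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Illustrative sizes: 2 inputs -> 4 hidden units -> 1 output
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros((1, 1))
lr = 1.0
losses = []

for step in range(2000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # output activations
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: error at the output, then propagated
    # to the hidden layer through W2 (chain rule)
    d_out = (out - y) * out * (1 - out)    # delta at output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # delta at hidden layer

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The key line is `d_h = (d_out @ W2.T) * h * (1 - h)`: the output-layer error is sent backward through the hidden-to-output weights, giving each hidden unit an error signal even though it has no explicit target.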