ANN Flashcards

1
Q

What does the ‘hidden layer’ hide?

A

A hidden layer “hides” its desired output. Neurons in the hidden layer cannot be observed through the input/output behaviour of the network. There is no obvious way to know what the desired output of the hidden layer should be.

2
Q

Explain why a single-layer perceptron with inputs x1 and x2 cannot compute the XOR function. Provide a geometric justification to support your answer. (9 marks)

A

Single-layer perceptron ≡ linear separator (a line in the x1–x2 plane).

The processing unit of a single-layer perceptron can categorise a set of patterns into two classes only if its linear threshold function can separate them; in other words, the two classes must be linearly separable for the perceptron to function correctly. This is the main limitation of a single-layer perceptron network.

Geometric justification: plot the four XOR inputs in the x1–x2 plane. The points (0,0) and (1,1) must output 0, while (0,1) and (1,0) must output 1. The two classes occupy opposite corners of the unit square, so no single straight line can place both 1-points on one side and both 0-points on the other. Hence no choice of weights w1, w2 and threshold lets a single-layer perceptron compute XOR.
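The geometric argument above can be illustrated with a short sketch (not part of the card): a brute-force grid search over weights and thresholds finds a separating line for AND but never for XOR, since none exists.

```python
# Sketch: a linear threshold unit out = 1 if w1*x1 + w2*x2 + b > 0 else 0.
# A coarse grid search finds weights realising AND, but no grid point
# (indeed, no weights at all) realises XOR -- it is not linearly separable.
import itertools

def separates(w1, w2, b, fn):
    # True if the threshold unit reproduces fn on all four binary inputs
    for x1, x2 in itertools.product([0, 1], repeat=2):
        out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        if out != fn(x1, x2):
            return False
    return True

def exists_separator(fn):
    # Coarse grid of candidate weights/biases from -4.0 to 4.0 in steps of 0.5
    grid = [i / 2 for i in range(-8, 9)]
    return any(separates(w1, w2, b, fn)
               for w1 in grid for w2 in grid for b in grid)

AND = lambda a, b: a & b
XOR = lambda a, b: a ^ b

print(exists_separator(AND))  # True  - e.g. w1=1, w2=1, b=-1.5 works
print(exists_separator(XOR))  # False - no line separates the two classes
```

The grid search is only an illustration for AND; for XOR the non-existence of a separator holds for all real-valued weights, as the corner argument shows.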

3
Q

How does back-propagation work?

A

When a training input-output example is presented to the system, the back-propagation algorithm computes the system output and compares it with the desired output of the training example.

The difference (also called the error) is propagated backwards through the network from the output layer to the input layer.

The connection weights are modified as the error is propagated backwards. To determine the necessary weight adjustments, the back-propagation algorithm differentiates the neurons' activation functions and applies the chain rule, changing each weight in proportion to its contribution to the output error.
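The three steps above (forward pass, output error, backward propagation with weight updates) can be sketched as a minimal 2-2-1 sigmoid network trained on XOR. This is an assumed illustration, not the card's own example; the learning rate and network size are arbitrary choices.

```python
# Minimal back-propagation sketch: one hidden layer, sigmoid activations,
# mean-squared-error loss, plain gradient descent on the weights.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training examples: the XOR function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 2)); b1 = np.zeros((1, 2))  # input -> hidden
W2 = rng.normal(size=(2, 1)); b2 = np.zeros((1, 1))  # hidden -> output

def step(lr=0.5):
    global W1, b1, W2, b2
    # 1. Forward pass: compute the system output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # 2. Compare with the desired output: the error
    err = out - y
    # 3. Propagate the error backwards; the chain rule uses the sigmoid's
    #    derivative, sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Update the weights (not the activation functions) by gradient descent
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)
    return float((err ** 2).mean())

losses = [step() for _ in range(5000)]
print(losses[0], losses[-1])  # the mean squared error falls during training
```

Note that training only the weights, while the activation functions stay fixed, is what distinguishes the correct picture from the common misstatement that back-propagation "modifies the activation functions".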
