SECTION 2: Multi-layer perceptrons (MLP) Flashcards
Describe a simple multi-layer perceptron. (2p)
An MLP has an input layer, an output layer, and at least one hidden layer (in its simplest form). It typically uses a differentiable activation function such as the sigmoid, which is what makes training by backpropagation possible.
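A minimal forward pass for such a network can be sketched as follows (the weight layout and function names here are illustrative, not taken from the course material):

```python
import math

def sigmoid(x):
    # Differentiable activation function; its smooth gradient
    # is what backpropagation relies on.
    return 1.0 / (1.0 + math.exp(-x))

def mlp_forward(inputs, hidden_layer, output_node):
    # hidden_layer: list of (weights, bias) pairs, one per hidden node
    # output_node:  a single (weights, bias) pair over the hidden outputs
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in hidden_layer]
    ws, b = output_node
    return sigmoid(sum(w * h for w, h in zip(ws, hidden)) + b)
```

The output is a value in (0, 1), interpreted as a class score; stacking the hidden layer between input and output is what distinguishes this from a single-layer perceptron.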
What does its structure allow it to compute that a single-layer perceptron can’t? (2p)
Linearly non-separable problems, i.e. problems where the classes cannot be separated by a single linear decision boundary.
Which of the following, given appropriate connection weights, can a multi-layer perceptron linearly classify? (may be more than one correct answer) (2p)
A. AND-gated inputs
B. OR-gated inputs
C. XOR-gated inputs
D. Not XOR-gated inputs
All of them. (For C and D the hidden layer must first transform the linearly non-separable inputs into a linearly separable representation.)
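For AND and OR even a single threshold unit suffices, since both are linearly separable; the weights below are one standard choice, shown here as a sketch (a single perceptron with a step activation, not an MLP):

```python
def perceptron(inputs, weights, bias):
    # Single threshold unit: fires (1) iff the weighted sum crosses 0.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def and_gate(a, b):
    # One linear boundary separates (1,1) from the rest.
    return perceptron([a, b], [1.0, 1.0], -1.5)

def or_gate(a, b):
    # One linear boundary separates (0,0) from the rest.
    return perceptron([a, b], [1.0, 1.0], -0.5)
```

No single choice of weights and bias makes this unit compute XOR; that requires the hidden-layer transformation described in the next card.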
Explain how a Multi-Layer Perceptron can transform a linearly non-separable problem into a linearly separable one. (3p)
An MLP can use its hidden layer to transform a linearly non-separable input representation into a linearly separable one, which the output layer can then classify with a single linear boundary.
One example is the XOR problem, where the hidden nodes "take one each" of the logical functions OR and AND; in the hidden layer's output space, XOR (= OR AND NOT AND) becomes linearly separable.
Explain the advantages and disadvantages of having many hidden nodes in a hidden layer. (3p)
Up to a certain point, more hidden nodes give faster learning, because they enlarge the solution space. Beyond that point learning slows down, because the network has more parameters to adjust during training.
ADVANTAGES
+ bigger solution space
+ can handle complex problems
DISADVANTAGES
- takes more time to train
- harder credit assignment (more difficult to attribute the error to individual weights)