Neural nets 2 Flashcards

1
Q

What problems can single-layer perceptrons (SLPs) not solve?

A

XOR problems (problems that are not linearly separable).
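
A quick way to see this: brute-force over a grid of weights, and no single threshold unit reproduces XOR. This is a minimal sketch (the grid search is an illustration, not a proof):

```python
import itertools

def slp(w1, w2, b, x1, x2):
    # A single-layer perceptron: one threshold unit.
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Search a coarse grid of weight/bias values: none classifies all
# four XOR cases correctly, because no single line separates them.
grid = [v / 2 for v in range(-8, 9)]  # -4.0 .. 4.0 in steps of 0.5
found = any(
    all(slp(w1, w2, b, *x) == t for x, t in XOR.items())
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print("separating weights found:", found)  # -> False
```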

2
Q

What is the difference between SLPs and multi-layer perceptrons (MLPs)?

A

MLPs have one or more hidden layers of nodes between the input and output layers; an SLP has none.

3
Q

What is the benefit of a hidden layer?

A

A hidden layer of neurons can transform a non-linearly separable problem into a number of simpler, linearly separable sub-problems.
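
For example, XOR can be decomposed into two linearly separable sub-problems, OR and AND, and recombined at the output. A minimal sketch with hand-picked (illustrative) weights:

```python
def step(z):
    # Heaviside step activation.
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    # Hidden layer: two linearly separable sub-problems.
    h_or = step(x1 + x2 - 0.5)    # fires when x1 OR x2
    h_and = step(x1 + x2 - 1.5)   # fires when x1 AND x2
    # Output layer: OR-but-not-AND, which is XOR -- and is itself
    # linearly separable in the hidden activations (h_or, h_and).
    return step(h_or - h_and - 0.5)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", xor_mlp(*x))   # 0, 1, 1, 0
```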

4
Q

Name a problem with training MLPs

A

The credit assignment problem: when the network produces an error, which weights caused it? The hidden->output layer weights or the input->hidden layer weights?

5
Q

How do you train MLPs?

A

By minimising an error/cost term based on the difference between the target (t) and the output (y).
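
For example, with the common squared-error cost (one convention among several; the ½ is only there to cancel under differentiation):

```latex
E = \tfrac{1}{2}(t - y)^2, \qquad \frac{\partial E}{\partial y} = -(t - y)
```

The (t − y) term is what gets propagated back to drive the weight updates.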

6
Q

What is backpropagation used for?

A

Backpropagation is used to solve the credit assignment problem: we back-chain the error from the output layer towards the input layer, updating the weights of the network layer by layer.

7
Q

What are the 8 backpropagation algorithm steps?

A
  1. Present the data vector at the input layer
  2. Pass the weighted input-layer activations to the hidden layer
  3. Pass the weighted hidden-layer activations to the output layer
  4. Apply the target value to the output layer
  5. Calculate the δ_o output error values
  6. Update the hidden->output layer weights using the δ_o values
  7. For each hidden node, calculate its δ_h error value using the δ_o values
  8. Update the input->hidden layer weights using the δ_h values

Steps 1-3 are the forward pass; steps 4-8 are the backward pass (see the sketch below).
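
A minimal NumPy sketch of the eight steps for a 2-2-1 sigmoid network (variable names, network size, and learning rate are illustrative assumptions, not part of the card):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, t, w_ih, w_ho, lr=0.5):
    """One backpropagation update; w_ih and w_ho are modified in place."""
    # Steps 1-3: forward pass (bias handled as an extra constant input of 1).
    x = np.append(x, 1.0)
    h = sigmoid(x @ w_ih)           # hidden activations
    hb = np.append(h, 1.0)
    y = sigmoid(hb @ w_ho)          # output activation
    # Steps 4-5: apply the target and compute the output deltas
    # (for a sigmoid unit the derivative is y * (1 - y)).
    delta_o = (t - y) * y * (1.0 - y)
    # Step 7 is computed before step 6 here: the hidden deltas must use
    # the weights that produced the error, i.e. the pre-update w_ho.
    delta_h = h * (1.0 - h) * (w_ho[:-1] @ delta_o)
    # Step 6: update the hidden->output layer weights using delta_o.
    w_ho += lr * np.outer(hb, delta_o)
    # Step 8: update the input->hidden layer weights using delta_h.
    w_ih += lr * np.outer(x, delta_h)
    return float(np.sum((t - y) ** 2))

# Usage: train on XOR (convergence depends on the random seed and epochs).
rng = np.random.default_rng(0)
w_ih, w_ho = rng.normal(size=(3, 2)), rng.normal(size=(3, 1))
for _ in range(5000):
    for xs, t in [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]:
        train_step(np.array(xs, dtype=float), t, w_ih, w_ho)
```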
8
Q

But how do we find the delta (error) values for each layer?

A

* Use the gradient descent principle
* Use the chain rule (from calculus)
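
Applied to a sigmoid network with a squared-error cost, this yields the standard textbook deltas (notation assumed from the earlier cards; f is the activation function, η the learning rate, a_i the activation feeding weight w_ij):

```latex
\delta_o = (t - y)\, f'(\text{net}_o), \qquad
\delta_h = f'(\text{net}_h) \sum_{o} w_{ho}\, \delta_o, \qquad
\Delta w_{ij} = \eta\, \delta_j\, a_i
```

For the sigmoid, f′(net) = y(1 − y), which is why that factor appears in the update rules.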

9
Q

What is a problem with the gradient descent principle?

A

We can get stuck in a local minimum. We would like to find the global minimum, but with gradient descent we can't be sure we have found it.
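
A small Python illustration (the 1-D cost function is made up for the demo): the same gradient descent routine lands in different minima depending only on where it starts:

```python
def grad(x):
    # Derivative of f(x) = x**4 - 3*x**2 + x, which has a local
    # minimum near x = 1.13 and a deeper, global one near x = -1.30.
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(gradient_descent(2.0))    # ~ 1.13  (stuck in the local minimum)
print(gradient_descent(-2.0))   # ~ -1.30 (reaches the global minimum)
```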

10
Q

How do we solve the credit assignment problem?

A

1) Backpropagating (or back-chaining) network error (and proxy error) terms through the layers of weights in the network using calculus;
2) Keeping every node's activation from the forward pass, so that error can be assigned to the corresponding weights (weights that aren't connected to active nodes can't have caused the error).
