Homework 2 Flashcards

1
Q

Nesterov's accelerated gradient scheme is often more efficient than the simple momentum scheme because the weighting factor of the momentum term increases as a function of iteration number.

A

False. The advantage of Nesterov's scheme comes from evaluating the gradient at the look-ahead point (after the momentum step has been applied), not from how the weighting factor of the momentum term depends on the iteration number.
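A minimal sketch of the two update rules on a one-dimensional quadratic, assuming illustrative values for the learning rate and momentum factor. Both schemes below use the same constant momentum factor `beta`; the only difference is where the gradient is evaluated.

```python
# Compare classical momentum and Nesterov's scheme on E(w) = w^2 / 2.
# Both use the same constant momentum factor beta; Nesterov's gain comes
# from evaluating the gradient at the look-ahead point w + beta*v.
# (lr and beta are illustrative values, not prescribed by the course.)

def grad(w):                 # dE/dw for E(w) = w^2 / 2
    return w

def momentum_step(w, v, lr=0.1, beta=0.9):
    v = beta * v - lr * grad(w)             # gradient at the current point
    return w + v, v

def nesterov_step(w, v, lr=0.1, beta=0.9):
    v = beta * v - lr * grad(w + beta * v)  # gradient at the look-ahead point
    return w + v, v

w_m, v_m = 1.0, 0.0
w_n, v_n = 1.0, 0.0
for _ in range(50):
    w_m, v_m = momentum_step(w_m, v_m)
    w_n, v_n = nesterov_step(w_n, v_n)
print(abs(w_m), abs(w_n))    # Nesterov ends closer to the minimum at w = 0
```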

2
Q

Weight decay helps against overfitting.

A

True. Weight decay penalises large weights, pulling them toward zero; this limits the effective capacity of the network and thereby counteracts overfitting.
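A minimal sketch of how weight decay acts on a single weight, with illustrative values for the learning rate and decay constant. Adding the penalty (lam/2)·w² to the energy contributes lam·w to the gradient, which shrinks the weight multiplicatively at every step.

```python
# Weight decay sketch: the update on E + (lam/2)*w^2 adds lam*w to the
# gradient, which is equivalent to multiplying w by (1 - lr*lam) before
# the plain gradient step. (lr and lam are illustrative values.)

def sgd_step_with_decay(w, grad, lr=0.1, lam=0.01):
    return w - lr * (grad + lam * w)

w = 5.0
for _ in range(100):
    w = sgd_step_with_decay(w, grad=0.0)   # zero data gradient: pure decay
print(w)   # shrunk toward 0: 5 * (1 - 0.001)**100, about 4.52
```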

3
Q

Pruning helps against overfitting.

A

True. Pruning removes superfluous weights or neurons, reducing the number of free parameters and hence the network's tendency to overfit.

4
Q

L_1 regularisation reduces small weights less than L_2 regularisation.

A

False. Near zero the L_1 gradient has constant magnitude (the sign of the weight) while the L_2 gradient is proportional to the weight itself, so L_1 reduces small weights more strongly and can drive them exactly to zero.
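A small sketch comparing the regularisation "pull" each penalty exerts on a small weight (the penalty strength `lam` is an illustrative value). Since d|w|/dw = sign(w) has constant magnitude while d(w²/2)/dw = w vanishes as w → 0, the L_1 pull dominates for small weights.

```python
# Regularisation pull on a weight: lam * sign(w) for L1, lam * w for L2.

def l1_pull(w, lam=0.01):
    return lam * (1 if w > 0 else -1 if w < 0 else 0)   # lam * sign(w)

def l2_pull(w, lam=0.01):
    return lam * w

w = 0.001   # a small weight
print(l1_pull(w), l2_pull(w))   # 0.01 vs 1e-05: the L1 pull is 1000x larger
```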

5
Q

For the parity problem with N inputs, one can construct a perceptron that solves the problem with fewer than 2^N hidden neurons.

A

True. The parity problem with N inputs can be solved with only N hidden neurons, e.g. by letting hidden unit k fire when at least k inputs are active and summing the hidden outputs with alternating signs.
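The classic N-hidden-unit construction can be checked exhaustively; this is a sketch, not the only such construction. Hidden unit k fires when at least k inputs are on, and the output unit adds the hidden activities with alternating signs +1, −1, +1, …, so it fires exactly when the number of active inputs is odd.

```python
# Parity with N inputs solved by N hidden threshold units (far fewer
# than 2^N for N >= 2), verified on all 2^N input patterns.
from itertools import product

def step(x):
    return 1 if x > 0 else 0

def parity_net(inputs):
    n = len(inputs)
    s = sum(inputs)
    hidden = [step(s - (k - 0.5)) for k in range(1, n + 1)]    # h_k = 1 iff s >= k
    out = sum(((-1) ** k) * h for k, h in enumerate(hidden))   # +1, -1, +1, ...
    return step(out - 0.5)

N = 4
for bits in product([0, 1], repeat=N):
    assert parity_net(bits) == sum(bits) % 2   # matches parity on all 2^N patterns
print("parity solved with", N, "hidden units")
```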

6
Q

Using a stochastic path through weight space in backpropagation assures that the energy either decreases or stays constant.

A

False. A stochastic (sequential) update follows the gradient of a single pattern's error, so while that pattern's error decreases, the total energy may increase.
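A deterministic counterexample (with an illustrative model y = w·x and learning rate): starting at the minimum of the total energy, one pattern-by-pattern update reduces the first pattern's error but raises the total energy.

```python
# Two targets pull w in opposite directions; the optimum of the total
# energy is w = 0. A sequential update on the first pattern alone moves
# w away from it, and the total energy increases.
patterns = [(1.0, 1.0), (1.0, -1.0)]   # (x, t) pairs

def energy(w):
    return sum(0.5 * (w * x - t) ** 2 for x, t in patterns)

w, lr = 0.0, 0.1
e_before = energy(w)          # 1.0
x, t = patterns[0]
w -= lr * (w * x - t) * x     # gradient step on the first pattern only
e_after = energy(w)
print(e_before, e_after)      # 1.0 -> 1.01: the total energy increased
```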

7
Q

The number of N-dimensional Boolean functions is 2^N.

A

False. Each of the 2^N input patterns can be mapped independently to 0 or 1, so there are 2^(2^N) Boolean functions of N inputs.
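The count 2^(2^N) can be verified by brute force for small N: enumerate every possible truth table, i.e. one output bit per input pattern.

```python
# Count Boolean functions of n variables by enumerating truth tables:
# each of the 2^n input patterns gets an independent output bit,
# giving 2^(2^n) functions in total -- not 2^n.
from itertools import product

def count_boolean_functions(n):
    rows = list(product([0, 1], repeat=n))            # 2^n input patterns
    tables = set(product([0, 1], repeat=len(rows)))   # one output bit per row
    return len(tables)

for n in (1, 2, 3):
    print(n, count_boolean_functions(n))   # 1 -> 4, 2 -> 16, 3 -> 256
```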

8
Q

How the weights are initialised for backpropagation does not matter (provided that they are not all zero) because one usually iterates for many epochs so that the initial conditions are forgotten.

A

False. Initialisation matters: weights that are too large saturate the activation functions and cause vanishing gradients, and unsuitable initial conditions can steer gradient descent into poor local minima, so training may stall or converge badly no matter how many epochs are run.

9
Q

When solving a t=+1/−1 problem in two dimensions using a decision boundary in the form of a convex polygon, the resulting output problem may sometimes not be linearly separable.

A

False. With one hidden neuron per edge of the convex polygon, a point lies inside the polygon precisely when all hidden neurons are active; this AND of the hidden outputs is always linearly separable.
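A sketch of the construction for an illustrative convex polygon, a triangle with vertices (0,0), (1,0), (0,1): each edge is encoded by a hidden threshold unit, and the output unit is a plain AND (all weights 1, threshold just below the number of edges), which is linearly separable in the hidden-unit outputs.

```python
# One hidden threshold unit per edge of the triangle; the output unit
# fires iff all hidden units are active (an AND, hence linearly separable).

def step(x):
    return 1 if x > 0 else 0

def inside_triangle(x, y):
    hidden = [
        step(y),           # above the edge y = 0
        step(x),           # right of the edge x = 0
        step(1 - x - y),   # below the edge x + y = 1
    ]
    return step(sum(hidden) - 2.5)   # AND of the three half-planes

print(inside_triangle(0.2, 0.2))   # 1: inside the triangle
print(inside_triangle(0.8, 0.8))   # 0: outside the triangle
```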
