C2W1 Practical Aspects of Deep Learning Flashcards

1
Q

Data should be divided into

A

Train / dev / test. With very large datasets (on the order of a million examples), skewed splits such as train (99%) / dev (0.5%) / test (0.5%) or 98% / 1% / 1% are typical; with smaller datasets, a more traditional split like 60% / 20% / 20% is used.
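A minimal NumPy sketch of such a split (the split_dataset helper and its default fractions are illustrative, not from the course):

    import numpy as np

    def split_dataset(X, Y, dev_frac=0.005, test_frac=0.005, seed=0):
        """Shuffle examples (rows of X) and split them into train/dev/test sets."""
        m = X.shape[0]
        idx = np.random.default_rng(seed).permutation(m)
        n_dev, n_test = int(m * dev_frac), int(m * test_frac)
        dev_idx = idx[:n_dev]
        test_idx = idx[n_dev:n_dev + n_test]
        train_idx = idx[n_dev + n_test:]
        return ((X[train_idx], Y[train_idx]),
                (X[dev_idx], Y[dev_idx]),
                (X[test_idx], Y[test_idx]))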

2
Q

Bias/Variance

A

High bias: poor performance on the training set; the algorithm fails to fit the training data (underfitting).
High variance: good performance on the training set but poor performance on the dev/test set (overfitting).
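A rough diagnostic sketch from error rates (the helper and its tolerance are my own illustration, assuming a Bayes/optimal error near 0%):

    def diagnose(train_err, dev_err, bayes_err=0.0, tol=0.02):
        """Bias/variance check from error rates given as fractions (0.01 = 1%)."""
        high_bias = (train_err - bayes_err) > tol    # cannot even fit the training set
        high_variance = (dev_err - train_err) > tol  # fits train but fails to generalize
        return high_bias, high_variance

    # Example: 1% train / 11% dev error -> high variance; 15% / 16% -> high bias.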

3
Q

How to reduce bias

A

Bigger network / train longer

4
Q

How to reduce variance

A

More data / regularization

5
Q

Regularization L1 / L2

A

Add a penalty on the weight magnitudes to the cost function, shrinking the weights toward zero. L2 penalizes the sum of squared weights (also called weight decay); L1 penalizes the sum of absolute values and tends to make weights sparse. The strength is controlled by the hyperparameter lambda.
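A minimal sketch of the L2-regularized cost, assuming a dictionary of per-layer weight matrices W1..WL (NumPy; the function name and layout are illustrative):

    import numpy as np

    def cost_with_l2(cross_entropy_cost, parameters, lambd, m, L):
        """J = cross-entropy + (lambda / (2m)) * sum over layers of ||W[l]||^2."""
        l2_term = sum(np.sum(np.square(parameters["W" + str(l)]))
                      for l in range(1, L + 1))
        return cross_entropy_cost + (lambd / (2 * m)) * l2_term

    # In backprop the same penalty adds (lambd / m) * W[l] to each dW[l],
    # which is why L2 regularization is also called "weight decay".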

6
Q

Dropout regularization

A

Randomly drop (zero out) neurons during training, with a fresh random mask for each training pass/example. With inverted dropout, divide the remaining activations by keep_prob so their expected values stay the same. Don't drop anything when evaluating the model.
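A minimal NumPy sketch of an inverted-dropout step on one layer's activations (the keep_prob value and matrix layout are illustrative):

    import numpy as np

    def inverted_dropout(A, keep_prob=0.8, training=True):
        """Apply inverted dropout to an activation matrix A (units x examples)."""
        if not training:
            return A                              # no dropout at evaluation time
        D = np.random.rand(*A.shape) < keep_prob  # random keep/drop mask
        A = A * D                                 # zero out the dropped neurons
        return A / keep_prob                      # scale up so expected values are unchanged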

7
Q

Other regularizations methods

A

Data augmentation (e.g., flipping, rotating, or distorting images) and early stopping (stop training at the point where dev-set error is lowest).
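A minimal early-stopping sketch; train_one_epoch and dev_error are caller-supplied placeholders (not course code), and the patience value is illustrative:

    def train_with_early_stopping(train_one_epoch, dev_error, max_epochs=100, patience=5):
        """Keep the parameters from the epoch with the lowest dev-set error."""
        best_err, best_params, stale = float("inf"), None, 0
        for _ in range(max_epochs):
            params = train_one_epoch()        # placeholder: one pass over training data
            err = dev_error(params)           # placeholder: error on the dev set
            if err < best_err:
                best_err, best_params, stale = err, params, 0
            else:
                stale += 1
                if stale >= patience:
                    break                     # dev error stopped improving
        return best_params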

8
Q

Normalizing Inputs

A

Center the data by subtracting the mean: x := x - mu.
Normalize the variance by dividing by the standard deviation: x := x / sigma.
Mu and sigma are computed from the training set (they are not hyperparameters), and the same values are reused to normalize the dev and test sets.
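A minimal NumPy sketch (the features-by-examples layout and the eps term are my assumptions):

    import numpy as np

    def normalize_inputs(X_train, X_test, eps=1e-8):
        """Normalize features (rows of X, shape: features x examples) with training statistics."""
        mu = np.mean(X_train, axis=1, keepdims=True)
        sigma = np.std(X_train, axis=1, keepdims=True)
        X_train_norm = (X_train - mu) / (sigma + eps)
        X_test_norm = (X_test - mu) / (sigma + eps)  # reuse the training-set mu and sigma
        return X_train_norm, X_test_norm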

9
Q

Weight Initialization for Deep Networks

A

There are different techniques. For example, Xavier initialization (scale the random weights by sqrt(1/n), where n is the number of inputs to the layer) works well when the activation function is tanh; He initialization (sqrt(2/n)) is commonly used with ReLU. The goal is to keep activations and gradients from vanishing or exploding as depth grows.
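A minimal NumPy sketch of such an initialization (the function name and layer_dims layout are illustrative):

    import numpy as np

    def initialize_parameters(layer_dims, activation="relu", seed=0):
        """Scale each W[l] by sqrt(2/n_prev) for ReLU (He) or sqrt(1/n_prev) otherwise (Xavier)."""
        rng = np.random.default_rng(seed)
        parameters = {}
        scale = 2.0 if activation == "relu" else 1.0
        for l in range(1, len(layer_dims)):
            n_prev, n_curr = layer_dims[l - 1], layer_dims[l]
            parameters["W" + str(l)] = (rng.standard_normal((n_curr, n_prev))
                                        * np.sqrt(scale / n_prev))
            parameters["b" + str(l)] = np.zeros((n_curr, 1))
        return parameters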
