5 - Fixing Problems Flashcards

1
Q

What does regularization do?

A

Regularization makes an optimization problem *better-behaved* by adding a penalty term to the cost/loss function:

L_reg(theta) = L(theta) + lambda * Omega(theta)

where:
L(theta) = the original loss function
lambda = a hyperparameter (>= 0) that controls how heavily we penalize large betas/weights
Omega(theta) = the regularization term: L1 penalizes the sum of absolute weights (lasso), L2 penalizes the sum of squared weights (ridge)
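
A minimal sketch (illustrative, not from the card): an L2 (ridge-style) penalty added to a mean-squared-error loss for a linear model; the names X, y, theta, and lam are assumptions.

```python
import numpy as np

def regularized_loss(theta, X, y, lam):
    """L_reg(theta) = L(theta) + lambda * Omega(theta) for a linear model."""
    residuals = X @ theta - y
    data_loss = np.mean(residuals ** 2)   # L(theta): plain MSE on the data
    penalty = lam * np.sum(theta ** 2)    # lambda * Omega(theta): L2 penalty (ridge)
    return data_loss + penalty

# Toy data, assumed for illustration only
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

print(regularized_loss(np.zeros(3), X, y, lam=0.1))
```

Swapping `np.sum(theta ** 2)` for `np.sum(np.abs(theta))` would give the L1 (lasso) penalty instead.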

2
Q

What does normalization do?

A

Normalization makes optimization *easier* by rescaling each feature to a common, smaller range (e.g. [0, 1], or zero mean and unit variance), so features with large raw scales do not dominate the gradient.
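
A minimal sketch (illustrative, not from the card) of two common normalizations, each applied per feature column:

```python
import numpy as np

# Two features with very different raw ranges (toy data)
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

# Min-max scaling: squeeze each feature into [0, 1]
X_minmax = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Standardization (z-score): zero mean, unit variance per feature
X_zscore = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_minmax)
print(X_zscore)
```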
