5 - Fixing Problems Flashcards
1
Q
What does regularization do?
A
Regularization makes optimization problems *better-behaved* by adding a penalty term to the cost/loss function:
L_reg(theta) = L(theta) + lambda * Omega(theta)
where:
L(theta) = the original loss function
lambda >= 0 = a hyperparameter controlling how strongly we penalize our Betas or Weights (larger lambda means more shrinkage)
Omega(theta) = our regularization term: L1 (sum of the absolute values of the weights, "lasso") or L2 (sum of the squared weights, "ridge")
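A minimal sketch of the formula above, assuming a linear model (the function name and data are hypothetical, chosen here for illustration):

```python
import numpy as np

def regularized_loss(theta, X, y, lam):
    """Mean squared error L(theta) plus lam * Omega(theta), with an L2 penalty."""
    residuals = X @ theta - y
    data_loss = np.mean(residuals ** 2)   # L(theta): fit to the data
    penalty = np.sum(theta ** 2)          # Omega(theta): L2, sum of squared weights
    return data_loss + lam * penalty
```

Setting lam to 0 recovers the unregularized loss; increasing lam pushes the optimum toward smaller weights.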
2
Q
What does Normalization do?
A
Normalization makes optimization *easier* by rescaling features to a common, smaller range of values (e.g. [0, 1]), so that features on very different scales don't distort the loss surface.
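A minimal sketch of one common approach, min-max normalization (the helper name is illustrative; other schemes such as standardization to zero mean and unit variance work similarly):

```python
import numpy as np

def min_max_normalize(x):
    """Rescale a feature vector so its values lie in the [0, 1] range."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo)   # assumes hi > lo, i.e. the feature is not constant
```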