Cost function Flashcards

1
Q

What is the formula of the squared error loss (MSE)?

A

L = (y - f(x))^2

Generally preferred for regression because it is differentiable everywhere and its derivative is easy to compute. Note that squaring the error makes it sensitive to outliers.
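As a quick sketch, the squared error averaged over a batch (the MSE) can be computed as follows, assuming NumPy arrays of targets y and predictions f:

```python
import numpy as np

def mse(y, f):
    """Mean squared error: the average of (y - f)^2 over all samples."""
    y, f = np.asarray(y, dtype=float), np.asarray(f, dtype=float)
    return np.mean((y - f) ** 2)
```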

2
Q

What is the absolute Error loss formula (MAE)?

A

L = |y-f(x)|

The MAE is more robust to outliers than the MSE. However, the absolute value operator is harder to handle mathematically: its derivative is not defined at 0 and is a constant +/-1 elsewhere, which can make gradient-based optimization less stable near the optimum.
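A minimal sketch of the mean absolute error, under the same NumPy conventions as above:

```python
import numpy as np

def mae(y, f):
    """Mean absolute error: the average of |y - f| over all samples."""
    y, f = np.asarray(y, dtype=float), np.asarray(f, dtype=float)
    return np.mean(np.abs(y - f))
```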

3
Q

What is the Huber loss?

A
L = (1/2)*(y - f(x))^2,        if |y - f(x)| <= d
L = d*|y - f(x)| - (1/2)*d^2,  otherwise

The Huber loss combines the best properties of MSE and MAE: it is quadratic for small errors and linear for large ones (and, most importantly, its gradient transitions continuously between the two regimes). It is parameterized by its delta threshold d.
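The piecewise definition above can be sketched directly with np.where; delta is the threshold parameter from the formula:

```python
import numpy as np

def huber(y, f, delta=1.0):
    """Huber loss: quadratic for residuals within delta, linear beyond it."""
    r = np.abs(np.asarray(y, dtype=float) - np.asarray(f, dtype=float))
    quadratic = 0.5 * r ** 2                   # small-residual branch
    linear = delta * r - 0.5 * delta ** 2      # large-residual branch
    return np.mean(np.where(r <= delta, quadratic, linear))
```

Note that the two branches (and their derivatives) agree at r = delta, which is what makes the gradient continuous.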

4
Q

What is the binary cross entropy loss?

A

L = -(y*log(f(x)) + (1-y)*log(1-f(x)))

Also known as the log loss.

It is derived from the entropy, S = -integral(p(x)*log(p(x)) dx) for a continuous distribution; the integral is replaced with a sum for a discrete one. The negative sign is used to make the overall quantity positive. A greater value of entropy for a probability distribution indicates greater uncertainty in the distribution; likewise, a smaller value indicates a more certain distribution. This makes the binary cross-entropy suitable as a loss function.
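A minimal sketch of the binary cross-entropy, assuming labels y in {0, 1} and predicted probabilities p in (0, 1); the clipping guards against log(0):

```python
import numpy as np

def binary_cross_entropy(y, p, eps=1e-12):
    """Binary cross-entropy (log loss), averaged over all samples."""
    y = np.asarray(y, dtype=float)
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)  # avoid log(0)
    return np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p)))
```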

5
Q

What is the multi-class cross entropy loss?

A

It's a generalization of the binary cross-entropy loss to more than two classes: L = -sum_i(y_i * log(f(x)_i)), where y is a one-hot label vector. The predicted probabilities f(x) are typically produced by the softmax function.
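As a sketch, softmax plus the multi-class cross-entropy, assuming one-hot labels and rows of class scores:

```python
import numpy as np

def softmax(z):
    """Convert raw scores to probabilities along the last axis."""
    z = np.asarray(z, dtype=float)
    z = z - np.max(z, axis=-1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def categorical_cross_entropy(y_onehot, p, eps=1e-12):
    """Multi-class cross-entropy: -sum_i y_i * log(p_i), averaged over samples."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0)
    return np.mean(-np.sum(np.asarray(y_onehot, dtype=float) * np.log(p), axis=-1))
```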

6
Q

What is the KL-divergence?

A

To be completed later.
