RMSE AND MSE Flashcards
use of performance metrics
Performance metrics such as classification accuracy and root mean squared error give you a clear, objective measure of how good a set of predictions is.
objective function
All machine learning algorithms rely on minimizing or maximizing a function, which we call the “objective function”.
loss functions
Objective functions that are minimized are called “loss functions”.
- A loss function is a measure of how well a prediction model performs at predicting the expected outcome.
gradient descent
The most commonly used method for finding the minimum of a function is “gradient descent”.
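As a minimal sketch of the idea (the quadratic function, learning rate, and step count below are assumed for illustration, not from the flashcards): gradient descent repeatedly steps opposite the gradient to walk toward a minimum.

```python
# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # move opposite the gradient
    return x

x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
# x_min converges toward 3.0, the minimizer of f.
```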
Loss functions can be broadly categorized into two types:
classification loss and regression loss.
Regression functions
predict a quantity
classification functions
predict a label
RMSE (Root Mean Squared Error)
one of the methods for measuring how accurately our model predicts the target values. In machine learning, when we want to assess a model’s accuracy, we take the square root of the mean squared error between the test (actual) values and the predicted values.
MSE (Mean Squared Error)
Mean Square Error (MSE) is the most commonly used regression loss function.
MSE is the mean of the squared differences between our target values and our predicted values.
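That definition can be sketched directly (the sample values are assumed for illustration): square each difference between target and prediction, then average.

```python
def mse(actual, predicted):
    """Mean squared error between actual and predicted values."""
    n = len(actual)
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n

loss = mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # errors 0, 0, 2 -> mean of 0, 0, 4
```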
Why use mean squared error
MSE is sensitive to outliers, and given several examples with the same input feature values, the optimal prediction is their mean target value. Compare this with Mean Absolute Error, where the optimal prediction is the median. MSE is therefore a good choice if you believe that your target data, conditioned on the input, is normally distributed around a mean value, and when it is important to penalize large errors especially heavily.
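The mean-vs-median point can be checked numerically with a small sketch (the target values, including the outlier, are assumed for illustration): for a constant prediction c, MSE is minimized at the mean of the targets, while MAE is minimized at the median.

```python
ys = [1.0, 2.0, 3.0, 100.0]  # one large outlier

def mse_const(c):
    return sum((y - c) ** 2 for y in ys) / len(ys)

def mae_const(c):
    return sum(abs(y - c) for y in ys) / len(ys)

mean = sum(ys) / len(ys)        # 26.5, pulled far toward the outlier
median = (ys[1] + ys[2]) / 2    # 2.5, robust to the outlier
# mse_const is smallest at the mean; mae_const is smallest at the median.
```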
When to use mean squared error
Use MSE when you are doing regression, believe that your target, conditioned on the input, is normally distributed, and want large errors to be penalized significantly (quadratically) more than small ones.
Classification loss
log loss, focal loss, KL Divergence/Relative Entropy, Exponential loss, Hinge loss
Regression loss
MSE loss/quadratic loss, Mean Absolute error, Log cosh loss, Huber loss/Smooth Mean Absolute error, quantile loss