Measuring Accuracy Flashcards
In-sample fit
In-sample fit is often used to compare models, but it measures how well a model fits past data, not how well it forecasts.
Common indicators used to evaluate forecast accuracy are:
Mean absolute deviation
Mean squared error
Mean absolute percentage error
Mean percentage error
For MAD, MSE and MAPE, the model with the lowest value is the most accurate. MPE measures bias rather than accuracy, so a value close to zero is best.
Mean Absolute Deviation
MAD finds the average value of the absolute errors
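A minimal sketch of MAD, using hypothetical actual and forecast values (the numbers are made up for illustration):

```python
# MAD: average of the absolute forecast errors (hypothetical data).
actual   = [110, 105, 120, 115, 125]
forecast = [100, 110, 115, 120, 120]
mad = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
print(mad)  # -> 6.0
```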
Mean Square Error
With MSE, each residual is squared and then the mean of these squared errors is calculated.
Squaring makes large forecasting errors loom even larger, hence penalising models that produce occasional large errors.
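A minimal sketch of MSE on a hypothetical set of residuals (actual minus forecast); note how the single error of 10 dominates the result:

```python
# MSE: mean of the squared residuals (hypothetical residuals).
errors = [10, -5, 5, -5, 5]
mse = sum(e ** 2 for e in errors) / len(errors)
print(mse)  # -> 40.0
```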
MAD versus MSE
MA(3) forecast errors are less volatile than SES forecast errors.
MSE punishes large forecasting errors more heavily than MAD does.
Mean Absolute Percentage Error
MAPE provides an indication of how large the forecast errors are in comparison to the actual values of the series: MAPE = (100/n) * Σ |A_t − F_t| / A_t.
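A minimal sketch of MAPE with the same style of hypothetical data; each absolute error is scaled by its actual value before averaging:

```python
# MAPE: mean absolute error as a percentage of the actuals (hypothetical data).
actual   = [110, 105, 120, 115, 125]
forecast = [100, 110, 115, 120, 120]
mape = 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)
print(round(mape, 2))  # -> 5.27
```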
Mean Percentage Error
If MPE is close to zero, there is no bias. If it is a large negative value, the model is overestimating on average. If it is a large positive value, the model is underestimating on average.
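A minimal sketch of MPE on hypothetical data; unlike MAPE, the signs of the errors are kept, so positive and negative errors can cancel and what remains is the bias:

```python
# MPE: mean percentage error, signs preserved (hypothetical data).
actual   = [110, 105, 120, 115, 125]
forecast = [100, 110, 115, 120, 120]
mpe = 100 * sum((a - f) / a for a, f in zip(actual, forecast)) / len(actual)
print(round(mpe, 2))  # -> 1.63 (positive: underestimating on average)
```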
In-sample and Out-of-sample Forecasts
By using the “optimal” parameters, one can minimize the in-sample forecast errors.
Estimation and Holdout Periods
To evaluate the "out-of-sample" performance of a model, we separate the historical data into an estimation period and a holdout period. The optimal parameters for different models are estimated using data in the estimation period.
The forecasting performance of different models, using the optimal parameters from the estimation period, is evaluated using data in the holdout period.
Holdout Period
We want the best out-of-sample forecasts, not the best in-sample fit.
The holdout period is a way to compare several competing models in a pseudo "out-of-sample" setting.
The parameters of the better model are then updated using the holdout period data and used to generate the ‘true’ forecasts
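The whole workflow can be sketched as follows, assuming a simple exponential smoothing (SES) model tuned by grid search over alpha; the series and the split point are hypothetical:

```python
# Sketch: split a series into estimation and holdout periods, pick the SES
# smoothing parameter on the estimation data, then score on the holdout data.

def ses_forecasts(series, alpha):
    """One-step-ahead SES forecasts, seeded with the first observation."""
    f = [series[0]]
    for t in range(1, len(series)):
        f.append(alpha * series[t - 1] + (1 - alpha) * f[t - 1])
    return f

def mse(actual, forecast):
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118]  # hypothetical
split = 8
estimation, holdout = series[:split], series[split:]

# Choose the alpha that minimises in-sample MSE on the estimation period.
best_alpha = min((a / 10 for a in range(1, 10)),
                 key=lambda a: mse(estimation, ses_forecasts(estimation, a)))

# Evaluate pseudo out-of-sample: run SES over the full series with that alpha
# and score only the holdout portion.
full = ses_forecasts(series, best_alpha)
holdout_mse = mse(holdout, full[split:])
print(best_alpha, round(holdout_mse, 2))
```

In practice the winning model's parameters would then be re-estimated on all available data before generating the true forecasts.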