Measuring Accuracy Flashcards

1
Q

In-sample fit

A

In-sample fit measures how well a model fits the historical data used to estimate it; it is often used to compare models.

2
Q

Common indicators used to evaluate forecast accuracy are:

A

Mean absolute deviation
Mean squared error
Mean absolute percentage error
Mean percentage error

Regardless of the measure being used, the model that generates the lowest value is the most accurate forecasting model
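A minimal sketch of this selection rule, using made-up residuals and MAD as the example measure (any of the four measures could be substituted):

```python
# Hypothetical residuals (actual - forecast) for two competing models.
model_a = [2.0, -1.0, 3.0, -2.0]
model_b = [4.0, -3.0, 5.0, -4.0]

def mad(errors):
    # Mean absolute deviation: average of |error|.
    return sum(abs(e) for e in errors) / len(errors)

# Whichever measure is used, the model with the lowest value wins.
best = min(("model_a", mad(model_a)), ("model_b", mad(model_b)), key=lambda t: t[1])
print(best)  # ('model_a', 2.0)
```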

3
Q

Mean Absolute Deviation

A

MAD finds the average of the absolute forecast errors
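A minimal sketch of the computation, using hypothetical actuals and forecasts:

```python
# Hypothetical data: actual values and one-step forecasts.
actual   = [100.0, 110.0, 120.0, 130.0]
forecast = [ 98.0, 113.0, 118.0, 131.0]

errors = [a - f for a, f in zip(actual, forecast)]  # [2.0, -3.0, 2.0, -1.0]

# MAD = (1/n) * sum(|e_t|)
mad = sum(abs(e) for e in errors) / len(errors)
print(mad)  # 2.0
```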

4
Q

Mean Square Error

A

With MSE, each residual is squared and then the mean of these squared errors is calculated.

Squaring makes large forecasting errors even larger, hence penalising models that produce occasional large errors
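A minimal sketch, on hypothetical residuals:

```python
errors = [2.0, -3.0, 2.0, -1.0]   # hypothetical residuals (actual - forecast)

# MSE = (1/n) * sum(e_t ** 2); squaring inflates the larger errors.
mse = sum(e * e for e in errors) / len(errors)
print(mse)  # 4.5
```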

5
Q

MAD versus MSE

A

MA(3) forecast errors are less volatile than those of SES. Because MSE punishes large forecasting errors, MAD and MSE can rank the same models differently.
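The contrast can be illustrated with two made-up error series that the two measures rank differently:

```python
# Hypothetical residuals for two competing models.
model_a = [2.0, -2.0, 2.0, -2.0]   # steady, moderate errors
model_b = [0.0, 0.0, 0.0, 6.0]     # mostly accurate, one large miss

def mad(e):
    return sum(abs(x) for x in e) / len(e)

def mse(e):
    return sum(x * x for x in e) / len(e)

print(mad(model_a), mad(model_b))  # 2.0 1.5 -> MAD prefers model B
print(mse(model_a), mse(model_b))  # 4.0 9.0 -> MSE prefers model A
```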

6
Q

Mean Absolute Percentage Error

A

MAPE provides an indication of how large the forecast errors are in comparison to the actual values of the series.
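A minimal sketch, with hypothetical values (MAPE expressed as a percentage):

```python
actual   = [100.0, 200.0, 400.0]
forecast = [ 90.0, 210.0, 380.0]

# MAPE = (1/n) * sum(|(y_t - f_t) / y_t|) * 100
mape = sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual) * 100
print(mape)  # percentage errors of 10%, 5% and 5% average to about 6.67%
```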

7
Q

Mean Percentage Error

A

If MPE is close to zero, there is no bias. If it is a large negative value, the model is overestimating on average. If it is a large positive value, it is underestimating on average.
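A minimal sketch of the sign convention, on hypothetical data where the forecasts sit consistently above the actuals:

```python
actual   = [100.0, 100.0, 100.0, 100.0]
forecast = [110.0, 105.0, 112.0, 108.0]   # consistently above the actuals

# MPE = (1/n) * sum((y_t - f_t) / y_t) * 100; signs are kept, so systematic
# over- or under-forecasting shows up as a non-zero value.
mpe = sum((a - f) / a for a, f in zip(actual, forecast)) / len(actual) * 100
print(mpe)  # a negative value: the model overestimates on average
```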

8
Q

In-sample and Out-of-sample Forecasts

A

By using the “optimal” parameters, one can minimize the in-sample forecast errors.

9
Q

Estimation and Holdout Periods

A

To evaluate the “out-of-sample” performance of a model, we separate the historical data into an estimation period and a holdout period. The optimal parameters for different models are estimated using data in the estimation period.

The forecasting performance of different models, using the optimal parameters from the estimation period, is evaluated using data in the holdout period.
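A minimal sketch of the split, using a hypothetical 12-point series and a naive last-value forecast purely for illustration:

```python
# Hypothetical series of 12 observations; hold out the last 4 for evaluation.
series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118]

estimation = series[:8]   # optimise model parameters on this period
holdout    = series[8:]   # compare competing models on this period

# One-step-ahead naive forecasts: each forecast is the previous observation.
forecasts = [estimation[-1]] + holdout[:-1]
errors = [a - f for a, f in zip(holdout, forecasts)]
mad = sum(abs(e) for e in errors) / len(errors)
print(mad)  # 14.5
```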

10
Q

Holdout Period

A

We want the best out-of-sample forecasts, not the best in-sample fit.

The holdout period is a way to compare several competing models in a pseudo “out-of-sample” setting.

The parameters of the better model are then updated using the holdout-period data and used to generate the ‘true’ forecasts.
