QE 7/8 - time series Flashcards
Difference between a predicted value and a forecast?
- Predicted value – the value of Y predicted (using the regression) for an observation WITHIN the sample used to estimate the regression
- Forecast – the value of Y forecast for an observation OUTSIDE the sample used to estimate the regression
Difference between a forecast error and OLS residual?
- OLS residual = within sample (difference between the actual value and the predicted value)
- Forecast error = the same concept, but out of sample (difference between the actual value and the forecast)
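In symbols (standard notation, where the forecast of Y at T+1 is made using data through date T):

```latex
\hat{u}_t = Y_t - \hat{Y}_t \quad \text{(in-sample OLS residual)}, \qquad
Y_{T+1} - \hat{Y}_{T+1\mid T} \quad \text{(out-of-sample forecast error)}
```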
What does RMSFE measure?
- Measures the spread of the forecast error distribution
- Measures the magnitude of a typical forecasting ‘mistake’ (see the formula below)
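The standard definition is the square root of the mean squared forecast error:

```latex
\text{RMSFE} = \sqrt{E\!\left[\left(Y_{T+1} - \hat{Y}_{T+1\mid T}\right)^{2}\right]}
```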
Sources of error in the RMSFE
(1) Future values of the error term u are unknown
(2) Error in estimating the coefficients (β0 and β1) – see the decomposition below
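For a forecast based on, say, an AR(1), the two sources appear as separate terms in the decomposition (assuming the estimation error is independent of the future shock):

```latex
\text{RMSFE} = \sqrt{\sigma_u^{2} + \operatorname{var}\!\left[(\hat{\beta}_0 - \beta_0) + (\hat{\beta}_1 - \beta_1)\,Y_T\right]}
```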
When is RMSFE not an appropriate measure of the magnitude of a typical forecasting mistake? Example?
- If the costs of forecasting mistakes are asymmetric
- E.g. when forecasting the time I’ll arrive at the train station, an under-forecast (being late) is much worse than an over-forecast (being early)
How to test the hypothesis that, say, regressors Yt-2, Yt-3,…,Yt-p don’t further help forecast (beyond Yt-1)?
- F-test that the coefficients on the additional lags are all jointly zero
- Information criterion (BIC or AIC): e.g. the Bayes information criterion (BIC) determines how large the increase in R-squared must be to justify including an additional lag (see the formula below)
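For an AR(p) estimated over T observations, the BIC trades off fit (the sum of squared residuals, SSR) against the number of coefficients; the chosen lag length is the p that minimises:

```latex
\text{BIC}(p) = \ln\!\left(\frac{SSR(p)}{T}\right) + (p+1)\,\frac{\ln T}{T}
```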
What is the Granger causality test?
- Test of the joint hypothesis that none of the X’s is a useful predictor, above and beyond lagged values of Y
- i.e. the F-statistic testing the hypothesis that the coefficients on all lags of one of the variables are zero (implying that regressor has no predictive content for Yt beyond that contained in the other regressors)
- N.B. NOT a test of causality (‘causality’ here just refers to predictive content) – see the sketch below
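A minimal sketch using statsmodels’ built-in test (the simulated series and coefficients are made up for illustration):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Simulate y so that lagged x genuinely helps predict it
rng = np.random.default_rng(0)
T = 200
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

# Column order matters: the test asks whether the SECOND column
# Granger-causes the FIRST (F-test that all lags of x have zero coefficients)
data = pd.DataFrame({"y": y, "x": x})
grangercausalitytests(data[["y", "x"]], maxlag=2)
```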
What is the trade-off of using additional lagged values as predictors?
- Using too few lags decreases forecast accuracy because valuable information is lost
- Using too many lags increases estimation uncertainty
Generally, an AR(…..) in 1st difference = AR(…..) in level
Generally, an AR(p) in 1st difference = AR(p+1) in level
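To see why, write an AR(1) in first differences and rearrange into levels (one extra lag appears):

```latex
\Delta Y_t = \beta_0 + \beta_1 \Delta Y_{t-1} + u_t
\;\Rightarrow\;
Y_t = \beta_0 + (1+\beta_1)\,Y_{t-1} - \beta_1\,Y_{t-2} + u_t
```

so an AR(1) in 1st differences is an AR(2) in levels.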
- What does it mean for Yt to have very strong autocorrelation?
- What is the consequence of this?
- What happens in the extreme case when autocorrelation = 1?
- Possible solution?
- Yt is a very persistent process
- The OLS estimator of the AR coefficient is biased towards zero
- In the extreme case (autocorrelation = 1), Yt has a unit root and is no longer stationary
- Take 1st differences (see the simulation below)
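A quick Monte Carlo sketch of the downward bias (the sample size, coefficient, and seed are arbitrary choices for illustration):

```python
import numpy as np

# True AR(1) coefficient close to 1; OLS estimates tend to fall below it
rng = np.random.default_rng(0)
beta1, T, reps = 0.98, 100, 2000
estimates = []
for _ in range(reps):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = beta1 * y[t - 1] + rng.standard_normal()
    # OLS of y_t on y_{t-1} (intercept omitted for brevity)
    estimates.append(np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2))
print(np.mean(estimates))  # typically noticeably below 0.98
```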
- What does Granger causality mean?
- What does Granger non-causality mean?
- Granger causality - at least 1 of the coefficients on the lags of X is not zero
- Granger non-causality - all the coefficients on the lags of X are zero
What is the only way to remove a stochastic trend? Exception?
- The only way to remove a stochastic trend is by differencing, unless there’s co-integration
Problems caused by stochastic trends/unit root?
- Autoregressive coefficients biased downwards towards zero
- Distribution of OLS estimator and t-statistic not normal, even in large samples
- Spurious regression
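A standard illustration of the spurious regression problem: two independent random walks regularly look strongly ‘related’ in OLS (the numbers below are arbitrary):

```python
import numpy as np
import statsmodels.api as sm

# Two INDEPENDENT random walks (each contains a stochastic trend)
rng = np.random.default_rng(1)
T = 500
y = np.cumsum(rng.standard_normal(T))
x = np.cumsum(rng.standard_normal(T))

res = sm.OLS(y, sm.add_constant(x)).fit()
print(res.tvalues[1])  # |t| is often far above 1.96 despite no true relationship
```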
Explain how ‘stochastic trend’ and ‘unit root’ can be used interchangeably?
- If Yt has a unit root, then Yt contains a stochastic trend (and so is non-stationary)
- If Yt is stationary (and hence doesn’t have a unit root), then Yt doesn’t contain a stochastic trend
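The random walk makes the link concrete: the AR root is 1 (a unit root), and accumulating the shocks gives the stochastic trend:

```latex
Y_t = Y_{t-1} + u_t
\;\Rightarrow\;
Y_t = Y_0 + \sum_{s=1}^{t} u_s
```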
Main methods for dealing with problem of spurious regression?
- Test for co-integration
- Difference the data so it becomes stationary
- Benefit of co-integration (rather than differencing the data), if possible, when dealing with the problem of spurious regression?
- How do we do this?
1a. Co-integration allows us to see the long-run relationship between X and Y
1b. Regressing on differences only reveals the short-run relationship
2. Use an error correction model (sketched below)
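A two-step Engle–Granger sketch (the data-generating process and variable names are made up; in practice you would start from real levels data):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

# Simulate co-integrated series: y and x share one stochastic trend
rng = np.random.default_rng(2)
T = 300
x = np.cumsum(rng.standard_normal(T))
y = 2.0 * x + rng.standard_normal(T)

# Step 1: test for co-integration between the LEVELS of y and x
t_stat, p_value, _ = coint(y, x)
print(p_value)  # small p-value -> reject 'no co-integration'

# Step 2: error correction model - regress dy on dx and the lagged
# residual from the levels regression (the error-correction term)
z = sm.OLS(y, sm.add_constant(x)).fit().resid
dy, dx = np.diff(y), np.diff(x)
ecm = sm.OLS(dy, sm.add_constant(np.column_stack([dx, z[:-1]]))).fit()
print(ecm.params)  # last coefficient pulls y back towards the long run
```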
Initial (informal) indication of a stochastic trend?
- Fit a mean line through the data and count how often the series crosses it
- If it doesn’t cross the line very often, this suggests the data have a stochastic trend (see the sketch below)
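A quick sketch of this informal check (the helper name is made up): a stationary series crosses its mean often, while a random walk rarely does.

```python
import numpy as np

def mean_crossings(y):
    """Count how often a series crosses its sample mean."""
    s = np.sign(y - np.mean(y))
    return int(np.sum(s[1:] != s[:-1]))

rng = np.random.default_rng(3)
stationary = rng.standard_normal(500)               # crosses its mean frequently
random_walk = np.cumsum(rng.standard_normal(500))   # crosses it rarely
print(mean_crossings(stationary), mean_crossings(random_walk))
```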