Quant Methods #11 - Time-Series Analysis Flashcards
types of time series models
LOS 11.a
Trend Models
- linear: y_t = b0 + b1*t + e_t
- log-linear: y_t = e^(b0 + b1*t) ⇒ ln(y_t) = b0 + b1*t + e_t
Lagged Models
- autoregressive AR(p): x_t = b0 + b1*x_(t-1) + b2*x_(t-2) + … + bp*x_(t-p) + e_t
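A minimal sketch of estimating these models, assuming numpy/statsmodels and simulated data (none of the numbers come from the curriculum): the linear trend is fit by regressing y on t, and the AR(1) by regressing x_t on its first lag.

```python
import numpy as np
import statsmodels.api as sm

# simulated series: linear trend plus noise (illustrative values only)
rng = np.random.default_rng(0)
t = np.arange(1, 101)
y = 5.0 + 0.3 * t + rng.normal(scale=2.0, size=t.size)

# linear trend model: y_t = b0 + b1*t + e_t
trend_fit = sm.OLS(y, sm.add_constant(t)).fit()
print("trend (b0, b1):", trend_fit.params)

# AR(1) model: x_t = b0 + b1*x_(t-1) + e_t, estimated by regressing x_t on x_(t-1)
x = y  # reuse the same series purely for illustration
ar1_fit = sm.OLS(x[1:], sm.add_constant(x[:-1])).fit()
print("AR(1) (b0, b1):", ar1_fit.params)
```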
serial correlation in trend model
LOS 11.b
One assumption of linear regression is that regression residuals are uncorrelated with each other; violation of this assumption is called autocorrelation or serial correlation.
- use Durbin Watson to test for serial correlation
- try log-linear transform of time series data to remove it
- if log-linear model still exhibits autocorrelation, try using autoregression model
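A sketch of the Durbin-Watson check, assuming statsmodels and a simulated trending series (hypothetical data): a DW statistic near 2 suggests no first-order serial correlation, while values well below 2 point to positive serial correlation.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# simulated trending series whose noise is a random walk,
# so the trend-model residuals are serially correlated
rng = np.random.default_rng(1)
t = np.arange(1, 101)
y = 10.0 + 0.5 * t + rng.normal(size=t.size).cumsum()

resid = sm.OLS(y, sm.add_constant(t)).fit().resid
print("Durbin-Watson:", durbin_watson(resid))  # expected to be well below 2 here
```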
covariance stationary
LOS 11.c
To use AR models, the time series must be covariance stationary. A time series is covariance stationary if all three of the following are true:
- constant & finite expected values (i.e. mean-reverting level)
- constant & finite variance
- constant & finite covariance between values at any given lead or lag
if the time series is not covariance stationary, the AR model will produce meaningless results
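A small simulation (numpy only, hypothetical parameters) illustrating the point: a stationary AR(1) keeps roughly the same mean across sub-samples, while a random walk does not.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
e = rng.normal(size=n)

ar1 = np.zeros(n)   # x_t = 1.0 + 0.5*x_(t-1) + e_t -> covariance stationary, MRL = 1/(1-0.5) = 2
walk = np.zeros(n)  # x_t = x_(t-1) + e_t            -> unit root, not covariance stationary
for i in range(1, n):
    ar1[i] = 1.0 + 0.5 * ar1[i - 1] + e[i]
    walk[i] = walk[i - 1] + e[i]

# first-half vs second-half means: roughly equal for the AR(1), typically far apart for the walk
print("AR(1) half-means:", ar1[: n // 2].mean(), ar1[n // 2 :].mean())
print("walk  half-means:", walk[: n // 2].mean(), walk[n // 2 :].mean())
```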
serial correlation in AR model
LOS 11.e
x_t = b0 + b1*x_(t-1) + e_t
- cannot use Durbin Watson to test for serial correlation in AR models
- use a t-test on residual autocorrelations; if serial correlation exists, then model is incomplete
- solution: add more lagged variables e.g. AR(1) → AR(2)
testing for serial correlation in AR model (used to test the “fit” of the autoregressive model)
LOS 11.e
- estimate AR model using linear regression; start with AR(1)
- calculate autocorrelations of the residuals (i.e. correlations between e_t and e_(t-1), e_t and e_(t-2), etc.)
- test for autocorrelation significance:
- t-test; H0: ρ(e_t, e_(t-k)) = 0
- t-stat = ρ(e_t, e_(t-k)) / [1/sqrt(T)], df = T - 2
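A sketch of this residual-autocorrelation t-test, assuming statsmodels and a simulated AR(1) series (illustrative only): each residual autocorrelation is divided by its standard error 1/sqrt(T).

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import acf

# simulate an AR(1) series and fit AR(1) by OLS (illustrative data)
rng = np.random.default_rng(3)
x = np.zeros(500)
for i in range(1, x.size):
    x[i] = 0.8 * x[i - 1] + rng.normal()

resid = sm.OLS(x[1:], sm.add_constant(x[:-1])).fit().resid
T = resid.size

# residual autocorrelations at lags 1..4 and their t-stats (standard error = 1/sqrt(T))
rho = acf(resid, nlags=4)[1:]
t_stats = rho * np.sqrt(T)
print("residual autocorrelations:", rho)
print("t-stats (|t| above the critical value suggests the AR(1) is misspecified):", t_stats)
```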
seasonality
LOS 11.l
seasonality - a time series showing consistent seasonal patterns
detecting seasonality - residual autocorrelations will show strong correlation (i.e. high t-stats) at certain (seasonal) lags
solution - bring the seasonal lag into the model (e.g. lag 4 for quarterly data, lag 12 for monthly data)
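A sketch of bringing a seasonal lag into the model, assuming statsmodels' AutoReg and a simulated quarterly-style series (hypothetical coefficients): the model includes both lag 1 and the seasonal lag 4.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# simulated series with a lag-4 (seasonal) dependence, illustrative only
rng = np.random.default_rng(4)
n = 200
x = np.zeros(n)
for i in range(4, n):
    x[i] = 1.0 + 0.3 * x[i - 1] + 0.5 * x[i - 4] + rng.normal()

# x_t = b0 + b1*x_(t-1) + b2*x_(t-4) + e_t
fit = AutoReg(x, lags=[1, 4]).fit()
print(fit.params)  # intercept, lag-1 coefficient, lag-4 coefficient
```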
mean reversion of time series
LOS 11.f
a time series is mean-reverting if the value of the dependent variable tends to move toward its mean; at the mean-reverting level, ^x_t = x_(t-1)
the mean-reverting level for AR(1) is
MRL = b0 / (1 - b1)
NOTE: all covariance stationary time series have a finite mean-reverting level. For AR(1) this means that |b1| < 1
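Worked example (hypothetical numbers): if b0 = 2 and b1 = 0.6, then MRL = 2 / (1 - 0.6) = 5; when x_(t-1) is above 5 the model predicts a move down toward 5, and when it is below 5 a move up toward 5.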
comparing forecast accuracy of models
LOS 11.g
in-sample forecasts
- forecasts (^y_t) are within the time series data sample range
- forecast accuracy: compare errors (y_t - ^y_t)
out-of-sample forecasts
- forecasts (^y_t) are outside the time series data sample range
- forecast accuracy: compare root mean squared errors (RMSE)
The model having lowest RMSE for out-of-sample data will have better predictive power in the future
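A minimal sketch of the RMSE comparison, assuming numpy and hypothetical hold-out actuals and forecasts (not from the curriculum):

```python
import numpy as np

def rmse(actual, forecast):
    # root mean squared error: sqrt of the mean squared forecast error
    return np.sqrt(np.mean((actual - forecast) ** 2))

# hypothetical out-of-sample actuals and two competing models' forecasts
actual     = np.array([2.1, 2.4, 2.0, 2.6])
forecast_a = np.array([2.0, 2.5, 2.2, 2.4])
forecast_b = np.array([2.3, 2.1, 1.7, 2.9])

print("model A RMSE:", rmse(actual, forecast_a))
print("model B RMSE:", rmse(actual, forecast_b))
# the model with the lower out-of-sample RMSE is preferred for forecasting
```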
regression coefficient instability
LOS 11.h
- instability (aka nonstationarity) - estimated regression coefficients can change (degrade) over time
- trade-off: a longer time series improves statistical reliability, but a shorter time series is more likely to have stable coefficients
- has economic progress or environment changed? If “yes” then historical data may not produce a reliable model
unit roots (def)
LOS 11.j,l
- In an AR(1) model, the absolute value of the lag coefficient must be < 1.0 for covariance stationarity
- if coefficient = 1 ⇒ unit root (i.e. b1 = 1)
- unit root is defining feature of random walks
- common in series that consistently increase or decrease over time (e.g. the 1990s stock market)
- Two types of random walks: “drift” and “no drift”
- w/o drift: x_t = x_(t-1) + e_t (note b0 = 0, b1 = 1)
- with drift: x_t = b0 + x_(t-1) + e_t, b0 ≠ 0
unit roots (impact, detection, correction)
LOS 11.i-k
impact: unit root models are not covariance stationary (i.e. “nonstationarity”)
MRL = b0 / (1 - b1) = b0 / 0, which is undefined (infinite); there is no finite mean-reverting level
detection: use Dickey-Fuller test for a unit root
correction: use first differencing to remove unit roots
Dickey-Fuller Test for a unit root
LOS 11.j-k
- transform AR(1) model: subtract xt-1 from both sides
x_t - x_(t-1) = b0 + (b1 - 1)x_(t-1) + e_t
= b0 + g1*x_(t-1) + e_t, where g1 = (b1 - 1)
if g1 = 0, then there is a unit root in AR(1) model
- DF test:
- H0: g1 = 0, the time series has a unit root and is therefore nonstationary
- Ha: g1 < 0, no unit root in time series
- modified t-test for g1 with special tc table
- if the null is rejected, then there is no unit root
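A sketch using statsmodels' adfuller (an augmented Dickey-Fuller implementation) on simulated data, illustrative only: the null hypothesis is a unit root, so a small p-value rejects the unit root.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
e = rng.normal(size=500)

walk = np.cumsum(e)                     # random walk: x_t = x_(t-1) + e_t, has a unit root
ar1 = np.zeros(500)
for i in range(1, 500):
    ar1[i] = 0.5 * ar1[i - 1] + e[i]    # |b1| < 1, no unit root

# adfuller returns (test stat, p-value, ...); H0 = unit root
print("random walk p-value:", adfuller(walk)[1])  # typically large -> fail to reject (unit root)
print("AR(1) p-value:      ", adfuller(ar1)[1])   # typically near zero -> reject (stationary)
```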
first differencing
LOS 11.i-k
- use First Differencing to remove unit roots
- create a new dependent variable, y, defined as the change in x:
- y_t = x_t - x_(t-1) = e_t ⇒ modeled as y_t = b0 + b1*y_(t-1) + e_t, with b0 = b1 = 0
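A sketch of first differencing, numpy only, on a simulated random walk (illustrative data): the differenced series is just the white-noise shocks and is covariance stationary.

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.cumsum(rng.normal(size=300))   # x_t = x_(t-1) + e_t, has a unit root

y = np.diff(x)                        # y_t = x_t - x_(t-1) = e_t
# y behaves like white noise: mean near 0 and negligible lag-1 autocorrelation
print("mean of y:", y.mean(), " lag-1 autocorr:", np.corrcoef(y[1:], y[:-1])[0, 1])
```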
autoregressive conditional heteroskedasticity (ARCH)
LOS 11.m
For a single time series such as an AR model, ARCH exists when the variance of the residuals in one period depends on (i.e. is a function of) the variance of the residuals in a previous period (for an ARCH(1) model, the immediately preceding period).
impact: causes incorrect standard errors of the coefficients in AR models, which makes the hypothesis tests of these coefficients invalid
detection: use ARCH(1) regression of squared residuals
solution: use generalized least squares (or other method) to correct for heteroskedasticity
testing for ARCH
LOS 11.m
To test for ARCH, use ARCH(1) model that regresses the squared residuals on the first lag of the squared residuals:
ê_t² = a0 + a1*ê_(t-1)² + u_t,
where a0 is a constant and ut is an error term
if the coefficient a1 is statistically different from zero, then the time series exhibits ARCH(1)
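A sketch of this ARCH(1) regression, assuming statsmodels and simulated residuals (the residuals here are plain white noise, so the test should typically find no ARCH; on real AR-model residuals a significant a1 would indicate ARCH(1)).

```python
import numpy as np
import statsmodels.api as sm

# stand-in for AR-model residuals; white noise, so no ARCH by construction
rng = np.random.default_rng(7)
resid = rng.normal(size=500)

# regress squared residuals on their first lag: ê_t² = a0 + a1*ê_(t-1)² + u_t
sq = resid ** 2
arch_fit = sm.OLS(sq[1:], sm.add_constant(sq[:-1])).fit()
print("a1 estimate:", arch_fit.params[1], " t-stat:", arch_fit.tvalues[1])
```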