Quant Methods #11 - Time-Series Analysis Flashcards

1
Q

types of time series models

A

LOS 11.a

Trend Models

  • linear: yt = b0 + b1t + et
  • log-linear: yt = e^(b0 + b1t) ⇒ ln(yt) = b0 + b1t + et

Lagged Models

  • autoregressive AR(p): xt = b0 + b1xt-1 + b2xt-2 + … + bpxt-p + et
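A minimal sketch of fitting each model type with numpy/statsmodels on simulated data; the series, seed, and coefficients are illustrative assumptions, not from the curriculum:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
t = np.arange(1, 101)
y = 10.0 + 0.3 * t + rng.normal(size=t.size)     # simulated upward-trending series

# linear trend: yt = b0 + b1t + et
b1_lin, b0_lin = np.polyfit(t, y, deg=1)         # polyfit returns [slope, intercept]

# log-linear trend: ln(yt) = b0 + b1t + et (requires yt > 0)
b1_log, b0_log = np.polyfit(t, np.log(y), deg=1)

# autoregressive AR(1): xt = b0 + b1*xt-1 + et
ar1 = AutoReg(y, lags=1).fit()
print(b0_lin, b1_lin, b0_log, b1_log, ar1.params)
```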
2
Q

serial correlation in Trend model

A

LOS 11.b

One assumption of linear regression is that regression residuals are uncorrelated with each other; violation of this assumption is called autocorrelation or serial correlation.

  • use Durbin Watson to test for serial correlation
  • try log-linear transform of time series data to remove it
  • if log-linear model still exhibits autocorrelation, try using autoregression model
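A minimal sketch of the Durbin-Watson check on a fitted trend model, using statsmodels (the simulated series is an assumption for illustration):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
t = np.arange(1, 101)
y = 10.0 + 0.3 * t + rng.normal(size=t.size)   # simulated trending series

trend = sm.OLS(y, sm.add_constant(t)).fit()    # yt = b0 + b1t + et
dw = durbin_watson(trend.resid)
print(dw)   # values near 2 suggest no serial correlation; compare against DW critical values
```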
3
Q

covariance stationary

A

LOS 11.c

To use AR models, time series must be covariance stationary. A time series is covariance stationary if all 3 of these are true:

  • constant & finite expected values (i.e. mean-reverting level)
  • constant & finite variance
  • constant & finite covariance between values at any given lead or lag

If the time series is not covariance stationary, the AR model will produce meaningless results.

4
Q

serial correlation in AR model

A

LOS 11.e

xt = b0 + b1xt-1 + et

  • cannot use Durbin Watson to test for serial correlation in AR models
  • use a t-test on residual autocorrelations; if serial correlation exists, then model is incomplete
  • solution: add more lagged variables e.g. AR(1) → AR(2)
5
Q

testing for serial correlation in AR model (used to test the “fit” of the autoregressive model)

A

LOS 11.e

  1. estimate AR model using linear regression; start with AR(1)
  2. calculate autocorrelations of the residuals (i.e. corr(et, et-1), corr(et, et-2), etc.)
  3. test for autocorrelation significance:
  • t-test; H0: ρ(et, et-k) = 0
  • t-stat = ρ(et, et-k) / (1/√T), df = T - 2
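A minimal sketch of these three steps on a simulated AR(1) series (the series, seed, and number of lags checked are illustrative assumptions):

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(2)
x = np.zeros(200)
for i in range(1, x.size):                 # simulate an AR(1) process
    x[i] = 1.0 + 0.6 * x[i - 1] + rng.normal()

res = AutoReg(x, lags=1).fit()             # step 1: estimate AR(1)
T = res.resid.size

rho = acf(res.resid, nlags=4)[1:]          # step 2: residual autocorrelations (lags 1-4)
t_stats = rho / (1.0 / np.sqrt(T))         # step 3: t = rho / (1/sqrt(T)), df = T - 2
print(t_stats)                             # significant |t| => model is incomplete
```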
6
Q

seasonality

A

LOS 11.l

seasonality - a time series showing consistent seasonal patterns

detecting seasonality - autocorrelations of the residuals will show strong correlation (i.e. high t-stats) at certain seasonal lags

solution - add the seasonal lag as an additional explanatory variable in the model
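A minimal sketch, assuming quarterly data so that the seasonal lag is lag 4 (simulated series; not from the curriculum):

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(3)
x = np.zeros(200)
for i in range(4, x.size):                      # series with a lag-4 (quarterly) pattern
    x[i] = 1.0 + 0.3 * x[i - 1] + 0.5 * x[i - 4] + rng.normal()

ar1 = AutoReg(x, lags=1).fit()
print(acf(ar1.resid, nlags=4)[4])               # large residual autocorrelation at lag 4

seasonal = AutoReg(x, lags=[1, 4]).fit()        # xt = b0 + b1*xt-1 + b2*xt-4 + et
print(acf(seasonal.resid, nlags=4)[4])          # near zero once the seasonal lag is included
```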

7
Q

mean reversion of time series

A

LOS 11.f

a time series is mean-reverting if the value of the dependent variable tends to move back towards its mean; at the mean-reverting level the model predicts no change (i.e. x̂t = xt-1)

the mean-reverting level for AR(1) is

MRL = b0 / (1 - b1)

NOTE: all covariance stationary time series have a finite mean-reverting level. For AR(1) this means that |b1| < 1
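For example, with hypothetical AR(1) coefficients b0 = 1.2 and b1 = 0.6 (so |b1| < 1):

```python
b0, b1 = 1.2, 0.6
mrl = b0 / (1.0 - b1)
print(mrl)   # 3.0: forecasts above 3.0 are expected to fall, forecasts below 3.0 to rise
```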

8
Q

comparing forecast accuracy of models

A

LOS 11.g

in-sample forecasts

  • forecasts (^yt) are within the time series data sample range
  • forecast accuracy: compare errors (yt - ^yt)

out-of-sample forecasts

  • forecasts (^yt) are outside the time series data sample range
  • forecast accuracy: compare root mean squared errors (RMSE)

The model with the lowest RMSE on out-of-sample data is expected to have the better predictive power going forward.
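A minimal sketch comparing out-of-sample RMSE for an AR(1) vs. an AR(2) model on a simulated series; the 200/50 split and model orders are illustrative assumptions:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(4)
x = np.zeros(250)
for i in range(1, x.size):
    x[i] = 0.5 + 0.7 * x[i - 1] + rng.normal()

train, test = x[:200], x[200:]                   # hold out the last 50 observations

def oos_rmse(lags):
    model = AutoReg(train, lags=lags).fit()
    pred = model.predict(start=200, end=249)     # out-of-sample forecasts
    return np.sqrt(np.mean((test - pred) ** 2))

print(oos_rmse(1), oos_rmse(2))   # prefer the model with the lower out-of-sample RMSE
```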

9
Q

regression coefficient instability

A

LOS 11.h

  • instability (aka nonstationarity) - estimated regression coefficients can change (degrade) over time
  • trade-off between statistical reliability of long time series and stability of short time series
  • has the underlying economic process or environment changed? If “yes,” then historical data may not produce a reliable model
10
Q

unit roots (def)

A

LOS 11.j,l

  • In a covariance stationary AR(1) model, the lag coefficient must satisfy |b1| < 1
  • if the coefficient equals 1 (i.e. b1 = 1), the series has a unit root
  • a unit root is the defining feature of a random walk
  • common in series that consistently increase or decrease over time (e.g. the 1990s stock market)
  • Two types of random walks: “drift” and “no drift”
    • w/o drift: xt = xt-1 + et (b0 = 0, b1 = 1)
    • with drift: xt = b0 + xt-1 + et, where b0 ≠ 0 and b1 = 1
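A minimal sketch simulating both random-walk forms (the drift value is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(5)
e = rng.normal(size=200)

no_drift = np.cumsum(e)              # xt = xt-1 + et           (b0 = 0, b1 = 1)
with_drift = np.cumsum(0.2 + e)      # xt = 0.2 + xt-1 + et     (b0 = 0.2, b1 = 1)
print(no_drift[-1], with_drift[-1])  # the drift series wanders upward on average
```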
11
Q

unit roots (impact, detection, correction)

A

LOS 11.i-k

impact: unit root models are not covariance stationary (i.e. “nonstationarity”)

MRL = b0 / (1 - b1) = b0 / 0 = infinity

detection: use Dickey-Fuller test for a unit root

correction: use first differencing to remove unit roots

12
Q

Dickey-Fuller Test for a unit root

A

LOS 11.j-k

  • transform AR(1) model: subtract xt-1 from both sides

xt - xt-1 = b0 + (b1 - 1)xt-1 + et
= b0 + g1xt-1 + et, where g1 = (b1 - 1)

if g1 = 0, then there is a unit root in AR(1) model

  • DF test:
    • H0: g1 = 0, the time series has a unit root and is therefore nonstationary
    • Ha: g1 < 0, no unit root in time series
    • modified t-test for g1 with special tc table
    • if the null is rejected, then there is no unit root
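A minimal sketch using statsmodels' augmented Dickey-Fuller test (a generalization of the test above) on a simulated random walk:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(6)
x = np.cumsum(rng.normal(size=300))   # random walk => has a unit root

stat, pvalue, *_ = adfuller(x)
print(stat, pvalue)                   # large p-value => cannot reject H0 (unit root)
```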
13
Q

first differencing

A

LOS 11.i-k

  • use First Differencing to remove unit roots
  • create a new dependent variable, y, defined as the change in x:
    • yt = xt - xt-1 = et ⇒ yt = b0 + b1yt-1 + et, where b0 = b1 = 0
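Continuing the sketch above: first-difference the unit-root series and re-run the Dickey-Fuller test (simulated data, illustrative only):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(7)
x = np.cumsum(0.1 + rng.normal(size=300))   # random walk with drift (unit root)

y = np.diff(x)                              # yt = xt - xt-1
stat, pvalue, *_ = adfuller(y)
print(pvalue)                               # small p-value => differenced series has no unit root
```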
14
Q

autoregressive conditional heteroskedasticity (ARCH)

A

LOS 11.m

For a single time series such as an AR model, ARCH exists when the variance of the residuals in one period depends on (i.e. is a function of) the variance of the residuals in a previous period (for an ARCH(1) model, the immediately preceding period).

impact: causes incorrect standard errors of the coefficients in AR models, which makes the hypothesis tests of these coefficients invalid

detection: use ARCH(1) regression of squared residuals

solution: use generalized least squares (or other method) to correct for heteroskedasticity

15
Q

testing for ARCH

A

LOS 11.m

To test for ARCH, use ARCH(1) model that regresses the squared residuals on the first lag of the squared residuals:

êt² = a0 + a1êt-1² + ut,

where a0 is a constant and ut is an error term

if the coefficient a1 is statistically different from zero, then the time series is ARCH(1)
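A minimal sketch of this regression; `resid` stands in for the residuals of a fitted AR model (simulated white noise here, so a1 should come out insignificant). statsmodels also provides statsmodels.stats.diagnostic.het_arch as a packaged version of this test:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
resid = rng.normal(size=300)                 # placeholder for AR-model residuals

sq = resid ** 2
X = sm.add_constant(sq[:-1])                 # lagged squared residuals
arch1 = sm.OLS(sq[1:], X).fit()              # êt² = a0 + a1*êt-1² + ut
print(arch1.tvalues[1], arch1.pvalues[1])    # significant a1 => ARCH(1) effects present
```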

16
Q

time series construction flowchart

A

LOS 11.o

17
Q

multiple time series

A

LOS 11.n

Before regressing one time series on another (e.g. yt = b0 + b1xt + et), check for covariance nonstationarity (unit roots) in both time series (separate DF tests for each).

  • both are covariance stationary → OK
  • only one is covariance stationary → not reliable
  • neither is covariance stationary → OK if cointegrated
18
Q

cointegration

A

LOS 11.n

cointegration - two time series are related to the same macro variables or follow the same trend

if two time series are cointegrated, the residuals from regressing one on the other are covariance stationary, and t-tests are therefore reliable.
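A minimal sketch using the Engle-Granger cointegration test from statsmodels on two simulated series that share a common stochastic trend (all values are illustrative assumptions):

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(9)
common = np.cumsum(rng.normal(size=300))    # shared random-walk component
y = common + rng.normal(size=300)
x = 0.5 * common + rng.normal(size=300)

stat, pvalue, _ = coint(y, x)
print(pvalue)    # small p-value => the two series are cointegrated
```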