Stationarity Flashcards

1
Q

What is a stochastic process?

A

A random process that evolves over time, e.g. x(t) = θx(t-1) + ε(t) (an AR(1) process)

2
Q

Is the probability distribution of ε(t) dependent or independent of time?

A

It is independent of time

3
Q

How do we write an AR process in MA form?

A

By backward substitution we have:
x(t-1) = θx(t-2) + ε(t-1)
x(t-2) = θx(t-3) + ε(t-2)
…until we arrive at:
x(t) = ε(t) + θε(t-1) + θ²ε(t-2) + … = ∑θⁱε(t-i)
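A minimal numpy sketch of this equivalence, with an assumed illustrative θ = 0.6: the AR(1) recursion and the backward-substituted MA sum give the same value.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.6                       # assumed AR coefficient, |theta| < 1
n = 500
eps = rng.standard_normal(n)      # white noise draws

# AR(1) recursion started from x(0) = 0
x = np.zeros(n)
for t in range(1, n):
    x[t] = theta * x[t - 1] + eps[t]

# MA form by backward substitution: x(t) = sum_i theta^i * eps(t - i)
t = n - 1
ma_form = sum(theta ** i * eps[t - i] for i in range(t))
```

With x(0) = 0 the two representations agree exactly (up to floating-point error).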

4
Q

What assumptions do we make about the white noise ε(t)?

A
  • It is Gaussian: ε(t) follows a normal distribution with expected value 0 and constant variance σ²
  • The error terms are uncorrelated, i.e. E[ε(t)ε(t-s)] = 0 for s ≠ 0
5
Q

What is the necessary condition for an AR(1) process to be stationary?

A

Autocorrelations must approach zero exponentially, which requires |θ| < 1. Equivalently, the root L = 1/θ of 1-θL = 0 must lie outside the unit circle, i.e. |L| > 1
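The condition can be checked through the root of the lag polynomial: for 1-θL = 0 the root is L = 1/θ, so |θ| < 1 holds exactly when the root lies outside the unit circle. A quick sketch with an assumed θ:

```python
theta = 0.8            # assumed coefficient, for illustration only
root = 1.0 / theta     # root of the lag polynomial 1 - theta*L = 0

# |theta| < 1 is equivalent to the root lying outside the unit circle
stationary = abs(theta) < 1
```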

6
Q

How do we calculate the values of Φ₁ and Φ₂?

A

Through factorising the lag polynomial:

  • Find the equation in the lag operator (i.e. 1-0.1L+1.2L²=0)
  • Solve it for the roots L₁ and L₂
  • Find Φ₁ and Φ₂ from the factorisation (1-Φ₁L)(1-Φ₂L)=0, where Φᵢ = 1/Lᵢ
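A numpy sketch of the factorisation step, using a hypothetical lag polynomial (not the deck's numbers) chosen to have real roots: 1-0.5L+0.06L² = (1-0.2L)(1-0.3L), so Φ₁ = 0.2 and Φ₂ = 0.3.

```python
import numpy as np

# np.roots expects coefficients from the highest power down:
# 0.06*L^2 - 0.5*L + 1 = 0
L_roots = np.roots([0.06, -0.5, 1.0])

# Phi_i = 1 / L_i, one Phi per root of the lag polynomial
phis = np.sort(1.0 / np.real(L_roots))
```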
7
Q

Is it possible to estimate the parameters of the process given that we only observe one realisation?

A

Yes, assuming the process is ergodic i.e. if the sample moments of a particular realisation approach the population moments of the process as the length of the realisation becomes infinite

8
Q

Is it possible to test for ergodicity?

A

No, so we rely on the weaker property of stationarity

9
Q

What is a strictly stationary stochastic process?

A

A stochastic process whose properties are unaffected by a change of time origin, i.e. the joint probability distribution of x(t₁), …, x(tₙ) must be the same as that of x(t₁+k), …, x(tₙ+k)

10
Q

What is a weak or covariance stationary stochastic process?

A

One where:

  • E[x(t)] is constant across time
  • V[x(t)] is constant across time
  • Cov[x(t),x(t-k)] and Corr[x(t),x(t-k)] depend only on the lag k, not on time

11
Q

What’s the difference between Autocorrelation and Partial-Autocorrelation?

A

AC is computed directly from sample moments, whereas PAC can be computed by OLS regression.
With AC, we compute the correlation between x(t) and x(t-1), then x(t) and x(t-2), directly; the PAC controls for all the intermediate lags between x(t) and x(t-k)
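A numpy sketch of one common convention (the helper names `acf` and `pacf` are my own): the autocorrelation from sample moments, and the partial autocorrelation as the last OLS coefficient of a regression of x(t) on its first k lags, shown on a simulated AR(1) series.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = np.zeros(n)
for t in range(1, n):                    # simulate an AR(1) with theta = 0.7
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()

def acf(x, k):
    # sample autocorrelation at lag k from moments
    xd = x - x.mean()
    return (xd[k:] * xd[:-k]).sum() / (xd * xd).sum()

def pacf(x, k):
    # regress x(t) on x(t-1), ..., x(t-k); the coefficient on x(t-k)
    # is the partial autocorrelation at lag k
    y = x[k:]
    X = np.column_stack([x[k - j: -j] for j in range(1, k + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[-1]
```

For an AR(1), the ACF decays geometrically while the PACF is near zero beyond lag 1, which is exactly the intermediate-lag effect the PAC removes.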

12
Q

Are MA(1) processes stationary?

A

Yes, because the mean and variance do not vary over time, nor does the covariance

13
Q

Why are PACs useful?

A

They capture the correlation between the series and its k-th lag while controlling for the effects of the intermediate lags

14
Q

What are the Yule-Walker equations?

A
For an AR(2) process x(t) = θ₁x(t-1) + θ₂x(t-2) + ε(t):
ρ(1) = θ₁ + θ₂ρ(1)
ρ(2) = θ₁ρ(1) + θ₂
15
Q

What is the calculation for correlation? (ρ)?

A

ρ(1) = γ(1)/γ(0), ρ(2) = γ(2)/γ(0)
OR
ρ(1) = θ₁/(1-θ₂), ρ(2) = θ₁²/(1-θ₂) + θ₂
where γ(k) is the covariance between x(t) and x(t-k), and γ(0) is the variance of x(t)
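A quick arithmetic check, with assumed illustrative coefficients, that these closed forms satisfy the Yule-Walker equations ρ(1) = θ₁ + θ₂ρ(1) and ρ(2) = θ₁ρ(1) + θ₂:

```python
# hypothetical AR(2) coefficients, assumed for illustration only
theta1, theta2 = 0.5, 0.3

# closed-form correlations from the card
rho1 = theta1 / (1 - theta2)
rho2 = theta1 ** 2 / (1 - theta2) + theta2
```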

16
Q

What is the MA(1) equation?

A

x(t) = ε(t) + αε(t-1). It is a function of the current error term and its past value, weighted by α

17
Q

What is the expected value of x(t) in an MA(1) process?

A

0

18
Q

What is the variance (γ(0)) of the process?

A

σ²(1+α²)

19
Q

What is the covariance (γ(1)) = E[x(t)x(t-1)] of the process?

A

ασ²

20
Q

What is the covariance (γ(k)) = E[x(t)x(t-k)] of the process?

A

0, where k≥2
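A Monte Carlo sketch of the MA(1) moments from the cards above; α and σ are assumed values for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, sigma = 0.5, 1.0                  # assumed MA(1) parameters
eps = sigma * rng.standard_normal(200_000)

# x(t) = eps(t) + alpha * eps(t-1)
x = eps[1:] + alpha * eps[:-1]

mean = x.mean()                          # expected value: 0
gamma0 = x.var()                         # variance: sigma^2 * (1 + alpha^2)
gamma1 = (x[1:] * x[:-1]).mean()         # lag-1 covariance: alpha * sigma^2
gamma2 = (x[2:] * x[:-2]).mean()         # lag-2 covariance: 0
```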

21
Q

Is the MA process weakly dependent?

A

Yes, because the autocorrelation moves quickly to 0

22
Q

What is the Stationarity condition of an ARMA(p,q) process?

A

|Φᵢ| < 1 for each i, where: θ(L) = (1-Φ₁L)(1-Φ₂L)…=0

23
Q

What is the invertibility condition of an ARMA(p,q) process?

A

|h| < 1, equivalently all roots satisfy |L| > 1, where: α(L) = (1-h₁L)(1-h₂L)…=0

24
Q

What are the steps to the Box-Jenkins method to fit an ARIMA process to a model?

A
  1. Identification (examine the correlogram)
  2. Estimation (fit an AR(1) process to the data)
  3. Diagnostic checking (examine the correlogram of the residuals)
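The three steps can be sketched in plain numpy for the simplest case, assuming the data really is AR(1): identify from the lag-1 autocorrelation, estimate by OLS, then check that the residuals look like white noise.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = np.zeros(n)
for t in range(1, n):                    # simulated AR(1), theta = 0.6
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

# 1. Identification: sample lag-1 autocorrelation of the series
rho1 = (x[1:] * x[:-1]).sum() / (x * x).sum()

# 2. Estimation: OLS slope of x(t) on x(t-1)
theta_hat = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])

# 3. Diagnostic checking: residual autocorrelation should be near zero
resid = x[1:] - theta_hat * x[:-1]
resid_rho1 = (resid[1:] * resid[:-1]).sum() / (resid * resid).sum()
```

In practice a library such as statsmodels would handle estimation and diagnostics; this only illustrates the logic of the three stages.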
25
Q

What is the Box-Jenkins philosophy? (i.e. how to choose which model is the best fit if a few work?)

A

Select the most parsimonious one - the one with the fewest parameters i.e. an ARMA(1,1) is preferred to an ARMA(3,0)

26
Q

How do we identify a unit root process?

A

The AC will decrease linearly and very slowly

27
Q

Is Bayesian Information Criterion (BIC) stricter than Akaike Information Criteria (AIC)?

A

Yes: it penalises extra parameters more heavily, so it leads to lower values of p and q being chosen in the ARMA(p,q) model

28
Q

How do we choose the best AIC and BIC?

A

When estimating a range of ARMA(p,q) models, we look for the lowest values of AIC and BIC

29
Q

How do we assess the performance of a forecasting model? (2 methods)

A

Root Mean Squared Errors

Theil’s U statistic

30
Q

What is the formula for RMSE?

A

√((1/k)∑(x-x̂)²), where x is the observed value and x̂ is the forecast (note: the √ is over everything). This is used to compare different models

31
Q

What is the formula for Theil’s U statistic?

A

RMSE/(√((1/k)∑x²)+√((1/k)∑x̂²))
If U = 0, the model predicts the data perfectly; if U = 1, there is a negative/opposite relation between actual and predicted values
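Sketch implementations of the two statistics (the function names and sample arrays are my own, for illustration only); exactly opposite forecasts give U = 1 and perfect forecasts give U = 0, matching the card.

```python
import numpy as np

def rmse(actual, forecast):
    # root mean squared error: sqrt of the mean squared forecast error
    return np.sqrt(np.mean((actual - forecast) ** 2))

def theils_u(actual, forecast):
    # RMSE normalised by the root mean squares of each series
    return rmse(actual, forecast) / (
        np.sqrt(np.mean(actual ** 2)) + np.sqrt(np.mean(forecast ** 2))
    )

actual = np.array([1.0, 2.0, 3.0])       # hypothetical observed values
perfect = actual.copy()                  # forecast equals the data
opposite = -actual                       # exactly opposite forecasts
```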

32
Q

How do static and dynamic forecasts differ depending on how far ahead we are forecasting?

A

If forecasting only 1 period ahead, static = dynamic. If forecasting two or more periods ahead:

  • Static → substitute actual values into the forecasting equation → the information set is lagged only one period into the past → x is used to forecast x̂
  • Dynamic → substitute forecast values into the forecasting equation → the information set is fixed at date T → x is used to forecast x̂(1), x̂(1) is used for x̂(2)…
  • Static forecasts have lower forecast error variance, i.e. narrower s.e. bands
  • s.e. = RMSE remains constant for static forecasts but increases with horizon for dynamic forecasts