Week 2 Flashcards
What is autocorrelation? Verbal and mathematical
Correlation between a variable (here: the error term) and its own lagged values
E(εiεj) = σij ≠ 0 for i != j
E(εε') = Ω = matrix of all the different σij -> symmetric and positive definite
Also possible heteroskedasticity
Visible in a scatter plot of residuals against lagged residuals - the points are correlated
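What the diagnostic plot picks up can be sketched numerically: simulate AR(1) errors (ρ = 0.8 is an assumed value for illustration) and compute the sample correlation between the residuals and their first lag.

```python
import numpy as np

# Minimal sketch: simulate AR(1) errors eps(i) = rho*eps(i-1) + u(i)
rng = np.random.default_rng(0)
n, rho = 500, 0.8
u = rng.standard_normal(n)
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = rho * eps[i - 1] + u[i]

# The plot pairs each residual with its first lag; their sample
# correlation is what the plot makes visible (close to rho here)
r1 = np.corrcoef(eps[1:], eps[:-1])[0, 1]
```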
What are the properties of OLS under autocorrelation?
- Unbiased
- Consistent
- Inefficient
- Incorrect SE
What estimators should we use if autocorrelation?
Newey-West Estimator Covariance Matrix
Where do the weights in the Newey-West estimator come from?
The Kernel function
What are the properties of Newey-West SE?
HAC (heteroskedasticity and autocorrelation consistent)
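The Newey-West covariance matrix can be sketched directly in NumPy; `newey_west_se` is a hypothetical helper name, and the Bartlett kernel weights w(l) = 1 - l/(L+1) are the standard choice for the kernel function mentioned above.

```python
import numpy as np

def newey_west_se(X, e, L):
    """Newey-West (HAC) standard errors, Bartlett-kernel weights.

    X : (n, k) regressor matrix, e : (n,) OLS residuals, L : max lag.
    """
    # Lag-0 term (White's heteroskedasticity-consistent core)
    S = (X * e[:, None]).T @ (X * e[:, None])
    for l in range(1, L + 1):
        w = 1.0 - l / (L + 1)                       # Bartlett kernel weight
        # Weighted lag-l autocovariance of the scores x(i)*e(i)
        G = (X[l:] * e[l:, None]).T @ (X[:-l] * e[:-l, None])
        S += w * (G + G.T)
    XtX_inv = np.linalg.inv(X.T @ X)
    V = XtX_inv @ S @ XtX_inv                       # sandwich form
    return np.sqrt(np.diag(V))
```

With L = 0 this reduces to White's heteroskedasticity-consistent SE, which is why NW SE also correct for heteroskedasticity.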
What is the idea of the Cochrane-Orcutt procedure?
Assume the errors follow an autoregressive model of order 1: ε(i) = ρε(i-1) + η(i)
Combine the original regression, its lagged version, and the AR(1) equation for the error term
Quasi-difference the data: regress y(i) - ρy(i-1) on the transformed regressors, with ρ estimated from the regression of the residual on its lag
What are the alternatives to Cochrane-Orcutt procedure?
NLS for y(i) = ρy(i-1) + β1(1-ρ) + β2(x(i) - ρx(i-1)) + η(i)
In Eviews, AR(1)
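The iterative Cochrane-Orcutt procedure can be sketched as follows; `cochrane_orcutt` is a hypothetical helper name, and the simulated data (intercept 1, slope 2, ρ = 0.6) are assumed values for illustration.

```python
import numpy as np

def cochrane_orcutt(y, x, n_iter=10):
    """Iterative Cochrane-Orcutt sketch: alternate between estimating
    rho from the OLS residuals and re-running OLS on the
    quasi-differenced data (the first observation is dropped)."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]     # initial OLS
    rho = 0.0
    for _ in range(n_iter):
        e = y - X @ beta
        # Estimate rho by regressing e(i) on e(i-1)
        rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])
        # Quasi-difference: y(i) - rho*y(i-1), x(i) - rho*x(i-1)
        ys = y[1:] - rho * y[:-1]
        xs = x[1:] - rho * x[:-1]
        # Intercept column is (1 - rho), so beta[0] stays on the
        # original scale
        Xs = np.column_stack([np.full(len(xs), 1.0 - rho), xs])
        beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
    return beta, rho
```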
What is the idea of GLS?
Transform data s.t. the conditions for efficient OLS hold
What is the Choleski decomposition?
PP’ = Ω
Transformed data: y* = P^(-1)y ; X* = P^(-1)X ; ε* = P^(-1)ε
Properties of GLS disturbances
Homoskedastic and no autocorrelation
=> OLS for transformed model efficient estimator for β
Compare GLS to Cochrane-Orcutt
- In GLS 1st observation is included
- In GLS scaling factor 1/σ(n)
GLS estimator + expected value and variance
b(GLS) = (X'Ω^(-1)X)^(-1)X'Ω^(-1)y ; E(b(GLS)) = β ; Var(b(GLS)) = (X'Ω^(-1)X)^(-1)
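The Choleski transformation and the closed-form GLS estimator above give the same answer, which a short NumPy sketch can verify (the AR(1) covariance structure with ρ = 0.5 is an assumed example of a known Ω):

```python
import numpy as np

# Sketch: GLS with a known AR(1) covariance structure (assumed example)
rng = np.random.default_rng(2)
n, rho, sigma2 = 200, 0.5, 1.0
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])

# Omega_ij = sigma2 * rho^|i-j| / (1 - rho^2) for AR(1) disturbances
idx = np.arange(n)
Omega = sigma2 * rho ** np.abs(idx[:, None] - idx[None, :]) / (1 - rho**2)

P = np.linalg.cholesky(Omega)            # P @ P.T = Omega
eps = P @ rng.standard_normal(n)         # errors with covariance Omega
y = X @ np.array([1.0, 2.0]) + eps

# Transform: y* = P^(-1) y, X* = P^(-1) X; then plain OLS
Pinv = np.linalg.inv(P)
ys, Xs = Pinv @ y, Pinv @ X
b_gls = np.linalg.lstsq(Xs, ys, rcond=None)[0]

# Closed form: b = (X' Omega^(-1) X)^(-1) X' Omega^(-1) y
Oinv = np.linalg.inv(Omega)
b_direct = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)
```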
Why do we need feasible GLS?
In practice often Ω unknown and have to estimate it
What are the steps of FGLS?
1) Estimate the covariance matrix
a) Apply OLS in y=Xβ + ε -> b consistent
b) Estimate Ω using the residuals e = y - Xb (structure must be imposed, e.g. AR(1), since the raw outer product ee' is singular)
2) Apply OLS on the transformed data
a) Use Ω^ to determine P^
b) Transform the data with the inverse of P^ : y* = (P^)^(-1)y and X* = (P^)^(-1)X
c) Estimate β with OLS in the model for the transformed data: y* = X*β + ε* -> b(FGLS)
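The FGLS steps above can be sketched for the AR(1) case; `fgls_ar1` is a hypothetical helper name, and parameterizing Ω^ through an estimated ρ is one common way to impose structure (the overall scale of Ω cancels out of the estimator).

```python
import numpy as np

def fgls_ar1(y, X):
    """FGLS sketch assuming AR(1) disturbances.
    1) OLS -> residuals -> rho-hat; 2) build Omega-hat, transform
    the data with the inverse Choleski factor, re-run OLS."""
    n = len(y)
    b = np.linalg.lstsq(X, y, rcond=None)[0]        # step 1a: OLS
    e = y - X @ b                                   # step 1b: residuals
    rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])      # AR(1) parameter
    idx = np.arange(n)
    Omega = rho ** np.abs(idx[:, None] - idx[None, :])  # up to scale
    P = np.linalg.cholesky(Omega)                   # step 2a: P-hat
    Pinv = np.linalg.inv(P)
    ys, Xs = Pinv @ y, Pinv @ X                     # step 2b: transform
    return np.linalg.lstsq(Xs, ys, rcond=None)[0]   # step 2c: b(FGLS)
```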
What is the null hypothesis of the autocorrelation tests?
No autocorrelation
What is the equation for the autocorrelation of residuals?
r(k) = Σe(i)e(i-k)/Σe(i)^2 : first sum is from i=k+1 to n; second sum i=1 to n
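The formula translates directly into NumPy; `resid_autocorr` is a hypothetical helper name.

```python
import numpy as np

def resid_autocorr(e, k):
    """r(k) = sum_{i=k+1..n} e(i)e(i-k) / sum_{i=1..n} e(i)^2."""
    e = np.asarray(e, dtype=float)
    return (e[k:] @ e[:-k]) / (e @ e)

# For e = [1, 1, -1, -1]: r(1) = (1 - 1 + 1)/4 = 0.25
```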
Durbin-Watson test
DW = Σ(i=2->n) (e(i) - e(i-1))^2 / Σ(i=1->n) e(i)^2 ≈ 2(1 - r(1))
DW ≈ 0 when r(1) = 1 (perfect positive correlation)
DW ≈ 4 when r(1) = -1 (perfect negative correlation)
Under H0 (no autocorrelation): DW ≈ 2
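A quick sketch confirms the approximation DW ≈ 2(1 - r(1)) and that white-noise residuals give a value near 2; `durbin_watson` is a hypothetical helper name.

```python
import numpy as np

def durbin_watson(e):
    """DW = sum_{i=2..n} (e(i) - e(i-1))^2 / sum_i e(i)^2."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / (e @ e)
```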
What are the disadvantages of the Durbin-Watson test?
- Distribution under H0 depends on the properties of regressors
- Not applicable when lagged dependent variables are included as regressors
Box-Pierce Test
H0: No autocorrelation
BP = nΣ(k=1 -> p) r(k)^2 ≈ χ2(p)
Ljung-Box Test
LB = n(n+2) Σ(k=1 -> p) r(k)^2/(n-k) ≈ χ2(p)
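Both statistics can be sketched from the r(k) formula; `box_pierce` and `ljung_box` are hypothetical helper names. Since (n+2)/(n-k) > 1 for every lag, LB is always slightly larger than BP in finite samples.

```python
import numpy as np

def _autocorrs(e, p):
    """Residual autocorrelations r(1), ..., r(p)."""
    e = np.asarray(e, dtype=float)
    return np.array([(e[k:] @ e[:-k]) / (e @ e) for k in range(1, p + 1)])

def box_pierce(e, p):
    """BP = n * sum_k r(k)^2, ~ chi2(p) under H0."""
    n = len(e)
    return n * np.sum(_autocorrs(e, p) ** 2)

def ljung_box(e, p):
    """LB = n(n+2) * sum_k r(k)^2 / (n-k), ~ chi2(p) under H0."""
    n = len(e)
    r = _autocorrs(e, p)
    return n * (n + 2) * np.sum(r ** 2 / (n - np.arange(1, p + 1)))
```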
What type of test is a Breusch-Godfrey test?
Lagrange Multiplier (LM) test
Which is the procedure for the Breusch-Godfrey test?
1) OLS on y(i) = x(i)’β + ε(i)
2) Run the auxiliary regression of the residuals e(i) on x(i) and the lagged residuals e(i-1), ..., e(i-p)
3) Under H0 (no autocorrelation) have nR^2 ≈ χ2(p)
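The two-step procedure can be sketched as follows; `breusch_godfrey` is a hypothetical helper name, and setting the missing initial lagged residuals to zero is one common convention.

```python
import numpy as np

def breusch_godfrey(y, X, p):
    """LM test sketch: regress the OLS residuals on X plus p lagged
    residuals; under H0 (no autocorrelation), n * R^2 of this
    auxiliary regression ~ chi2(p). Missing initial lags set to 0."""
    n = len(y)
    b = np.linalg.lstsq(X, y, rcond=None)[0]    # step 1: OLS
    e = y - X @ b
    lags = np.zeros((n, p))                     # lagged residual columns
    for k in range(1, p + 1):
        lags[k:, k - 1] = e[:-k]
    Z = np.hstack([X, lags])                    # step 2: auxiliary regressors
    g = np.linalg.lstsq(Z, e, rcond=None)[0]
    resid_aux = e - Z @ g
    ec = e - e.mean()
    r2 = 1.0 - (resid_aux @ resid_aux) / (ec @ ec)
    return n * r2                               # step 3: LM statistic
```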
What is the main difference between the Box-Pierce and Ljung-Box test?
The Ljung-Box test adds the finite-sample correction factor (n+2)/(n-k) to each term; the Box-Pierce test is the uncorrected version, so the two are asymptotically equivalent
What happens to the significance of the parameter of the independent variable with NW SE?
The significance may change
What happens to the marginal effect of the independent variables with NW SE?
Doesn’t change
Do NW SE automatically correct for possible heteroskedasticity?
Yes
Do NW SE not harm i.e. should they always be used?
False - if in fact Ω = σ^2*I (no heteroskedasticity or autocorrelation), the ordinary SE are already valid; the NW SE then only add estimation uncertainty and are less efficient