Week 12: Chapter 12 Flashcards
Write out the adjusted Var(β̂1) formula
notes
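The card defers to notes; for reference, the standard textbook form (simple regression with AR(1) errors with parameter ρ, treating the regressors as fixed) is:

```latex
\operatorname{Var}(\hat\beta_1)
  = \frac{\sigma^2}{SST_x}
    \left[\, 1 + \frac{2}{SST_x}\sum_{t=1}^{n-1}\sum_{j=1}^{n-t}\rho^{\,j}\,x_t x_{t+j} \right],
\qquad SST_x = \sum_{t=1}^{n} x_t^2
```

When ρ = 0 this collapses to the usual OLS variance σ²/SSTx.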
OLS is only inconsistent with serial correlation and a lagged dependent variable if we assume …
ut follows an AR(1) model
If et is uncorrelated with yt−1, then Cov(yt−1, ut) = ρCov(yt−1, ut−1), which is not zero unless ρ = 0. What happens when ρ ≠ 0?
OLS estimators of 𝛽0 and 𝛽1 are inconsistent
OLS is only inconsistent with serial correlation and a lagged dependent variable if we assume ut follows an AR(1) model. Write the formula for an AR(1) model with E(…)
ut = ρut−1 + et, with E(et|ut−1,ut−2,…) = E(et|yt−1,yt−2,…) = 0
What does the adjusted Var(β̂j) assume?
- Errors follow an AR(1) process
- Homoskedasticity
What's the serial correlation-robust standard error of β̂1?
see notes
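The card defers to notes; the usual Newey-West style formula (as in standard textbooks), for β̂1 in a regression of yt on xt1, …, xtk, is:

```latex
\operatorname{se}(\hat\beta_1)
  = \left[\frac{\text{``}\operatorname{se}(\hat\beta_1)\text{''}}{\hat\sigma}\right]^{2}\sqrt{\hat v}\,,
\qquad
\hat v = \sum_{t=1}^{n}\hat a_t^{\,2}
       + 2\sum_{h=1}^{g}\left[1-\frac{h}{g+1}\right]\sum_{t=h+1}^{n}\hat a_t\,\hat a_{t-h}
```

where ât = r̂t·ût, r̂t are the residuals from regressing xt1 on the other regressors, "se(β̂1)" is the usual OLS standard error, and g is the truncation lag.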
write the formula for the truncation lag
notes
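The card defers to notes; common textbook rules of thumb for the truncation lag are:

```latex
g = \left\lfloor 4\,(n/100)^{2/9} \right\rfloor
\qquad\text{or}\qquad
g = \left\lfloor n^{1/4} \right\rfloor
```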
what does g stand for in the truncation lag equation?
g is the truncation lag, which controls how much serial correlation is allowed for in the model
What does HAC stand for?
Heteroskedasticity and autocorrelation consistent standard errors
What are the two standard AR(1) model assumptions?
E(et|ut−1,ut−2,…) = 0 and Var(et|ut−1) = Var(et) = σe²
A t-test for serial correlation with strictly exogenous regressors, what is the equation and what is the null?
ut = ρut−1 + et
H0: ρ = 0
What are the steps for testing serial correlation with strictly exogenous regressors?
- Estimate the OLS regression of yt on the regressors and obtain the residuals, ût
- Regress ût on ût−1 to obtain ρ̂ and its t statistic, tρ̂
- Use tρ̂ to test H0: ρ = 0. Serial correlation is a problem if H0 is rejected at the 5% level.
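The steps above can be sketched with simulated data (hypothetical model y = 1 + 2x + u with AR(1) errors, ρ = 0.5, so the test should reject; numpy only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated (hypothetical) data: y = 1 + 2x + u, u_t = 0.5 u_{t-1} + e_t
n = 500
x = rng.normal(size=n)
e = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.5 * u[t - 1] + e[t]
y = 1.0 + 2.0 * x + u

# Step 1: OLS of y on x, save the residuals u_hat
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
uhat = y - X @ beta

# Step 2: regress u_hat_t on u_hat_{t-1} (with intercept) for rho_hat and t_rho
Z = np.column_stack([np.ones(n - 1), uhat[:-1]])
g = np.linalg.lstsq(Z, uhat[1:], rcond=None)[0]
resid = uhat[1:] - Z @ g
sigma2 = resid @ resid / (len(resid) - 2)
cov = sigma2 * np.linalg.inv(Z.T @ Z)
rho_hat = g[1]
t_rho = rho_hat / np.sqrt(cov[1, 1])

# Step 3: reject H0: rho = 0 at the 5% level if |t_rho| > 1.96
print(f"rho_hat = {rho_hat:.3f}, t = {t_rho:.2f}")
```

With ρ = 0.5 and n = 500 the t statistic is far above 1.96, so H0 is rejected and serial correlation is flagged.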
Write the equation for the DW test (Durbin-Watson)
see notes
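The card defers to notes; the Durbin-Watson statistic, computed from the OLS residuals ût, is:

```latex
DW = \frac{\sum_{t=2}^{n}\left(\hat u_t - \hat u_{t-1}\right)^2}{\sum_{t=1}^{n}\hat u_t^{\,2}}
\;\approx\; 2\,(1-\hat\rho)
```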
What are the conditions for the DW test to reject/not reject the hypotheses?
H0: ρ=0, H1: ρ>0 and for ρˆ≈ 0, DW ≈ 2
* DW < dL: we reject H0
* DW > dU: we cannot reject H0
* dL ≤ DW ≤ dU: the test is inconclusive
In correcting for serial correlation, what can be done AND what is required?
Use GLS and strict exogeneity is required
Transform the model to get rid of the serial correlation in ut
see notes
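The transformation referred to is quasi-differencing. For yt = β0 + β1xt + ut with ut = ρut−1 + et, subtract ρ times the lagged equation:

```latex
\tilde y_t = y_t - \rho y_{t-1}, \qquad \tilde x_t = x_t - \rho x_{t-1},
\qquad
\tilde y_t = (1-\rho)\beta_0 + \beta_1 \tilde x_t + e_t, \quad t \ge 2
```

The transformed error et is serially uncorrelated, so OLS on the transformed data is GLS.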
what are y ̃ and x ̃ called?
quasi-differenced data
What's the problem with FGLS in AR(1)?
ρ is rarely known, but we can estimate ρ̂ and use that instead
FGLS has no finite sample properties, which has two major implications
- FGLS estimators are biased but they can be consistent if the data is weakly dependent
- t and F statistics are only approximately t and F distributed due to the estimation error in ρ̂
HOW is FGLS still useful?
FGLS is asymptotically more efficient than OLS when the errors are AR(1) and the explanatory variables are strictly exogenous
there are two variants of FGLS that differ in their treatment of ρ, name them and which one is used more
- Cochrane-Orcutt (CO) → omits the first observation
- Prais-Winsten (PW) → uses a transformed first observation
- Asymptotically, CO and PW are identical, but in small samples, we want to use PW
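A minimal Prais-Winsten sketch on simulated data (hypothetical model y = 1 + 2x + u with AR(1) errors, ρ = 0.7; numpy only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated (hypothetical) data: y = 1 + 2x + u, u_t = 0.7 u_{t-1} + e_t
n = 400
x = rng.normal(size=n)
e = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + e[t]
y = 1.0 + 2.0 * x + u

# Step 1: pooled OLS, save residuals
X = np.column_stack([np.ones(n), x])
uhat = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Step 2: estimate rho from regressing uhat_t on uhat_{t-1} (no intercept)
rho = (uhat[:-1] @ uhat[1:]) / (uhat[:-1] @ uhat[:-1])

# Step 3: quasi-difference. Prais-Winsten keeps the first observation,
# scaled by sqrt(1 - rho^2); Cochrane-Orcutt would simply drop it.
w = np.sqrt(1 - rho**2)
y_t = np.concatenate([[w * y[0]], y[1:] - rho * y[:-1]])
X_t = np.vstack([w * X[0], X[1:] - rho * X[:-1]])

# OLS on the transformed data gives the FGLS estimates of (beta0, beta1)
beta_fgls = np.linalg.lstsq(X_t, y_t, rcond=None)[0]
print(beta_fgls)
```

The estimates land close to the true (1, 2); the only difference from Cochrane-Orcutt is the transformed first row of `y_t` and `X_t`.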
What are the benefits of differencing in time series?
- it will often eliminate most of the serial correlation in the errors if in the AR error process, ρ is positive and large
What happens to Var(ut) without differencing?
It grows over time
BUT differenced, it has a zero mean, constant variance and is serially uncorrelated
What are two things you need to note before testing for heteroskedasticity
- Errors are uncorrelated (first test/correct for serial correlation, then test for heteroskedasticity)
- Errors are homoskedastic
What does ARCH stand for
Autoregressive conditional heteroskedasticity
Write Var(ut|Z) for a first-order ARCH model
E(ut²|ut−1, ut−2,…) = E(ut²|ut−1) = α0 + α1(ut−1)²
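A quick simulation of an ARCH(1) process (hypothetical parameters α0 = 0.5, α1 = 0.4) shows the conditional variance moving with (ut−1)² while the unconditional variance stays constant:

```python
import numpy as np

rng = np.random.default_rng(2)

# ARCH(1): u_t = z_t * sqrt(a0 + a1 * u_{t-1}^2), z_t i.i.d. N(0,1)
# (hypothetical parameters a0 = 0.5, a1 = 0.4)
a0, a1 = 0.5, 0.4
n = 20000
z = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = z[t] * np.sqrt(a0 + a1 * u[t - 1] ** 2)

# Conditional variance depends on u_{t-1}, but the unconditional
# variance is constant: Var(u_t) = a0 / (1 - a1) when a1 < 1.
print(u.var(), a0 / (1 - a1))

# Regressing u_t^2 on u_{t-1}^2 (with intercept) recovers roughly (a0, a1).
Z = np.column_stack([np.ones(n - 1), u[:-1] ** 2])
coef = np.linalg.lstsq(Z, u[1:] ** 2, rcond=None)[0]
print(coef)
```

This is why ARCH is "conditional" heteroskedasticity: the process is unconditionally homoskedastic, yet its squared values are serially correlated.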