Week 12: Chapter 12 Flashcards

1
Q

Write out the adjusted Var(𝛽^1) formula

A

see notes

2
Q

OLS is only inconsistent with serial correlation and a lagged dependent variable if we
assume …

A

if we assume ut follows an AR(1) model

3
Q

et is uncorrelated with yt−1, but Cov(yt−1, ut) = ρCov(yt−1, ut−1), which is not zero unless ρ = 0. What happens when ρ ≠ 0?

A

OLS estimators of 𝛽0 and 𝛽1 are inconsistent

4
Q

OLS is only inconsistent with serial correlation and a lagged dependent variable if we
assume ut follows an AR(1) model. Write the E(…) assumption for this AR(1) model

A

E(et|ut−1,ut−2,…) = E(et|yt−1,yt−2,…) = 0

5
Q

What does the adjusted Var(𝛽^j) assume?

A
  • Errors follow an AR(1) process
  • Homoskedasticity
6
Q

What's the serial-correlation-robust standard error of 𝛽^1?

A

see notes

7
Q

Write the formula for the truncation lag

A

see notes

8
Q

What does g stand for in the truncation lag equation?

A

g is the truncation lag, which controls how much serial correlation is allowed in the model
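The robust standard error and the truncation lag g fit together. Below is a minimal numpy sketch of a Newey-West-style serial-correlation-robust standard error for a simple regression, using the standard Bartlett weights 1 − h/(g+1), plus one common rule of thumb for g (the function name and test data are mine, not from the notes):

```python
import numpy as np

def sc_robust_se(x, y, g):
    """Serial-correlation-robust (Newey-West style) standard error of the
    slope in the simple regression y = b0 + b1*x + u, truncation lag g."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta              # OLS residuals
    r = x - x.mean()              # demeaned regressor
    a = r * u
    # weighted sum of autocovariances of a_t (Bartlett kernel weights)
    v = np.sum(a ** 2)
    for h in range(1, g + 1):
        v += 2 * (1 - h / (g + 1)) * np.sum(a[h:] * a[:-h])
    return np.sqrt(v) / np.sum(r ** 2)   # sqrt(v) / SST_x

# one common rule of thumb for the truncation lag: g grows slowly with n
n = 200
g = int(4 * (n / 100) ** (2 / 9))
print(g)   # 4
```

With g = 0 this collapses to the heteroskedasticity-robust standard error; larger g allows more serial correlation into the variance estimate.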

9
Q

What does HAC stand for?

A

Heteroskedasticity- and autocorrelation-consistent standard errors

10
Q

What are the two standard AR(1) model assumptions?

A

E(et|ut−1, ut−2, …) = 0 and Var(et|ut−1) = Var(et) = σe^2

11
Q

For a t-test for serial correlation with strictly exogenous regressors, what is the equation and what is the null?

A

ut = ρut−1 + et
H0: ρ = 0

12
Q

What are the steps for testing serial correlation with strictly exogenous regressors?

A
  1. Estimate the OLS regression of yt and obtain the residuals, uˆt
  2. Regress uˆt on uˆt−1 to obtain ρˆ and its t statistic, tρˆ
  3. Use tρˆ to test H0: ρ = 0. Serial correlation is a problem if H0 is rejected at the 5% level.
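The three steps can be sketched in numpy. The data are simulated with ρ = 0.6 so the test should reject; regressing uˆt on uˆt−1 through the origin (residuals have mean zero) is a simplification of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + rng.normal()      # AR(1) errors with rho = 0.6
y = 2.0 + 1.5 * x + u

# Step 1: OLS of y on x; save the residuals
X = np.column_stack([np.ones(n), x])
uhat = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Step 2: regress uhat_t on uhat_{t-1} (through the origin)
z, w = uhat[:-1], uhat[1:]
rho_hat = np.sum(z * w) / np.sum(z ** 2)
resid = w - rho_hat * z
se_rho = np.sqrt(np.sum(resid ** 2) / (len(w) - 1) / np.sum(z ** 2))
t_rho = rho_hat / se_rho

# Step 3: compare t_rho with the 5% critical value (about 1.96)
print(rho_hat, t_rho)
```

Here t_rho comes out far above 1.96, so H0: ρ = 0 is rejected, as it should be for this data.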
13
Q

Write the equation for the DW test (Durbin-Watson)

A

see notes

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
14
Q

What are the conditions for the DW test to reject/not reject the hypotheses?

A

H0: ρ = 0, H1: ρ > 0, and for ρˆ ≈ 0, DW ≈ 2
  • If DW < dL, we can reject H0
  • If DW > dU, we cannot reject H0
  • If dL ≤ DW ≤ dU, the test is inconclusive
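The DW statistic behind these conditions is the sum of squared changes in the residuals over their sum of squares, with DW ≈ 2(1 − ρˆ); a minimal sketch (the sanity-check data is mine):

```python
import numpy as np

def durbin_watson(uhat):
    """DW = sum_t (uhat_t - uhat_{t-1})^2 / sum_t uhat_t^2 ≈ 2(1 - rho_hat)."""
    return np.sum(np.diff(uhat) ** 2) / np.sum(uhat ** 2)

# sanity check: for serially uncorrelated residuals, rho_hat ≈ 0 and DW ≈ 2
rng = np.random.default_rng(2)
print(durbin_watson(rng.normal(size=5000)))
```

In the extreme positive-correlation direction the residuals barely change, the numerator shrinks, and DW falls toward 0, which is why small DW values reject H0.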

15
Q

In correcting for serial correlation, what can be done AND what is required?

A

Use GLS and strict exogeneity is required

16
Q

Transform the model to get rid of the serial correlation in ut

A

see notes

17
Q

What are ỹ and x̃ called?

A

quasi-differenced data
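Quasi-differencing is simple to state in code: if ut = ρut−1 + et, then ut − ρut−1 = et, which is serially uncorrelated. A minimal sketch (the helper name is mine):

```python
import numpy as np

def quasi_difference(v, rho):
    """Return v_t - rho * v_{t-1} for t = 2, ..., n; drops the first
    observation, as Cochrane-Orcutt does."""
    return v[1:] - rho * v[:-1]

y = np.array([1.0, 2.0, 4.0, 7.0])
print(quasi_difference(y, 0.5))   # 1.5, 3.0, 5.0
```

Applying the same transform to y and to every regressor gives the quasi-differenced data on which GLS runs OLS.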

18
Q

What's the problem with FGLS in the AR(1) case?

A

ρ is rarely known, but we can estimate ρˆ and use that instead

19
Q

FGLS has no tractable finite-sample properties, which has two major implications

A
  • FGLS estimators are biased but they can be consistent if the data is weakly dependent
  • t and F statistics are only approximately t and F distributed due to the estimation error in
    ρˆ
20
Q

HOW is FGLS still useful?

A

FGLS is asymptotically more efficient than OLS when the errors are AR(1) and the explanatory variables are strictly exogenous

21
Q

There are two variants of FGLS that differ in their treatment of ρ; name them and say which one is used more

A
  • Cochrane-Orcutt (CO) → omits the first observation
  • Prais-Winsten (PW) → uses a transformed first observation
  • Asymptotically, CO and PW are identical, but in small samples, we want to use PW
22
Q

What are the benefits of differencing in time series?

A
  • it will often eliminate most of the serial correlation in the errors if in the AR error process, ρ is positive and large
23
Q

What happens to Var(ut) without differencing?

A

It grows over time
BUT differenced, it has zero mean, constant variance, and is serially uncorrelated
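The ρ = 1 case makes this concrete: a random walk ut = ut−1 + et has Var(ut) growing with t, but its first difference is exactly the i.i.d. shock et (a numpy sketch with simulated shocks of my own):

```python
import numpy as np

rng = np.random.default_rng(3)
e = rng.normal(size=2000)   # i.i.d. shocks: zero mean, constant variance
u = np.cumsum(e)            # random walk: Var(u_t) = t * Var(e_t), grows over time

# first-differencing recovers the well-behaved shocks exactly
print(np.allclose(np.diff(u), e[1:]))   # True
```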

24
Q

What are two things you need to note before testing for heteroskedasticity?

A
  • Errors are uncorrelated (first test/correct for serial correlation, then test for heteroskedasticity)
  • Errors are homoskedastic
25
Q

What does ARCH stand for?

A

Autoregressive conditional heteroskedasticity
Var(ut |Z)

26
Q

First-order ARCH model

A

E(ut^2 | ut−1, ut−2, …) = E(ut^2 | ut−1) = α0 + α1ut−1^2
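A quick check of the ARCH(1) form: simulate ut with conditional variance α0 + α1ut−1^2, then regress ut^2 on a constant and ut−1^2 to recover the coefficients (the simulation parameters are illustrative assumptions of mine):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
a0, a1 = 1.0, 0.5          # assumed true ARCH(1) parameters (illustrative)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rng.normal() * np.sqrt(a0 + a1 * u[t - 1] ** 2)

# regress u_t^2 on a constant and u_{t-1}^2 to estimate a0 and a1
X = np.column_stack([np.ones(n - 1), u[:-1] ** 2])
a0_hat, a1_hat = np.linalg.lstsq(X, u[1:] ** 2, rcond=None)[0]
print(a0_hat, a1_hat)
```

The estimates land near the true (1.0, 0.5); in practice the same regression on squared OLS residuals is a common way to check for ARCH effects.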