L13 - Testing for serial correlation Flashcards
How do you derive the Durbin Watson Test statistic for serial correlation?
- Under the null hypothesis that there is no autocorrelation we have E(DW) = 2 (see the sketch after this list for where this comes from)
- If there is positive autocorrelation then E(DW)<2. The critical bounds are as given in the tables i.e. dU and dL.
- If there is negative autocorrelation then E(DW)>2. The critical bounds are calculated as 4-dU and 4-dL.
- Note that DW is bounded between 0 and 4.
- The values given in the tables are for a one-tailed test. For a two sided alternative we must double the significance level e.g. what appears as 5% in the tables will actually be for a 10% significance level.
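A minimal sketch of the statistic itself and of why E(DW) = 2 under the null, using the OLS residuals e_t (standard textbook form, not copied from the slides):

```latex
DW = \frac{\sum_{t=2}^{T}\left(e_t - e_{t-1}\right)^2}{\sum_{t=1}^{T} e_t^2}
   \approx 2\left(1 - \hat\rho\right)
```

where ρ(hat) is the first-order autocorrelation of the residuals, so ρ(hat) = 0 gives DW ≈ 2, ρ(hat) → 1 gives DW → 0, and ρ(hat) → −1 gives DW → 4 (hence the 0 to 4 bounds above).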
When do you accept/reject the null of a Durbin Watson Test statistic?
- e.g. H0: ρ = 0 and H1: ρ > 0
we reject H0 if DW is less than the lower bound dL
we do not reject (accept) H0 if DW is greater than the upper bound dU
if DW lies between the bounds we can't make a decision –> this is called the region of uncertainty/indeterminacy
–> this inconclusive region is the drawback of the Durbin Watson test (a worked example follows this card)
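A small sketch of running the test in practice, assuming Python with statsmodels and made-up data (the variable names, sample size and parameter values are illustrative, not from the lecture):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Illustrative data: y regressed on a single regressor x (hypothetical values)
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 1.0 + 2.0 * x + rng.normal(size=100)

X = sm.add_constant(x)
res = sm.OLS(y, X).fit()

dw = durbin_watson(res.resid)   # DW statistic computed from the OLS residuals
print(dw)
# Compare dw with the tabulated bounds dL and dU for n = 100 and one regressor:
# reject H0 of no (positive) autocorrelation if dw < dL, do not reject if dw > dU,
# and make no decision if dL <= dw <= dU.
```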
What is Durbin’s h test?
- for both the Durbin Watson test and Durbin's h test we are only looking at the 1st order autocorrelation coefficient
In this test ρ(hat) = the estimated 1st order autocorrelation coefficient of the residuals (a sketch of the statistic follows this card)
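A sketch of the usual textbook form of the statistic (the slide's own equation is not reproduced here). Durbin's h is the standard replacement for DW when a lagged dependent variable appears among the regressors, since DW is then biased towards 2. With T the sample size and Var(γ(hat)) the estimated variance of the coefficient on the lagged dependent variable:

```latex
h = \hat\rho \,\sqrt{\frac{T}{1 - T\,\widehat{\mathrm{Var}}(\hat\gamma)}},
\qquad \hat\rho \approx 1 - \frac{DW}{2}
```

Under H0 of no first-order autocorrelation, h is approximately standard normal; the formula breaks down if T·Var(γ(hat)) ≥ 1.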
What is the Breusch-Godfrey test?
- used when we want to test for higher-order autocorrelation, i.e. an error process with more than one lag (a sketch of the set-up follows this card)
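A sketch of how the test is usually set up (standard textbook form, with p an illustrative number of lags): run the original regression, then regress the OLS residuals e_t on the original regressors and p lags of the residuals, and base the test on the auxiliary R²:

```latex
e_t = X_t'\delta + \phi_1 e_{t-1} + \cdots + \phi_p e_{t-p} + v_t,
\qquad LM = (T - p)\,R^2_{aux} \sim \chi^2_p \ \text{under } H_0
```

(Some textbooks use T·R² instead of (T − p)·R²; the asymptotic distribution is the same.)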
What is the Box-Ljung test?
- also called the Q statistic
ρ(hat)j is the sample autocorrelation of the residuals at lag j (a sketch of the statistic follows this card)
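A sketch of the usual form of the Q statistic for m lags (standard formula; m is chosen by the user and is illustrative here):

```latex
Q = T(T+2)\sum_{j=1}^{m}\frac{\hat\rho_j^{\,2}}{T-j} \sim \chi^2_m \ \text{under } H_0
```

Under the null of no autocorrelation up to lag m, Q is asymptotically chi-squared with m degrees of freedom.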
What is another name for tests that test for serial correlation?
Tests of this kind, in which the serial correlation can be of a very general form, are sometimes referred to as portmanteau tests
What is the effect of autocorrelated errors on OLS estimation?
Example: OLS with AR(1) error
Yt = βXt + ut
ut = ρut-1 + εt
- OLS is unbiased because the proof of unbiasedness does not depend on GM2.
- However, the proof of the Gauss-Markov theorem does depend on GM2.
- Therefore OLS is no longer BLUE. It may be possible to find more efficient estimators.
Basically, if autocorrelated errors do appear, OLS is still unbiased, but it is no longer BLUE because GM2 is not met (the simulation sketch below illustrates this)
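A small Monte Carlo sketch of this point, assuming Python with numpy and statsmodels (the values β = 2, ρ = 0.8, Φ = 0.8, n = 200 and the data are illustrative, not from the lecture): the average of the OLS estimates stays close to the true β (unbiasedness), while the conventional OLS standard error understates the actual sampling spread. The regressor is made persistent so that the standard-error bias is clearly visible (see the later card on autocorrelated X).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
beta, rho, phi, n, reps = 2.0, 0.8, 0.8, 200, 2000   # illustrative values

# Build one persistent (AR(1)) regressor and keep it fixed across replications
x = np.zeros(n)
shocks = rng.normal(size=n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + shocks[t]

betas, reported_se = [], []
for _ in range(reps):
    eps = rng.normal(size=n)
    u = np.zeros(n)
    for t in range(1, n):                 # AR(1) error: u_t = rho*u_{t-1} + eps_t
        u[t] = rho * u[t - 1] + eps[t]
    y = beta * x + u
    res = sm.OLS(y, x).fit()
    betas.append(res.params[0])
    reported_se.append(res.bse[0])

print("mean of beta-hat:     ", np.mean(betas))        # close to 2 -> OLS still unbiased
print("actual sd of beta-hat:", np.std(betas))         # true sampling variability
print("mean reported OLS se: ", np.mean(reported_se))  # much smaller -> conventional se is biased
```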
What is the effect of autocorrelated errors on coefficient standard errors?
AR(1) –> first-order autocorrelation in the error term
Why do we get errors in the standard errors when Gauss-Markov assumption 2 (GM2) does not hold?
- If we expand the variance of β(hat) we get the second equation on the slide (sketched below this card): if GM2 holds, all the cross-product terms cancel out and we are left with the usual variance formula
- However, if GM2 does not hold then these cross-product terms will not be zero
- This is what leads to bias in the OLS standard errors
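A sketch of the expansion referred to above, for the simple model Yt = βXt + ut with non-stochastic X (standard result; the slide's own equation is not reproduced):

```latex
\hat\beta = \beta + \frac{\sum_t X_t u_t}{\sum_t X_t^2},
\qquad
\mathrm{Var}(\hat\beta)
= \frac{1}{\left(\sum_t X_t^2\right)^2}
  \left[\sum_t X_t^2\,\mathrm{Var}(u_t)
  + \sum_t \sum_{s\neq t} X_t X_s\,\mathrm{Cov}(u_t,u_s)\right]
```

Under GM2, Cov(ut, us) = 0 for t ≠ s, so the cross-product sum vanishes and Var(β(hat)) collapses to the usual σ²/ΣXt². If the errors are autocorrelated, the cross-product sum does not vanish, and the conventional OLS standard error no longer measures the true sampling variability.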
What happens when the X values are autocorrelated?
Xt is autocorrelated with last period's value Xt-1, with coefficient Φ
- therefore the cross-product terms now depend on both ρ and Φ, and the bias in the conventional OLS standard errors is compounded (a sketch of the standard result follows at the end of this card)
REVIEW SECOND TO LAST SLIDE
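A sketch of the standard result this card appears to lead up to (assuming the error follows AR(1) with parameter ρ, X follows AR(1) with parameter Φ, and using the approximation ΣXtXt+k ≈ Φ^k ΣXt²):

```latex
\mathrm{Var}(\hat\beta)
\approx \frac{\sigma_u^2}{\sum_t X_t^2}
\left[1 + 2\sum_{k=1}^{\infty}(\rho\Phi)^k\right]
= \frac{\sigma_u^2}{\sum_t X_t^2}\cdot\frac{1+\rho\Phi}{1-\rho\Phi}
```

So when ρ and Φ are both positive, the true variance of β(hat) exceeds the conventional σ²/ΣXt² formula, and the reported standard errors are too small (t statistics too large).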