Section 6 Flashcards
Difference between homoskedasticity and heteroskedasticity?
Homo: var(ε_i) = σ² for all i
Hetero (violation): var(ε_i) = σ_i²
See the example and the bit below it in the notes
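Not in the notes, but a minimal simulation sketch of the two cases (all names and numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 10, n)

# Homoskedastic: var(eps_i) = sigma^2, the same sigma for every observation
eps_homo = rng.normal(0, 2.0, n)

# Heteroskedastic: var(eps_i) = sigma_i^2, here the sd grows with x_i
eps_hetero = rng.normal(0, 0.5 * x)

print("homo sd overall  :", eps_homo.std())
print("hetero sd, low x :", eps_hetero[x < 3].std())
print("hetero sd, high x:", eps_hetero[x > 8].std())
```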
If we estimate an MRM and ignore heteroskedastic errors, what are the consequences? (2)
- OLS estimators are still unbiased!
- The formulas for the variances of the OLS estimators are incorrect, since they were derived under the homoskedasticity assumption (both points are demonstrated in the sketch below)
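A hedged Monte Carlo sketch of both consequences, under an assumed DGP where the error sd grows with x (everything below is illustrative, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, beta = 200, 2000, 1.5
x = rng.uniform(1, 10, n)
xc = x - x.mean()
sxx = (xc ** 2).sum()

slopes, nominal_se = [], []
for _ in range(reps):
    eps = rng.normal(0, 0.5 * x)                 # heteroskedastic errors
    y = beta * x + eps
    b = (xc * (y - y.mean())).sum() / sxx        # OLS slope
    resid = (y - y.mean()) - b * xc
    s2 = (resid ** 2).sum() / (n - 2)
    slopes.append(b)
    nominal_se.append(np.sqrt(s2 / sxx))         # homoskedastic variance formula

print("mean OLS slope      :", np.mean(slopes))      # close to 1.5 -> still unbiased
print("true sampling sd    :", np.std(slopes))       # the actual variability
print("avg nominal (OLS) se:", np.mean(nominal_se))  # typically differs -> formula wrong
```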
Why can’t we just correct the variance equations for the OLS estimators? Solution to this?
Because even with the hetero-adjusted variance, OLS is not the best estimator, since it no longer has the smallest variance and is therefore not efficient
Therefore use weighted least squares (WLS) instead
Explain how to do White’s test?
See notes: regress the squared OLS residuals on the regressors, their squares, and their cross-products, then test whether all the slope coefficients of this auxiliary regression are jointly zero (using nR² ~ χ²)
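As a sketch, statsmodels ships this test as het_white; the data below is simulated purely for illustration:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(2)
n = 300
X = sm.add_constant(rng.uniform(1, 10, (n, 2)))
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(0, 0.4 * X[:, 1], n)

# White's test on the squared OLS residuals
resid = sm.OLS(y, X).fit().resid
lm_stat, lm_pval, f_stat, f_pval = het_white(resid, X)
print(f"nR^2 = {lm_stat:.2f}, p-value = {lm_pval:.4f}")  # small p -> reject homoskedasticity
```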
Why do the restrictions for White's test imply homoskedasticity?
See notes: if all the slope coefficients in the auxiliary regression are zero, the regression reduces to var(ε_i) = α_0, a constant for every i, which is exactly homoskedasticity
Note regarding White’s test?
When testing, the null hypothesis H0 should restrict only the slope coefficients of the auxiliary regression to zero; it should not include α_0 (the intercept)
Explain how to do the variant of White's test?
Same as before, but also calculate the fitted Y values and use these in the error regression (they implicitly contain all the X combinations from before). Then do a t-test to see whether the coefficient on Ŷ in the error regression equals 0 or not (see equations in notes)
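A minimal sketch of this variant, assuming the auxiliary regression is the squared residuals on the fitted Ŷ values (simulated data, illustrative names):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
X = sm.add_constant(rng.uniform(1, 10, (n, 2)))
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(0, 0.4 * X[:, 1], n)

fit = sm.OLS(y, X).fit()

# Auxiliary regression: squared residuals on the fitted values
aux = sm.OLS(fit.resid ** 2, sm.add_constant(fit.fittedvalues)).fit()
print("t-stat on fitted Y:", aux.tvalues[1])
print("p-value           :", aux.pvalues[1])  # small p -> heteroskedasticity
```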
What is the Goldfeld-Quandt test and when is it used?
Used when it is known that the variance of the error term changes with the value of a particular regressor X_i. Sort the observations by X_i, omit some central observations, run separate regressions on the two sub-samples, and compare their residual variances with an F test (see notes)
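A hedged sketch using statsmodels' het_goldfeldquandt, sorting on the suspect regressor and dropping a middle band (the drop fraction is an illustrative choice):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_goldfeldquandt

rng = np.random.default_rng(4)
n = 200
x = rng.uniform(1, 10, n)
X = sm.add_constant(x)
y = 1.0 + 0.5 * x + rng.normal(0, 0.4 * x)   # variance grows with x

# Sort by column 1 (the suspect regressor), drop the middle fifth,
# and F-test the ratio of residual variances from the two sub-regressions
f_stat, p_val, _ = het_goldfeldquandt(y, X, idx=1, drop=0.2)
print(f"F = {f_stat:.2f}, p-value = {p_val:.4f}")
```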
2 solutions if transforming the variables to logs doesn’t eliminate the heteroskedasticity?
Weighted least squares
OR
White’s HTSK-consistent variance estimator
See and learn: WLS (in notes)
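A minimal WLS sketch, assuming var(ε_i) ∝ X_i² so the weights are 1/X_i² (a common textbook weighting; the notes may use a different scheme):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
x = rng.uniform(1, 10, n)
X = sm.add_constant(x)
y = 1.0 + 0.5 * x + rng.normal(0, 0.4 * x)   # sd proportional to x

# Weights are inverse variances: var(eps_i) proportional to x_i^2 => w_i = 1/x_i^2
wls = sm.WLS(y, X, weights=1.0 / x ** 2).fit()
ols = sm.OLS(y, X).fit()
print("OLS se(slope):", ols.bse[1])
print("WLS se(slope):", wls.bse[1])   # WLS is efficient here, so typically smaller
```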
See and learn: White's heteroskedasticity-consistent variance estimator (in notes)
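A sketch of White's heteroskedasticity-consistent covariance via statsmodels (HC0 is White's original estimator; the data is simulated for illustration):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 300
x = rng.uniform(1, 10, n)
X = sm.add_constant(x)
y = 1.0 + 0.5 * x + rng.normal(0, 0.4 * x)

naive = sm.OLS(y, X).fit()                  # homoskedastic variance formula
robust = sm.OLS(y, X).fit(cov_type="HC0")   # White's HTSK-consistent estimator
print("naive se(slope) :", naive.bse[1])
print("robust se(slope):", robust.bse[1])   # valid even under heteroskedasticity
```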
Why does GLS solve the problem of autocorrelation?
By transforming the model, GLS transforms the error term into u_t, which satisfies all the classical assumptions, so the transformed model can be estimated using OLS
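A worked sketch of the transformation, assuming the standard AR(1) case (the notes may use different symbols):

```latex
\text{Model: } Y_t = \beta_1 + \beta_2 X_t + \varepsilon_t,
\qquad \varepsilon_t = \rho\,\varepsilon_{t-1} + u_t .

\text{Quasi-differencing: }
Y_t - \rho Y_{t-1}
  = \beta_1 (1-\rho) + \beta_2\,(X_t - \rho X_{t-1})
  + \underbrace{(\varepsilon_t - \rho\,\varepsilon_{t-1})}_{=\,u_t},
```

so the transformed model has error u_t, which satisfies the classical assumptions.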