Week 2: Chapter 8 Flashcards

1
Q

Heteroskedasticity

A

The error variance is not constant across observations; it changes with the explanatory variables (violating MLR5), e.g. Var(u|x) = σ^2 x
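
For contrast, a minimal sketch of the homoskedasticity assumption this violates (MLR5) versus one heteroskedastic case:

```latex
% Homoskedasticity (MLR5): the error variance does not depend on the regressors
\operatorname{Var}(u \mid x_1, \dots, x_k) = \sigma^2
% Heteroskedasticity: the error variance changes with x, for example
\operatorname{Var}(u \mid x) = \sigma^2 x
```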

2
Q

What Hetero DOES NOT affect

A
  • Whether the OLS estimators are unbiased and consistent (they remain both)
  • Goodness-of-fit measures (R^2 and adjusted R^2)
3
Q

What Hetero DOES affect

A
  • OLS is no longer BLUE.
  • The usual estimators of Var(Bj) are biased, which invalidates t-tests, F-tests and confidence intervals.
  • OLS is no longer asymptotically efficient.
4
Q

Var(Bj) under SLR

A

Look @ notes
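
For reference, a sketch of the standard result for the simple regression slope under heteroskedasticity (assuming this is the formula the notes contain):

```latex
% Model y_i = \beta_0 + \beta_1 x_i + u_i with \operatorname{Var}(u_i \mid x_i) = \sigma_i^2:
\operatorname{Var}(\hat\beta_1)
  = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2 \, \sigma_i^2}{\mathrm{SST}_x^{2}},
\qquad
\mathrm{SST}_x = \sum_{i=1}^{n} (x_i - \bar{x})^2
% A valid (robust) estimator replaces \sigma_i^2 with the squared OLS residual \hat{u}_i^2.
```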

5
Q

Var(Bj) under MLR

A

Look @ notes
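
For reference, a sketch of the heteroskedasticity-robust (White) variance estimator in the multiple regression case (assuming this matches the notes):

```latex
% Let \hat{r}_{ij} be the residuals from regressing x_j on all other regressors,
% and \mathrm{SSR}_j the sum of squared residuals from that regression:
\widehat{\operatorname{Var}}(\hat\beta_j)
  = \frac{\sum_{i=1}^{n} \hat{r}_{ij}^{2} \, \hat{u}_i^{2}}{\mathrm{SSR}_j^{2}}
```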

6
Q

Reasons to use WLS over OLS

A

If the variance function is correctly specified, WLS is more efficient than OLS and its standard errors and test statistics are valid.
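
A minimal sketch of the comparison in Python with statsmodels; the simulated data and variable names are illustrative, not from the notes:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 10, n)
# Heteroskedastic errors: Var(u|x) = sigma^2 * x, i.e. h(x) = x
u = rng.normal(0, np.sqrt(x))
y = 1.0 + 2.0 * x + u

X = sm.add_constant(x)

# OLS with heteroskedasticity-robust (White) standard errors
ols_robust = sm.OLS(y, X).fit(cov_type="HC1")

# WLS with the correctly specified weights 1/h(x): more efficient than OLS
wls = sm.WLS(y, X, weights=1.0 / x).fit()

print(ols_robust.bse)  # robust OLS standard errors
print(wls.bse)         # WLS standard errors (typically smaller)
```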

7
Q

WLS formula: The form of Hetero is known

A

Var(u|x) = σ^2h(x)

8
Q

Transform model with heteroskedastic errors

A

notes
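
A sketch of the standard transformation, assuming Var(u|x) = σ^2 h(x) as on the previous card:

```latex
% Divide the original equation through by \sqrt{h_i}, where h_i = h(x_i):
\frac{y_i}{\sqrt{h_i}}
  = \beta_0 \frac{1}{\sqrt{h_i}}
  + \beta_1 \frac{x_{i1}}{\sqrt{h_i}}
  + \cdots
  + \beta_k \frac{x_{ik}}{\sqrt{h_i}}
  + \frac{u_i}{\sqrt{h_i}}
% The transformed error is homoskedastic:
\operatorname{Var}\!\left( \frac{u_i}{\sqrt{h_i}} \;\middle|\; x_i \right)
  = \frac{\sigma^2 h_i}{h_i} = \sigma^2
```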

9
Q

Why do we divide by root hi?

A

To transform the heteroskedastic errors into homoskedastic errors with constant variance

10
Q

GLS, why do we use it?

A

A technique used when correlation is suspected between the residuals.

11
Q

GLS: Interpret B1 after transformation

A

β1 is the change in yi/√hi given a one-unit change in xi1/√hi, ceteris paribus.

12
Q

When do we use GLS/WLS?

A
  • When the errors are dependent, we can use generalized least squares (GLS).
  • When the errors are independent, but not identically distributed, we can use weighted least squares (WLS), which is a special case of GLS
13
Q

Feasible GLS (FGLS) estimator

A

When the form of heteroskedasticity is unknown, estimate h(x) from the data and weight each observation by 1/h^i.

14
Q

FGLS

A

Var(u|x) = σ^2 exp(δ0 + δ1x1 + ⋯ + δkxk)v
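
A minimal FGLS sketch in Python under this variance model (the function name and data layout are illustrative; X is assumed to include a constant):

```python
import numpy as np
import statsmodels.api as sm

def fgls(y, X):
    """Feasible GLS assuming Var(u|x) = sigma^2 * exp(X @ delta).
    X should already include a constant column."""
    # Step 1: OLS to get residuals
    u_hat = sm.OLS(y, X).fit().resid

    # Step 2: regress log(u^2) on the regressors, keep the fitted values
    g_hat = sm.OLS(np.log(u_hat ** 2), X).fit().fittedvalues

    # Step 3: h_hat = exp(g_hat) > 0 by construction; weight by 1/h_hat
    h_hat = np.exp(g_hat)
    return sm.WLS(y, X, weights=1.0 / h_hat).fit()
```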

15
Q

Why do we use an exponential function in FGLS?

A

It is required that our estimated variances be positive to use WLS. However, linear models do not guarantee that the predicted values produced are positive. Using a non-linear model ensures that we have strictly positive predicted values

16
Q

What does the v mean in FGLS?

A

v is a multiplicative error with mean equal to unity, conditional on x.

17
Q

Assuming v is independent of x, rewrite the equation in log form to linearise the model

A

log (u^2) = … look @ notes
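
Assuming v is independent of x, a sketch of the standard linearised form (likely what the notes show):

```latex
% Starting from u^2 = \sigma^2 \exp(\delta_0 + \delta_1 x_1 + \cdots + \delta_k x_k)\, v,
% take logs:
\log(u^2) = \alpha_0 + \delta_1 x_1 + \cdots + \delta_k x_k + e
% where e = \log(v) is independent of x, and \alpha_0 absorbs \log(\sigma^2), \delta_0
% and E[\log(v)]. In practice u is replaced by the OLS residual \hat{u}.
```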

18
Q

What does the Breusch-Pagan test assume about u^2?

A

Test assumes u^2 is a linear function of the independent variables.

19
Q

Write out the steps for the BP test

A

Find in notes
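
For reference, a sketch of the textbook BP steps in Python (statsmodels also offers het_breuschpagan as a shortcut); the function name is illustrative and X is assumed to include a constant:

```python
import statsmodels.api as sm
from scipy import stats

def breusch_pagan(y, X):
    """Breusch-Pagan LM test; X includes a constant, k = number of slope regressors."""
    n, k_plus_1 = X.shape
    k = k_plus_1 - 1

    # Step 1: estimate the model by OLS and obtain the squared residuals
    u_hat2 = sm.OLS(y, X).fit().resid ** 2

    # Step 2: regress u_hat^2 on the independent variables, keep R^2
    r2 = sm.OLS(u_hat2, X).fit().rsquared

    # Step 3: LM statistic = n * R^2, which is chi^2_k under H0 (homoskedasticity)
    lm = n * r2
    p_value = stats.chi2.sf(lm, df=k)
    return lm, p_value
```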

20
Q

White test assumption

A

MLR5 can be replaced with a WEAKER assumption that u^2 is uncorrelated with the explanatory variables, the squared independent variables and their cross-products.
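
A quick sketch of running the White test with statsmodels' het_white; the simulated data are purely illustrative:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(1)
n = 300
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
X = sm.add_constant(np.column_stack([x1, x2]))
y = 1 + 0.5 * x1 - 0.3 * x2 + rng.normal(scale=np.exp(0.5 * x1))

results = sm.OLS(y, X).fit()

# White test: regress squared residuals on the regressors, their squares and
# cross-products, then form the LM statistic n * R^2
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(results.resid, results.model.exog)
print(lm_pvalue)  # small p-value -> reject H0 of homoskedasticity
```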

21
Q

Weakness of the White test?

A

It uses up many degrees of freedom; fix this by using the modified (special case) White test.

22
Q

Special Case white test

A

Regresses the squared residuals on the fitted values and the squared fitted values; it will ALWAYS have TWO restrictions.
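
A sketch of the special-case regression, assuming the fitted values come from the original OLS estimation:

```latex
% Regress the squared OLS residuals on the fitted values and their square:
\hat{u}^2 = \delta_0 + \delta_1 \hat{y} + \delta_2 \hat{y}^2 + \text{error}
% Test H_0 : \delta_1 = \delta_2 = 0 (always two restrictions) with an F test,
% or with the LM statistic n R^2 \sim \chi^2_2 under H_0.
```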

23
Q

If you reject the null, what does this mean?

A

There is evidence of heteroskedasticity within the model

24
Q

What does the LPM always violate?

A

It always violates the homoskedasticity assumption (MLR5), unless all the slope parameters are zero.

25
Q

Why does the LPM always violate MLR5? How do we overcome this?

A

Because Var(y|x) = Var(u|x) = p(x)[1-p(x)], which depends on x unless all slopes are zero.
Overcome this by estimating the LPM using FGLS.
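
The one-line reasoning behind this, sketched in LaTeX:

```latex
% y is binary, so conditional on x it is Bernoulli with success probability
% p(x) = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k:
\operatorname{Var}(y \mid x) = p(x)\,[1 - p(x)]
% This varies with x unless every slope coefficient is zero, so MLR5 fails.
```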

26
Q

For the LPM we need to weight each observation i by 1/h^i; what does h^i entail?

A

h^i = y^i(1-y^i), NEED TO ENSURE 0 < y^i < 1
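
A minimal sketch of WLS estimation of the LPM with these weights (the function name is illustrative; it assumes every fitted value lies strictly between 0 and 1, as the next card discusses):

```python
import numpy as np
import statsmodels.api as sm

def lpm_wls(y, X):
    """Estimate a linear probability model by WLS with h_hat = y_hat * (1 - y_hat).
    X includes a constant; y is 0/1."""
    # Step 1: OLS fit to get fitted probabilities y_hat
    y_hat = sm.OLS(y, X).fit().fittedvalues

    # WLS requires 0 < y_hat < 1 for every observation
    if np.any((y_hat <= 0) | (y_hat >= 1)):
        raise ValueError("Some fitted values lie outside (0, 1); "
                         "drop/adjust them or use OLS with robust SEs.")

    # Step 2: weight each observation by 1 / h_hat
    h_hat = y_hat * (1.0 - y_hat)
    return sm.WLS(y, X, weights=1.0 / h_hat).fit()
```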

27
Q

How do we ensure 0 < y^i < 1 in LPM?

A
  • Throw away observations whose fitted values fall outside 0 < y^i < 1, then apply WLS to the rest
  • Alternatively, use OLS with heteroskedasticity-robust standard errors