lecture 5 Flashcards

1
Q

zero conditional mean

A
  • knowing x tells us nothing about the expected value of the error (it is zero for every value of x)
  • the expected value of Y given X lies on the regression line
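In symbols (standard textbook notation, not from the card itself), the zero conditional mean assumption and its "on the line" consequence are:

```latex
E[u_i \mid X_i = x] = 0 \quad \text{for all } x
\;\;\Longrightarrow\;\;
E[Y_i \mid X_i = x] = \beta_0 + \beta_1 x
```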
2
Q

we’ve assumed that the error distribution is centered on the regression line BUT

A

our idealized picture suggests something more
- that the variance is constant for all values of X

3
Q

that the error variance is constant for all values of X IS NOT

A

this is not one of our assumptions

4
Q

in reality our little normal distributions don’t all look the same

A

they have different spreads

5
Q

all the little normal distributions look exactly the same; the variance of the error term in a regression model is constant

A

homoskedastic

6
Q

the little normal distributions do not look the same; the variance of the error term in a regression model is not constant

A

heteroskedastic

7
Q

two assumptions that we have not made

A
  1. errors are homoskedastic
  2. errors are normally distributed
8
Q

robust SE

A
  • if errors are heteroskedastic and you use plain SEs, your estimates of the standard errors will be wrong
  • use robust SEs and your estimates will be fine either way, even if the errors turn out to be homoskedastic
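Not from the card itself, but as a minimal sketch of what "plain vs robust" means mechanically: the pure-Python simulation below (variable names and simulated data are my own) draws heteroskedastic errors whose spread grows with x, then computes the classical SE and the HC0 (White) robust SE for the slope.

```python
import random
import math

random.seed(0)
n = 2000
x = [random.uniform(0, 10) for _ in range(n)]
# heteroskedastic errors: the spread of u grows with x
u = [random.gauss(0, 0.5 + 0.5 * xi) for xi in x]
y = [1.0 + 2.0 * xi + ui for xi, ui in zip(x, u)]

xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)

# OLS slope and intercept, then residuals
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b0 = ybar - b1 * xbar
resid = [yi - b0 - b1 * xi for xi, yi in zip(x, y)]

# plain SE assumes one common error variance for every observation
s2 = sum(e ** 2 for e in resid) / (n - 2)
se_plain = math.sqrt(s2 / sxx)

# HC0 robust SE lets each observation keep its own squared residual
se_robust = math.sqrt(sum(((xi - xbar) ** 2) * e ** 2
                          for xi, e in zip(x, resid)) / sxx ** 2)

print(b1, se_plain, se_robust)
```

With variance rising in x, the robust SE comes out larger than the plain one here; the plain SE understates the true sampling uncertainty of the slope.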
9
Q

when the variance of u decreases/increases, the estimate of the slope gets more precise

A

decreases

10
Q

as n increases/decreases, our estimate of the slope gets more precise

A

increases

11
Q

less variance for ____; more variance for ____

A

u, x

12
Q

as the variance of x increases/decreases, our estimate of the slope gets more precise

A

increases
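All three comparative statics above can be read off the standard large-sample variance of the OLS slope under homoskedasticity (standard notation, not from the cards):

```latex
\operatorname{Var}(\hat{\beta}_1) \;\approx\; \frac{\sigma_u^2}{n \, \sigma_x^2}
```

Precision rises when the error variance $\sigma_u^2$ falls, when the sample size $n$ grows, and when the spread of the regressor $\sigma_x^2$ grows.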

13
Q

gauss-markov theorem

A
  • if the three least squares assumptions hold and the errors are homoskedastic, then the OLS slope estimator is BLUE
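Stated a little more precisely (standard textbook form, assuming the lecture’s three least squares assumptions match the usual ones):

```latex
E[u_i \mid X_i] = 0,\quad (X_i, Y_i)\ \text{i.i.d.},\quad \text{finite fourth moments},\quad
\operatorname{Var}(u_i \mid X_i) = \sigma_u^2
\;\;\Longrightarrow\;\;
\hat{\beta}_1^{\,OLS} \text{ has minimum variance among linear unbiased estimators.}
```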
14
Q

BLUE

A

B- best: minimum variance estimator
L- linear function of the dependent variable; a restriction on how we estimate the line
U- unbiased: the estimator’s expected value equals the true parameter
E- estimator

15
Q

sweet spot for our 1-3 assumptions

A

OLS is unbiased
- but other linear estimators may do better (have lower variance)

16
Q

in addition to the least squares assumptions, there are additional regression assumptions that can be made

A

homoskedasticity and normality of the error term

17
Q

problem with homoskedasticity and normality of the error term

A

these assumptions are too unrealistic

18
Q

what we can do with homoskedasticity and normality of the error term

A
  • small sample inference
  • easier OLS variance estimators
  • BLUE