Week 3: Chapters 5 & 9 Flashcards

1
Q

GM assumptions 1-5: what kind of sampling property do they give?

A

finite-sample property
- Holds for any sample size n, so long as n > k+1

2
Q

CLM assumptions 1-6

A

Exact sampling distributions: under MLR.1-6, t and F statistics have exact t and F distributions in any sample size
- Violation of MLR.6 (normality) invalidates this exact inference

3
Q

Two implications of a large sample (as n approaches infinity)

A
  • B^j has an approximately normal distribution as n approaches infinity
  • t and F statistics have approximately t and F distributions as n approaches infinity
4
Q

Unbiased VS Consistency

A

Unbiasedness:
- On average B^ equals B
- The midpoint of the distribution of B^ is B
- Nothing to do with spread or distribution of B^

Consistency:
- As we add more observations, the sampling distribution of B^ becomes more tightly concentrated around B
- Tells you about the spread and distribution of B^ as n grows

5
Q

Consistency definition

A

B^j is consistent when its sampling distribution becomes more and more tightly concentrated around Bj, collapsing to the single point Bj as n tends to infinity (plim B^j = Bj)
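As a quick illustration (a hypothetical numpy simulation, not from the notes): the spread of the OLS slope estimate across repeated samples shrinks as n grows, which is consistency at work.

```python
import numpy as np

# Hypothetical simulation: the sampling distribution of the OLS slope
# tightens around the true beta1 = 2.0 as n grows (consistency).
rng = np.random.default_rng(0)
beta0, beta1 = 1.0, 2.0

def slope_sd(n, reps=2000):
    """Std. dev. of the OLS slope across `reps` simulated samples of size n."""
    slopes = np.empty(reps)
    for r in range(reps):
        x = rng.normal(size=n)
        y = beta0 + beta1 * x + rng.normal(size=n)  # simple regression DGP
        slopes[r] = np.polyfit(x, y, 1)[0]          # OLS slope estimate
    return slopes.std()

for n in (25, 100, 400):
    print(n, round(slope_sd(n), 3))  # spread shrinks roughly like 1/sqrt(n)
```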

6
Q

Consistency formula P(Wn…)

A

P(|Wn − θ| > ε) → 0 as n → ∞, for every ε > 0
- the probability that the estimator Wn lies more than ε away from θ shrinks to zero in large samples

7
Q

Show how consistency follows from unbiasedness

A

If E(B^j) = Bj for every n (unbiasedness) and Var(B^j) → 0 as n → ∞, then by Chebyshev's inequality P(|B^j − Bj| > ε) ≤ Var(B^j)/ε² → 0, so B^j is consistent
- unbiasedness alone is not enough: the variance must also shrink to zero

8
Q

What happens when Cov(u,x) / Var ( x) = 0?

A

plim B^1 = B1 + Cov(x, u)/Var(x), so when Cov(x, u)/Var(x) = 0 we get plim B^1 = B1: OLS is consistent

9
Q

MLR4’

A

E(u) = 0 and Cov(xj, u) = 0 for each j
- the Zero Mean and Zero Correlation assumption
- Weaker than MLR.4: requires only that each xj is uncorrelated with u and that u has zero mean, not zero conditional mean
- OLS is still consistent (though not necessarily unbiased)

10
Q

What does MLR1-4 Imply

A

OLS estimators are unbiased and consistent

11
Q

What does MLR1-4’ Imply

A

OLS estimators are consistent

12
Q

What does RESET test stand for and what does it test for?

A

Regression Specification Error Test
- Tests for non-linearities within the model
- RESET tests for functional form misspecification brought about by the exclusion of higher-order polynomials of our x's

13
Q

RESET equation and why there is no y^4

A

y = b0 + b1x1 + . . . + bkxk + δ1ŷ² + δ2ŷ³ + error
- No ŷ⁴ (or higher powers), as each extra term uses up degrees of freedom without adding much power

14
Q

What is the null of RESET test?
What does it mean when we fail to reject it?

A

H0: δ1 = δ2 = 0
If we fail to reject, we have found no evidence of functional form misspecification: the original model's functional form is adequate
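A minimal numpy sketch of the RESET mechanics (simulated data; the quadratic DGP is hypothetical): fit the restricted model, add ŷ² and ŷ³, and compare SSRs with an F statistic.

```python
import numpy as np

# Hypothetical RESET sketch: the true DGP is quadratic but the fitted model is
# linear, so the yhat^2 / yhat^3 terms should be jointly significant.
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 1 + 2 * x + 0.5 * x**2 + rng.normal(size=n)   # quadratic truth

def fit_ssr(X, y):
    """OLS of y on X; return (sum of squared residuals, fitted values)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    resid = y - fitted
    return resid @ resid, fitted

X_r = np.column_stack([np.ones(n), x])            # restricted: y on const, x
ssr_r, yhat = fit_ssr(X_r, y)
X_ur = np.column_stack([X_r, yhat**2, yhat**3])   # add delta1*yhat^2 + delta2*yhat^3
ssr_ur, _ = fit_ssr(X_ur, y)

q, df_ur = 2, n - X_ur.shape[1]
F = ((ssr_r - ssr_ur) / q) / (ssr_ur / df_ur)
print(F)  # a large F rejects H0: delta1 = delta2 = 0 (form misspecified)
```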

15
Q

What are the drawbacks of the RESET test?

A

A drawback of RESET is that it gives no real direction on how to proceed if the model is rejected, so it is purely a functional form test

16
Q

Mizon-Richard Test

A
  • focuses on log transformations
  • y = γ0 + γ1x1 + γ2x2 + γ3log(x1) + γ4log(x2) + u (γ = gamma)
17
Q

Null and alternative hypothesis for MR test

A

H0: γ3 = γ4 = 0 (original levels equation correctly specified)
H1: γ1 = γ2 = 0 (the log equation is correctly specified instead)

18
Q

Davidson-MacKinnon Test

A

States that the fitted values from one model should be insignificant when added to the competing model. Let yˇ be the fitted values from the log model and yˆ the fitted values from the levels model:
1. y = b0 + b1x1 + b2x2 + d1yˇ + error
2. y = b0 + b1log(x1) + b2log(x2) + q1yˆ + error

H0 : d1 = 0 and H0 : q1 = 0 (each fitted-value term should be insignificant if the model it is added to is correct)
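A hypothetical numpy sketch of step 1 (simulated data where the log model is the truth, so the log model's fitted values should be significant when added to the levels model):

```python
import numpy as np

# Hypothetical Davidson-MacKinnon sketch: the truth is the log model, so its
# fitted values should carry a significant t statistic inside the levels model.
rng = np.random.default_rng(2)
n = 1000
x1 = rng.uniform(1, 5, n)
x2 = rng.uniform(1, 5, n)
y = 1 + 2 * np.log(x1) + 3 * np.log(x2) + rng.normal(scale=0.5, size=n)

def ols(X, y):
    """Return (coefficients, fitted values, t statistics) from OLS."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])    # error variance estimate
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, X @ beta, beta / se

X_log = np.column_stack([np.ones(n), np.log(x1), np.log(x2)])
_, yhat_log, _ = ols(X_log, y)                    # fitted values from log model

X_lev = np.column_stack([np.ones(n), x1, x2])     # levels model + rival's fits
_, _, t = ols(np.column_stack([X_lev, yhat_log]), y)
print(t[-1])  # |t| > 1.96 rejects d1 = 0: the levels model is misspecified
```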

19
Q

How do you know if its a good proxy variable?

A

The closer Cor(proxy, variable) is to 1, the better the proxy

20
Q

The proxy variable final model

A

y = α0 + β1x1 + β2x2 + α3x3 + e, where x3 is the proxy variable

21
Q

What can we use instead of a proxy variable?

A

Lagged dependent variable
- accounts for historical factors that cause current differences

22
Q

Examples of measurement errors

A

Self-reported income, weight, etc.
These almost always contain errors: people misreport or misinterpret the information

23
Q

Proxy VS Measurement Error

A
  • In the proxy case, the omitted variable is important to the extent to which it affects our other independent variables
  • In the measurement error case, the mismeasurement in the independent variable is the issue
24
Q

Measurement error in the dependent variable:
- Model
- Implications

A

y* = 𝛽0 + 𝛽1x1 + . . . + 𝛽kxk + u (true model; y* is unobserved)
y = 𝛽0 + 𝛽1x1 + . . . + 𝛽kxk + (u + e0), where e0 = y − y* is the measurement error

If e0 is independent of the x's, the GM assumptions still hold: OLS is still unbiased and consistent (the error variance is just larger)
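A hypothetical simulation of this point: adding independent noise e0 to y leaves the average OLS slope at the true 𝛽1; only the spread of the estimates grows.

```python
import numpy as np

# Hypothetical simulation: classical measurement error in y (e0 independent
# of x) leaves the OLS slope centered on the true beta1 = 2.0.
rng = np.random.default_rng(3)
beta1, reps, n = 2.0, 2000, 200
slopes = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    ystar = 1 + beta1 * x + rng.normal(size=n)   # true, unobserved y*
    y = ystar + rng.normal(scale=2.0, size=n)    # observed y = y* + e0
    slopes[r] = np.polyfit(x, y, 1)[0]           # OLS slope estimate
print(slopes.mean())  # close to 2.0: still unbiased, just noisier
```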

25
Q

Measurement error in explanatory variables:

A

y = 𝛽0 + 𝛽1x1* + u
- CANNOT observe x1*, only the mismeasured x1
- e1 = x1 − x1*, and we assume E(e1) = 0 and E(y|x1*, x1) = E(y|x1*)

26
Q

When does ME in dependent variables cause consistent estimators?

A

For consistent estimates of Bj, we require E(e0|x) = E(e0) = 0
- e0 is independent of x and has zero mean

27
Q

When does ME in independent variables cause consistent estimators?

A
  1. e1 uncorrelated with the observed x1: Cov(x1, e1) = 0, then OLS remains consistent
  2. e1 uncorrelated with the true x1*: Cov(x1*, e1) = 0 (the CEV case), then OLS is inconsistent
28
Q

e1 is uncorrelated with x1: Cov(x1, e1) = 0, what is the model for this?

A

y = b0 + b1x1 + (u − b1e1)

29
Q

e1 is uncorrelated with x1*: Cov(x1*, e1) = 0. Create the model; what is this assumption called?
- show that plim bˆ1 ≠ b1

A

Classical errors-in-variables (CEV) assumption
- plim bˆ1 = b1 + Cov(x1, u − b1e1)/Var(x1)
- Cov(x1, u − b1e1) = Cov(x1, −b1e1) = −b1Cov(x1, e1) = −b1Cov(x1* + e1, e1) = −b1σ²e1
- Var(x1) = Var(x1* + e1) = σ²x1* + σ²e1
- So plim bˆ1 = b1 − b1σ²e1/(σ²x1* + σ²e1) = b1[σ²x1*/(σ²x1* + σ²e1)] ≠ b1
Given this, OLS produces biased and inconsistent estimators

30
Q

Show what the attenuation bias is

A

plim bˆ1 = b1[σ²x1*/(σ²x1* + σ²e1)]; the attenuation factor σ²x1*/(σ²x1* + σ²e1) is always between 0 and 1, so bˆ1 is biased toward zero
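A hypothetical simulation of the CEV result: with σ²x1* = σ²e1 the attenuation factor is 0.5, so OLS should converge to roughly half the true slope.

```python
import numpy as np

# Hypothetical CEV simulation: x1 = x1* + e1 with Cov(x1*, e1) = 0, so OLS
# converges to beta1 * Var(x1*) / (Var(x1*) + Var(e1)) instead of beta1.
rng = np.random.default_rng(4)
beta1, n = 2.0, 200_000
sig2_xstar, sig2_e1 = 1.0, 1.0

xstar = rng.normal(scale=np.sqrt(sig2_xstar), size=n)       # true regressor
y = 1 + beta1 * xstar + rng.normal(size=n)
x1 = xstar + rng.normal(scale=np.sqrt(sig2_e1), size=n)     # mismeasured x1

slope = np.polyfit(x1, y, 1)[0]                             # OLS on observed x1
plim = beta1 * sig2_xstar / (sig2_xstar + sig2_e1)          # = 1.0 here
print(slope, plim)  # slope lands near 1.0, well below the true 2.0
```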

31
Q

What are some violations of the Random Sampling assumption (MLR.2) that can lead to bias?

A
  • Missing data
  • Non-random samples
  • Outliers
32
Q

What happens when data is missing at random?

A

Estimators are less precise (SSTx is lower in smaller samples), BUT they are still unbiased

33
Q

Exogenous Sample Selection

A

Sample selection based on the independent variables; OLS can still be unbiased

34
Q

Endogenous Sample Selection

A

Sample selection based on the dependent variable; this leads to biased coefficients