Lecture 1 Flashcards

1
Q

Unbiasedness means

A

On average, the OLS estimators equal the true population parameters: if we could repeat the sampling indefinitely, the average of the OLS estimates across samples would equal the true population values, i.e. E[B1^] = B1.
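
A minimal Monte Carlo sketch of this idea (illustrative Python; the parameter values b0, b1, sigma, n are assumed, not from the lecture): the average of B1^ across many simulated samples should sit at the true B1.

```python
import numpy as np

rng = np.random.default_rng(0)
b0, b1, sigma, n, reps = 1.0, 2.0, 1.0, 100, 10_000  # assumed true values

estimates = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    u = rng.normal(scale=sigma, size=n)  # E[u|x] = 0 by construction
    y = b0 + b1 * x + u
    # OLS slope: sum((xi - xbar) * yi) / SSTx
    estimates[r] = np.sum((x - x.mean()) * y) / np.sum((x - x.mean()) ** 2)

print(estimates.mean())  # close to 2.0: the estimator is centered at the true b1
```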

2
Q

SLR.1

A

Assumption of linearity in parameters, i.e. the population model can be written as:
y = B0 + B1x + u

3
Q

Assumption SLR.2

A

Assumption of random sampling: we have a random sample of size n, {(xi, yi) : i = 1, …, n}, drawn from the population model
- we can treat (xi, yi), and hence ui, for each observation as i.i.d. draws, meaning each observation is independent of the others and all come from the same distribution
- ensures that our sample is randomly drawn from the population, so the parameter estimates are valid and representative of the population as a whole

4
Q

Assumption SLR.3

A

Assumption of sample variation in the explanatory variable
- the sample values of the independent variable xi are not all the same (so SSTx > 0); if they were, it would be impossible to estimate the relationship between x and y

5
Q

SLR.4

A

Assumption of zero conditional mean
- E[u|x] = 0
- crucial for showing the OLS estimator is unbiased; it implies that u is uncorrelated with x (and with any function of x)
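
A hedged simulation sketch of why this matters (setup and values assumed for illustration): when u is constructed to be correlated with x, violating SLR.4, the OLS slope is biased.

```python
import numpy as np

rng = np.random.default_rng(1)
b1, n, reps = 2.0, 100, 10_000  # assumed true slope

estimates = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    u = 0.5 * x + rng.normal(size=n)  # violates E[u|x] = 0
    y = 1.0 + b1 * x + u
    estimates[r] = np.sum((x - x.mean()) * y) / np.sum((x - x.mean()) ** 2)

print(estimates.mean())  # close to 2.5, not 2.0: OLS is biased when u correlates with x
```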

6
Q

By showing B1^’s distribution is centered at B1…

A

We are showing the OLS estimator is unbiased: on average across possible samples, B1^ = B1, i.e. E[B1^] = B1.

7
Q

Under SLR.4:
- E[B0^|Xn] = …
- E[B1^|Xn] = …

A
  • B0 and B1 respectively
    By the law of iterated expectations (LIE):
    E[B1^] = E[E[B1^|Xn]] = E[B1] = B1
8
Q

Proof of unbiasedness in 3 steps:

A
  1. Obtain a convenient expression for the estimator
  2. Write estimator = population parameter + sampling error
  3. Show E[sampling error] = 0
(see the worked steps below)
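
The standard textbook working of these three steps for B1^ (a sketch in the notation of the cards above, with SLR.1, SLR.2 and SLR.4 doing the work):

```latex
% Step 1: a convenient expression for the estimator
\[ \hat{\beta}_1 = \frac{\sum_i (x_i - \bar{x})\, y_i}{\mathrm{SST}_x},
   \qquad \mathrm{SST}_x = \sum_i (x_i - \bar{x})^2 \]

% Step 2: substitute y_i = \beta_0 + \beta_1 x_i + u_i (SLR.1) to get
% estimator = population parameter + sampling error
\[ \hat{\beta}_1 = \beta_1 + \frac{\sum_i (x_i - \bar{x})\, u_i}{\mathrm{SST}_x} \]

% Step 3: under SLR.2 and SLR.4, E[u_i | X_n] = 0, so the sampling
% error has (conditional) expectation zero
\[ E[\hat{\beta}_1 \mid X_n]
   = \beta_1 + \frac{\sum_i (x_i - \bar{x})\, E[u_i \mid X_n]}{\mathrm{SST}_x}
   = \beta_1 \]
```
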
9
Q

SLR.5

A

Assumption of homoskedasticity
- the error has the same variance given any value of x:
Var(u|x) = σ^2 > 0 for all x

10
Q

E[y|x] = …
Var(y|x) = …

A
  • B0 + B1x
  • σ^2
11
Q

Var(B1^|Xn) = …
Var(B0^|Xn) = …

A
  • σ^2 / SSTx
  • (σ^2 · Σ xi^2) / (n · SSTx)

The more noise in the relationship between y and x, i.e. the larger the variability in u, the harder it is to learn about B1, since a larger σ^2 increases the variance of B1^.

By contrast, more variation in x is a good thing: it increases SSTx and so reduces the variance of B1^.
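
A hedged numerical check of the slope formula (illustrative Python; the design and parameter values are assumed): holding Xn fixed and redrawing the errors, the simulated variance of B1^ should match σ^2/SSTx.

```python
import numpy as np

rng = np.random.default_rng(2)
b0, b1, sigma, n, reps = 1.0, 2.0, 1.5, 50, 20_000  # assumed values

x = rng.normal(size=n)  # fixed design: we condition on Xn
sst_x = np.sum((x - x.mean()) ** 2)

estimates = np.empty(reps)
for r in range(reps):
    u = rng.normal(scale=sigma, size=n)  # homoskedastic errors (SLR.5)
    y = b0 + b1 * x + u
    estimates[r] = np.sum((x - x.mean()) * y) / sst_x

print(estimates.var())   # simulated Var(B1^|Xn)
print(sigma**2 / sst_x)  # theoretical sigma^2 / SSTx (should be close)
```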

12
Q

The inverse 1/Var(B1^|Xn)

A

= SSTx/σ^2 is often referred to as the signal-to-noise ratio; a high signal-to-noise ratio implies high precision of B1^.

13
Q

What is the sample variance of x? What happens as n gets large?

A

sx^2 = SSTx/(n-1)

By the LLN, as n gets large, sx^2 tends to the true population variance of x.

Thus, as n grows, Var(B1^|Xn) = σ^2/((n-1)sx^2) shrinks at roughly the rate 1/n, formally showing why more data is better.

14
Q

We can’t actually observe ui, so we have to estimate it

A

We replace ui with its estimate, the OLS residual ui^, so:

σ̂^2 = SSR/(n-2)

The bias is corrected by the degrees-of-freedom adjustment: the residuals satisfy two restrictions (the OLS first-order conditions), leaving n-2 degrees of freedom.
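
A small simulation sketch of why the n-2 adjustment matters (setup and values assumed; small n makes the bias visible): SSR/n underestimates σ^2, while SSR/(n-2) is unbiased.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, n, reps = 2.0, 20, 20_000  # assumed values; true sigma^2 = 4

naive, adjusted = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    y = 1.0 + 2.0 * x + rng.normal(scale=sigma, size=n)
    b1 = np.sum((x - x.mean()) * y) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    ssr = np.sum((y - b0 - b1 * x) ** 2)  # sum of squared residuals
    naive[r], adjusted[r] = ssr / n, ssr / (n - 2)

print(naive.mean())     # below 4: SSR/n is biased downward
print(adjusted.mean())  # close to 4: the df adjustment removes the bias
```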

15
Q

Standard error of the regression

A

σ̂ = (SSR/(n-2))^0.5

16
Q

Standard error of B1^

A

se(B1^) = σ̂/(SSTx)^0.5

Take the square root of the variance of B1^, replacing the unknown σ with its estimate σ̂.
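
Tying the last few cards together, a minimal sketch (illustrative Python; simulated data with assumed true values) computing σ̂ and se(B1^) by hand:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)  # assumed true model

# OLS estimates
sst_x = np.sum((x - x.mean()) ** 2)
b1_hat = np.sum((x - x.mean()) * y) / sst_x
b0_hat = y.mean() - b1_hat * x.mean()

ssr = np.sum((y - b0_hat - b1_hat * x) ** 2)
sigma_hat = np.sqrt(ssr / (n - 2))  # standard error of the regression
se_b1 = sigma_hat / np.sqrt(sst_x)  # se(B1^)
print(b1_hat, sigma_hat, se_b1)
```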