Lecture 3 (OLS and asymptotic theory) Flashcards

1
Q

What is the analogy principle?

A

The analogy principle proposes that population parameters be estimated by sample statistics which make known properties of the population hold as closely as possible in the sample.

We go from $E(\boldsymbol x)$ to $N^{-1}\sum_{i=1}^N \boldsymbol x_i$, and thus from $\boldsymbol\beta$ to $\boldsymbol{\hat\beta}$!
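A minimal NumPy sketch of the idea (not from the lecture; the distribution and scale are illustrative): the population moment $E(x)$ is replaced by its sample analogue, the sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)  # population mean E(x) = 2

# Analogy principle: estimate the population moment E(x)
# by its sample analogue N^{-1} * sum(x_i).
sample_mean = x.mean()
print(sample_mean)  # close to 2 for large N
```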

2
Q

What are the OLS.1 and OLS.2 assumptions?

A

OLS.1 (Orthogonality):

$$
E(\boldsymbol{x}'u)=\boldsymbol{0}
$$

This is implied by the stronger zero-conditional-mean assumption

$$
E(u|\boldsymbol{x})=0
$$

OLS.2 (Rank condition):

$$
\operatorname{rank} E(\boldsymbol{x}'\boldsymbol{x}) = K
$$

This rules out perfect multicollinearity, and is the technical condition that lets us invert the $E(\boldsymbol{x}'\boldsymbol{x})$ matrix so we can solve the OLS equation.
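A small sketch (my own example, not from the lecture) of what the rank condition rules out: with a perfectly collinear regressor, the sample analogue of $E(\boldsymbol{x}'\boldsymbol{x})$ loses full rank and cannot be inverted.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
# x = (1, z1, z2): constant plus two independent regressors.
X = np.column_stack([np.ones(N), rng.normal(size=N), rng.normal(size=N)])

# Sample analogue of E(x'x); full rank K = 3 when there is no perfect collinearity.
A = X.T @ X / N
print(np.linalg.matrix_rank(A))  # 3

# Appending a column that is an exact linear combination violates OLS.2:
X_bad = np.column_stack([X, X[:, 1] + X[:, 2]])
A_bad = X_bad.T @ X_bad / N
print(np.linalg.matrix_rank(A_bad))  # still 3, but now K = 4, so A_bad is singular
```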

3
Q

What does the central limit theorem say?

A

As the sample size increases, the distribution of the sample mean approaches a normal distribution.

The Central Limit Theorem (CLT) states that, for a large enough sample size, the distribution of the sample mean of i.i.d. random variables approaches a normal distribution. Even if the individual observations are not normally distributed, the (suitably scaled) sample mean will be approximately normal as the sample size increases. This result is what lets us use the normal distribution for inference about sample means. The formal statement follows:

4
Q

What is CLT formally?

A

$$
N^{-1/2}\sum_{i=1}^N \left(\boldsymbol{x}_i - E(\boldsymbol{x})\right) \underset{d}{\longrightarrow} N(\boldsymbol{0},\mathrm{Var}[\boldsymbol{x}])
$$

where $d$ denotes convergence in distribution.
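A quick simulation sketch of the statement (my own, with an illustrative uniform distribution): the scaled, centered mean of non-normal draws has the variance the CLT predicts and is approximately normal.

```python
import numpy as np

rng = np.random.default_rng(0)
N, reps = 1_000, 5_000
# x_i i.i.d. uniform(0,1): E(x) = 1/2, Var(x) = 1/12.
draws = rng.uniform(size=(reps, N))
# z = N^{-1/2} * sum(x_i - E(x)), computed once per replication.
z = np.sqrt(N) * (draws.mean(axis=1) - 0.5)

# By the CLT, z is approximately N(0, 1/12) even though x is uniform.
print(z.mean(), z.var())  # roughly 0 and 1/12 ≈ 0.083
```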

5
Q

What is the weak law of large numbers (WLLN)?

A

As the sample size increases, the sample mean approaches the true population mean.

The main point of this theorem (WLLN) is that, as the sample size increases, the observed sample mean gets arbitrarily close (in probability) to the true population mean: averaging over more and more i.i.d. draws washes out the sampling noise. Formally the theorem states:

6
Q

What is WLLN formally?

A

$$
N^{-1}\sum_{i=1}^N{\boldsymbol{x}_i} \underset{p}{\longrightarrow} E(\boldsymbol{x})
$$
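A one-look simulation sketch (mine, with an illustrative exponential distribution): the running sample mean tightens around $E(x)$ as $N$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=3.0, size=1_000_000)  # E(x) = 3

# WLLN: the sample mean converges in probability to E(x) as N grows.
for N in (100, 10_000, 1_000_000):
    print(N, x[:N].mean())  # drifts toward 3
```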

7
Q

In the OLS-world, when is beta identified?

A

Under Assumptions OLS.1 and OLS.2, the parameter vector $\boldsymbol{\beta}$ is identified. In the context of models that are linear in the parameters under random sampling, identification of $\boldsymbol\beta$ simply means that $\boldsymbol\beta$ can be written in terms of population moments of observable variables. Premultiplying $y=\boldsymbol{x\beta}+u$ by $\boldsymbol{x}'$, taking expectations, and using OLS.1 ($E(\boldsymbol{x}'u)=\boldsymbol{0}$) gives

$$
\boldsymbol{\beta} = [E(\boldsymbol{x}'\boldsymbol{x})]^{-1}E(\boldsymbol{x}'y)
$$

Because $(\boldsymbol x, y)$ is observed, $\boldsymbol \beta$ is identified.
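A sketch of the moment formula in NumPy (my own simulated example; the "true" $\boldsymbol\beta$ is illustrative): replacing $E(\boldsymbol{x}'\boldsymbol{x})$ and $E(\boldsymbol{x}'y)$ by their sample analogues recovers $\boldsymbol\beta$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
beta = np.array([1.0, 2.0, -0.5])            # illustrative "true" parameters
X = np.column_stack([np.ones(N), rng.normal(size=(N, 2))])
y = X @ beta + rng.normal(size=N)            # u independent of x, so E(x'u) = 0

# beta = [E(x'x)]^{-1} E(x'y), with population moments replaced by sample analogues:
Exx = X.T @ X / N
Exy = X.T @ y / N
beta_hat = np.linalg.solve(Exx, Exy)
print(beta_hat)  # close to [1, 2, -0.5]
```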

8
Q

When is the OLS Beta consistent?

A

Under the orthogonality (OLS.1) and rank (OLS.2) conditions, the estimator $\boldsymbol {\hat \beta}$ obtained from an i.i.d. sample consistently estimates $\boldsymbol \beta$. It does not matter where $y=\boldsymbol{x\beta}+u$ comes from, or what the $\beta_j$ actually represent.
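A sketch of this consistency claim (my own simulation; the helper `ols` and the skewed error distribution are illustrative): even with decidedly non-normal errors, $\boldsymbol{\hat\beta}$ tightens around $\boldsymbol\beta$ as $N$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = np.array([0.5, -1.0])  # illustrative "true" parameters

def ols(N):
    """OLS estimate from one simulated i.i.d. sample of size N."""
    X = np.column_stack([np.ones(N), rng.normal(size=N)])
    u = rng.exponential(size=N) - 1.0   # skewed, non-normal error with E(u|x) = 0
    y = X @ beta + u
    return np.linalg.solve(X.T @ X, X.T @ y)

for N in (100, 10_000, 1_000_000):
    print(N, ols(N))  # estimates tighten around [0.5, -1.0]
```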

9
Q

with beta-hat =

A

$$
\boldsymbol{\hat\beta} = \left(N^{-1}\sum_{i=1}^N \boldsymbol{x}_i'\boldsymbol{x}_i\right)^{-1}\left(N^{-1}\sum_{i=1}^N \boldsymbol{x}_i' y_i\right)
$$

the sample analogue of $[E(\boldsymbol{x}'\boldsymbol{x})]^{-1}E(\boldsymbol{x}'y)$.