Topic 6: Asymptotics Flashcards

1
Q

What is consistency?

A

An estimator converges in probability to the correct population parameter as the sample size grows without bound.

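Not part of the original deck: a minimal NumPy sketch, with a made-up model y = 2 + 3x + u, showing the OLS slope estimate settling toward the true value 3 as the sample size grows.

```python
import numpy as np

# Simulate y = 2 + 3x + u with E[u|x] = 0, so OLS is consistent.
rng = np.random.default_rng(0)

def ols_slope(n):
    """OLS slope estimate from one simulated sample of size n."""
    x = rng.normal(size=n)
    u = rng.normal(size=n)
    y = 2 + 3 * x + u
    xc = x - x.mean()
    return np.sum(xc * y) / np.sum(xc ** 2)

# The estimate strays less and less from 3 as n grows without bound.
estimates = {n: ols_slope(n) for n in (100, 10_000, 1_000_000)}
```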
2
Q

How should you start the proof for consistency of the OLS estimator in the simple linear regression model?

A

Start from the estimator written as beta^1 = beta1 + [sum(xi - xbar)ui] / [sum(xi - xbar)^2]

3
Q

When you multiply both the numerator and denominator of the second term in the consistency proof by 1/n, what happens?

A

By the law of large numbers, as n grows without bound the numerator converges in probability to Cov(x,u) and the denominator converges in probability to Var(x).

4
Q

What does the sample covariance of x and u converge to in probability?

A

Cov(x,u) = 0, under the zero conditional mean assumption.

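Not part of the original deck: a NumPy sketch, with a made-up design (x ~ N(1, 4), u ~ N(0, 1) independent of x), numerically checking the limits claimed on cards 2-4: the scaled numerator approaches Cov(x,u) = 0, the scaled denominator approaches Var(x), so the sampling-error term vanishes.

```python
import numpy as np

# Made-up design: x ~ N(1, 4), u ~ N(0, 1), independent of each other.
rng = np.random.default_rng(1)
n = 500_000
x = rng.normal(loc=1.0, scale=2.0, size=n)
u = rng.normal(size=n)

num = np.sum((x - x.mean()) * u) / n     # -> Cov(x,u) = 0
den = np.sum((x - x.mean()) ** 2) / n    # -> Var(x) = 4
error_term = num / den                   # -> 0, so beta^1 -> beta1
```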
5
Q

What is the constant elasticity in a regression?

A

It is the slope parameter beta1 in a log-log model: a one percent change in x is associated with a beta1 percent change in y. Take care if either variable is already measured in percent units; then the change is in percentage points.

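Not part of the original deck: a tiny arithmetic check with a hypothetical elasticity of 0.8, comparing the exact percent change in y against the elasticity approximation for a one percent change in x.

```python
# Hypothetical log-log fit: log(y) = b0 + 0.8*log(x).
b1 = 0.8
# Exact percent change in y when x rises by 1 percent:
exact_pct = (1.01 ** b1 - 1) * 100
# Elasticity reading from the card: roughly b1 percent.
approx_pct = b1 * 1
```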
6
Q

If x and u are correlated, what is the inconsistency in the estimator equal to?

A

Cov(x,u)/Var(x)

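Not part of the original deck: a NumPy sketch, with a made-up endogenous design (u = 0.5x + e, so Cov(x,u) = 0.5 and Var(x) = 1), showing the OLS slope converging to beta1 + Cov(x,u)/Var(x) instead of beta1.

```python
import numpy as np

# Made-up endogenous design: u = 0.5*x + e, so Cov(x,u) = 0.5, Var(x) = 1.
rng = np.random.default_rng(2)
n = 1_000_000
x = rng.normal(size=n)
u = 0.5 * x + rng.normal(size=n)
y = 2 + 3 * x + u

xc = x - x.mean()
slope = np.sum(xc * y) / np.sum(xc ** 2)
plim = 3 + 0.5 / 1.0   # beta1 + Cov(x,u)/Var(x) = 3.5, not 3
```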
7
Q

What happens to the distribution of an estimator, beta^j, as the sample size increases?

A

It becomes more tightly distributed around the parameter, betaj.

8
Q

Stata: How can you get a t value from a regression output in Stata?

A

It appears under the t column; it equals the coefficient divided by its standard error (Coef./Std. Err.).

9
Q

What is asymptotic normality?

A

The idea that OLS estimators are approximately normally distributed in large enough sample sizes.

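Not part of the original deck: a NumPy simulation, with made-up numbers and deliberately non-normal (shifted exponential) errors, showing that the standardized OLS slope still behaves like a standard normal draw once n is moderately large.

```python
import numpy as np

# Made-up model y = 2 + 3x + u with skewed, mean-zero errors.
rng = np.random.default_rng(3)
beta1 = 3.0
n, reps = 200, 5000
zs = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    u = rng.exponential(size=n) - 1.0        # mean zero, skewed
    y = 2 + beta1 * x + u
    xc = x - x.mean()
    b = np.sum(xc * y) / np.sum(xc ** 2)
    resid = y - y.mean() - b * xc
    se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum(xc ** 2))
    zs[r] = (b - beta1) / se

# Despite the non-normal errors, zs is close to N(0, 1).
```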
10
Q

What two properties are needed for statistical inference in multiple regression analysis?

A

Consistency and asymptotic normality together allow hypothesis testing about the parameters of the MLR model.

11
Q

Can we use the t stat, (beta^j - betaj)/se(beta^j), for a standard normal distribution?

A

Yes, as long as n is large enough; by asymptotic normality this holds even without assuming the errors are normally distributed.

12
Q

What is root n convergence?

A

Standard errors can be expected to shrink at the rate 1/sqrt(n), the inverse of the square root of the sample size.

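Not part of the original deck: a NumPy sketch, with a made-up model y = 2 + 3x + u, showing the root-n rule in action: quadrupling the sample size roughly halves the slope's standard error.

```python
import numpy as np

# Made-up model y = 2 + 3x + u; compare slope standard errors at n and 4n.
rng = np.random.default_rng(4)

def slope_se(n):
    """Conventional OLS standard error of the slope for one sample."""
    x = rng.normal(size=n)
    u = rng.normal(size=n)
    y = 2 + 3 * x + u
    xc = x - x.mean()
    b = np.sum(xc * y) / np.sum(xc ** 2)
    resid = y - y.mean() - b * xc
    return np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum(xc ** 2))

# se ~ 1/sqrt(n), so the ratio should be about sqrt(40_000/10_000) = 2.
ratio = slope_se(10_000) / slope_se(40_000)
```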
13
Q

Is the root mean squared error of the regression (RMSE) the standard error of a parameter estimate?

A

No, it is the standard error of the regression in MLR analysis: an estimate of the standard deviation of the population error, not the standard error of a parameter estimate.

14
Q

A consistent parameter estimate will converge on what as the random sample size increases to infinity?

A

The true population parameter.

15
Q

If an unbiased population parameter estimate is averaged, the expected value will equal what as the number of random samples goes to infinity?

A

The true population parameter

16
Q

Stata: two ways to drop observations for a regression?

A

“drop in x/n” drops the observations in the x to n range. “reg y x1 x2…. if _n

17
Q

What is the convergence rule of thumb for shrinking standard errors?

A

Multiply the full-sample standard error by sqrt(n1)/sqrt(n2), where n1 is the full sample size and n2 is the partial sample size, to approximate the partial-sample standard error.
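Not part of the original deck: the rule of thumb worked through with illustrative numbers. Going from a full sample of n1 = 1000 down to a subsample of n2 = 250 inflates standard errors by roughly sqrt(1000/250) = 2.

```python
import math

# Illustrative numbers: full sample n1 = 1000, partial sample n2 = 250.
n1, n2 = 1000, 250
inflation = math.sqrt(n1) / math.sqrt(n2)  # expected se inflation factor
```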

18
Q

Consistency vs bias?

A

They are analogous concepts, except that inconsistency is expressed in terms of the population variance of x1 and the population covariance of x1 and x2, while the bias formula uses the corresponding sample moments.

19
Q

What is asymptotic efficiency?

A

Among consistent estimators with asymptotically normal distributions, the asymptotically efficient estimator is the one with the smallest asymptotic variance.