Lecture 4 Flashcards
What’s the purpose of MLR.1-4?
They imply unbiasedness: the expected value of the OLS estimator equals the true parameter
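In symbols (the standard textbook statement of unbiasedness):

```latex
% Unbiasedness of OLS under MLR.1--MLR.4:
E(\hat{\beta}_j) = \beta_j, \qquad j = 0, 1, \dots, k
```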
What's the purpose of MLR.5?
MLR.5 (homoskedasticity) allows us to derive the variance of Bj^ conditional on the sample values of the explanatory variables
Now, under MLR.1-5, the Gauss-Markov (GM) theorem applies, stating that OLS is the BLUE (best linear unbiased estimator)
- means among all linear unbiased estimators, OLS has the smallest variance
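For reference, the sampling-variance formula this yields, in the usual textbook notation (SST_j is the total sample variation in x_j, and R_j^2 comes from regressing x_j on all the other regressors):

```latex
% Sampling variance of an OLS slope estimator under MLR.1--MLR.5:
\mathrm{Var}(\hat{\beta}_j) = \frac{\sigma^2}{SST_j (1 - R_j^2)},
\qquad SST_j = \sum_{i=1}^{n} (x_{ij} - \bar{x}_j)^2
```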
What's the purpose of MLR.6?
Introduced to enable exact statistical inference: constructing confidence intervals and conducting hypothesis tests. The assumption justifies using the exact t and F distributions, making inference precise in finite samples
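The exact finite-sample result this buys can be written as (with k regressors and n observations, so n - k - 1 degrees of freedom):

```latex
% Under MLR.1--MLR.6 the studentised estimator has an exact t distribution:
\frac{\hat{\beta}_j - \beta_j}{\mathrm{se}(\hat{\beta}_j)} \sim t_{\,n-k-1}
```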
What happens if MLR.6 fails?
Even if it fails, the main inference results still hold approximately when the sample size is large enough, due to the CLT
Consistency of OLS
Under MLR.1 to MLR.4, Bj^ converges in probability to Bj as n tends to infinity
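A minimal simulation sketch of this convergence (illustrative data-generating process with B1 = 2 and exogenous errors, chosen for this example only):

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1 = 1.0, 2.0  # illustrative true parameters

for n in [100, 10_000, 1_000_000]:
    x = rng.normal(size=n)
    u = rng.normal(size=n)          # E(u|x) = 0 holds by construction
    y = beta0 + beta1 * x + u
    # OLS slope = sample Cov(x, y) / sample Var(x)
    b1_hat = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    print(n, b1_hat)                # drifts towards the true value 2.0
```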
Bias and direction of bias
plim(B1^) = B1 + Cov(x1, u)/Var(x1)
Positive asymptotic bias if Cov(x1, u) > 0, negative asymptotic bias if Cov(x1, u) < 0
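One way to see where this formula comes from in the simple regression case (a sketch: substitute y = B0 + B1*x1 + u into the OLS slope, then let the LLN take each sample moment to its population counterpart):

```latex
% OLS slope after substituting the population model:
\hat{\beta}_1
  = \beta_1 + \frac{\frac{1}{n}\sum_i (x_{i1} - \bar{x}_1)\, u_i}
                   {\frac{1}{n}\sum_i (x_{i1} - \bar{x}_1)^2}
  \;\xrightarrow{p}\;
  \beta_1 + \frac{\mathrm{Cov}(x_1, u)}{\mathrm{Var}(x_1)}
```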
Asymptotic Normality of OLS
Under assumptions MLR.1 - MLR.5, the standardised version of Bj^ is asymptotically standard normal
The same convergence holds when we replace the standard deviation sd(Bj^) with the standard error se(Bj^), since σ is not known and must be estimated
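Spelled out, with sd(Bj^) the exact standard deviation and se(Bj^) its feasible version using the estimate of σ:

```latex
% Asymptotic normality under MLR.1--MLR.5:
\frac{\hat{\beta}_j - \beta_j}{\mathrm{sd}(\hat{\beta}_j)} \xrightarrow{d} N(0, 1),
\qquad
\frac{\hat{\beta}_j - \beta_j}{\mathrm{se}(\hat{\beta}_j)} \xrightarrow{d} N(0, 1)
```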
Issues with the Normality Assumption
- assumption is highly unrealistic in many applications
- therefore last week's exact results are of limited use in practice
- for example, many variables can only take non-negative values, which is inconsistent with the symmetry of the normal distribution
Implications now that MLR.6 is unrealistic
What was EXACTLY true under MLR.6 remains approximately true in large samples, even without MLR.6
- approximations for critical values and p values are only accurate when the sample size is sufficiently large
- the sample size required for an accurate approximation depends on how far the population distribution is from normal
- when MLR.6 fails, it may be more accurate to describe p values as asymptotic or approximate
Law of large numbers, LLN
For any random sample, the sample mean converges in probability to the population mean as n tends to infinity
- the LLN can apply even when the averaged variable v is itself a function of other random variables (e.g., v = x·u in the consistency argument)
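A quick simulation sketch of the LLN, using a deliberately non-normal population (exponential with mean 2, picked only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Exponential population: mean = scale = 2.0, clearly non-normal
for n in [10, 1_000, 100_000]:
    sample = rng.exponential(scale=2.0, size=n)
    print(n, sample.mean())  # converges in probability to the population mean 2.0
```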
What does SLR.4 imply?
Assumes E(u|x) = 0
- this implies Cov(x, u) = 0, so the plim of B1^ equals B1 as the sample size increases, i.e., the estimator is consistent (see the chain below)
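The chain of implications, roughly (E(u|x) = 0 gives both a zero mean and zero covariance with x, by iterated expectations):

```latex
E(u \mid x) = 0
  \;\Rightarrow\; E(u) = 0 \ \text{and}\ \mathrm{Cov}(x, u) = E(xu) = 0
  \;\Rightarrow\; \mathrm{plim}\, \hat{\beta}_1 = \beta_1
```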
How does CLT play into all of this?
For any random sample, the sample average, standardised by subtracting the population mean and dividing by σ/√n (the standard deviation over the square root of the sample size), tends to follow a standard normal distribution as the sample size grows, regardless of the population's initial distribution
- can use the CLT to derive the asymptotic normality of the OLS estimators, as in the simulation below
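A simulation sketch of the CLT in action, again with a skewed exponential population (parameters are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 2.0, 2.0            # exponential(scale=2): mean 2, sd 2
n, reps = 500, 10_000

samples = rng.exponential(scale=2.0, size=(reps, n))
# Standardise each sample average: (ybar - mu) / (sigma / sqrt(n))
z = (samples.mean(axis=1) - mu) / (sigma / np.sqrt(n))

print(z.mean(), z.std())          # close to 0 and 1
print(np.mean(np.abs(z) > 1.96))  # close to 0.05, as N(0,1) predicts
```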