PP2: OLS Flashcards
List the 6 assumptions of OLS (MLR1-MLR6)
1)The population model is linear in parameters
2)The n observations are randomly drawn from the population
3)There is sample variation in each explanatory variable and no perfect collinearity
4)The explanatory variables are uncorrelated with the error term (zero conditional mean)
5)The error u has the same variance regardless of the explanatory variables (aka homoskedasticity)
6)The error term is independent of the regressors and normally distributed with mean zero and constant variance
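A quick simulation can make the assumptions concrete: when all six hold, OLS recovers the true coefficients. This is a minimal sketch with made-up data and coefficients (nothing here comes from the cards):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data satisfying MLR1-MLR6: linear model, random sample,
# varying regressor, exogenous normal homoskedastic errors.
n = 1000
x = rng.normal(size=n)
u = rng.normal(size=n)                 # E[u|x] = 0, constant variance
y = 2.0 + 3.0 * x + u                  # true intercept 2, true slope 3

# OLS via the normal equations: beta_hat = (X'X)^{-1} X'y
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)  # close to [2, 3]
```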
What happens when you have omitted variable bias?
Part of the omitted variable's effect is absorbed into the error term and loaded onto the included variables, causing their coefficients to be over- or under-estimated
What is the formula for over/under-estimated omitted variable bias?
bias(B1) = B2 * delta1, where delta1 is the slope from regressing x2 on x1;
the sign of the bias is sign(B2) x sign(corr(x1,x2)), which determines over- vs under-estimation
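The bias formula can be checked numerically: omit a relevant variable that is positively correlated with the included one and the included slope absorbs part of its effect. All numbers below (coefficients, correlation, seed) are illustrative, not from the cards:

```python
import numpy as np

rng = np.random.default_rng(1)

# True model: y = 1 + 2*x1 + 3*x2 + u, with corr(x1, x2) > 0
n = 5000
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)     # delta1 = 0.5 (slope of x2 on x1)
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

# Short regression omitting x2: x2's effect is absorbed into x1's slope
X_short = np.column_stack([np.ones(n), x1])
b_short = np.linalg.solve(X_short.T @ X_short, X_short.T @ y)

# bias(B1) = B2 * delta1 = 3 * 0.5, so B1 is overestimated:
print(b_short[1])  # roughly 2 + 1.5 = 3.5, not the true 2
```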
T/F: Including irrelevant explanatory variables can increase precision.
False: it may increase multicollinearity, which increases the variance of the OLS estimator
State the Gauss-Markov theorem
When MLR1-MLR5 are satisfied, OLS is the Best Linear Unbiased Estimator (BLUE)
aka it has the smallest variance among all linear unbiased estimators
What is the formula for t-scores in hypothesis testing?
t=(B-a)/SE(B)
B=estimated coefficient
a=hypothesized number, normally 0
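The card's formula drops straight into code. The helper name and the example numbers here are hypothetical, just to show the arithmetic:

```python
# t-statistic for H0: beta = a, per the card: t = (B - a) / SE(B)
def t_score(beta_hat, se, a=0.0):
    """Return (B - a) / SE(B)."""
    return (beta_hat - a) / se

# e.g. an estimate of 1.5 with SE 0.5 tested against a = 1:
print(t_score(1.5, 0.5, a=1.0))  # 1.0 -> cannot reject at the 5% level
```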
T/F: The larger the error variance, the larger the variance of B.
True: var(B_j) = sigma^2/(SST_j(1-R_j^2)); a larger error variance sigma^2 raises the numerator
T/F: The larger the SST, the larger the variance of B.
False: var(B_j) = sigma^2/(SST_j(1-R_j^2))
higher SST_j leads to smaller variance
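The variance formula on these two cards is algebraically identical to the j-th diagonal entry of sigma^2 (X'X)^{-1}, which a short check can confirm. The data below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# var(B1) = sigma^2 / (SST_1 * (1 - R_1^2)) on synthetic regressors
n = 500
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)       # correlated regressors
sigma2 = 1.0                             # assumed error variance

# SST_1: total variation in x1; R_1^2: from regressing x1 on x2 (+ const)
sst1 = np.sum((x1 - x1.mean()) ** 2)
Z = np.column_stack([np.ones(n), x2])
x1_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ x1)
r2_1 = 1 - np.sum((x1 - x1_hat) ** 2) / sst1
var_formula = sigma2 / (sst1 * (1 - r2_1))

# Same quantity straight from sigma^2 * [(X'X)^{-1}]_{11}
X = np.column_stack([np.ones(n), x1, x2])
var_matrix = sigma2 * np.linalg.inv(X.T @ X)[1, 1]
print(np.isclose(var_formula, var_matrix))  # True
```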
State the null hypothesis and alternative for one-sided and two-sided tests.
H0: B = a
Ha One-sided: B > or < a
Ha Two-sided: B != a
Is bathrooms statistically significant at the 5% level?
log(price) = 8.07+ 0.054rooms+0.22baths+0.298log(area)
where the SE on baths is 0.038
Yes, we can reject the null:
t = 0.22/0.038 = 5.79 > 1.96
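Checking the arithmetic, using 1.96 as the approximate two-sided 5% critical value (a reasonable assumption for a large sample):

```python
# t-stat for baths from the card's regression output
t_baths = 0.22 / 0.038
# Compare to the two-sided 5% critical value (~1.96 for large df)
print(round(t_baths, 2), abs(t_baths) > 1.96)
```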
Determine whether three variables are jointly significant
F(3,137)=9.46
Prob > F = 0.0000
It is very unlikely that all three coefficients are zero, so we reject the null: the variables are jointly significant.
Determine whether two variables are jointly significant
F(2,137) = 1.17
Prob > F = 0.3143
Fail to reject the null: the two variables are not jointly significant
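The F statistics on these cards come from comparing restricted and unrestricted regressions. A hand-rolled sketch on synthetic data (not the cards' regression) shows the mechanics of F = [(SSR_r - SSR_ur)/q] / [SSR_ur/(n - k - 1)]:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic model where x2 and x3 matter and x1 does not
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = rng.normal(size=n)
y = 1.0 + 0.8 * x2 + 0.8 * x3 + rng.normal(size=n)

def ssr(X, y):
    """Sum of squared residuals from an OLS fit of y on X."""
    b = np.linalg.solve(X.T @ X, X.T @ y)
    e = y - X @ b
    return e @ e

ones = np.ones(n)
ssr_ur = ssr(np.column_stack([ones, x1, x2, x3]), y)  # unrestricted
ssr_r = ssr(np.column_stack([ones, x1]), y)           # H0: b2 = b3 = 0
q, k = 2, 3                                           # restrictions, regressors
F = ((ssr_r - ssr_ur) / q) / (ssr_ur / (n - k - 1))
print(F > 3.04)  # exceeds the 5% F(2, 196) critical value: reject H0
```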
What happens when we have heteroskedasticity in our model?
1) OLS is no longer BLUE
2) usual formula for variance can’t be used
3) The usual standard errors are wrong
4) Unbiasedness is unaffected
Why can’t you use heteroskedasticity-robust standard errors all the time?
In small samples the robust SEs become less accurate and the corresponding test statistics do not have the correct distributions
Given the following hettest output,
chi2(1) = 62.55
Prob > chi2 = 0.000
determine if heteroskedasticity is present.
Reject the null: heteroskedasticity is present
H0: error variance does not depend on x
HA: MLR.5 fails
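Stata's hettest runs a Breusch-Pagan test: regress the squared OLS residuals on the regressors and compare LM = n*R^2 to a chi-squared critical value. A hand-rolled sketch on made-up heteroskedastic data (not the cards' output):

```python
import numpy as np

rng = np.random.default_rng(4)

# Made-up data violating MLR.5: the error's spread grows with x
n = 400
x = rng.uniform(1, 5, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=x, size=n)  # error sd depends on x

X = np.column_stack([np.ones(n), x])
b = np.linalg.solve(X.T @ X, X.T @ y)
u2 = (y - X @ b) ** 2                    # squared OLS residuals

# Auxiliary regression of u^2 on x; under H0, LM = n*R^2 ~ chi2(1)
g = np.linalg.solve(X.T @ X, X.T @ u2)
fitted = X @ g
r2 = 1 - np.sum((u2 - fitted) ** 2) / np.sum((u2 - u2.mean()) ** 2)
lm = n * r2
print(lm > 3.84)  # exceeds the 5% chi2(1) critical value: reject H0
```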