Section 2 Flashcards
Derive the OLS estimator in matrix form, starting from Y=Xβ+ε?
See proof in notes
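A sketch of the derivation (standard notation; the notes' proof is the reference):

```latex
\text{Minimise } S(b) = (Y - Xb)'(Y - Xb) = Y'Y - 2b'X'Y + b'X'Xb
\frac{\partial S}{\partial b} = -2X'Y + 2X'Xb = 0
\;\Rightarrow\; X'Xb = X'Y
\;\Rightarrow\; b = (X'X)^{-1}X'Y
```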
Show b=(X'X)^-1X'Y is the same as the simple-regression estimators b1 and b2?
See page 4 notes
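A quick numerical check of this equivalence (simulated data, not the notes' proof): the matrix formula reproduces the textbook scalar estimators b2 = Σ(x-x̄)(y-ȳ)/Σ(x-x̄)² and b1 = ȳ - b2·x̄.

```python
import numpy as np

# Simulated sample (hypothetical values for illustration).
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 1.5 + 2.0 * x + rng.normal(size=50)

# Matrix form: X is a column of ones plus the regressor x.
X = np.column_stack([np.ones_like(x), x])
b = np.linalg.solve(X.T @ X, X.T @ y)  # solves (X'X)b = X'Y

# Textbook scalar formulas for simple regression.
b2 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b1 = y.mean() - b2 * x.mean()

# The two routes agree.
assert np.allclose(b, [b1, b2])
```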
How do we know X’X will always be positive definite?
For any vector z, z'(X'X)z = (Xz)'(Xz) = ||Xz||^2 >= 0, and it is strictly positive whenever X has full rank, therefore X'X is positive definite (see top of page 1 side 2 for proof)
Shows the second-order condition holds, so we always find a minimum point
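The quadratic-form argument in one line (my sketch of the proof referenced above):

```latex
z'(X'X)z = (Xz)'(Xz) = \|Xz\|^2 \ge 0 \quad \forall z,
\text{with equality only if } Xz = 0,
\text{which for full-rank } X \text{ forces } z = 0.
```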
5 classical assumptions in matrix form?
1) E(ε)=0
2) var(ε)=σ^2I(n) (combines both homoskedasticity and no autocorrelation assumptions)
3) E(X’ε)=E(X’)E(ε) (X and ε are independent)
4) X is of full rank k<=n (no linear dependency in the columns of X, therefore no pure multicollinearity)
5) ε~N(0,σ^2I(n)) (Allows for hypothesis testing)
If X is fixed (non-random) then:
E(X’ε)=X’E(ε)
Show var(ε)=σ^2I(n) combines both homoskedasticity and no autocorrelation assumptions?
See notes
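A sketch of what the notes show: writing out var(ε) = E(εε') = σ²I(n) element by element,

```latex
\operatorname{var}(\varepsilon) = E(\varepsilon\varepsilon') =
\begin{pmatrix}
\sigma^2 & 0 & \cdots & 0 \\
0 & \sigma^2 & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & \sigma^2
\end{pmatrix}
```

The diagonal says var(ε_i) = σ² for every i (homoskedasticity); the zero off-diagonal elements say cov(ε_i, ε_j) = 0 for i ≠ j (no autocorrelation).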
What does it mean for an estimator to be unbiased?
It means the distribution of the estimator is centered around the true but unknown value of the parameter
Show that b, the estimator for β, is unbiased?
See notes page 2 side 1
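A sketch of the argument (substitute Y = Xβ + ε into the estimator):

```latex
b = (X'X)^{-1}X'Y = (X'X)^{-1}X'(X\beta + \varepsilon) = \beta + (X'X)^{-1}X'\varepsilon
E(b) = \beta + (X'X)^{-1}X'E(\varepsilon) = \beta
```

The last step uses fixed X (or independence of X and ε) and E(ε) = 0, which is exactly why those two assumptions are the ones required.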
Which assumptions are required to prove b is unbiased?
1) E(ε)=0
and
3) E(X’ε)=E(X’)E(ε) (X and ε are independent)
Prove the formula for variance of the OLS estimator in matrix form?
See notes page 2 side 1
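A sketch of the derivation, using b - β = (X'X)^{-1}X'ε from the unbiasedness proof:

```latex
\operatorname{var}(b) = E[(b-\beta)(b-\beta)']
= (X'X)^{-1}X' \, E(\varepsilon\varepsilon') \, X(X'X)^{-1}
= (X'X)^{-1}X' \, \sigma^2 I_n \, X(X'X)^{-1}
= \sigma^2 (X'X)^{-1}
```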
2 points about the var(b) matrix?
Main diagonal is the variances of the parameter estimates (i.e. var(b1) up to var(bk))
Off-diagonal elements are the covariances between estimators (e.g. cov(b1,b2))
See top of page 9 and bottom of page 8
Why is high variability in X good?
Greater variability in X makes X'X larger, so var(b) = σ^2(X'X)^-1 is smaller and the parameter estimates are more precise
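A small illustration of this (hypothetical data): spreading the x values out shrinks the slope estimator's variance, via var(b) = σ²(X'X)⁻¹.

```python
import numpy as np

sigma2 = 1.0
x_narrow = np.linspace(-0.5, 0.5, 20)  # low variability in X
x_wide = np.linspace(-5.0, 5.0, 20)    # high variability in X

def slope_var(x, sigma2):
    """Variance of the slope estimate from var(b) = sigma^2 (X'X)^{-1}."""
    X = np.column_stack([np.ones_like(x), x])
    return sigma2 * np.linalg.inv(X.T @ X)[1, 1]

# More spread in X gives a smaller slope variance.
assert slope_var(x_wide, sigma2) < slope_var(x_narrow, sigma2)
```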
Show that the variance of the OLS estimator is the smallest of all linear unbiased estimators (Gauss-Markov theorem)?
See notes page 2 side 1
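A sketch of the Gauss-Markov argument (the notes' proof is the reference): write any other linear estimator as b* = [(X'X)^{-1}X' + C]Y; unbiasedness forces CX = 0, and then

```latex
\operatorname{var}(b^*) = \sigma^2[(X'X)^{-1}X' + C][X(X'X)^{-1} + C']
= \sigma^2(X'X)^{-1} + \sigma^2 CC'
= \operatorname{var}(b) + \sigma^2 CC'
```

Since CC' is positive semidefinite, var(b*) exceeds var(b) unless C = 0, i.e. unless b* is OLS itself.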