Econometrics I Revision Flashcards
Explain and derive the OLS estimator
Minimise the sum of squared deviations between the actual values y and the fitted values Xβ. You choose β to minimise the distance between y and its linear approximation, given by the conditional expectation.
(y-Xβ)’(y-Xβ)
First-order condition (set the derivative with respect to β to zero):
-2X’y + 2X’Xβhat = 0
βhat = (X’X)-1 X’y
Second-order condition:
2X’X is positive definite, so the solution is a minimum
Assumption: X’X is nonsingular (X has full column rank), so the inverse exists
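A minimal numpy sketch of the closed form (all data simulated and hypothetical: an intercept plus two regressors with known β):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = Xβ + e with β = (1, 2, -0.5)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 regressors
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + rng.normal(size=n)

# OLS estimator βhat = (X'X)^{-1} X'y, computed via the normal equations
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)  # close to the true (1, 2, -0.5)
```

Solving the normal equations X’Xβhat = X’y with `np.linalg.solve` avoids forming the inverse explicitly, which is the numerically preferred route.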
Explain and derive MM estimator
The MM estimator finds the value of β that makes the sample counterpart of the population moment condition E(X’e)=0 hold:
1/n X’(y-Xβhat) = 1/n (X’y - X’Xβhat) = 0
βhat = (X’X)-1 X’y
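A quick numerical check (simulated data) that the sample moment condition is exactly satisfied at the MM/OLS solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([0.5, 1.5]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Sample moment condition: (1/n) X'(y - X βhat) should be numerically zero
moments = X.T @ (y - X @ beta_hat) / n
print(moments)
```

The residuals are orthogonal to every column of X by construction, which is why MM and OLS give the same βhat here.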
What does (X’X)-1 X’X equal
I, the identity matrix
Explain the law of iterated expectations
E[E[X1|X2]] = E[X1]
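A simulation sketch of the identity (hypothetical setup: X2 is a discrete grouping variable and X1 depends on it):

```python
import numpy as np

rng = np.random.default_rng(2)

# X2 takes values 0, 1, 2; X1 depends on X2 plus noise
n = 100_000
X2 = rng.integers(0, 3, size=n)
X1 = 2.0 * X2 + rng.normal(size=n)

# Inner expectation: E[X1 | X2 = k], estimated by group means
cond_means = np.array([X1[X2 == k].mean() for k in range(3)])
# Outer expectation: weight the conditional means by P(X2 = k)
probs = np.array([(X2 == k).mean() for k in range(3)])
lhs = (cond_means * probs).sum()   # E[ E[X1 | X2] ]
rhs = X1.mean()                    # E[X1]
print(lhs, rhs)  # the two agree
```

Averaging group means with group-frequency weights recovers the overall mean exactly, which is the discrete form of the law.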
Explain what random sampling means
The population model has been specified and an independent, identically distributed (iid) sample can be drawn
Explain what an unbiased estimator means
E(βhat) = β
Explain zero conditional mean assumption
Population orthogonality condition E(e|X)=0
Explain when X’X is nonsingular
X’X is nonsingular when X has full column rank (no exact linear dependence among regressors). In that case the linear projection of y on X always exists and is unique
Show the equation when an estimator is consistent
plim(βhat) = β
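A simulation sketch of consistency (hypothetical regression through the origin with true β = 2): the estimation error shrinks as n grows.

```python
import numpy as np

rng = np.random.default_rng(3)
beta = 2.0

def beta_hat(n):
    # Simple regression through the origin: y = βx + e, βhat = (x'x)^{-1} x'y
    x = rng.normal(size=n)
    y = beta * x + rng.normal(size=n)
    return (x @ y) / (x @ x)

errors = [abs(beta_hat(n) - beta) for n in (100, 10_000, 1_000_000)]
print(errors)  # shrinks toward zero as n grows
```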
What are the assumptions of the linear regression model?
- Population orthogonality condition: E(e|X)=0
- Full rank: X is an n x (k+1) matrix with rank k+1, so no column is an exact linear combination of the others
- Linearity: the true model is y=Xβ+e
- Spherical disturbances: homoscedasticity and non-autocorrelation E(ee’|X) = σ^2 I
Explain the BLUE properties
Best: βhat is more efficient than any other linear unbiased estimator β~: V(β~) - V(βhat) is a positive semidefinite matrix
Linear: βhat is a linear function of y
Unbiased: E(βhat) = β
Explain multicollinearity
Perfect multicollinearity means the columns of X are linearly dependent, so X’X is not invertible and the OLS parameters are not identified
Explain asymptotic inference
If the small-sample (finite-sample) distribution of an estimator is unknown, we can use an asymptotic approximation instead
Explain the Central Limit Theorem
If we have an iid sequence of random variables with finite variance, then no matter what their distribution, the standardised sample mean √n(x̄-μ)/σ converges in distribution to a standard normal
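A simulation sketch: draws from a deliberately skewed, non-normal distribution (exponential with mean 1 and variance 1), whose standardised sample means nevertheless look standard normal.

```python
import numpy as np

rng = np.random.default_rng(4)

# 20,000 replications of a sample of size 1,000 from Exp(1)
n, reps = 1_000, 20_000
samples = rng.exponential(scale=1.0, size=(reps, n))

# Standardised sample means: sqrt(n) * (xbar - mu) / sigma, with mu = sigma = 1
z = np.sqrt(n) * (samples.mean(axis=1) - 1.0) / 1.0

# By the CLT these should have mean ~0 and standard deviation ~1
print(z.mean(), z.std())
```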
Show that βhat is an unbiased estimator of β
βhat = (X’X)-1X’y = (X’X)-1X’(Xβ+e)
Expand, using (X’X)-1X’X = I:
βhat = β + (X’X)-1X’e
Take expectations and apply the law of iterated expectations E(E(X1|X2)) = E(X1):
E(βhat) = β + E((X’X)-1X’E(e|X)) = β, since E(e|X) = 0
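A Monte Carlo sketch of unbiasedness (hypothetical design: X held fixed, fresh errors with E(e|X)=0 each replication): the average of βhat across replications is close to the true β.

```python
import numpy as np

rng = np.random.default_rng(5)

n, reps = 50, 5_000
beta = np.array([1.0, -2.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # X fixed across replications

estimates = np.empty((reps, 2))
for r in range(reps):
    e = rng.normal(size=n)                 # fresh errors, mean zero given X
    y = X @ beta + e
    estimates[r] = np.linalg.solve(X.T @ X, X.T @ y)

print(estimates.mean(axis=0))  # averages close to (1.0, -2.0)
```

Individual replications scatter around β, but their average matches it, which is exactly what E(βhat) = β claims.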