Chapter 2 Asymptotic theory Flashcards
Continuous mapping theorem
If a sequence of random vectors converges to a limit (in probability or in distribution), then applying a continuous function preserves the convergence: z_n -> z implies g(z_n) -> g(z) for continuous g.
Note that matrix inversion is continuous (at any nonsingular matrix), so inverses inherit convergence.
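A minimal simulation of the continuous mapping theorem (distribution, sample sizes, and the function g are illustrative assumptions, not from the card): the sample mean converges to mu, so a continuous g applied to it converges to g(mu).

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 2.0
for n in (100, 10_000, 1_000_000):
    z = rng.exponential(scale=mu, size=n)  # iid draws with mean mu
    zbar = z.mean()                        # zbar ->p mu by the LLN
    # g(x) = 1/x is continuous at mu != 0, so g(zbar) ->p g(mu) = 0.5 by the CMT
    print(n, zbar, 1.0 / zbar)
```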
Slutsky’s lemma
If x_n converges in distribution to x and y_n converges in probability to a constant c, then x_n + y_n converges in distribution to x + c and y_n x_n converges in distribution to cx.
How does Slutsky’s lemma apply to a matrix times rv?
If A_n converges in probability to a constant matrix A and x_n converges in distribution to x, then A_n x_n converges in distribution to Ax.
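A sketch of Slutsky’s lemma by simulation (the specific sequences and sizes are assumptions for illustration): x_n is a standardized sample mean, so x_n ->d N(0,1); y_n ->p 2; their product should behave like N(0, 4).

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 400, 10_000
z = rng.uniform(0, 1, size=(reps, n))
# Standardized mean of Uniform(0,1): mean 1/2, variance 1/12 -> x_n ->d N(0,1)
x_n = np.sqrt(n) * (z.mean(axis=1) - 0.5) / np.sqrt(1 / 12)
y_n = 2 + z[:, 0] / n          # ->p 2 (the random part vanishes)
prod = y_n * x_n               # Slutsky: ->d 2 * N(0,1) = N(0,4)
print(prod.mean(), prod.var())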
Kolmogorov (strong) Law of large numbers
The sample average converges almost surely (hence in probability) to the mean if z_i is iid with E|z_i| finite.
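A quick numerical sketch of the strong LLN (the gamma distribution and sample size are assumptions for illustration): the running mean of iid draws settles at the population mean.

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.gamma(shape=3.0, scale=0.5, size=200_000)  # iid, E[z_i] = 3 * 0.5 = 1.5
running_mean = np.cumsum(z) / np.arange(1, z.size + 1)
print(running_mean[[99, 9_999, 199_999]])  # drifts toward 1.5 as n grows
```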
Stationarity
The (joint) distribution is invariant to shifts in time.
Ergodicity
Two groups of observations become (asymptotically) independent as the separation n between them goes to infinity.
Ergodic theorem
Same conclusion as Kolmogorov's LLN, but the iid assumption is replaced by stationarity and ergodicity: the sample average of a stationary ergodic process converges almost surely to the mean.
LLN to CLT 1- Classical Lindeberg-Levy CLT
z_i is iid with mean mu and finite variance; then sqrt(n)(z_bar - mu) converges in distribution to N(0, Var(z_i)).
In this case, stationarity and ergodicity are not enough: the restriction ergodicity places on dependence is too weak for the classical Lindeberg-Levy CLT, which requires iid.
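A simulation sketch of the Lindeberg-Levy CLT (the Bernoulli distribution and sizes are assumptions for illustration): the scaled, centered sample mean of iid draws is approximately normal with the population variance.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 500, 20_000
mu, var = 0.5, 0.25                       # Bernoulli(0.5): mean 0.5, variance 0.25
z = rng.binomial(1, 0.5, size=(reps, n))  # reps independent iid samples of size n
stats = np.sqrt(n) * (z.mean(axis=1) - mu)
print(stats.mean(), stats.var())          # approximately N(0, 0.25)
```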
LLN to CLT 2- Martingale Difference CLT
A martingale difference sequence (g_i with E[g_i | g_{i-1}, g_{i-2}, ...] = 0) has no serial correlation. If g_i is an MDS and stationary ergodic (SE), the MD-CLT applies: sqrt(n) g_bar converges in distribution to N(0, E[g_i g_i']).
Note: the conditional variance remains unrestricted (conditional heteroskedasticity is allowed).
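A sketch of an MDS with unrestricted conditional variance (the construction g_t = u_t * u_{t-1} is a standard illustrative example, an assumption here): E[g_t | past] = u_{t-1} E[u_t] = 0, so g_t is serially uncorrelated, yet Var(g_t | past) = u_{t-1}^2 moves over time.

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.standard_normal(200_001)
g = u[1:] * u[:-1]                               # MDS: E[g_t | past] = 0
# Lag-1 autocorrelation of g should be near zero (no serial correlation)...
lag1 = np.corrcoef(g[1:], g[:-1])[0, 1]
# ...but g_t^2 is autocorrelated, revealing the time-varying conditional variance.
lag1_sq = np.corrcoef(g[1:] ** 2, g[:-1] ** 2)[0, 1]
print(lag1, lag1_sq)
```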
Linear regression assumptions (AT): 1
Linearity
Linear regression assumptions (AT): 2
Stochastic assumption:
(y_i, x_i) are jointly stationary and ergodic.
Linear regression assumptions (AT): 3
Predeterminedness assumption: E[x_i epsilon_i] = 0
Linear regression assumptions (AT): 4
Sigma_xx = E[x_i x_i'] is nonsingular (asymptotic full-rank condition)
Linear regression assumptions (AT): 5
x_i epsilon_i is an MDS with variance S = E[x_i x_i' epsilon_i^2], where S is a nonsingular variance-covariance matrix
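A sketch of estimating S = E[x_i x_i' epsilon_i^2] from residuals and forming the sandwich Avar(b) = Sxx^{-1} S Sxx^{-1} (the heteroskedastic design and sizes are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50_000
x = np.column_stack([np.ones(n), rng.standard_normal(n)])
eps = rng.standard_normal(n) * (1 + 0.5 * np.abs(x[:, 1]))  # conditional heteroskedasticity
y = x @ np.array([1.0, 2.0]) + eps
b = np.linalg.solve(x.T @ x, x.T @ y)       # OLS
e = y - x @ b                               # residuals stand in for epsilon_i
Sxx = x.T @ x / n                           # sample analogue of Sigma_xx
S_hat = (x * e[:, None] ** 2).T @ x / n     # (1/n) sum x_i x_i' e_i^2, estimates S
avar = np.linalg.inv(Sxx) @ S_hat @ np.linalg.inv(Sxx)  # sandwich
print(avar)
```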
Consistency in OLS with Asymptotic Theory. Which assumptions were needed?
1 to 4.
Linearity gives the sampling-error formula b - beta = S_xx^{-1} (1/n) sum x_i epsilon_i; the stochastic assumption (stationarity + ergodicity) lets the ergodic theorem apply to the sample averages; predeterminedness makes (1/n) sum x_i epsilon_i converge to E[x_i epsilon_i] = 0; nonsingularity of Sigma_xx ensures S_xx is invertible in the limit.
Thus, with infinitely many observations we would recover beta: b converges in probability to beta.
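A simulation sketch of OLS consistency (an iid design is used here for simplicity, a special case of stationary ergodic; the coefficients and sizes are assumptions): the estimate tightens around beta as n grows.

```python
import numpy as np

rng = np.random.default_rng(4)
beta = np.array([1.0, -2.0])
for n in (100, 100_000):
    x = np.column_stack([np.ones(n), rng.standard_normal(n)])
    eps = rng.standard_normal(n)           # E[x_i eps_i] = 0 (predeterminedness)
    y = x @ beta + eps                     # linearity
    b = np.linalg.solve(x.T @ x, x.T @ y)  # b = S_xx^{-1} s_xy
    print(n, b)                            # approaches beta as n grows
```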