Econometrics 1: Asymptotic Theory Flashcards
What is convergence in probability?
A sequence of random variables Xn converges in probability to some value c if, for all e > 0,
P[|Xn - c| > e] -> 0
as n grows large.
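A quick numerical sketch of the definition (assuming NumPy; the sequence Xn = 1 + Z/√n is an illustrative choice, not from the card): estimate P[|Xn - c| > e] by Monte Carlo and watch it shrink as n grows.

```python
import numpy as np

# Illustrative sequence: X_n = 1 + Z/sqrt(n), Z ~ N(0,1), so X_n ->p c = 1.
rng = np.random.default_rng(0)
c, eps, reps = 1.0, 0.1, 100_000
probs = []
for n in (10, 100, 1_000, 10_000):
    x_n = c + rng.standard_normal(reps) / np.sqrt(n)
    # Monte Carlo estimate of P[|X_n - c| > eps]
    probs.append(np.mean(np.abs(x_n - c) > eps))
```

The estimated tail probabilities fall toward zero as n increases, matching the definition.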
Why does convergence in mean square imply convergence in probability?
A sequence of random variables Xn converges in mean square to c if E[(Xn-c)^2] -> 0.
By Markov's inequality applied to (Xn-c)^2, P[|Xn-c|>e] <= E[(Xn-c)^2]/e^2 for any e > 0. Mean square convergence drives the right-hand side to zero, so P[|Xn-c|>e] -> 0, which is the definition of convergence in probability.
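A numerical check of the bound in the answer (assuming NumPy; the distribution of X is an illustrative choice):

```python
import numpy as np

# Check Markov's inequality applied to (X - c)^2:
# P[|X - c| > eps] <= E[(X - c)^2] / eps^2
rng = np.random.default_rng(1)
c, eps = 0.0, 0.5
x = c + 0.3 * rng.standard_normal(200_000)
lhs = np.mean(np.abs(x - c) > eps)          # tail probability
rhs = np.mean((x - c) ** 2) / eps ** 2      # Markov/Chebyshev bound
```

Here `lhs` comes out well below `rhs`, so forcing the mean-square term to zero forces the tail probability to zero.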
State Chebyshev’s Law of Large Numbers.
Let {xi} be i.i.d. random variables with finite mean µ and variance σ^2. Then, the sample mean converges in probability to µ.
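A simulation sketch of the LLN (assuming NumPy; the Exponential(1) population, with mean µ = 1, is an illustrative choice):

```python
import numpy as np

# Sample means of i.i.d. Exponential(1) draws (mu = 1) concentrate
# around mu as n grows.
rng = np.random.default_rng(2)
mu, reps = 1.0, 5_000
mean_n10 = rng.exponential(mu, size=(reps, 10)).mean(axis=1)
mean_n2000 = rng.exponential(mu, size=(reps, 2_000)).mean(axis=1)
miss_n10 = np.mean(np.abs(mean_n10 - mu) > 0.1)     # often far from mu
miss_n2000 = np.mean(np.abs(mean_n2000 - mu) > 0.1) # almost never far
```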
State the Lindeberg-Levy Central Limit Theorem.
Let {xi} be i.i.d. random variables with finite mean µ and finite variance σ^2 > 0. Then,
√n(xbar - µ) ->d N[0,σ^2],
where ->d denotes convergence in distribution.
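A simulation sketch of the CLT (assuming NumPy; the Uniform(0,1) population, with µ = 1/2 and σ² = 1/12, is an illustrative choice):

```python
import numpy as np

# sqrt(n)*(xbar - mu) for i.i.d. Uniform(0,1) draws should be
# approximately N(0, sigma^2) with sigma^2 = 1/12.
rng = np.random.default_rng(3)
n, reps = 1_000, 10_000
mu, sigma2 = 0.5, 1 / 12
xbar = rng.uniform(size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (xbar - mu)
# z should have mean near 0 and variance near sigma^2
```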
State the multivariate Lindeberg-Levy CLT.
Let {Zi} be i.i.d. random vectors with finite mean vector µ and covariance matrix Σ. Then,
√n(Zbar - µ) ->d N[0,Σ].
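A simulation sketch of the vector case (assuming NumPy; the bivariate normal population and its Σ are illustrative choices):

```python
import numpy as np

# The sample covariance of sqrt(n)*(Zbar - mu) should approach Sigma.
rng = np.random.default_rng(4)
mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
n, reps = 400, 10_000
draws = rng.multivariate_normal(mu, Sigma, size=(reps, n))  # (reps, n, 2)
scaled = np.sqrt(n) * (draws.mean(axis=1) - mu)             # (reps, 2)
est_cov = np.cov(scaled, rowvar=False)
```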
What is the Continuous Mapping Theorem?
If Xn -> X (in probability or in distribution) and g is continuous (at least on a set to which the limit assigns probability one), then g(Xn) -> g(X) in the same mode of convergence. For example, if Xn ->p c, then g(Xn) ->p g(c) for any g continuous at c.
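A sketch of the probability-limit case (assuming NumPy; g = exp and the N(µ, 1) sampling setup are illustrative choices):

```python
import numpy as np

# If xbar ->p mu, the CMT gives g(xbar) ->p g(mu) for g continuous at mu.
rng = np.random.default_rng(5)
mu, n, reps = 0.5, 10_000, 100_000
# xbar for an i.i.d. N(mu, 1) sample of size n is exactly N(mu, 1/n),
# so we can draw it directly instead of averaging raw samples
xbar = rng.normal(mu, 1 / np.sqrt(n), size=reps)
far = np.mean(np.abs(np.exp(xbar) - np.exp(mu)) > 0.05)
# for large n, exp(xbar) is rarely far from exp(mu)
```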
What is the Slutsky Theorem?
If Xn ->d X and Yn ->p c for a constant c, then Xn + Yn ->d X + c, YnXn ->d cX, and Xn/Yn ->d X/c provided c ≠ 0.
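A classic application, sketched in code (assuming NumPy; the N(2, 9) population is an illustrative choice): the t-ratio replaces σ with s, and Slutsky keeps the limit N(0,1) because s ->p σ.

```python
import numpy as np

# sqrt(n)*(xbar - mu) ->d N(0, sigma^2) and s ->p sigma, so by Slutsky
# the studentized ratio ->d N(0, 1).
rng = np.random.default_rng(6)
n, reps = 500, 10_000
mu, sigma = 2.0, 3.0
x = rng.normal(mu, sigma, size=(reps, n))
t = np.sqrt(n) * (x.mean(axis=1) - mu) / x.std(axis=1, ddof=1)
# t should have mean near 0 and variance near 1
```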
What is the trace of a matrix?
The sum of the diagonal elements of a square matrix.
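A minimal NumPy check (the matrix is an illustrative choice); the trace also equals the sum of the eigenvalues:

```python
import numpy as np

# np.trace sums the diagonal elements of a square matrix.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
tr = np.trace(A)                           # 2 + 3 = 5
eig_sum = np.linalg.eigvals(A).sum().real  # eigenvalues are 2 and 3
```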
What is a χ^2 distribution?
The distribution of the sum of n independent squared standard normal random variables; n is its degrees of freedom.
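A simulation sketch of the construction (assuming NumPy), checking the known χ²_n moments, mean n and variance 2n:

```python
import numpy as np

# Summing n squared standard normals gives chi-square(n) draws.
rng = np.random.default_rng(7)
n, reps = 5, 200_000
chi2 = (rng.standard_normal((reps, n)) ** 2).sum(axis=1)
# mean should be near n = 5, variance near 2n = 10
```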
What is a t distribution?
t_N = Z/√(χ²_N/N), where Z is a standard normal random variable independent of the χ²_N variable, and N is the degrees of freedom.
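A simulation sketch building t draws directly from this definition (assuming NumPy), checking against the known t_N variance N/(N-2):

```python
import numpy as np

# t_N = Z / sqrt(chi2_N / N), with Z independent of the chi-square.
rng = np.random.default_rng(8)
N, reps = 10, 200_000
z = rng.standard_normal(reps)
chi2 = (rng.standard_normal((reps, N)) ** 2).sum(axis=1)
t = z / np.sqrt(chi2 / N)
target_var = N / (N - 2)   # known t_N variance, 1.25 for N = 10
```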
What is an F distribution?
An F distribution with (p, q) degrees of freedom is the ratio of two independent χ² random variables, each divided by its own degrees of freedom: F_{p,q} = (χ²_p/p)/(χ²_q/q).
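A simulation sketch building F draws from this definition (assuming NumPy), checking against the known F_{p,q} mean q/(q-2):

```python
import numpy as np

# F_{p,q} = (chi2_p / p) / (chi2_q / q), with the chi-squares independent.
rng = np.random.default_rng(9)
p, q, reps = 3, 20, 200_000
chi2_p = (rng.standard_normal((reps, p)) ** 2).sum(axis=1)
chi2_q = (rng.standard_normal((reps, q)) ** 2).sum(axis=1)
f = (chi2_p / p) / (chi2_q / q)
# mean should be near q/(q-2) = 20/18
```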