Lecture 2 Flashcards
Given a parameter estimator θn, when is it unbiased for θ?
An estimator is unbiased if E[θn] = θ for every sample size n.
Given a parameter estimator θn, when is it asymptotically unbiased for θ?
An estimator is asymptotically unbiased if lim as n->inf. E[θn] = θ
Do consistency and unbiasedness imply each other? Give an example.
No, neither implies the other. For example, θn = X1 (keeping only the first observation) is unbiased for the mean but not consistent, while θn = Xbar_n + 1/n (the sample mean plus 1/n) is biased for every finite n but consistent.
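A minimal simulation sketch of the two counterexamples above; the normal data, sample sizes, and seed are illustrative choices, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, reps = 2.0, 5_000   # true mean; Monte Carlo repetitions

for n in [10, 100, 1_000]:
    x = rng.normal(mu, 1.0, size=(reps, n))
    theta_first = x[:, 0]                    # theta_n = X_1: unbiased, not consistent
    theta_shift = x.mean(axis=1) + 1.0 / n   # sample mean + 1/n: biased, consistent
    print(n,
          round(theta_first.mean(), 3), round(theta_first.std(), 3),
          round(theta_shift.mean(), 3), round(theta_shift.std(), 3))
# theta_first stays centred at mu but its spread never shrinks;
# theta_shift has bias 1/n -> 0 and spread -> 0, so it is consistent.
```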
Define Markov’s Inequality. (Theorem 3)
For any r > 0 for which E|X|^r < inf. and any δ > 0,
P(|X| > δ) <= δ^(-r) * E|X|^r
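A quick Monte Carlo sanity check of the inequality; the exponential distribution and the choices of r and δ are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=1_000_000)  # any distribution with finite E|X|^r works

for r in [1, 2]:
    for delta in [1.0, 2.0, 4.0]:
        lhs = (np.abs(x) > delta).mean()           # estimate of P(|X| > delta)
        rhs = (np.abs(x) ** r).mean() / delta**r   # delta^(-r) * E|X|^r
        print(r, delta, round(lhs, 4), "<=", round(rhs, 4))
```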
Define convergence in r-th mean and its assumptions.
Assuming E|X|^r < inf. and E|Xn|^r < inf. for all n, we say that Xn converges to X in r-th mean if
lim n->inf. E[|Xn - X|^r] = 0
Does convergence in r-th mean imply convergence in probability? How can we prove it?
Yes, by Markov's inequality applied to |Xn - X|: for any δ > 0, P(|Xn - X| > δ) <= δ^(-r) * E[|Xn - X|^r] -> 0.
Define consistency in r-th mean.
An estimator θn is r-th mean consistent for θ if
E[|θn - θ|^r] -> 0 as n -> inf.
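For instance, the sample mean of i.i.d. data with finite variance is 2nd-mean (mean-square) consistent, since E[|Xbar_n - μ|^2] = σ^2/n -> 0. A sketch checking this numerically, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, reps = 0.0, 3.0, 5_000

for n in [10, 100, 1_000]:
    xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    mse = np.mean((xbar - mu) ** 2)              # empirical E[|theta_n - theta|^2]
    print(n, round(mse, 5), "vs", sigma**2 / n)  # theory: sigma^2 / n -> 0
```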
Name an important implication of r-th mean consistency when r >= 1. Give a proof.
It implies asymptotic unbiasedness. Proof: for r >= 1, by Jensen's inequality, |E[θn] - θ| = |E[θn - θ]| <= E[|θn - θ|] <= (E[|θn - θ|^r])^(1/r) -> 0, so E[θn] -> θ.
Say Xn converges to c in r-th mean and g(·) is a continuous function. What does this imply? Which implication might seem to hold at first but doesn't, and why?
It implies that g(Xn) converges in probability to g(c), since r-th mean convergence gives convergence in probability and continuity preserves it. It does not imply that g(Xn) converges to g(c) in r-th mean, because E[|g(Xn)|^r] might not exist.
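A sketch of the implication that does hold, with the illustrative choices Xn = c + Z/n (which converges to c in 2nd mean since E[|Xn - c|^2] = 1/n^2) and g = exp:

```python
import numpy as np

rng = np.random.default_rng(3)
c, delta = 1.0, 0.05

for n in [1, 10, 100, 1_000]:
    xn = c + rng.normal(size=100_000) / n     # Xn -> c in 2nd mean
    gap = np.abs(np.exp(xn) - np.exp(c))      # |g(Xn) - g(c)| with g = exp
    print(n, round((gap > delta).mean(), 4))  # P(|g(Xn) - g(c)| > delta) -> 0
```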
Explain Theorem 7
Too complicated to write.
What is a linear process? Give examples of linear processes.
Given a sequence of non-random m×m matrices Aj and a white-noise sequence ei, ui is a linear process if:
ui = ∑Aj*e(i-j), with the sum indexing from j=0 to infinity, and the norms of the Aj are absolutely summable: ∑||Aj|| < infinity.
Examples include AR(p), MA(q), and ARMA(p,q) processes.
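A minimal sketch of simulating a scalar, truncated linear process; the AR(1)-style weights Aj = φ^j, the truncation point, and the seed are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
phi, J, T = 0.7, 200, 1_000        # weights A_j = phi^j, truncation J, sample length T
A = phi ** np.arange(J)            # absolutely summable: sum |A_j| = 1/(1 - phi)
e = rng.normal(size=T + J)         # white noise e_i with variance S = 1

# u_i = sum_{j=0}^{J-1} A_j * e_{i-j}; np.convolve applies exactly these weights
u = np.convolve(e, A, mode="valid")[:T]
print(u[:5])
```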
Given a linear process ui, what is its autocovariance structure?
For i <= j, the autocovariance is E(ui*uj') =
∑A(k)*S*A'(k+j-i), with the sum indexing from k=0 to inf., where S = E(ei*ei'). (Review Examples 10 and 11.)