5. Limit Theorems Flashcards

1
Q

Multivariate Normal Distribution

A

-suppose X_ = (X1,X2,…,Xn) is a vector of random variables
-if X_ has pdf:
f(x_) = (2π)^(-n/2) det(Σ_)^(-1/2) exp(-1/2 (x_-µ_)ᵀ Σ_^(-1) (x_-µ_))
-then we say that X_ has a multivariate normal distribution:
X_~Multivariate Normal(µ_, Σ_)
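As a quick numerical sanity check of the pdf formula (a sketch, assuming scipy is available; µ_ and Σ_ here are made-up illustrative values):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative (made-up) parameters for a 2-dimensional example
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

def mvn_pdf(x, mu, Sigma):
    """Evaluate the multivariate normal pdf directly from the formula."""
    n = len(mu)
    diff = x - mu
    norm_const = (2 * np.pi) ** (-n / 2) * np.linalg.det(Sigma) ** (-0.5)
    quad_form = diff @ np.linalg.solve(Sigma, diff)   # (x-µ)ᵀ Σ⁻¹ (x-µ)
    return norm_const * np.exp(-0.5 * quad_form)

x = np.array([0.5, -1.0])
print(mvn_pdf(x, mu, Sigma))                    # formula evaluated by hand
print(multivariate_normal(mu, Sigma).pdf(x))    # scipy's reference value
```

The two printed values should agree to machine precision.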

2
Q

Properties of the Multivariate Normal Distribution

Expectation

A

-suppose X_=(X1,X2,…,Xn) and X_~Multivariate Normal(µ_, Σ_)
-then:
E(X_) = µ

3
Q

Properties of the Multivariate Normal Distribution

Covariance

A

-suppose X_=(X1,X2,…,Xn) and X_~Multivariate Normal(µ_, Σ_)
-then:
cov(Xi,Xj) = Σij
-where Σij is the (i,j) element of the matrix Σ_
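A simulation sketch of the expectation and covariance properties (the µ_ and Σ_ values are made-up for illustration): the empirical mean and covariance of a large multivariate normal sample should approach µ_ and Σ_.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0, 3.0, -1.0])
Sigma = np.array([[1.0, 0.4, 0.0],
                  [0.4, 2.0, 0.3],
                  [0.0, 0.3, 1.5]])

# Draw a large sample; empirical moments should be close to µ and Σ
X = rng.multivariate_normal(mu, Sigma, size=200_000)
mu_hat = X.mean(axis=0)
Sigma_hat = np.cov(X, rowvar=False)
print(np.max(np.abs(mu_hat - mu)))        # small
print(np.max(np.abs(Sigma_hat - Sigma)))  # small
```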

4
Q

Properties of the Standard Multivariate Normal Distribution

Bivariate Special Case

A

-suppose X_=(X1,X2) and X_~Multivariate Normal(µ_, Σ_) with:
µ_ = (0,0)
Σ_ = the 2x2 identity matrix
-then f(x1,x2) = product of two univariate standard normal pdfs
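A small numerical check of the factorisation (stdlib only; the evaluation point is arbitrary): with µ_ = 0 and Σ_ = I, the n = 2 pdf formula reduces to (2π)⁻¹ exp(-(x1²+x2²)/2), which is exactly the product of two standard normal pdfs.

```python
import math

def std_normal_pdf(t):
    # univariate standard normal density
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def bivariate_std_pdf(x1, x2):
    # the n = 2 multivariate normal formula with µ = 0, Σ = I
    return math.exp(-(x1 * x1 + x2 * x2) / 2) / (2 * math.pi)

x1, x2 = 0.7, -1.2
print(bivariate_std_pdf(x1, x2))
print(std_normal_pdf(x1) * std_normal_pdf(x2))   # same value
```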

5
Q

Standard Multivariate Normal Distribution

A

-suppose Z_=(Z1,Z2,…,Zn) is distributed:
Z_~Multivariate Normal(0_, In)
-where In is the nxn identity matrix
-then Z_ has the standard multivariate normal distribution

6
Q

Independence and the Multivariate Normal Distribution

A

-we have shown that if X and Y are independent then cov(X,Y)=0
-but zero covariance does NOT generally imply independence
-HOWEVER, in the case of the multivariate normal:
X_~Multivariate Normal(µ_, Σ_)
-where X_=(X1,X2,…,Xn)
-IF
cov(Xi,Xj) = 0
-THEN Xi and Xj are independent

7
Q

Change of Coordinates

A

-suppose X is a random variable and Y=g(X) where g:ℝ->ℝ is a monotonic function
-define g^(-1):ℝ->ℝ such that g^(-1)(g(x)) = x and g^(-1) is differentiable, then:
fY(y) = fX(g^(-1)(y)) * |dg^(-1)(y)/dy| if y=g(x) for some x, and 0 otherwise
-this can be used to derive many pdfs
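As a worked instance of the formula (a sketch, stdlib only): take X standard normal and g(x) = eˣ, so g⁻¹(y) = ln y and |dg⁻¹(y)/dy| = 1/y. The formula then gives the lognormal pdf.

```python
import math

def std_normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def lognormal_pdf(y):
    """pdf of Y = exp(X), X ~ N(0,1), via the change-of-coordinates formula:
    fY(y) = fX(g^(-1)(y)) * |dg^(-1)(y)/dy| with g^(-1)(y) = ln y."""
    if y <= 0:            # y = g(x) has no solution here, so the density is 0
        return 0.0
    return std_normal_pdf(math.log(y)) * (1.0 / y)

print(lognormal_pdf(1.0))   # equals φ(0) = 1/√(2π)
```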

8
Q

Markov’s Inequality

A

-suppose X is a random variable that takes on non-negative values then for all a>0 :
P(X≥a) ≤ E(X)/a
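A quick simulation sketch of the inequality (distribution and threshold are made-up choices): for X ~ Exponential(1), which is non-negative with E(X) = 1, the estimated tail probability must sit below E(X)/a.

```python
import random

random.seed(1)
n = 100_000
# X ~ Exponential(1): non-negative, E(X) = 1
xs = [random.expovariate(1.0) for _ in range(n)]

a = 3.0
p_hat = sum(x >= a for x in xs) / n   # estimate of P(X ≥ a); true value e^(-3) ≈ 0.05
bound = 1.0 / a                       # E(X)/a
print(p_hat, "<=", bound)
```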

9
Q

Chebyshev’s Inequality

A

-if X is a random variable with mean μ and variance σ² then for all k>0 :
P( |X-μ| ≥ k) ≤ σ²/k²
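A simulation sketch of Chebyshev's inequality (the distribution and k are made-up choices): for X ~ Uniform(0,1), μ = 1/2 and σ² = 1/12, and the estimated deviation probability must sit below σ²/k².

```python
import random

random.seed(2)
n = 100_000
# X ~ Uniform(0,1): μ = 1/2, σ² = 1/12
xs = [random.random() for _ in range(n)]

mu, var = 0.5, 1.0 / 12.0
k = 0.4
p_hat = sum(abs(x - mu) >= k for x in xs) / n   # true value is 0.2
bound = var / (k * k)                           # ≈ 0.52, not tight here
print(p_hat, "<=", bound)
```

The bound is often loose, as here, but it holds for any distribution with finite variance.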

10
Q

Weak Law of Large Numbers

A

-suppose X1,X2,…,Xn is a random sample from a distribution with mean μ and var(Xi) = σ²
-then for all ε>0 :
P{ |(X1+X2+…+Xn)/n - μ| ≥ ε} ≤ σ²/(nε²)
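A simulation sketch of the bound (sample size, ε, and distribution are made-up choices): repeatedly draw samples of Uniform(0,1) variables and compare the frequency of large deviations of the sample mean against σ²/(nε²).

```python
import random

random.seed(3)
mu, var = 0.5, 1.0 / 12.0   # Uniform(0,1): μ = 1/2, σ² = 1/12
eps = 0.02
n = 500
trials = 5_000

count = 0
for _ in range(trials):
    mean = sum(random.random() for _ in range(n)) / n
    if abs(mean - mu) >= eps:
        count += 1

p_hat = count / trials          # estimate of P(|sample mean - μ| ≥ ε)
bound = var / (n * eps * eps)   # σ²/(nε²)
print(p_hat, "<=", bound)
```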

11
Q

Sequence of Random Variables

A
  • sequence of independent and identically distributed random variables
  • sequence: X1,X2,…,Xn
  • think of a sequence as sampled data:
  • suppose we are drawing a sample of n observations
  • each observation will be a random variable Xi
  • with realisation xi
12
Q

Mean / Variance of Sample Mean

A
  • let X1,X2,…,Xn be a random sample from a distribution with mean μ and variance σ²
  • let Xn^ be the sample mean
  • then E(Xn^)=μ and Var(Xn^)=σ²/n
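A simulation sketch of these two identities (the distribution and sample sizes are made-up choices): for Exponential(0.5) draws, μ = 2 and σ² = 4, so sample means of size n = 50 should have mean ≈ 2 and variance ≈ 4/50 = 0.08.

```python
import random
import statistics

random.seed(4)
# Exponential(rate 0.5): mean μ = 2, variance σ² = 4
mu, sigma2, n = 2.0, 4.0, 50

# Many independent sample means of size n
means = [sum(random.expovariate(0.5) for _ in range(n)) / n
         for _ in range(20_000)]

print(statistics.mean(means))       # ≈ μ = 2
print(statistics.variance(means))   # ≈ σ²/n = 0.08
```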
13
Q

Weak Law of Large Numbers

Limit

A

-suppose X1,X2,…,Xn is a random sample from a distribution with mean μ and variance σ², then for all ε>0:
P{ |(X1+X2+…+Xn)/n - μ| ≥ ε} -> 0 as n->∞

14
Q

Convergence Definition

A

-for a sequence of real numbers
{ai} = {a1,a2,a3,…,an,…}
-we say that the sequence {ai} converges to a real number A if for any ε>0 there is a positive integer N such that, for n≥N:
|an-A| < ε

15
Q

Pointwise Convergence Definition

A
-for a sequence of functions:
{fi} = {f1,f2,f3,...,fn,...}
-suppose fi:X->ℝ for all i
-then {fi} converges pointwise to f if, for all x∈X and ε>0 there is an N such that for all n≥N:
|fn(x) - f(x)| < ε
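As a concrete illustration of this definition (a standard textbook example, not from the original card): fn(x) = xⁿ on [0,1] converges pointwise to 0 for x < 1. For a fixed x and ε we can find the N the definition requires.

```python
def f_n(n, x):
    # the sequence of functions fn(x) = x^n
    return x ** n

# pointwise limit at x < 1 is f(x) = 0; find N such that |fn(x) - 0| < ε for n ≥ N
x, eps = 0.9, 1e-6
N = 1
while f_n(N, x) >= eps:
    N += 1
print(N, f_n(N, x))
```

Note that N depends on x: closer to 1, a much larger N is needed, which is why this convergence is pointwise but not uniform.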
16
Q

Sequence of Estimators of Parameters

A

-let θi^ be an estimator for θ based on i observations
-increasing the sample size we get a sequence of estimators:
{θi^} = {θ1^,θ2^,…,θn^,…}
-what can we say about {θi^} as n->∞ ?
-what is the probability that θn^ differs from θ?
-what is the probability that {θi^} converges to θ?

17
Q

Convergence in Probability

A

-we say that the sequence θn^ converges in probability to θ if:
lim P( |θn^-θ|>ε) = 0 for any ε>0
-where the limit is taken as n->∞
-here ε is a tolerance parameter: it tells us the error we are making when we use the estimate θn^ instead of the true value θ
-note that this is not the same as saying that every realised sequence θi^ tends to θ as n->∞

18
Q

Convergence in Distribution

A

-θn^ with cdf Fn(x) converges in distribution to random variable X with cdf F(x) if:
lim |Fn(x) - F(x)| = 0
-for all x∈ℝ where F(x) is continuous
-limit taken as n->∞
-note that this says that the cdfs are converging and tells us NOTHING about the convergence of the underlying random variables

19
Q

Central Limit Theorem

A

-let X1,X2,…,Xn be a sequence of independent and identically distributed random variables with mean μ and variance σ²
-let Xi have cdf P(Xi≤x)=F(x) and moment generating function M(t)=E(e^(tXi))
-let Sn=ΣXi, then:
lim P{ (Sn-nμ)/(σ√n) ≤ x} = 1/√(2π) ∫ exp(-u²/2) du
-where the limit is taken as n->∞ and the integral is from -∞ to x
-note that the RHS is the cdf of a standard normal
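A simulation sketch of the theorem (n, the distribution, and the evaluation point are made-up choices): standardise sums of Uniform(0,1) variables and compare the frequency of {Z ≤ x} with the standard normal cdf Φ(x).

```python
import math
import random

random.seed(5)

def std_normal_cdf(x):
    # Φ(x) via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Xi ~ Uniform(0,1): μ = 1/2, σ² = 1/12
mu, sigma, n = 0.5, math.sqrt(1.0 / 12.0), 100
trials = 20_000
x = 1.0

count = 0
for _ in range(trials):
    s = sum(random.random() for _ in range(n))
    z = (s - n * mu) / (sigma * math.sqrt(n))   # standardised sum
    if z <= x:
        count += 1

print(count / trials)          # ≈ Φ(1) ≈ 0.84
print(std_normal_cdf(x))
```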

20
Q

Propositions for Proof of the Central Limit Theorem

A

1) let Fn be a sequence of cdfs with corresponding MGFs Mn
let F be a cdf with MGF M
if Mn(t)->M(t) as n->∞ for all t in some interval, then Fn(x)->F(x) for all x where F is continuous
2) suppose an->a as n->∞, then lim (1 + an/n)^n = e^a
3) suppose M(t) is the moment generating function of some random variable X, then M(0)=1