Prob & Measure Flashcards

1
Q

(i) Define a sigma-algebra and a measure.
(ii) Define a pi-system and a d-system.
(iii) State and prove Dynkin’s Lemma

A

(i) A sigma-algebra is a collection of subsets that contains the whole set and is closed under complementation and countable unions. A measure is a non-negative, countably additive set function that assigns zero to the empty set.

(ii) A pi-system contains the empty set and is closed under (finite) intersections.
A d-system contains the whole set, is closed under differences (a\b for b a subset of a) and countable increasing unions.

(iii) Dynkin's Lemma: if A is a pi-system and D is a d-system containing A, then D contains sigma(A).

Proof:
1. Note that intersecting d-systems preserves d-systems, so we may replace D by the smallest d-system containing A.
2. Let D_1 = {a in D : a intersect b in D for all b in A}. Then D_1 is a d-system containing A, hence D_1 contains D, so D = D_1.
3. Let D_2 = {a in D : a intersect b in D for all b in D}. Then D_2 is a d-system for the same reasons as D_1, and by step 2 it contains A. So D = D_2.
4. So D is closed under intersections, i.e. D is both a pi-system and a d-system, hence a sigma-algebra. Thus sigma(A) is contained in D, and in particular in the original d-system.
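On a finite set, the lemma can be sanity-checked by brute force. A sketch (the ground set and the pi-system below are illustrative choices; on a finite set increasing unions add nothing, since a chain's union is its top element, so the d-closure only needs differences):

```python
def d_closure(E, family):
    # Smallest d-system containing `family`: contains the whole set E and
    # is closed under differences a \ b for b a subset of a.
    d = {frozenset(E)} | {frozenset(a) for a in family}
    while True:
        new = {a - b for a in d for b in d if b <= a} - d
        if not new:
            return d
        d |= new

def sigma_closure(E, family):
    # Smallest sigma-algebra containing `family`: close under complements
    # and (finite = countable here) unions until nothing new appears.
    E = frozenset(E)
    s = {E, frozenset()} | {frozenset(a) for a in family}
    while True:
        new = ({E - a for a in s} | {a | b for a in s for b in s}) - s
        if not new:
            return s
        s |= new

# A pi-system on E = {0,1,2,3}: all pairwise intersections equal {1}.
E = {0, 1, 2, 3}
pi = [{0, 1}, {1, 2}, {1}]
```

Here `d_closure(E, pi) == sigma_closure(E, pi)`, exactly as Dynkin's Lemma predicts.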
2
Q

Applications of Dynkin’s:

(i) Show that if (E, E) is a measurable space and mu_1, mu_2 are finite measures agreeing on a pi-system that generates E (with mu_1(E) = mu_2(E)), then mu_1 and mu_2 agree everywhere.
(ii) Show that if A is a pi-system, X is an r.v., and A and X are indep, then sigma(A) and X are indep.

A
3
Q

State and prove Kolmogorov’s Zero-One Law

A

Let X_n, n >= 1, be indep. Let T_n = sigma(X_{n+1}, X_{n+2}, ...) and T = intersect_n T_n (the tail sigma-algebra). Then every A in T has P(A) = 0 or 1.

Proof: We prove A is indep of itself. Then P(A) = P(A and A) = P(A)^2, so P(A) is 0 or 1.

Let F_n = sigma(X_1, ..., X_n), so that F_n and T_n are indep; since T is contained in T_n, F_n and T are indep. This holds for all n, hence F = union_n F_n and T are indep. Note that F is a pi-system, so T is indep of the sigma-algebra F generates, namely F_infinity = sigma(X_1, X_2, ...) (the sets indep of T form a d-system, so Dynkin applies). Finally, note that T is a subset of F_infinity. So T is indep of itself.

4
Q

(i) Prove P(A_n i.o.) = lim_{n->infinity} P(A_m for some m >= n) >= limsup_{n->infinity} P(A_n).
(ii) Let X_n be iid with mean 0 and variance sigma^2, and let S_n be the sum. Let A_n = {S_n/sqrt(n) >= K}. Show that P(A_n) >= c > 0 for n large enough.

(iii) Prove P(A_n i.o.) >= c. Deduce that P(A_n i.o.) = 1.
(iv) Deduce limsup S_n/sqrt(n) = infinity with probability 1.

A

(i) Let B_n = {A_m for some m >= n}. Then B_n decreases to intersect_n B_n = {A_n i.o.}, so P(A_n i.o.) = lim P(B_n) by continuity from above. Since P(B_n) >= P(A_n), the limit is >= limsup P(A_n).

(ii) Immediate from the Central Limit Theorem: P(S_n/sqrt(n) >= K) -> P(N(0, sigma^2) >= K) = c > 0.
(iii) Immediate from (i) and (ii): P(A_n i.o.) >= c > 0. But {A_n i.o.} is a tail event (changing X_1, ..., X_m shifts S_n by a constant, which is washed out by sqrt(n)), so Kolmogorov's Zero-One Law gives P(A_n i.o.) = 1.
(iv) By the same argument, P(limsup S_n/sqrt(n) >= K) = 1 for all K. Call this event B_K, K = 1, 2, 3, ... Note that B_K is decreasing in K. So P(limsup S_n/sqrt(n) = infinity) = lim_K P(B_K) = lim 1 = 1.
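A quick Monte Carlo check of (ii) (the +/-1 step distribution, sample sizes, and seed are illustrative choices, not from the card): P(S_n/sqrt(n) >= K) stabilises near the Gaussian tail rather than vanishing.

```python
import math
import random

def tail_estimate(n, K, trials=5000, seed=1):
    """Estimate P(S_n / sqrt(n) >= K) for iid +/-1 steps (mean 0, variance 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n))  # S_n
        if s / math.sqrt(n) >= K:
            hits += 1
    return hits / trials
```

By the CLT, for K = 1 the limit is 1 - Phi(1), roughly 0.159, so the estimate stays bounded away from 0 as n grows.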

5
Q

(i) Let f_n, f integrable, with mu( |f_n|) –> mu(|f|) and f_n –> f pointwise. Prove that mu(f_n+)–> mu(f+) and mu(f_n-)–>mu(f-).
(ii) If f,f_n >=0 are integrable, f_n–>f pointwise, and mu(f_n)–>mu(f), prove mu(|f-f_n|)–>0
(iii) If f,f_n are integrable, f_n–>f pointwise, and mu(|f_n|) –> mu(|f|), then mu(|f-f_n|) –> 0

A

(i) By Fatou,
mu(f+) = mu(liminf f_n+) <= liminf mu(f_n+).
Also |f_n| - f_n+ = f_n- -> f-, so by Fatou again,
mu(f-) = mu(liminf (|f_n| - f_n+)) <= liminf mu(|f_n| - f_n+) = mu(|f|) - limsup mu(f_n+),
i.e. limsup mu(f_n+) <= mu(|f|) - mu(f-) = mu(f+).

Thus mu(f_n+) -> mu(f+), and then mu(f_n-) = mu(|f_n|) - mu(f_n+) -> mu(|f|) - mu(f+) = mu(f-).

(ii) Have f + f_n - |f - f_n| >= 0, so by Fatou
2 mu(f) = mu(liminf (f + f_n - |f - f_n|)) <= liminf mu(f + f_n - |f - f_n|) = 2 mu(f) - limsup mu(|f - f_n|).
Hence limsup mu(|f - f_n|) <= 0, as required.

(iii) |f|+|f_n|-|f-f_n| >=0, use Fatou.

6
Q

What do {A_n i.o.} and {A_n ev.} mean formally?

State the Borel-Cantelli Lemma, versions 1 and 2.

A

{A_n i.o.} = intersection_m union_{n >= m} A_n
{A_n ev.} = union_m intersection_{n >= m} A_n.

Borel-Cantelli version 1:
If sum_{n>=1} P(A_n) < infinity, then P(A_n i.o.) = 0.

Borel-Cantelli version 2:
If the A_n are indep and sum_{n>=1} P(A_n) = infinity, then P(A_n i.o.) = 1.
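A finite-horizon simulation contrasting the two versions (illustrative parameters, not a proof): with P(A_n) = 1/n^2 (summable) only finitely many events ever occur, while with P(A_n) = 1/n (divergent sum, independent events) occurrences keep accumulating like log N.

```python
import random

def count_events(prob, N, seed=0):
    """Number of the independent events A_1..A_N that occur, P(A_n) = prob(n)."""
    rng = random.Random(seed)
    return sum(rng.random() < prob(n) for n in range(1, N + 1))

few  = count_events(lambda n: 1.0 / n**2, 10**5)  # summable tail: a.s. finitely many
many = count_events(lambda n: 1.0 / n,    10**5)  # divergent sum: a.s. infinitely many
```

The expected counts are about pi^2/6 for the first and about log(10^5), roughly 11.5, for the second.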

7
Q

Show that if X_n iid N(0,1), then limsup (X_n/sqrt(2logn))=1 a.s.

A

Fix eps > 0 and let A_n = {X_n > (1+eps) sqrt(2 log n)}.

The standard Gaussian density is
f(x) = 1/sqrt(2 pi) e^(-x^2/2).
With t = (1+eps) sqrt(2 log n) (so t >= 1 for n >= 2),
P(A_n) = 1/sqrt(2 pi) integral_t^infinity e^(-x^2/2) dx <= 1/sqrt(2 pi) integral_t^infinity x e^(-x^2/2) dx = 1/sqrt(2 pi) e^(-t^2/2) = 1/sqrt(2 pi) n^(-(1+eps)^2),

which is summable. So P(A_n i.o.) = 0 by Borel-Cantelli 1, hence P(limsup X_n/sqrt(2 log n) > 1 + 1/m) = 0 for each m. These events increase (in m) to {limsup X_n/sqrt(2 log n) > 1}, so that event also has probability 0.

For the other side, bound the tail from below by integrating over the interval [t, t+1] only: with t = (1-eps) sqrt(2 log n),
P(X_n > t) >= 1/sqrt(2 pi) e^(-(t+1)^2/2) = c n^(-(1-eps)^2) e^(-(1-eps) sqrt(2 log n)), c = e^(-1/2)/sqrt(2 pi),
and this sums to infinity since (1-eps)^2 < 1 and the last factor decays slower than any power of n. The X_n are indep, so Borel-Cantelli 2 gives X_n > (1-eps) sqrt(2 log n) i.o. a.s., i.e. limsup X_n/sqrt(2 log n) >= 1 - eps for every eps.
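Numerically (a sketch; the sample size and seed are arbitrary choices), the maximum of n iid standard normals already hugs sqrt(2 log n), consistent with the limsup being 1:

```python
import math
import random

def scaled_max(n, seed=3):
    """Max of n iid standard normals, divided by sqrt(2 log n)."""
    rng = random.Random(seed)
    return max(rng.gauss(0.0, 1.0) for _ in range(n)) / math.sqrt(2 * math.log(n))
```

For n = 10^5 the ratio is typically around 0.9, drifting slowly towards 1 as n increases.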

8
Q

Let B a borel subset of [0,1]. Show for all eps > 0, there exists A=(a_1,b_1] u (a_2,b_2] u … u (a_n, b_n] disjoint intervals such that the Lebesgue measure of (A xor B) is less than eps. Show this remains true for every Borel set in R of finite Lebesgue measure.

A

Apply Dynkin. Let D be the set of all Borel B in [0,1] satisfying the above. D contains the pi-system of finite unions of disjoint intervals (a,b] (such a set approximates itself), and these generate the Borel sigma-algebra on [0,1], so it suffices to prove D is a d-system. Throughout, use that X xor Z is a subset of (X xor Y) u (Y xor Z), and
mu(X xor Y) <= mu(X u Y) <= mu(X) + mu(Y).
(D contains [0,1], since mu([0,1] xor (0,1]) = 0.)

(i) Differences: let B_1, B_2 in D with B_1 a subset of B_2, and let A_1, A_2 be eps-approximations of B_1, B_2. Since B_2\B_1 = B_1 xor B_2,
mu((A_1 xor A_2) xor (B_2\B_1)) = mu((A_1 xor B_1) xor (A_2 xor B_2)) <= 2 eps,
and A_1 xor A_2 is again a finite union of disjoint intervals of the form (a,b].

(ii) Increasing unions: let B_n in D increase to B. Then mu(B_n) -> mu(B), so fix n with mu(B\B_n) < eps, and let A be an eps-approximation of B_n. Then
mu(A xor B) <= mu(A xor B_n) + mu(B\B_n) < 2 eps.

For a Borel set B in R of finite Lebesgue measure, mu(B \ [-N, N]) -> 0 as N -> infinity. So choose N with mu(B \ [-N, N]) < eps and approximate B intersect [-N, N] as above (the [0,1] argument works on any bounded interval). So done.

9
Q

Let alpha_n be reals such that sum alpha_n^2=sigma^2 < infinity.

Let X_n iid N(0,1). Show that Y_n = sum_{1 <= i <= n} alpha_i * X_i converges in L^2 to some Y.

Find the distribution of Y.

A

E[(Y_{n+r} - Y_n)^2] = alpha_{n+1}^2 + ... + alpha_{n+r}^2 -> 0 as n -> infinity, uniformly in r (tail of a convergent series). So (Y_n) is Cauchy in L^2; since L^2 is complete, Y_n -> Y in L^2.

We know that Y_n is normal with mean 0 and variance sigma_n^2 = alpha_1^2 + ... + alpha_n^2.

We know Y_n -> Y in L^2. Hence Y_n -> Y in probability. Indeed, P(|Y_n - Y| > eps) = P(|Y_n - Y|^2 > eps^2) <= E[(Y_n - Y)^2]/eps^2 -> 0.

Hence Y_n -> Y in distribution. But the distribution function is just

F_n(t) = 1/sqrt(2 pi sigma_n^2) integral_{-infinity}^{t} e^(-x^2/(2 sigma_n^2)) dx,

with sigma_n -> sigma. By dominated convergence, F_n(t) -> 1/sqrt(2 pi sigma^2) integral_{-infinity}^{t} e^(-x^2/(2 sigma^2)) dx, i.e., Y is N(0, sigma^2).
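A simulation sketch with the concrete choice alpha_i = 1/i, so sigma^2 = pi^2/6, about 1.645 (the truncation point, sample sizes, and seed are illustrative assumptions): the empirical mean and variance of Y match N(0, sigma^2).

```python
import random

def sample_Y(n_terms=500, trials=2000, seed=7):
    """Draw samples of Y_n = sum_{i <= n_terms} X_i / i, with X_i iid N(0,1)."""
    rng = random.Random(seed)
    return [sum(rng.gauss(0.0, 1.0) / i for i in range(1, n_terms + 1))
            for _ in range(trials)]

ys = sample_Y()
mean = sum(ys) / len(ys)
var = sum((y - mean) ** 2 for y in ys) / len(ys)
```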

10
Q

Show that L^p is Banach.

A
  1. Let (f_n) be Cauchy in L^p and pick a subsequence with ||f_{phi(n+1)} - f_{phi(n)}||_p < 2^{-n}.
  2. Using Monotone Convergence (and Minkowski for the partial sums), deduce || sum_n |f_{phi(n+1)} - f_{phi(n)}| ||_p <= 1.
  3. Deduce that the series converges absolutely a.e., and hence define f as the a.e. limit of f_{phi(n)} (measurable as an a.e. limit of measurable functions).
  4. Show that ||f_{phi(n)} - f||_p -> 0 using Fatou's lemma, and explain why f is in L^p (e.g. ||f||_p <= ||f_{phi(1)}||_p + 1).
  5. A Cauchy sequence with a convergent subsequence converges: ||f_n - f||_p <= ||f_n - f_{phi(m)}||_p + ||f_{phi(m)} - f||_p.
11
Q

Define what it means for X_n–>X in distribution (for real r.v.s) and for X_n–>X weakly. Show that they are equivalent.

Show that X_n –> X in distribution iff for all bounded cts f, E[f(X_n)]–>E[f(X)].

Skip these:

  1. Show that X_n –> X in prob ==> f(X_n) –> f(X) in prob for f cts. (Do not need unif cts!)
  2. State the bounded convergence theorem.
  3. Show that convergence in prob ==> convergence in distribution.
A

X_n –> X in distribution if F_{X_n}(x)–>F_X(x) for all continuity points of F_X.

X_n –> X weakly if E[f(X_n)]–>E[f(X)] for all bounded and cts f.

W ==> D

  1. Let X_n -> X weakly. If C is closed, then f_m(x) = max(0, 1 - m d(x,C)) is m-Lipschitz (hence cts) and bounded in [0,1]. Let C_m = f_m^{-1}((0,infinity)) = {x : d(x,C) < 1/m}. Observe that C_m decreases to precisely C, since C is closed, and P(X_n in C) <= E[f_m(X_n)] -> E[f_m(X)] <= P(X in C_m). Hence limsup P(X_n in C) <= inf_{m>=1} P(X in C_m) = P(X in C).
  2. Note that if U open, R\U is closed. Hence liminf P(X_n in U) >= P(X in U).
  3. Let a be a continuity point of F_X. By 2, liminf P(X_n <= a) >= liminf P(X_n < a) >= P(X < a) = P(X <= a); by 1, limsup P(X_n <= a) <= P(X <= a). Thus P(X_n <= a) -> P(X <= a).

D ==> W

  1. Let f bounded and cts, and wlog f has image in [0,1]. Only countably many x have P(f(X_n) = x) > 0 for some n or P(f(X) = x) > 0. Hence we may pick a_0 = -1 < a_1 < ... < a_M = 2 with a_{i+1} - a_i < 1/M and no a_i an atom of any f(X_n) or of f(X).
  2. |E[f(X)] - E[f(X_n)]| <= |E[f(X_n)] - sum_i a_i P(f(X_n) in [a_i, a_{i+1}))| + sum_i |a_i| |P(f(X_n) in [a_i, a_{i+1})) - P(f(X) in [a_i, a_{i+1}))| + |sum_i a_i P(f(X) in [a_i, a_{i+1})) - E[f(X)]| < 2/M + something converging to 0 as n -> infinity. So this can be made arbitrarily small.
12
Q

State and prove the Bounded Convergence Thm

A

Statement: let X_n be r.v.'s with |X_n| <= C and X_n -> X in prob. Then |X| <= C a.s., and X_n -> X in L^1.

Proof:

First, |X| <= C a.s. Indeed,
P(|X| > C + eps) <= P(|X - X_n| + |X_n| > C + eps) <= P(|X - X_n| > eps) + P(|X_n| > C) = P(|X - X_n| > eps) -> 0. So P(|X| > C + eps) = 0 for every eps > 0.

Then
E[|X_n - X|] <= E[|X_n - X| 1(|X_n - X| >= eps)] + eps <= 2C P(|X_n - X| >= eps) + eps,
so limsup_n E[|X_n - X|] <= eps for every eps > 0, i.e. X_n -> X in L^1.

13
Q

State Markov's inequality, and state a sharper version of it.

You were stuck on this problem:

Show that n^(-1/2) max_{k <= n} |X_k| –> 0 in prob, for X_n iid in L^2.

A

For X >= 0, P(X >= a) <= E[X]/a, since

a 1(X >= a) <= X. Sharper: a 1(X >= a) <= X 1(X >= a), so P(X >= a) <= E[X 1(X >= a)]/a.

If X in L^2(P), show that n P(|X| > eps sqrt(n)) -> 0 as n -> infinity:

n P(|X| > eps sqrt(n)) = n P(X^2 > eps^2 n) <= n E[X^2 1(X^2 > eps^2 n)]/(eps^2 n) = E[X^2 1(X^2 > eps^2 n)]/eps^2 -> 0

by dominated convergence. So, with p(n) = P(|X_1| > eps sqrt(n)) and using (1-p)^n >= 1 - np,
P(|X_k| > eps sqrt(n) for some k <= n) = 1 - (1 - p(n))^n <= n p(n) -> 0.
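The two bounds can be compared empirically (the Exp(1) distribution and threshold are arbitrary illustrative choices). Both inequalities hold termwise over the sample, so the empirical ordering is exact, not just approximate:

```python
import random

def markov_bounds(a, trials=10000, seed=5):
    """Empirical P(X >= a) vs the sharper bound E[X 1{X>=a}]/a vs E[X]/a,
    for X ~ Exp(1) (so E[X] = 1 and P(X >= a) = e^{-a})."""
    rng = random.Random(seed)
    xs = [rng.expovariate(1.0) for _ in range(trials)]
    frac  = sum(x >= a for x in xs) / trials             # P(X >= a)
    sharp = sum(x for x in xs if x >= a) / trials / a    # E[X 1{X >= a}]/a
    crude = sum(xs) / trials / a                         # E[X]/a
    return frac, sharp, crude
```

At a = 2 the true values are roughly 0.135 <= 0.203 <= 0.5, showing how much the sharper version gains.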
14
Q

(i) Define a measure preserving map.
(ii) Define an invariant subset and show they form a sigma-algebra E_{theta}.
(iii) Define an invariant function. How does it relate to E_{theta}?
(iv) Define an ergodic measure-preserving map.

A
15
Q

State but don’t prove Birkhoff’s Ergodic Theorem and Von Neumann’s Ergodic Theorem. IT IS NOT EXAMINABLE THANK THE GODS HOLY SHIT

A

Birkhoff: let E be a sigma-finite measure space, theta measure preserving, and f integrable. Then there exists an invariant function f^bar such that mu(|f^bar|) <= mu(|f|) and S_n(f)/n -> f^bar a.e., where S_n(f) = f + f o theta + ... + f o theta^{n-1}.

Von Neumann: let E be a finite measure space, p in [1, infinity), and f in L^p. Then there is f^bar in L^p such that S_n(f)/n -> f^bar in L^p.
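A concrete instance (the rotation, the observable f(x) = x, and the starting point are all illustrative choices, not part of the statement): for the irrational rotation theta(x) = (x + alpha) mod 1, which is ergodic for Lebesgue measure on [0,1), the Birkhoff averages converge to the space average, integral_0^1 x dx = 1/2.

```python
import math

def birkhoff_average(n, x0=0.1, alpha=math.sqrt(2) - 1):
    """S_n(f)/n for f(x) = x along the orbit of the rotation theta(x) = (x + alpha) mod 1."""
    total, x = 0.0, x0
    for _ in range(n):
        total += x
        x = (x + alpha) % 1.0
    return total / n
```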

16
Q

Show that if X, Y are indep and X = Y a.s., then X is a.s. constant.

Show that if f >= 0 and E[f(X)] = 0, then P(f(X) = 0) = 1.

A
17
Q

2013 P1. If X_n is iid uniform in [0,1), and Y is an r.v. indep of X_n, then let Z_n = (X_n+Y) mod 1. What is the distribution of Z_1?

Show that Z_n are indep.

A
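As a sanity check (the choice Y ~ Exp(1), the bin count, and the sample size are all illustrative assumptions, not part of the card), simulation is consistent with Z_1 being uniform on [0,1) whatever the law of Y:

```python
import random

def uniformity_gap(bins=10, trials=50000, seed=4):
    """Max deviation of the bin frequencies of Z = (X + Y) mod 1 from 1/bins,
    with X ~ U[0,1) and Y ~ Exp(1) independent."""
    rng = random.Random(seed)
    counts = [0] * bins
    for _ in range(trials):
        z = (rng.random() + rng.expovariate(1.0)) % 1.0
        counts[int(z * bins)] += 1
    return max(abs(c / trials - 1.0 / bins) for c in counts)
```

The gap is at the Monte Carlo noise level, as it would be for exactly uniform samples.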
18
Q

Show that if |X| <= 1 and |E[X]| = 1 (X complex-valued), then X = e^{i theta} a.s., where e^{i theta} = E[X].

Deduce that if the characteristic function of X has modulus 1 at some u, then uX in theta + 2 pi Z a.s. for some theta.

A

Proof:
Write E[X] = e^{i theta}. Then 1 - Re(e^{-i theta} X) >= 0 and E[1 - Re(e^{-i theta} X)] = 1 - Re(e^{-i theta} E[X]) = 0. Hence 1 - Re(e^{-i theta} X) = 0 a.s., i.e. Re(e^{-i theta} X) = 1 a.s. Since |X| <= 1, this forces e^{-i theta} X = 1 a.s.

19
Q

Show that convergence a.e. ==> convergence in measure in a finite measure space.
Show that this is false in general.

A

Let A_n = {|f_n - f| <= eps}. Almost every point lies in A_n eventually, so mu(liminf A_n) = mu(E), and by Fatou for sets mu(E) = mu(liminf A_n) <= liminf mu(A_n). So mu(A_n) -> mu(E). Since mu(E) < infinity, mu(|f_n - f| > eps) = mu(E) - mu(A_n) -> 0.

Counterexample in infinite measure: take N with the counting measure and f_n = 1_{x >= n}. Then f_n -> 0 everywhere, but mu(|f_n| > eps) = infinity for all 0 < eps < 1.

20
Q

Show that convergence in measure ==> convergence a.e. of a subsequence.

A

Let f_n -> f in measure. Pick phi(n) increasing so that mu(|f_{phi(n)} - f| > 1/n) < 1/2^n.

Goal: apply Borel-Cantelli 1, which holds on general measure spaces. Since sum_n 2^{-n} < infinity,
mu({x : f_{phi(n)}(x) does not converge to f(x)}) <= mu({|f_{phi(n)} - f| > 1/n i.o.}) = 0.

21
Q

Show that convergence in Prob ==> conv in distribution.

A

Let X_n -> X in prob. If C is a closed set in R, let C_m = {x : d(x, C) <= 1/m}, which decreases to C. Then P(X_n in C) <= P(|X - X_n| > 1/m) + P(X in C_m), so limsup_n P(X_n in C) <= P(X in C_m) for every m, hence <= P(X in C). By the closed-set criterion, X_n -> X in distribution.

22
Q

Define what it means to be UI.

Show that L1 convergence ==> Convergence in Prob and UI of the sequence. State this more precisely.

A

A family Chi of L^1 r.v.'s is UI if it is bounded in L^1 and I_delta -> 0 as delta -> 0, where I_delta = sup {E[|X| 1_A] : X in Chi, P(A) < delta}.

Precise statement: let X_n, X be in L^1 with X_n -> X in L^1. Then {X_n} is UI, and X_n -> X in prob.

Proof:

  1. P(|X_n - X| > eps) <= E[|X_n - X|]/eps -> 0, by Markov.
  2. {X_n} is bounded in L^1, since E[|X_n|] <= E[|X_n - X|] + E[|X|].
  3. Let eps > 0 and pick N with E[|X - X_n|] < eps for n > N. The finite family {X, X_1, ..., X_N} is UI. And for n > N, E[|X_n| 1_A] <= E[|X_n - X|] + E[|X| 1_A] < eps + eps once P(A) is small. So done.
23
Q

Define what it means to be UI (pick the one most useful to this question).

Show that if X_n UI and conv to X in prob, then X_n –> X in L1.

A

A family is UI if sup_X E[|X| 1_{|X| > K}] -> 0 as K -> infinity.

  1. X is in L^1 by Fatou's, taking a subsequence of X_n converging to X a.e. (possible since X_n -> X in prob) and using boundedness of {X_n} in L^1.
  2. Let X_{n,K} = max(-K, min(K, X_n)), the truncation of X_n to [-K, K], and X_K the truncation of X. Then X_{n,K} -> X_K in probability, hence by bounded convergence X_{n,K} -> X_K in L^1.
  3. E[|X_n - X|] <= E[|X_n - X_{n,K}|] + E[|X - X_K|] + E[|X_{n,K} - X_K|].

But sup_n E[|X_n - X_{n,K}|] <= sup_n E[|X_n| 1_{|X_n| > K}] -> 0 as K -> infinity, by UI.
Similarly, X is in L^1, so {X} is UI, so E[|X - X_K|] -> 0 as K -> infinity.
So we are done.

i.e., pick eps > 0. Then pick K so that the first and second terms are small. Then pick N so that the third is small.
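The UI hypothesis is genuinely needed. A standard counterexample, checked by simulation (sample sizes and seed are illustrative): X_n = n 1{U < 1/n} with U uniform converges to 0 in probability, yet E[X_n] = 1 for every n, so there is no L^1 convergence, and the family is not UI.

```python
import random

def mean_and_tail(n, trials=20000, seed=11):
    """For X_n = n * 1{U < 1/n}, U ~ U(0,1): estimate E[X_n] and P(X_n > 0)."""
    rng = random.Random(seed)
    us = [rng.random() for _ in range(trials)]
    mean = sum(float(n) for u in us if u < 1.0 / n) / trials  # stays near 1
    tail = sum(u < 1.0 / n for u in us) / trials              # -> 0 with n
    return mean, tail
```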

24
Q

Show that boundedness in Lp implies UI, for p>1. Show also that finite families in L1 are UI.

A

If E[|X|^p] <= M for all X in the family, then by Holder (1/p + 1/q = 1), E[|X| 1_A] <= E[|X|^p]^{1/p} P(A)^{1/q} <= M^{1/p} P(A)^{1/q} -> 0 as P(A) -> 0.

For a single X in L^1: E[|X| 1_A] <= E[|X| 1_{|X| > K}] + K P(A); the first term is small for K large (dominated convergence), and then the second is small for P(A) small. A finite family takes the maximum over its members.

25
Q

Define UI and show that it is equivalent to sup_X E[|X|1_{|X|>K}] –> 0 as K–>infinity

A

Let Chi be a UI family (in the I_delta sense), bounded by M in L^1. Then P(|X| > K) <= E[|X|]/K <= M/K, so taking A = {|X| > K} gives E[|X| 1_{|X| > K}] <= I_delta for any delta > M/K; let K -> infinity.

Now let the RHS hold. Given eps > 0, there is some K so that the sup is < eps/2. Then, if A is an event,

E[|X| 1_A] = E[|X| 1_{A, |X| > K}] + E[|X| 1_{A, |X| <= K}] < eps/2 + K P(A). So choose P(A) < eps/(2K).

DO NOT FORGET TO SHOW THAT CHI IS BOUNDED!! IT IS TRIVIAL HOWEVER.

E[|X|] <= K + E[|X| 1_{|X|>K}], so done.

26
Q

State and prove Slutsky’s theorem

A
27
Q

Show that in L2, if f_n –> f a.e., then ||f_n||–>||f|| iff ||f_n-f||–>0.

A

<== holds in any normed space by the triangle inequality, i.e., | ||f_n|| - ||f|| | <= ||f - f_n||.

==> By the parallelogram identity, ||f_n - f||^2 + ||f_n + f||^2 = 2||f||^2 + 2||f_n||^2. Since f_n + f -> 2f a.e., Fatou gives liminf ||f_n + f||^2 >= ||2f||^2 = 4||f||^2. Hence
limsup ||f_n - f||^2 <= 2||f||^2 + 2 lim ||f_n||^2 - 4||f||^2 = 0.
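The parallelogram identity itself is easy to sanity-check numerically in R^n (the dimension and random vectors are arbitrary illustrative choices):

```python
import random

def parallelogram_gap(dim=50, seed=2):
    """| (||f-g||^2 + ||f+g||^2) - (2||f||^2 + 2||g||^2) | for random vectors."""
    rng = random.Random(seed)
    f = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    g = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    sq = lambda v: sum(x * x for x in v)  # squared Euclidean norm
    lhs = sq([a - b for a, b in zip(f, g)]) + sq([a + b for a, b in zip(f, g)])
    rhs = 2 * sq(f) + 2 * sq(g)
    return abs(lhs - rhs)
```

The gap is zero up to floating-point rounding, for any choice of vectors.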