BMSC Theory Flashcards
Define one-dim Brownian motion.
Define Brownian bridge.
Def 1.1:
We say that the real-valued process (B_t)_{t>=0} is a one-dim Brownian motion started from 0 if:
* B_0=0 almost surely, and for all t>=0, the law of B_t is N(0,t) (centered Gaussian w. variance t).
* For every positive integer k and all 0<=t_1<…<t_k, the k increments B_{t_1}-B_0, B_{t_2}-B_{t_1},…,B_{t_k}-B_{t_{k-1}} are independent random variables.
* For each t>=0 and h>0, the law of B_{t+h}-B_t is the same as the law of B_h-B_0.
* There exists a measurable set A with probability 1, s.t. for all w in A, the map t->B_t(w) is continuous on R_+.
Brownian bridge: the process (B_t - t*B_1)_{t in [0,1]}; equivalently, the centered Gaussian process on [0,1] with covariance min(s,t)-s*t (BM "conditioned" to return to 0 at time 1).
page 22, definition 3.7
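The definition suggests a direct way to simulate BM on a finite grid via independent stationary Gaussian increments. A minimal sketch (not from the notes; the helper name `sample_bm`, grid size, and seeds are arbitrary choices):

```python
import numpy as np

def sample_bm(T=1.0, n=1_000, rng=None):
    """Sample B on the grid k*T/n, k=0..n, via independent N(0, T/n) increments."""
    rng = np.random.default_rng(rng)
    increments = rng.normal(0.0, np.sqrt(T / n), size=n)
    return np.concatenate([[0.0], np.cumsum(increments)])  # enforces B_0 = 0

B = sample_bm(T=1.0, n=1_000, rng=0)
# empirical sanity check: Var(B_1) over many independent paths should be ~ 1
endpoints = np.array([sample_bm(rng=k)[-1] for k in range(2_000)])
```

This only realizes the finite-dimensional marginals on a grid; a.s. continuity of the limiting process is what Kolmogorov's criterion (below) provides.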
Prove two “std. prob. theory results” used in construction of BM:
- X,Y indep Gaussian r.v.s with var.s sig_X^2,sig_Y^2 then X+Y is a Gaussian r.v. with var sig_X^2+sig_Y^2
- X and Y indep centered Gaussians with the same variance sig^2, then X+Y and X-Y are two independent centered Gaussian r.v.s with variance 2*sig^2
- Hint: char function E[exp(i*lam*X)]=exp(-sig_X^2*lam^2/2), and by indep: E[exp(i*lam*(X+Y))]=E[exp(i*lam*X)]E[exp(i*lam*Y)]=exp(-(sig_X^2+sig_Y^2)*lam^2/2), the char function of N(0, sig_X^2+sig_Y^2).
- For the second claim, compute the joint char function:
E[exp(i(lam[X+Y]+mu[X-Y]))]=E[exp(i[lam+mu]X)exp(i[lam-mu]Y)]
=E[exp(i[lam+mu]X)]E[exp(i[lam-mu]Y)]
=exp(-sig^2([lam+mu]^2+[lam-mu]^2)/2)
=exp(-sig^2lam^2)exp(-sig^2mu^2)
=E[exp(ilam(X+Y))]E[exp(imu(X-Y))],
so the joint char function factorizes, i.e. X+Y and X-Y are independent, each centered Gaussian with variance 2*sig^2.
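A quick numerical illustration of the second result (sample size, seed, and sig are arbitrary choices): X+Y and X-Y should be centered with variance 2*sig^2 and uncorrelated, and for a jointly Gaussian pair uncorrelated means independent:

```python
import numpy as np

rng = np.random.default_rng(1)
sig = 1.5
X = rng.normal(0.0, sig, size=100_000)  # iid centered Gaussians, variance sig^2
Y = rng.normal(0.0, sig, size=100_000)
S, D = X + Y, X - Y

var_S, var_D = S.var(), D.var()   # both should be ~ 2*sig^2 = 4.5
corr = np.corrcoef(S, D)[0, 1]    # should be ~ 0
```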
STATE LAST PART OF PROOF OF KOLMOGOROV, i.e. starting from |f_n-f_{n-1}| leq 2^{-n*gamma}
What is a continuous modification?
State and prove Kolmogorov’s continuity criterion.
Y is a cont. mod. of X if Y has a.s. continuous paths and for all t: [ X_t=Y_t a.s.]
pg. 17, Proposition 2.1
Hint: Borel–Cantelli, Markov inequality ( P(|X|>=a) <= E[phi(|X|)]/phi(a), where phi is mon. increasing and non-neg. )
Define centered Gaussian vector and centered Gaussian process.
Show that the law of a centered Gaussian vector X is completely characterized by its covariance matrix.
- A random vector (X1,…,Xn) in R^n is a centered Gaussian vector if any linear combination of the Xi is a centered Gaussian random variable.
- A stochastic process (X_t)_{t in I} is a centered Gaussian process if any finite-dimensional vector (X_{t_1},…,X_{t_p}) is a centered Gaussian vector.
- For any lam_1,…,lam_n in R, the char func E[exp(i*Sum_i lam_i*X_i)] is equal to E[exp(iY)] where Y=Sum_i lam_i*X_i is a centered Gaussian var.; it is thus equal to exp(-sig_Y^2/2), but sig_Y^2=E[Y^2]=Sum_{i,j} lam_i*lam_j*E[X_i*X_j] is determined by the covar matrix.
Show that BM is a Gaussian process.
Indeed, for any t_1<…<t_p, the vector (B_{t_1},…,B_{t_p}) is a linear function of the indep centered Gaussian vars (B_{t_1}, B_{t_2}-B_{t_1},…,B_{t_p}-B_{t_{p-1}}) and is thus a centered Gaussian vector. Since for all t>=0 and h>=0, E[B_t*B_{t+h}]=E[B_t^2]+E[B_t(B_{t+h}-B_t)]=t+E[B_t]E[B_{t+h}-B_t]=t, the covar matrix has entries Sig_B(s,t)=min(s,t).
BTW: This characterises BM: BM is a real-valued centered Gaussian process (B_t)_{t>=0} with covar function Sig_B(s,t)=min(s,t), s.t. the map t->B_t is almost surely continuous.
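The covariance formula E[B_s B_t]=min(s,t) can be checked empirically (a sketch; grid, sample count, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, T = 20_000, 100, 1.0
dt = T / n_steps
# rows = paths; column k holds B at time (k+1)*dt
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

s_idx, t_idx = 29, 79                          # times s = 0.3, t = 0.8
cov_st = np.mean(B[:, s_idx] * B[:, t_idx])    # empirical E[B_s B_t] ~ min(s,t) = 0.3
```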
State and prove invariance under time reversal, scale invariance, and inversion invariance.
page 21, Proposition 3.4
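Scale invariance says (c^{-1} B_{c^2 t})_{t>=0} is again a BM for any c>0. A minimal numerical illustration, checking only the one-dimensional marginal at a fixed time (values of c, t, sample size, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
c, t = 2.0, 0.7
# sample B_{c^2 t} directly (centered Gaussian with variance c^2 * t), then rescale
B_c2t = rng.normal(0.0, np.sqrt(c**2 * t), size=100_000)
scaled = B_c2t / c            # c^{-1} B_{c^2 t}
var_scaled = scaled.var()     # should be ~ Var(B_t) = t = 0.7
```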
State and prove Blumenthal's 0-1 law.
page 28, Proposition 4.3
Show that almost surely, limsup B_t/sqrt(t)=inf and liminf B_t/sqrt(t)=-inf (as t–>0).
page 29, Proposition 4.5
State the two versions of the Law of the iterated logarithm and the convergence results of B_t/sqrt(t) for t –> 0 or inf respectively.
[Prove the Laws of the iterated logarithm? Update!]
1) limsup B_t/sqrt(t) for t –> 0
2) liminf B_t/sqrt(t) for t –> 0
3) limsup B_t/sqrt(t) for t –> inf
4) liminf B_t/sqrt(t) for t –> inf
page 30, Remark 4.9
Sheet 3, ex 4
1) inf
2) -inf
3) inf
4) -inf
State and prove the Reflection Principle and its corollaries
Page 32, Proposition 4.13 and Corollary 4.14, Corollary 4.15, Corollary 4.18
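The key identity from the reflection principle, P(sup_{s<=t} B_s >= a) = 2 P(B_t >= a), can be illustrated by Monte Carlo (a sketch; sizes and seed are arbitrary, and the grid maximum slightly underestimates the continuous sup):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)
n_paths, n_steps, t, a = 10_000, 1_000, 1.0, 1.0
dt = t / n_steps
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

p_mc = np.mean(B.max(axis=1) >= a)                    # P(sup_{s<=t} B_s >= a), grid approx.
p_exact = 2 * (1 - 0.5 * (1 + erf(a / sqrt(2 * t))))  # 2 P(B_t >= a) via the normal cdf
```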
Define regular boundary points.
D open in R^d.
We say that x in Boundary(D) is a regular boundary point of D if, when one considers a BM B started from x, then almost surely inf{t>0: B_t not in D}=0. We say that D has a regular boundary for BM (or in short a regular boundary) if every boundary point is regular.
Define harmonic function and solutions of the Dirichlet problem.
page 36, definition 5.1, definition 5.2
Let D be a bounded open subset of R^d with compact boundary and f a continuous function defined on the boundary of D.
Prove that if the solution to the Dirichlet problem in D with boundary values f exists, then it is necessarily equal to the function U(x):=E_x[f(B_T)]
(B a BM started from x, T its exit time of D).
page 36, proposition 5.3
Granted (last prop, 5.3) that if the Dirichlet problem in bounded open D with boundary values f has a solution U, then U(x):=E_x[f(B_T)], prove: If all boundary points of the bounded domain D are regular, then (for any continuous function f on D's boundary) there exists a unique solution to the Dirichlet problem, and this solution is equal to the function U.
page 37, Theorem 5.5 (maybe clearer: lecture 7, part 1),
note that this came in Jonas’ exam and after doing the whole thing… the examiner said he only wanted the part after the bullet points…
Hints:
i,ii,iii
i uses the same idea as iii, and ii uses the Markov property
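As an illustration of U(x)=E_x[f(B_T)], here is a hedged Monte Carlo sketch (the function name, the unit-disk domain, the boundary data, step size, and seed are all my own choices, and the time discretization introduces a small exit bias). It uses f(x,y)=x^2-y^2, which is harmonic, so the exact solution inside the disk is f itself:

```python
import numpy as np

def dirichlet_mc(x0, n_paths=20_000, dt=1e-3, seed=5):
    """Monte Carlo estimate of U(x0) = E_x0[f(B_T)] in the unit disk,
    with test boundary data f(x, y) = x^2 - y^2 (harmonic => U = f inside)."""
    rng = np.random.default_rng(seed)
    pos = np.tile(np.asarray(x0, float), (n_paths, 1))
    alive = np.ones(n_paths, dtype=bool)
    while alive.any():
        pos[alive] += rng.normal(0.0, np.sqrt(dt), size=(alive.sum(), 2))
        alive &= (pos**2).sum(axis=1) < 1.0            # freeze paths after first exit
    pos /= np.linalg.norm(pos, axis=1, keepdims=True)  # project exit points onto the circle
    return np.mean(pos[:, 0]**2 - pos[:, 1]**2)

u_mc = dirichlet_mc((0.5, 0.2))   # exact value: 0.5^2 - 0.2^2 = 0.21
```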
Calculate P_x(T_r < inf) for x outside the closed ball of radius r, i.e. x in D := R^3 \ [ B(0,r) ∪ ∂B(0,r) ] (via the R –> inf limit of hitting probabilities in the annulus B(0,R) \ [ B(0,r) ∪ ∂B(0,r) ]).
P_x(T_r < inf)=r/||x||
page 40, section 5.3
Prove that in R^3, ||B_t|| –> inf as t –> inf.
page 41, Proposition 5.11
Prove that in R^d with d > 3, ||B_t|| –> inf as t –> inf.
page 41, proposition 5.12
Let B be a 2-dim BM(x).
Prove that B a.s. never hits 0 (for x != 0), i.e. T_0 = inf almost surely.
page 39, proposition 5.13
P_x(T_0 < inf) = 0
Prove that {B_t: t>=0} is a.s. dense in R^2 for planar BM.
What is its Lebesgue measure?
page 41, proposition 5.15
Lebesgue( {B_t: t>=0} )=0, Corollary 5.14
Define polar sets (for planar BM).
List some props/examples
Definition 5.20: We say that the compact set K is polar for BM if almost surely {B_t: t>=0} ∩ K = ∅.
- K in R^2 non-polar, then a.s. {t>0: B_t in K} is not empty (ex. sheets, check proof length. UPDATE!!)
- K singleton, then polar (because planar BM a.s. doesn’t visit specified points). Consequently, K countable, then polar (union of countably many events with P=0 has P=0)
- K with positive Lebesgue measure, then non-polar (because P(B_1 in K) > 0)
- If K is a segment [a,b] between distinct points, then it is non polar (because two indep BMs correspond to two coordinates of B parallel and orthogonal to b-a). (example of a non-polar Lebesgue=0 set).
- Cantor set non polar (proposition 5.21)
page 44
State the “cousins” of the Dirichlet problem and their solutions.
page 46, Proposition 5.22, remark 5.23, proposition 5.24
Check exercise sheet UPDATE (add excises of examples!!)
Note: Lap H = -k^2*H is called the Helmholtz equation.
Note: Lap H = f is called the Poisson equation.
State and prove Donsker’s theorem.
Refer to document BMpart1l.pdf
page 19, theorem 2.4,
page 48, theorem 6.1,
proof via coupling page 50, 6.2 Proof via coupling
(might ask proofs individually ie refers to them, he prefers to ask the first one ie proof via coupling)
Just Levy
What follows from Donsker’s theorem?
What is Donsker’s theorem useful for?
Useful for deriving the limiting distribution of continuous functionals of simple random walks in terms of continuous functionals of BM, e.g.
page 51, Proposition 6.5 (Paul Lévy): ( max_{s<=t} B_s - B_t )_{t>=0} has the same law as (|B_t|)_{t>=0}.
Exercise 3, sheet 5, especially part 1
https://math.stackexchange.com/questions/392042/an-application-of-donskers-theorem
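A concrete instance of this use of Donsker's theorem (a sketch; walk count, length, and seed are arbitrary choices): the scaled maximum of a simple random walk converges in law to sup_{[0,1]} B, which by the reflection principle has the law of |B_1|, so its mean should approach E|B_1| = sqrt(2/pi) ≈ 0.798:

```python
import numpy as np

rng = np.random.default_rng(6)
n_walks, n_steps = 10_000, 1_000
steps = rng.choice([-1, 1], size=(n_walks, n_steps))  # SRW increments
S = np.cumsum(steps, axis=1)

# max_{k<=n} S_k / sqrt(n) (including S_0 = 0); converges in law to sup_{[0,1]} B
scaled_max = np.maximum(S.max(axis=1), 0) / np.sqrt(n_steps)
mean_mc = scaled_max.mean()   # should approach sqrt(2/pi) ~ 0.798 (finite-n bias is downward)
```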
What do we always implicitly assume about our filtration? (ie “usual conditions”)
- F_0 contains all sets A' contained in measurable sets A (A in F) with P(A)=0 (completeness)
- The filtration is right-continuous, i.e. for all t>=0: F_t=Intersection( F_{t+eps} : eps>0 )
Define continuous martingales.
Name an example.
page 56, Definition 1.1: A process (M_t)_{t>=0} defined on this filtered probability space is said to be a continuous martingale with respect to the filtration (F_t)_{t>=0} if it is an L^1 process (i.e. each M_t in L^1(F_t,P)) that is adapted to this filtration, such that:
* For all t>=s: M_s=E[M_t|F_s] almost surely
* There exists an event of probability 1 such that on this event, t-->M_t is a continuous function on R_+
Example par excellence: BM page 56, Definition 1.1.
State Doob’s L^2 inequality for continuous martingales and briefly explain why it follows from the discrete case.
L^p inequality
Maximal inequality
page 57, proposition 1.2:
If (M_t)_{t>=0} is a continuous martingale and M_t in L^2 for some given t, then:
4*E[M_t^2] >= E[sup_{[0,t]} M_s^2].
Pf: the discrete Doob inequality for (M_{j*t*2^{-n}})_{0<=j<=2^n} gives 4*E[M_t^2] >= E[max_{j<=2^n} M_{j*t*2^{-n}}^2].
A.s. continuity implies max_{j<=2^n} M_{j*t*2^{-n}}^2 --> sup_{[0,t]} M_s^2 as n-->inf, so the claim follows by mon. conv.
L^p: (p/(p-1))^p * E[|M_t|^p] >= E[sup_{[0,t]} |M_s|^p]
((E[sup_{[0,t]} |M_s|^p] = lim E[max_{j<=2^n} |M_{j*t*2^{-n}}|^p]))
Maximal inequality: E[|M_t|] >= lam * P(sup_{[0,t]} |M_s| > lam)
((P[sup_{[0,t]} |M_s| > lam] = lim P[max_{j<=2^n} |M_{j*t*2^{-n}}| > lam], and lam*P[max > lam] <= E[|M_t| 1{max > lam}] <= E[|M_t|]))
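A numerical illustration of the L^2 inequality for BM (a sketch; sizes and seed are arbitrary, and the grid maximum underestimates the sup, which only makes the inequality easier to satisfy):

```python
import numpy as np

rng = np.random.default_rng(7)
n_paths, n_steps, t = 10_000, 1_000, 1.0
dt = t / n_steps
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

lhs = np.mean((B**2).max(axis=1))   # E[sup_{[0,t]} B_s^2] (grid approximation)
rhs = 4 * np.mean(B[:, -1]**2)      # 4 E[B_t^2] ~ 4t = 4
```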
Def. (X_i)_i uniformly integrable.
State implications between L^1-convergence, L^p boundedness, uniformly integrable
- Def: (X_i)_i is uniformly integrable if lim_{A-->inf} ( sup_i E[ |X_i| * 1{|X_i|>A} ] ) = 0
- (X_i)_i converges in L^1 iff converges in Prob and uniformly integrable.
- (X_i)_i bounded in L^p for some p>1 implies uniformly integrable
- (X_i)_i uniformly integrable implies bounded in L^1
Recall that bounded in L^p when sup_i E[|X_i|^p]< inf.
State convergence criteria for continuous martingale (M_t)_t.
Prop 1.2
- bounded in L^1 implies converges almost surely to some r.v. M_{inf} as t–>inf
- uniformly integrable implies converges almost surely and in L^1 as t–>inf to some r.v. M_{inf}, and M_t=E[M_{inf} | F_t] almost surely (in part. E[M_{inf}]=E[M_0])
- bounded in L^p for some p>1 implies converges almost surely and in L^p to some r.v. M_{inf}
Define stopping time.
The r.v. T with values in [0,inf] is said to be a stopping time for the filtration (F_t)_{t>=0} if for any positive t, the event {t>=T} is in F_t.
Only i) implies ii)!!
State the stopping time lemma (equivalence) and its corollary.
Prove? UPDATE
Page 57, lemma 1.10
in both (Optional stopping, bounded stopping times)
State the optional stopping theorem (simple version).
part 2: proposition 1.12, pg 58
2019: page 60, proposition 1.9
State and prove the full optional stopping theorem.
prop 1.13
page 61, proposition 1.10
Define the quadratic and exponential martingale and prove that they are martingales.
lemma 1.14
page 61, lemma 1.11
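For BM, the quadratic martingale is (B_t^2 - t)_t and the exponential martingale is (exp(lam*B_t - lam^2*t/2))_t. A quick numerical check of the constant-expectation consequence at one fixed time (this does not verify the full martingale property; lam, t, sample size, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(8)
t, lam, n = 1.0, 0.8, 200_000
B_t = rng.normal(0.0, np.sqrt(t), size=n)   # marginal law of BM at time t

mean_quad = np.mean(B_t**2 - t)                          # E[B_t^2 - t] = 0
mean_exp = np.mean(np.exp(lam * B_t - lam**2 * t / 2))   # E[exp(lam B_t - lam^2 t/2)] = 1
```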
Define a bounded martingale.
(M_t)_t is a bounded martingale if
- it is a martingale
- |M| is almost surely bounded by some deterministic constant C (i.e. almost surely for all t>=0, C>=|M_t|)
- For some deterministic integer K, M is almost surely constant on [K,inf) (i.e. almost surely for all t>=K, M_t=M_K)
BTW: It follows that each M_t is in L^2; by orthogonality of martingale increments, for t>=s: E[(M_t-M_s)^2 | F_s] = E[M_t^2 - M_s^2 | F_s] >= 0, so (M_t^2)_t is a submartingale.
How do we define quadratic variation?
- M bounded martingale, then there exists a unique non-decreasing process A (called the quadratic variation) s.t. A_0=0 and ((M_t)^2-A_t)_{t>=0} is an L^2-martingale. (page 63, Proposition 2.2)
- Equivalently: there exists a unique decomposition M_t^2 = 2*X_t + A_t with X an L^2 martingale, A non-decr., A_0=0 (both adapted).
- We simultaneously show that V_{delta_n} := Sum[ (M_{t^n_{i+1}}-M_{t^n_i})^2 ; i=0,…,m_n-1 ] converges in L^2 to A_t (page 66, proposition 2.5, still for bounded mart.s)
- We then prove in page 68, theorem 2.8 (quadratic variation for L^2-martingales): M L^2 mart. There exists a unique adapted contin. non-decr. pr. A with A_0=0 s.t. (M^2-A) is a mart.; furthermore, V_{delta_n} converges in probability to A.
- We then prove in page 70, Theorem 2.13 (Quadratic variation of local martingales) the same props, with convergence in prob., for local martingales, i.e.: There exists a unique adapted continuous non-decreasing process (A_t)_{t>=0} with A_0=0 s.t. ((M_t)^2-A_t)_{t>=0} is a local martingale. Furthermore, for all t>=0 and for all choices of nested sequences delta_n of subdivisions of [0,t] with |delta_n|–>0, V_{delta_n} converges in probability to A_t as n–>inf.
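For BM, the quadratic variation is A_t = t, so the sum of squared increments over a fine subdivision of [0,t] should be close to t. A minimal sketch on one sampled path (grid size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(9)
t, n = 1.0, 2**16
dB = rng.normal(0.0, np.sqrt(t / n), size=n)   # BM increments on a dyadic grid of [0,t]

qv_fine = np.sum(dB**2)                               # V_{delta_n} on the fine grid, ~ t = 1
qv_coarse = np.sum(dB.reshape(-1, 2).sum(axis=1)**2)  # same path on the twice-coarser grid
```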
Define local martingale.
A continuous adapted process (M_t)_{t>=0} is said to be a local martingale started from 0 (in some filtered prob space) if M_0=0 almost surely and if there exists a sequence tau_k of stopping times such that tau_k–>inf almost surely and such that for all k>=1, M^{tau_k} is a continuous martingale started from 0.
A continuous adapted process (M_t)_{t>=0} is said to be a local martingale if the process (M_t-M_0)_{t>=0} is a local martingale started from 0.