BMSC Theory Flashcards

1
Q

Define one-dim Brownian motion.

Define Brownian bridge.

A

Def 1.1.:
We say that the real-valued process (B_t)_{t>=0} is a one-dim Brownian motion started from 0 if:
* B_0 = 0 almost surely, and for all t>=0, the law of B_t is N(0,t) (centered Gaussian with variance t).
* For every positive integer k and all strictly increasing times t_1 < … < t_k, the k increments B_{t_1}-B_0, B_{t_2}-B_{t_1}, …, B_{t_k}-B_{t_{k-1}} are independent random variables.
* For each t>=0 and h>0, the law of B_{t+h}-B_t is the same as the law of B_h-B_0.
* There exists a measurable set A with probability 1, s.t. for all w in A, the map t -> B_t(w) is continuous on R_+.

page 22, definition 3.7
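The defining properties can be sanity-checked on a simulated discretized path; a minimal sketch (not from the script — step count, sample size and seed are arbitrary choices):

```python
import numpy as np

# Simulate N Brownian paths on [0, 1] from independent N(0, dt) increments,
# then check Var(B_t) ~ t and that disjoint increments are uncorrelated.
rng = np.random.default_rng(0)
N, n = 100_000, 100
dt = 1.0 / n
incs = rng.normal(0.0, np.sqrt(dt), size=(N, n))
B = np.cumsum(incs, axis=1)          # B[:, i] is B at time (i+1)*dt

var_half = B[:, n // 2 - 1].var()    # Var(B_{0.5}) ~ 0.5
var_one = B[:, -1].var()             # Var(B_1)    ~ 1.0
# Increments over [0.25, 0.5] and [0.75, 1.0] are disjoint, hence independent:
corr = np.corrcoef(B[:, 49] - B[:, 24], B[:, 99] - B[:, 74])[0, 1]
print(var_half, var_one, corr)
```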

2
Q

Prove two “std. prob. theory results” used in construction of BM:

  • X, Y independent Gaussian r.v.s with variances sig_X^2, sig_Y^2; then X+Y is a Gaussian r.v. with variance sig_X^2 + sig_Y^2
  • X, Y independent centered Gaussians with the same variance sig^2; then X+Y and X-Y are two independent centered Gaussian r.v.s with variance 2*sig^2
A
  • Hint: char. function E[exp(i lam X)] = exp(-sig_X^2 lam^2 / 2), and by independence E[exp(i lam (X+Y))] = E[exp(i lam X)] E[exp(i lam Y)] = exp(-(sig_X^2 + sig_Y^2) lam^2 / 2), the char. function of a centered Gaussian with variance sig_X^2 + sig_Y^2.
  • E[exp(i(lam[X+Y]+mu[X-Y]))] = E[exp(i[lam+mu]X) exp(i[lam-mu]Y)]
    = E[exp(i[lam+mu]X)] E[exp(i[lam-mu]Y)]
    = exp(-sig^2([lam+mu]^2+[lam-mu]^2)/2)
    = exp(-sig^2 lam^2) exp(-sig^2 mu^2)
    = E[exp(i lam(X+Y))] E[exp(i mu(X-Y))],
    so the joint char. function factorizes and X+Y, X-Y are independent.
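Both facts can be illustrated by Monte Carlo; a sketch (sigma values chosen arbitrarily for the demo):

```python
import numpy as np

# Fact 1: Var(X + Y) = sig_X^2 + sig_Y^2 for independent Gaussians.
rng = np.random.default_rng(1)
N = 500_000
X = rng.normal(0.0, 2.0, N)    # sig_X = 2
Y = rng.normal(0.0, 1.0, N)    # sig_Y = 1
var_sum = (X + Y).var()        # ~ 4 + 1 = 5

# Fact 2: same variance sig^2 = 2.25 => X+Y, X-Y independent, variance 2*sig^2 = 4.5.
X2 = rng.normal(0.0, 1.5, N)
Y2 = rng.normal(0.0, 1.5, N)
S, D = X2 + Y2, X2 - Y2
print(var_sum, S.var(), D.var(), np.corrcoef(S, D)[0, 1])
```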
3
Q

State the last part of the proof of Kolmogorov's criterion, i.e. starting from |f_n - f_{n-1}| <= 2^{-n*gamma}

What is a continuous modification?

State and prove Kolmogorov’s continuity criterion.

A

Y is a cont. mod. of X if for all t: [ X_t=Y_t a.s.]

pg. 17, Proposition 2.1

Hint: Borel-Cantelli, Markov ( P(|X|>=a) <= E[phi(|X|)]/phi(a) where phi is monotone increasing, non-negative )

4
Q

Define centered Gaussian vector and centered Gaussian process.
Show that the law of a centered Gaussian vector X is completely characterized by its covariance matrix.

A
  • A random vector (X1,…,Xn) in R^n is a centered Gaussian vector if any linear combination of the Xi is a centered Gaussian random variable.
  • A stochastic process (X_t)_{t in I} is a centered Gaussian process if any finite-dimensional vector (X_{t_1},…,X_{t_p}) is a centered Gaussian vector.
  • For any lam_1,…,lam_n in R, the char. function E[exp(i Sum_i lam_i X_i)] is equal to E[exp(iY)] where Y = Sum_i lam_i X_i is a centered Gaussian variable, and it is thus equal to exp(-sig_Y^2/2); but sig_Y^2 = E[Y^2] = Sum_{i,j} lam_i lam_j E[X_i X_j] is determined by the covariance matrix.
5
Q

Show that BM is a Gaussian process.

A

Indeed, for any t_1 <= … <= t_p the vector (B_{t_1},…,B_{t_p}) is a linear combination of the independent centered Gaussian variables (B_{t_1}, B_{t_2}-B_{t_1}, …, B_{t_p}-B_{t_{p-1}}) and is thus a centered Gaussian vector. Since for all t>=0 and h>=0, E[B_t B_{t+h}] = E[B_t^2] + E[B_t(B_{t+h}-B_t)] = t + E[B_t] E[B_{t+h}-B_t] = t, the covariance function is Sig_B(s,t) = min(s,t).
BTW: This characterises BM: BM is a real-valued centered Gaussian process (B_t)_{t>=0} with covariance function Sig_B(s,t) = min(s,t), s.t. the map t -> B_t is almost surely continuous.
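The covariance function min(s,t) can be checked empirically on simulated paths; a minimal sketch (grid and seed arbitrary):

```python
import numpy as np

# Empirical covariance E[B_s B_t] on a simulated grid vs min(s, t).
rng = np.random.default_rng(2)
N, n = 100_000, 50
dt = 1.0 / n
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(N, n)), axis=1)
s_idx, t_idx = 9, 39                      # times s = 0.2, t = 0.8
cov = np.mean(B[:, s_idx] * B[:, t_idx])  # should be ~ min(0.2, 0.8) = 0.2
print(cov)
```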

6
Q

State and prove invariance under time reversal, scale invariance, inversion invariance

A

page 21, Proposition 3.4

7
Q

State and prove Blumenthal's 0-1 law.

A

page 28, Proposition 4.3

8
Q

Show that almost surely, limsup B_t/sqrt(t) = +inf and liminf B_t/sqrt(t) = -inf (as t -> 0).

A

page 29, Proposition 4.5

9
Q

State the two versions of the Law of the Iterated Logarithm and the limsup/liminf behaviour of B_t/sqrt(t) for t -> 0 and t -> inf respectively.
[Prove the Laws of the Iterated Logarithm? Update!]

1) limsup B_t/sqrt(t) for t -> 0
2) liminf B_t/sqrt(t) for t -> 0
3) limsup B_t/sqrt(t) for t -> inf
4) liminf B_t/sqrt(t) for t -> inf

A

page 30, Remark 4.9
Sheet 3, ex 4

1) +inf
2) -inf
3) +inf
4) -inf

10
Q

State and prove the Reflection Principle and its corollaries

A

Page 32, Proposition 4.13 and Corollary 4.14, Corollary 4.15, Corollary 4.18

11
Q

Define regular boundary points.

A

Let D be open in R^d.
We say that x in Boundary(D) is a regular boundary point of D if, when one considers a BM B started from x, almost surely inf{t>0: B_t not in D} = 0. We say that D has a regular boundary for BM (or in short a regular boundary) if every boundary point is regular.

12
Q

Define harmonic function and solutions of the Dirichlet problem.

A

page 36, definition 5.1, definition 5.2

13
Q

Let D be a bounded open subset of R^d with compact boundary and f a continuous function defined on the boundary of D.
Prove that if the solution to the Dirichlet problem in D with boundary values f exists, then it is necessarily equal to the function U(x) := E_x[f(B_T)]
(where B is a BM started from x and T is its exit time from D).

A

page 36, proposition 5.3

14
Q
Granted (by the last Proposition 5.3) that if the Dirichlet problem in a bounded open D, equal to f on D's boundary, has a solution U, then U(x) := E_x[f(B_T)], prove:
If all boundary points of the bounded domain D are regular, then there exists (for any continuous function f on D's boundary) a unique solution to the Dirichlet problem, and this solution is equal to the function U.
A

page 37, Theorem 5.5 (maybe clearer: lecture 7, part 1),
note that this came in Jonas’ exam and after doing the whole thing… the examiner said he only wanted the part after the bullet points…

Hints:
i,ii,iii
i uses same idea as iii and ii uses MP

15
Q

Calculate P_x(T_r < inf) for x in D := B(0,R) \ [ B(0,r) ∪ ∂B(0,r) ] in R^3.

A

P_x(T_r < inf)=r/||x||

page 40, section 5.3

16
Q

Prove that in R^3, almost surely ||B_t|| -> inf as t -> inf.

A

page 41, Proposition 5.11

17
Q

Prove that in R^d with d > 3, almost surely ||B_t|| -> inf as t -> inf.

A

page 41, proposition 5.12

18
Q

Let B be a 2-dim BM(x).

Prove that B a.s. never hits 0 (for x != 0), i.e. T_0 = inf a.s.

A

page 39, proposition 5.13

P_x(T_0 < inf) = 0

19
Q

Prove that {B_t: t>=0} is a.s. dense in R^2 for planar BM.

Lebesgue measure?

A

page 41, proposition 5.15

Lebesgue( {B_t: t>=0} )=0, Corollary 5.14

20
Q

Define polar sets (for planar BM).

List some props/examples

A

Definition 5.20: We say that the compact set K is polar for BM if almost surely {B_t: t>=0} ∩ K = ∅.

  • K in R^2 non-polar: then a.s. {t>0: B_t in K} is not empty (ex. sheets, check proof length. UPDATE!!)
  • K a singleton: then polar (because planar BM a.s. doesn't visit specified points). Consequently, K countable: then polar (a union of countably many events with P=0 has P=0).
  • K of positive Lebesgue measure: then non-polar (because P(B_1 in K) is positive).
  • If K is a segment [a,b] between distinct points, then it is non-polar (because the coordinates of B parallel and orthogonal to b-a are two independent BMs). (Example of a non-polar set of Lebesgue measure 0.)
  • Cantor set non-polar (proposition 5.21)

page 44

21
Q

State the “cousins” of the Dirichlet problem and their solutions.

A

page 46, Proposition 5.22, remark 5.23, proposition 5.24
Check exercise sheet UPDATE (add exercises as examples!!)

Note: Lap H = -k^2 H is called the Helmholtz equation.
Note: Lap H = f is called the Poisson equation.

22
Q

State and prove Donsker’s theorem.

A

Refer to document BMpart1l.pdf
page 19, theorem 2.4,
page 48, theorem 6.1,
proof via coupling page 50, 6.2 Proof via coupling
(might ask proofs individually ie refers to them, he prefers to ask the first one ie proof via coupling)

23
Q

Just Levy
What follows from Donsker’s theorem?
What is Donsker’s theorem useful for?

A

Useful for deriving the limiting distribution of continuous functionals of simple random walks in terms of continuous functionals of BM, e.g.
page 51, Proposition 6.5 (Paul Lévy): ( max_{s<=t} B_s - B_t )_{t>=0} has the same law as ( |B_t| )_{t>=0}.
Exercise 3, sheet 5, especially part 1

https://math.stackexchange.com/questions/392042/an-application-of-donskers-theorem
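Lévy's identity can be illustrated numerically; a sketch comparing the means of both processes at t = 1 (both should be near E|B_1| = sqrt(2/pi) ≈ 0.798; discretization and sample sizes are arbitrary choices):

```python
import numpy as np

# Compare E[max_{s<=1} B_s - B_1] with E[|B_1|] on simulated paths.
rng = np.random.default_rng(3)
N, n = 10_000, 1_000
dt = 1.0 / n
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(N, n)), axis=1)
reflected = B.max(axis=1) - B[:, -1]   # max_{s<=1} B_s - B_1
mean_refl = reflected.mean()
mean_abs = np.abs(B[:, -1]).mean()     # E|B_1| = sqrt(2/pi) ~ 0.798
print(mean_refl, mean_abs)
```

The discretized running max slightly underestimates the true one, so mean_refl sits a bit below 0.798.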

24
Q

What do we always implicitly assume about our filtration? (ie “usual conditions”)

A
  • F_0 contains all sets A' contained in measurable sets A (A in F) with probability 0
  • The filtration is right-continuous, i.e. for all t>=0: F_t = Intersection( F_{t+eps} : eps>0 )
25
Q

Define continuous martingales.

Name an example.

A
page 56, Definition 1.1:
A process (M_t)_{t>=0} defined in this filtered probability space is said to be a continuous martingale with respect to the filtration (F_t)_{t>=0} if it is an L^1 process (i.e. each M_t in L^1(F_t,P) ) that is adapted to this filtration, such that:
* For all t>=s: M_s=E[M_t|F_s] almost surely
* There exists an event of probability 1 such that on this event, t -> M_t is a continuous function on R_+

Example par excellence: BM (page 56, Definition 1.1).

26
Q

State Doob’s L^2 inequality for continuous martingales and briefly explain why it follows from the discrete case.

L^p inequality
Maximal inequality

A

page 57, proposition 1.2:
If (M_t)_{t>=0} is a continuous martingale and M_t in L^2 for some given t, then:
E[ sup_{[0,t]} M_s^2 ] <= 4*E[M_t^2].

Pf: the discrete inequality for (M_{jt2^{-n}})_j gives E[ max_{j<=2^n} M_{jt2^{-n}}^2 ] <= 4*E[M_t^2].
A.s. continuity implies max_{j<=2^n} M_{jt2^{-n}}^2 -> sup_{[0,t]} M_s^2, so the claim follows by monotone convergence.

L^p: E[ sup_{[0,t]} |M_s|^p ] <= (p/(p-1))^p * E[|M_t|^p]
(( E[ sup_{[0,t]} |M_s|^p ] = lim E[ max_{j<=2^n} |M_{jt2^{-n}}|^p ] ))

Maximal inequality: lam * P( sup_{[0,t]} |M_s| > lam ) <= E[|M_t|]
(( P[ sup_{[0,t]} |M_s| > lam ] = lim P[ max_{j<=2^n} |M_{jt2^{-n}}| > lam ], and lam * P(...) <= E[ |M_t| * 1{ sup_{[0,t]} |M_s| > lam } ] ))
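The L^2 inequality can be observed empirically with M = B (BM is a continuous L^2-martingale); a sketch with arbitrary parameters:

```python
import numpy as np

# Check E[sup_{s<=1} B_s^2] <= 4 E[B_1^2] on simulated paths.
rng = np.random.default_rng(4)
N, n = 20_000, 500
dt = 1.0 / n
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(N, n)), axis=1)
lhs = np.mean(np.max(B**2, axis=1))  # E[sup_{s<=1} B_s^2]
rhs = 4 * np.mean(B[:, -1]**2)       # 4 E[B_1^2] ~ 4
print(lhs, rhs)                      # lhs < rhs, with room to spare
```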

27
Q

Def. (X_i)_i uniformly integrable.

State implications between L^1-convergence, L^p boundedness, uniformly integrable

A

* lim_{A->inf} sup_i E[ |X_i| * 1{|X_i|>A} ] = 0

  • (X_i)_i converges in L^1 iff converges in Prob and uniformly integrable.
  • (X_i)_i bounded in L^p for some p>1 implies uniformly integrable
  • (X_i)_i uniformly integrable implies bounded in L^1

Recall that bounded in L^p when sup_i E[|X_i|^p]< inf.

28
Q

State convergence criteria for continuous martingale (M_t)_t.

A

Prop 1.2

  • bounded in L^1 implies converges almost surely to some r.v. M_{inf} as t–>inf
  • uniformly integrable implies converges almost surely and in L^1 as t -> inf to some r.v. M_{inf}, and M_t = E[M_{inf} | F_t] almost surely (in part. E[M_{inf}] = E[M_0])
  • bounded in L^p for some p>1 implies converges almost surely and in L^p to some r.v. M_{inf}
29
Q

Define stopping time.

A

The r.v. T with values in [0,inf] is said to be a stopping time for the filtration (F_t)_{t>=0} if for any positive t, the event {t>=T} is in F_t.

30
Q

Only i) implies ii)!!

State the stopping time lemma (equivalence) and its corollary.
Prove? UPDATE

A

Page 57, lemma 1.10

in both (Optional stopping, bounded stopping times)

31
Q

State the optional stopping theorem (simple version).

A

part 2: proposition 1.12, pg 58

2019: page 60, proposition 1.9

32
Q

State and prove the full optional stopping theorem.

A

prop 1.13

page 61, proposition 1.10

33
Q

Define the quadratic and exponential martingale and prove that they are martingales.

A

lemma 1.14

page 61, lemma 1.11

34
Q

Define a bounded martingale.

A

(M_t)_t is a bounded martingale if

  • it is a martingale
  • |M| is almost surely bounded by some deterministic constant C (i.e. almost surely for all t>=0, C>=|M_t|)
  • For some deterministic integer K, M is almost surely constant on [K,inf) (i.e. almost surely for all t>=K, M_t=M_K)

BTW: It follows that then M_t is in L^2, thus variance of increment is sum of variances of disjoint increments (in the increment). Same after conditioning. Thus (M_t^2)_t is a submartingale E[diff | F_t]>=0.

35
Q

How do we define quadratic variation?

A
  • M bounded martingale: then there exists a unique non-decreasing process A (called the quadratic variation) s.t. A_0 = 0 and ((M_t)^2 - A_t)_{t>=0} is an L^2-martingale. (page 63, Proposition 2.2)
  • Equivalent formulations: e.g. there exists a unique L^2-martingale X s.t. A_t := (M_t)^2 - 2*X_t is non-decreasing with A_0 = 0; i.e. a unique decomposition M_t^2 = 2*X_t + A_t with X an L^2-martingale and A adapted, non-decreasing, A_0 = 0.
  • We simultaneously show that V_{delta_n} := Sum[ (M_{t^n_{i+1}} - M_{t^n_i})^2 ; i=1,…,m_n ] converges in L^2 to A_t (page 66, proposition 2.5, still for bounded martingales).
  • We then prove in page 68, theorem 2.8 (quadratic variation for L^2-martingales): M an L^2-martingale. There exists a unique adapted continuous non-decreasing process A with A_0 = 0 s.t. (M^2 - A) is a martingale. Furthermore, V_{delta_n} converges in probability to A_t.
  • We then prove in page 70, Theorem 2.13 (quadratic variation of local martingales) the same properties, with convergence in probability, for local martingales, i.e.:
    There exists a unique adapted continuous non-decreasing process (A_t)_{t>=0} with A_0 = 0 s.t. ((M_t)^2 - A_t)_{t>=0} is a local martingale. Furthermore, for all t>=0, and for all choices of nested sequences delta_n of subdivisions of [0,t] with |delta_n| -> 0, V_{delta_n} converges in probability to A_t as n -> inf.
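For BM the quadratic variation on [0, t] is t, and the convergence of V_{delta_n} is easy to see numerically; a sketch (parameters arbitrary):

```python
import numpy as np

# Sum of squared increments of BM along a fine subdivision of [0, 1]:
# it concentrates around t = 1, with fluctuations of order sqrt(2/n).
rng = np.random.default_rng(5)
N, n = 2_000, 5_000
dt = 1.0 / n
incs = rng.normal(0.0, np.sqrt(dt), size=(N, n))
V = np.sum(incs**2, axis=1)   # one V_{delta_n} per path
print(V.mean(), V.std())      # mean ~ 1, std ~ sqrt(2/n) ~ 0.02
```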
36
Q

Define local martingale.

A

A continuous adapted process (M_t)_{t>=0} is said to be a local martingale started from 0 (in some filtered prob. space) if M_0 = 0 almost surely and if there exists a sequence tau_k of stopping times such that tau_k -> inf almost surely and such that for all k>=1, M^{tau_k} is a continuous martingale started from 0.
A continuous adapted process (M_t)_{t>=0} is said to be a local martingale if the process (M_t - M_0)_{t>=0} is a local martingale started from 0.

37
Q

Characterize local martingales.

A

A continuous adapted process (M_t)_{t>=0} is a local martingale started from 0 if and only if for each k>=1, the process M^{T_k} is a continuous martingale started from 0, where T_k := inf{t>=0: |M_t| = k}.
A continuous adapted process (M_t)_{t>=0} is a local martingale if and only if for each k>=1, the process (M^{T_k}_t - M_0)_{t>=0} is a continuous martingale started from 0, where T_k := inf{t>=0: |M_t - M_0| = k}.

38
Q

What follows if a local martingale started from 0 has a bounded variation?

A

Bounded variation means it can be written as the difference of two adapted continuous non-decreasing processes; by page 70, lemma 2.12, it follows that it is almost surely equal to 0 for all times.

Potentially UPDATE with proof

39
Q

State and prove the theorem on quadratic variation for local martingales.

A

page 70, theorem 2.13

40
Q

Is a martingale a local martingale?

A

Yes

41
Q

Define the cross-variation between two local martingales, state what it is the limit of, and use it to define a scalar product on the space of martingales bounded in L^2.
What can you say about this space?
Prove it.

A

page 71-,

bottom of page 77 lemma 3.2 (M^2 is a Hilbert space)

42
Q

What is the Doob-Meyer decomposition?

Prove the exercise.

A

solution 7, exercise 3

page 72, 2.6.2. Doob-Meyer decomposition.
lemma 2.14
theorem 2.15
results without proof

43
Q

Define càdlàg processes.

A

A random process (X_t)_{t>=0} is called a càdlàg process if there exists an event of probability 1, s.t. on this event, for all t>=0, the function s -> X_s is right-continuous at t (i.e. lim X_{t+h} = X_t as h -> 0 from above) and has a left limit denoted by X_{t-} (equal to lim X_{t+h} as h -> 0 from below).

44
Q

When is a local martingale bounded in L^p?

A

page 76, proposition 2.17

45
Q

State and prove the Kunita-Watanabe inequality (i.e. theorem with assumptions).

A

page 74, proposition 2.18

exercise 5, sheet 8

46
Q

Define elementary process.

In what spaces do they live in?

A

A process (K_t)_{t>=0} with K_t := Sum[ Y_{a_j} * 1{ t in (a_j, a_{j+1}] } ; j = 0,…,p-1 ], where p is a positive integer, a_0 < … < a_p are strictly increasing non-negative times and each Y_{a_j} is an F_{a_j}-measurable r.v.

We denote by E the set of all elementary processes. We denote by E_B the set of all elementary processes s.t. each K_t is in L^2 (i.e. the previous r.v.s Y_{a_j} are in L^2). These are vector spaces.

47
Q

How are stochastic integrals defined?

A
  • We first define stochastic integrals w.r.t. BM for elementary integrands (K_t)_t by I(K)_t := Sum[ Y_{a_j} * (B_{min(t,a_{j+1})} - B_{min(t,a_j)}) ; j=0,…,p-1 ] =: Int( K_s dB_s ; [0,t] )
  • One proves that E_B is dense in P_B (P = progressively measurable processes; P_B := K in P with E[Int(K_s^2 ds)] finite)
  • One defines the stochastic integral w.r.t. BM with integrands from P_B (page 82). K -> I(K) is an isometry from E_B into M^2 (:= continuous L^2-martingales on the filtered space), which we extend to an isometry on P_B = bar(E_B):
    - Consider a sequence K^n in E_B which converges to K in P_B
    - The sequence (K^n)_n is Cauchy in the P_B-norm, i.e. E[Int( (K^n_s - K^l_s)^2 ds )] -> 0; note that this implies (I(K^n))_n is Cauchy in M^2, because the norm of I(K^n) - I(K^l) in M^2 is equal to the distance between K^n and K^l in P_B. (Jonas notes: ->!)
    - The space M^2 is complete, so that there exists a continuous martingale I(K) in M^2 s.t. I(K^n) -> I(K) in this space
    - We note that the continuous martingale I(K) does not depend on our choice of sequence (K^n)_n that approximates K in P_B (indeed, if (K~^n) is another such sequence, then I(K^n - K~^n) does converge to 0 in M^2)
  • We generalize stochastic integrals to bounded martingales M: I(K)_t := Sum( K_{a_j} * (M_{min(t,a_{j+1})} - M_{min(t,a_j)}) ; j=0,…,p-1 ).
    One defines E_M as the elementary processes s.t. E[ Y_{a_j}^2 * (A_{a_{j+1}} - A_{a_j}) ] is finite, and P_M as those with E[Int( K^2 dA )] finite.
    Again one notes that I(K) is an isometry from E_M into M^2 that can be extended to P_M (since E_M is dense)
  • We then further generalize this to stochastic integrals w.r.t. local martingales
  • We further generalize this to stochastic integrals w.r.t. semimartingales: Int(K_s dZ_s) = Int(K_s dM_s) + Int(K_s dV_s)
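The first step (integral of an elementary process against BM) and the isometry E[I(K)_t^2] = E[Int(K_s^2 ds; [0,t])] can be illustrated directly; a sketch with a single-step integrand (choice of Y and times is an arbitrary example):

```python
import numpy as np

# Elementary integrand K_s = Y * 1{s in (0.3, 0.8]} with Y = sign(B_{0.3}),
# which is F_{0.3}-measurable. Then I(K)_1 = Y * (B_{0.8} - B_{0.3}) and the
# Ito isometry gives E[I(K)_1^2] = E[Y^2] * (0.8 - 0.3) = 0.5.
rng = np.random.default_rng(6)
N, n = 100_000, 100
dt = 1.0 / n
B = np.zeros((N, n + 1))
B[:, 1:] = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(N, n)), axis=1)
j0, j1 = 30, 80                  # grid indices of times 0.3 and 0.8
Y = np.sign(B[:, j0])
I = Y * (B[:, j1] - B[:, j0])    # the elementary stochastic integral
lhs = np.mean(I**2)              # ~ 0.5
print(lhs)
```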
48
Q

Define progressively measureable processes.

Define the space they live in.

What are sufficient conditions?

A
  • The process (K_t)_{t>=0} is said to be progressively measurable with respect to the filtration (F_t)_{t>=0} if for all t>=0, the map (s,w) -> K_s(w) defined on [0,t] x Omega is measurable with respect to the product sigma-field B_{[0,t]} ⊗ F_t.
  • The set of progressively measurable processes is denoted P. The subset P_B of P consists of progressively measurable processes s.t. E[Int( (K_s)^2 ds ; [0,inf) )] is finite. Endowing P_B with the scalar product (K,K')_B := E[ Int( K_s * K'_s ds ; [0,inf) ) ] makes it a Hilbert space.
  • Sufficient: adapted process that a.s. has right- or left-continuous paths (page 81, lemma 3.5)
49
Q

Prove that the set E_B is dense in the Hilbert space P_B. In other words, for any process K in P_B, one can find a sequence of elementary processes K^n in E_B s.t. lim E[ Int( (K_s - K^n_s)^2 ds ; [0,inf] ]=0 as n–>inf.

A

page 79, lemma 3.6

50
Q

Let K be in P_B. Show that the quadratic variation of the stochastic integral I(K) is Int( (K_s)^2 ds; [0,t] ).

A

page 80

51
Q

Show that the cross-variation of I(K) and I(K’) for K, K’ in P_B is
Int( K_s*K’_s ds ; [0,t] ).

A

page 80

52
Q
τ_{a,b} = inf{ t >= 0 : B_t ∈ {a, b} }
Show: E(τ_{-1,1}) = 1
Show: τ_{a,b} is a.s. finite
Show: deduce, using optional stopping:
E(exp(-λ^2 τ_{a,b} / 2)) = (1 + exp(-λ(a+b))) / (exp(-λa) + exp(-λb))
A

part ii, pg. 59

Sheet 6, exercise 1
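The discrete analogue of E(τ_{-1,1}) = 1 can be simulated: by optional stopping applied to S_k^2 - k, the exit time of simple random walk from {-n, n} has mean n^2, which rescales to 1 diffusively. A sketch (n and trial count arbitrary):

```python
import numpy as np

# Mean exit time of simple random walk from {-n, n} is n^2.
rng = np.random.default_rng(7)
n, trials = 10, 2_000
times = np.empty(trials)
for i in range(trials):
    pos, k = 0, 0
    while abs(pos) < n:
        pos += rng.choice((-1, 1))
        k += 1
    times[i] = k
print(times.mean() / n**2)   # ~ 1
```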

53
Q

Show that when K in P_B and N in M^2 that almost surely for all t>=0 that CV{I(K),N}_t=Int( K_s dCV{B,N} ; [0,t] ).

A

Ex 6, sheet 8

page 81, proposition 3.8

54
Q

Characterize I(K).

A

page 81

55
Q

Define the sets E_M and P_M.

Hilbertness?

A

Let A be quadr. var. of M.

  • E_M is the set of elementary processes (i.e. a subset of E) s.t. K_t = Sum( K_{a_j} * 1{ t in (a_j, a_{j+1}] } ; j=0,…,p-1 ), where a_0,…,a_p are strictly increasing non-negative times and the K_{a_j} are F_{a_j}-measurable r.v.s s.t. E[ (K_{a_j})^2 * (A_{a_{j+1}} - A_{a_j}) ] is finite for all j.
  • P_M: K progressively measurable s.t. E[ Int( (K_s)^2 dA_s ; [0,inf) ) ] is finite
  • P_M is a Hilbert space when endowed with (K,K')_M := E[ Int( K_s * K'_s dA_s ; [0,inf) ) ]

pg 82

56
Q

Define semimartingale.

Define quadratic variation of semimartingale.

A

A process (Z_t)_{t>=0} is a cont. semimart. in the filtered prob. space (Omega, F,(F_t)_t,P) if it can be written as Z_t=M_t+V_t, where:

  • M is a local martingale in this filtered prob. space
  • V is the difference between two adapted cont. non-decr. processes V^+ and V^- started from 0.

The quadratic variation of a semimartingale Z_t = M_t + V_t is the quadratic variation of its martingale part M.

57
Q

What holds for the integral of a cont. adapted process w.r.t the quadr. var. of a semimartingale?
What is the corollary of this?
State lemmas/cor

A

lemmas and cor pg 85-88

58
Q

State and give idea of proof of Ito’s formula for 1-d.

State the observations after the statement, i.e. interpretation of the summands.

A

page 88, theorem 4.7

59
Q

State Ito in higher dimensions.

A

page 89, theorem 4.9

60
Q

Prove the applications of the optional stopping theorem.
What is the probability of a d-dim BM leaving the unit sphere?
What is the probability of a 1-dim BM leaving the interval (a,b), with 0 in (a,b)?

A

in particular that the stopping time is 1 (eg 1)

UPDATE

61
Q

Solve exercise 3, sheet 9

A

..

62
Q

Solve exercise 4, sheet 9

Maybe do and check one before doing all….

A

..

63
Q

Prove exercises 5 from 9 and 6 from 10

A

..

64
Q

Given a local martingale (M_t)_t define a local “exponential” martingale and give and prove two equations relating them. (IMPORTANT)

State generalization after proof.

A

page 90, proposition 4.11

Same proof: F in C^2 on R^2 s.t. d_a F + (1/2) d^2_x F = 0; then (F(M_t, QV(M)_t))_{t>=0} is a local martingale as soon as M is.
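The defining property E[exp(lam*M_t - lam^2/2 * QV(M)_t)] = E[exp(lam*M_0)] is easy to observe for M = B (QV(B)_t = t); a sketch (lam and sample size arbitrary):

```python
import numpy as np

# The exponential martingale exp(lam*B_t - lam^2 t/2) has constant expectation 1.
rng = np.random.default_rng(8)
N, lam, t = 500_000, 0.7, 1.0
B_t = rng.normal(0.0, np.sqrt(t), N)
E = np.mean(np.exp(lam * B_t - lam**2 * t / 2))
print(E)   # ~ 1
```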

65
Q

Give Lévy's characterization of Brownian motion in 1-d.
Prove it.
Give the n-d version.

A

page 93, corollary 4.13

also give page 94, corollary 4.14 if time allows

66
Q

How can one construct a BM from a local martingale?

A

page 91, corollary 4.15

67
Q

Apply Ito to BM.

What happens when F is harmonic?

A

Theorem 4.17

68
Q

Prove that if a continuous solution H on the closure of D (D an open subset of R^d), C^2 in D, to Lap H(x) = 2*alpha(x)*H(x) in D with H = f on dD exists, then it is unique and it is equal to the function U defined for all x in D by: U(x) = E_x[ f(B_T) * exp( -Int( alpha(B_t) dt ; [0,T] ) ) ], where B is a BM started from x and T denotes its exit time from D.

A

page 93, proposition 4.18

69
Q

Define the Heat equation problem.

Prove Prop:
If the solution to the heat equation exists, then it is unique and it is equal to U(x,t):=E_x[f(B_t)*1{T>t}], where B is BM started from x and T denotes its exit time from D.

A

page 93

70
Q

State both theorems about strong solutions to stoch. diff. equations.
Prove the uniqueness part of Theorem 5.2 (page 100)

A

state gronwall’s lemma
proof:
page 97

Recall Borel-Cantelli and Markov’s inequality (prob the one on wiki)

71
Q

State and prove the Yamada-Watanabe criteria for (pathwise) uniqueness.

A

page 100, theorem 5.8

72
Q

State the theorem about weak solutions.

A

page 105, theorem 5.12

73
Q

State the corollary with the definition of squared Bessel processes.

Define Bessel processes.

List 3 properties of Bessel processes. And which theorems are used to prove them in the exercises.

A

page 106, corollary 5.14

page 106, definition 5.15

page 106 (see ex sheets and old script!)

74
Q

Explain Tanaka’s example

A

page 104, 5.3.2

UPDATE QUESTION TO BE MORE SPECIFIC!!

75
Q

Prove:
Proposition 5.11. Suppose that F is a continuous function on the closure of D that is C^2 in D, such that F = f on ∂D and such that LF = 0 in D. Then necessarily,
F(x_0) = E[f(X_τ)].

A

page 102

76
Q

UPDATE

As text in this year's script, in the warm-up:

Prove, for D_t := the Radon-Nikodym derivative of Q w.r.t. P on (Omega, F_t), where Q is absolutely continuous w.r.t. P, such that for any A ∈ F_t:
Q(A) = E[1_A * D_t]:
Proposition 6.2. The process (D_t)_{t>=0} is a uniformly integrable martingale (in the filtered probability space (Ω, F, (F_t)_{t>=0}, P)) that converges (almost surely and in L^1) to the non-negative random variable D_∞. Furthermore, for any stopping time T, the Radon-Nikodym derivative of Q with respect to P on F_T is the random variable D_T.
A

page 109, proposition 6.2

77
Q

State the result on Cameron-Martin space for BM.

A

page 110, proposition 6.4

78
Q

State the general case of the result on the Cameron-Martin space.

A

page 111, proposition 6.7

79
Q

State Girsanov’s theorem.

A

page 111, theorem 6.8

80
Q

State Girsanov’s theorem for BM.

A

page 110, theorem 6.10

81
Q

Prove Girsanov’s theorem.

A

page 113

82
Q

Exercise 1 after Girsanov’s BM theorem, there are examples after thm..
Show that B_t - h(t) for h in C is a BM for some prob measure.

Generalize this to a random process.

A

UPDATE

83
Q

Solve ex. sheet 10

A

UPDATE

84
Q

State Novikov’s and Kazamaki’s criteria

A

page 114

85
Q

What results follow from Girsanov’s theorem regarding SDEs?

A

page 117, proposition 6.12

page 118, Proposition 6.14

86
Q

Prove the strong Markov property.

A

pg. 31, prop 4.12

87
Q

Name some Harmonic functions.

How can this come up in an exam?

A

constants and affine polynomials (more generally, polynomials in which each coordinate appears with degree at most one),
real & imaginary parts of holomorphic functions,
ln(x^2+y^2) on R^2 \ {0}, x/[r(r+z)], x/[r^2-z^2], ||x||^{2-n} for n >= 3 on D = R^n \ {0}

A domain could be given, and a probability that a Brownian motion leaves/visits this domain could be asked for.
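Harmonicity of such candidates is quick to verify numerically with a five-point stencil; a sketch for ln(x^2+y^2) (test point and step size are arbitrary choices):

```python
import math

# Finite-difference check that ln(x^2 + y^2) is harmonic away from the origin.
def f(x, y):
    return math.log(x * x + y * y)

x0, y0, h = 1.3, -0.7, 1e-4
lap = (f(x0 + h, y0) + f(x0 - h, y0) + f(x0, y0 + h) + f(x0, y0 - h)
       - 4 * f(x0, y0)) / h**2
print(lap)   # ~ 0 up to truncation and round-off error
```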

88
Q

Show that if F is a continuous function on the closure of D that is C^2 in D, s.t. F = f on the boundary of D and LF = 0 in D, then necessarily F(x_0) = E[f(X_tau)].

A

..

89
Q

Construct BM. (vaguely, at least explain the steps)

A

..

90
Q

Give weak solutions via Girsanov.

A

..