Chapter 7: Stochastic processes and martingale theory Flashcards
M and C
M, or (M_n), is any stochastic process
C, or (C_n), is a process adapted to the filtration
F_n = sigma( M_i : i ≤ n )
Martingale transform of C by M
The martingale transform of C by M is
(C•M)_n = sum from i=1 to n of [C_{i-1}(M_i - M_{i-1})]
(C•M)_0 = 0
Martingale transform of C by M notes
If M is a martingale,
(C•M)_n can be thought of as the winnings after n plays of a game
Ie at round i, a bet of C_{i-1} is made and the change to our resulting wealth is C_{i-1}(M_i - M_{i-1})
For example,
If C_i = 1 and M_n is the simple random walk:
Take (M_i) to be the symmetric simple random walk
M_i - M_{i-1} =
{+1, prob 0.5
{-1, prob 0.5
If up, we win game round i; if down, we lose the round:
C_{i-1} is F_{i-1}-measurable
Ie the bet you place on winning round i: if you win you get your stake back plus C_{i-1}; if you lose you get nothing
So (C•M)_n = profit/loss after n rounds
We bet C_{i-1}, which is F_{i-1}-measurable, on play i
Ie the ith bet uses only the info gained during the first i-1 plays. In particular, we don't yet know the result of the ith play.
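The transform is easy to sanity-check numerically. A minimal Python sketch (the function name martingale_transform and the list-based representation are my choices, not from the chapter):

```python
import random

def martingale_transform(C, increments):
    """(C.M)_n = sum_{i=1}^n C_{i-1}(M_i - M_{i-1}).

    C[i-1] is the bet C_{i-1} placed before round i;
    increments[i-1] is M_i - M_{i-1}.
    Returns [(C.M)_0, (C.M)_1, ..., (C.M)_n], starting from (C.M)_0 = 0.
    """
    Y = [0]
    for c, dm in zip(C, increments):
        Y.append(Y[-1] + c * dm)
    return Y

# Symmetric simple random walk: each increment is +1 or -1 with prob 1/2.
random.seed(0)
n = 10
increments = [random.choice([1, -1]) for _ in range(n)]
C = [1] * n                       # constant unit bet C_i = 1
Y = martingale_transform(C, increments)
# With C_i = 1, (C.M)_n is just the walk's position M_n.
```

With the constant bet C_i = 1 the transform reproduces the walk itself, matching the example above.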
THM 7.1.1
for martingale transform to be martingale
Betting strategies don’t help
If M is a martingale and C is adapted and bounded
Then
(C•M)_n is also a martingale.
If M is a supermartingale/submartingale and C is adapted, bounded and non-negative then (C•M)_n is also a supermartingale/submartingale
Note: M a martingale means winning and losing balance out, ie a fair game; C bounded ensures (C•M)_n is in L^1; and (C•M)_n is our profit/loss. So profit/loss is a martingale if the game is fair.
If M is a supermartingale/submartingale, ie biased down or up, then the profit/loss is also biased down or up.
Proof of thm
If M is a martingale and C is adapted and bounded
Then
(C•M)_n is also a martingale.
If M is a supermartingale/submartingale and C is adapted, bounded and non-negative then (C•M)_n is also a supermartingale/submartingale
Let (M_n) be a martingale. Write Y_n= (C•M)_n
1* C_n is F_n-measurable, M_n is F_n measurable
So Y_n is F_n measurable
2*show Y_n in L^1:
E(|Y_n|) = E( | sum from i=1 to n of [C_{i-1}(M_i - M_{i-1})] | )
≤
E( sum from i=1 to n of [ |C_{i-1}(M_i - M_{i-1})| ] )
by the triangle inequality
= sum from i=1 to n of
[E( |C_{i-1}| |M_i - M_{i-1}| )]
**
Since (C_n) is bounded, there exists c in R st |C_n| ≤ c for all n
**
≤
c [ sum from i=1 to n of E( |M_i - M_{i-1}| ) ]
(triangle inequality: |a - b| ≤ |a| + |b|)
≤
c [ sum from i=1 to n of E( |M_i| + |M_{i-1}| ) ]
= c [ sum from i=1 to n of ( E|M_i| + E|M_{i-1}| ) ]
< ∞
(each term is finite since each M_i is in L^1, and the sum is finite)
This implies Y_n is in L^1
3* want to show E(Y_n | F_{n-1}) = Y_{n-1}
E(Y_n | F_{n-1}) = E[ (Y_{n-1} + C_{n-1}( M_n - M_{n-1}) )|F_{n-1}]
(Y_{n-1} is F_{n-1}-measurable, C_{n-1} is F_{n-1}-measurable and M_{n-1} is F_{n-1}-measurable, so we can take them out; only M_n is merely F_n-measurable)
= Y_{n-1} + C_{n-1} E((M_n - M_{n-1})| F_{n-1})
(Because M is a martingale, E(M_n | F_{n-1}) = M_{n-1}, so this conditional expectation is 0)
= Y_{n-1}
(For a submartingale the same argument gives ≥ Y_{n-1}; for a supermartingale, ≤ Y_{n-1}; this is where non-negativity of C is needed)
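The theorem can also be checked empirically. A minimal Python sketch (the "double after a loss" strategy is just an illustrative adapted, bounded choice, not from the chapter): for a fair game, the average profit of any such strategy should sit near 0.

```python
import random

def play(n, rng):
    """One realisation of (C.M)_n for a symmetric random walk M, with an
    adapted, bounded strategy: bet 2 after a losing round, otherwise bet 1.
    (The strategy is an illustrative choice, not from the chapter.)"""
    wealth, bet = 0, 1
    for _ in range(n):
        dm = rng.choice([1, -1])       # M_i - M_{i-1}, revealed after betting
        wealth += bet * dm             # bet was fixed using info up to i-1
        bet = 2 if dm == -1 else 1     # next bet is F_i-measurable
    return wealth

rng = random.Random(42)
trials = 20000
mean_profit = sum(play(20, rng) for _ in range(trials)) / trials
# THM 7.1.1: (C.M) is a martingale, so E[(C.M)_20] = 0; the Monte Carlo
# average should be close to 0.
```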
Consider roulette 37 segments 18B 18R and 1G
No matter which colour you pick
P(win) = 18/37
P(lose) = 19/37
So it doesn't matter whether you bet on R or B
X_n =
{+1 w.p. 18/37
{-1 w.p. 19/37
A win returns double your stake; a loss forfeits it.
Setting up a martingale transform for roulette:
37 segments 18B 18R 1G
Define
M_n = sum from i=1 to n of X_i
M_n - M_{n-1} = X_n
*+1 if the player wins game n and -1 if they lose
Filtration is generated by (M_n), so F_n = sigma( M_i : i ≤ n)
Bet on spin n is C_{n-1}; require C_{n-1} to be F_{n-1}-measurable
So (C_n) is adapted.
Total profit/loss is
(C•M)_n =
sum from i=1 to n of
[C_{i-1} (M_i - M_{i-1})]
(C_{i-1} is our bet;
M_i - M_{i-1} is X_i:
win: X_i = +1, profit +C_{i-1}
lose: X_i = -1, profit -C_{i-1})
Since E[X_n] = 18/37 - 19/37 = -1/37 < 0, (M_n) is a supermartingale, so by THM 7.1.1 no betting strategy helps: (C•M) is also a supermartingale.
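A quick Monte Carlo sketch of this (the function name and parameters are my choices): with a constant unit bet over n = 37 spins, E[(C•M)_37] = 37 × (−1/37) = −1, and the simulated mean profit should sit near −1.

```python
import random

def roulette_profit(n, rng):
    """Profit after n unit bets on a colour: +1 w.p. 18/37, -1 w.p. 19/37."""
    profit = 0
    for _ in range(n):
        profit += 1 if rng.random() < 18 / 37 else -1
    return profit

rng = random.Random(1)
trials = 20000
mean_profit = sum(roulette_profit(37, rng) for _ in range(trials)) / trials
# E[X_i] = 18/37 - 19/37 = -1/37, so E[(C.M)_37] = -1: the player's
# profit/loss is a supermartingale and drifts downward.
```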
LEMMA 7.3.3
for a supermartingale
upcrossings probabilities
Suppose M is a supermartingale and bounded in L^1
Then P[U_∞[a, b] = ∞] = 0
for a < b
- the condition "bounded in L^1" means sup over n of E[|M_n|] < ∞
- ie the probability that the number of upcrossings is infinite is 0
- U_∞[a,b] = number of upcrossings of [a,b] made by the process M over all time
- so we show the probability of oscillating forever is 0
*Essentially, Lemma 7.3.3 says that the paths of M cannot oscillate indefinitely. This is the
crucial ingredient of the martingale convergence theorem.
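Upcrossings are easy to count on a finite path. A minimal Python sketch (the counting convention "from ≤ a up to ≥ b" and the function name are my choices):

```python
def upcrossings(path, a, b):
    """Count completed upcrossings of [a, b]: each time the path moves from
    a value <= a to a later value >= b counts once."""
    count = 0
    below = False          # have we touched level a since the last upcrossing?
    for x in path:
        if x <= a:
            below = True
        elif x >= b and below:
            count += 1
            below = False
    return count

# A path that oscillates across [0, 2] twice and then stays above it.
path = [1, 0, 1, 2, 1, 0, -1, 2, 3]
```

A path that oscillates between a and b forever would make this count tend to infinity; the lemma says that happens with probability 0.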
PROOF
of LEMMA 7.3.3
Suppose M is a supermartingale and bounded in L^1
Then P[U_∞[a, b] = ∞] = 0, for a < b
From lemma 7.3.2 we have
(b − a)E[U_N [a, b]] ≤ E[ |M_N -a|]
by the triangle inequality, |M_N - a| ≤ |M_N| + |a|, and linearity of expectation:
≤ E[|M_N|] + |a|
≤ sup over n of E[|M_n|] + |a|
< ∞
(**)
(NOTE RHS is indep of N)
want to let N tend to ∞
Apply monotone convergence to U_N[a,b]
* U_N[a,b] ≥ 0 (counting something)
*U_N[a,b]≤U_{N+1}[a,b] (increasing )
So we can use the MCT:
U_N[a,b] → U_∞[a,b]
as N → ∞ (monotonically, hence almost surely)
Hence by MCT,
E[U_N [a, b]] → E[U_∞[a, b]],
TAKING LIMITS of (**):
(b − a) E[U_∞[a, b]] ≤ |a| + sup over n of E[|M_n|] < ∞,
which implies that
P[U_∞[a, b] < ∞] = 1.
(a RV with finite expectation cannot take the value ∞ with positive probability)
Convention
we write E|X| as shorthand for E[|X|]
THM 7.3.4
Martingale Convergence Theorem I)
Suppose M is a supermartingale
bounded in L^1.
Then the limit
M_n -a.s.→ M_∞ exists
and
P[|M_∞| < ∞] = 1.
- ie there exists a RV M_∞ st M_n converges to it almost surely
*and converge to something that is finite
PROOF
THM 7.3.4
Martingale Convergence Theorem I)
Suppose M is a supermartingale
bounded in L^1.
Then the limit
M_n -a.s.→ M_∞ exists
and
P[|M_∞| < ∞] = 1.
Define event
Λ_{a,b} = {ω : M_n(ω) < a for infinitely many n} ∩ {ω : M_n(ω) > b for infinitely many n}.
=
“M_n keeps oscillating, infinitely”
Note:
Λ_{a,b} ⊆ {U_∞[a,b] = ∞}
(a sub-event of the event of infinitely many upcrossings)
We have shown that event has probability 0 by Lemma 7.3.3. So
P[Λ_{a,b}] = 0.
(we can restrict to rational a, b: if (M_n) oscillates at all, some pair of rationals a < b lies between its liminf and limsup, and the process oscillates below a and above b; the rationals are countable, so we get a countable union)
consider:
{ω : M_n(ω) does not converge to a limit in [−∞, ∞]} = {there exist rationals a < b st (M_n) oscillates below a and above b}
= ∪ over a, b ∈ Q, a < b, of Λ_{a,b}
P[ ∪ over a, b ∈ Q of Λ_{a,b} ] ≤
sum over pairs a, b ∈ Q of P[Λ_{a,b}]
= 0
so P(M_n has no limit)=0
so we have that
P[M_n converges to some M_∞ ∈ [−∞, +∞]] = 1
which proves the first part of the theorem:
M_n -a.s.→ M_∞
By Fatou's lemma, E|M_∞| ≤ liminf E|M_n| ≤ sup over n of E|M_n|; the RHS is finite as (M_n) is bounded in L^1. So E|M_∞| < ∞,
so P(|M_∞| = ∞) = 0.
NOTES ABOUT
Martingale Convergence Theorem I
if (M_n) is a non-negative supermartingale then E[|M_n|] = E[M_n] ≤ E[M_0],
so in this case M is automatically bounded in L^1
Theorem 7.3.4 has one big disadvantage: it cannot tell us anything about the limit M_∞,
except that it is finite. To gain more information about M_∞, we need an extra condition.
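A concrete illustration of this limitation (my example, not from the chapter): M_n = U_1 ⋯ U_n with U_i ~ Uniform(0, 2) is a non-negative martingale with E[M_n] = 1, hence bounded in L^1, so the theorem applies; but M_n → 0 a.s. since E[log U_i] = log 2 − 1 < 0, so the a.s. limit does not satisfy E[M_∞] = lim E[M_n].

```python
import random

def product_martingale_path(n, rng):
    """M_n = U_1 * ... * U_n with U_i ~ Uniform(0, 2), so E[U_i] = 1 and
    (M_n) is a non-negative martingale with E[M_n] = 1, hence bounded in L^1.
    The a.s. limit is 0 because E[log U_i] = log 2 - 1 < 0."""
    m, path = 1.0, [1.0]
    for _ in range(n):
        m *= rng.uniform(0, 2)
        path.append(m)
    return path

rng = random.Random(7)
final_values = [product_martingale_path(200, rng)[-1] for _ in range(50)]
# Every simulated path should have collapsed towards 0, even though
# E[M_200] = 1: the expectation is carried by rare huge paths, so
# E[M_infinity] != lim E[M_n] here.
```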
Corollary 7.3.5 (Martingale Convergence Theorem II)
In the setting of Theorem 7.3.4,
suppose additionally that (M_n) is bounded in L^2.
Then M_n → M_∞ in both L^1 and L^2,
and
lim as n→∞ of E[M_n] = E[M_∞]
lim as n→∞ of var(M_n) = var(M_∞)
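A concrete L^2-bounded example (my choice, not from the chapter): M_n = sum from i=1 to n of X_i / i with fair ±1 coin flips has E[M_n^2] = sum over i ≤ n of 1/i^2 ≤ π^2/6, so Corollary 7.3.5 applies and the limit has finite mean and variance.

```python
import math
import random

def scaled_walk(n, rng):
    """M_n = sum_{i=1}^n X_i / i with X_i = +-1 fair coin flips.
    E[M_n^2] = sum_{i<=n} 1/i^2 <= pi^2/6, so (M_n) is bounded in L^2."""
    return sum(rng.choice([1, -1]) / i for i in range(1, n + 1))

rng = random.Random(3)
trials = 5000
second_moment = sum(scaled_walk(400, rng) ** 2 for _ in range(trials)) / trials
l2_bound = math.pi ** 2 / 6      # sum over all i >= 1 of 1/i^2
# second_moment should be close to sum_{i<=400} 1/i^2 ~ 1.64, consistent
# (up to Monte Carlo noise) with the L^2 bound pi^2/6 ~ 1.645.
```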
KNOW HOW TO
- use the MCT
* property of conditional expectation