Chapter 7: Stochastic processes and martingale theory Flashcards

1
Q

M and C

A

M, or (M_n), is any stochastic process.

C, or (C_n), is a process adapted to the filtration

F_n = σ(M_i : i ≤ n)

2
Q

Martingale transform of C by M

A

The martingale transform of C by M is

(C•M)_n = Σ_{i=1}^n C_{i-1}(M_i − M_{i-1})

(C•M)_0 = 0
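A minimal computational sketch of this definition in Python (the function and variable names are mine, not from the notes):

    import random

    def martingale_transform(C, M):
        # (C.M)_n = sum_{i=1}^n C_{i-1} * (M_i - M_{i-1}), with (C.M)_0 = 0.
        # M[i] holds M_i (so M[0] = M_0) and C[i] holds the bet C_i.
        Y = [0.0]
        for i in range(1, len(M)):
            Y.append(Y[-1] + C[i - 1] * (M[i] - M[i - 1]))
        return Y

    # Example: constant bets C_i = 1 on a simple symmetric random walk,
    # in which case (C.M)_n is just M_n itself.
    M = [0]
    for _ in range(10):
        M.append(M[-1] + random.choice([1, -1]))
    C = [1] * (len(M) - 1)
    print(martingale_transform(C, M))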

3
Q

Martingale transform of C by M notes

A

If M is a martingale,

(C•M)_n can be thought of as winnings after n plays of a game:

i.e. at round i, a bet of C_{i-1} is made, and the resulting change to our wealth is C_{i-1}(M_i − M_{i-1}).

For example,

take C_i = 1 and (M_i) to be the simple symmetric random walk:

M_i − M_{i-1} =
{+1, prob 0.5
{−1, prob 0.5

If the walk steps up we win round i; if it steps down we lose it.

C_{i-1} is F_{i-1}-measurable:

it is the bet we place on winning round i. A win means we get our stake back and win C_{i-1} more; a loss means we get nothing.

So (C•M)_n = profit/loss after n rounds.

We bet C_{i-1} (which is F_{i-1}-measurable) on play i,
i.e. the ith bet uses only the information gained during the first i−1 plays. In particular, we don't yet know the result of the ith play. A simulation sketch follows below.
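As a sanity check, here is a small Monte Carlo sketch of this game with constant bets on the symmetric walk (names are illustrative); since the game is fair, the average winnings should be near 0:

    import random

    def winnings(n_rounds, bet=1):
        # Bet `bet` each round; wealth changes by bet * (M_i - M_{i-1}).
        total = 0
        for _ in range(n_rounds):
            total += bet * random.choice([1, -1])
        return total

    trials = 20_000
    avg = sum(winnings(100) for _ in range(trials)) / trials
    print(avg)  # close to 0: the game is fair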

4
Q

THM 7.1.1

for martingale transform to be martingale

Betting strategies don’t help

A

If M is a martingale and C is adapted and bounded

Then

(C•M)_n is also a martingale.

If M is a supermartingale/submartingale and C is adapted, bounded and non-negative then (C•M)_n is also a supermartingale/submartingale

Note: M being a martingale means the game is fair (wins and losses balance); C bounded ensures (C•M)_n is in L^1; and (C•M)_n is our profit/loss. So: profit/loss is itself a martingale if the game is fair.

And super/sub: if M is biased down or up, then the profit/loss is biased down or up too.

5
Q

Proof of thm

If M is a martingale and C is adapted and bounded

Then

(C•M)_n is also a martingale.

If M is a supermartingale/submartingale and C is adapted, bounded and non-negative then (C•M)_n is also a supermartingale/submartingale

A

Let (M_n) be a martingale. Write Y_n = (C•M)_n.

1* C_n is F_n-measurable and M_n is F_n-measurable,
so Y_n is F_n-measurable.

2* Show Y_n ∈ L^1:

Since (C_n) is bounded, there exists c ∈ R such that |C_n| ≤ c for all n. Then

E|Y_n| = E| Σ_{i=1}^n C_{i-1}(M_i − M_{i-1}) |

≤ Σ_{i=1}^n E[ |C_{i-1}| |M_i − M_{i-1}| ]

(triangle inequality)

≤ c Σ_{i=1}^n E|M_i − M_{i-1}|

(using |C_{i-1}| ≤ c)

≤ c Σ_{i=1}^n ( E|M_i| + E|M_{i-1}| )

(triangle inequality again: |a − b| ≤ |a| + |b|)

< ∞

(each term is finite since each M_i ∈ L^1, and the sum is finite).

This implies Y_n ∈ L^1.

3* Want to show E(Y_n | F_{n-1}) = Y_{n-1}:

E(Y_n | F_{n-1}) = E[ (Y_{n-1} + C_{n-1}(M_n − M_{n-1})) | F_{n-1} ]

(Y_{n-1} and C_{n-1} are F_{n-1}-measurable, M_{n-1} is F_{n-1}-measurable and M_n is F_n-measurable: so we can take out what is known)

= Y_{n-1} + C_{n-1} E( (M_n − M_{n-1}) | F_{n-1} )

(because M is a martingale, E(M_n | F_{n-1}) = M_{n-1}, so this conditional expectation is 0)

= Y_{n-1}.

(For a submartingale the same computation gives ≥ Y_{n-1}, and for a supermartingale ≤ Y_{n-1}, using C_{n-1} ≥ 0.)

6
Q

Consider roulette 37 segments 18B 18R and 1G

A

No matter which colour you pick,

P(win) = 18/37

P(lose) = 19/37

So it doesn't matter whether you bet on R or B.

X_n =
{+1 w.p. 18/37
{−1 w.p. 19/37

A win doubles your stake; a loss forfeits the bet money. A quick expected-value check is below.
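A tiny exact computation (using Python's fractions module; an illustration, not part of the notes) of the expected value of a one-unit bet:

    from fractions import Fraction

    # One unit bet on red (or black): +1 w.p. 18/37, -1 w.p. 19/37.
    p_win, p_lose = Fraction(18, 37), Fraction(19, 37)
    print(p_win * 1 + p_lose * (-1))  # -1/37: every bet loses on average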

7
Q

Setting up a martingale transform for roulette:

37 segments 18B 18R 1G

A

Define
M_n = Σ_{i=1}^n X_i

M_n − M_{n-1} = X_n

*+1 if the player wins game n and −1 if they lose

The filtration is generated by (M_n), so F_n = σ(M_i : i ≤ n).
The bet on spin n is C_{n-1}; we require C_{n-1} to be F_{n-1}-measurable.

So (C_n) is adapted.

Total profit/loss is

(C•M)_n = Σ_{i=1}^n C_{i-1}(M_i − M_{i-1})

(C_{i-1} is our bet and M_i − M_{i-1} is X_i:
win (X_i = +1) gives +C_{i-1},
lose (X_i = −1) gives −C_{i-1})

Since E[X_i] = 18/37 − 19/37 = −1/37 < 0, M is a supermartingale, so by Theorem 7.1.1 any adapted, bounded, non-negative betting strategy makes (C•M)_n a supermartingale too: betting strategies don't help. A simulation sketch follows below.
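A sketch (the strategy and function names are my own) of this martingale transform with an adapted strategy; even the classic "double after every loss" strategy has negative average profit:

    import random

    def roulette_profit(n_spins, strategy):
        # (C.M)_n: `strategy(history)` returns the bet C_{i-1} using only
        # the outcomes X_1, ..., X_{i-1} of earlier spins, so C is adapted.
        profit, history = 0, []
        for _ in range(n_spins):
            bet = strategy(history)
            x = 1 if random.random() < 18 / 37 else -1  # X_i
            profit += bet * x
            history.append(x)
        return profit

    def doubling(history):
        # Double after every loss, reset to 1 after a win; the cap keeps
        # C bounded, as Theorem 7.1.1 requires.
        losses = 0
        for x in reversed(history):
            if x == -1:
                losses += 1
            else:
                break
        return min(2 ** losses, 1024)

    trials = 20_000
    avg = sum(roulette_profit(50, doubling) for _ in range(trials)) / trials
    print(avg)  # negative on average: the strategy doesn't help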

8
Q

LEMMA 7.3.3

for a supermartingale

upcrossings probabilities

A

Suppose M is a supermartingale and bounded in L^1.

Then, for a < b,

P[U_∞[a, b] = ∞] = 0

  • "bounded in L^1" means sup_n E|M_n| < ∞
  • the probability that the number of upcrossings is infinite is 0
  • U_∞[a, b] = number of upcrossings of [a, b] made by the process M over all time
  • i.e. it shows the probability of oscillating forever is 0 (see the counting sketch below)

*Essentially, Lemma 7.3.3 says that the paths of M cannot oscillate indefinitely. This is the crucial ingredient of the martingale convergence theorem.
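A short sketch that counts upcrossings of a sample path (the convention used here, passage from at or below a to at or above b, and all names are my own):

    import random

    def upcrossings(path, a, b):
        # Count completed upcrossings of [a, b]: each passage of the path
        # from <= a up to >= b counts once.
        count, below = 0, False
        for x in path:
            if x <= a:
                below = True
            elif x >= b and below:
                count += 1
                below = False
        return count

    # Example: upcrossings of [-2, 2] by a simple symmetric random walk.
    path, s = [0], 0
    for _ in range(10_000):
        s += random.choice([1, -1])
        path.append(s)
    print(upcrossings(path, -2, 2))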

9
Q

PROOF
of LEMMA 7.3.3

Suppose M is a supermartingale and bounded in L^1

Then P[U_∞[a, b] = ∞] = 0

A

From Lemma 7.3.2 we have

(b − a) E[U_N[a, b]] ≤ E|M_N − a|

≤ E|M_N| + |a|

(by the triangle inequality, |M_N − a| ≤ |M_N| + |a|, and linearity of expectation)

≤ sup_{n∈N} E|M_n| + |a|

< ∞
(**)

(NOTE: the RHS is independent of N.)

We want to let N tend to ∞, so check the conditions for monotone convergence applied to U_N[a, b]:
  • U_N[a, b] ≥ 0 (it counts something)
  • U_N[a, b] ≤ U_{N+1}[a, b] (increasing in N)

SO we can use the MCT:

U_N[a, b] → U_∞[a, b]
as N → ∞ (almost surely)

Hence by the monotone convergence theorem,

E[U_N[a, b]] → E[U_∞[a, b]].

TAKING LIMITS in (**),

(b − a) E[U_∞[a, b]] ≤ |a| + sup_{n∈N} E|M_n| < ∞,

which implies that

P[U_∞[a, b] < ∞] = 1,

because a RV with finite expectation can't take the value ∞.

10
Q

convention E[|X|]

A

E[|X|] = E|X|

11
Q

THM 7.3.4

(Martingale Convergence Theorem I)

A

Suppose M is a supermartingale
bounded in L^1.

Then the almost sure limit
M_n → M_∞ exists,
and
P[|M_∞| < ∞] = 1.

  • i.e. there exists a RV M_∞ such that M_n converges to it almost surely

*and it converges to something that is (almost surely) finite

12
Q

PROOF

THM 7.3.4

(Martingale Convergence Theorem I)

Suppose M is a supermartingale
bounded in L^1.

Then the limit
M_n -a.s.→ M_∞ exists
and
P[|M_∞| < ∞] = 1.

A

Define the event
Λ_{a,b} = {ω : M_n(ω) < a for infinitely many n} ∩ {ω : M_n(ω) > b for infinitely many n}

=
"M_n keeps oscillating below a and above b, infinitely often"

Note:
Λ_{a,b} ⊆ {U_∞[a, b] = ∞}
(a sub-event of the event of infinitely many upcrossings),
and we have shown that event has probability 0 by Lemma 7.3.3. So
P[Λ_{a,b}] = 0.

(We can restrict to rational a, b: if (M_n) oscillates below a' and above b' for some reals a' < b', then it also oscillates below a and above b for any rationals a, b with a' < a < b < b', and such rationals always exist. The rationals are countable, which we need below.)

Consider:
{ω : M_n(ω) does not converge to a limit in [−∞, ∞]}
= {there exist rational a < b such that (M_n) oscillates below a and above b}
= ∪_{a<b, a,b∈Q} Λ_{a,b}

P[ ∪_{a<b, a,b∈Q} Λ_{a,b} ] ≤ Σ_{a<b, a,b∈Q} P[Λ_{a,b}]
= 0

so P(M_n has no limit) = 0,

and hence
P[M_n converges to some M_∞ ∈ [−∞, +∞]] = 1,
which proves the first part of the theorem:
M_n → M_∞ almost surely.

An inequality (by Fatou's lemma):
E|M_∞| ≤ sup_n E|M_n|
The RHS is finite as (M_n) is bounded in L^1.
So
E|M_∞| < ∞,

so P(|M_∞| = ∞) = 0 (a RV with finite expectation can't take the value ∞).

13
Q

NOTES ABOUT

Martingale Convergence Theorem I

A

If M_n is a non-negative supermartingale, then we have E|M_n| = E[M_n] ≤ E[M_0],

so in this case M is automatically bounded in L^1.

Theorem 7.3.4 has one big disadvantage: it cannot tell us anything about the limit M_∞, except that it is finite. To gain more information about M_∞, we need an extra condition.

14
Q

Corollary 7.3.5 (Martingale Convergence Theorem II)

A

In the setting of Theorem 7.3.4,

suppose additionally that (M_n) is bounded in L^2.

Then M_n → M_∞ in both L^1 and L^2,
and

lim_{n→∞} E[M_n] = E[M_∞]

lim_{n→∞} var(M_n) = var(M_∞)

15
Q

KNOW HOW TO

A
  • use the MCT
  • properties of conditional expectation

16
Q

LEMMA 7.4.1

integer valued sequence (def of convergence)

A

Let (a_n) be an integer-valued sequence, and suppose that a_n → a as n → ∞.

Then (a_n) is eventually equal to a constant:
there exists N ∈ N such that a_n = a for all n ≥ N.

Proof: put ε = 1/3 into the definition of a convergent real sequence, and we obtain that there exists N ∈ N such that
|a_n − a| ≤ 1/3 for all n ≥ N.

Hence, for n ≥ N we have
|a_n − a_{n+1}|
= |a_n − a + a − a_{n+1}|
≤ |a_n − a| + |a − a_{n+1}|
≤ 1/3 + 1/3
= 2/3

Since 2/3 < 1, and a_n takes only integer values, this means that a_n = a_{n+1}.

Since this holds for all n ≥ N, we must have a_N = a_{N+1} = a_{N+2} = …, which means that
a_n → a_N as n → ∞
(and hence a = a_N).
17
Q

Long term behaviour of Roulette

A

Lemma 7.4.2: Almost surely, a roulette player will eventually lose all their money.

We place a sequence of bets, C_{n-1} on game n.
The total profit/loss is (C•M)_n, where M is a supermartingale (so (C•M)_n is one too, by Theorem 7.1.1).

Start from k units of cash and play forever (after running out of money, all bets are C_n = 0). Let

W_n = cash after n plays
= k + (C•M)_n ≥ 0

(This is a supermartingale: check! It is ≥ 0 because the bets become C_n = 0 once the money is gone.)

Because W_n is a supermartingale, biased downwards,

0 ≤ E[W_n] ≤ E[W_0] = k
E|W_n| = E[W_n] ≤ k

so (W_n) is bounded in L^1

(k doesn't depend on n)

We can model k and (C•M)_n as integers (in units of the minimum stake).

*The 2 conditions for martingale convergence hold: supermartingale and bounded in L^1:

By martingale convergence there exists a RV W_∞ such that
W_n → W_∞ almost surely.

SO, almost surely, on
{ω ∈ Ω : W_n(ω) → W_∞(ω)}

(where each W_n(ω) is an integer ≥ 0)

W_n(ω) will become constant eventually (Lemma 7.4.1).

Because of how our game is designed, this only happens if we hit 0:

so, almost surely, W_n is eventually 0. A simulation sketch follows below.
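A minimal simulation of the wealth process W_n (unit bets on red; names illustrative); the loop ends almost surely because of the downward bias:

    import random

    def spins_until_ruin(k):
        # W_n: start with k units, bet 1 unit on red each spin until broke.
        wealth, spins = k, 0
        while wealth > 0:
            wealth += 1 if random.random() < 18 / 37 else -1
            spins += 1
        return spins

    print(spins_until_ruin(10))  # finite, almost surely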

18
Q

Long term behaviour of urn processes

Recall: polya urn

**important to understand the process fully

A

Start at t = 0 with 2 balls: 1 red, 1 black.

At each t = 1, 2, 3, … we:

  • draw a ball from the urn (uniformly at random), then replace it
  • add a new ball of the same colour

B_n = number of red balls at time n

M_n = B_n/(n + 2) = proportion of balls that are red
(n + 2 = total number of balls at time n)

From Section 4.2, (M_n) is a martingale.

E|M_n| ≤ 1 (since 0 ≤ M_n ≤ 1),
so (M_n) is bounded in L^1.

So we can use the MCT on (M_n):

There exists a RV M_∞ such that
M_n → M_∞ almost surely.

This tells us that as n increases the process will stabilize; it won't oscillate forever. A simulation sketch follows below.
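A minimal sketch of one run of the urn (function names are mine):

    import random

    def polya_run(n_steps):
        # Start with 1 red, 1 black; draw a ball, replace it, then add one
        # more ball of the same colour. path[n] = M_n = B_n / (n + 2).
        red, total = 1, 2
        path = [red / total]
        for _ in range(n_steps):
            if random.random() < red / total:
                red += 1
            total += 1
            path.append(red / total)
        return path

    # Each run settles down to its own limit (a.s. convergence); across
    # runs the limit varies: by Proposition 7.4.3 it is uniform on [0, 1].
    print(polya_run(1000)[-1])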

19
Q

PROPOSITION 7.4.3

consider urn process: polya process (M_n)

A

M_∞ has the uniform distribution on [0, 1].

(M_n) is complicated, but this result helps us.

*Rule of thumb: if n is large, then the fraction of red balls in our urn is approximately uniformly distributed on [0, 1]. Therefore, we can approximate a complicated object (our urn) with a much simpler object (a uniform random variable).

20
Q

M_∞ has the uniform distribution on [0, 1].

proving

A

Consider the distribution of M_∞.

Lemma 6.1.2 tells us that almost sure convergence implies convergence in distribution, so
M_n →d M_∞,
that is,
P[M_n ≤ x] → P[M_∞ ≤ x]

for all x.

21
Q

Lemma 7.4.5 distribution of B_n

polya process

A

For all k = 1, …, n + 1, it holds that P[B_n = k] = 1/(n + 1).

Proof:

Let A be the event that the first k balls drawn are red and the next j balls drawn are black. Then
P[A] = (1/2)(2/3)…(k/(k+1)) × (1/(k+2))(2/(k+3))…(j/(j+k+1))

Each factor is the probability of getting a red/black ball (as appropriate) on the corresponding draw, given the results of the previous draws. For example, 1/2 is the probability that the first draw results in a red ball, after which the urn contains 2 red balls and 1 black ball, so 2/3 is the probability that the second draw also results in a red ball, and so on. Multiplying out,

P[A] = (j! k!)/((j + k + 1)!)

Drawing k red balls and j black balls in the first j + k draws, but in a different order, would have the same probability: we would simply obtain the numerators in (7.6) in a different order. There are
(j + k) C k
possible different orders (i.e. we must choose k time-points, out of the j + k draws, at which to pick the red balls). Hence, the probability that we draw k red and j black in the first j + k draws is

(j + k)C(k) × (j! k!)/((j + k + 1)!)
= 1/(j + k + 1)

Set j = n − k to obtain the probability of drawing (and thus adding) k red balls within the first j + k = n draws. This gives P[B_n = k + 1] = 1/(n + 1) for all k = 0, 1, …, n. Hence
P[B_n = k] = 1/(n + 1) for all k = 1, 2, …, n + 1.
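An empirical check of Lemma 7.4.5 (illustrative code, not from the notes): after n draws, each value k = 1, …, n + 1 should appear with frequency about 1/(n + 1):

    import random
    from collections import Counter

    def sample_B(n):
        # B_n: number of red balls after n draw-and-add steps.
        red, total = 1, 2
        for _ in range(n):
            if random.random() < red / total:
                red += 1
            total += 1
        return red

    n, trials = 5, 60_000
    counts = Counter(sample_B(n) for _ in range(trials))
    for k in range(1, n + 2):
        print(k, counts[k] / trials)  # each close to 1/(n+1) = 1/6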

22
Q

Proof: [Of Proposition 7.4.3.]

M_∞ has the uniform distribution on [0, 1].

proving

A

Proof [of Proposition 7.4.3]: Now that we know the distribution of B_n, we can use (7.4) to find the distribution of M_n. Lemma 7.4.5 tells us that B_n is uniformly distributed on the set {1, …, n + 1}. Hence M_n is uniform on {1/(n+2), 2/(n+2), …, (n+1)/(n+2)}. This gives us that, for x ∈ (0, 1),

P[M_n ≤ x] = ⌊x(n + 2)⌋/(n + 1)

Here ⌊x(n + 2)⌋, the integer part of x(n + 2), is equal to the number of elements of {1/(n+2), 2/(n+2), …, (n+1)/(n+2)} that are ≤ x, and each such element has probability 1/(n + 1).

Since
x(n + 2) − 1 ≤ ⌊x(n + 2)⌋ ≤ x(n + 2),

we obtain that
(x(n + 2) − 1)/(n + 1)
≤ P[M_n ≤ x] ≤
x(n + 2)/(n + 1)

and the sandwich rule tells us that lim_{n→∞} P[M_n ≤ x] = x. By (7.5), this means that
P[M_∞ ≤ x] = x for all x ∈ (0, 1).
Therefore, M_∞ has a uniform distribution on (0, 1). This proves Proposition 7.4.3.

23
Q

Long term behaviour of Galton-Watson processes

A

Let (X^n_i), n, i ≥ 1, be i.i.d. non-negative integer-valued random variables with common distribution G. Define a sequence (Z_n) by Z_0 = 1 and

Z_{n+1} =
{X^{n+1}_1 + … + X^{n+1}_{Z_n}, if Z_n > 0
{0, if Z_n = 0

Then (Z_n) is the Galton-Watson process.

Writing µ = E[G],
M_n = Z_n/µ^n
is a martingale with mean E[M_n] = 1.

  • consider how Z_n behaves as n → ∞
  • if Z_N = 0 for some N ∈ N then Z_n = 0 for all n ≥ N (the process dies out)
  • a simulation sketch is given below

*MCT: M_n is a martingale with expected value 1 and, since M_n ≥ 0, we have E|M_n| = 1. Hence (M_n) is bounded in L^1 and by the martingale convergence theorem (Theorem 7.3.4) the almost sure limit
lim_{n→∞} M_n = M_∞
exists.

*We assume the offspring distribution G is not deterministic.

Cases: µ < 1, µ = 1, µ > 1.
(An unbiased process, µ = 1, still goes extinct.)
(Long-term survival/no extinction is possible only with an upward bias, i.e. each individual producing on average more than one child.)
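A minimal simulation sketch (the offspring distribution is chosen for illustration; function names are mine):

    import random

    def gw_sizes(n_gens, offspring):
        # Z_0 = 1; Z_{n+1} is the sum of Z_n i.i.d. offspring counts.
        # An empty sum gives 0, so extinction is automatically absorbing.
        Z = [1]
        for _ in range(n_gens):
            Z.append(sum(offspring() for _ in range(Z[-1])))
        return Z

    # Critical example (mu = 1): each individual has 0, 1 or 2 children
    # with equal probability.
    print(gw_sizes(50, lambda: random.choice([0, 1, 2])))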

24
Q

Lemma 7.4.6 Suppose that µ = 1.
for branching process

(on average each individual produces one child)

A

Suppose that µ = 1. Then P[Z_n dies out] = 1.

proof:
Here M_n = Z_n, so (Z_n) is itself a martingale. For as long as Z_n ≠ 0, Z_n will keep moving by at least ±1 (as G is not deterministic).

(Z_n) is integer valued, so the only way (Z_n) can converge almost surely is if Z_n → 0 a.s. and eventually equals 0 (Lemma 7.4.1). By the MCT, Z_n does converge a.s.; thus P(Z_n becomes extinct) = 1.

25
Q

Lemma 7.4.7 Suppose that µ < 1.

A

Suppose that µ < 1. Then P[Z_n dies out] = 1.

Sketch proof:
When µ = 1 the process becomes extinct. With µ < 1 there is even less chance to grow, so the process again becomes extinct:
the µ < 1 case can be bounded above by (coupled with) the µ = 1 case, almost surely.

26
Q

Lemma 7.4.8 Suppose that µ > 1 and σ^2 < ∞.

A

Suppose that µ > 1 and σ² < ∞. Then P[Z_n → ∞] > 0.

Proof: we want to show P(Z_n → ∞) > 0.

(If there is a positive chance that G is 0, then there is a chance of dying out; we only claim at least some chance of growing to infinity.)

First,
E[M_{n+1}²] = E[M_n²] + σ²/µ^{n+2}

Iterating,
E[M_{n+1}²] = 1 + Σ_{i=1}^n σ²/µ^{i+2}
≤ 1 + (σ²/µ²) Σ_{i=1}^∞ µ^{-i}

< ∞

(a geometric sum that converges because µ > 1), so (M_n) is bounded in L².

By the 2nd MCT,

E[M_n] → E[M_∞]
var(M_n) → var(M_∞)

*Because M_n is a martingale these expectations are all 1, so E[M_∞] = 1;
hence P(M_∞ > 0) > 0.

M_n = Z_n/µ^n, and µ > 1 so µ^n → ∞. If Z_n doesn't tend to ∞ then M_n → 0 (i.e. M_∞ = 0).**

So P(Z_n doesn't tend to infinity) < 1,

i.e. P(Z_n tends to infinity) > 0.

**If Z_n doesn't tend to infinity then (Z_n) becomes extinct (almost surely), and from extinction onwards Z_n/µ^n = 0. A simulation sketch follows below.
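A crude Monte Carlo sketch (the parameters are illustrative choices) estimating the survival probability in a supercritical case, µ = 1.25:

    import random

    def alive_after(n_gens, offspring):
        # Proxy for survival: Z_n > 0 after n_gens generations.
        z = 1
        for _ in range(n_gens):
            if z == 0:
                return False
            z = sum(offspring() for _ in range(z))
        return z > 0

    # Offspring 0, 1, 2 or 2 children with equal probability: mu = 1.25.
    trials = 2_000
    frac = sum(alive_after(20, lambda: random.choice([0, 1, 2, 2]))
               for _ in range(trials)) / trials
    print(frac)  # strictly between 0 and 1: survival has positive probability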

27
Q

branching process notes

A

Pick out the extreme possibilities: the process either explodes or becomes extinct.

When there is an upwards drift (µ > 1), there is a chance of reaching infinity, but it is not certain.

Otherwise (µ ≤ 1), extinction is certain.

28
Q

Long term behaviour of random walks:

taking S_n to be the simple symmetric random walk

A

Taking S_n to be the simple symmetric random walk: S_n will oscillate as n tends to infinity, and the magnitude of the largest oscillations will tend to infinity as n tends to infinity, almost surely.

S_n = Σ_{i=1}^n X_i

where the X_i are i.i.d. with
X_i =
{+1, w.p. 0.5
{−1, w.p. 0.5

The walk moves up and down as time passes; movements over a long period are jagged oscillations, in a very random way.

The oscillations get bigger. A simulation sketch follows below.
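A quick illustrative sketch tracking the running maximum and minimum of a symmetric walk; both keep growing, reflecting ever-larger oscillations:

    import random

    def running_extremes(n_steps):
        # Track the max and min of S_n along one simulated path.
        s, hi, lo = 0, 0, 0
        for _ in range(n_steps):
            s += random.choice([1, -1])
            hi, lo = max(hi, s), min(lo, s)
        return hi, lo

    print(running_extremes(100_000))  # both extremes grow as n grows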

29
Q

Long term behaviour of random walks: e asymmetric random walk

A

Next, let S_n be the asymmetric random walk, with p > q, so the walk drifts upwards. The asymmetric random walk also experiences oscillations growing larger and larger, but the drift upwards is strong enough that, in fact, S_n → ∞ almost surely as n → ∞. So, our picture of the long-term behaviour of the asymmetric random walk looks like:

jagged oscillations, drifting upwards, with bigger oscillations as time passes
*the drift will win, but the walk still oscillates

Of course, if p < q then S_n would drift downwards, and then S_n → −∞ almost surely.

In both cases, it is possible to show that S_n is not bounded in L^1 (even if we compensate for the drift), and the martingale convergence theorem does not apply. Different techniques, which we won't study, are needed here.
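A sketch of the upward drift (p = 0.6 is an arbitrary illustrative choice): by the law of large numbers S_n/n is close to p − q, which is why S_n → ∞ when p > q:

    import random

    def asymmetric_walk(n_steps, p=0.6):
        # Steps are +1 w.p. p and -1 w.p. q = 1 - p.
        return sum(1 if random.random() < p else -1 for _ in range(n_steps))

    n = 100_000
    print(asymmetric_walk(n) / n)  # near p - q = 0.2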