Chapter 4: Stochastic processes Flashcards

1
Q

Definition 4.0.1 A stochastic process

A

A stochastic process (in discrete time) is a sequence (S_n)_{n=0}^∞ or (X_n)_{n=0}^∞ of random variables. We think of n as ‘time’.

e.g. a sequence of i.i.d. random variables is a stochastic process, a martingale is a stochastic process, and a Markov chain is a stochastic process.

2
Q

filtrations:

A
  • every stochastic process has a filtration:

For any stochastic process (X_n), the natural or generated filtration of (X_n) is the filtration given by
F_n = σ(X_1, X_2, . . . , X_n).

  • if a filtration isn't stated, we assume this one
  • F_n depends on the behaviour of the process up until time n only
3
Q

Random walks

A

Random walks are stochastic processes that ‘walk around’ in space. We think of a particle that moves between the vertices of Z. At each time step, the particle chooses at random to move either up or down, for example from x to x + 1 or to x − 1.

4
Q

Symmetric random walk

A

Let (X_i)_{i=1}^∞ be a sequence of i.i.d. random variables with

P[X_i = 1] = P[X_i = −1] = 1/2.

The symmetric random walk is the stochastic process

S_n = Σ_{i=1}^n X_i,

with S_0 = 0 by convention.
We have already shown that (S_n) is a martingale with respect to its generated filtration.

We see it is symmetric because S_n and −S_n have the same distribution.
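The definition above can be sketched as a short simulation (a minimal sketch; the function name and parameters are illustrative, not from the text):

```python
import random

def symmetric_walk(n, rng):
    """Return a sample path S_0, S_1, ..., S_n of the symmetric random walk."""
    path = [0]  # S_0 = 0 by convention
    for _ in range(n):
        path.append(path[-1] + rng.choice([1, -1]))  # each X_i is +1 or -1 with prob 1/2
    return path

# Averaging the endpoint S_100 over many independent walks gives a value near 0,
# consistent with constant expectation E[S_n] = E[S_0] = 0.
rng = random.Random(42)
mean_end = sum(symmetric_walk(100, rng)[-1] for _ in range(2000)) / 2000
```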

5
Q

sample path

A

A sample path of (S_n), which means a sample of the sequence S_0, S_1, S_2, . . ., might look like a step function:

[graph of time against values in R: dots joined by horizontal lines over the intervals [0, 1), [1, 2), . . .]

For the symmetric walk no drift is shown, i.e. the path stays relatively close to 0.

6
Q

Asymmetric random walk

A

Let (X_i)_{i=1}^∞ be a sequence of i.i.d. random variables. Let p + q = 1 with p, q ∈ [0, 1], p ≠ q, and suppose that

P[X_i = 1] = p,  P[X_i = −1] = q.

The asymmetric random walk is the stochastic process

S_n = Σ_{i=1}^n X_i.

* p ≠ q means neither can be 1/2, i.e. no fairness.
* If p > q then each step is more likely to go up than down.

(S_n) is not a martingale. By Lemma 2.2...,

E[S_n] = Σ_{i=1}^n E[X_i] = Σ_{i=1}^n (1·p + (−1)·q) = n(p − q) ≠ 0 = E[S_0],

so its expectation is not constant, and it is not a martingale.

e.g. a graph as before but with upward drift: a tendency to move up on average (when p > q).

Lemma 3.3.6 confirms that (S_n) is not a martingale.

7
Q

Restoring fairness:

A

Lemma 3.3.6 confirms that Sn is not a martingale.

However, the process

M_n = S_n − n(p − q)

is a martingale: the term n(p − q) compensates for the drift and ‘restores fairness’.

SUBTRACT OFF THE DRIFT:

Checking that (M_n) is a martingale:

(M1) Since X_i ∈ mF_n for all i ≤ n, by Proposition 2.2.6 we have S_n − n(p − q) ∈ mF_n.

So (M_n) is adapted.

(M2) Since |X_i| ≤ 1 we have |S_n| ≤ n (as S_n is a sum of n such terms), so by the triangle inequality

|M_n| = |S_n − n(p − q)| ≤ |S_n| + n|p − q| ≤ n + n|p − q| < ∞,

and hence M_n is bounded, so M_n ∈ L^1.

(M3) Using S_{n+1} = X_{n+1} + S_n,

E[M_{n+1} | F_n] = E[S_{n+1} − (n + 1)(p − q) | F_n]
= E[X_{n+1} + S_n | F_n] − (n + 1)(p − q)
(by linearity; the conditional expectation of a constant is that constant)
= E[X_{n+1} | F_n] + E[S_n | F_n] − (n + 1)(p − q)
= E[X_{n+1}] + S_n − (n + 1)(p − q)
(as X_{n+1} is independent of F_n, and S_n ∈ mF_n)
= (p − q) + S_n − (n + 1)(p − q)
(E[X_{n+1}] = p − q; the subtracted term compensates for the drift)
= S_n − n(p − q)
= M_n.

Therefore E[M_{n+1} | F_n] = M_n, and (M_n) is a martingale.
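The drift-compensation argument can also be checked numerically (a sketch under illustrative parameters; `compensated_walk_end` is not a name from the text):

```python
import random

def compensated_walk_end(n, p, rng):
    """Sample M_n = S_n - n*(p - q) for an asymmetric walk with up-probability p."""
    q = 1 - p
    s = sum(1 if rng.random() < p else -1 for _ in range(n))  # S_n
    return s - n * (p - q)  # subtract off the drift

# With p = 0.7 the walk S_n drifts upward (E[S_n] = 0.4 n), but the compensated
# process M_n should have sample mean near E[M_0] = 0.
rng = random.Random(1)
mean_M = sum(compensated_walk_end(50, 0.7, rng) for _ in range(4000)) / 4000
```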

8
Q

URN PROCESSES:

Pólya urn

A

There is just a single urn (i.e. bag) that contains balls of two different colours.

At time 0, an urn contains 1 black ball and 1 red ball.

Then, for each n = 1, 2, . . . , we generate the state of the urn at time n by doing the following:

  1. Draw a ball from the urn, look at its colour, and return this ball to the urn.
  2. Add a new ball of the same colour as the drawn ball.

So, at time n (which means: after the nth iteration of the above steps is completed) there are n + 2 balls in the urn.

This process is known as the Pólya urn.
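The two steps above can be sketched as a simulation (a minimal sketch; the function name is an illustrative choice):

```python
import random

def polya_urn(n, rng):
    """Run n draw-and-replace steps of the Polya urn, starting from 1 red and
    1 black ball; return the number of red balls at time n."""
    red, total = 1, 2
    for _ in range(n):
        if rng.random() < red / total:  # the drawn ball is red with prob red/total
            red += 1  # return the ball and add one more of the same colour
        total += 1
    return red

# At time n there are n + 2 balls, always at least 1 red and at least 1 black.
b = polya_urn(100, random.Random(7))
```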

9
Q

Notes on urn process:

A

Let B_n = the number of red balls in the urn at time n, and note that

B_0 = 1.

Set (F_n) to be the filtration generated by (B_n).

(the number of black balls is 2 + n − B_n)

10
Q

Is B_n a martingale?

A

Martingales should increase as much as they decrease, but B_n only ever increases.

The reason is that over time we put more and more red balls into the urn, so the number of red balls drifts upwards over time:

E[B_{n+1} | F_n] > B_n, so (B_n) is not a martingale.

E[B_{n+1} | F_n]
= E[B_{n+1} 1{(n+1)th draw is red} + B_{n+1} 1{(n+1)th draw is black} | F_n]
= E[B_{n+1} 1{(n+1)th draw is red} | F_n] + E[B_{n+1} 1{(n+1)th draw is black} | F_n]
(by linearity)
= E[(B_n + 1) 1{(n+1)th draw is red} | F_n] + E[B_n 1{(n+1)th draw is black} | F_n]

(splitting into the cases where the (n+1)th draw is red or black; B_n ∈ mF_n, but 1{(n+1)th draw is black} is not F_n-measurable, as it involves randomness after time n)

= (B_n + 1) E[1{(n+1)th draw is red} | F_n] + B_n E[1{(n+1)th draw is black} | F_n]

(the probability of black given the state at time n is (n + 2 − B_n)/(n + 2) = 1 − B_n/(n + 2), and the probability of red given the state at time n is B_n/(n + 2))

= (B_n + 1) [B_n/(n + 2)] + B_n (1 − [B_n/(n + 2)])
= [B_n(n + 3)]/(n + 2)
> B_n

(as (n + 3)/(n + 2) > 1)

11
Q

Is B_n a submartingale?

A

We do have B_n ∈ mF_n, and since 1 ≤ B_n ≤ n + 2 we also have B_n ∈ L^1, so (B_n) is a submartingale, but (B_n) is not a martingale:

M1: B_n ∈ mF_n.
M2: 1 ≤ B_n ≤ n + 2, so B_n is bounded and B_n ∈ L^1.
M3: the submartingale version holds: E[B_{n+1} | F_n] ≥ B_n.

12
Q

related martingale to urn process B_n:

A

Let

M_n = B_n/(n + 2).

Then M_n is the proportion of balls in the urn that are red, at time n. Note that M_n ∈ [0, 1].

We can think of the extra factor n + 2, which increases over time, as an attempt to cancel out the upwards drift of B_n: the calculation below shows E[M_{n+1} | F_n] = B_n/(n + 2) = M_n, so (M3) is satisfied and (M_n) is a martingale.

NOTE:

E[M_{n+1} | F_n] = E[M_{n+1} 1{(n+1)th draw is red} | F_n] + E[M_{n+1} 1{(n+1)th draw is black} | F_n]

= E[(B_n + 1)/(n + 3) 1{(n+1)th draw is red} | F_n] + E[B_n/(n + 3) 1{(n+1)th draw is black} | F_n]

= [(B_n + 1)/(n + 3)] E[1{(n+1)th draw is red} | F_n] + [B_n/(n + 3)] E[1{(n+1)th draw is black} | F_n]

= [(B_n + 1)/(n + 3)] [B_n/(n + 2)] + [B_n/(n + 3)] [1 − B_n/(n + 2)]

= [(B_n^2 + B_n)/((n + 2)(n + 3))] + [((n + 2)B_n − B_n^2)/((n + 2)(n + 3))]

= [(n + 3)B_n]/((n + 2)(n + 3))

= B_n/(n + 2) = M_n.

We can think of M_n = B_n/(n + 2) as a compensation mechanism: B_n tends to increase, and we compensate for this increase by dividing by n + 2.
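The compensation can be seen numerically: the proportion of red balls has sample mean near M_0 = 1/2 at every time (a minimal sketch, reusing the urn rules; the function name is illustrative):

```python
import random

def red_proportion(n, rng):
    """Simulate the Polya urn for n steps and return M_n = B_n/(n + 2)."""
    red, total = 1, 2
    for _ in range(n):
        if rng.random() < red / total:
            red += 1
        total += 1
    return red / total  # total == n + 2 at this point

# E[M_n] = E[M_0] = 1/2 for every n, even though individual runs can drift
# far towards mostly-red or mostly-black.
rng = random.Random(3)
mean_prop = sum(red_proportion(60, rng) for _ in range(4000)) / 4000
```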

13
Q

Fairness: note on martingale and urn process

A

It is clear that the symmetric random walk is fair; at all times it is equally likely to move up as
down. The asymmetric random walk is not fair, due to its drift (4.2), but once we compensate
for drift in (4.3) we do still obtain a martingale.
The urn process requires more careful thought. For example, we might wonder:

Suppose that the first draw is red. Then, at time n = 1 we have two red balls and one
black ball. So, the chance of drawing a red ball is now 2/3. How is this fair?!

* The quantity which is a martingale is M_n, the proportion of red balls in the urn.
* Secondly, suppose that the first draw is indeed red. So, at n = 1 we have 2 red and 1 black, giving a proportion of 2/3 red and 1/3 black. The expected proportion of red balls after the next (i.e. second) draw is

(2/3)(3/4) + (1/3)(2/4) = (6 + 2)/12 = 2/3,

which is exactly the proportion of red balls at n = 1.

  • Note that it is equally likely that, on the first go, you'd pick out a black. So, starting
    from n = 0 and looking forwards, both colours have equally good chances of increasing their own
    numbers. In fact, if we were to pretend, right from the start, that black was red and red was
    black, we would see the same urn process. This type of fairness is known as symmetry. We've
    seen that B_n tends to increase (because we keep adding more balls), and we can think of M_n as
    a way of discovering the fairness ‘hiding’ inside of B_n.
  • The fact that M_n is a martingale does not prevent us from (sometimes) ending up with many
    more red balls than black, or vice versa. It just means that, when viewed in terms of M_n,
    there is no bias towards red or black inherent in the rules of the game.

Being a martingale is about fairness in the future, not fairness in the past.

14
Q

A branching process

A

The Galton-Watson process is parametrized by a random variable G, which is known as the offspring distribution. It is simplest to understand the Galton-Watson process by drawing a
tree, for example:

[tree diagram with different numbers of offspring per vertex]
Z_0 = 1, Z_1 = 2, Z_2 = 5, Z_3 = 6, . . .

The one individual in generation 0 has 2 offspring, which have a total of 5 offspring (one has 3, the other has 2), which have a total of 6 offspring (0, 2, 1, 0, 3), . . .

The Galton-Watson process is the process Zn, where Zn is the number of dots in
generation n.

15
Q

Formally define Galton-Watson process:

A

Let X_i^n, where n, i ≥ 1, be i.i.d. nonnegative integer-valued random variables with common distribution G.

Define a sequence (Z_n) by Z_0 = 1 and

Z_{n+1} = X_1^{n+1} + . . . + X_{Z_n}^{n+1}   if Z_n > 0,
Z_{n+1} = 0                                  if Z_n = 0.

Then Z is the Galton-Watson process. The random variable X_i^n represents the number of children of the ith parent in the nth generation.

Note that if Z_n = 0 for some n, then for all m > n we also have Z_m = 0.
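The recursion can be sketched directly; the offspring distribution used here (uniform on {0, 1, 2}) is an illustrative assumption, not one fixed by the text:

```python
import random

def galton_watson(generations, offspring, rng):
    """Return the path Z_0, Z_1, ..., Z_generations with Z_0 = 1, where
    offspring(rng) draws one copy of the offspring distribution G."""
    path = [1]
    for _ in range(generations):
        z = path[-1]
        # Z_{n+1} is the sum of Z_n i.i.d. copies of G, and 0 is absorbing.
        path.append(sum(offspring(rng) for _ in range(z)) if z > 0 else 0)
    return path

# Illustrative G: 0, 1 or 2 children, each with probability 1/3.
path = galton_watson(10, lambda r: r.randrange(3), random.Random(5))
```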

16
Q

Galton-Watson process and martingales

*does this process have constant expectation?

A

The Galton-Watson process isn't (in general) a martingale.

Let µ = E[G], and let F_n = σ(X_i^m ; i ∈ N, m ≤ n). In general, (Z_n) is not a martingale, because

E[Z_{n+1}] = E[X_1^{n+1} + . . . + X_{Z_n}^{n+1}]

= E[ Σ_{k=1}^∞ (X_1^{n+1} + . . . + X_k^{n+1}) 1{Z_n = k} ]
(by considering cases, conditioning on Z_n, and 6.8…)

= Σ_{k=1}^∞ E[X_1^{n+1} + . . . + X_k^{n+1}] E[1{Z_n = k}]
(the i.i.d. X_i^{n+1} are independent of F_n, while the indicator is F_n-measurable)

= Σ_{k=1}^∞ (E[X_1^{n+1}] + . . . + E[X_k^{n+1}]) P[Z_n = k]
(for i.i.d. variables, the expectation of the sum is the sum of expectations; E[1{Z_n = k}] = P[Z_n = k])

= Σ_{k=1}^∞ k µ P[Z_n = k]

= µ Σ_{k=1}^∞ k P[Z_n = k]

= µ E[Z_n].

Here, we use that the X_i^{n+1} are independent of F_n, but Z_n (and hence also 1{Z_n = k}) is F_n-measurable. We also need to justify exchanging the infinite sum and the expectation.

Lemma 3.3.6 tells us that if (M_n) is a martingale then E[M_n] = E[M_{n+1}]. But if µ < 1 we see that E[Z_{n+1}] < E[Z_n] (downwards drift), and if µ > 1 then E[Z_{n+1}] > E[Z_n] (upwards drift), so the expectation is constant only when µ = 1.

17
Q

Galton-Watson process and martingales: compensate for the drift

A

(By the tower property)

We compensate for the drift and obtain a martingale. More precisely, we will show that

M_n = Z_n/µ^n

is a martingale.

  • by Lemma 3.3.6, if (Z_n) is a martingale then µ = 1; otherwise we compensate
  • by induction, E[Z_n] = µE[Z_{n-1}] = . . . = µ^n E[Z_0] = µ^n

(M1)
We have M_0 = 1 ∈ mF_0, and if M_n ∈ mF_n then from (4.6) we have that M_{n+1} ∈ mF_{n+1}. Hence, by induction, M_n ∈ mF_n for all n ∈ N.

(M2)
From (4.7) we have E[Z_{n+1}] = µE[Z_n], so E[Z_n] = µ^n for all n. Hence E[M_n] = 1, and since M_n ≥ 0 this gives M_n ∈ L^1.

(M3)

Lastly, we repeat the calculation that led to (4.7), but now with conditional expectation in place of E. The first few steps are essentially the same, and we obtain

E[Z_{n+1} | F_n] = Σ_{k=1}^∞ E[Z_{n+1} 1{Z_n = k} | F_n]

= Σ_{k=1}^∞ E[(X_1^{n+1} + . . . + X_k^{n+1}) 1{Z_n = k} | F_n]

= Σ_{k=1}^∞ 1{Z_n = k} E[X_1^{n+1} + . . . + X_k^{n+1} | F_n]
(Z_n is F_n-measurable, so the indicator can be taken out)

= Σ_{k=1}^∞ 1{Z_n = k} E[X_1^{n+1} + . . . + X_k^{n+1}]
(the X_i^{n+1} are independent of F_n)

= Σ_{k=1}^∞ k µ 1{Z_n = k}
(k copies of µ = E[G], each independent of F_n)

= µ Σ_{k=1}^∞ k 1{Z_n = k}
(non-zero only for the k with Z_n = k)

= µ Z_n.

Here we use that Z_n is F_n-measurable to take out what is known, and then use that X_i^{n+1} is independent of F_n. Dividing by µ^{n+1} gives E[M_{n+1} | F_n] = µZ_n/µ^{n+1} = Z_n/µ^n = M_n, as required.
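This can be checked by simulation (a sketch with an illustrative offspring distribution, not one fixed by the text): taking P[G=0] = 0.2, P[G=1] = 0.5, P[G=2] = 0.3 gives µ = 1.1, and the sample mean of M_n = Z_n/µ^n stays near 1.

```python
import random

def gw_population(n, probs, rng):
    """Z_n for a Galton-Watson process in which a parent has k children with
    probability probs[k]."""
    z = 1
    ks = list(range(len(probs)))
    for _ in range(n):
        z = sum(rng.choices(ks, weights=probs)[0] for _ in range(z))
    return z

mu = 0 * 0.2 + 1 * 0.5 + 2 * 0.3  # mu = E[G] = 1.1, so Z_n drifts upward
rng = random.Random(11)
# Sample mean of M_8 = Z_8 / mu^8; should be close to E[M_8] = E[M_0] = 1.
mean_M = sum(gw_population(8, [0.2, 0.5, 0.3], rng) / mu**8 for _ in range(5000)) / 5000
```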

18
Q

expectation of indicator functions

A

The expectation of an indicator function is the probability of the event: E[1_A] = P[A].
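A Monte Carlo sketch of this fact (the event and sample size are illustrative choices):

```python
import random

# E[1_A] = P[A]: the average of an indicator over many trials estimates P(A).
rng = random.Random(2)
# Event A = {a fair die shows 6}, so P[A] = 1/6.
indicators = [1 if rng.randrange(1, 7) == 6 else 0 for _ in range(60000)]
estimate = sum(indicators) / len(indicators)  # close to 1/6
```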