2 Wiener Process or Brownian motion Flashcards
RECAP
probability space
(Ω, F, P)
random variables (RVs)
functions X : Ω → ℝ which are F\B(ℝ)-measurable
this means
inverse image X⁻¹(A) ∈ F for every A ∈ B(ℝ)
recap
lemma: the Borel σ-algebra is generated by a collection of open half-lines
so measurability is equivalent to saying that the inverse image of every element of this generating class lies in F:
X⁻¹(A) ∈ F ∀ A ∈ { (−∞, a) : a ∈ ℝ }
{X < a} = {ω ∈ Ω : X(ω) < a} = X⁻¹((−∞, a))
is a subset of Ω, so it is an event
X is a RV iff {X < a} ∈ F for every a ∈ ℝ
RECAP: CONDITIONAL EXPECTATION
let G ⊂ F be a sub-σ-algebra of F
provided
‖X‖₁ = E[|X|] < ∞
then we can define a new RV,
the conditional expectation
E[X|G], which has 2 properties:
1) E[X|G] is G-measurable: meaning {E[X|G] > a} ∈ G ⊆ F for all a ∈ ℝ. In particular, any G-measurable RV is also F-measurable
2) E[1_A X] = E[1_A E[X|G]] ∀ A ∈ G
so the conditional expectation is itself a RV, measurable w.r.t. the σ-algebra G
Consider a fixed probability space
(Ω, F, P)
stochastic processes will
describe evolutions in time
time will be an added parameter alongside ω ∈ Ω
Consider:
probability space: tossing a coin n times
I'm interested in, say, the first outcome:
RV X
Ω = { (a₁, …, aₙ) : aᵢ ∈ {H, T} }
each outcome is of the form
(H, T, H, H, …, T)
and tells you what happens at each coordinate index
if we are interested only in, say, the second outcome, we need a function which outputs the second coordinate
Define the RV Xₖ which tells you what happened in the k-th toss: for ω ∈ Ω
Xₖ : Ω → {H, T} (or to ℝ after encoding)
X₁(ω) = X₁((a₁, …, aₙ)) = a₁
Xₙ(ω) = Xₙ((a₁, …, aₙ)) = aₙ
considering DISCRETE time as well:
Ω = { (a₁, …, aₙ) : aᵢ ∈ {0, 1} }
Define the RV Xₜ giving the t-th coordinate
Xₜ : Ω → {0, 1}
instead we consider a function of two variables
X_t(ω), i.e.
X : Ω × {1, …, n} → ℝ
with time variable t ∈ {1, …, n}
if I fix the time variable it becomes:
X(·, t) : Ω → ℝ
a random variable as a function on Ω
If I fix an ω ∈ Ω:
X(ω, ·) : {1, …, n} → ℝ
n choices in the time variable
(X₁(ω), …, Xₙ(ω))
if ω is fixed I get real numbers, a vector of coordinates (not a RV)
X₁(ω) = X₁((a₁, …, aₙ)) = a₁
Xₙ(ω) = Xₙ((a₁, …, aₙ)) = aₙ
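As a toy illustration (my own sketch, not from the lecture): the coordinate maps Xₖ of the coin-toss example can be written in a few lines; the names `omega` and `X` are illustrative only.

```python
import random

# One outcome ω in Ω = {H, T}^n: an n-tuple of coin tosses.
random.seed(0)
n = 5
omega = tuple(random.choice("HT") for _ in range(n))

def X(k, w):
    """X_k(ω) = a_k: the outcome of the k-th toss (1-indexed, as in the notes)."""
    return w[k - 1]

print(omega)
print(X(1, omega), X(n, omega))  # first and last coordinate of ω
```

Fixing k gives a map Ω → {H, T} (a random variable); fixing ω gives the whole vector (X₁(ω), …, Xₙ(ω)).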
remark: "to make my conscience clear"
X : Ω × [0, T] → ℝ
You can actually put a σ-algebra on this product set,
taking the collection 𝒜 of sets of the form
(a, b) × A with a, b ∈ [0, T], A ∈ F
and the σ-algebra generated by this collection:
σ(𝒜) =: B([0, T]) ⊗ F
It follows that we can fix t or ω and obtain measurable maps
Definition 2.1.1. Stochastic process
A stochastic process is a map X : Ω × [0, T] → ℝ which is F ⊗ B([0, T])-measurable.
Notice that a stochastic process is a function of two variables (ω, t) → X(ω, t).
(so a stochastic process is a collection of random variables, possibly uncountably many)
If we fix t ∈ [0, T],
the function X(·, t) : Ω → ℝ is F-measurable, hence it is a random variable.
That is, we can see a stochastic process as a collection of random variables X(t) indexed by t ∈ [0, T].
notation for a stochastic process:
(X(t))_{t∈[0,T]}
Conversely, if we fix ω ∈ Ω, then we have a function of time t → X(ω, t), which is called
the trajectory or path corresponding to the fixed ω.
continuous stochastic process
a stochastic process X is continuous if for all ω ∈ Ω the trajectory t → X(ω, t) is a continuous function of time.
From now on, unless otherwise indicated, we drop the ω dependence as usual.
(a weaker notion: continuity with probability 1, i.e.
P(t → Xₜ is continuous) = 1,
meaning the event {ω ∈ Ω : t → Xₜ(ω) is continuous} has probability 1)
note on stochastic processes
trajectories correspond to individual ω, and these are continuous
each outcome is a function not a number
Remark 2.1.2. Caratheodory functions
Functions Y : Ω × [0, T] → ℝ which satisfy the following
* for each ω ∈ Ω, the map t → Y(ω, t) is either left or right continuous
* for each t ∈ [0, T], the map ω → Y(ω, t) is F-measurable,
are called Caratheodory functions. It can be checked that Caratheodory functions are F ⊗ B([0, T])-measurable.
Definition 2.1.3. Wiener process
(also called Brownian motion; written W_t or B_t)
A continuous stochastic process
(W(t))_{t∈[0,T]} is called a Wiener process if
1. W(0) = 0
2. For all s < t, W(t) − W(s) ∼ N (0, |t − s|)
3. For all 0 ≤ t₁ < t₂ < … < tₙ ≤ T, the random variables
W(t₂) − W(t₁), …, W(tₙ ) − W(tₙ₋₁)
are independent.
summary: Brownian motion
(all trajectories start from the origin)
(increments, which are RVs themselves, have a Normal distribution with mean 0 and variance |t − s|)
(increments along disjoint time intervals are independent; this wouldn't be true if the intervals overlapped)
In summary:
Brownian motion is a stochastic process with continuous trajectories (continuous functions of time),
it starts from 0 with probability 1,
its increments are Gaussian,
and the distribution of the RV W_t is N(0, t).
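A quick simulation sketch (my own, with an assumed Euler-type discretisation): summing independent N(0, dt) increments approximates a Wiener path, and properties 1 and 2 can be checked numerically.

```python
import numpy as np

# Simulate Wiener paths on [0, T] by summing independent N(0, dt) increments.
rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 1000, 5000
dt = T / n_steps

# Increments ~ N(0, dt), independent across steps and paths (properties 2 and 3).
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
# Prepend W(0) = 0 (property 1) and take cumulative sums to get the paths.
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

print(abs(W[:, 0]).max())   # 0.0, all paths start at the origin
print(W[:, -1].mean())      # ≈ 0, since W(T) ~ N(0, T)
print(W[:, -1].var())       # ≈ T = 1
```

Each row of `W` is one trajectory t → W(ω, t); each column is one random variable W(t).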
Exercise 2.1.4. Let (W(t))_{t∈[0,T]} be a Wiener process,
let c > 0 and set
W~(t) = cW(t/c²) for
t ∈ [0, c²T].
Show that W~ is a Wiener process on [0, c²T].
W~(t) is a stochastic process
We need to check:
1. W~(0) = cW(0/c²) = cW(0) = 0
2. For all s < t, W~(t) − W~(s) ∼ N(0, |t − s|)
3. For all 0 ≤ t₁ < t₂ < … < tₙ ≤ c²T, the random variables
W~(t₂) − W~(t₁), …, W~(tₙ) − W~(tₙ₋₁)
are independent.
Notes from the LECTURE:
1) W~(0) = 0 follows from W(0) = 0, a property of the Wiener process.
2) We verify the increment has a normal distribution: W~(t) − W~(s) = cW(t/c²) − cW(s/c²) = c[W(t/c²) − W(s/c²)]. By definition we know W(t/c²) − W(s/c²) ∼ N(0, t/c² − s/c²).
Multiplying by c, the mean is multiplied by the constant and the variance by the constant squared, so
W~(t) − W~(s) ∼ N(0, c²(t/c² − s/c²)) = N(0, t − s).
3) Looking at the increments, W~(tₖ₊₁) − W~(tₖ) = c[W(tₖ₊₁/c²) − W(tₖ/c²)]: the times t₁/c² < t₂/c² < … < tₙ/c² are still increasing, so these are increments of W over disjoint intervals, and multiplying by a constant doesn't change independence. We conclude they are independent.
In conclusion, we have verified all the properties of a Wiener process.
Can you find a deterministic function of time with this property, that if you scale it you get the same object?
f : (0, ∞) → ℝ with
f(x) = c f(x/c)
if you choose
f(x) = x: scaling gives c · (x/c) = x ✓
what about dividing by c² instead, f(x) = c f(x/c²)?
f(x) = √x works: c √(x/c²) = c · √x / c = √x
(the same square-root scaling as the Wiener process)
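A Monte Carlo sanity check of Exercise 2.1.4 (my own sketch; the choices c = 2 and the time points are arbitrary): W~(t) = cW(t/c²) should again have Var(W~(t)) = t and N(0, t − s) increments.

```python
import numpy as np

rng = np.random.default_rng(1)
c, n_paths = 2.0, 20000
t1, t2 = 0.5, 1.0  # two times in [0, c^2 T]

# Sample W at the rescaled times t1/c^2 < t2/c^2 exactly:
W_s = rng.normal(0.0, np.sqrt(t1 / c**2), n_paths)          # W(t1/c^2)
incr = rng.normal(0.0, np.sqrt((t2 - t1) / c**2), n_paths)  # independent increment
W_t = W_s + incr                                            # W(t2/c^2)

# The rescaled process W~(t) = c W(t/c^2):
Wtil_s, Wtil_t = c * W_s, c * W_t
print(Wtil_s.var())              # ≈ t1 = 0.5
print((Wtil_t - Wtil_s).var())   # ≈ t2 - t1 = 0.5
```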
Exercise 2.1.5. Let (W(t))_{t∈[0,T]} be a Wiener process and let c ∈ (0, T). Set W~(t) = W(c + t) − W(c) for t ∈ [0, T − c].
Show that W~ is a Wiener process on [0, T − c].
solution: very similar to the exercise above,
verifying the three properties again:
1) W~(0) = W(0 + c) − W(c) = W(c) − W(c) = 0
2) For s < t, the increment W~(t) − W~(s) = W(c+t) − W(c) − [W(c+s) − W(c)] = W(c+t) − W(c+s)
∼ N(0, (c+t) − (c+s))
This has normal distribution N(0, t − s).
3) For increasing times t₁ < t₂ < … < tₙ, compute the increments at these times:
W~(t₂) − W~(t₁), …, W~(tₙ) − W~(tₙ₋₁)
e.g. W~(t₂) − W~(t₁) = W(c+t₂) − W(c) − [W(c+t₁) − W(c)] = W(c+t₂) − W(c+t₁), …
these are still increments of W over disjoint intervals and so are independent.
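The corresponding check for Exercise 2.1.5 (a sketch under assumed parameters c = 0.3, t = 0.6): the shifted process W~(t) = W(c + t) − W(c) should start at 0 and have N(0, t) marginals.

```python
import numpy as np

rng = np.random.default_rng(2)
c, t, n_paths = 0.3, 0.6, 20000

# Sample W(c), then extend by an independent increment to get W(c + t):
W_c = rng.normal(0.0, np.sqrt(c), n_paths)          # W(c) ~ N(0, c)
W_ct = W_c + rng.normal(0.0, np.sqrt(t), n_paths)   # W(c + t)

Wtil = W_ct - W_c   # the shifted process W~(t) = W(c + t) - W(c)
print(Wtil.mean())  # ≈ 0
print(Wtil.var())   # ≈ t = 0.6
```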
Def 2.1.6. FILTRATION
A family F := (Ƒₜ)_{t∈[0,T]} of σ-algebras Ƒₜ ⊂ Ƒ is called a filtration if Ƒₛ ⊂ Ƒₜ for s < t.
The filtration F is called complete if Ƒ₀ contains all the null sets of Ƒ.
A filtration is called right continuous if
Ƒₜ = ∩_{s>t} Ƒₛ
(knowledge is always increasing, never decreasing)
fix a filtration
F := (Ƒₜ)_{t∈[0,T]}
Def 2.1.7. Adapted to the filtration
A stochastic process is called adapted to the filtration F if X(t) is Ƒₜ-measurable for each t ∈ [0, T].
Notice that every stochastic process X is adapted to its own filtration, that is, the family of σ-algebras given by Ƒˣ(t) = σ(X(s), s ≤ t).
Def 2.1.8 F-Wiener process
We will say that (W(t))_{t∈[0,T]} is an F-Wiener process if
1. (W(t))_{t∈[0,T]} is a Wiener process
2. (W(t))_{t∈[0,T]} is adapted to the filtration F
3. for all 0 ≤ s ≤ t ≤ T, the random variable W(t) − W(s) is independent of Ƒₛ.
If we have an F-Wiener process then it is also a martingale
DEF 2.1.9 F-martingale
A stochastic process (X(t))ₜ∈[0,T] is an F-martingale if
1. (X(t))ₜ∈[0,T] is F-adapted,
2. E[|X(t)|] < ∞ for all t ∈ [0,T],
3. E[X(t)|Ƒ_s] = X(s) for all s ≤ t.
EX 2.1.10
Let (W(t))_{t∈[0,T]} be an F-Wiener process. Show that it is a martingale with respect to F.
verifying the properties for a martingale:
1) By the definition of an F-Wiener process, W is adapted to F.
2) Let t ∈ [0, T]. Using Hölder's inequality, and that W(t) ∼ N(0, t) since W is a Wiener process, we have
(recall: if X has a normal distribution then its moments are determined by its mean and variance)
E[|W(t)|] ≤ (E[|W(t)|²])^{1/2} = t^{1/2} < ∞
3) For 0 ≤ s < t:
E[W(t)|Ƒₛ] = E[W(t) − W(s) + W(s)|Ƒₛ]
= E[W(t) − W(s)|Ƒₛ] + E[W(s)|Ƒₛ]
=: (⋆)
* W(t) − W(s) is independent of Ƒₛ, as W is an F-Wiener process, so its conditional expectation equals its plain expectation
* W(s) is Ƒₛ-measurable, as W is adapted (again, because W is an F-Wiener process), so E[W(s)|Ƒₛ] = W(s)
(⋆) = E[W(t) − W(s)] + W(s) = 0 + W(s) = W(s),
since W(t) − W(s) ∼ N(0, t − s) has mean 0.
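The key step above, E[W(t) − W(s)|Ƒₛ] = E[W(t) − W(s)] = 0, can be illustrated numerically (my own sketch; s = 0.4, t = 1 are arbitrary): the increment has mean 0 and is uncorrelated with the past value W(s).

```python
import numpy as np

rng = np.random.default_rng(3)
s, t, n_paths = 0.4, 1.0, 50000

W_s = rng.normal(0.0, np.sqrt(s), n_paths)        # W(s)
incr = rng.normal(0.0, np.sqrt(t - s), n_paths)   # W(t) - W(s), independent of W(s)
W_t = W_s + incr

print(incr.mean())                    # ≈ 0: increments have mean zero
print(np.corrcoef(W_s, incr)[0, 1])   # ≈ 0: increment carries no info about the past
```

So the best prediction of W(t) given Ƒₛ is just the current value W(s), the martingale property.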
Exercise 2.1.11. Let (W(t))_{t∈[0,T]} be an F-Wiener process. Show that ((W(t))² − t)_{t∈[0,T]} is a martingale with respect to F.
If W(t) is Ƒₜ-measurable, is X(t) := (W(t))² − t also Ƒₜ-measurable, i.e. adapted to F?
Let (W(t))_{t∈[0,T]} be an F-Wiener process. We show X is a martingale, where X = W² − t.
(i) (CHECK X IS F ADAPTED)
Let t ∈ [0, T ]. By assumption W (t) is Ƒ_t-measurable. So since the map x → x² − t is
continuous and thus Borel, the composition (W (t))² − t is also Ƒ_t -measurable.
Thus X_t is F adapted
(ii) (CHECK E[|X_t|] is finite)
Let t ∈ [0, T]. Then, using the triangle inequality and that W(t) ∼ N(0, t) (so E[(W(t))²] = Var(W(t)) = t), we have
E[|(W(t))² − t|] ≤ E[(W(t))²] + t = t + t = 2t
< ∞.
(iii) (We want to show the martingale property.)
For 0 ≤ s < t:
E[(W(t))² − t|Ƒₛ] = E[(W(t))² − 2W(t)W(s) + (W(s))² + 2W(t)W(s) − (W(s))² − t|Ƒₛ]
= E[(W(t) − W(s))²|Ƒₛ] + E[2W(t)W(s)|Ƒₛ] − E[(W(s))²|Ƒₛ] − t
=: (⋆)
Note the following:
* W(t) − W(s) is independent of Ƒₛ, therefore
E[(W(t) − W(s))²|Ƒₛ] = E[(W(t) − W(s))²] = t − s,
as W(t) − W(s) ∼ N(0, t − s).
* W(s) is Ƒₛ-measurable, hence
E[2W(t)W(s)|Ƒₛ] = 2W(s)E[W(t)|Ƒₛ] = 2W(s)W(s) = 2(W(s))²,
since F-Wiener processes are F-martingales.
* W(s) is Ƒₛ-measurable, hence E[(W(s))²|Ƒₛ] = (W(s))².
Substituting these into (⋆), we get
(⋆) = t − s + 2(W(s))² − (W(s))² − t = (W(s))² − s.
That is:
E[(W(t))² − t|Ƒₛ] = (W(s))² − s,
so (W(t))² − t is indeed a martingale.
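A numerical illustration of Exercise 2.1.11 (my own sketch; s = 0.4, t = 1 are arbitrary): X(t) = W(t)² − t has constant mean 0, and the martingale difference X(t) − X(s) has mean 0 and is uncorrelated with the past value W(s).

```python
import numpy as np

rng = np.random.default_rng(5)
s, t, n_paths = 0.4, 1.0, 100000

W_s = rng.normal(0.0, np.sqrt(s), n_paths)                 # W(s)
W_t = W_s + rng.normal(0.0, np.sqrt(t - s), n_paths)       # W(t)

X_s = W_s**2 - s
X_t = W_t**2 - t
diff = X_t - X_s   # = (W(t)-W(s))^2 - (t-s) + 2 W(s)(W(t)-W(s))

print(X_t.mean())                      # ≈ 0 = E[X_s]: mean is constant in time
print(diff.mean())                     # ≈ 0: martingale differences have mean 0
print(np.corrcoef(diff, W_s)[0, 1])    # ≈ 0: difference uncorrelated with the past
```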
Theorem 2.1.12 (Doob's inequality)
Let (M(t))_{t∈[0,T]} be a continuous martingale. Then, for any
p ∈ (1, ∞) we have
E[sup_{t∈[0,T]} |M(t)|ᵖ] ≤ (p/(p−1))ᵖ E[|M(T)|ᵖ]
missed out?
Recall properties
of conditional expectation:
If Y is G-measurable and Z is a RV,
what is
E[YZ|G]?
First: if we have a RV Y measurable w.r.t. the σ-algebra G,
then the conditional expectation of Y given G is
E[Y|G] = Y
when Y is independent of G it is also true that
E[Y|G] = E[Y]
(intuition: if G is, say, the information "it is raining",
and G doesn't influence Y at all, conditioning changes nothing)
If Y is G-measurable and Z is a RV,
then
E[YZ|G] = Y E[Z|G]
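A toy discrete illustration of "taking out what is known" (my own sketch, assuming G is generated by a finite partition of a uniform finite Ω): a G-measurable Y is constant on each block, so it factors out of the blockwise averages.

```python
import numpy as np

rng = np.random.default_rng(6)
# Ω = {0,...,7}, all outcomes equally likely; G is generated by a 2-block partition.
blocks = [np.array([0, 1, 2, 3]), np.array([4, 5, 6, 7])]

# Y is G-measurable: constant on each block of the partition.
Y = np.array([2.0, 2.0, 2.0, 2.0, -1.0, -1.0, -1.0, -1.0])
Z = rng.normal(size=8)  # an arbitrary RV on Ω

def cond_exp(X, blocks):
    """E[X|G] for G generated by a finite partition: the blockwise average."""
    out = np.empty_like(X)
    for b in blocks:
        out[b] = X[b].mean()
    return out

lhs = cond_exp(Y * Z, blocks)   # E[YZ|G]
rhs = Y * cond_exp(Z, blocks)   # Y E[Z|G]
print(np.allclose(lhs, rhs))    # True
```

Within each block Y is a known constant, so it passes through the average, exactly the property E[YZ|G] = Y E[Z|G].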