2 Wiener Process or Brownian motion Flashcards
RECAP
probability space
RVs
F\B(R) measurable
prob space
(Ω,F,P)
RVs
functions X : Ω → R which are F\B(R) measurable
this means
inverse image X⁻¹(A) ∈ F for all A ∈ B(R)
recap
lemma: since the sigma algebra B(R) is generated by a collection of sets
(e.g. the open half-lines),
{X<a}
being measurable is equivalent to saying
the inverse image of every element of this generating class lies in F:
X⁻¹(A) ∈ F ∀ A ∈ { (−∞,a) : a ∈ R }
{X<a} = {ω∈Ω : X(ω)<a}
a subset of Ω, so it is an event
X is a RV if {X<a} ∈ F for every a
RECAP CONDITION EXPECTATION
G ⊂ F be a sub-σ-algebra of F
provided
‖X‖_{L¹} = E[|X|] < ∞
then we can define new RV,
the conditional expectation
E[X|G] which has 2 properties:
1) E[X|G] is G-measurable: meaning {E[X|G] > a} ∈ G ⊆ F for all a in R. In particular, a G-measurable RV is automatically F-measurable
2) E[1_A X] = E[1_A E[X|G]] ∀ A∈G
cond exp is itself a RV measurable wrt sigma algebra G
Consider a fixed prob space
(Ω,F,P)
stochastic processes will
describe evolutions in time
time will be an added parameter to w in Ω
Consider:
probability space tossing a coin n times
I'm interested in the first outcome:
RV X
Ω={ (a_1,…,a_n): a_i∈{H,T}}
each outcome of the form
(H,T,H,H,…T)
tells you what happens at each coordinate index
if we are interested only in say second outcome we will need to state a function which outputs the second coord
Define RV X₁ which tells you what happened in the first toss: for ω in Ω
Xₙ : Ω → {H,T} (or → R)
X₁(ω) = X₁((a₁,…,aₙ)) = a₁
Xₙ(ω) = Xₙ((a₁,…,aₙ)) = aₙ
considering DISCRETE time as well:
Ω = { (a₁,…,aₙ) : aᵢ ∈ {0,1} }
Define RV Xₙ giving the nth outcome
Xₙ : Ω → {0,1}
instead we consider a function of two variables
X_t(ω)
X : Ω × {1,…,n} → R
time variable in {1,…,n}
if I fix the time variable it becomes:
X : Ω → R
a random variable as a function on Ω
If I fix an ω in Ω
X : {1,…,n} → R
n choices in the time variable
X_t(ω) = (X₁(ω),…,Xₙ(ω)) as t runs over {1,…,n}
if ω is fixed I get real numbers, giving a vector corresponding to the coordinates (not a RV)
X₁(ω) = X₁((a₁,…,aₙ)) = a₁
Xₙ(ω) = Xₙ((a₁,…,aₙ)) = aₙ
remark : “to make my conscious clear”
X : [0,T] × Ω → R
You can actually put a sigma algebra on this product set,
taking the collection 𝒜 of sets of the form
(a,b) × A, with a,b ∈ [0,T] and A ∈ F
the sigma algebra generated by this collection:
σ(𝒜) =: B([0,T]) ⊗ F
It follows that we can fix t or w
Definition 2.1.1. A stochastic process
A stochastic process is a map X : Ω × [0, T] → R which is F ⊗ B([0, T])-measurable.
Notice that a stochastic process is a function of two variables (ω, t) → X(ω, t).
(a stochastic process is a collection of random vars, uncountably many?)
A stochastic process is a map X : Ω × [0, T] → R which is F ⊗ B([0, T])-measurable.
If we fix t ∈ [0, T]
function X(·, t) : Ω → R is F-measurable, hence it is a random
variable.
That is, we can see a stochastic process as a collection of random variables X(t) indexed
by t ∈ [0, T].
notation stochastic process
(X(t))_{t∈[0,T]}
A stochastic process is a map X : Ω × [0, T] → R which is F ⊗ B([0, T])-measurable.
If we fix ω ∈ Ω, then we have a function of time t ↦ X(ω, t), which will be called
the trajectory or path corresponding to the fixed ω.
if we fix time then we have a rv corresponding to this
continuous stochastic process
a stochastic process X is continuous if for all ω ∈ Ω the trajectory t ↦ X(ω, t) is a continuous function of time.
From now on, unless otherwise indicated, we drop the ω dependence as usual
(if with probability 1 the map is continuous:
P(t ↦ Xₜ is continuous) = 1
i.e. the event {ω ∈ Ω : t ↦ Xₜ(ω) is continuous} has probability 1)
note on stochastic processes
trajectories correspond to individual ω, and these are continuous
each outcome is a function not a number
Remark 2.1.2. Functions Y : Ω × [0, T] → R which satisfy the following…
Caratheodory functions
Functions Y : Ω × [0, T] → R which satisfy the following
* for each ω ∈ Ω, the map t → Y (ω, t) is either left or right continuous
* for each t ∈ [0, T], the map ω → Y (ω, t) is F-measurable,
are called Caratheodory functions. It can be checked that Caratheodory functions are F ⊗B([0, T])-
measurable.
Definition 2.1.3. Wiener process
Brownian motion
Definition 2.1.3. A continuous stochastic process
W_t or B_t
(W(t))t∈[0,T] is called Wiener process if
1. W(0) = 0
2. For all s < t, W(t) − W(s) ∼ N (0, |t − s|)
3. For all 0 ≤ t₁ < t₂ < … < tₙ ≤ T, the random variables
W(t₂) − W(t₁), …, W(tₙ ) − W(tₙ₋₁)
are independent.
summary brownian motion
(all trajectories start from origin)
(increments, which are RVs themselves, have normal distribution with mean 0 and variance |t − s|)
(increments along disjoint time intervals are independent; this wouldn't be true if the intervals were not disjoint)
In summary:
Brownian motion is a stochastic process with continuous trajectories (continuous functions of time)
starts from 0 with probability 1
the distribution of increments is Gaussian
The distribution of the RV W_t is N(0, t)
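The defining properties above can be checked empirically. A minimal simulation sketch (our own illustration, not from the notes; all names and constants are ours), assuming NumPy: build paths from independent N(0, dt) increments and test properties 1–3 on the samples.

```python
import numpy as np

# Sketch (ours): approximate Wiener paths on [0, T] by cumulative sums of
# independent N(0, dt) increments, then check the defining properties.
rng = np.random.default_rng(42)

T, n_steps, n_paths = 1.0, 200, 10_000
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

# 1. W(0) = 0 for every path.
assert np.all(W[:, 0] == 0.0)

# 2. W(t) - W(s) ~ N(0, t - s): sample variance of W(T) - W(T/2) near T/2.
incr = W[:, -1] - W[:, n_steps // 2]
print(round(incr.var(), 2))  # close to 0.5

# 3. Increments over disjoint intervals are independent (here: uncorrelated).
a = W[:, n_steps // 2] - W[:, 0]
print(round(np.corrcoef(a, incr)[0, 1], 2))  # close to 0
```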
Exercise 2.1.4. Let (W(t))_{t∈[0,T]} be a Wiener process,
let c > 0 and set
W~(t) = cW(t/c²) for
t ∈ [0, c²T].
Show that W~ is a Wiener process on [0, c²T].
W~(t) is a stochastic process
We need to check:
1. W~(0)=cW(0/c²) =c W(0)= 0
2. For all s < t, W~(t) − W~(s) ∼ N(0, |t − s|)
3. For all 0 ≤ t₁ < t₂ < … < tₙ ≤ c²T, the random variables
W~(t₂) − W~(t₁), …, W~(tₙ) − W~(tₙ₋₁)
are independent.
Notes…
LECTURE:
1) W~(0) = 0 due to the corresponding property of the Wiener process
2) we verify it has a normal distribution: W~(t) − W~(s) = cW(t/c²) − cW(s/c²) = c[W(t/c²) − W(s/c²)]; by definition W(t/c²) − W(s/c²) ∼ N(0, (t − s)/c²)
multiplying by c, the mean is multiplied by the constant and the variance by the constant squared, so
W~(t) − W~(s) ∼ N(0, t − s)
3) looking at the increments: W~(tᵢ₊₁) − W~(tᵢ) = c[W(tᵢ₊₁/c²) − W(tᵢ/c²)]; the times tᵢ/c² still form an increasing sequence, so these are increments of W over disjoint intervals, and multiplying by the constant c does not change independence, so they are independent
in conclusion we have verified all properties of a Wiener process
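A quick Monte Carlo sanity check of step 2 (our own sketch, assuming NumPy; the constants are arbitrary choices): the increment of W~(t) = cW(t/c²) should have variance t − s.

```python
import numpy as np

# Sketch (ours): W(t/c^2) - W(s/c^2) ~ N(0, (t - s)/c^2) by definition of W;
# multiplying by c should give variance t - s, as in the exercise.
rng = np.random.default_rng(1)

c, s, t, n = 2.0, 0.3, 1.0, 50_000
base_incr = rng.normal(0.0, np.sqrt((t - s) / c**2), size=n)
scaled_incr = c * base_incr  # samples of W~(t) - W~(s)

print(round(scaled_incr.var(), 2))  # close to t - s = 0.7
```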
Can you find a function of time which has this property? That if you scale it you get the same object:
f : (0, ∞) → R
with f(x) = c f(x/c):
if you choose
f(x) = x, then c · (x/c) = x, so this works
what about if you divide by c² instead, f(x) = c f(x/c²)?
f(x) = √x works: c√(x/c²) = c · √x/c = √x
Exercise 2.1.5. Let (W (t))_t∈[0,T ] be a Wiener process and let c ∈ (0, T ). Set W~(t) = W (c + t) − W(c) for t∈[0,T −c].
Show that W~ is a Wiener process on [0,T −c].
solution: Very similar to the exercise above.
verifying properties again
1) W~(0) = W(0+c) − W(c) = W(c) − W(c) = 0
2) For s < t the increments: W~(t) − W~(s) = W(c+t) − W(c) − (W(c+s) − W(c)) = W(c+t) − W(c+s)
∼ N(0, (c+t) − (c+s))
This has normal distribution N(0, t − s)
3) for increasing times compute the increments for these times
t₁ < t₂ < … < tₙ
W~(t₂) − W~(t₁), …, W~(tₙ) − W~(tₙ₋₁)
= W(c+t₂) − W(c+t₁), …, W(c+tₙ) − W(c+tₙ₋₁)
these are still increments of W over disjoint intervals and so are independent
def 2.1.7 FILTRATION
A family F := (Ƒₜ)ₜ∈[0,T ] of σ-algebras Ƒₜ ⊂ Ƒ is called filtration if Ƒₛ ⊂ Ƒₜ for s < t.
The filtration F is called complete if Ƒ_0 contains all the null sets of Ƒ.
A filtration is called right continuous if
Ƒₜ = ∩_{s>t} Ƒₛ for all t
(knowledge always increasing never decreasing)
fix a filtration
F := (Ƒₜ)ₜ∈[0,T ]
Def 2.1.7
adapted to the filtration
A stochastic process X is called adapted to the filtration F if X(t) is Ƒₜ-measurable for each t ∈ [0, T].
Notice that every stochastic process X is adapted to its own filtration, that is, the family of σ-algebras given by Ƒˣ(t) = σ(X(s),s ≤ t).
Def 2.1.8 F-Wiener process
We will say that (W(t))_{t∈[0,T]} is an F-Wiener process if
1. (W (t))_t∈[0,T ] is a Wiener process
2. (W (t))_t∈[0,T ] is adapted to the filtration F
3. for all 0 ≤ s ≤ t ≤ T, the random variable W(t) − W(s) is independent from Ƒ_s.
If we have an F-Wiener process then it is a martingale
DEF 2.1.9 F-martingale
A stochastic process (X(t))ₜ∈[0,T] is an F-martingale if
1. (X(t))ₜ∈[0,T] is F-adapted,
2. E[|X(t)|] < ∞ for all t ∈ [0,T],
3. E[X(t)|Ƒ_s] = X(s) for all s ≤ t.
EX 2.1.10
Let (W (t))_t∈[0,T ] be a F-Wiener process. Show that it is a martingale with respect to F.
W_t is an F-Wiener process; then this process is also a martingale
verifying the properties for martingale
1) By definition, an F-Wiener process is adapted to F.
2) For t ∈ [0, T], using Hölder's inequality and that, since W is a Wiener process, W(t) ∼ N(0, t), we have
(recall that the moments of a normal RV can be computed from its density)
E[|W(t)|] ≤ (E[|W(t)|²])^{1/2} = t^{1/2} < ∞
3) For 0 ≤ s < t:
E[W (t)|Fs] = E[W (t) − W (s) + W (s)|Ƒ_s]
= E[W (t) − W (s)|Ƒ_s] + E[W (s)|Ƒ_s]
=: (⋆).
W (t) − W (s) is independent of F_s , as W is an F-Wiener process
* W(s) is Ƒ_s-measurable, as W is adapted (again, because W is an F-Wiener process). so expected values aren’t affected
(⋆) = E[W(t) − W(s)] + W(s) = W(s),
as W(t) − W(s) ∼ N(0, t − s) has mean 0.
Exercise 2.1.11. Let (W (t))t∈[0,T ] be an F-Wiener process. Show that ((W (t))^2 − t)t∈[0,T ] is a martingale with respect to F.
If W_t is Ƒₜ-measurable, will X_t = (W_t)² − t also be adapted to F?
Let (W(t))_{t∈[0,T]} be an F-Wiener process. We show that X is a martingale, where X(t) = (W(t))² − t.
(i) (CHECK X IS F ADAPTED)
Let t ∈ [0, T ]. By assumption W (t) is Ƒ_t-measurable. So since the map x → x² − t is
continuous and thus Borel, the composition (W (t))² − t is also Ƒ_t -measurable.
Thus X_t is F adapted
(ii) (CHECK E[|X_t|] is finite)
Let t ∈ [0, T ]. Then (using the triangle inequality and that W (t) ∼ N (0, t)), we have
E[|(W(t))² − t|] ≤ E[(W(t))²] + t = t + t = 2t
< ∞,
by considering the variance E[(W(t))²] = t.
(iii) (We want to show the martingale property …)
For 0 ≤ s < t:
E[(W(t))² − t|Ƒ_s] = E[(W(t))² − 2W(t)W(s) + (W(s))² + 2W(t)W(s) − (W(s))² − t|Ƒ_s] = E[(W (t) − W (s))² |Ƒ_s] + E[2W (t)W (s)|Ƒ_s] − E[(W (s))² |Ƒ_s] − t
=: (⋆)
Note the following:
* W(t)−W(s) is independent of Ƒ_s, therefore
E[(W (t) − W (s))²|F_s] = E[(W (t) − W (s))²]
as W (t) − W (s) ∼ N (0, t − s):
=t−s
- W (s) is Ƒ_s-measurable, hence
E[2W (t)W (s)|Ƒ_s] = 2W (s)E[W (t)|Ƒ_s]
since F-Wiener processes are F-martingales
= 2W(s)·W(s)
= 2(W(s))²
* W(s) is Ƒ_s-measurable, hence E[(W(s))²|Ƒ_s] = (W(s))²
Substituting these into (⋆), we get
(⋆) = t − s + 2(W(s))² − (W(s))² − t
= (W(s))² − s.
That is:
E[(W(t))² − t|Ƒ_s] = (W(s))² − s,
so ((W(t))² − t)
is indeed a martingale
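The identity E[(W(t))² − t | Ƒ_s] = (W(s))² − s can be illustrated by conditioning on a fixed value W(s) = w and sampling the independent increment (a Monte Carlo sketch of ours, assuming NumPy; the values of w, s, t are arbitrary):

```python
import numpy as np

# Sketch (ours): conditionally on W(s) = w, we have W(t) = w + Z with
# Z ~ N(0, t - s) independent of F_s, so
# E[W(t)^2 - t | W(s) = w] should equal w^2 - s.
rng = np.random.default_rng(7)

s, t, w, n = 0.4, 1.0, 0.8, 200_000
W_t = w + rng.normal(0.0, np.sqrt(t - s), size=n)

lhs = np.mean(W_t**2 - t)  # Monte Carlo estimate of the conditional expectation
rhs = w**2 - s             # martingale prediction
print(round(lhs, 2), round(rhs, 2))  # both close to 0.24
```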
Theorem 2.1.12 (Doob’s inequality)
Let (M (t))t∈[0,T ] be a continuous martingale. Then, for any
p ∈ (1,∞) we have
E[sup_{t∈[0,T]} |M(t)|ᵖ] ≤ (p/(p−1))ᵖ E[|M(T)|ᵖ]
missed out?
Recall property
for conditional expectation
If Y is g measurable and Z is a RV
then
E[YZ|g]
If we have RV Y measurable wrt sigma algebra g
then conditional expectation of Y given g
E[Y|g]= Y
when Y is independent of g, we also have:
E[Y|g] = E[Y]
(e.g. if g carries the information "it is raining"
and this information doesn't influence Y at all)
If Y is g measurable and Z is a RV
then
E[YZ|g] = Y·E[Z|g]
Exercise
Let (W(t))_{t∈[0,T]} be an F-Wiener process. Define another process Y_t = (W_t)³ − 3t·W_t. Show that Y
is a martingale with respect to F.
“One we need to practice”
similar to previous
Definition 2.3.1
MODIFICATION
for 2 stochastic processes
Let X, X˜ : Ω × [0, T] → R be stochastic processes. We call X˜ a modification of X if for all t ∈ [0, T] we have
P(X˜(t) = X(t))= 1
E.G
Consider a RV R with continuous distribution.
X_t=0 for all t
X~_t
is defined by
X~_t =
{ 0 if t ≠ R
{ 1 if t = R
These are modifications of each other.
Because for each t, P(X_t = X~_t) = P(t ≠ R) = 1:
R is a RV with continuous distribution, so P(R = t) = 0.
Def 2.3.2 α-Hölder continuous
α in (0, 1]
Let α ∈ (0, 1]. A function f : [0, T] → R is called α-Hölder continuous if there exists a constant C𝒻 such that |f(t) − f(s)| ≤ C𝒻 |t − s|ᵃ for all t, s ∈ [0, T].
The collection of all α-Hölder continuous functions on [0, T] will be denoted by Cᵃ
The closer α to 1 the more regular function is
if α=1: differentiable at almost every point
An example of a Hölder continuous function:
f(t) = t
satisfies
|f(t) − f(s)| = |t − s| for every s and t in the domain
f is 1-Hölder continuous
(satisfies the Lipschitz condition)
alpha-holder funct?
f(t)=|t|
higher exponent is more regular
e.g.
if f(t)=|t|
if t,s >0
|f(t)-f(s)|= |t-s|
if α = 1 (Lipschitz): differentiable at almost every point, though not necessarily a C¹ function (e.g. |t|)
Now, let's consider two cases:
If t ≥ 0 and s ≥ 0:
|t| = t and |s| = s, so | |t| − |s| | = |t − s|.
If t < 0 and s < 0:
|t| = −t and |s| = −s, so | |t| − |s| | = |(−t) − (−s)| = |−(t − s)| = |t − s|.
In both cases |f(t) − f(s)| = |t − s|; for mixed signs the reverse triangle inequality gives ≤ |t − s|, so f is 1-Hölder continuous.
alpha-holder funct?
f(t)=sqrt(t)
need to show that: there exists a constant C𝒻>0 such that
|f(t) − f(s)| ≤ C𝒻|t − s|^α for all t, s ∈ [0, T].
e.g.
if f(t)=sqrt(t)
if t,s >0
|f(t)-f(s)|= |sqrt(t)-sqrt(s)|
≤ C|t − s|^{1/2}
(squaring both sides:
|√t − √s|² = t + s − 2√(ts) ≤ t − s for s ≤ t, since t + s − 2√(ts) ≤ t − s ⟺ s ≤ √(ts) ⟺ s ≤ t
(C|t − s|^{1/2})² = C²|t − s|, so C = 1 works)
α = 1/2
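A small deterministic check of this bound (our own sketch, assuming NumPy): on a grid of pairs (s, t) in [0, 1]², the 1/2-Hölder inequality for the square root holds with constant C = 1.

```python
import numpy as np

# Sketch (ours): verify |sqrt(t) - sqrt(s)| <= |t - s|**0.5 on a grid,
# i.e. the square root is 1/2-Hölder with constant C = 1.
grid = np.linspace(0.0, 1.0, 401)
t, s = np.meshgrid(grid, grid)        # all pairs (s, t)
lhs = np.abs(np.sqrt(t) - np.sqrt(s))
rhs = np.sqrt(np.abs(t - s))
print(bool(np.all(lhs <= rhs + 1e-12)))  # True
```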
example of an α-Hölder continuous function
F(t) = tᵃ
|F(t) − F(s)| = |tᵃ − sᵃ| ≤ C_𝒻 |t − s|ᵃ for all t, s ∈ [0, T]
easy if α = 0.5
otherwise requires some calculus
F belongs to Cᵃ
(is an element of Cᵃ)
example of a Hölder continuous function:
the sqrt function is C^{0.5}:
|f(t) − f(s)| ≤ C · √|t − s|
sample path
Of wiener process
trajectory of process
Theorem 2.3.3 (Kolmogorov).
continuity …
there exists a **continuous modification**
Let X : Ω × [0, T] → R be a process such that there exist positive
constants α, K and β such that
E[|X(t) − X(s)|^α] ≤ K|t − s|^{1+β}
for all s, t ∈ [0, T].
Then there exists a **continuous modification**
X˜ : Ω × [0, T] → R of X
such that for all γ < β/α, we have P(X˜ ∈ C^γ) = 1.
If X is continuous then it holds that
P(X(t) = X˜(t), ∀t ∈ [0, T]) = 1
and for all γ < β/α
P(X ∈ C^γ) = 1
REMARK
Theorem 2.3.3 (Kolmogorov).
continuity …
there exists a **continuous modification **
If X was a deterministic function, the expectation doesn't do anything, so
E[|X(t) − X(s)|^α] = |X(t) − X(s)|^α ≤ K|t − s|^{1+β}
raising both sides to the power 1/α:
|X(t) − X(s)| ≤ K^{1/α}|t − s|^{(1+β)/α}
(1+β)/α could be > 1
thus if something is α-Hölder continuous with exponent > 1
it has to be constant
suppose we have a function f with
|f(t) − f(s)| ≤ C_f |t − s|
Kolmogorov thm meaning
For every t
the P(X(t)=X~(t))=1?
Proof is technical so skipped
thm will help see properties of trajectories of wiener process
thm quantifies trajectory continuity
{1+β}
means exponent >1
Applying Kolmogorov thm to wiener process
can't apply Kolmogorov's theorem directly with α = 2: E[|W(t) − W(s)|²] = |t − s| gives β = 0, not > 0; so we use higher moments
We can apply this to see that the Wiener process is γ-Hölder continuous for all γ < 1/2.
Indeed, since W(t) − W(s) ∼ N (0, |t − s|) for s < t, we have that
moments: normal distribution with mean 0 and variance t-s
E[|W(t) − W(s)|²ᵐ] = (1/√(2π|t − s|)) ∫_{−∞}^{∞} |x|²ᵐ exp(−x²/(2|t−s|)) dx
computing these moments and applying Kolmogorov's theorem, we see the Wiener process has a continuous modification which is γ-Hölder continuous for all γ < 1/2
trajectories of BM are continuous, but we quantify how continuous they are
differentiable from the right and left but maybe not at origin e.g |x|
square root funct deriv goes to infinity as x go to 0
sqrt funct similar to BM:
has the same scaling
if BM w_t
cW(t/c^2) this is also BM
BM behaves like sqrt does at 0
if f is bounded and α-Hölder continuous
then it is β-Hölder continuous for any β < α
(bounded using another constant;
do as an exercise by splitting up powers)
Considering continuity of functions
If we consider functions such as
sin
cos
absolute value
a trajectory of Brownian motion
they are continuous
square root function
which ones are more regular?
cos/sin trig
as there is no point at which you can't differentiate them
if we consider
absolute value
and square root
their differentials/derivatives
abs: -1 and 1 except for at 0
sqrt: 0.5x^-0.5 approaches infinity as we get closer to 0
the sqrt function has things in common with Brownian motion:
it can be scaled
Take a BM W(t)
and scale:
cW(t/c²) is also a Brownian motion
we will see more rigorously that at every point BM behaves like the sqrt function behaves at 0
RECALL A function is α-Hölder continuous if
Let α ∈ (0, 1]. A function f : [0, T] → R is called α-Hölder continuous if there exists a constant C𝒻>0 such that
|f(t) − f(s)| ≤ C𝒻|t − s|^α for all t, s ∈ [0, T].
The collection of all
α-Hölder continuous functions on [0, T] will be denoted by C^α.
the higher/closer to 1 α the more regular a function is
α = 1 implies differentiable at almost every point
Suppose that f is bounded and α-Hölder continuous;
then it is also β-Hölder continuous for all β ≤ α
TRUE OR FALSE
TRUE
show by using that |f(s) − f(t)| = |f(s) − f(t)|^{β/α} · |f(s) − f(t)|^{1−β/α}
Kolmogorov’s continuity Criterion
(Theorem 2.3.3 )
If you have stochastic process X_t s.t
there exists a K:
E[|X(t) − X(s)|^α]
≤ K|t − s|^{1+β}
for all s, t, ∈ [0, T].
β,α>0
we will assume: β<α
Now if we have this, we can conclude that with probability 1:
map t→X(t) ∈Cˠ for all Ɣ< β/α
(Ɣ-holder continuous)
Note: if X was a deterministic function we would get exponent (1+β)/α, which is greater than β/α, so there is a trade-off.
Allowing the function to be stochastic, we check the condition only in expectation, and conclude γ-Hölder continuity (with probability 1) only for γ < β/α: the price of stochasticity is the loss of 1/α in the exponent.
Kolmogorov’s continuity Criterion
there exists a K:
E[|X(t) − X(s)|^α]
≤ K|t − s|^{1+β}
WHAT HAPPENS IF X WAS DETERMINISTIC
the expectation does nothing,
E[|X(t) − X(s)|^α]
≤ K|t − s|^{1+β}
satisfied becomes
|X(t) − X(s)|^α
≤ K|t − s|^{1+β}
becomes, raising both sides to the power 1/α,
|X(t) − X(s)| ≤ K^{1/α}|t − s|^{(1+β)/α}
i.e. a ((1+β)/α)-Hölder continuous function
(1+β)/α can be something greater than 1
If something is α-Hölder continuous with exponent α greater than 1 then it has to be constant
suppose we have
a function f with α = 3/2
|f(t) − f(s)| ≤ C_f|t − s|³/²
I claim function f is constant:
take any two such points s < t
I can take a partition into n pieces of length
|t−s|/n
and write
tₖ = s + k|t−s|/n
t₀ = s
tₙ = t
note that:
|tₖ − tₖ₋₁| = |k|t−s|/n − (k−1)|t−s|/n|
= |t−s|/n
|f(t) − f(s)| =
|Σₖ₌₁ⁿ [(f(tₖ)-f(tₖ₋₁)]|
(by triangle inequality)
≤ Σₖ₌₁ⁿ |[(f(tₖ)-f(tₖ₋₁)]|
(by the Hölder condition)
≤ C_f Σₖ₌₁ⁿ |tₖ − tₖ₋₁|³/²
= C_f Σₖ₌₁ⁿ (|t−s|/n)³/²
= C_f |t−s|³/² (1/n)³/² · n = C_f |t−s|³/² n^(−1/2)
choosing n larger and larger (smaller and smaller pieces), this converges to 0, thus
0≤|f(t) − f(s)|≤0
and function is constant
That's why we only talk about Hölder continuous functions with exponent between 0 and 1
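The key quantity in the argument is the telescoped bound C_f|t − s|^{3/2} n^{−1/2}. A tiny numeric sketch (ours; the values of C_f and |t − s| are hypothetical) showing it vanish as the partition is refined:

```python
# Sketch (ours): for a 3/2-Hölder f, summing the Hölder bound over an
# n-piece partition of [s, t] gives C_f * gap**1.5 * n**-0.5,
# which shrinks to 0 as n grows -- forcing |f(t) - f(s)| = 0.
C_f, gap = 1.0, 1.0  # hypothetical Hölder constant and |t - s|

bounds = [C_f * gap**1.5 * n**-0.5 for n in (1, 100, 10_000)]
print(bounds)  # decreasing toward 0
```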
We will apply Kolmogorov's theorem to see that the Wiener process is γ-Hölder continuous for all γ < 1/2.
For BM W(t):
E[|W(t)-W(s)|²]
= |t − s|
since W(t) − W(s) ∼ N (0, |t − s|) for s < t,
its the variance
Looking at the form we want, we need an exponent 1 + β; so instead look at higher moments: for m > 1,
E[|W(t) − W(s)|²ᵐ]
= ∫_{−∞}^{∞} (1/√(2π|t − s|)) |x|²ᵐ exp(−x²/(2|t−s|)) dx
from the Gaussian distribution
(density function of W(t) − W(s), times |x|²ᵐ, as this is the expectation of the RV to the power 2m)
Then we have:
= (|t−s|ᵐ/√(2π|t − s|)) ∫_{−∞}^{∞} (|x|²ᵐ/|t−s|ᵐ) exp(−x²/(2|t−s|)) dx
by the change of variables u = x/√|t−s| we have
= Cₘ |t−s|ᵐ
thus
E[|W(t) − W(s)|²ᵐ]≤ Cₘ |t-s|ᵐ
m is arbitrary, so I can choose any m > 1
The exponent alpha = 2m
beta =m-1
γ < β/α = (m−1)/(2m)
true for all m, for m close to 1 this is close to 0
for large m this is close to 1/2
so
γ<1/2
so BM almost behaves like a square root (1/2-Hölder)
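The moment identity E[|W(t) − W(s)|²ᵐ] = Cₘ|t − s|ᵐ can be spot-checked by Monte Carlo (our own sketch, assuming NumPy); for m = 2 the Gaussian fourth moment gives C₂ = 3.

```python
import numpy as np

# Sketch (ours): for W(t) - W(s) ~ N(0, h) with h = |t - s|, check
# E[|W(t) - W(s)|^4] / h^2 ~ C_2 = 3 (the Gaussian fourth-moment constant).
rng = np.random.default_rng(3)

h, n = 0.2, 1_000_000
dW = rng.normal(0.0, np.sqrt(h), size=n)

print(round(np.mean(dW**4) / h**2, 1))  # close to 3.0
```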
SHOW THAT
= (|t−s|ᵐ/√(2π|t − s|)) ∫_{−∞}^{∞} (|x|²ᵐ/|t−s|ᵐ) exp(−x²/(2|t−s|)) dx
by the change of variables u = x/√|t−s| (so dx = √|t−s| du) we have
= (|t−s|ᵐ/√(2π)) ∫_{−∞}^{∞} u²ᵐ exp(−u²/2) du
but this integral only depends on m, thus is a constant
which depends only on m:
= Cₘ |t−s|ᵐ
(density function of W(t)-W(S))
f(x) = (1 / sqrt(2 * pi * |t - s|)) * exp(-x^2 / (2 * |t - s|))
True or false?
Brownian motion W
P(W∈ Cˠ)=1 for all Ɣ< 1/2
TRUE
prev example
We can also show that P(W ∈ C^β) = 0 for all β > 1/2
A bit technical for now
TRUE OR FALSE
BM W cannot be continuously differentiable
TRUE cannot be continuously differentiable
shown using the concept of quadratic variation
Theorem 2.3.4 (Quadratic Variation).
Let s < t and let (tᵢⁿ)ᵢ₌₁ᵐ⁽ⁿ⁾, for n ∈ N, be a sequence of partitions of [s, t]
such that maxᵢ |tᵢ₊₁ⁿ − tᵢⁿ| → 0 as n → ∞. Then,
Σᵢ₌₁ᵐ⁽ⁿ⁾⁻¹ |W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|² → (t − s),
in L₂(Ω), as n → ∞
(take the increments of BM between neighbouring partition points, square them, and sum)
let (tᵢⁿ)ᵢ₌₁ ᵐ⁽ⁿ⁾, for n ∈ N be a sequence of partitions of [s, t]
such that max ᵢ≤ₘ₍ₙ₎
|tᵢ₊₁ⁿ - tᵢⁿ| → 0 as n → ∞.
ie
first partition from s to t
t₀¹,t₁¹,t₂¹,t₃¹,t₄¹…
t₀²,t₁²,t₂²,t₃²,t₄²..
I take partitions such that the maximal distance between neighbouring points converges to 0
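Before the proof, the statement can be seen on a simulated path (our own sketch, assuming NumPy): the squared-increment sums over finer and finer partitions of [0, 1] settle near t − s = 1.

```python
import numpy as np

# Sketch (ours): quadratic variation of one simulated Wiener path on [0, 1],
# computed over coarser and finer partitions; it approaches t - s = 1.
rng = np.random.default_rng(5)

n = 2**18
dW = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))

for step in (2**10, 2**4, 1):  # coarse -> fine partitions
    incr = W[::step][1:] - W[::step][:-1]
    print(round(float(np.sum(incr**2)), 2))  # tends to 1
```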
PROOF
Theorem 2.3.4 (Quadratic Variation).
Let s < t and let (tᵢⁿ)ᵢ₌₁ᵐ⁽ⁿ⁾, for n ∈ N, be a sequence of partitions of [s, t]
such that maxᵢ |tᵢ₊₁ⁿ − tᵢⁿ| → 0 as n → ∞. Then,
Σᵢ₌₁ᵐ⁽ⁿ⁾⁻¹ |W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|² → (t − s),
in L₂(Ω), as n → ∞
This means that convergence holds in the L₂(Ω) norm, that is
E[ (Σᵢ₌₁ᵐ⁽ⁿ⁾⁻¹ |W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|² − (t − s))² ]
= E[ (Σᵢ₌₁ᵐ⁽ⁿ⁾⁻¹ (|W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|² − (tᵢ₊₁ⁿ − tᵢⁿ)))² ]
= Σᵢ₌₁ᵐ⁽ⁿ⁾⁻¹ E[ (|W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|² − (tᵢ₊₁ⁿ − tᵢⁿ))² ]
≤ 4 Σᵢ₌₁ᵐ⁽ⁿ⁾⁻¹ E[ |W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|⁴ + (tᵢ₊₁ⁿ − tᵢⁿ)² ]
(using E[|W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|⁴] = 3(tᵢ₊₁ⁿ − tᵢⁿ)², the Gaussian fourth moment)
≤ N Σᵢ₌₁ᵐ⁽ⁿ⁾⁻¹ (tᵢ₊₁ⁿ − tᵢⁿ)²
≤ N maxᵢ (tᵢ₊₁ⁿ − tᵢⁿ) · Σᵢ₌₁ᵐ⁽ⁿ⁾⁻¹ (tᵢ₊₁ⁿ − tᵢⁿ)
= N maxᵢ (tᵢ₊₁ⁿ − tᵢⁿ) · (t − s)
which converges to 0
by the assumption that maxᵢ (tᵢ₊₁ⁿ − tᵢⁿ) → 0
as n → ∞.
For the second equality we have used that if i ≠ j, then
E[ (|W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|² − (tᵢ₊₁ⁿ − tᵢⁿ)) (|W(tⱼ₊₁ⁿ) − W(tⱼⁿ)|² − (tⱼ₊₁ⁿ − tⱼⁿ)) ]
= 0
By definition of Brownian motion, the increment W(tⱼ₊₁ⁿ) − W(tⱼⁿ) is independent of F_{tⱼⁿ}, so its conditional expectation is just the expectation
we use properties of cond exp:
WLOG assume i<j
E[ (|W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|² − (tᵢ₊₁ⁿ − tᵢⁿ)) (|W(tⱼ₊₁ⁿ) − W(tⱼⁿ)|² − (tⱼ₊₁ⁿ − tⱼⁿ)) ]
is the same as first taking the conditional expectation (tower property):
= E[ E[ (|W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|² − (tᵢ₊₁ⁿ − tᵢⁿ)) (|W(tⱼ₊₁ⁿ) − W(tⱼⁿ)|² − (tⱼ₊₁ⁿ − tⱼⁿ)) | F_{tⱼⁿ} ] ]
j is greater than i thus this implies i+1≤j
so we have that
W(tᵢ₊₁ⁿ) and W(tᵢⁿ) are F_{t ⱼⁿ} measurable (F_{t ᵢ₊₁ⁿ} and F_{t ᵢⁿ} )
t ᵢ₊₁ⁿ and t ᵢⁿ :these are constants
so the first term is F_{t ⱼⁿ} measurable and we can take it out of the conditional expectation
thus
= E[ (|W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|² − (tᵢ₊₁ⁿ − tᵢⁿ)) · E[ |W(tⱼ₊₁ⁿ) − W(tⱼⁿ)|² − (tⱼ₊₁ⁿ − tⱼⁿ) | F_{tⱼⁿ} ] ]
= E[ |W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|² − (tᵢ₊₁ⁿ − tᵢⁿ) ] · E[ |W(tⱼ₊₁ⁿ) − W(tⱼⁿ)|² − (tⱼ₊₁ⁿ − tⱼⁿ) ]
= 0,
as E[ |W(tⱼ₊₁ⁿ) − W(tⱼⁿ)|² ] = tⱼ₊₁ⁿ − tⱼⁿ, so the second expectation is 0
so we have shown the cross terms with i ≠ j are 0, as used previously
also used
(a − b)² ≤ 4(|a|² + |b|²)
we use that on each term of
Σᵢ₌₁ᵐ⁽ⁿ⁾⁻¹ E[ (|W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|² − (tᵢ₊₁ⁿ − tᵢⁿ))² ]
with
a = |W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|²
and
b = tᵢ₊₁ⁿ − tᵢⁿ
(note the squared term)
Theorem 2.3.4 (Quadratic Variation). DISCUSSION
has some negative implications:
From this it follows that BM cannot be continuously differentiable
we discuss the concept of variation
Given a function f,
the variation of f on an interval:
Var_{[a,b]} f
Take a partition of the interval [a,b]
P = {t₁,t₂,…,tₖ}
for each partition I take
Σᵢ₌₁ᵏ⁻¹ |f(tᵢ₊₁) − f(tᵢ)|
now take the supremum over all partitions of the interval
CLAIM:
If we take a function f which is continuously differentiable (its derivative is a continuous function),
then on any interval
its variation is finite
(if f is continuously differentiable then f′ is continuous, so the sup of |f′| over any closed interval is finite by continuity; we use this to show the claim)
Take any partition and consider the sum:
Take partition of interval [a,b]
P={t₁,t₂,…,tₖ}
for each partition I take
Σᵢ|f(tᵢ₊₁)-f(tᵢ)|
(from the MVT, there is ξᵢ ∈ (tᵢ, tᵢ₊₁) with f(tᵢ₊₁) − f(tᵢ) = f′(ξᵢ)(tᵢ₊₁ − tᵢ))
= Σᵢ |f′(ξᵢ)||tᵢ₊₁ − tᵢ|
≤ sup_{x∈[a,b]} |f′(x)| Σᵢ |tᵢ₊₁ − tᵢ|
= sup_{x∈[a,b]} |f′(x)| (b − a)
then taking the supremum over partitions on both sides:
the sup is finite if f is continuously differentiable
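The MVT bound above can be checked numerically (our own sketch, assuming NumPy) with f = sin on [0, 2π], whose variation is 4 while sup|f′|·(b − a) = 2π:

```python
import numpy as np

# Sketch (ours): partition sums of |f(t_{i+1}) - f(t_i)| for f = sin on
# [0, 2*pi] stay below sup|f'| * (b - a) = 2*pi, and approach Var f = 4.
a, b = 0.0, 2 * np.pi
t = np.linspace(a, b, 10_001)  # a fine partition of [a, b]
f = np.sin(t)

partition_sum = np.sum(np.abs(np.diff(f)))
print(round(float(partition_sum), 2))   # 4.0, the variation of sin
assert partition_sum <= 1.0 * (b - a)   # sup|cos| = 1
```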
Facts shown for BM
Fact 1:
Σᵢ₌₁ᵐ⁽ⁿ⁾⁻¹ |W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|² → (t − s),
in L₂(Ω), as n → ∞
for partitions defined previously
Fact 2:
If f is continuously differentiable then variation on any interval is finite
Var_{[a,b]} f < ∞
we will use these to claim:
fact 1 implies
P(Var_{[a,b]} W = ∞) = 1
CLAIM:
Corollary 2.3.5. Let (W(t))ₜ∈[0,1] be a Wiener process.
Then P(Var_{[0,1]} W = ∞) = 1.
PROOF
ASSUME the contrary:
P(Var_{[0,1]} W < ∞) > 0
by Thm 2.3.4 there is a sequence of partitions (tᵢⁿ)ᵢ₌₁ᵐ⁽ⁿ⁾, n ∈ N, of [0,1]
such that maxᵢ |tᵢ₊₁ⁿ − tᵢⁿ| → 0 as n → ∞ and (passing to a subsequence, since L₂ convergence gives an a.s. convergent subsequence)
Σᵢ₌₁ᵐ⁽ⁿ⁾⁻¹ |W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|² → (t − s) = 1 − 0 = 1 (2.2)
almost surely. On the other hand, on the event {Var_{[0,1]} W < ∞} we have
Σᵢ₌₁ᵐ⁽ⁿ⁾⁻¹ |W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|²
take out the max:
≤ maxᵢ |W(tᵢ₊₁ⁿ) − W(tᵢⁿ)| · Σᵢ₌₁ᵐ⁽ⁿ⁾⁻¹ |W(tᵢ₊₁ⁿ) − W(tᵢⁿ)|
≤ maxᵢ |W(tᵢ₊₁ⁿ) − W(tᵢⁿ)| · Var_{[0,1]} W → 0
(the sum over any partition is at most the variation, which is the supremum over all such sums)
which contradicts (2.2),
since W is continuous (it is a BM): as the partition points get closer together, maxᵢ |W(tᵢ₊₁ⁿ) − W(tᵢⁿ)| → 0 by uniform continuity
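The corollary is visible numerically (our own sketch, assuming NumPy): the first-variation sums of a simulated path blow up, roughly like √n, as the partition of [0, 1] is refined.

```python
import numpy as np

# Sketch (ours): sums of |W(t_{i+1}) - W(t_i)| over finer partitions of
# [0, 1] keep growing (order sqrt(n)), illustrating infinite variation.
rng = np.random.default_rng(11)

n = 2**16
dW = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))

for step in (2**12, 2**6, 1):  # coarse -> fine
    incr = W[::step][1:] - W[::step][:-1]
    print(round(float(np.sum(np.abs(incr))), 1))  # grows without bound
```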
remark
BM and square root?
We have shown P(W is continuously differentiable) = 0
a stronger statement would be
P(there exists s > 0 s.t. W is differentiable at s) = 0 (not differentiable at any point)
at whatever point we look, BM is not differentiable (Thm 2.3.6)
There is no single point where BM behaves better than the square root
Theorem 2.3.6 (Nowhere differentiability)
Let (W(t))t∈[0,1] be a Wiener process. Then with
probability one, W is nowhere differentiable on [0, 1]
the proof won't be asked in the exam; it is technical
SUMMARY OF BM chapter
BM paths are ALMOST (1/2)-holder continuous
nowhere differentiable
(cannot speak of the derivative of BM in the sense of being a function)