Discrete-time Markov chains Flashcards
① Markov chain
▪ Definition
A Markov chain is
▪ a stochastic process
▪ in which the past and the future are independent given the present (conditionally on the present):
P( Xₙ₊₁ = j | Xₙ = i, Xₙ₋₁ = iₙ₋₁, …, X₁ = i₁, X₀ = i₀ ) = P( Xₙ₊₁ = j | Xₙ = i )   (MP)
For:
▪ all n ⩾ 0
▪ all states i₀, i₁, …, iₙ₋₁, i, j ϵ S
① Markov chain
▪ Stochastic process
A sequence
▪ (Xₙ)ₙϵN
of random variables
with values in a set S
is called a discrete-time stochastic process with state space S
① Markov property
Equation (MP) is the Markov property, interpreted as follows: for a Markov chain
▪ the conditional distribution of any future state Xₙ₊₁ is independent of the past states and depends only on the present state
▪ pᵢⱼ = P( Xₙ₊₁ = j | Xₙ = i )
is the probability of going from i to j in one time step
① Markov chain
▪ Matrix
The matrix
P = (pᵢⱼ) –> i,j ϵ S
is the one-step transition matrix of the Markov chain

    ⎛ p₀₀ p₀₁ p₀₂ … p₀ⱼ … ⎞
P = ⎜ p₁₀ p₁₁ p₁₂ …        ⎟
    ⎜  ⋮                   ⎟
    ⎝ pᵢ₀ pᵢ₁ pᵢ₂ … pᵢⱼ … ⎠

▪ Σⱼ pᵢⱼ = 1 –> summation over j ⩾ 0
the probability to jump from i to any of the states j ϵ S is 1
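A minimal sketch of this row-sum property in Python (assuming NumPy; the matrix entries below are made-up illustration values):

```python
import numpy as np

# Hypothetical 3-state one-step transition matrix: row i holds the
# distribution of the next state given the current state i.
P = np.array([
    [0.2, 0.5, 0.3],   # p00, p01, p02
    [0.1, 0.6, 0.3],   # p10, p11, p12
    [0.4, 0.4, 0.2],   # p20, p21, p22
])

# Every row must sum to 1: from state i the chain jumps somewhere in S.
assert np.allclose(P.sum(axis=1), 1.0)
```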
① Markov chain
▪ Transition graph
A transition matrix P is sometimes represented by its transition graph G
▪ a graph having for vertices the elements of S, with a directed edge from i to j (labelled pᵢⱼ) whenever pᵢⱼ > 0
    ⎛ ⅓ ⅓ ⅓ ⎞
P = ⎜ ⅕ ⅖ ⅖ ⎟
    ⎝ 1  0  0 ⎠
The transition graph is
[graph on states 1, 2, 3: edges 1→1 (⅓), 1→2 (⅓), 1→3 (⅓), 2→1 (⅕), 2→2 (⅖), 2→3 (⅖), 3→1 (1)]
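As a sketch (assuming NumPy), the chain above can be simulated by repeatedly sampling the next state from the row of P indexed by the current state; states are relabelled 0, 1, 2 and the helper `simulate` is hypothetical:

```python
import numpy as np

# The matrix from the transition-graph example (states relabelled 0, 1, 2).
P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])

rng = np.random.default_rng(0)

def simulate(P, x0, n_steps):
    """Sample a path X0, ..., Xn: each step draws the next state from
    the row of P indexed by the current state."""
    path = [x0]
    for _ in range(n_steps):
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

print(simulate(P, x0=0, n_steps=10))
```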
② Distribution of a Markov chain
Let (Xₙ)ₙϵN be a Markov chain with:
▪ state space S
▪ transition matrix P
The n-step transition probability pᵢⱼ⁽ⁿ⁾ is:
▪ the probability that a Markov chain in state i will be in state j after n additional transitions
pᵢⱼ⁽ⁿ⁾ = P( Xₙ₊ₖ = j | Xₖ = i )
with
▪ n>0
▪ i,j ϵ S
② Distribution of a Markov chain
▪ n-step matrix
Let P⁽ⁿ⁾ denote the matrix of the n-step transition probabilities pᵢⱼ⁽ⁿ⁾; then
P⁽ⁿ⁾ = Pⁿ
i.e. the n-step transition matrix is the n-th power of the one-step transition matrix
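A short sketch of this identity in Python (assuming NumPy and the example matrix from the transition-graph card):

```python
import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])

# 4-step transition matrix: the 4th matrix power of P.
P4 = np.linalg.matrix_power(P, 4)
print(P4[0, 2])   # p_02^(4): probability of going from 0 to 2 in 4 steps
```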
② Distribution of a Markov chain
▪ initial distribution
The probability distribution of the initial state of the Markov chain is
αᵢ = P(X₀ = i)
with
▪ i ϵ S
▪ Σ αᵢ = 1
It is important for calculating the unconditional distribution of the state at any time n
② Distribution of a Markov chain
▪ unconditional probabilities
All unconditional probabilities are obtained by conditioning on the initial state
P(Xₙ=j) = Σᵢ P(Xₙ=j | X₀=i) P(X₀=i)
= Σᵢ pᵢⱼ⁽ⁿ⁾ αᵢ = (αPⁿ)ⱼ
P(Xₙ=j) = (αPⁿ)ⱼ –> vector notation
▪ α is the row vector of the αᵢ’s
▪ (αPⁿ)ⱼ is the component of the row vector αPⁿ relative to state j
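A sketch of the vector formula (assuming NumPy; the initial distribution `alpha` below is a made-up illustration value):

```python
import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])
alpha = np.array([0.5, 0.3, 0.2])   # hypothetical initial distribution P(X0 = i)

n = 6
dist_n = alpha @ np.linalg.matrix_power(P, n)   # P(X_n = j) for each j
print(dist_n, dist_n.sum())                     # a distribution: sums to 1
```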
② Distribution of a Markov chain
▪ Take-home message
The distribution of a Markov chain is completely determined by the initial distribution α and the transition matrix P
③ Classification of states (Markov chain)
▪ Definition (accessibility)
Let i,j ϵ S
▪ Accessibility: state j is accessible from state i and we write i→j
if pᵢⱼ⁽ⁿ⁾ > 0 for some n ⩾ 0
③ Classification of states (Markov chain)
▪ Definition (communication)
Let i,j ϵ S
▪ Two states i and j that are accessible to each other are said to communicate and we write i↔j
③ Classification of states (Markov chain)
▪ Properties
i) i↔i, for all iϵS
ii) if i↔j, then j↔i
iii) if i↔j and j↔k, then i↔k
③ Classification of states (Markov chain)
▪ Communication class
Two states that communicate are said to be in the same class.
The communication relation generates a partition of the state space S into disjoint classes called communication classes
③ Classification of states (Markov chain)
▪ Definition (irreducibility)
If there exists only one communication class then:
▪ the Markov chain
▪ its transition matrix
▪ its transition graph
are said to be irreducible
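One way to check irreducibility numerically, sketched under the assumption that SciPy's csgraph module is available: the communication classes are exactly the strongly connected components of the transition graph.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])

# Edge i -> j in the transition graph whenever p_ij > 0; communication
# classes are the strongly connected components of this graph.
n_classes, labels = connected_components((P > 0).astype(int),
                                         directed=True, connection='strong')
print(n_classes, labels)   # 1 class here, so the chain is irreducible
```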
③ Classification of states (Markov chain)
▪ Recurrent state
For
▪ any state i ϵ S
▪ let fᵢ be the probability that the process, starting in state i, will ever reenter state i
The state is recurrent if
▪ fᵢ = 1 ↔ Σₙ pᵢᵢ⁽ⁿ⁾ = ∞
▪ the process will reenter state i for sure, and it will do so infinitely many times
(due to the Markov property (MP))
③ Classification of states (Markov chain)
▪ Transient state
For
▪ any state i ϵ S
▪ let fᵢ be the probability that the process, starting in state i, will ever reenter state i
The state is transient if
▪ fᵢ < 1 ↔ Σₙ pᵢᵢ⁽ⁿ⁾ < ∞
▪ the process might end up again in state i, but it will visit this state only a finite number of times
Remark: In a finite-state Markov chain not all states can be transient
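A numerical illustration of the series criterion, as a sketch assuming NumPy and an assumed 2-state example (state 0 transient, state 1 absorbing); note that a finite truncation can only suggest, not prove, divergence:

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.0, 1.0]])

Pn = np.eye(2)
sums = np.zeros(2)
for n in range(1, 200):
    Pn = Pn @ P
    sums += np.diag(Pn)    # accumulate p_00^(n) and p_11^(n)

# First entry ~1 (convergent series: state 0 transient);
# second entry ~199 and growing (divergent: state 1 recurrent).
print(sums)
```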
④ Long-run proportions and limiting probabilities
▪ Number of transitions to return to state i
Let (Xₙ)ₙϵN be a Markov chain with:
▪ state space S
▪ transition matrix P
If state i is:
▪ recurrent
▪ let mᵢ denote the expected number of transitions that it takes the Markov chain starting in state i to return to that state
mᵢ=E[Nᵢ|X₀=i]
with
Nᵢ = min{n>0 : Xₙ=i}
Note: Nᵢ is the number of steps taken by the chain to reenter state i for the first time
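A Monte Carlo sketch of mᵢ (assuming NumPy and the 3-state example matrix; the helper `return_time` is hypothetical):

```python
import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])
rng = np.random.default_rng(1)

def return_time(P, i):
    """Number of steps N_i until the chain, started in i, is back in i."""
    state, steps = i, 0
    while True:
        state = int(rng.choice(len(P), p=P[state]))
        steps += 1
        if state == i:
            return steps

m0_hat = np.mean([return_time(P, 0) for _ in range(10_000)])
print(m0_hat)   # Monte Carlo estimate of m_0
```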
④ Long-run proportions and limiting probabilities
▪ Definition (positive and null recurrence)
The recurrent state i ϵ S is said:
▪ positive recurrent if mᵢ<+∞
▪ null recurrent if mᵢ=+∞
④ Long-run proportions and limiting probabilities
▪ Proposition (long-run proportions)
If the Markov chain is:
▪ irreducible
▪ recurrent
then, for any initial state, the long-run proportion of time the chain spends in state i is
πᵢ = mᵢ⁻¹ with i ϵ S
④ Long-run proportions and limiting probabilities
▪ Theorem (existence and uniqueness of the stationary distribution)
Consider a Markov chain that is:
▪ irreducible
▪ positive recurrent
then the long-run proportions
{πᵢ, i ϵ S} are the unique solution of the equations:
▪ π = πP, Σᵢ πᵢ = 1 –> vector notation
▪ πᵢ = Σₖ πₖ pₖᵢ together with Σᵢ πᵢ = 1 –> for all i ϵ S
If there is no solution of the preceding linear equations, then the Markov chain is either
▪ transient or
▪ null recurrent
and πᵢ=0 for all i ϵ S
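A sketch of solving π = πP together with the normalisation Σᵢ πᵢ = 1 as a linear system (assuming NumPy and the 3-state example matrix):

```python
import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])
n = len(P)

# pi = pi P is (P.T - I) pi.T = 0; one balance equation is redundant,
# so replace the last one with the normalisation sum(pi) = 1.
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n); b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi, pi @ P)   # pi @ P equals pi (stationarity)
```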
④ Long-run proportions and limiting probabilities
▪ Why are they called stationary (or invariant) probabilities
▪ The long-run proportions {πᵢ, i ϵ S} are called stationary (or invariant) probabilities
▪ The reason is that if the initial distribution is chosen according to {πᵢ, i ϵ S}, then the probability of being in state i at any time n is πᵢ
▪ Formally if
P(X₀=i) = πᵢ then
P(Xₙ=i) = πᵢ –> for all n >0 (i ϵ S)
④ Long-run proportions and limiting probabilities
▪ Definition (periodicity)
The period of state i ϵ S is
dᵢ = gcd{ n ⩾ 1 | pᵢᵢ⁽ⁿ⁾ > 0 }
with the conventions:
▪ dᵢ = ∞ if the set is empty
▪ if dᵢ = 1, state i is said to be aperiodic
Periodicity is a class property
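A sketch computing dᵢ numerically (assuming NumPy; the gcd is taken over the return times found among the first few matrix powers, so `max_n` is an assumed cutoff, not part of the definition):

```python
import numpy as np
from math import gcd
from functools import reduce

P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])

def period(P, i, max_n=50):
    """gcd of all n <= max_n with (P^n)_ii > 0; inf if there are none."""
    returns = []
    Pn = np.eye(len(P))
    for n in range(1, max_n + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:
            returns.append(n)
    return reduce(gcd, returns) if returns else float('inf')

print([period(P, i) for i in range(3)])   # [1, 1, 1]: every state aperiodic
```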
④ Long-run proportions and limiting probabilities
▪ Theorem (convergence to the stationary distribution)
Let (Xₙ)ₙϵN be an
▪ irreducible
▪ positive recurrent
▪ aperiodic
Markov chain with
▪ state space S
▪ transition matrix P
▪ stationary distribution π
Then, it holds
limₙ→∞ pᵢⱼ⁽ⁿ⁾ = πⱼ –> for all i,j ϵ S
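A sketch of this convergence (assuming NumPy and the 3-state example matrix, whose stationary distribution the linear-system sketch above gives as roughly [0.474, 0.263, 0.263]):

```python
import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])

# For large n every row of P^n approaches pi, regardless of the start state.
P50 = np.linalg.matrix_power(P, 50)
print(P50)   # each row is approximately [0.4737, 0.2632, 0.2632]
```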