Discrete-time Markov chains Flashcards

1
Q

① Markov chain
▪ Definition

A

A Markov chain is
▪ a stochastic process
▪ in which the past and the future are independent given the present

It satisfies the Markov property (MP):

P( Xₙ₊₁ = j | Xₙ = i, Xₙ₋₁ = iₙ₋₁, …, X₁ = i₁, X₀ = i₀ ) = P( Xₙ₊₁ = j | Xₙ = i )      (MP)

for:
▪ all n ⩾ 0
▪ all states i₀, i₁, …, iₙ₋₁, i, j ϵ S

2
Q

① Markov chain
▪ Stochastic process

A

A sequence (Xₙ)ₙϵN of random variables with values in a set S is called a discrete-time stochastic process with state space S.

3
Q

① Markov property

A

Equation (MP) is the Markov property. For a Markov chain it says:

▪ the conditional distribution of any future state Xₙ₊₁, given the past states and the present state, depends only on the present state and is independent of the past

▪ pᵢⱼ = P( Xₙ₊₁ = j | Xₙ = i )
is the probability of going from state i to state j in one time step

4
Q

① Markov chain
▪ Matrix

A

The matrix
P = (pᵢⱼ)ᵢ,ⱼϵS
is the one-step transition matrix of the Markov chain:

P = p₀₀ p₀₁ p₀₂ … p₀ⱼ …
    p₁₀ p₁₁ p₁₂ … p₁ⱼ …
     ⋮
    pᵢ₀ pᵢ₁ pᵢ₂ … pᵢⱼ …

▪ Σⱼ pᵢⱼ = 1 –> sum over j ⩾ 0, for every row i
each row sums to 1: the probability of jumping from i to some state j ϵ S
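A minimal sketch, assuming NumPy (not part of the original card), of storing a one-step transition matrix, checking that every row sums to 1, and sampling a single transition; the entries are the example matrix from the transition-graph card below.

import numpy as np

# One-step transition matrix: entry (i, j) is p_ij.
P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])

# Each row must sum to 1: from state i the chain jumps to some j in S.
assert np.allclose(P.sum(axis=1), 1.0)

# One step from state i uses only row i of P.
rng = np.random.default_rng(0)
i = 0
j = rng.choice(len(P), p=P[i])
print(f"one step: {i} -> {j}")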

5
Q

① Markov chain
▪ Transition graph

A

A transition matrix P is sometimes represented by its transition graph G
▪ a directed graph whose vertices are the elements of S, with an edge from i to j labeled pᵢⱼ whenever pᵢⱼ > 0

P = ⅓ ⅓ ⅓
    ⅕ ⅖ ⅖
    1  0  0

The transition graph has vertices 1, 2, 3, and edges
▪ 1→1, 1→2, 1→3, each with weight ⅓
▪ 2→1 with weight ⅕; 2→2 and 2→3 with weight ⅖
▪ 3→1 with weight 1
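A small sketch, assuming NumPy and the matrix P above: the transition graph is exactly the set of pairs (i, j) with pᵢⱼ > 0, each labeled by pᵢⱼ (states are indexed 0, 1, 2 in code, matching 1, 2, 3 on the card).

import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])

# Edges of the transition graph: i -> j whenever p_ij > 0.
n = len(P)
edges = [(i, j, P[i, j]) for i in range(n) for j in range(n) if P[i, j] > 0]
for i, j, p in edges:
    print(f"{i} -> {j}  (p = {p:.3f})")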
6
Q

② Distribution of a Markov chain

A

Let (Xₙ)ₙϵN be a Markov chain with:
▪ state space S
▪ transition matrix P

The n-step transition probability pᵢⱼ⁽ⁿ⁾ is:
▪ the probability that a Markov chain in state i will be in state j after n additional transitions

pᵢⱼ⁽ⁿ⁾ = P( Xₙ₊ₖ = j | Xₖ = i )

with
▪ n > 0
▪ i, j ϵ S
▪ any k ⩾ 0 (time homogeneity: the value does not depend on k)

7
Q

② Distribution of a Markov chain
▪ Matrix n-step

A

Let P⁽ⁿ⁾ denote the matrix of the n-step transition probabilities pᵢⱼ⁽ⁿ⁾. Then

P⁽ⁿ⁾ = Pⁿ

that is, the n-step transition matrix is the n-th power of the one-step transition matrix.
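A short numerical check, assuming NumPy (np.linalg.matrix_power computes the n-th matrix power): the (i, j) entry of Pⁿ is pᵢⱼ⁽ⁿ⁾, and Pⁿ is again a stochastic matrix.

import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])

n = 4
Pn = np.linalg.matrix_power(P, n)   # n-step transition matrix P^(n) = P^n
print(Pn[0, 2])                     # p_02^(n): from state 0 to state 2 in n steps
assert np.allclose(Pn.sum(axis=1), 1.0)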

8
Q

② Distribution of a Markov chain
▪ initial distribution

A

The probability distribution of the initial state of the Markov chain is

αᵢ = P(X₀ = i)

with
▪ i ϵ S
▪ Σ αᵢ = 1

It is important for calculating the unconditional distribution of the state at any time n.

9
Q

② Distribution of a Markov chain
▪ unconditional probabilities

A

All unconditional probabilities are obtained by conditioning on the initial state

P(Xₙ = j) = Σᵢ P(Xₙ = j | X₀ = i) P(X₀ = i)
          = Σᵢ pᵢⱼ⁽ⁿ⁾ αᵢ
          = (αPⁿ)ⱼ

P(Xₙ = j) = (αPⁿ)ⱼ –> vector notation

▪ α is the row vector of the αᵢ's
▪ (αPⁿ)ⱼ is the component of the row vector αPⁿ relative to state j
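A sketch of the vector formula P(Xₙ = j) = (αPⁿ)ⱼ, assuming NumPy; the initial distribution α below is a hypothetical illustrative choice, not from the original cards.

import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])

alpha = np.array([0.5, 0.3, 0.2])   # hypothetical initial distribution (sums to 1)

n = 10
dist_n = alpha @ np.linalg.matrix_power(P, n)   # row vector alpha P^n
print(dist_n)                                   # component j is P(X_n = j)
assert np.isclose(dist_n.sum(), 1.0)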

10
Q

② Distribution of a Markov chain
▪ Take-home message

A

The distribution of a Markov chain is completely determined by the initial distribution α and the transition matrix P

11
Q

③ Classification of states (Markov chain)
▪ Definition (accessibility)

A

Let i,j ϵ S
▪ Accessibility: state j is accessible from state i, and we write i→j,
if pᵢⱼ⁽ⁿ⁾ > 0 for some n ⩾ 0

12
Q

③ Classification of states (Markov chain)
▪ Definition (communication)

A

Let i,j ϵ S
▪ Two states i and j that are accessible to each other are said to communicate and we write i↔j

13
Q

③ Classification of states (Markov chain)
▪ Properties

A

i) i↔i, for all iϵS
ii) if i↔j, then j↔i
iii) if i↔j and j↔k, then i↔k

14
Q

③ Classification of states (Markov chain)
▪ Communication class

A

Two states that communicate are said to be in the same class.

The communication relation generates a partition of the state space S into disjoint classes called communication classes

15
Q

③ Classification of states (Markov chain)
▪ Definition (irreducibility)

A

If there exists only one communication class then:
▪ the Markov chain
▪ its transition matrix
▪ its transition graph
are said to be irreducible
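A sketch, assuming NumPy, that turns the last few definitions into code: j is accessible from i iff pᵢⱼ⁽ⁿ⁾ > 0 for some n ⩾ 0, which for a finite chain can be read off a boolean reachability matrix; communication classes and irreducibility then follow.

import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])

S = len(P)
# reach[i, j] = True iff j is accessible from i (n = 0 gives the identity).
reach = np.eye(S, dtype=bool) | (P > 0)
for _ in range(S):  # transitive closure by repeated boolean squaring
    reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)

comm = reach & reach.T   # i <-> j: accessible in both directions
classes = {frozenset(np.flatnonzero(comm[i]).tolist()) for i in range(S)}
print("communication classes:", [sorted(c) for c in classes])
print("irreducible:", len(classes) == 1)   # one class <=> irreducible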

16
Q

③ Classification of states (Markov chain)
▪ Recurrent state

A

For
▪ any state i ϵ S
▪ fᵢ is the probability that the process, starting in state i, will ever reenter state i

State i is recurrent if
▪ fᵢ = 1 ⟺ Σₙ pᵢᵢ⁽ⁿ⁾ = ∞
▪ the process will surely reenter state i, and it will do so infinitely many times
(due to the Markov property (MP))

17
Q

③ Classification of states (Markov chain)
▪ Transient state

A

For
▪ any state i ϵ S
▪ fᵢ is the probability that the process, starting in state i, will ever reenter state i

State i is transient if
▪ fᵢ < 1 ⟺ Σₙ pᵢᵢ⁽ⁿ⁾ < ∞
▪ the process might end up again in state i, but it will visit this state only a finite number of times

Remark: In a finite-state Markov chain not all states can be transient
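A numerical illustration of the criterion Σₙ pᵢᵢ⁽ⁿ⁾, assuming NumPy; the 2-state matrix below is a hypothetical example, with state 0 absorbing (recurrent) and state 1 transient.

import numpy as np

P = np.array([[1.0, 0.0],    # state 0 is absorbing: recurrent
              [0.5, 0.5]])   # state 1 leaks into state 0: transient

N = 200
sums = np.zeros(2)
Pn = np.eye(2)
for _ in range(N):
    Pn = Pn @ P
    sums += np.diag(Pn)   # accumulate p_00^(n) and p_11^(n)

# Partial sums: ~N (diverging) for recurrent state 0, ~1 (finite) for transient state 1.
print(sums)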

18
Q

④ Long-run proportions and limiting probabilities
▪ Number of transitions to return to state i

A

Let (Xₙ)ₙϵN be a Markov chain with:
▪ state space S
▪ transition matrix P

If state i is:
▪ recurrent
▪ let mᵢ denote the expected number of transitions that it takes the Markov chain starting in state i to return to that state

mᵢ=E[Nᵢ|X₀=i]
with
Nᵢ = min{n>0 : Xₙ=i}

Note: Nᵢ is the number of steps taken by the chain to reenter state i for the first time.
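A Monte Carlo sketch, assuming NumPy, that estimates mᵢ = E[Nᵢ | X₀ = i] for the 3-state example chain by averaging simulated first-return times.

import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])

rng = np.random.default_rng(0)

def first_return_time(i, P, rng):
    # N_i = min{n > 0 : X_n = i}, simulated from X_0 = i.
    x, n = i, 0
    while True:
        x = rng.choice(len(P), p=P[x])
        n += 1
        if x == i:
            return n

i = 0
times = [first_return_time(i, P, rng) for _ in range(20_000)]
print("estimated m_i:", np.mean(times))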

19
Q

④ Long-run proportions and limiting probabilities
▪ Definition (positive and null recurrence)

A

The recurrent state i ϵ S is said:
▪ positive recurrent if mᵢ<+∞
▪ null recurrent if mᵢ=+∞

20
Q

④ Long-run proportions and limiting probabilities
▪ Proposition (long-run proportions)

A

If the Markov chain is:
▪ irreducible
▪ recurrent
then, for any initial state, the long-run proportion of time the chain spends in state i is
πᵢ = mᵢ⁻¹ for all i ϵ S
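A simulation sketch of the proposition, assuming NumPy: the fraction of time the example chain spends in each state over a long run approaches πᵢ = mᵢ⁻¹ (compare with the mᵢ estimate from the previous card).

import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])

rng = np.random.default_rng(0)
steps = 100_000
x = 0
visits = np.zeros(len(P))
for _ in range(steps):
    x = rng.choice(len(P), p=P[x])
    visits[x] += 1

print("long-run proportions:", visits / steps)   # estimates pi_i = 1 / m_i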

21
Q

④ Long-run proportions and limiting probabilities
▪ Theorem (existence and uniqueness of the stationary distribution)

A

Consider a Markov chain that is:
▪ irreducible
▪ positive recurrent

then the long-run proportions
{πᵢ, i ϵ S} are the unique solution of the equations:
▪ π = πP with Σᵢ πᵢ = 1 –> vector notation
▪ πᵢ = Σₖ πₖ pₖᵢ –> for all i ϵ S

If there is no solution of the preceding linear equations, then the Markov chain is either
▪ transient or
▪ null recurrent
and πᵢ=0 for all i ϵ S
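A sketch, assuming NumPy, that solves π = πP together with Σᵢ πᵢ = 1 as a linear system (π is a left eigenvector of P for eigenvalue 1; one redundant balance equation is replaced by the normalization).

import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])

S = len(P)
# pi = pi P  <=>  (P^T - I) pi^T = 0, plus the constraint sum(pi) = 1.
A = np.vstack([P.T - np.eye(S), np.ones(S)])
b = np.zeros(S + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("stationary distribution:", pi)
assert np.allclose(pi @ P, pi)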

22
Q

④ Long-run proportions and limiting probabilities
▪ Why are they called stationary (or invariant) probabilities

A

▪ The long-run proportions {πᵢ, i ϵ S} are called stationary (or invariant) probabilities

▪ The reason is that if the initial distribution is chosen according to {πᵢ, i ϵ S}, then the probability of being in state i at any time n is πᵢ

▪ Formally, if
P(X₀ = i) = πᵢ then
P(Xₙ = i) = πᵢ –> for all n > 0 (i ϵ S)

23
Q

④ Long-run proportions and limiting probabilities
▪ Definition (periodicity)

A

The period of state i ϵ S is
dᵢ = gcd{ n ⩾ 1 : pᵢᵢ⁽ⁿ⁾ > 0 }
with the conventions:
▪ dᵢ = ∞ if the set is empty
▪ if dᵢ = 1, state i is said to be aperiodic

Periodicity is a class property
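A sketch, assuming NumPy, that computes the period from the definition by taking the gcd of the return times n with pᵢᵢ⁽ⁿ⁾ > 0 found among the first n_max matrix powers; the cutoff n_max is a heuristic (large enough for small chains), and the 2-state alternating matrix is a hypothetical example with period 2.

import numpy as np
from math import gcd
from functools import reduce

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # hypothetical chain that alternates between two states

def period(P, i, n_max=50):
    # d_i = gcd{ n >= 1 : p_ii^(n) > 0 }, scanned up to n_max.
    returns = []
    Pn = np.eye(len(P))
    for n in range(1, n_max + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            returns.append(n)
    return reduce(gcd, returns) if returns else float("inf")   # empty set: d_i = infinity

print(period(P, 0))   # 2: returns to state 0 are possible only at even times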

24
Q

④ Long-run proportions and limiting probabilities
▪ Theorem (convergence of the stationary distribution)

A

Let (Xₙ)ₙϵN be an
▪ irreducible
▪ positive recurrent
▪ aperiodic
Markov chain with
▪ state space S
▪ transition matrix P
▪ stationary distribution π

Then it holds that

limₙ→∞ pᵢⱼ⁽ⁿ⁾ = πⱼ –> for all i, j ϵ S
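A numerical check of the convergence theorem on the example chain (irreducible, positive recurrent because finite, and aperiodic since p₀₀ > 0), assuming NumPy: for large n every row of Pⁿ is close to π, so the limit does not depend on the starting state i.

import numpy as np

P = np.array([[1/3, 1/3, 1/3],
              [1/5, 2/5, 2/5],
              [1.0, 0.0, 0.0]])

Pn = np.linalg.matrix_power(P, 50)
print(Pn)   # all rows (nearly) identical: each row of P^n converges to pi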