Chapter 1 Flashcards
Markov property
Process over time: given the present, the future is independent of the past. We predict based solely on the present.
With events A (past), B (present), C (future):
P(C | A ∩ B) = P(C | B)
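A minimal Python sketch (not from the notes; the two states and their probabilities are invented) showing how a simulated chain uses only the present state to draw the next one:

```python
import random

# Hypothetical two-state chain; these transition probabilities are made up.
p = {"A": {"A": 0.7, "B": 0.3},   # P(next state | current = A)
     "B": {"A": 0.4, "B": 0.6}}   # P(next state | current = B)

def step(current):
    """Draw the next state using only the current state (no earlier history)."""
    states = list(p[current])
    weights = list(p[current].values())
    return random.choices(states, weights=weights)[0]

path = ["A"]
for _ in range(10):
    path.append(step(path[-1]))   # only path[-1], the present, is used
print(path)
```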
State space
Finite or countably infinite
E.g. ℕ (countably infinite)
or a finite set such as {A, B, C, D, E}
Transition probability
One-step t.p.
Let X_n, X_(n+1) be random variables taking values in the state space S, giving the states at times n and n+1.
Transition probabilities are conditional probabilities:
One-step transition probabilities are the probabilities of moving to state j given that the chain was in state i at the previous time point.
p_i,j = P(X_(n+1) = j | X_n = i) for i, j ∈ S.
We assume p_i,j doesn’t depend on n: Time homogeneous
The Markov property allows us to equate p_i,j with P(X_(n+1) = j | X_n = i, PREVIOUS HISTORY).
X_n
X_n, X_(n+1) are random variables taking values in the state space S, giving the states at times n and n+1.
What is time homogeneous?
We assume p_i,j doesn’t depend on n: Time homogeneous
One step tps
Transition matrix
TM
For each ordered pair of states (i,j), the corresponding transition probability p_i,j is the (i,j) element of the TM.
• each row sums to 1, as it represents a probability distribution
- non-negative elements
- |S| × |S| matrix for finite S
- the TM is used to predict the chain further steps into the future (its properties are sketched below)
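A quick sketch (assuming numpy and the same hypothetical two-state chain as above) checking the listed TM properties:

```python
import numpy as np

# Hypothetical two-state transition matrix; row i = next-state distribution from state i.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

assert P.shape[0] == P.shape[1]          # |S| x |S| for finite S
assert np.all(P >= 0)                    # non-negative elements
assert np.allclose(P.sum(axis=1), 1.0)   # each row sums to 1 (a probability dist)
```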
Ehrenfest model for diffusion: two containers A, B with N particles overall; at each step one particle is picked uniformly at random and moved to the other container. The state is the number of particles in A. Model as a Markov chain (MC).
S= {0,1,…,N}
Transition probabilities
Number of particles in the container being picked from / total N
p_i,i+1= (N-i)/N
Randomly picked particle is in B and so moves into A
p_i,i-1 = i/N
Randomly picked particle is in A and so moves out of A
These form the elements of the TM (a small example is sketched below).
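A sketch (numpy assumed, N = 4 chosen arbitrarily) building the Ehrenfest transition matrix from the formulas above:

```python
import numpy as np

N = 4                              # total number of particles (small example)
P = np.zeros((N + 1, N + 1))       # states 0, 1, ..., N = number of particles in A
for i in range(N + 1):
    if i < N:
        P[i, i + 1] = (N - i) / N  # picked particle is in B, moves into A
    if i > 0:
        P[i, i - 1] = i / N        # picked particle is in A, moves out of A
print(P)                           # each row sums to 1
```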
Eigenvectors of TM
The transition matrix always has eigenvalue 1 with right eigenvector (1,…,1)^T.
Stochastic matrix
(TM is one)
A square matrix with non-negative entries whose rows each sum to 1.
With 1 the column vector of all ones:
P1 = 1, so 1 is an eigenvector with eigenvalue λ = 1.
We can also work with the transpose, whose columns then correspond to the starting state i.
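A sketch (numpy, same made-up two-state P) confirming P·1 = 1, i.e. that 1 is an eigenvalue with the all-ones right eigenvector:

```python
import numpy as np

P = np.array([[0.7, 0.3],          # hypothetical stochastic matrix
              [0.4, 0.6]])
ones = np.ones(P.shape[0])

print(P @ ones)                                       # -> [1. 1.], so P·1 = 1
print(np.isclose(np.linalg.eigvals(P), 1.0).any())    # -> True: 1 is an eigenvalue
```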
Initial distribution
We specify the unconditional probabilities for the starting state of the chain.
π_i^(0) = P(X₀ = i) for i ∈ S
(superscript (0) = time 0, subscript i = state)
Probability that the state at time 0 is i.
The elements π_i^(0) = P(X₀ = i) are collected in the row vector π^(0) ∈ R^|S|:
π^(0) = ( π₁^(0) π₂^(0) … )
E.g. if the chain begins at time 0 in state 2 (of 4 states), we might have INITIAL DISTRIBUTION
π^(0) = ( 0 1 0 0 )
Calculating probabilities at later times
• from the initial distribution and conditional probabilities given in TM
In general..
P(X₀ = i₀, X₁ = i₁, …, X_n = i_n)
= π_i₀^(0) · p_i₀,i₁ · p_i₁,i₂ · … · p_i(n-1),i_n
I.e. the probability of a SPECIFIC PATH equals
initial distribution element × the one-step transition probabilities along that path
π_j^(1) = ( π^(0) P )_j
jth element of (initial distribution × transition matrix), as sketched below
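A sketch of both formulas (hypothetical two-state chain, states labelled 0 and 1, starting in state 1):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi0 = np.array([0.0, 1.0])          # chain starts in state 1 at time 0

# Probability of the specific path X0=1, X1=0, X2=0:
# pi0[1] * p_{1,0} * p_{0,0}
path = [1, 0, 0]
prob = pi0[path[0]]
for a, b in zip(path, path[1:]):
    prob *= P[a, b]
print(prob)                         # 1 * 0.4 * 0.7 = 0.28

# Distribution at time 1: pi^(1) = pi^(0) P
print(pi0 @ P)                      # [0.4 0.6]
```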
n-step transition probabilities of MCs
p_i,j^(n) = P(X_(m+n) = j | X_m = i)
Elements of n-step transition matrix.
P^(n)
Allow us to find, say, the distribution at time 20 via Chapman-Kolmogorov (multiply the initial distribution by the n-step TM).
For many chains, as n increases the distribution changes less and less (it settles towards a limit).
Theorem 1: Chapman-Kolmogorov equations
a) for all positive integers m,n
P^(m+n) = P^(m) P^(n)
where the RHS is the product of the m-step and n-step transition matrices.
b). For all n=1,2,…
P^(n) = P^n
n step TM = (one step TM to the power of n)
Hence the (i,j) elements are equal: p_i,j^(n) = (P^n)_i,j.
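A numerical sanity check of parts a) and b) (numpy, same made-up P; m = 2 and n = 3 chosen arbitrarily):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
m, n = 2, 3
Pm = np.linalg.matrix_power(P, m)        # m-step TM = P^m
Pn = np.linalg.matrix_power(P, n)        # n-step TM = P^n
Pmn = np.linalg.matrix_power(P, m + n)   # (m+n)-step TM
print(np.allclose(Pmn, Pm @ Pn))         # -> True: P^(m+n) = P^(m) P^(n)
```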
Corollary of the Chapman-Kolmogorov equations
For all non-negative integers m and n
π^(m+n) = π^(m) · P^n
(the RHS matrix is the one-step TM raised to the power n)
Derived from the law of total probability and the Markov property.
•matrix multiplication: multiply row by COLUMN
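The corollary in use, sketched with the same hypothetical chain: the distribution at time 20 from the initial distribution and the 20-step TM.

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi0 = np.array([0.0, 1.0])                    # initial distribution

pi20 = pi0 @ np.linalg.matrix_power(P, 20)    # pi^(20) = pi^(0) P^20
print(pi20)                                   # ~[0.571 0.429]
```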
Diagonalise TM
P= CD C^-1
Using right eigenvectors as the columns of C and the eigenvalues on the diagonal of D.
So P^n (one step TM to the power of n)
= C D^n C^-1
As n tends to infinity the n-step transition matrix can be investigated via this diagonalisation: the diagonal entries of D^n with |λ| < 1 tend to 0.
D^n = diag(λ₁^n, λ₂^n, …)
Working out P^n from this, in many examples after a large number of steps the initial configuration doesn't matter: P^n converges to a matrix whose rows are all identical, each equal to the limiting distribution (a sketch is below).
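A sketch of the diagonalisation argument (numpy, same made-up P; its eigenvalues are 1 and 0.3):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
eigvals, C = np.linalg.eig(P)       # columns of C are right eigenvectors
n = 50
Dn = np.diag(eigvals ** n)          # D^n = diag(lambda_i^n); |lambda| < 1 terms vanish
Pn = C @ Dn @ np.linalg.inv(C)      # P^n = C D^n C^-1
print(Pn.real)
# Both rows are (approximately) the same limiting distribution, about [4/7, 3/7],
# so the initial state no longer matters after many steps.
```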