Chapter 1 Flashcards

1
Q

Markov property

A

Process over time: given the present, the future is independent of the past. We predict based solely on the present state.

With events A (past), B (present), C (future):
P(C | A ∩ B) = P(C | B)

2
Q

State space

A

A finite or countably infinite set of possible states.
E.g. ℕ, or {A, B, C, D, E}

3
Q

Transition probability

One step tp

A

X_n and X_(n+1) are random variables taking values in the state space S, giving the states at times n and n+1.

Transition probabilities are conditional probabilities:
the one-step transition probability is the probability of moving to state j given that the chain was in state i at the previous time point.

p_i,j = P(X_(n+1) = j | X_n = i) for i and j in S.

We assume p_i,j doesn't depend on n: time homogeneous.
The Markov property allows us to equate p_i,j with P(X_(n+1) = j | X_n = i, PREVIOUS HISTORY).
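As a concrete sketch (a made-up two-state chain, not from the text), one-step transition probabilities can be stored as a NumPy array whose rows are conditional distributions over the next state:

```python
import numpy as np

# Hypothetical two-state chain (states 0 and 1), illustrating
# p_{i,j} = P(X_{n+1} = j | X_n = i). Values are invented.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Each row is a conditional distribution over the next state,
# so every row must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# p_{0,1}: probability of moving from state 0 to state 1 in one step.
p_01 = P[0, 1]
```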

4
Q

X_n

A

X_n is a random variable taking values in the state space S; it gives the state of the chain at time n (similarly X_(n+1) for time n+1).

5
Q

What does time homogeneous mean?

A

We assume the one-step transition probabilities p_i,j do not depend on n: the chain is time homogeneous.

6
Q

Transition matrix

TM

A

For each ordered pair of states (i, j), the corresponding transition probability p_i,j forms an element of the TM.

  • each row sums to 1, since it represents a PROBABILITY DISTRIBUTION over the next state
  • non-negative elements
  • |S| × |S| matrix for finite S
  • the TM is used to predict further steps into the future
7
Q

Ehrenfest model for diffusion: 2 containers A, B with N particles overall; at each step one particle is picked at random and moved to the other container. The state is the number of particles in A. Model as a Markov chain (MC).

A

S = {0, 1, …, N}

Transition probabilities:
(number of particles in the relevant container) / (total number of particles)

p_i,i+1 = (N − i)/N
the randomly picked particle is in B, so it moves to A

p_i,i−1 = i/N
the randomly picked particle is in A, so it moves out of A (to B)

These form the elements of the TM.
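The Ehrenfest transition probabilities above can be assembled into a transition matrix; a minimal NumPy sketch (the function name `ehrenfest_matrix` is my own):

```python
import numpy as np

def ehrenfest_matrix(N):
    """Transition matrix for the Ehrenfest model with N particles.

    State i = number of particles in container A.
    p_{i,i+1} = (N - i)/N  (picked particle was in B, moves to A)
    p_{i,i-1} = i/N        (picked particle was in A, moves to B)
    """
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:
            P[i, i + 1] = (N - i) / N
        if i > 0:
            P[i, i - 1] = i / N
    return P

P = ehrenfest_matrix(4)
assert np.allclose(P.sum(axis=1), 1.0)  # rows are probability distributions
```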

8
Q

Eigenvectors of TM

A

A transition matrix always has eigenvalue 1, with right eigenvector (1, …, 1)ᵀ, since each row sums to 1.
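A quick numerical check of this fact, using a made-up 3-state stochastic matrix:

```python
import numpy as np

# Invented 3-state stochastic matrix (rows sum to 1).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])

# The all-ones vector is a right eigenvector with eigenvalue 1:
# each entry of P @ 1 is a row sum, which equals 1.
ones = np.ones(3)
assert np.allclose(P @ ones, ones)

# Moreover 1 is the largest eigenvalue in modulus (spectral radius).
assert np.isclose(max(abs(np.linalg.eigvals(P))), 1.0)
```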

9
Q

Stochastic matrix

A

(A TM is one.)

A square matrix with the properties:
  • non-negative elements
  • each row sums to 1

With bold{1} the all-ones column vector:
P 1 = 1, i.e. λ = 1 is an eigenvalue.

We can also work with the transpose, with columns indexed by the state i.

10
Q

Initial distribution

A

We specify unconditional probabilities for the starting state of the chain:

π_i^(0) = P(X₀ = i)

for time 0 and state i ∈ S: the probability that the state at time 0 is i.

The elements π_i^(0) = P(X₀ = i) can be collected in a row vector π^(0) ∈ R^|S|:
π^(0) = ( π₁^(0) π₂^(0) … )

E.g. if the chain begins at t = 0 in state 2, we have the INITIAL DISTRIBUTION
π^(0) = ( 0 1 0 0 )

11
Q

Calculating probabilities at later states

A

• Computed from the initial distribution and the conditional probabilities given in the TM.

In general,
P(X₀ = i₀, X₁ = i₁, …, X_n = i_n)
= π_(i₀)^(0) · p_(i₀,i₁) · p_(i₁,i₂) · … · p_(i_(n−1),i_n)

i.e. the probability of a SPECIFIC PATH equals
(initial distribution element) × (one-step transition probabilities along the path).

π_j^(1) = ( π^(0) P )_j
= the j-th element of (initial distribution × TM).
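A sketch of both computations (path probability, and π^(1) = π^(0) P), with a hypothetical two-state chain:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])          # invented one-step TM
pi0 = np.array([1.0, 0.0])          # chain starts in state 0

# P(X0=0, X1=1, X2=1) = pi0[0] * p_{0,1} * p_{1,1}
path = [0, 1, 1]
prob = pi0[path[0]]
for a, b in zip(path, path[1:]):
    prob *= P[a, b]                 # multiply one-step TPs along the path
# prob = 1.0 * 0.3 * 0.6 = 0.18

# Distribution at time 1: pi^(1) = pi^(0) P  (row vector times matrix).
pi1 = pi0 @ P
```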

12
Q

n-step transition probabilities of MCs

A

p_i,j^(n) = P(X_(m+n) = j | X_m = i)

These are the elements of the n-step transition matrix P^(n).

They allow us to find, say, the distribution at time 20 via Chapman–Kolmogorov (multiply the initial distribution by the n-step TM).
As n increases, the distribution typically changes less and less.

13
Q

Theorem 1: chapman -kolmogorov equations

A

a) For all positive integers m, n:

P^(m+n) = P^(m) P^(n)

where the RHS is the product of the m-step and n-step transition matrices.

b) For all n = 1, 2, …:

P^(n) = P^n

(the n-step TM equals the one-step TM raised to the power n).

Hence the (i, j) elements satisfy
p_i,j^(m+n) = Σ_(k∈S) p_i,k^(m) · p_k,j^(n).
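Both parts can be verified numerically; a sketch with an invented two-state TM:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])          # invented one-step TM

# n-step TMs via part (b): P^(n) = P^n.
P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)
P5 = np.linalg.matrix_power(P, 5)

# Part (a): P^(m+n) = P^(m) P^(n), with m = 2, n = 3.
assert np.allclose(P5, P2 @ P3)

# Element-wise form: p_{i,j}^(m+n) = sum_k p_{i,k}^(m) p_{k,j}^(n).
assert np.isclose(P5[0, 1], sum(P2[0, k] * P3[k, 1] for k in range(2)))
```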

14
Q

Corollary of the chapman kolmogorov equations

A

For all non-negative integers m and n:

π^(m+n) = π^(m) · P^n

where the RHS factor P^n is the one-step TM raised to the power n.

Derived from the law of total probability and the Markov property.

• Matrix multiplication: multiply row by COLUMN.

15
Q

Diagonalise TM

A

P = C D C^-1

using the right eigenvectors as the columns of C.

So P^n (the one-step TM to the power n)
= C D^n C^-1
with D^n = diag(eigenvalues^n).

As n tends to infinity, the n-step transition matrix can be investigated via this diagonalisation: diagonal elements with modulus smaller than 1 tend to 0.
Working out P^n this way, we see that in many examples the initial configuration doesn't matter after a large number of steps: P^n may converge to a matrix whose rows are all the same limiting distribution.
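A sketch of the diagonalisation argument with an invented two-state TM, showing the rows of P^n becoming identical:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])          # invented one-step TM

# Diagonalise: P = C D C^{-1}, right eigenvectors as columns of C.
eigvals, C = np.linalg.eig(P)       # eigenvalues here: 1 and 0.3
D = np.diag(eigvals)
assert np.allclose(P, C @ D @ np.linalg.inv(C))

# P^n = C D^n C^{-1}; eigenvalues with |lambda| < 1 vanish as n grows.
n = 50
Pn = C @ np.diag(eigvals ** n) @ np.linalg.inv(C)

# The rows of P^n are (numerically) identical: the chain has forgotten
# its initial configuration.
assert np.allclose(Pn[0], Pn[1])
```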

16
Q

Stationary distributions

MC in equilibrium

A

A distribution such that, if the chain starts in it, it stays in it for all n.

Found via LEFT eigenvectors:

π P = π

  • solving the resulting system of equations
  • using the fact that π is a probability distribution, so its elements sum to 1

For this stationary distribution:
π^(n) = π P^n = π for any n = 1, 2, …

The distribution at time n equals the distribution at any other time:
the unconditional distribution of X_n is the same as that of X₀ for any n.

17
Q

Finding the stationary distributions

A

• Solve the left-eigenvector equations

π_j = Σ_(i∈S) π_i · p_i,j   for all j ∈ S

  • from the matrix, this uses the COLUMNS of the TM
  • for finite S, λ = 1 and linearly dependent equations are discarded
  • use that π is a probability distribution, so Σ_j π_j = 1

E.g. solve (π₁ π₂ π₃ π₄) P = (π₁ π₂ π₃ π₄);
then π₁ + π₂ + π₃ + π₄ = 1 allows us to find specific values.

• May use symmetry, hints from the question, or the structure of the equations.
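The "solve the equations plus normalisation" recipe can be sketched numerically (invented two-state TM; one dependent equation is replaced by the normalisation constraint):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])          # invented one-step TM

# pi P = pi  <=>  pi (P - I) = 0. The equations are linearly
# dependent, so replace one with the constraint sum(pi) = 1.
A = (P - np.eye(2)).T               # columns of the TM become rows
A[-1, :] = 1.0                      # last equation: pi_1 + pi_2 = 1
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)

assert np.allclose(pi @ P, pi)      # stationary: unchanged by one step
assert np.isclose(pi.sum(), 1.0)    # a valid probability distribution
```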