Markov chains Flashcards

1
Q

stochastic processes

A

processes that evolve over time
a collection of random variables {Xt}; Xt is the state of the system at time t: discrete time
{X0, X1, ...} shows the evolution
for a Markov process, the future only depends on the present state
the states in the past don't matter
so information about the past is not used: only the present state

2
Q

Markov chains

A

A Markov chain represents a stochastic process with the Markov property:

the conditional probability of a future event, given the past and the present, depends only on the present (it is independent of the past)

3
Q

transition probabilities

A

conditional proba of moving to state j knowing that you are in state i: pij = P(Xn+1 = j | Xn = i)
stationary if the transition probas do not change over time
n-step transition matrix
P(n) = [pij(n)], e.g. first row p00(n), p01(n), ...
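A minimal sketch of a transition matrix (the 2-state chain and its probabilities are made up):

```python
import numpy as np

# Hypothetical 2-state chain (e.g. 0 = sunny, 1 = rainy); entry P[i, j] is the
# one-step transition probability pij of moving from state i to state j.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Each row is a conditional distribution, so every row must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)
```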

4
Q

absorbing states

A

once you enter i you never leave
pii = 1
the absorption probability fji is the proba of eventually ending up in i starting from j
to compute this proba:
write the equation for every state j:
fji = sum over x of (transition proba from j to x) * fxi
with fii = 1 for the absorbing state i, and fki = 0 for every other absorbing state k
solve the system of equations and read off the fji of interest
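A minimal sketch of these equations as a linear system, assuming a hypothetical fair gambler's-ruin chain on states {0, 1, 2, 3}:

```python
import numpy as np

# Hypothetical chain: 0 and 3 are absorbing (p00 = p33 = 1); from 1 or 2
# the chain moves up or down with probability 1/2.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])

transient = [1, 2]   # non-absorbing states
target = 3           # absorbing state we want to be absorbed in

# f_j = sum_x p_jx f_x with f_target = 1 and f = 0 at the other absorbing
# state rearranges to (I - P_T) f = b where b_j = p_j,target.
P_T = P[np.ix_(transient, transient)]
b = P[transient, target]
f = np.linalg.solve(np.eye(len(transient)) - P_T, b)
print(f)  # [1/3, 2/3]: starting richer makes absorption at 3 more likely
```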

5
Q

computing the n step transition probability

A

Chapman-Kolmogorov: pij(n) = sum_k pik(m) pkj(n-m)
given starting point i, the process passes through some state k after m steps
the n-step transition matrix is the transition matrix to the power n (can just multiply the matrix by itself n times): P(n) = P^n
if the rows converge to the same values, the chain has reached its steady state (the distribution no longer depends on the starting state)
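A minimal sketch of the matrix-power computation, reusing a made-up 2-state chain:

```python
import numpy as np

# Hypothetical 2-state chain; P^n gives the n-step transition probabilities.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

P5 = np.linalg.matrix_power(P, 5)
P50 = np.linalg.matrix_power(P, 50)

# For large n the rows converge to the same distribution (the steady state).
assert np.allclose(P50[0], P50[1])
```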

6
Q

steady state probabilities

A

Not the case for all chains!
Works for irreducible chains (all states communicate and are recurrent)
can also be called stationary probabilities (=/= stationary transition probas) or the long-run distribution: pi
when enough time has elapsed, the initial state is no longer relevant (independent of the starting state!): the rows of the n-step transition matrix converge to the same probas (not yet a steady state when n is too small)
=> the proba of being in state i at n=5 or n=500 is the same!

written pi: pi = pi P
pi_j = sum_i pi_i pij (proba of being in state i * proba of going from i to j, summed over all i), with sum_j pi_j = 1
then solve the system of equations
(flow in = flow out: the probability entering a state equals the probability leaving it)

if not irreducible, need to know the starting state. Compute the probas from the transient states in the same way, then multiply that proba by the steady-state result of the absorbing subsystem (if there is a single separate absorbing state, first compute the proba p of reaching it, then 1-p is the proba of reaching the subsystem). Use the p notation for the full Markov chain and pi for the subsystem (which is irreducible, thus allowed to use pi).
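A minimal sketch of solving pi = pi P together with sum(pi) = 1, for a made-up 2-state chain:

```python
import numpy as np

# Hypothetical 2-state chain.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
n = P.shape[0]

# Stack (P^T - I) pi = 0 with the normalization row sum(pi) = 1 and solve
# the overdetermined system by least squares.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # [5/6, 1/6]
```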

7
Q

unconditional probability

A
if n is too small to have reached a steady state, must compute the proba by taking the initial state into account: compute p(n) (multiply the matrix n times) and weight by the initial distribution
P(Xn = j) = P(X0=0) p0j(n) + P(X0=1) p1j(n) + ...
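A minimal sketch, assuming a made-up 2-state chain and a made-up initial distribution:

```python
import numpy as np

# Hypothetical 2-state chain and initial distribution P(X0 = i).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
init = np.array([0.3, 0.7])

n = 3
# P(X_n = j) = sum_i P(X0 = i) * p_ij(n): row vector times the n-step matrix.
dist = init @ np.linalg.matrix_power(P, n)
assert abs(dist.sum() - 1.0) < 1e-9
```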
8
Q

accessible

A

state i is accessible from state j if pji(n) > 0 for some n >= 0
if each is accessible from the other, they communicate
if all states communicate, the Markov chain is irreducible (if not irreducible, you could be trapped in an absorbing subsystem). In a finite irreducible chain all states are recurrent (visited infinitely many times)
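A minimal reachability sketch, assuming a made-up 3-state chain in which state 2 is absorbing:

```python
import numpy as np

# Hypothetical 3-state chain; state 2 is absorbing.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0]])
n = P.shape[0]

# j is accessible from i iff p_ij(k) > 0 for some k; summing P^0 + ... + P^(n-1)
# covers every possible path length in an n-state chain.
reach = sum(np.linalg.matrix_power(P, k) for k in range(n)) > 0
communicate = reach & reach.T
irreducible = bool(communicate.all())
print(irreducible)  # False: 2 is accessible from 0, but 0 is not from 2
```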

9
Q

transient state

A

after entering state i there is a positive probability of never returning to it, e.g. by moving to a state j from which i is not accessible.
a transient state will be visited a finite number of times (at some point you leave it and never return, so the chain must eventually end up in recurrent states)

10
Q

recurrent state

A

if you enter the state, it is certain (probability 1) that you will return to it

it will be visited infinitely many times

11
Q

periodicity

A

a state can have a period d: returns to it are only possible at multiples of d steps
eg: only possible to re-enter at steps 2, 4, 6, ...
it's also possible to have no period (period 1 = aperiodic)

12
Q

rate diagram

A

graph whose nodes are the states and whose arrows are labelled with the transition probas

13
Q

expected number of times in transient stage

A

build the proba matrix restricted to the transient states only (P_T), then

S = (I - P_T)^-1

Sij is the expected number of visits to transient state j starting from transient state i
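A minimal sketch, assuming a made-up pair of transient states that move to each other with probability 1/2 (and are absorbed otherwise):

```python
import numpy as np

# P_T: the transition matrix restricted to the two hypothetical transient states.
P_T = np.array([[0.0, 0.5],
                [0.5, 0.0]])

# Fundamental matrix: S[i, j] = expected number of visits to transient state j
# starting from transient state i (counting the start itself when i == j).
S = np.linalg.inv(np.eye(2) - P_T)
print(S)  # [[4/3, 2/3], [2/3, 4/3]]
```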

14
Q

Continuous time

A

In discrete time each step takes 1 time unit per state. Here the chain can spend a variable amount of time in each state, with exponentially distributed holding times. So compute the stationary probabilities of the embedded chain, multiply each by the expected time spent in that state, and renormalize. The expected amounts of time E must be adapted the same way.

=> distribution (1/2, 1/2) with times spent 4 and 2 => (4*1/2, 2*1/2) = (2, 1) => then normalize => (4/6, 2/6)
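The weighting step above as a minimal sketch (values from the example):

```python
# Weight each discrete-time stationary proba by the expected time spent in
# that state, then renormalize.
pi = [0.5, 0.5]   # embedded-chain stationary distribution
hold = [4, 2]     # expected holding time in each state

w = [p * t for p, t in zip(pi, hold)]
total = sum(w)
ct_pi = [x / total for x in w]
print(ct_pi)  # [2/3, 1/3], i.e. (4/6, 2/6)
```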

15
Q

How many steps to get a coin flip sequence ?

A

take the sequence HTHTHT
know the proba of each letter: here 1/2
take the length: here 6
compute the proba of the full sequence: (1/2)^6 -> contributes 1/(1/2)^6 = 64 steps
now find the longest proper prefix that is also a suffix
here: HTHT -> contributes 1/(1/2)^4 = 16
check if that subpart itself has a common prefix and suffix: HTHT has HT -> contributes 4; HT has none -> stop
sum all the contributions: 64 + 16 + 4 = 84 expected flips
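The prefix/suffix recursion above is equivalent to summing 1/P(prefix) over every prefix that is also a suffix; a minimal sketch (the function name is made up):

```python
def expected_flips(seq, p=0.5):
    """Expected number of flips until seq first appears, for i.i.d. letters
    each with probability p: sum 1/P(prefix) over every length k where the
    length-k prefix equals the length-k suffix."""
    total = 0.0
    for k in range(1, len(seq) + 1):
        if seq[:k] == seq[-k:]:
            total += (1 / p) ** k
    return total

print(expected_flips("HTHTHT"))  # 84.0 = 64 + 16 + 4
```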

16
Q

Expect number of times to see something

A
have states 0, 1, ..., plus an end state
with probas pij to go from i to j
first-step analysis:
E0 = 1 + p01 E1 + p02 E2 + ...
E1 = 1 + p10 E0 + p12 E2 + ...
E(END STATE) = 0

solve by replacing the equations in each other. If you want the expected number starting at S, solve for ES.
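The first-step equations above as a linear system, assuming the classic example of waiting for "HH":

```python
import numpy as np

# States: 0 = no progress, 1 = last flip was H, done = "HH" seen (E_done = 0).
# E_i = 1 + sum_j p_ij E_j over the non-final states -> (I - Q) E = 1.
Q = np.array([[0.5, 0.5],    # from state 0: T -> 0, H -> 1
              [0.5, 0.0]])   # from state 1: T -> 0, H -> done
E = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(E[0])  # 6.0 flips on average
```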

17
Q

What is the probability that sequence 1 appear before sequence 2 ?

A

Make a combined Markov chain:
a common starting state, with transitions leading from one partial outcome to another depending on what happens
write first-step equations, with pij the proba of going from state i to j:
P_S = sum_j psj Pj
P = 1 at the end state of the sequence we want to appear first
P = 0 at the end state of the other sequence

18
Q

How many coin flips to end game on average (finishes when 1 of 2 given sequence appear) ?

A

build the combined Markov chain for both sequences (as in the previous card), then compute the expected number of steps to absorption: E_i = 1 + sum_j pij Ej at each non-final state, with E = 0 at both end states

19
Q

Difference expected number of times and probability equations

A
E(X) = 1 + sum of proba * E (one term per transition; the 1 counts the current step)
P(X) = sum of proba * P (one term per transition) => thus no 1 + ...
20
Q

probability of ever arriving in state Y from X

A

set p = 0 for the absorbing states that aren't Y, and p = 1 for Y
then for each transient state write
p1 = (proba of going to 2) * p2 + (proba of staying in 1) * p1 + ...
then solve to get pX

if Y is inside a subsystem, compute the proba of ever entering the subsystem (often it is 1-p, where p is the proba of going to a single separate absorbing state)
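A minimal sketch of these hitting-probability equations, assuming a made-up 4-state chain where Y = 2 and a trap state 3 are absorbing:

```python
import numpy as np

# Hypothetical chain: states 0 and 1 are transient (with self-loops),
# Y = 2 and the trap 3 are absorbing.
P = np.array([[0.2, 0.3, 0.4, 0.1],
              [0.3, 0.1, 0.2, 0.4],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
transient, Y = [0, 1], 2

# p_i = sum_j p_ij p_j with p = 1 at Y and p = 0 at the trap rearranges to
# (I - P_T) p = b where b_i is the one-step proba of jumping straight to Y.
P_T = P[np.ix_(transient, transient)]
b = P[transient, Y]
p = np.linalg.solve(np.eye(len(transient)) - P_T, b)
print(p[0])  # probability of ever reaching Y starting from state 0
```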