Markov Jump Processes Flashcards
Residual holding time for time homogeneous process
P(X(s+w) = j | X(s) = i, R(s) = w) = μ_ij / λ_i, independent of w, where λ_i is the total transition rate out of state i
What is Markov Jump Process?
A Markov process with
a continuous time set
a discrete state space
What is the distribution of the holding time in a Markov jump process?
If the total transition rate out of state 1 is μ (so the diagonal entry of the generator is -μ), the holding time T1 is distributed T1 ~ Exp(μ).
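The exponential holding time can be checked by simulation. A minimal sketch, with μ = 0.25 chosen as an assumed illustrative rate:

```python
import random

mu = 0.25                      # assumed total exit rate from the state
random.seed(1)                 # fixed seed for reproducibility
# Simulate 100,000 holding times T ~ Exp(mu)
times = [random.expovariate(mu) for _ in range(100_000)]
sample_mean = sum(times) / len(times)
# Theoretical mean holding time is 1/mu = 4 years;
# the sample mean should be close to this
print(sample_mean)
```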
Find the variance of the MLE estimator of a transition rate
The asymptotic variance can be found from the Cramér-Rao lower bound (CRLB): -1 / (second derivative of the log-likelihood with respect to the transition rate). For the log-likelihood ℓ(μ) = d·log(μ) - μ·v this gives variance μ²/d, estimated by μ̂²/d.
Probability of remaining in a state A for at least 5 years.
The transition rate μ_AA = -Σ(transition rates out of A).
With the total transition rate out of A equal to 0.15:
P̄_AA(5) = exp(-0.15 × 5)
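Numerically, this occupancy probability is just the exponential survival function. A small sketch using the 0.15 rate from the card:

```python
import math

lambda_A = 0.15                    # total transition rate out of A
t = 5                              # years
p_stay = math.exp(-lambda_A * t)   # probability of no jump within t years
print(round(p_stay, 4))
```

This evaluates to e^(-0.75) ≈ 0.4724.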
Derive an expression for F(i) to be the probability that a person currently in i will never be in state P.
F(A) = (μ_AT/λ_A)·F(T) + (μ_AP/λ_A)·F(P) + (μ_AD/λ_A)·F(D), where λ_A = μ_AT + μ_AP + μ_AD is the total transition rate out of A (conditioning on the destination of the first jump out of A)
where F(D) = 1, as a person can never be in P once dead,
and F(P) = 0, since a person currently in state P cannot avoid visiting it.
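With F(P) = 0 and F(D) = 1 substituted in, the remaining equations for F(A) and F(T) form a small linear system that can be solved by substitution. A sketch with hypothetical (assumed, not from the card) transition rates:

```python
# Hypothetical transition rates per year -- assumed for illustration only
mu_AT, mu_AP, mu_AD = 0.10, 0.05, 0.02
mu_TA, mu_TP, mu_TD = 0.30, 0.08, 0.04
lam_A = mu_AT + mu_AP + mu_AD   # total rate out of A
lam_T = mu_TA + mu_TP + mu_TD   # total rate out of T

# With F(P) = 0 and F(D) = 1 the system reduces to:
#   F(A) = (mu_AT*F(T) + mu_AD) / lam_A
#   F(T) = (mu_TA*F(A) + mu_TD) / lam_T
# Substituting the second equation into the first and solving for F(A):
F_A = ((mu_AT * mu_TD / lam_T + mu_AD) / lam_A) / (
    1 - mu_AT * mu_TA / (lam_A * lam_T))
F_T = (mu_TA * F_A + mu_TD) / lam_T
print(F_A, F_T)
```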
Calculate the expected future duration spent in state P for a person currently in state A, given that the probability a person in state A will ever visit state P is 0.5
If the person does visit state P, the time spent there is exponentially distributed with parameter λ, the total transition rate out of P (0.2), so the mean waiting time in state P is 1/λ = 1/0.2 = 5.
The expected future duration spent in state P for a person currently in A is therefore 0.5 × 5 = 2.5 years.
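The arithmetic from this card, as a minimal sketch:

```python
# Expected future duration in state P for someone now in A:
# P(ever visit P) * mean holding time in P
p_visit = 0.5                # probability of visiting P (given)
lambda_P = 0.2               # total transition rate out of P
mean_stay = 1 / lambda_P     # mean of Exp(0.2) = 5 years
expected_duration = p_visit * mean_stay
print(expected_duration)     # 2.5 years
```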
What are the parameters of the Markov jump process?
The parameters of the model are the transition rates μ_ij between the states, where μ_ij = lim_{h→0} p_ij(h)/h, i ≠ j.
List all transition rates
Confidence interval for μ given d and v
μ̂ = d/v, where d is the number of observed transitions and v is the total waiting (exposure) time.
For a 95% confidence interval use the following formula:
μ̂ ± 1.96 × √(μ̂²/d)
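A minimal sketch with hypothetical values of d and v:

```python
import math

d = 28        # hypothetical number of observed transitions
v = 140.0     # hypothetical total waiting time (years)
mu_hat = d / v                    # MLE of the transition rate
se = math.sqrt(mu_hat**2 / d)     # standard error from the CRLB variance
ci = (mu_hat - 1.96 * se, mu_hat + 1.96 * se)
print(mu_hat, ci)
```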
Irreducible chain definition
Each state can ultimately be reached starting from any other state
Periodic chain
A chain with period d is one where return to a given state is only possible in a number of steps that is a multiple of d
If d =1 then chain is aperiodic
Finite chain definition
When a chain has a finite number of states
If a state space is finite ……
- there is at least one stationary distribution
- but the process may not conform to this distribution in the long term
If a chain is finite and irreducible …..
- there is a unique stationary distribution
- but the process may not conform in the long run
If a chain is finite, irreducible and aperiodic …..
- there is a unique stationary distribution pi
- the process will conform to this distribution in the long term
lim_{n→∞} p_ij(n) = π_j
Stationary distribution
If π is a solution satisfying π = πP, where π_j ≥ 0
and Σ_j π_j = 1, then π is a stationary distribution
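For a two-state chain the system π = πP has a closed-form solution. A sketch, where the transition matrix P below is a hypothetical example:

```python
# Hypothetical one-step transition matrix for a 2-state chain
P = [[0.9, 0.1],
     [0.5, 0.5]]

# For a 2-state chain the stationary distribution is
#   pi_1 = p21 / (p12 + p21),  pi_2 = p12 / (p12 + p21)
p12, p21 = P[0][1], P[1][0]
pi = (p21 / (p12 + p21), p12 / (p12 + p21))

# Verify pi = pi P componentwise
check = (pi[0] * P[0][0] + pi[1] * P[1][0],
         pi[0] * P[0][1] + pi[1] * P[1][1])
print(pi)
```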
Key assumptions underlying a Markov multiple state model:
- the probability of moving from the current state i to state j depends on the current state only; no previous information is required (the Markov property)
- for any two states g and h, over a short interval dt (t > 0):
_dt p_t^{gh} = μ_t^{gh}·dt + o(dt), where the transition intensity μ_t^{gh} is assumed constant over short intervals
When calculating probability of being in state j in n years starting in state i….
The solution depends on which matrix we are given:
- a generator matrix (with transition rates μ_ij): the probability of remaining in a state is exp(-λ·t), where λ is the holding (total exit) rate for that state. When jumping from state 2 to state 3, for example, we use μ₂₃/λ₂, where λ₂ is the total force of transition out of state 2. We know it is a generator matrix if the transition rates across each row sum to zero.
- a probability matrix (where the probabilities in each row sum to 1): the probabilities after n years are found simply through matrix multiplication (raising the matrix to the power n).
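A sketch of the probability-matrix case: the n-year probabilities are the entries of Pⁿ. The 2-state matrix below is hypothetical, with state 2 absorbing:

```python
def mat_mul(A, B):
    """Plain nested-list matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Hypothetical one-year probability matrix; state 2 is absorbing
P = [[0.9, 0.1],
     [0.0, 1.0]]

Pn = P
for _ in range(2):       # compute P^3: probabilities after 3 years
    Pn = mat_mul(Pn, P)
print(Pn[0][0])          # P(still in state 1 after 3 years) = 0.9^3
```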
Expected time to reach a subsequent state k formula
The formula is given in the Tables, next to the Kolmogorov forward and backward equations.