05 Decision and Cost-Effective Analysis Flashcards
What does a decision tree do?
Characterizes a one-time transition from one health state to another
What does a Markov model do?
Characterizes transition among multiple states over time
What are the Markov Modeling steps?
Delineate a set of possible health states. Determine the allowed transitions during each cycle. Assign transition probabilities. Assign an incremental utility to each state
What is the terminology for the Markov Model?
Moving from one state to another is a “state transition.” Ending up in the dead category, which cannot be left once entered, is the “absorbing state”
How are calculations done in a P-Matrix?
Assume utility of “well” = 1, “sick” = 0.60, and “dead” = 0. Multiplying the incremental utility for each state by the fraction of the cohort occupying that state, then summing across all states, yields the total incremental utility generated by the cohort for a given cycle. Cycle totals are then summed over the time horizon to determine the cumulative utility of a treatment option
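The calculation above can be sketched as a small cohort simulation. This is a minimal illustration: the state names match the card, but the transition probabilities are invented for the example.

```python
# Hypothetical three-state Markov cohort model: Well, Sick, Dead.
# Utilities match the card; transition probabilities are illustrative only.

UTILITY = {"Well": 1.0, "Sick": 0.60, "Dead": 0.0}

# P[src][dst]: probability of moving from src to dst in one cycle.
P = {
    "Well": {"Well": 0.80, "Sick": 0.15, "Dead": 0.05},
    "Sick": {"Well": 0.10, "Sick": 0.70, "Dead": 0.20},
    "Dead": {"Well": 0.00, "Sick": 0.00, "Dead": 1.00},  # absorbing state
}

def step(cohort):
    """Advance the cohort one cycle: redistribute each state's fraction."""
    new = {s: 0.0 for s in cohort}
    for src, frac in cohort.items():
        for dst, p in P[src].items():
            new[dst] += frac * p
    return new

def cycle_utility(cohort):
    """Incremental utility generated by the cohort in one cycle."""
    return sum(UTILITY[s] * frac for s, frac in cohort.items())

# Start the whole cohort Well; accumulate utility over 10 cycles.
cohort = {"Well": 1.0, "Sick": 0.0, "Dead": 0.0}
total = 0.0
for _ in range(10):
    total += cycle_utility(cohort)
    cohort = step(cohort)
```

Running the same loop with a second treatment's P-matrix and comparing the two totals is what yields the incremental utility between options.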
What is a P-Matrix Table?
Sometimes probabilities change over time (e.g., the risk of arthritis increases with age). Most Markov software uses a database table to store an individual P-matrix for each cycle, which allows probabilities to change from cycle to cycle
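A cycle-indexed P-matrix table can be sketched as a lookup function. The growth rate and cap below are invented purely to illustrate a risk that rises with age.

```python
# Hypothetical sketch of a P-matrix table: one transition matrix per cycle,
# mimicking how Markov software stores time-varying probabilities.

def p_matrix(cycle):
    """Return the transition matrix for a given cycle. Here the
    Well -> Sick risk grows with age (cycle number), capped at 0.50;
    the numbers are illustrative, not from the text."""
    p_sick = min(0.10 + 0.02 * cycle, 0.50)
    p_dead = 0.05
    return {
        "Well": {"Well": 1.0 - p_sick - p_dead, "Sick": p_sick, "Dead": p_dead},
        "Sick": {"Well": 0.10, "Sick": 0.70, "Dead": 0.20},
        "Dead": {"Well": 0.00, "Sick": 0.00, "Dead": 1.00},
    }
```

Each cycle of the simulation then fetches `p_matrix(cycle)` instead of reusing one fixed matrix.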
How do you compute transition probabilities?
Transition probabilities express the risk of transition within a specified cycle length. It is not appropriate to simply divide published risk data by the number of cycles (e.g., a 20% 10-year risk is not a 2% 1-year risk), because probabilities do not combine additively over time; such calculations produce an unrealistic risk model
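The usual correction, not spelled out on the card, is to convert the published probability to a constant rate and back: r = −ln(1 − P) / t, then p_cycle = 1 − e^(−r·c), which equals 1 − (1 − P)^(c/t). A minimal sketch, assuming a constant underlying rate:

```python
import math

def per_cycle_prob(p_total, horizon, cycle_length=1.0):
    """Convert a probability observed over `horizon` into a per-cycle
    probability, assuming a constant underlying rate."""
    rate = -math.log(1.0 - p_total) / horizon     # probability -> rate
    return 1.0 - math.exp(-rate * cycle_length)   # rate -> probability

# A 20% 10-year risk is NOT a 2% annual risk:
p1 = per_cycle_prob(0.20, 10)   # about 0.022, slightly above 2%
```

Reapplying the per-cycle probability over all 10 cycles, 1 − (1 − p1)^10, recovers the original 20%, which naive division does not.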
What is the Markov Theorem?
Theorem: Let p be the current probability vector and M the transition matrix for a Markov chain. After n steps of the process, where n ≥ 1, the new probability vector is given by p_n = (M^n) × p
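The theorem can be checked numerically: applying M to p n times gives the same vector as multiplying by M^n once. A minimal sketch with a hypothetical two-state chain (probabilities invented); since M multiplies a column vector p here, each column of M sums to 1:

```python
# Verify p_n = (M^n) x p for a hypothetical two-state Markov chain.

def mat_vec(M, v):
    """Multiply matrix M by column vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def mat_mul(A, B):
    """Multiply two square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(M, n):
    """Raise square matrix M to the n-th power (n >= 0)."""
    R = [[1.0 if i == j else 0.0 for j in range(len(M))] for i in range(len(M))]
    for _ in range(n):
        R = mat_mul(R, M)
    return R

M = [[0.9, 0.2],
     [0.1, 0.8]]          # columns sum to 1 (column-stochastic)
p = [1.0, 0.0]            # entire cohort starts in state 0

stepped = p
for _ in range(3):        # apply M three times...
    stepped = mat_vec(M, stepped)
powered = mat_vec(mat_pow(M, 3), p)   # ...equals multiplying by M^3 once
```

Both `stepped` and `powered` give p_3, and the entries still sum to 1, as a probability vector must.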
What is the Markov Assumption?
Everyone in a cohort who has reached a particular health state has the same prognosis, regardless of how or when they got there (the model has no memory)