Elective course - Value-Based Healthcare Flashcards
2 core characteristics of Porter's VBHC
Value chain: all activities of an organization must together create a valuable product
- Create value for consumers
Central premise: in any industry, a successful and sustainable enterprise needs to create value for clients (under competition)
Outcomes
effects of care on the health status of patients
Costs
Compare value between options and choose the cheaper one when value is equal
Patient value;
health outcomes that matter divided by the costs to achieve those outcomes
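As a quick illustration of this ratio, a minimal Python sketch (the outcome scores and costs are made-up numbers, not from the course material):

```python
def patient_value(outcome_score: float, cost: float) -> float:
    """Porter's patient value: health outcomes that matter divided
    by the costs to achieve those outcomes."""
    return outcome_score / cost

# Two hypothetical providers with equal outcomes: the cheaper one
# delivers more value per euro spent.
value_a = patient_value(outcome_score=80, cost=10_000)
value_b = patient_value(outcome_score=80, cost=12_500)
print(value_a > value_b)  # True
```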
Value is created
At the level of a medical condition over the full cycle of care
3 concepts in VBHC
- outcomes
- costs
- Value for patients
Medical condition level to create value
a) patients seek HC to address health-related issues
b) issues usually directly related to medical condition
c) ergo: professionals create value by looking at these conditions
Created at specific condition levels: lung cancer or skin cancer, not "cancer" in general
Value is created over full cycle of care
a) value generated through a set of activities → value chain
b) full care cycle: from diagnosis to rehabilitation → start-to-end care
c) e.g., surgery = one element of the full care cycle
6 goals of VBHC!!
- organize into IPU
- Measure outcomes and costs for every patient
- Move to bundled payments for care cycles
- integrate care delivery across facilities
- Build and enable an IT platform
- Expand the excellent services across geographies
Measurement & reporting
Providers should measure outcomes and cost of their care cycles per patient
- Publicly reported (competition)
- enabling fair comparison
Systematic outcome measurement
Key Actions to Implement VBHC: 4 keys
- Organizing Around Care Cycles: Health care providers should be organized based on the full care cycle for medical conditions, not just by specialties.
- Integrated Practice Units (IPUs) that consist of multidisciplinary teams focusing on specific medical conditions.
- Measurement & Reporting: Providers should measure and report outcomes and costs for full care cycles, ensuring transparency for comparison.
- Payment Aligned with Value: A shift from fee-for-service to bundled payments that reward outcomes and efficiency, rather than volume of services provided.
Provider competition - value based competition
- Outcome Measurement: Systematic measurement allows for comparisons, and excellent providers are rewarded with more patients.
- Provider Incentives: Providers should be incentivized for achieving good outcomes and efficiency, not for providing more treatments.
Difference between hospital structure and VBHC principle
Hospitals; based on medical specialties
VBHC; per medical condition
IPU
multidisciplinary team of professionals and supporting staff grouped together to coordinate their interdependent tasks, with the overarching goal of improving value for a particular group of patients
- Hospitals should organize around medical conditions
Payment implication
Bundled payment; reimbursement for the whole care cycle of a medical condition
Rewarding good outcomes and efficiency
- Rewarded with more patients
- Financial bonuses; P4P (pay for performance)
Providers that can not keep up
- Should restructure or go out of business
Confusion around IPUs
The concept refers to organizational units, not to care paths
Multidisciplinary collaboration is common; true IPUs are rare
Rare in the Netherlands, because teams typically have
- No budget of their own
- No decision-making power of their own
units in organization
refer to specific groups, departments, or divisions designed to handle particular functions or tasks.
You can achieve coordination within a unit in two ways:
- Lines of authority; coordination through supervision and decision-making power
- Close contact; informal communication
function based grouping
- Vertical lines
- each line represents a group of people with a particular skillset and knowledge
- These groups are based on the means
Market based grouping or condition based grouping
- Each line represents a group (i.e. unit) that serve a particular market (hip replacement)
- serve along the whole cycle of care
- These units are grouped based on the ends of a production process
Long tradition in hospitals, and its problems
- Tradition of medical specialization
- With complex knowledge and skills
- Issues of coordination and dealing with interdependencies (which cut across different IPUs)
- Outdated legacies of medical specialization
Why IPU is solution
- Coordination is value-based (care is organized around the patient)
- to organize around customer needs, not the supply of particular services
VBHC Practical challenges IPU
- History and interest
- Public opinion (concentration of hospitals; specialized)
- Radical vs incremental change; takes a lot of time and effort
- informal collaboration vs formal reorganization
- MD teams as liaisons (coordination mechanisms between units, working together to improve outcomes) vs. IPUs as units
IPU and MD teams
- Relies on communication and cooperation between existing departments or teams to achieve shared goals
- Promotes accountability and alignment with the organization through shared responsibility for patient outcomes
Diabeter as IPU
- Co-located; all care in one facility
- Multidisciplinary team (endocrinologists, dietitians, nurses, and psychologists)
- Full Care Cycle
- Improving value through routine outcome measurement and negotiating bundled payments
- Diabetes; one medical condition
One team
- One database
- Medical outcome focus
- integrated care delivery
- independent decision making
- Bundled payments and P4P
How bundled payment rewards Diabeter
Rewarded with more budget
- Enables more innovation and attracts more patients
Outcomes examples
PREM; Patient Reported Experience Measure
(experiences)
PROM; Patient Reported Outcome Measures
(outcomes)
VBHC in EMC
- Function based
- Disease specific (not strictly per medical condition)
- Outcomes closely monitored
Prom to measure QoL levels
- General measures
- Domain specific
- Disease specific
How to set up PROM monitoring? Consider:
- Number of questions
- Domain-specific and disease-specific questions
- Individual / population level
- Suitability for children
Tracking via E-prom
Challenges to value implementation in EMC
- Non-response
- Patient compliance; keep the burden low!
- Rogers' innovation curve
- Provider compliance with the PROM measurement (in an already busy schedule)
Change management for innovation such as implementing PROM measurement - determinants
- technical implementation
- Skills
- Knowledge
- Awareness
- Intrinsic motivation (relatedness, competence, autonomy)
- extrinsic motivation (punishment or compensation)
Prom measurement in EMC
- Minimal patient burden with minimal loss of information
- Disease-overarching approach and visibility
Three mechanisms for value-based competition via public transparency
- Patient choice; patients choose providers on their outcomes
- Provider comparison and learning; providers learn from systematic comparison of outcomes; benchmarking
- Value based competition; providers compete on measurable results with payments tied to the outcomes
Donabedian framework
Quality of care
- Structure
- Process
Results in outcomes; outcomes can be predicted by process and structure indicators, and can be compared
Ellipse form: structure and process indicators together form the quality of care as the provider controls it
structure indicator
What healthcare providers have available to treat patients
- Staff, facilities, systems, equipment
- Certified diabetes team
Process indicator
What providers actually do to treat patients (how well do they follow the medical guidelines?)
- Guidelines; are you doing the right thing?
- Diabetes: use of an insulin pump
Outcome indicators
Measures of what happens to the patient health
- Outcomes may manifest much later
- QoL, mortality
VBHC and donabedian differences
Freedom of providers
- VBHC; free to experiment
- D; improvement described by indicators
Level of analysis
- VBHC; medical condition over the full cycle of care, not Individual services
- D: can be individual services to evaluate
Value;
- VBHC; outcomes as proxy for value; costs also into account
- D; direct focus on patient outcomes only; structure and process are secondary
Freedom for providers in VBHC to experiment with outcomes
Providers are encouraged to:
- design care process
- organize teams
- choose technology
- innovate in delivery methods
ICHOM
Defines indicators that matter the most to patients over the full cycle of care per disease
- Enable worldwide benchmarking
- enables provider comparison
How outcomes are made public in the NL for provider comparison
- 32 disease specific outcome measures
- using proms
Inform patients, insurers, and providers
Provider comparisons difficulties in interpretation
- Observed differences in outcomes
- Statistical uncertainty
- Case Mix
- Residual confounding
- Registration bias (methodological)
What remains: the part of the observed differences in outcome dimensions that reflects quality of care
Outcomes that matter to patients
- Relevant for the patient
- Between-provider variation; variance in the outcome
Validity
- Does the instrument really measure what we intend to measure?
- systematic biases or error
Reliability
- Repeated measurements give the same result
- random error
Standardization
- Does each provider collect information in the same way?
- Differences introduce measurement error
Why we need reliable between-provider differences
- Patient choice for value
- Provider learning
- Value based competition
Statistical uncertainty (within-provider differences)
Makes differences harder to detect:
- Smaller sample size; large CI
- Extremely rare events
- Extremely common events
Adjustment via a random effects model
Random effects model
Used to tackle statistical uncertainty
- Does not take the raw distance between an individual centre's estimate and the overall intercept at face value
- Shrinkage factor: pulls a centre's intercept closer to the overall mean for small sample sizes
- Tackles rare events in hospitals with small sample sizes
Estimates are pulled toward the mean to correct for noise and unreliable data, so differences are not overinterpreted
Goal: small hospitals are not punished; their estimates shift more toward the benchmark
Random effects model + formula
Method to separate true differences from noise when analyzing data across facilities
Shrinkage factor = τ² / (τ² + σ²/n)
where τ² = between-hospital variance, σ² = within-hospital variance, n = sample size
More shrinkage when
- Small between-hospital variance (τ²): hospitals appear very similar; small differences are treated as noise
- Large within-hospital variance (σ²): high variability within hospitals increases uncertainty in estimates
- Small sample size (n): limited data per hospital amplifies uncertainty and shrinkage
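The shrinkage behaviour above can be sketched in a few lines of Python (the variance values and hospital means are illustrative, not from the source):

```python
def shrinkage_factor(tau2: float, sigma2: float, n: int) -> float:
    """Shrinkage factor = tau^2 / (tau^2 + sigma^2 / n), with tau^2 the
    between-hospital variance, sigma^2 the within-hospital variance,
    and n the hospital's sample size."""
    return tau2 / (tau2 + sigma2 / n)

def shrunken_estimate(hospital_mean: float, overall_mean: float,
                      tau2: float, sigma2: float, n: int) -> float:
    """Weighted average: a weight near 1 trusts the hospital's own data;
    a weight near 0 pulls the estimate toward the benchmark."""
    w = shrinkage_factor(tau2, sigma2, n)
    return w * hospital_mean + (1 - w) * overall_mean

# A small hospital keeps little of its own (noisy) estimate...
small_w = shrinkage_factor(tau2=0.02, sigma2=1.0, n=10)    # ~0.17
# ...while a large hospital keeps most of it.
large_w = shrinkage_factor(tau2=0.02, sigma2=1.0, n=1000)  # ~0.95
```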
Case mix and residual confounding
Case mix: patient characteristics or covariables that influence the outcomes (sex, age, SES, severity); the patient population
Residual confounding: confounding that cannot be measured (no data on those variables) and is left over
registration bias
differences in outcomes due to how the data for outcomes are collected by the providers
- association with quality of care
Once controlled for
- Case mix
- Statistical uncertainty
- Registration bias
What remains is the quality of care that correlates with the outcomes
Outcomes then reflect the provider's work and resources, not characteristics of the context
Why VBHC corrects for these influences
Incorporates structure and process into the comparison of outcomes
- Otherwise there is no fair comparison of what the provider does and the resources used
Verifies that the outcomes are the result of what the provider does and what is available
Rankability
Proportion of between-provider variation that is not due to chance
Between-provider variance / (between-provider variance + median within-provider variance)
= τ² / (τ² + σ²)
Should be as high as possible
- <50%: low
- 50-75%: moderate
- >75%: high
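A minimal Python sketch of this formula with the thresholds above (the variance values are illustrative):

```python
def rankability(tau2: float, median_sigma2: float) -> float:
    """Rankability = tau^2 / (tau^2 + sigma^2): the share of
    between-provider variation not attributable to chance."""
    return tau2 / (tau2 + median_sigma2)

def classify(rho: float) -> str:
    # Thresholds from the flashcard: <50% low, 50-75% moderate, >75% high
    if rho < 0.50:
        return "low"
    if rho <= 0.75:
        return "moderate"
    return "high"

print(classify(rankability(tau2=0.09, median_sigma2=0.01)))  # high (0.90)
print(classify(rankability(tau2=0.01, median_sigma2=0.03)))  # low (0.25)
```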
AUC
Area under curve
- How well the predictors in a model predict the outcome
- 0.5 → predictors do no better than chance
- 1 → predictors fully discriminate outcomes
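A hedged sketch of what the AUC measures: the probability that a randomly chosen patient with the outcome receives a higher predicted risk than one without (the scores below are made up):

```python
def auc(scores_with_outcome, scores_without_outcome):
    """Pairwise (Mann-Whitney) formulation of the area under the
    ROC curve; ties count as half a win."""
    wins = ties = 0.0
    for p in scores_with_outcome:
        for q in scores_without_outcome:
            if p > q:
                wins += 1
            elif p == q:
                ties += 1
    n_pairs = len(scores_with_outcome) * len(scores_without_outcome)
    return (wins + 0.5 * ties) / n_pairs

print(auc([0.9, 0.8], [0.1, 0.2]))  # 1.0: predictors fully separate the groups
print(auc([0.5], [0.5]))            # 0.5: no better than chance
```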
O/E ratio
The OE ratio compares observed outcomes (e.g., mortality rates) to expected outcomes (adjusted for case mix).
- Ratio above 1: more events than expected
- Ratio below 1: fewer events than expected
Observed exactly on the line (ratio = 1): performance as expected given the case mix
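A minimal sketch of the O/E ratio (the counts are illustrative; the expected count would come from a case-mix model):

```python
def oe_ratio(observed: float, expected: float) -> float:
    """O/E ratio: observed events divided by the number expected
    after case-mix adjustment."""
    return observed / expected

# 12 observed deaths vs 10 expected for this hospital's case mix:
print(oe_ratio(12, 10))  # 1.2 -> more events than expected
```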
Reliability is a function of
Magnitude of between-hospital variation
- Large variation is easier to detect, making the measurement more reliable (similar hospitals are hard to distinguish)
Certainty of estimation
- Larger sample sizes
- Lower statistical noise
How to improve rankability
Aggregation
- More information; combining information for a more robust model
- Regional indicators; improve the value of the entire health system, not one hospital; precise estimates with large N
Reporting data over 5 years (more robust data)
- Conflicting goal: harder to steer in time when needed (feedback loop)
- Reduces short-term fluctuations
Composite indicators; one overarching indicator
- Overall quality indicator
- Reduces noise and balances out variability in individual measures
Prioritize impactful indicators; give indicators with low validity and low public relevance low priority
Discrimination
the ability to distinguish high and low performing hospitals
All-or-nothing approach
Textbook outcome: counted only if all quality indicators are met
- A stricter standard of measurement
Why the HSMR is problematic
- Referral bias
- Inadequate case-mix adjustment; not adjusting for severity
- End-of-life care inflates mortality rates
- Differences in data capture; comorbidity and mortality coding
Does not reflect avoidable deaths
HSMR recommendations
- Enhanced adjustments; disease severity, end-of-life care, and regional factors
- Regional perspective; look at referrals and specialized centres at the regional level
- Alternative metrics; condition-specific mortality or process indicators (adherence to guidelines)
- Standardized coding; uniform data capture
Low rankability
The quality indicator is unreliable for benchmarking, due to
- Case mix
- Random variation
Why is little casemix favourable
- Easier comparisons: Differences in outcomes are less influenced by patient characteristics.
- Less adjustment needed: Results are more reliable without complex corrections.
- Understand residuals: Small differences are easier to attribute to other factors!!!
Little casemix makes comparisons easier
Rankability - Between hospital variation
Shows how much hospitals differ in performance.
If high: Hospitals differ significantly, making rankings meaningful.
If low: Hospitals are similar, making it harder to rank them reliably.
Rankability - within hospital variation
Refers to variation within a single hospital, due to randomness or small sample sizes.
If high: Outcomes within a hospital are unstable or noisy, making true differences harder to see.
If low: Results within a hospital are consistent, making hospital ranking easier.
Rankability - N
Small N: increases uncertainty and lowers rankability.
Large N: reduces noise (σ²/n) and improves rankability by making estimates more precise.
relative risk
worst outcome percentage/ best outcome percentage
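A sketch of the relative-risk calculation between the worst- and best-performing provider (the percentages are made up):

```python
def relative_risk(worst_outcome_pct: float, best_outcome_pct: float) -> float:
    """Relative risk between providers: worst outcome percentage
    divided by best outcome percentage."""
    return worst_outcome_pct / best_outcome_pct

# 10% adverse outcomes at the worst provider vs 2.5% at the best:
print(relative_risk(10.0, 2.5))  # 4.0
```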
Random effects model and CIs
- CIs become smaller
- If the CI includes the overall mean, the difference is not significant
- Corrects for noise from low sample sizes
Important with a beta (coefficient) in case mix
- Compare against the mean: is left or right of the mean worse?
Exercise: case mix
Identify the case-mix variable on which the population is worst off
- Compare the patient populations, not just the mean
Describing the worst outcome in a case-mix example
To the right of 1: more adverse outcomes; the beta is above average
- Check whether the confidence interval overlaps the mean
After case-mix correction: where has the dot shifted to?
Vos et al. breast cancer study: aim
Quantify how much of the observed variation in these indicators reflects true quality, taking into account
- Case mix
- Random variation
Argue for which of these quality indicators case-mix correction is most needed when comparing hospitals and argue why.
Look at the O/E ratio: indicators where hospitals fall off the line are affected by case mix
- Or look at rankability