Part II Flashcards

1
Q

What are ways of calculating value?

A

Expected value - e.g., of a gamble: (1/80) x $1000 = $12.50
Expected utility - subjective assessment / personal preferences
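
A minimal sketch of the expected-value arithmetic, using the card's 1-in-80 gamble for $1000 (the probabilities and payoff are just the illustrative numbers above):

# Expected value of a gamble: sum of (probability x payoff) over outcomes.
outcomes = [
    (1 / 80, 1000.0),   # win $1000 with probability 1/80
    (79 / 80, 0.0),     # win nothing otherwise
]

expected_value = sum(p * payoff for p, payoff in outcomes)
print(f"Expected value: ${expected_value:.2f}")  # $12.50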

2
Q

EXAM What is Blois’ Funnel?

A

The breadth of diagnostic considerations is progressively refined and restricted over the course of the interaction between patient and physician

3
Q

EXAM What are some common heuristics that people employ?

A

There are 8 major ones: availability, representativeness, ascertainment bias, confirmation bias, diagnosis momentum, anchoring, premature closure, value-induced bias

4
Q

What is availability bias?

A

Overestimating probability of unusual events because of recent / memorable instances

5
Q

What is representativeness bias?

A

Overestimating rare diseases by matching patients to “typical picture” of the disease

6
Q

What is ascertainment bias?

A

thinking is shaped by prior expectations

7
Q

What is confirmation bias?

A

tendency to look for confirming evidence and not disconfirming evidence

8
Q

Diagnosis momentum

A

Diagnostic considerations raised early are "sticky": once a label is attached it gathers momentum and is hard to remove

9
Q

Anchoring

A

Failure to adjust probability of a disease or outcome based on new information

10
Q

Premature closure

A

A tendency to accept a diagnosis before it is fully confirmed

11
Q

Value-induced bias

A

Overestimate the probability of an outcome based on the value associated with that outcome

12
Q

What are some ways of defending against cognitive bias?

A

(11) Develop insights / awareness, consider alternatives, metacognition, decrease reliance on memory, specific training, simulation, cognitive forcing, make task easy, minimize time pressure, establish accountability, feedback

13
Q

What are EHR / CDS considerations for avoiding bias?

A

(3) Decrease reliance on memory, cognitive forcing strategies, make task easier

14
Q

Describe Tree Decision Analysis

A

Probabilities of the branches at each chance node sum to 1; conditional probabilities are used for sequential events, e.g., P(HIV | IV drug use); the sequence of decisions and chance events is described on a tree
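
A minimal sketch of "folding back" a decision tree: each chance node's value is the probability-weighted average of its branches, and the decision node picks the branch with the highest expected value. The treat / no-treat probabilities and utilities below are made up for illustration, not taken from the slides.

# Each chance node: list of (probability, value) branches; probabilities sum to 1.
def expected_value(branches):
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * v for p, v in branches)

# Hypothetical example: treat vs. don't treat (utilities are invented).
treat = expected_value([(0.7, 0.9), (0.3, 0.4)])      # chance node for "treat"
no_treat = expected_value([(0.5, 1.0), (0.5, 0.2)])   # chance node for "don't treat"

best = max([("treat", treat), ("no treat", no_treat)], key=lambda x: x[1])
print(f"treat={treat:.2f}, no treat={no_treat:.2f}, choose: {best[0]}")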

15
Q

What notation is used for a tree diagram?

A

Decision node - square
Chance node - circle
Outcome node - triangle

16
Q

Describe the sample tree example

A

Expected Yes branch = 0.18 Expected No branch = 0.15

17
Q

What is sensitivity analysis?

A

Do a “what if” analysis across a range of values
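
A minimal sketch of a one-way sensitivity analysis: re-run the tree calculation while sweeping one probability across a range and see whether the preferred decision changes. The utilities and the fixed comparator reuse the hypothetical treat / no-treat sketch above and are not from the slides.

# One-way sensitivity analysis: vary P(good outcome | treat) from 0.3 to 0.9.
def ev_treat(p_good):
    return p_good * 0.9 + (1 - p_good) * 0.4   # hypothetical utilities

EV_NO_TREAT = 0.6   # fixed comparator from the hypothetical tree

for i in range(7):
    p = 0.3 + 0.1 * i
    choice = "treat" if ev_treat(p) > EV_NO_TREAT else "no treat"
    print(f"P(good|treat)={p:.1f}  EV(treat)={ev_treat(p):.2f}  -> {choice}")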

18
Q

Cost effectiveness analysis

A

The values of outcome nodes become units (e.g., costs) instead of binary values

19
Q

What is utility?

A

Perceived utility to patient:
1. Standard gamble
2. Time trade-off
3. Visual analogue

20
Q

What is a QALY?

A

TTO = (# years perfect health) / (# years in current health)

Can be calculated using the Time Trade-Off: QALY = TTO utility x number of years
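
A minimal numeric sketch of the QALY arithmetic above, with made-up time-trade-off answers:

# Time trade-off utility: years in perfect health the patient would accept,
# divided by years expected in the current health state.
years_perfect_health = 8     # hypothetical patient answer
years_current_health = 10    # hypothetical remaining life expectancy

tto_utility = years_perfect_health / years_current_health   # 0.8
qalys = tto_utility * years_current_health                   # 8.0 QALYs
print(f"TTO utility = {tto_utility:.2f}, QALYs = {qalys:.1f}")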

21
Q

What is “cost effectiveness”?

A

Operating with constrained resources; NICE uses QALY; can also calculate ICER

22
Q

What is ICER?

A

Incremental cost-effectiveness ratio (ICER) - compared against a "willingness to pay" threshold to determine if a therapy is cost effective

ICER = (C1-C2)/(E1-E2)

E1, E2 = effectiveness = QALY
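
A minimal sketch of the ICER formula with hypothetical costs, QALYs, and willingness-to-pay threshold (none of these numbers are from the lecture):

# ICER = (C1 - C2) / (E1 - E2), effectiveness measured in QALYs.
c1, e1 = 55_000.0, 6.5   # new therapy: cost, QALYs (hypothetical)
c2, e2 = 40_000.0, 6.0   # standard therapy: cost, QALYs (hypothetical)
willingness_to_pay = 50_000.0   # $/QALY threshold (assumed)

icer = (c1 - c2) / (e1 - e2)
print(f"ICER = ${icer:,.0f} per QALY gained")
print("cost effective" if icer <= willingness_to_pay else "not cost effective")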

23
Q

What is a Markov Model

A

Chain of events, each with a known, fixed probability of transition in a defined time period; STOCHASTIC; a 1st-order model is memoryless (the next state depends only on the current state, not on prior history)
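
A minimal sketch of a first-order Markov cohort model: a fixed transition matrix is applied each cycle, and the next state distribution depends only on the current one. The states and transition probabilities are invented for illustration, not from the slides.

# States: Well, Sick, Dead. Rows = current state, columns = next state.
# Each row sums to 1; probabilities are fixed per cycle (hypothetical values).
states = ["Well", "Sick", "Dead"]
transition = [
    [0.80, 0.15, 0.05],   # from Well
    [0.10, 0.70, 0.20],   # from Sick
    [0.00, 0.00, 1.00],   # Dead is absorbing
]

cohort = [1.0, 0.0, 0.0]  # everyone starts Well
for cycle in range(1, 4):
    cohort = [sum(cohort[i] * transition[i][j] for i in range(3)) for j in range(3)]
    print(f"cycle {cycle}: " + ", ".join(f"{s}={p:.3f}" for s, p in zip(states, cohort)))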

24
Q

Worked Markov Model example

A

Slides 27-29 Slides 34-36

25
Q

What is a Monte Carlo Simulation?

A

A simulation that repeatedly samples values from probability distributions to model outcomes. A mathematical model can be deterministic or stochastic: 1. deterministic - variable states determined by parameters 2. probabilistic / stochastic - variable states determined by probability distributions
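
A minimal sketch contrasting the two model types named on the card: a deterministic calculation returns one value from fixed parameters, while a Monte Carlo (stochastic) run samples the uncertain parameter from a probability distribution many times. The model, distribution, and parameter values are made up.

import random

def outcome(p_event):
    # Deterministic model: expected events in a cohort of 1000, given a fixed parameter.
    return 1000 * p_event

# Deterministic run: one answer from one parameter value.
print("deterministic:", outcome(0.07))

# Monte Carlo run: sample the uncertain parameter from an assumed normal distribution,
# re-run the model many times, and summarize the spread of outcomes.
random.seed(0)
samples = [outcome(max(0.0, random.gauss(0.07, 0.02))) for _ in range(10_000)]
print("stochastic mean:", sum(samples) / len(samples))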

26
Q

Describe the 2x2 table?

A

Rows = test results Columns = disease state

27
Q

Calculate P(Disease), P(no disease), P(test+), P(test-)

A

Use the 2x2 table margins: P(disease) = (A+C)/N, P(no disease) = (B+D)/N, P(test+) = (A+B)/N, P(test-) = (C+D)/N, where N = A+B+C+D

28
Q

Describe TPR, FNR, TNR, FPR

A

TPR = A/(A+C)
FNR = C/(A+C)
TNR = D/(B+D)
FPR = B/(B+D)
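
A minimal sketch of the 2x2 arithmetic with the usual cell labels (A = true positive, B = false positive, C = false negative, D = true negative) and made-up counts:

# 2x2 table cells (hypothetical counts): rows = test result, columns = disease state.
A, B = 90, 50    # test+: disease present (TP), disease absent (FP)
C, D = 10, 850   # test-: disease present (FN), disease absent (TN)

tpr = A / (A + C)   # sensitivity (recall)
fnr = C / (A + C)
tnr = D / (B + D)   # specificity
fpr = B / (B + D)
ppv = A / (A + B)   # precision
print(f"TPR={tpr:.2f} FNR={fnr:.2f} TNR={tnr:.2f} FPR={fpr:.2f} PPV={ppv:.2f}")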

29
Q

Describe sensitive tests

A

Good at ruling out disease, good for screening tests

30
Q

What is PPV?

A

A / (A+B) = P(disease+ | test+); aka Precision

31
Q

What is Precision?

A

PPV

32
Q

What is Recall?

A

TPR

33
Q

What is specificity?

A

TNR

34
Q

UTI worked example

A

Slide 58

35
Q

Spam worked example

A

Slide 65

36
Q

What is a ROC?

A

Receiver Operating Characteristic curve Y-axis: Sensitivity (TPR) X-axis: 1-specificity (FPR)

37
Q

What is Relative Risk?

A

RR = P(disease | exposure) / P(disease | no exposure)
Levels: weak (1.1-1.5), mod (1.5-3), strong (3-7), very strong (>7)

38
Q

Relative Risk example

A

Slide 72

39
Q

What is Bayes Theorem?

A

P(A|B) = (P(B|A) * P(A)) / P(B)
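
A minimal sketch of Bayes' theorem applied to a diagnostic test, computing P(disease | test+) from a hypothetical prevalence, sensitivity, and specificity (the three input values are assumptions, not from the lecture):

# Bayes: P(D|T+) = P(T+|D) * P(D) / P(T+), with P(T+) expanded over D and not-D.
prevalence = 0.02     # P(D), hypothetical
sensitivity = 0.90    # P(T+ | D)
specificity = 0.95    # P(T- | not D)

p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)  # P(T+)
p_disease_given_pos = sensitivity * prevalence / p_pos
print(f"P(disease | test+) = {p_disease_given_pos:.3f}")  # about 0.27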

40
Q

What is a likelihood ratio?

A

Positive likelihood ratio (LR+) = sens / (1 - spec) = TPR/FPR
Negative likelihood ratio (LR-) = (1 - sens) / spec = FNR/TNR

41
Q

What is post-test odds?

A

Post-test odds = pre-test odds x likelihood ratio (LR+ for a positive test result)
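
A minimal sketch of the odds arithmetic: convert pre-test probability to odds, multiply by the likelihood ratio, and convert back. The pre-test probability and LR+ are hypothetical.

# post-test odds = pre-test odds x LR; odds = p / (1 - p); p = odds / (1 + odds)
pre_test_prob = 0.20   # hypothetical pre-test probability (prevalence)
lr_positive = 8.0      # hypothetical LR+ of the test

pre_test_odds = pre_test_prob / (1 - pre_test_prob)
post_test_odds = pre_test_odds * lr_positive
post_test_prob = post_test_odds / (1 + post_test_odds)
print(f"post-test odds = {post_test_odds:.2f}, post-test probability = {post_test_prob:.2f}")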

42
Q

LR Example

A

Slide 75-76

43
Q

What is a Fagan Nomogram?

A

Can determine post-test probability given pre-test probability (prevalence) and LR+

44
Q

What is clinical decision support?

A

• Most restrictive: an electronic system that provides structured guidance based on patient-specific inputs
– Expert systems
– Conditional alerts
• Less restrictive: any electronic tool that reduces the cognitive burden of patient care in an EHR
– Order sets & corollary orders
– Data visualization techniques, visual design standards
• Least restrictive: "Not all decision support is electronic decision support"

45
Q

What are key components to CDS?

A

Knowledge base

Patient-specific information

Mode of communication => CDS intervention

46
Q

What are the key functions for CDSS?

A

Alerting - highlighting out-of-range laboratory values

Reminding - reminding the clinician to schedule a mammogram

Critiquing - rejecting an electronic order

Interpreting - interpreting the echocardiogram

Predicting - predicting risk of mortality from a severity-of-illness score

Diagnosing - listing a differential diagnosis for a patient with chest pain

Assisting - tailoring the antibiotic choices for liver transplantation and renal failure

Suggesting - generating suggestions for adjusting the mechanical ventilator

47
Q

What are CDS design considerations?

A

Level of Control – Pre-emptive – Suppressible – Hard-stop – Interruptive

48
Q

What are degrees of CDS interruptiveness?

A

• On demand
– Link to formulary from within order
• In-line or background (modeless)
– "Unread lab result" indicator on toolbar
– Optional reminder for health maintenance
• Popup or interruptive (modal)
– Alerts
– Reminders requiring acknowledgement

49
Q

What are specific categories of CDS?

A

• Therapeutic duplication • Single & cumulative dose limits • Allergies & cross allergies • Contraindicated route of administration • Drug-drug and drug-food interactions • Corollary orders • Cost of care • Nuisance

50
Q

EXAM - What are the 10 Commandments for Effective CDS?

A
  1. Speed is everything – expect sub-second latency
  2. Anticipate needs and deliver in real time – e.g. showing relevant labs with med orders
  3. Fit into the user’s workflow – external tools not as good as those at POC
  4. Little things can make a big difference – “usability matters – a lot”, “make it easy to do the right thing”
  5. Physicians resist stopping – don’t tell docs to not do something without offering an alternative
  6. Changing direction is easier than stopping
  7. Simple interventions work best – try to fit guidelines onto a single screen
  8. Ask for additional information only when you really need it – “likelihood of success is inversely proportional to the number of extra data elements needed”
  9. Monitor impact, get feedback, and respond
  10. Manage and maintain your knowledge-based systems
51
Q

What are the 5 Rights of CDS?

A
  • Right Information – quality of knowledge base
  • Right Person – target of CDS
  • Right Format – implementation of CDS (speed, ease of use, comprehensibility)
  • Right Channel – mode of CDS
  • Right Time – workflow integration
52
Q

How do you evaluate CDS?

A

Literature - not representative, few RCTs, insufficient HCI research, etc.

53
Q

What are limitations of current implementations?

A

– For most organizations, implementing and maintaining an EHR is hard enough

– Difficult to implement and evaluate CDS with constrained resources

54
Q

What CDSS have evidence?

A

– Chronic disease management

– Acute care management

– Therapeutic drug monitoring and dosing

– Drug prescribing and management

– Diagnostic test ordering behavior

– Primary preventative care

55
Q

What is the CDS evidence for chronic disease management?

A

A small majority (just over half) of CCDSSs improved care processes in chronic disease management and some improved patient health. 55 trials considered – 87% (n=48) measured impact on care process • 52% of these (n=25) showed significant improvement – 65% (n=35) measured impact on surrogate outcomes • 31% (n=11) showed benefits

56
Q

What is the CDS evidence for acute care management?

A

The majority of CCDSSs demonstrated improvements in process of care, but patient outcomes were less likely to be evaluated and far less likely to show positive results 35 trials considered – 63% (n=22) showed improved process of care • 64% of med dosing assistants (9 of 14) • 82% management assistants with alerts/reminders (9 of 11) • 38% (3 of 8) guidelines / algorithms • 67% (2 of 3) diagnostic assistants – 20 studies looked at patient outcomes, but only 3 showed improvement

57
Q

What is the CDS evidence for Drug Monitoring & Dosing?

A

"[S]tudies were small and generally of modest quality, and effects on patient outcomes were uncertain, with no convincing benefit in the largest studies. At present, no firm recommendation for specific systems can be given."
• 76% were standalone systems, 85% were to be used by physicians
• 60% showed improved process, 21% showed improved outcome
• Insulin (in all studies) and Vitamin K (in meta-analysis) showed significant improvement

58
Q

What is the CDS evidence for Drug Prescribing & Management?

A

“CCDSSs inconsistently improved process of care measures and seldomly improved patient outcomes. Lack of clear patient benefit and lack of data on harms and costs preclude a recommendation to adopt CCDSSs for drug therapy management.” 65 studies considered – Process of care improved in 37 of 59 (64%) – Outcomes improved in 6 of 29 (21%)

59
Q

What is the CDS evidence for Diagnostic Test Ordering?

A

"Some CCDSSs can modify practitioner test-ordering behavior… [S]tudies should describe in more detail potentially important factors such as system design, user interface, local context, implementation strategy, and evaluate impact on user satisfaction and workflow, costs, and unintended consequences."
35 studies identified
– Quality improved after 2000
– 55% improved testing behavior (18 of 33)
– 5 of 6 diagnostic testing
– 5 of 8 treatment monitoring
– 6 of 17 disease monitoring
– 4 of 4 designed to reduce test ordering rates
– Cost, user satisfaction, and workflow rarely measured or reported

60
Q

What is the CDS evidence for Preventative Care?

A

"Evidence supports the effectiveness of CCDSSs for screening and treatment of dyslipidaemia in primary care with less consistent evidence for CCDSSs used in screening for cancer and mental health-related conditions, vaccinations, and other preventive care. CCDSS effects on patient outcomes, safety, costs of care, and provider satisfaction remain poorly supported."
41 RCTs considered
– Improved process of care in 63% (25 of 40)

61
Q

What are the 4 predictors of improved practice?

A
  1. Provision of CDS as part of clinical workflow
  2. Provision of recommendations, not just assessments
  3. Provision of CDS at time/location of decision
  4. Computer based decision support
62
Q

What are unintended consequences of CDS?

A

19.8 - More/new work for clinicians
17.6 - Workflow issues
14.8 - Never ending system demands
10.8 - Paper persistence
10.1 - Changes in communication
…

63
Q

What does the evidence show for medication alerts & CPOE

A

Clear benefits in reducing prescribing errors; less clear whether CPOE can prevent adverse drug events

64
Q

What is the curly braces problem?

A

The logic inside curly braces, e.g., {get blood pressure}, maps to site-specific data queries, so it is different from computer system to computer system and must be rewritten at each site - a barrier to sharing the logic

65
Q

What is Arden Syntax?

A

An HL7 standard for encoding medical knowledge as shareable Medical Logic Modules (MLMs), e.g.:

Data:
  systolic_blood_pressure := read last {get systolic blood pressure};
  /* the value in braces is specific to your runtime environment */
  systolic_pressure_threshold := 140;
  stdout_dest := destination {stdout};
;;

66
Q

How are guidelines modelled?

A

CPG recommendations

Axis I - ambiguity (syntactic, semantic, pragmatic), vagueness (passive voice, strength qual, underspec)

Axis II - deliberate, inadvertent

Axis III - condition, action, explanation

67
Q

Describe syntactic, semantic, and pragmatic issues

A

– Syntactic – "A or B and C" (missing parentheses?)
– Semantic – "I will meet you at the bank" (which bank?)
– Pragmatic – conflicting recommendations

68
Q

What are some requirements for computer interpretable guidelines?

A
  • The guidelines must first lend themselves to computation
  • The representation format must allow for clinical expressivity – Temporal dependencies – Complex rules – Strength of evidence – Imperative / optional actions
  • Standard vocabularies and semantics
  • Interoperable
  • Portable
69
Q

What are some guideline modeling frameworks?

A

GLIF, Protégé, Arden Syntax, GEM, SEBASTIAN

70
Q

Describe Knowledge Maintenance

A

• Reliance on EHR patient data
• Guideline authorship, review, update cycle
• Review patterns of use – process measures, override rates, sentinel events, and other measures of CDS effectiveness
• Role for service-oriented architecture for "plug-and-play" CDS systems

71
Q

What is OpenCDS?

A

Decision support service Uses the May 2011 HL7 standard specification for a Decision Support Service • Built using open source software tools • Robust authoring environment for rules • Integration with standard terminologies (ICD10, SNOMED, LOINC, RxNORM) • Can be integrated with other types of CDS tools, such as the HL7 Infobuttons standard

72
Q

What is SMART on FHIR?

A

2009: NEJM article "No small change for the health information economy" by Mandl & Kohane suggested that EHRs should be an extensible platform, like an iPhone™
– Liquidity of data – reduce impediment to data transfer
– Substitutability of applications – modular and interoperable
– Built to open standards for open- and closed-source developers
– Development of an ecosystem of apps, free marketplace of ideas

73
Q

What is FHIR?

A

2010: SMART = Substitutable Medical Applications and Reusable Technologies
– 1st Gen: HTML, JavaScript, OAuth, Resource Description Framework (RDF) for metadata, and common terminologies like LOINC, RxNorm
– Lacked a standard for sharing granular clinical data
– Poor initial uptake of "SMART Classic" by EHR vendors

  • 2011: HL7 community concerned that HL7 V3 was not gaining traction – led to emergence of Resources for Health → Fast Healthcare Interoperability Resources (FHIR®)
  • 2013: SMART team adopts FHIR® standard
74
Q

Describe SMART on FHIR format

A

FHIR resources can be represented as XML or JSON (JavaScript Object Notation); JSON is one way to serialize a software object
RESTful applications - REST = representational state transfer
URI = uniform resource identifier
HTTP methods = PUT, GET, POST, DELETE

75
Q

What are CDS hooks?

A

Developed by SMART project - specifies how EHR triggers can invoke external CDS services Addresses major barrier to computable, shareable decision support

76
Q

What is alert fatigue?

A

Refers to state of user resistance to guidance provided by alerts, even those that might offer possible benefit or reduce harm, presumably because they are overwhelmed by unimportant alerts
• Difficult to measure
– Literature typically uses alert override rates as proxy for "low utility"
– EHR systems offer different alert designs for drug-drug interactions, custom CDS, and other alert types; studies may be comparing apples to oranges
• Difficult to define
– What is an "appropriate" override rate?
• Counterintuitive results
– Reducing alert burden dramatically does not dramatically reduce override rate
– In fact, EHR override rates have remained flat or perhaps increased in the past decade; overall override rates INCREASED between 2004 and 2013

77
Q

What are the recommendations for alerts?

A
  1. Classify alerts into 3 levels – minor, moderate, severe
  2. Develop a core set of critical drug-drug interactions
  3. Classify alerts into active and passive, only make critical alerts active (interruptive)
  4. Conduct training on new improvements
  5. Develop systems with automated feedback/learning to identify and move alerts from active/interruptive to passive/non-interruptive
78
Q

Describe Knowledge Generation (Hersh, 2009)

A

Original research → write up, submit publication → peer review, publish → secondary publication, relinquish copyright, public repository…

79
Q

What is knowledge acquisition (Hersh)?

A

Start with all literature => possibly relevant literature => definitely relevant literature => structured knowledge (divided into information retrieval, information extraction / text mining)

80
Q

What are 4 basic approaches to knowledge modeling and representation?

A
  1. Clinical algorithms
  2. Bayesian statistics
  3. Production rules
  4. Scoring and heuristics
81
Q

What is a clinical algorithm?

A

Follow path through “flow chart”

Elements in chart are nodes - data is gathered at information nodes (squares); decisions are made at decision nodes (diamonds)

Benefits – Knowledge is explicit – Knowledge is easy to encode

• Limitations – No accounting for prior results – Inability to pursue new etiologies, treatments, etc. – New knowledge difficult to generate • Forerunner of modern clinical practice guidelines

82
Q

Describe Bayesian statistics

A

• Based on Bayes' theorem, which calculates probability based on prior probability and new information
• Assumptions of Bayes' theorem
– Conditional independence of findings – no relationship between different findings for a given disease
– Mutual exclusivity of conditions – the patient is assumed to have only one of the candidate diseases

83
Q

What is Bayes’ general theorem?

A

Probability of disease i in the face of evidence E, out of a set of possible j diseases, is:
P(Di | E) = P(Di) × P(E | Di) / Σj [ P(Dj) × P(E | Dj) ]
• Translation of formula: probability of a disease given one or more findings can be calculated from
– The prior probability of the disease – sometimes can be estimated from prevalence of disease
– The probability of findings occurring in the disease

84
Q

What is the implementation and limit of the Bayesian approach?

A
  • Leeds Abdominal Pain System (de Dombal, 1975) – Most successful implementation, used in diagnosis of acute abdominal pain – Performed better than physicians – accuracy 92% vs. clinicians 65-80%, better in 6 of 7 disease categories – But difficult to use and not transportable to other locations (Berg, 1997)
  • Limitations of Bayesian statistics – Findings in a disease are usually not conditionally independent – Diseases themselves may not be mutually exclusive – When multiple findings important in diagnosis, reaches high computational complexity quickly
85
Q

What are PRODUCTION RULES?

A

Knowledge encoded as IF-THEN rules

  • System combines evidence from different rules to arrive at a diagnosis
  • Two types of rule-based ESs:
  – Backward chaining – system pursues a goal and asks questions to reach the goal
  – Forward chaining – similar to clinical algorithms, with the computer following a prescribed path to reach an answer
  • Generic rule: IF test-X shows result-Y THEN conclude Z (with certainty p)
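
A minimal forward-chaining sketch of the generic IF-THEN rule form above; the rules, findings, and certainty factors are invented for illustration and are not MYCIN's actual knowledge base.

# Each rule: if all conditions are among the known findings, assert the conclusion.
rules = [
    ({"fever", "stiff neck"}, ("suspect meningitis", 0.7)),        # hypothetical rule
    ({"cough", "infiltrate on x-ray"}, ("suspect pneumonia", 0.8)), # hypothetical rule
]

findings = {"fever", "stiff neck", "cough"}   # hypothetical patient findings

conclusions = [c for conditions, c in rules if conditions <= findings]
for conclusion, certainty in conclusions:
    print(f"{conclusion} (certainty {certainty})")
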
86
Q

What was the first rule-based ES?

A

MYCIN • PhD dissertation of Shortliffe (1975) and one of the first applications in medical informatics • Major features – Diagnosed the infectious diseases, meningitis and bacteremia – Used backward chaining approach – Asked questions (relentlessly!) in an attempt to reach diagnosis • Evaluation of MYCIN (Yu, 1979) – 10 cases of meningitis assessed by physician experts and MYCIN; output judged by other physician experts – Recommendations of experienced physicians judged acceptable 43-63% of the time, compared with 65% of the time for MYCIN – In no cases did MYCIN fail to recommend an antibiotic that would cover the infection (even if it was not optimal choice)

87
Q

Limitations of rule-based systems

A

• Depth-first searching could lead to focus in wrong area • Rule bases were large and difficult to maintain – MYCIN had 400 rules covering two types of bacterial infection – Approach worked better in constrained domains, such as pulmonary function test interpretation • Systems were slow and time-consuming to use – Rule-based goal seeking could take long time – System also developed prior to era of modern computers and graphical user interfaces, making data entry time-consuming

88
Q

What are some scoring and heuristics employed in CDS systems?

A

• Knowledge is represented as profiles of findings that occur in diseases • There are measures of importance and frequency for each finding in each disease • Found to be most “scalable” approach for comprehensive decision support systems • Examples – INTERNIST-1/QMR, Dxplain, Iliad

89
Q

Describe the history of systems using scoring and heuristics approach

A

• INTERNIST-1 – Original approach, aimed to develop an expert diagnostician in internal medicine (Miller, 1982) – System originally designed to mimic the expertise of an expert diagnostician at the University of Pittsburgh, Dr. Jack Meyers – Evolved into Quick Medical Reference (QMR) where goal changed to using knowledge base explicitly (Miller, 1986) • DxPlain used principles of INTERNIST-1/QMR but developed more disease coverage (Barnett, 1987) – Only system still available: http://www.lcs.mgh.harvard.edu/dxplain.asp • Iliad attempted to add Bayesian statistics to the approach (Warner, 1989)

90
Q

Describe knowledge representation in the INTERNIST-1/QMR system

A

• Disease profiles – findings known to reliably occur in the disease • Findings – from history, exam, and laboratory • Import – each finding has a measure of how important it is to explain (e.g., fever, chest pain) • Properties – e.g., taboos, such as a male cannot get pregnant and a female cannot get prostate cancer • For each finding that occurs in each disease, there are two measures – Evoking strength – the likelihood of a disease given a finding • Scored from 0 (finding non-specific) to 5 (pathognomonic) – Frequency – the likelihood of a finding given a disease • Scored from 1 (occurs rarely) to 5 (occurs in all cases)

91
Q

What is the Internist-1/QMR scoring algorithm?

A

• Initial positive and negative findings are entered by user • A disease hypothesis is created for any disease that has one or more of the positive findings entered • Each disease hypothesis gets a score – Positive component based on evoking strengths of all findings – Negative component of score based on frequency from findings expected to occur but which are designated as absent • A diagnosis is made if the top-ranking diagnosis is >80 points (one pathognomonic finding) above the next-highest one – When diagnosis made, all findings for a disease are removed from the list, and subsequent diagnoses are made • Performed as well as experts in NEJM clinical cases (Miller, 1982)

92
Q

What are the limitations of INTERNIST-1 and evolution to QMR?

A

• Limitations – Long learning curve – Data entry time-consuming – Diagnostic dilemmas not a major proportion of clinician information needs – Knowledge base incomplete • Evolution to QMR (Miller, 1986) – Less value in “case” mode – More value in knowledge exploration mode, e.g., • Rule diseases in and out • Obtain differential diagnoses • Link to more detailed information – Became commercial product but did not succeed in marketplace

93
Q

Describe the history of CDS

A

• By the late 1980s and early 1990s, it was apparent that – Diagnostic process was too complex for computer programs – Systems took long time to use and did not provide information that clinicians truly needed – “Greek Oracle” model was inappropriate to medical usefulness (Miller, 1990) • Recent studies still demonstrate limited utility for expertise-based artificial intelligence – Best systems still cannot pass eighth-grade science tests (Knight, 2016) – Systematic review of differential diagnosis generators shows that, while accurate, they generate limited value (Riches, 2015) • Decision support evolved in the 1990s with recognition of its value within EHR – Rules and algorithms most useful in this context – Evolution from broad-based diagnostic decision support to more focused CDS

94
Q

Should the FDA regulate CDS?

A

• Evolved to "clinical decision support" (CDS) in the 1990s with recognition of their value within EHR
– Production rules → simpler rules
– Algorithms → clinical practice guidelines
• Many grand challenges remain (Sittig, 2008)
• Now being implemented on wider scale in operational EHRs (Osheroff, 2012)
• Should FDA regulate? View so far is that systems akin to textbooks, with humans between patient and system, but could change

95
Q

What are some new CDS approaches?

A

• Isabel (www.isabelhealthcare.com) – "second generation" approach uses
– Natural language processing to map entered text into findings
– List of differential diagnoses with 30 most likely diagnoses grouped by body system, not probability
• "Googling" for a diagnosis? Large quantity of text in Google may hold latent knowledge
– Found in a case study to make diagnosis of a rare condition (Greenwald, 2005)
– When text of NEJM cases entered, 15 of 26 had correct diagnosis in top three suggested (Tang, 2006)
• "Patients like my patient" in EHR may yield similar cases that can inform decisions (Shirts, 2013)
• Patient symptom-checkers – analysis of 23 found deficiencies in triage and diagnosis, often advising care when self-care is reasonable (Semigran, 2013)

96
Q

Describe the Watson system

A

• IBM Watson – evolved out of PIQUANT system used for question answering, DeepQA (Ferrucci, 2010)
– Additional (exhaustive) details in special issues of IBM Journal of Research and Development (Ferrucci, 2012)
• Achieved fame by beating humans at the Jeopardy! television game (Markoff, 2011)
• Has turned to other domains, including healthcare
– Has "graduated" medical school (Cerrato, 2012)
– First results are in (Ferrucci, 2012)
• Trained using several resources from internal medicine: ACP Medicine, PIER, Merck Manual, and MKSAP
• Trained with 5000 questions from Doctor's Dilemma, a Jeopardy!-like competition for medical trainees run by the ACP each year
– Sample question: "Familial adenomatous polyposis is caused by mutations of this gene," with the answer being the APC gene
• Googling the question gives the correct answer at the top of its ranking for this and two other sample questions listed

97
Q

What is Knowledge Management and Maintenance?

A

• Many healthcare organizations and EHR systems maintain knowledge assets in different ways (Wright, 2011) • Recommended practices for CDS and KM include attention to (Ash, 2012) – Workflow – Knowledge management – Data as a foundation for CDS – User-computer interaction – Measurement and metrics – Governance – Translation for collaboration – Meaning of CDS – Roles of special, essential people – Communication, training, and support • Commercial solutions the answer? – e.g., Zynx, Lexicomp, EHR vendors, etc.

98
Q

What are some of the ethical issues around CDS?

A

• Legal, ethical, and regulatory issues – many complex issues, but CDS still viewed as “open loop” system, i.e., clinician between patient and system (Bates, 2011) • Quality and safety issues – recognized need to view health IT as potentially dangerous (Sittig, 2009; Sittig, 2012; IOM, 2012) • Supporting decisions for populations of patients – including use in personal health records (Cushman, 2010)

99
Q

What is the DEFINITION of EBM?

A

• A set of tools and disciplined approach to informing clinical decision-making – Applies the best evidence available – Most recent textbooks – manual (Guyatt, 2014); handbook (Guyatt, 2015) • Allows clinical experience (art) to be integrated with best clinical science • Makes biomedical literature more clinically applicable and relevant • Cannot forget the caveat: “Absence of evidence is not evidence of absence” (Carl Sagan)

100
Q

Describe the Hierarchy of evidence

A

4S, 5S, 6S hierarchy (top to bottom):
– Systems – actionable knowledge (e.g., guidelines, rules, order sets)
– Summaries – evidence-based texts & collections (e.g., textbooks, compendia…)
– Synopses of syntheses – evidence-based abstracts
– Syntheses – systematic reviews & evidence reports
– Synopses of studies – evidence-based journal abstracts
– Studies – original articles in journals (e.g., MEDLINE, journal articles…)

101
Q

Compare background and foreground questions in EBM

A

• Steps include – Phrasing a clinical question that is pertinent and answerable – Identifying evidence to address the question – Critically appraising the evidence to determine if it applies to the patient • Background vs. foreground questions – Background questions ask for general knowledge about a disorder • Usually answered with textbooks and classical review articles – Foreground questions ask for knowledge about managing patients with a disorder • Answered with EBM techniques

102
Q

What are foreground questions in EBM?

A

• Have three or four essential components (PICO) – Patient and/or problem – Intervention – Comparison intervention (if appropriate) – Outcomes • Example – In an elderly patient with congestive heart failure, are beta blockers helpful in reducing morbidity and mortality without excess side effects? • Recent addition of timing and setting, i.e., PICOTS

103
Q

What are the categories of foreground questions?

A

• Intervention (or Treatment or Therapy) – benefit of treatment or prevention • Diagnosis – test diagnosing disease • Harm – etiology or cause of disease • Prognosis – outcome of disease course

104
Q

What are measures of treatment effect?

A

• Usually measured in terms of risk of undesired outcomes, e.g., mortality, recurrence, complications, etc. • Relative measures – relative to control – Relative risk (RR, risk ratio) – risk relative to control • Relative risk reduction – Odds ratio (OR) – odds of having vs. not having event – Hazard ratio (HR) – relative risk adjusted for time • Absolute measures – overall population – Absolute risk reduction (ARR, risk difference) – absolute difference of risk – Number needed to treat (NNT) – how many must be treated for one person to benefit

105
Q

What is the role of CIs?

A

Precision of estimate of treatment effect • True risk for population is unknown; need to assess with sample • Study result gives point estimate, but true result can vary due to chance (and bias if study not performed properly) • Assess possible range of results by calculating confidence interval (CI) – Range of values that includes true value 95% of the time

106
Q

Calculate EER, CER, RR, ARR

A

Example table (counts):

              CV outcome   No CV outcome
Intensive         243          4435
Standard          319          4364

• EER = 243 / 4678 = 0.052 (5.2%)
• CER = 319 / 4683 = 0.068 (6.8%)
• RR = EER / CER = 0.76 (76.0%)
• RRR = 1 – RR = 0.24 (24.0%)
• ARR = CER – EER = 0.068 – 0.052 = 0.016 (1.6%)
• NNT = 1 / ARR = 62
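
A minimal sketch of the same calculations in code, using the counts from the table above:

# Counts from the card: intensive arm vs. standard arm.
eer_events, eer_no_events = 243, 4435   # intensive: CV outcome / no CV outcome
cer_events, cer_no_events = 319, 4364   # standard: CV outcome / no CV outcome

eer = eer_events / (eer_events + eer_no_events)   # experimental event rate
cer = cer_events / (cer_events + cer_no_events)   # control event rate
rr = eer / cer                                    # relative risk
rrr = 1 - rr                                      # relative risk reduction
arr = cer - eer                                   # absolute risk reduction
nnt = 1 / arr                                     # number needed to treat
print(f"EER={eer:.3f} CER={cer:.3f} RR={rr:.2f} RRR={rrr:.2f} ARR={arr:.3f} NNT={nnt:.0f}")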

107
Q

Compare Systematic Review and Meta-Analyses

A

• Often use meta-analysis, which combines results of multiple similar studies • Systematic review ≠ meta-analysis – Studies may be too heterogeneous in terms of patient characteristics, settings, or other factors • When meta-analysis is done, summary measures employed usually include – Odds ratio (OR) or relative risk/risk ratio (RR) for dichotomous variables (i.e., events) – Mean difference (MD) or standardized mean difference (SMD) for continuous variables

108
Q

What is the Cochrane Collaboration?

A

• Most reviews include meta-analysis – Logo based on review of steroids in preterm labor • Each horizontal line represents a single RCT – Span of line indicates CI • All study questions configured relative to vertical line – Line represents OR=1 or MD/SMD=0 – Treatment benefit is to left of line – CI not touching line indicates statistical significance

109
Q

What is a clinical practice guideline?

A

• CPGs are “systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances” (Field, 1990; IOM, 2011) – Usually aim to “normalize care” • May consist of – Series of steps for providing clinical care – Represented as text/tables or algorithms • Steps in construction include – Gathering evidence for important outcomes – Grading quality of that evidence – Ascertaining balance of benefits and harms – Determining strength of recommendation – Implementing and evaluating

110
Q

Describe the GRADE scale

A

Grading of Recommendations Assessment, Development, and Evaluation (GRADE) Qual Evidence: high, moderate, low, very low

111
Q

Describe the USPSTF Recommendation Scale

A

Recommendations based on grading evidence derived from a commissioned systematic review – Grade A – certainty of evidence is high that the magnitude of net benefits is substantial – Grade B – certainty of evidence is moderate that the magnitude of net benefits is either moderate or substantial, or that the certainty of evidence is high that the magnitude of net benefits is moderate – Grade C – certainty of the evidence is either high or moderate that the magnitude of net benefits is small – Grade D – certainty of the evidence is high or moderate that the magnitude of net benefits is either zero or negative – Grade I – the evidence is insufficient to determine the relationship between benefits and harms (i.e., net benefit)

112
Q

What are the limitations of guidelines?

A

• May not apply in complex patients – for 15 common diseases, following best-known guidelines in elderly patients with comorbid diseases may have undesirable effects and implications for pay-for-performance schemes (Boyd, 2005) • May be incomplete or inaccurate – of 11 guidelines for oral medications in Type 2 diabetes, several varied from known best evidence, with those having evidence-based processes being judged of higher quality (Bennett, 2012) • Difficult to implement in EHRs – issues include precise coding of logic and integration into workflow (Maviglia, 2003) • Conflict of interest – For 17 American College of Cardiology/American Heart Association guidelines, 56% of authors had a reported conflict of interest, most commonly being a consultant or a member of an advisory board (Mendelson, 2011)

113
Q

Where are CPGs found?

A

• Medical literature, i.e., PubMed/MEDLINE
• National Guideline Clearinghouse (NGC) – www.guideline.gov
• From many organizations
– Medical specialty societies, e.g.,
• American College of Physicians – https://www.acponline.org/clinical-information/guidelines
• American Heart Association/American College of Cardiology – http://professional.heart.org/professional/GuidelinesStatements/UCM_316885_Guidelines-Statements.jsp
– Government and related organizations, e.g.,
• US Preventive Services Task Force – http://www.uspreventiveservicestaskforce.org
– Healthcare delivery organizations and others

114
Q

What is the basic structure of information retrieval?

A

Queries, Search Engines, Content => metadata (retrieval, indexing)

115
Q

What are the intellectual tasks of IR?

A

• Indexing – Assigning metadata to content items – Can assign • Subjects (terms) – words, phrases from controlled vocabulary • Attributes – e.g., author, source, publication type • Retrieval – Most common approaches use • Boolean – use of AND, OR, NOT • Natural language – words common to query and content

116
Q

What is the classification of knowledge-based content?

A

Bibliographic – By definition rich in metadata • Full-text – Everything on-line • Annotated – Non-text annotated with text or structured text • Aggregations – Bringing together all of the above

117
Q

Describe MEDLINE

A

References to biomedical journal literature – Original medical IR application – launched in 1971, with literature dating back to 1966 (and now some older) – Free to world since 1997 via PubMed – pubmed.gov • Produced by National Library of Medicine (NLM) • Statistics – Over 22 million references to peer-reviewed literature – Over 5,000 journals, mostly English language – About 750,000 new references added yearly • Links to full text of articles and other resources

118
Q

What is annotated content?

A

Non-text annotated with text or structured text, e.g., – Image collections, usually from “visual” specialties – Citation databases, e.g., Science Citation Index – Evidence-based medicine databases, e.g., JAMA Evidence – Clinical decision support, from publishers or vendors – Genomics databases, from NLM and others – Other databases, e.g., ClinicalTrials.gov

119
Q

What are aggregations (EBM)?

A

• Clinical – major publishers now “bundle” their collections • Biomedical research – example is linked databases of NCBI – http://www.ncbi.nlm.nih.gov/gquery/ • Consumer – example is MEDLINEplus from NLM – medlineplus.gov

120
Q

Indexing

A

• Assignment of metadata to content to facilitate retrieval • Two major types – Human indexing with controlled vocabulary • Best known approach is MEDLINE applied by professional indexers using Medical Subject Headings (MeSH) vocabulary – Automated indexing of all words

121
Q

What is the MeSH vocabulary?

A

• Over 26,000 terms, with many synonyms for those terms • Hierarchical, based on 16 trees, e.g., Anatomy, Diseases, Chemicals and Drugs • Contains 83 subheadings, which can be used to make a heading more specific, such as Diagnosis or Therapy • Also includes Publications Types, important for EBM, e.g., Randomized Controlled Trial, Systematic Review • MeSH browser allows exploration – https://meshb.nlm.nih.gov/search

122
Q

How is automated indexing done?

A

• Indexing of all words that occur in content items – In bibliographic databases, will usually include title, abstract, and sometimes other fields, e.g., author or subject heading – In full-text documents, will usually include all text and title • Often use a stop word list to remove common words (e.g., the, and, which) • Some systems “stem” words to root form (e.g., coughs or coughing to cough)

123
Q

Describe information retrieval approaches

A

• Two general approaches – Boolean, set-based, exact-match – Natural language, automated, partial-match • They are not mutually exclusive, e.g., PubMed • Early systems tended to be Boolean – Preferred by power users? • More recently have seen growth of natural language systems – Popular for Web searching

124
Q

How are IR systems evaluated?

A

• Questions often asked (Hersh, 2009) – Is system used? – Are users satisfied? – Do they find relevant information? – Do they complete their desired task? • Most studied group is physicians, with systematic reviews of results (Hersh, 1998, Pluye, 2005) • Most IR evaluation research has focused on retrieval of relevant documents, which may not capture full spectrum of usage

125
Q

What are the relevance-based measures?

A

Recall (sensitivity): R = (# retrieved and relevant docs) / (# relevant documents in collection)
Precision (PPV): P = (# retrieved and relevant docs) / (# retrieved docs)
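
A minimal sketch of the two measures, assuming we already know which retrieved documents are relevant (the document IDs and sets are made up):

# Relevance-based retrieval measures over made-up document IDs.
retrieved = {"d1", "d2", "d3", "d4", "d5"}
relevant = {"d2", "d4", "d6", "d7"}        # all relevant documents in the collection

hits = retrieved & relevant
recall = len(hits) / len(relevant)      # fraction of relevant docs that were found
precision = len(hits) / len(retrieved)  # fraction of retrieved docs that were relevant
print(f"recall={recall:.2f}, precision={precision:.2f}")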

126
Q

What is the definition of a workflow?

A

A process during which documents, information or tasks are passed from one participant to another for action, according to a set of procedural rules (Ref #1)

127
Q

What is workflow analysis?

A

Study of the way documents, information and people related to a process move through an organization, in order to improve efficiency (Ref #2)

128
Q

What is process redesign?

A

Examination and redesign of existing processes and workflows and putting them into action

129
Q

What levels do workflows occur at?

A

• Includes mental and physical tasks • Occurs at three levels – Inter-organizational – Intra-organizational, interpersonal – Individually (intra-personal) • Steps may occur sequentially or simultaneously (Sheehan, 2012) • Include movement of people/actions, information, objects through space & time

130
Q

What is workflow analysis?

A

• Reduces complex process into individual components • Creates visual representation of flow of people, information and objects • Used to detect defects and waste • May be high-level to very detailed • Important to capture variations in addition to expected normal workflow • Study of an EXISTING workflow • Need to capture all aspects of workflow – People and their actions – Information – Objects

131
Q

What is Lean?

A

• Workflow analysis system • Lean (Toyota Production System) – Primary focus is on workflow (value stream) – Waste (muda) vs. Value • Important to observe at the place where the work is performed – gemba (Vorne, 2011-2016; Campbell, 2009; IHI, 2005)

132
Q

What are the major classes of workflow analysis?

A

Theories and Strategies
– Computer science-based approaches
• Petri-nets
• Contextual Design
– Computer-Supported Cooperative Work (CSCW)
• Activity Theory
• Coordination Theory
– Cognitive Science
• Cognitive Task Analysis
• Distributed Cognition and UFuRT
– Organizational Science

133
Q

What are petri-nets?

A

– Electronic capture of workflow where it touches the information system – Requires system use data and a workflow management system to understand workflow – Limited detection of interpersonal or non-system related elements of workflow – Example: Process mining • Uses system log file data to construct event-based depictions of processes using an information system

134
Q

What is contextual design?

A

– Provides framework and techniques for software designers to understand primarily the human elements of workflow – Useful for organizational as well as individual workflow Contextual inquiry => work modelling => consolidation Goal of Contextual Inquiry: Uncover 4 aspects of work 1. Motive behind tasks 2. Patterns used in carrying out tasks 3. Structure that enables task completion 4. Conceptual distinctions between aspects of work

135
Q

What is CSCW?

A

• Computer-Supported Cooperative Work (CSCW) (Sheehan, 2012)
– Goal: to understand the activities of groups engaged in collaborative work activities for the purposes of software design
– Activity Theory
• Humans engage in purposeful activities which are goal-directed and context-specific
• Useful for individual as well as group workflow
– Coordination Theory
• Task-interdependencies among workers result in harmonious goal-achievement
• Useful for group workflow analysis but not for individual workflow analysis

136
Q

What is ActAD?

A

Activity Analysis and Development Framework (ActAD) – 4 Steps 1. Describe activity network 2. Assess development of work activities over time – Determine goals for the new software/tool 3. Identify existing goal conflicts 4. Develop future view of activity network

137
Q

What is an activity checklist?

A

– Focuses software developers on work aspects relevant to software design – Directs observations of work practices – More specific guidance than ActAD Four categories in the checklist Means/ends - Focuses on hierarchical structure of activities Environment - Context of activities Learning, cognition, articulation - Internal cognitive components related to activities External actions related to activities Development - Anticipate changes to actions related to use of the new technology

138
Q

What is coordination theory?

A

• Task-interdependencies among workers result in harmonious goal-achievement • Uncovering task interdependencies can result in identifying new ways to manage them • Focus on – Pre-requisite tasks – Tasks which require shared resources – Tasks that require synchronization • Examines four processes underlying coordination and their components 1. Coordination 2. Group decision-making 3. Communication 4. Perception of common objects • May involve tagging an object to map out process followed where it is used (tracer method)

139
Q

What is cognitive science?

A

Multidisciplinary field – Concentrates on understanding human thought processes – Includes knowledge attainment, memory and problem solving – Patel et al in Shortliffe, 2014; Sheehan, 2012

140
Q

What is cognitive task analysis?

A

Group of methods to examine individual human tasks – Cognitive walkthrough (CW) – Think-aloud protocol (TA)

141
Q

What is a CW (cognitive science)?

A

Cognitive walkthrough (CW) • Performed by a systems analyst • Is a form of workflow inspection • May be performed in the presence of system users (who verify the cognitive walkthrough by the analyst) • Simulates a user’s cognitive processes as they engage in tasks

142
Q

What is the TA protocol (Think-aloud)

A

• Performed by a system user • Is a form of workflow testing • The user verbalizes thought processes as tasks are carried out • Analyst records the verbalization into a visual representation of the user’s mental model

143
Q

What is distributed cognition?

A

– Studies the collaborative nature of human cognition – People and objects constantly interact within a framework of social and cultural practices

144
Q

What is UFuRT?

A

• User, Functional, Representational and Task Analysis • Can be used for workflow analysis at all levels • Four phases 1. Distributed user analysis 2. Distributed functional analysis 3. Distributed task analysis 4. Distributed representational analysis

145
Q

What is organizational science?

A

– Aims to clarify internal organizational structures to influence change and direct process re-design – Two components of organizational routines • Ostensive aspect: general pattern of the routine • Performative aspect: specific actions performed by individual people within specific contexts – Artifacts: physical manifestations of the routine

146
Q

What are some workflow analysis data collection methods?

A

Quantitative ▪ Collected via operational systems ▪ Collected via detached human observer (e.g., counting events) Qualitative ▪ Capture details of everyday work practices ▪ Ethnographic Observation, including participant observation - Attends to meaning, goals, context - Attends to how people communicate

147
Q

What is grounded theory?

A

Grounded Theory – Ethnographic method – Inductive analysis • Opposite of deductive analysis; studies detailed data first before arriving at a hypothesis/theory – Analysis occurs in parallel with data collection – Break data down into much smaller components and code them • Codes are combined/related to categories (or concepts) – Very helpful in uncovering hidden triggers or cultural taboos in workflows

148
Q

What is usability testing?

A

Usability testing – Usability incorporates five attributes that must be evaluated on the information system 1. Learnability – how easy is it to learn? 2. Efficiency – can it make an experienced user very efficient? 3. Memorability – how easily can users remember how to use it? 4. Errors – are these minimized? Are they easily detected? 5. Satisfaction – are users happy with it?

149
Q

What is a simple flowchart?

A

Workflow analysis tool Also known as a process map – Good at representing actions and decisions through time – Well suited to high-level workflow analyses – Less good at detailed workflow analysis where specific people (roles) and their actions/decisions need to be shown

150
Q

What is a swimlane flowchart?

A

– Uses swimlanes to represent the various functions of each person’s role in a workflow – Great tool for picking up redundancies and inefficiencies

151
Q

What are Spaghetti diagrams?

A

– Physical map of movements of people in the workflow – Walking = waste – Poorly configured information systems create a lot of waste

152
Q

What is value stream analysis?

A

Document all steps required to complete a service from beginning to end – Include both steps with and without value – Document time between steps – Creates a value stream map (VSM)

153
Q

What is workflow re-engineering?

A

• Examination and redesign of existing processes and workflows and putting them into action • Fundamental component of – Continuous Quality Improvement (CQI) – Total Quality Management (TQM) – Process Improvement

154
Q

What are some process redesign models?

A

Lean, Six Sigma, ISO, Baldrige, VA-TAMMCS, etc.

155
Q

What are the common steps for process redesign?

A

Steps common to all methods 1. Workflow analysis 2. Determine ideal workflow 3. Gap analysis 4. Mitigate obstacles to ideal state 5. Finalize planned future state 6. Testing 7. Implement change 8. Measure outcomes 9. Determine/describe next future change

156
Q

What is gap analysis?

A

Includes determination of the gaps that exist between current state and your ideal future state • Also includes: – Evaluation of how to close the gaps • Are there obstacles/barriers to the ideal future state? – Action plans to mitigate obstacles, where possible • Not all can be resolved • Use of published tools is helpful (AHRQ)

157
Q

After gap analysis, what steps do you take in process redesign?

A

• Mitigate obstacles to future state where possible • Develop the final details of the change to be made (the final future state plan) – based on your gap analysis and ability to mitigate obstacles • Test against your finalized future state plan • Implement the change • Measure outcomes (pre vs. post data)

158
Q

What is a logic model?

A

Describing the next change in process redesign Picture of how the next change is supposed to work (before workflow analysis or other steps), aka theory of change, roadmap - components: purpose, context, inputs, activities, outputs, effects (short, medium, long)

159
Q

Why does process redesign fail?

A

INADEQUATE or no use of change management strategies; misaligned incentives, poor communication, inadequate training, etc.

160
Q

What is change management?

A

• Approach to transitioning individuals, teams and organizations to a desired future state • Successful process redesign requires the use of change management

161
Q

What is the first step in change management?

A

• Change is more rapid now than ever before – Can increase the level of resistance in systems which are already stressed • Some organizational cultures embrace information technology more than others • Assessment of environmental readiness for change must gauge: – the level of organizational stress – the amount of resources available (human, financial) – the degree to which organizational leadership embraces change Lack of engagement by affected end-users is likely to make change attempts fail • Most change theories focus on people

162
Q

What is the PRECEDE-PROCEED theory?

A

Change management theory PRECEDE: Predisposing, Reinforcing, and Enabling Constructs in Educational/Environmental Diagnosis and Evaluation PRECEDE: Diagnostic phase (5 subphases) 1. Social Assessment 2. Epidemiological Assessment 3. Behavioral and Environmental Assessment 4. Educational and Ecological Assessment 5. Administrative and Policy Assessment PROCEED: Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development PROCEED: Evaluation phase (4 subphases) 1. Implementation of the intervention 2. Process evaluation (Is workflow moving as expected?) 3. Impact evaluation (Change has the expected impact?) 4. Outcome evaluation – Does the planned outcome = actual outcome?

163
Q

When is PRECEDE-PROCEED used?

A

Typically used in community and public health settings for health improvement initiatives • Getting patients (or the general public) to change in order to improve their health – Advantages • Planning process is very prescriptive; unlikely to leave things or people out • Uses a ranking system to facilitate determinants for change at the individual (patient), provider and system levels

164
Q

What is social influence?

A

change in behavior that one person causes in another, intentionally or unintentionally, as a result of the way the changed person perceives themselves in relationship to the influencer, other people and society in general conformity - changing how you behave to be more like others Compliance - where a person does something that they are asked to do by another; decision to comply may be influenced by thoughts of social reward or punishment; person believes that he/she has a choice Obedience - obeying an order from someone that you accept as an authority figure; person believes that he/she does not have a choice

165
Q

What is the social influence model of technology adoption?

A

Conformance to subjective norms plays a central role in technology adoption • In other words: an individual often acts, not as an individual, but as a member of a group with whom he/she identifies – Social influence is at the confluence of 4 social computing phenomena • Action, consensus, cooperation, authority

166
Q

What are the steps in social influence model of technology adoption?

A

Social Computing Action - Actions performed through the use of software, e.g., web browsers, cell phones, information systems Social Computing Consensus - Agreement from all end-users that it is right to carry out the action Social Computing Cooperation - Participating in a way that is in the best interests of the group Social Computing Authority - Authority imposed by the group supersedes traditional authority Foster users' sense that adopting the technology earns acceptance by the medical community – Best used when effects of technology are visible and can be tried out

167
Q

What are complex adaptive systems?

A

Complexity Theory; Systems Theory – Individuals are to organizations as organisms are to ecosystems • Individuals/organisms coexist and depend on each other for system survival – Characteristics • Nonlinear, dynamic, unpredictable • Composed of independent intelligent agents • Goals and behaviors of a single person/organism often conflict • Adaptation and learning → self-organization • No single point of control – E.g., healthcare, internet, embryo Analyzes complex relationships between components of a system – Often tries to apply mathematics to systems – Ease of access to information will improve performance of the complex adaptive system – Incentives are essential to productivity and wellness • More likely to be intrinsic than extrinsic (see section 4A on motivation) – Focuses on creating the conditions that foster adoption of change iteratively – Most helpful in the planning phase of a change; also helpful at early implementation – Prepares for unpredictable behavior and fosters adaptations to it

168
Q

What is diffusion of innovation theory?

A

– Innovation = change – Five most influential characteristics of innovations for affected end-users 1. Perceived benefit of change 2. Observability of the innovation 3. Compatibility of the change with current organizational culture and personal beliefs 4. Level of simplicity of the innovation 5. Trialability of the innovation (can you test it?) innovators - 2.5% early adopters - 13.5% early majority - 34% late majority - 34% laggards - 16% Tipping point - 15-20%

169
Q

How do you describe grief in change?

A

Kubler-Ross grief cycle - DABDA 1. denial 2. anger 3. bargaining 4. depression 5. acceptance - for some end-users, letting go of old workflows may cause significant grief

170
Q

What is Bridges’ Transition theory?

A

– “Managing Transitions” by William Bridges, PhD (1991) – Psychological transitions of people are more difficult than the technology change itself • Think 80-20 rule (80% people; 20% technology) • Informatics is 80% sociology (Homer Warner, MD, PhD) – Three phases of change • Ending, losing, letting go (Loss) • Neutral zone – Confusion, chaos, attempt to re-align to change • New beginning – Energy, purpose, embrace change

171
Q

What is Lewin’s Change Theory?

A

– Kurt Lewin, 1930s – Unfreeze • Prepare for change, overcome inertia and resistance – Change • Uncomfortable confusion and transition – Re-freeze • Post-change circumstances crystallize; increasing comfort with outcome

172
Q

What is Kotter and Schlesinger’s change management strategy?

A

• Diagnose resistance – Four most common reasons • Parochial self-interest • Misunderstanding and lack of trust • Different assessments of perceived benefit • Low tolerance for change • Deal with resistance – Education and communication – Participation and involvement – Facilitation and support – Negotiation and agreement – Manipulation • Co-optation – Form of manipulation – Giving a key role in change design or implementation – Explicit and implicit coercion • Choose your strategy - continuum – Fast vs. slow implementation – Factors affecting decision depend on data from organizational culture/behavior assessment • Amount and type of expected resistance • Power and political capital of initiators vs. resisters • The amount of energy needed to implement • Stakes (consequences of not making change)

173
Q

What are the three steps for Kotter’s change management?

A

Phase 1: create climate for change Phase 2: engage & enable organization Phase 3: implement and sustain changes

174
Q

What are barriers and facilitators to adoption for informatics systems?

A

Barriers - time consuming, perceived lack of utility, transition of data… Facilitators - efficiency, organization size… perceived utility, error reduction…

175
Q

What are 6 barriers to effective implementation (adoption)?

A

Qualis Health experience in primary care settings (Hummel, 2012) – Six barriers to effective implementation (adoption) • Leadership issues • Workflow issues • Provider issues • Training issues • Data interface issues • User interface issues - mitigation strategies for each

176
Q

What are RECs?

A

Meaningful Use established a national program of Regional Extension Centers (RECs) to help small practices with EHR implementation Torda et al. - REC experience with barriers and facilitators of adoption in small practices

177
Q

How many deaths are due to medical errors in America?

A

Medical errors are harmful and costly – 98,000 deaths and 1 million injuries annually, at a cost of $29 billion (IOM, “To Err is Human” 1999) Put that in context (CDC, 2010, http://www.cdc.gov/nchs/fastats/deaths.htm) – 633,842 Heart Disease – 595,930 Cancer – 155,041 Chronic lower respiratory diseases – 146,571 Accidents (unintentional injuries) – 140,323 Stroke – 110,561 Alzheimer’s disease – 98,000 Deaths Due to Medical Error – 79,535 Diabetes

178
Q

Describe ADEs

A

Two seminal works • IOM 1999, “To Err is Human” • IOM 2001, “Crossing the Quality Chasm” – Adverse Drug Events (ADE) • Hospital Adverse Drug Events: – Classen 1997: “380,000 preventable adverse drug events annually” – Bates 1995: “450,000 preventable ADEs” • Long-term Care Facilities – Gurwitz 2005: “800,000 ADEs annually” • Outpatient Care – Gurwitz 2003: “Among Medicare patients alone, 530,000 ADEs” – Impact of ADEs • $3.6 Billion in additional expenses due to hospital ADEs

179
Q

What are some types of IOM errors?

A

• Diagnostic – Error or delay in diagnosis – Failure to employ indicated tests – Use of outmoded tests or therapy – Failure to act on results of monitoring or testing • Treatment – Error in the performance of an operation, procedure, or test – Error in administering the treatment – Error in the dose or method of using a drug – Avoidable delay in treatment or in responding to an abnormal test – Inappropriate (not indicated) care • Preventive – Failure to provide prophylactic treatment – Inadequate monitoring or follow-up of treatment • Other – Failure of communication – Equipment failure – Other system failure

180
Q

How is healthcare system value defined?

A

• Value = Quality / Cost – Care can have poor value if it either delivers poor quality or has excess cost (or both) • Compared to other developed countries, US healthcare system lags in many markers of healthcare value

181
Q

Compare the US with other healthcare systems

A

US spends 2.5x the OECD average (17.6% of GDP in 2010), 1.5x as much as any other country; US spends much more on health than expected from its GDP. Hospital service prices are 60% higher than the average of 12 OECD countries. US performs well for breast cancer and colorectal cancer, but primary care performs less well (e.g., avoidable hospital admissions for asthma and COPD)

182
Q

Describe inequality in US Healthcare

A

Healthcare disparities between 2003-2006 cost $229 billion - infants born to African-American women are 1.5-3 times more likely to die - African-American men are >2x as likely to die of prostate cancer - Hispanic women are >2x as likely to be diagnosed with cervical cancer

183
Q

What is quality?

A

“The degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.”

184
Q

What are the IOM quality domains?

A

• Safe – 1st IOM Report “To Err is Human” (1999): 1M injured, 44K-98K die annually • Effective / Reliable – Care is evidence-based; benefits outweigh risks – Care is consistent – patients receive the same standard of care regardless of where, when, and from whom • Patient-centered – Meet individual needs, incorporates values/preferences – Tailored to language, level of education – Focus on emotional support, pain relief, attention to suffering, family support • Efficient – avoid wastefulness and redundancy, match access to demand • Timely – avoid long waits, scheduling delays, barriers to care • Equitable – at population and individual level – reduce disparities attributable to age, gender, race, education, disability, sexual orientation, etc.

185
Q

Who are quality stakeholders / orgs?

A

• CMS – Centers for Medicare & Medicaid Services – Have published clinical quality measures (CQMs) that define appropriate use of EHR technology to support clinical practice – Submitting CQM data is required in order to receive Meaningful Use incentive in Stage 1 and Stage 2 – Derived from NQF measures – Administers the EHR Incentive Program, total funds ~ $21 billion • NQF – National Quality Forum – collects and standardizes quality measures in a tool known as QPS or Quality Positioning System – Each NQF-endorsed measure has an NQF number, defined steward, and update / revision cycle • Joint Commission – Non-profit that accredits US healthcare organizations – Established National Patient Safety Goals (NPSG) • Ex: reduction of MDRO, catheter-related bloodstream infections, surgical site infections • Leapfrog Group – Voluntary program that described “4 leaps” that would improve safety and quality of US healthcare system • CPOE – recommend a list of CPOE functions/safeguards • Evidence-based hospital referral – recommend referring complex cases to high volume and high quality health care facilities • ICU Physician Staffing – recommend staffing ICU with intensivists • Leapfrog Safe Practice Score (a list of NQF-endorsed safe practices) • NCQA – National Committee for Quality Assurance – Publish and maintain Healthcare Effectiveness Data and Information Set (HEDIS) performance measures – Intent is to allow consumers to benchmark health plans – Process of physician and hospital accreditation – HEDIS measures are required of CMS “Medicare Advantage” subcontractors, like HMOs • ONC – Office of the National Coordinator for Health IT – Established in 2004 by executive order, mandated in 2009 in the HITECH Act (Title XIII of ARRA, “the stimulus package”) – Oversees national activities to promote HIT and healthcare information exchange – Established certification criteria for EHR – Established HIE standards

186
Q

Describe NQF measures

A

NQF #0002 – Appropriate Test for Children with Pharyngitis – Measure Steward – NCQA – Measure description – “percentage of children 2-18 who received a diagnosis of pharyngitis, had strep testing, and received abx” – Numerator – “a group A strep test was performed in the 7 day period from 3 days before to 3 days after index episode” – Denominator – “children age 2 to 18 as of 6mo prior to measurement period who had an outpatient or ED visit with only a diagnosis of pharyngitis” – Exclusions – Risk Adjustment – Additional Classifications (condition, care setting, data source, etc.)
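
To make the numerator / denominator / exclusion structure concrete, here is a minimal scoring sketch under assumed data; the patient records and field names are hypothetical and are not part of the NQF specification.

```python
# Minimal sketch of scoring an NQF-style measure (hypothetical records and field names).
# Exclusions are removed from the denominator; the rate is numerator / denominator.
patients = [
    {"id": 1, "in_denominator": True, "excluded": False, "strep_test_in_window": True},
    {"id": 2, "in_denominator": True, "excluded": False, "strep_test_in_window": False},
    {"id": 3, "in_denominator": True, "excluded": True,  "strep_test_in_window": False},
]

eligible = [p for p in patients if p["in_denominator"] and not p["excluded"]]
numerator = sum(p["strep_test_in_window"] for p in eligible)
rate = numerator / len(eligible) if eligible else 0.0
print(f"Measure rate: {numerator}/{len(eligible)} = {rate:.0%}")  # -> 1/2 = 50%
```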

187
Q

What is MACRA?

A

• MACRA = Medicare Access and CHIP Reauthorization Act of 2015 – New payment program that incentivizes value and patient-centered care • Starting in 2019, physicians will receive payments either through MIPS or APMs – MIPS = Merit Based Incentive Payment System – APM = Alternative Payment Models • MACRA rates physicians based on NCQA quality measures, including HEDIS performance scores, as well as PCMH (Patient Centered Medical Home), PCSP (Patient Centered Specialty Practice) designation, and MU (Meaningful Use)

188
Q

What is MIPS Consolidation?

A

MIPS - Merit-based Incentive Payment System PQRS - Physician Quality Reporting System VBPM - Value-Based Payment Modifier MU - Meaningful Use

189
Q

What are error-proofing concepts?

A

James T. Reason’s “Swiss Cheese” Model – Latent and active failures are like “holes in the cheese” – Processes, safeguards, and workflows are “layers of cheese” – Accidents / errors occur when the latent and active failures in different layers line up, allowing hazards to lead to losses. WHO Surgical checklist - death rates dropped from 3.7% to 1.4%

190
Q

What are Failure Mode & Effects Analysis?

A

FMEA - devised by US military in 1949 - used in aerospace, automotive industry - later adopted for healthcare use - Modes of failure in a process can be risk-prioritized according to severity of the failure, frequency of occurrence, and detectability

191
Q

What are the steps to FMEA?

A

• Step 1: Create detailed flow diagram of a process • Step 2: For each step, describe what happens if process fails • Step 3: Rate each failure on a standardized scale x 3 – Severity of harm if failure occurs (S) • 1=none; 5=fatal – Likelihood of occurrence (O) • 1=rare; 5=common – Inability of existing controls to detect failure (D) • 1=easily detectable; 5=failure would not be evident • Step 4: Calculate Risk Priority Number (RPN = S x O x D) • Example: A fatal, but rare and detectable error = 5 x 1 x 1 = 5
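
A minimal sketch of the RPN calculation and risk-prioritization described above; the failure modes and ratings are hypothetical examples.

```python
# Hypothetical failure modes rated on the 1-5 scales above: (description, S, O, D)
failure_modes = [
    ("Wrong-patient order entry", 5, 1, 1),   # fatal but rare and easily detected -> RPN 5
    ("Duplicate lab order",       2, 4, 2),   # RPN 16
    ("Missed allergy alert",      4, 2, 4),   # RPN 32
]

def rpn(severity, occurrence, detectability):
    """Risk Priority Number = S x O x D; higher values are addressed first."""
    return severity * occurrence * detectability

# Risk-prioritize by descending RPN
for name, s, o, d in sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True):
    print(f"{name}: RPN = {rpn(s, o, d)}")
```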

192
Q

What is the Donabedian Quality Framework?

A

• Structure –Attributes of setting in which care occurs – Number of specialists for a given patient population – Number of clinical guidelines implemented • Process – How care is actually given and received – Proportion of diabetic patients who are screened for proteinuria – Proportion of children with otitis media who are treated appropriately with narrow-spectrum penicillins • Outcome – Effects of care on patient status – Intermediate measures • HbA1c results for diabetic patients • Lipid profile results for patients with hyperlipidemia – End measures • Quality of life for patients with degenerative joint disease • Functional status for stroke patients • Patient satisfaction Goal: “All emergency department rooms should be stocked with equipment for bag-mask ventilation” – Structure: “does your hospital have a policy or standard that specifies what equipment should be in every room in the ED”? – Process: “In weekly audits of ED rooms, what percent of the time is a room found to be improperly stocked?” – Outcome: “How many occurrences are there annually of trauma/resuscitation events in the ED where a bad outcome was attributed to missing airway equipment?”

193
Q

What are lagging vs. leading indicators?

A

Leading Indicator: An indicator that anticipates future events, changes detectable before the events occur. – Examples: physical activity, weight, immunizations, antibiotics given prior to surgery, timely corticosteroid treatments for acute asthma, etc. • Lagging Indicator: An indicator that follows an event. – Examples: infections (lagging) caused by hand washing rate (leading); ventilator acquired pneumonia; complication rates, asthma hospitalization or revisit rates • Compare “Leading” to “Process”, “Lagging” to “Outcome” – Process/Leading: rate of pediatric immunization – Outcome/Lagging: rates of pertussis and measles in a community

194
Q

What are the attributes of good indicators?

A

• Definitions are agreed upon • Optimally sensitive and specific • Valid – does the indicator discriminate between good and bad quality? • Reliable – are repeated measurements stable, reproducible, consistent? • Relates to identifiable user events (cause → effect) • Permits useful comparison • Evidence-based

195
Q

Describe the historical development of QI

A

• Informed by work in manufacturing, process control • 1890s – Frederick Taylor – “Scientific Management” movement – By modern standards, he thought poorly of workers • “In the majority of cases this man deliberately plans to do as little as he safely can” • “When he tells you to pick up a pig and walk, you pick it up and walk, and when he tells you to sit down and rest, you sit down. You do that right through the day. And what’s more, no back talk” • 1930s –Walter Shewhart, Western Electric Co. – Statistical Process Control – Creator of control charts • 1950s – Taiichi Ohno – Developed Toyota Production System, aka “Toyota Lean” – System focuses on removing all activity that has no value, contributes to waste or “muda” • 1970s – W. Edwards Deming – Theory of Improvement – Plan-Do-Study-Act cycle for learning and improvement – Hired as consultant to improve production methods in post-WWII Japan

196
Q

What are Shewhart / Control Charts?

A

• Not a hypothesis test • Definitions of “common cause” and “special cause” are based in statistics • When monitored over time, an indicator will fluctuate around an average value, defined by upper and lower control limits • UCL: +3 sigma • LCL: -3 sigma • 99.73% observations fall within -3 to +3 sigma Warning Limits • UWL: +2 sigma • LWL: -2 sigma • -2 to +2 sigma: 95% observations
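
A minimal sketch of the center line and warning/control limits described above, assuming an individuals-style chart where sigma is taken as the sample standard deviation (real I charts usually estimate sigma from the average moving range); the data are hypothetical.

```python
import statistics

observations = [12.1, 11.8, 12.4, 12.0, 11.9, 12.6, 12.2, 11.7, 12.3, 12.0]  # hypothetical

center = statistics.mean(observations)          # center line (CL)
sigma = statistics.stdev(observations)          # simplification; see note above

ucl, lcl = center + 3 * sigma, center - 3 * sigma   # control limits: ~99.73% of common-cause points
uwl, lwl = center + 2 * sigma, center - 2 * sigma   # warning limits: ~95% of common-cause points

outside = [x for x in observations if x > ucl or x < lcl]
print(f"CL={center:.2f} UCL={ucl:.2f} LCL={lcl:.2f} UWL={uwl:.2f} LWL={lwl:.2f}")
print("Points outside control limits:", outside)
```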

197
Q

How do you calculate CL/WL?

A

The control-limit formula depends on which chart you choose, and the chart is chosen by data type: C chart - number of nonconformities per equal-sized period, no denominator U chart - nonconformities per unit, no denominator, unequal area of opportunity P chart - proportion nonconforming, known denominator, equal or unequal subgroups I chart - continuous data with subgroup size of 1 (aka X chart) S chart / X-bar chart - continuous data with subgroup size > 1, equal / unequal subgroups (see the sketch below)
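
As an illustration of how the limits differ by chart type, here is a sketch using the textbook formulas for a C chart (c-bar ± 3·sqrt(c-bar)) and a P chart (p-bar ± 3·sqrt(p-bar·(1−p-bar)/n)); the counts are hypothetical.

```python
from math import sqrt

# C chart (count of nonconformities per equal-sized period): limits = c_bar +/- 3*sqrt(c_bar)
injuries_per_month = [3, 5, 2, 4, 6, 3, 4]                      # hypothetical counts
c_bar = sum(injuries_per_month) / len(injuries_per_month)
c_ucl, c_lcl = c_bar + 3 * sqrt(c_bar), max(0.0, c_bar - 3 * sqrt(c_bar))

# P chart (proportion nonconforming, known denominator): limits vary with subgroup size n
med_rec_done = [45, 50, 38]                                      # hypothetical numerators
encounters   = [60, 62, 55]                                      # hypothetical denominators
p_bar = sum(med_rec_done) / sum(encounters)
p_limits = [(min(1.0, p_bar + 3 * sqrt(p_bar * (1 - p_bar) / n)),
             max(0.0, p_bar - 3 * sqrt(p_bar * (1 - p_bar) / n))) for n in encounters]

print(f"C chart: CL={c_bar:.2f} UCL={c_ucl:.2f} LCL={c_lcl:.2f}")
print(f"P chart: CL={p_bar:.2f} limits per subgroup={[(round(u, 2), round(l, 2)) for u, l in p_limits]}")
```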

198
Q

What are some control chart examples?

A

Number of workplace injuries per month – Count data → number of injuries, unknown denominator (employee can be injured multiple times) – “Equal area of opportunity” → workplace doesn’t change size, time period is fixed – Therefore, you would use a C Chart (think “Count”) • Number of line infections per 1000 patient days – Count data → # of line infections, but unknown denominator (no limit on # of infections per patient day) – “Unequal area of opportunity” → # patient days changes per observation period – Therefore, you would use a U Chart (think “Unequal”) • Proportion of patients who had medication reconciliation performed per encounter – Count data with known denominator → total number of patients/encounters is known – Numerator can’t exceed denominator → Med Rec only performed once per encounter – Therefore, you would use a P Chart (think “Proportion” Chart) • Variation in Patient Days per month – Not a count or classification of nonconformities – Has a “scale” (days, time) – Measurement is individual patient days, not an average → subgroup size = 1 – Therefore, you would choose an I Chart (think “Individuals”) • Time from ED to OR for sequential cases of isolated femur fracture – Scale = time – Individual, sequential measurements (x-axis is each femur fracture case in sequence) – Therefore, you would choose an I Chart • Average turnaround time for STAT CBC tests per month – Time is continuous scale – Measurements are an average, not individual / sequential observations – Therefore, you would choose an X-bar Chart (X Chart) • Average cost per appendicitis case per month – Scale = cost – Average of multiple measures – Therefore, you would choose an X-bar Chart (X Chart)
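
A small helper that mirrors the selection logic worked through above; the boolean attribute names are assumptions made for this sketch, not standard SPC terminology.

```python
def choose_chart(count_data, known_denominator=False, equal_opportunity=False, averaged=False):
    """Return the control chart suggested by the reasoning in the examples above."""
    if count_data:
        if known_denominator:
            return "P chart"                       # proportion nonconforming
        return "C chart" if equal_opportunity else "U chart"
    return "X-bar chart" if averaged else "I chart"  # continuous (scale) data

print(choose_chart(count_data=True, equal_opportunity=True))    # injuries/month -> C chart
print(choose_chart(count_data=True))                            # infections/1000 pt-days -> U chart
print(choose_chart(count_data=True, known_denominator=True))    # med rec per encounter -> P chart
print(choose_chart(count_data=False))                           # ED-to-OR time, sequential -> I chart
print(choose_chart(count_data=False, averaged=True))            # avg turnaround time -> X-bar chart
```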

199
Q

What are common cause/special cause fluctuations in control charts?

A

• “Common Cause” Fluctuation – Within UCL and LCL (99.73% of random fluctuation should fall within 3-sigma) – AND has no unnatural patterns • “Special Cause” Fluctuation – Falls outside UCL or LCL – OR meets criteria for any “special cause” pattern • Various “special cause” patterns exist - single point outside 3-sigma, 6 points in a row on one side of the center line, etc.

200
Q

What terms are used to describe special cause?

A

Shift – “a run of 6 or more points on same side of center line” • Trend – “five consecutive points going in same direction” • Run - “too few or too many events crossing the center line” • Cycle – periodicity in data suggests special cause – E.g.: “difference in STAT lab delays during night shift” • Pattern – cycles in data attributable to other factors besides time – E.g.: “higher override rates when a specific pharmacist is on duty”
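
A minimal sketch of two of the rules above, using the run lengths given on this card (shift = 6 or more points on one side of the center line; trend = 5 consecutive points in one direction); the data are hypothetical and rule thresholds vary between sources.

```python
def has_shift(points, center, run=6):
    """True if `run`+ consecutive points fall on the same side of the center line."""
    streak, last_side = 0, 0
    for x in points:
        side = (x > center) - (x < center)      # +1 above, -1 below, 0 on the line
        streak = streak + 1 if side == last_side != 0 else (1 if side else 0)
        last_side = side
        if streak >= run:
            return True
    return False

def has_trend(points, run=5):
    """True if `run`+ consecutive points all increase (or all decrease)."""
    up = down = 1
    for prev, cur in zip(points, points[1:]):
        up = up + 1 if cur > prev else 1
        down = down + 1 if cur < prev else 1
        if up >= run or down >= run:
            return True
    return False

data = [10, 11, 12, 13, 14, 15, 9, 10]               # hypothetical monthly values
print(has_shift(data, center=12), has_trend(data))    # -> False True
```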

201
Q

What is the value of using control charts?

A

Can trend defects and process performance over time; after a run of observations (e.g., 11 months), calculate the mean, deviation, and improvement; determine whether deviations are common cause or special cause

202
Q

What is an Ishikawa Chart?

A

Cause-Effect / Ishikawa / Fishbone diagram – Identify possible targets for improvement – Trace back to root cause by asking “Five Whys” – Represent as an outcome (head) and domains (bones)

203
Q

What is a Pareto Chart?

A

Pareto Chart – Frequency-sorted graph of events with a cumulative percent line – Origin of the “80:20” rule – Used commonly to identify the most valuable targets for improvement
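
A minimal sketch of the frequency sort and cumulative-percent line behind a Pareto chart; the event categories and counts are hypothetical.

```python
from collections import Counter

events = ["med error", "med error", "fall", "med error", "wrong label",
          "fall", "med error", "delay", "med error", "fall"]   # hypothetical incident log

counts = Counter(events).most_common()     # frequency-sorted, descending
total = sum(n for _, n in counts)

cumulative = 0
for category, n in counts:
    cumulative += n
    print(f"{category:12s} {n:2d}  cumulative {100 * cumulative / total:5.1f}%")
```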

204
Q

What is the 5 Whys?

A

Problem: The vehicle won’t start – 1st Why? The battery is dead – 2nd Why? The alternator is not functioning – 3rd Why? The alternator belt is broken – 4th Why? The alternator belt was well beyond its useful service life and not replaced – 5th Why? The vehicle was not maintained according to the recommended service schedule (Root Cause) • Solution: start to maintain according to schedule – 6th Why? Replacement parts are not available because of the extreme age of the vehicle (optional footnote) • Solution: purchase a different vehicle that is easier to maintain

205
Q

What is a key driver diagram?

A

A branching diagram that links an aim to its key drivers and specific interventions, e.g.: lose weight => balance calories in/out => less soda/snacks, more exercise

206
Q

What is PDSA?

A

Improvement methodology: Plan, Do, Study, Act Key to improvement is small, repeated cycles to select targets, improve on a small scale, implement widely, and measure outcome • IHI reference: http://www.ihi.org/knowledge/Pages/HowtoImprove/ • Steps: – Form the team – Set Aims – time specific and measurable – Establish measures (ideally, these should be good indicators) – Select target for change/improvement (use FMEA, Pareto, Fishbone, and other techniques to identify targets) – Plan – Establish objectives, processes, expectations – Do – Implement the plan, collect data for analysis – Study / Check – look at the results and compare against expected results – Act – request corrective actions, disseminate results to all areas

207
Q

What is Six Sigma?

A

• Developed by Motorola in the 1980s • Name comes from ideal of having a process in control within six sigma (a “perfect” process) – 3.4 defects per million opportunities, or 99.99966% error free. • Steps – DMAIC (note some similarities to PDSA/PDCA) – Define – project charter, needs, scope, goals – Measure – data collection plan, sources of data to measure defects, design control charts to monitor process – Analyze – identify deviation from standards, sources of process variation – Improve – identify creative solutions, implement plans – Control – process is updated; policies, guidelines, error-proofing put in place
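
A minimal sketch of the defects-per-million-opportunities (DPMO) arithmetic behind the 3.4-per-million target; the counts are hypothetical.

```python
defects = 17                    # hypothetical defect count
opportunities = 250_000         # hypothetical opportunities for a defect

dpmo = defects / opportunities * 1_000_000
yield_pct = 100 * (1 - defects / opportunities)
print(f"DPMO = {dpmo:.1f} (six sigma target: 3.4); process yield = {yield_pct:.4f}%")
```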

208
Q

What is Lean methodology?

A

Taiichi Ohno, Toyota Motor Corporation Engineer in 1950s • Remove all non-value added activities – Muda – “uselessness, wastefulness” – Mura – “irregularity, unevenness” – Muri – “unreasonable, burdensome work”

209
Q

What are the types of Muda

A

Seven Types of Muda 1. Overproduction / underproduction 2. Inventory (ex: too much inventory of a perishable good in stock) 3. Repairs / rejects (assembly mistakes) 4. Motion (poor work area ergonomics) 5. Processing (e.g. outdated policies, procedures) 6. Waiting (patients languishing in a waiting room) 7. Transport (transporting patients unnecessarily)

210
Q

What is value stream mapping?

A

Graphical depiction of inputs, throughputs, outputs • Highlights opportunities for improvement • Frontline staff bring forth ideas for improvement • Tests of change implemented as “kaizens” or “change for the better” – small improvements, rapid adaptation to results, continuous quality improvement

211
Q

What is Kaizen?

A

Standardize operational activities • Measure operation • Compare measurements to requirements • Engage frontline staff in identifying opportunities to improve • When improvements work, make them the new standard • Repeat

212
Q

What are the supporting conventions for Lean?

A

Kanban cards – Visual indicators that a supply is empty – Ex: red flip tabs on the top of hand sanitizer dispensers • Andon – Visual indication that indicates production status / alerts when assistance is needed – Ex: “X-Ray In Progress” light • Poka-yoke – “mistake avoiding” in design or process – Intentional incompatibility of refill spouts for inhaled anesthetics – Color-coding of medical gases - yellow for air, green for oxygen – The notch on your SIM card that only allows it to be inserted in one orientation