Clinical Assessment and Predictions Flashcards
Clinical Assessment is composed of: (3)
- Tests (IQ/general mental ability, Personality, Neuropsychology)
- Behavioural Assessment + Contextual/Environmental information
- Clinical interview (Structured/Unstructured)
E.g. of Neuropsychology tests
- Bender Gestalt
- Luria-Nebraska
Challenge with clinical assessment
Integrating the Information for Decision Making
Phases in Decision Making (2)
- Data collection
- Data integration and prediction
(Phases in Decision Making): Data Collection Phase - Description (2)
- “Mechanical” scores
- E.g. Questionnaires
- “Judgmental” scores
- E.g. Clinical interview
(Phases in Decision Making) Data Integration Phase and Prediction - Description (2)
- Clinical: Integrating material, writing reports with recommendation
- Statistical: Combining information statistically
Myth of Experience
Beyond a certain amount of training, more experience does NOT translate into more accurate diagnoses
Myth of more information
More information does NOT necessarily give more accurate predictions
Myth of configural/pattern judgment
Clinicians believe their decisions rest on complex configural patterns; in fact, their decisions CAN be modelled well by a simple formula
Study: Goldberg (1965). Comparison of clinicians with statistical combinations. Description of study + Findings
- Participants: 13 Ph.D. staff and 16 predoctoral trainees
- Material: 861 MMPI profiles
- Decision: Was the patient neurotic or psychotic?
- Criterion: Official hospital discharge diagnosis (either neurosis or psychosis)
Findings:
(1) No effect of training
(2) All statistical procedures did better than the clinicians’ average
-> Models based on judges’ decision patterns surpassed judges in accuracy, due to their perfect reliability.
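The "models of judges" finding can be sketched in code: fit a simple linear model to a judge's own past calls, then apply it with perfect consistency. Everything below is simulated for illustration; the scale weights, noise level, and threshold are assumptions, not values from Goldberg (1965).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated MMPI-like profiles: 200 patients, 5 scale scores (illustrative only)
profiles = rng.normal(50, 10, size=(200, 5))

# Assume the judge implicitly weights the scales but adds trial-to-trial noise
latent = profiles @ np.array([1.0, 1.0, 1.0, -1.0, -1.0])
judge_calls = (latent + rng.normal(0, 5, size=200) > 45).astype(float)

# "Bootstrap" the judge: least-squares fit of a linear rule to the judge's calls
X = np.column_stack([profiles, np.ones(len(profiles))])  # add intercept column
weights, *_ = np.linalg.lstsq(X, judge_calls, rcond=None)

# The fitted rule reproduces the judge's policy without the judge's inconsistency
model_calls = (X @ weights > 0.5).astype(float)
agreement = (model_calls == judge_calls).mean()
print(f"model-judge agreement: {agreement:.2f}")
```

Because the fitted rule strips out the random trial-to-trial noise in the judge's calls, it can out-predict the judge on the actual criterion — the "perfect reliability" point above.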
Goldberg (1965): What’s the Goldberg rule and what role did it have in the experiment?
Goldberg rule = stat rule for diagnosis (Add scores from 3 MMPI scales, subtract 2 others; score ≥ 45 = psychotic, < 45 = neurotic)
=> Judges, even after 4000 practice trials, couldn’t surpass the Goldberg Rule.
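As a sketch, the rule is a five-term linear index with a fixed cutoff. The specific scales used below (L, Pa, Sc added; Hy, Pt subtracted) are the commonly reported form of the Goldberg index; treat the scale names as an assumption, since the card above does not name them.

```python
def goldberg_rule(l_scale, pa, sc, hy, pt):
    """Classify an MMPI profile as psychotic vs. neurotic.

    Commonly reported form of the Goldberg index:
    L + Pa + Sc - Hy - Pt, cutoff 45 (scale choice assumed here).
    """
    index = l_scale + pa + sc - hy - pt
    return "psychotic" if index >= 45 else "neurotic"

# Illustrative T-scores, not real patient data
print(goldberg_rule(50, 65, 70, 60, 55))  # index = 70 -> "psychotic"
print(goldberg_rule(45, 50, 50, 55, 60))  # index = 30 -> "neurotic"
```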
Goldberg (1965): Are these results isolated or general?
Meehl (1965) - Meta-Analysis
N = 51 studies in different domains
-> Statistical BETTER than clinical: 33
-> Statistical EQUAL to clinical: 17
-> Statistical WORSE than clinical: 0
Meehl et al.: Introduced the ____ comparison
clinical vs. actuarial comparison
Meehl et al. established two conditions for fair comparison between clinical and actuarial judgment
- Same Data Basis: Both methods should use the same data for judgment, though their development may rely on different datasets.
- Cross-Validation: Apply decisions to new contexts (not only known outcomes) to ensure it would work in a real world setting
Yes, statistical prediction seems better than clinical judgment. HOWEVER, there seems to be a limit on the ____ of those studies
Generalizability
-> Many actuarial rules are context-specific, though some (e.g., Goldberg Rule) have broader applications.
Meehl et al - Meta analysis: Are These Results Still True?
Grove et al. (2000):
Statistical prediction > Clinical prediction in 33-47% of studies
Clinical prediction > Statistical prediction in 6-16% of studies
A hybrid of clinical and actuarial methods is possible but often impractical, why?
- Impractical for dichotomous decisions (e.g., prescribing medication, granting parole).
- Agreement between methods makes combination unnecessary; disagreement forces a choice between one or the other.
Meehl et al - Meta analysis: Statistical prediction improves accuracy by about ____
10%
Why Aren’t Clinicians Better at Prediction? (3 errors)
(1) Do not apply their knowledge uniformly or reliably
(2) Overweight positive instances
(3) Similarity Heuristic
(Why Aren’t Clinicians Better at Prediction?) Do not apply their knowledge uniformly or reliably, explain. (3)
- Distraction
- Fatigue
- Mood
[Why Aren’t Clinicians Better at Prediction?] Overweight positive instances, explain. (3)
(1) Do NOT consider false positives
(2) Do NOT consider base rates
(3) Can appear correct even with an INVALID predictor, to the extent that the guess is consistent with the base rate and the base rate is high.
E.g., If Marc receives a score indicating moderate depression, will he attempt suicide?
→ People remember the true positives and say yes
→ BUT other types of data are relevant: e.g. depressed individuals who DIDN’T attempt suicide (FALSE positives) and individuals who weren’t depressed but who DID attempt suicide (FALSE negatives).
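The base-rate point can be made with simple arithmetic. All numbers below — a 2% base rate, 80% sensitivity, and a 20% false-positive rate — are hypothetical, chosen only to show how false positives swamp true positives when the base rate is low.

```python
n = 1000                # hypothetical cohort size
base_rate = 0.02        # assumed: 2% of patients attempt suicide
sensitivity = 0.80      # assumed: P(flagged "moderate depression" | attempter)
fp_rate = 0.20          # assumed: P(flagged | non-attempter)

true_pos = n * base_rate * sensitivity          # 16 attempters correctly flagged
false_pos = n * (1 - base_rate) * fp_rate       # 196 non-attempters also flagged
ppv = true_pos / (true_pos + false_pos)         # P(attempt | flagged)
print(f"P(attempt | flagged) = {ppv:.2f}")      # ≈ 0.08
```

Remembering only the true positives means seeing the 16 hits; the 196 false positives make the flag far less predictive than it feels.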
(Why Aren’t Clinicians Better at Prediction?) Similarity Heuristic
I.e. Individuals predict future behaviour that RESEMBLES test information (E.g. The test-taker tells violent stories on TAT → Prediction = person will assault others)
-> Ignores base rates
-> Ignores VALIDITY of test information
Even if they (wrongfully) use the Similarity heuristic, clinicians might still be correct about the diagnosis IF: _____
The base rate for the phenomenon is HIGH, OR the test information is in fact VALID
Conclusion: Why Aren’t Clinicians Better at Prediction? (3 bad practices)
- We overweight successful predictions & do NOT remember incorrect predictions
- We do NOT think about base rates
- We do NOT use all the available information
Improving Clinical (human) Judgment (3)
- Systematically consider alternatives
- Collect feedback about decisions/predictions
- Think about statistical prediction issues
What Can Clinicians (Humans) Do Well? (3)
- Provide input into statistical model
- Generate hypotheses
- Provide prediction when no formula (yet) exists