Clinical Assessment and Predictions Flashcards

1
Q

Clinical Assessment is composed of: (3)

A
  1. Tests (IQ/general mental ability, Personality, Neuropsychology)
  2. Behavioural Assessment + Contextual/Environmental information
  3. Clinical interview (Structured/Unstructured)
2
Q

Examples of neuropsychological tests

A
  • Bender Gestalt
  • Luria-Nebraska
3
Q

Challenge with clinical assessment

A

Integrating the Information for Decision Making

4
Q

Phases in Decision Making (2)

A
  • Data collection
  • Data integration and prediction
5
Q

(Phases in Decision Making): Data Collection Phase - Description (2)

A
  1. “Mechanical” scores
    • E.g. Questionnaires
  2. “Judgmental” scores
    • E.g. Clinical interview
6
Q

(Phases in Decision Making) Data Integration Phase and Prediction - Description (2)

A
  1. Clinical: Integrating material, writing reports with recommendations
  2. Statistical: Combining information statistically
7
Q

Myth of Experience

A

Beyond a certain amount of training, more experience does NOT translate into more accurate diagnoses

8
Q

Myth of more information

A

More information does NOT necessarily give more accurate predictions

9
Q

Myth of configurality/patterns

A

Clinicians’ decisions CAN be modelled using a formula; the complex configural patterns clinicians claim to use add little beyond a simple linear combination

10
Q

Study: Goldberg (1965). Comparison of clinicians with statistical combinations. Description of study + Findings

A

- Participants: 13 Ph.D. staff and 16 predoctoral trainees
- Material: 861 MMPI profiles
- Decision: Was the patient neurotic or psychotic?
- Criterion: Official hospital discharge diagnosis (either neurosis or psychosis)
Findings:
(1) No effect of training
(2) All statistical procedures did better than the clinicians’ average
-> Models based on judges’ own decision patterns surpassed the judges in accuracy, owing to their perfect reliability.

11
Q

Goldberg (1965): What’s the Goldberg rule and what role did it have in the experiment?

A

Goldberg Rule = statistical rule for diagnosis: add the scores from 3 MMPI scales, subtract 2 others; score ≥ 45 = psychotic, < 45 = neurotic.
=> Even after 4,000 practice trials, judges could not surpass the Goldberg Rule.
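As a minimal sketch, the rule is just fixed arithmetic plus a cutoff. The specific scales below (L + Pa + Sc minus Hy + Pt) are the commonly cited form of Goldberg's index, assumed here rather than stated in the card:

```python
def goldberg_rule(l, pa, sc, hy, pt):
    """Classify an MMPI profile via the Goldberg index.

    Index = (L + Pa + Sc) - (Hy + Pt). The scale choice is the commonly
    cited version of Goldberg (1965) — an assumption, not from the card.
    Index >= 45 -> psychotic, otherwise neurotic.
    """
    index = (l + pa + sc) - (hy + pt)
    return "psychotic" if index >= 45 else "neurotic"

# E.g. scores L=60, Pa=70, Sc=65, Hy=70, Pt=75 give index 50 -> psychotic
print(goldberg_rule(60, 70, 65, 70, 75))
```

The point of the comparison is exactly this mechanical character: the rule applies the same weights to every profile, every time, which is the "perfect reliability" the judges could not match.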

12
Q

Goldberg (1965): Are these results isolated or general?

A

Meehl (1965) - Meta-Analysis
N = 51 studies in different domains
-> Statistical BETTER than clinical: 33
-> Statistical EQUAL to clinical: 17
-> Statistical WORSE than clinical: 0

13
Q

Meehl et al.: Introduced the ____ comparison

A

clinical vs. actuarial comparison

14
Q

Meehl et al. established two conditions for a fair comparison between clinical and actuarial judgment

A
  1. Same Data Basis: Both methods should use the same data for judgment, though their development may rely on different datasets.
  2. Cross-Validation: Apply the decision rule to new cases (not only the outcomes it was built on) to ensure it would work in a real-world setting
15
Q

Yes, statistical prediction seems better than clinical judgment. HOWEVER, there seems to be a limit on the ____ of those studies

A

Generalizability
-> Many actuarial rules are context-specific, though some (e.g., Goldberg Rule) have broader applications.

16
Q

Meehl et al - Meta analysis: Are These Results Still True?

A

Grove et al. (2000):
Statistical prediction > Clinical prediction in 33-47% of studies
Clinical prediction > Statistical prediction in 6-16% of studies

17
Q

A hybrid of clinical and actuarial methods is possible but often impractical, why?

A
  • Impractical for dichotomous decisions (e.g., prescribing medication, granting parole).
  • Agreement between methods makes combination unnecessary; disagreement forces a choice between one or the other.
18
Q

Meehl et al - Meta analysis: Statistical prediction improves accuracy by about ____

A

10%

19
Q

Why Aren’t Clinicians Better at Prediction? (3 errors)

A

(1) Do not apply their knowledge uniformly or reliably
(2) Overweight positive instances
(3) Similarity Heuristic

20
Q

(Why Aren’t Clinicians Better at Prediction?) Do not apply their knowledge uniformly or reliably, explain. (3)

A
  • Distraction
  • Fatigue
  • Mood
21
Q

[Why Aren’t Clinicians Better at Prediction?] Overweight positive instances, explain. (3)

A

(1) Do NOT consider false positives
(2) Do NOT consider base rates
(3) Can be CORRECT even with an INVALID predictor, to the extent that the guess is consistent with the base rate and the base rate is high.
-
E.g., If Marc receives a score indicating moderate depression, will he attempt suicide?
→ People remember the true positives and say yes
→ BUT other types of data are relevant: e.g. depressed individuals who DIDN’T attempt suicide (FALSE positives) and those who weren’t depressed but who DID attempt suicide (FALSE negatives).
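The false-positive point can be made concrete with a quick base-rate calculation. All numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical cohort: 1,000 patients, 2% base rate of suicide attempts.
# Suppose a "moderate depression" score flags 80% of eventual attempters
# (true positives) but also 20% of non-attempters (false positives).
n, base_rate = 1000, 0.02
sensitivity, false_pos_rate = 0.80, 0.20

attempters = n * base_rate                      # 20 people
true_pos = attempters * sensitivity             # 16 correctly flagged
false_pos = (n - attempters) * false_pos_rate   # 196 wrongly flagged

# Chance that a flagged patient actually attempts suicide:
ppv = true_pos / (true_pos + false_pos)
print(round(ppv, 3))
```

Even with a fairly sensitive test, roughly 92% of flagged patients here are false positives, because the base rate is so low — which is exactly why remembering only the true positives misleads.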

22
Q

(Why Aren’t Clinicians Better at Prediction?) Similarity Heuristic

A

I.e. individuals predict future behaviour that RESEMBLES the test information (e.g. the test-taker tells violent stories on the TAT → prediction = the person will assault others)
-> Ignores base rates
-> Ignores VALIDITY of test information

23
Q

Even if they (wrongfully) use the Similarity heuristic, clinicians might still be correct about the diagnosis IF: _____

A

The base rate (BR) for the phenomenon is HIGH / the test information is in fact VALID

24
Q

Conclusion: Why Aren’t Clinicians Better at Prediction? (3 bad practices)

A
  1. We overweight successful predictions & do NOT remember incorrect predictions
  2. We do NOT think about base rates
  3. We do NOT use all the available information
25
Q

Improving Clinical (human) Judgment (3)

A
  1. Systematically consider alternatives
  2. Collect feedback about decisions/predictions
  3. Think about statistical prediction issues
26
Q

What Can Clinicians (Humans) Do Well? (3)

A
  1. Provide input into statistical model
  2. Generate hypotheses
  3. Provide prediction when no formula (yet) exists