Lecture 11: Application of Assessment in Clinical Settings Flashcards
What is the difference between assessment and testing?
• Testing: a particular scale is administered to obtain a specific score, and a descriptive meaning can be applied to that score on the basis of normative, nomothetic findings.
• Assessment: the clinician takes a variety of test scores, generally obtained from multiple test methods, and considers the data in the context of history, referral information, and observed behaviour in order to understand the person being evaluated, answer the referral questions, and then communicate the findings to the patient, his or her significant others, and the referral sources.
Why should we assess?
- describe current functioning
- confirm, refute, or modify impressions formed by clinicians
- identify therapeutic needs, highlight issues likely to arise in treatment, recommend forms of interventions and offer guidance about likely outcomes
- aid in differential diagnosis
- monitor treatment over time to evaluate the success of interventions
- manage risk (untoward treatment reactions, potential legal liabilities)
- provide skilled, empathic assessment feedback as a therapeutic
intervention in itself.
Why should we use standardised tests?
Clinicians are unreliable judges
Why are clinicians unreliable judges?
Errors in gathering data:
- tendency to see patterns where none exist
- tendency to seek confirmatory evidence
- use of preconceived biases
Errors in synthesising data:
- heuristics in clinical judgement (Tversky & Kahneman, 1974): representativeness, availability, anchoring
Types of tests used by clinicians
- Diagnostic interviews
- Self-report questionnaires
- Questionnaires completed by significant others
- Behavioural tests
- Observational methods
What are diagnostic interviews?
Fully structured or semi-structured
Ensure coverage of the diagnostic criteria as specified by DSM-5 (few errors in gathering data)
Rules for scoring the interview are specified (few errors in synthesising data); see the scoring sketch after this card
e.g. explicit thresholds such as "1 in 5 people"
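To make "rules for scoring are specified" concrete, here is a minimal Python sketch of criterion-counting logic, assuming a hypothetical checklist and threshold; real interview schedules define their own items and decision rules.

```python
# Toy illustration of pre-specified scoring rules for a structured
# interview module: each criterion is recorded as present/absent and a
# diagnosis is assigned only if the count of endorsed criteria reaches a
# fixed threshold. Item names and the threshold are invented.

def meets_criteria(responses, required=5):
    """Return True if the number of endorsed criteria meets the threshold."""
    return sum(responses.values()) >= required

# A hypothetical nine-item criterion checklist with six items endorsed
responses = {
    "criterion_1": True, "criterion_2": True, "criterion_3": False,
    "criterion_4": True, "criterion_5": True, "criterion_6": False,
    "criterion_7": True, "criterion_8": True, "criterion_9": False,
}
print(meets_criteria(responses))  # True: 6 of 9 criteria endorsed, threshold is 5
```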
What forms of reliability are relevant for a structured diagnostic interview?
- Inter-rater agreement (two clinicians arrive at the same diagnosis)
- Test-retest reliability
What forms of validity are relevant for a structured diagnostic interview?
- Validity of diagnostic criteria
- What is the “gold standard”? Is a LEAD standard better?
- Procedural validity
LEAD: Longitudinal evaluation by Experts using All Data (an expert who knows the client over time and has access to all available data)
What are the statistics used?
- Kappa coefficients
- Sensitivity and specificity
- Positive and negative predictive values
What is Kappa?
A measure of inter-rater agreement corrected for the level of agreement expected by chance.
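To illustrate the chance correction, here is a small Python sketch computing Cohen's kappa for two clinicians rating the same patients; the 2 x 2 agreement counts are invented for illustration.

```python
# Cohen's kappa for two clinicians who each classified the same 100
# patients as "disorder present" or "absent". All counts are invented.
#
#                      Clinician B: present   absent
# Clinician A: present              40          10
# Clinician A: absent                5          45
a, b, c, d = 40, 10, 5, 45
n = a + b + c + d

p_observed = (a + d) / n                                   # raw agreement
p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # agreement expected by chance
kappa = (p_observed - p_chance) / (1 - p_chance)
print(round(kappa, 2))  # 0.7 with these made-up counts
```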
What is sensitivity?
Probability that a person with a clinical diagnosis will also receive that diagnosis via the diagnostic interview
What is specificity?
Probability that a person without a clinical diagnosis will not receive that diagnosis via the diagnostic interview
What is the positive predictive value?
Probability that a person with a diagnostic interview diagnosis is truly “ill”
What is the negative predictive value?
Probability that a person without a diagnostic interview diagnosis is truly “well”
What is the sensitivity equation?
Sensitivity = a / (a + c), where a = true positives (clinical and interview diagnosis both present) and c = false negatives (clinical diagnosis present but no interview diagnosis).
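Tying the four indices together, here is a minimal Python sketch that computes sensitivity, specificity, PPV, and NPV from a hypothetical 2 x 2 table in which the clinical diagnosis is the criterion and the structured interview is the test; all counts are invented.

```python
# Hypothetical 2 x 2 table comparing the structured interview (test) with
# the clinical diagnosis (criterion). All counts are invented.
a = 30  # interview positive, clinically positive   (true positives)
b = 10  # interview positive, clinically negative   (false positives)
c = 5   # interview negative, clinically positive   (false negatives)
d = 55  # interview negative, clinically negative   (true negatives)

sensitivity = a / (a + c)  # P(interview diagnosis | clinical diagnosis)
specificity = d / (b + d)  # P(no interview diagnosis | no clinical diagnosis)
ppv = a / (a + b)          # P(clinical diagnosis | interview diagnosis)
npv = d / (c + d)          # P(no clinical diagnosis | no interview diagnosis)

print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  "
      f"PPV={ppv:.2f}  NPV={npv:.2f}")
# sensitivity=0.86  specificity=0.85  PPV=0.75  NPV=0.92
```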