Violence Risk Assessment Flashcards
Week 1
how do evaluators measure predictive validity?
they calculate the area under the curve (AUC):
- rate offenders using the tool
- follow up to see which offenders reoffend
- calculate the AUC
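The steps above can be sketched as a short calculation. This is a hypothetical illustration with invented scores and outcomes, not a real scoring protocol: the AUC equals the probability that a randomly chosen recidivist scores higher than a randomly chosen non-recidivist, so it can be computed directly by pairwise comparison.

```python
# Hypothetical example: compute AUC by pairwise comparison.
# Tool scores and follow-up outcomes below are invented for illustration.

def auc(scores, reoffended):
    """AUC = P(a random recidivist scores higher than a random
    non-recidivist); tied scores count as half a 'win'."""
    recid = [s for s, r in zip(scores, reoffended) if r]
    non_recid = [s for s, r in zip(scores, reoffended) if not r]
    wins = sum(
        1.0 if r > n else 0.5 if r == n else 0.0
        for r in recid for n in non_recid
    )
    return wins / (len(recid) * len(non_recid))

# Step 1: rate offenders using the tool (invented scores)
scores = [10, 25, 18, 30, 12, 22]
# Step 2: follow up to see who reoffended (invented outcomes)
reoffended = [False, True, True, True, False, False]
# Step 3: calculate the AUC
print(round(auc(scores, reoffended), 2))  # prints 0.89
```

An AUC of 1.0 would mean every recidivist outscored every non-recidivist; 0.5 means the tool is no better than chance.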
example of a VRAG limitation: what if someone showed an increase in symptoms of schizophrenia? would this change the rating?
unfortunately not, as the items focus only on historical, static factors
availability heuristic (confirmation bias)
we recall certain information more easily if it is important or noticeable, so we assume it is true
example of availability heuristic
confirmation bias
an example of the representativeness heuristic is
base rate neglect
T/F: actuarial tools do not allow for discretion or flexibility
TRUE. If it's not on the tool, you can't consider it. However, some discretion may be important; for example, an evaluator should arguably be able to bump up the rating for someone hearing voices telling them to kill others.
what are the pros of unstructured clinical judgement?
easy, convenient, individualized
anchoring heuristic
overly influenced by whatever info is learned first (e.g. framing)
false negative error
thinking someone is low risk but they’re actually high risk
what does an AUC of .70 mean compared to an AUC of .50?
An AUC of .70 means that if you randomly selected a recidivist and a non-recidivist, there is a 70% chance the recidivist would score higher than the non-recidivist.
An AUC of .50 means the tool is no better than chance.
representativeness heuristic (base rate neglect)
rely on stereotypes or evidence that seems prototypical.
true negative
low risk because they truly are low risk
true positive
high risk because they truly are high risk
pros of VRAG
- better reliability
- better predictive validity
- more transparent
- easier to defend
when were risk assessment tools first developed
around 1985
framing is an example of
anchoring heuristic
what are the challenges that evaluators may face?
Making an accurate assessment about someone is challenging due to:
- limited time
- limited validity
- limited information, because the person providing it often wants to steer the examiner's opinion in a way that favours them
- ensuring that prior involvement in the case does not bias your decisions
T/F: we predict around half of the people we evaluate will reoffend
False.
more like 10%
Framing (anchoring heuristic)
drawing different conclusions from the same information, depending on how or by whom that information is presented
_______ ______ is poor. ______ tools are better than ________ but still have some problems. So…another option is ______ ________ _______
Unstructured judgement is poor. Actuarial tools are better than unstructured judgement but still have some problems. So… another option is structured professional judgement.
T/F: Predictive validity of VRAG is perfect
FALSE. It's good but not perfect. AUC = .68 (moderate range)
where can risk assessments happen in the justice system, psychiatric settings, and education?
Justice system:
- placements (e.g., pretrial detention)
- sentencing (e.g., death penalty)
- supervision/security (e.g., max security)
- treatment (e.g., services)
- release from custody (e.g., parole)
Psychiatric:
- warning potential victims (duty to protect)
- civil commitment (i.e., hospitalized as a danger to others)
Education:
- threats to harm another student
what type of tool is the Violence Risk Appraisal Guide (VRAG)?
actuarial - add up scores, no discretion. It produces estimated probabilities (e.g., a 78% chance this person will reoffend)
how is information combined in actuarial tools?
adding up the total scores
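As a minimal sketch of actuarial combination (the item names, weights, and probability bins below are invented for illustration and do not reflect the real VRAG scoring tables), the point is that the total is a mechanical sum with no evaluator discretion:

```python
# Hypothetical actuarial scoring: item scores and cut-offs are invented
# for illustration and do not reflect the real VRAG.

item_scores = {
    "age_at_index_offence": 2,
    "prior_violent_offences": 3,
    "elementary_school_maladjustment": 1,
}

# Mechanical sum of item scores: the evaluator cannot adjust it
total = sum(item_scores.values())

# The total maps onto a fixed probability bin (bins invented here)
if total >= 5:
    risk_estimate = "high (e.g., an estimated 55% chance of reoffending)"
else:
    risk_estimate = "low"

print(total, "->", risk_estimate)  # prints: 6 -> high (...)
```

This mechanical design is what gives actuarial tools their reliability, and also why they cannot accommodate case-specific factors (such as a change in symptoms) that are not on the item list.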
T/F: Tools have higher AUC score than Unstructured Judgement
TRUE.
what items does the VRAG look at? historical or static?
both. It looks at 12 items; it was developed by selecting factors that predicted violence in a sample
what are the cons of unstructured clinical judgement?
- lacks reliability
- poor predictive validity
- accurate in no more than 1/3 of predictions
- lacks transparency
- difficult to defend
what are some of the limitations with risk assessment tools?
- best option but are not crystal balls into the future
- biases can STILL occur even when tools are used
how many tools are out there?
over 400
what is the purpose of the VRAG?
designed to predict violence in adults
what are the pros of having risk assessment tools?
- better inter-rater reliability
- better predictive validity (moderate range)
- more transparent
- easier to defend
to use tools such as the HCR-20 V3, you must have
a Ph.D. in clinical psychology or a related field
false positive error
thinking someone is high risk when they are actually low risk
confirmation bias is an example of
availability heuristic
base rate neglect (representativeness heuristic)
judging likelihood of an outcome without considering accurate information about the probability of that outcome
e.g., we predict that around half of the people we evaluate will reoffend
base rate neglect is an example of
representativeness heuristic
example of anchoring heuristic
framing
cons of VRAG
- it is not a crystal ball, and biases can still occur (true of all tools)
- the VRAG doesn't allow for any discretion or flexibility, and it focuses on static factors (an issue with actuarial tools generally)
what is a limitation of the VRAG?
factors on the VRAG are static (unmodifiable), so even if risk changes over time, the rating won't change
how is information combined in Structured Professional Judgement?
scores are not added up; instead, the evaluator rates overall risk as low/moderate/high
Confirmation bias (availability heuristic)
look for evidence that confirms our initial impression
e.g., we have already made up our mind that someone is high risk