Wk 13 - Extreme 2 Flashcards
Can you do the 2x2 calculation reliably? (Suggest you invent your own figures and have a practice) (x5)
Remember to check whether the figures are percentages or probabilities – and convert to probabilities if needed (divide by 100; move decimal point 2 places to the left).
Columns are: Disease present, Disease absent
Rows are: Test positive, Test negative
Giving: Correct positives and False positives, over
False negatives, Correct negatives
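The card suggests inventing your own figures for practice. A minimal sketch of the 2x2 calculation with made-up numbers (all figures hypothetical, not from the lecture):

```python
# 2x2 diagnostic table with invented figures (hypothetical example)
correct_pos = 90    # disease present, test positive
false_pos = 40      # disease absent, test positive
false_neg = 10      # disease present, test negative
correct_neg = 860   # disease absent, test negative

total_with = correct_pos + false_neg      # column total: disease present
total_without = false_pos + correct_neg   # column total: disease absent

sensitivity = correct_pos / total_with      # correct positives / total with disease
specificity = correct_neg / total_without   # correct negatives / total without disease
print(sensitivity, round(specificity, 3))
```

Swapping in your own figures and re-deriving sensitivity/specificity by hand is the practice the card asks for.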
Explain what sensitivity is in the context of evaluating diagnostic tests (x3)
% of people with disorder who test positive
(correct positive rate;
i.e. correct positives divided by total with disease)
How could you design a diagnostic test that would correctly diagnose every person who had a disease as having the disease? (x2)
By adopting a liberal response bias
i.e. err on the side of a positive result if there's even a slight chance of having the disorder
Why is it important that health professionals understand the calculations described in this lecture? (x2)
So that they don’t overstate the importance of a positive test result
and can explain to patients the logic behind testing, rather than panicking them
What procedure would you use to choose the optimal pass mark for your diagnostic test, if you wanted to maximize the discriminatory power of the test? (x1)
What was this initially designed for? (x1)
Use a ROC curve - Receiver Operating Characteristic
For radar in World War Two (where operators had to distinguish signals from noise on a radar screen)
What two variables determine the number of correct hits and false positives in a diagnostic test?
Correct positive rate (sensitivity) and
False positive rate (1 – specificity)
What is a ROC curve? (x3)
A plot of Correct positive rate (sensitivity) versus
False positive rate (1 – specificity)
where each point on the curve is a different “pass mark” for the test
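The definition above can be sketched directly: sweep the pass mark across the score range and compute one (false positive rate, sensitivity) point per cutoff. The score lists are invented, hypothetical data:

```python
# Sketch: one ROC point per possible pass mark (hypothetical scores)
diseased = [7, 8, 8, 9, 10]   # test scores of people WITH the disease
healthy  = [3, 4, 5, 6, 8]    # test scores of people WITHOUT the disease

points = []
for cutoff in range(3, 11):   # "test positive" means score >= cutoff
    tpr = sum(s >= cutoff for s in diseased) / len(diseased)  # sensitivity
    fpr = sum(s >= cutoff for s in healthy) / len(healthy)    # 1 - specificity
    points.append((cutoff, tpr, fpr))

for cutoff, tpr, fpr in points:
    print(cutoff, tpr, fpr)
```

Plotting fpr (x-axis) against tpr (y-axis) across cutoffs traces the ROC curve.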
Why might we want to choose different cut offs for diagnostic tests under different clinical conditions? (x1 plus e.g.)
It’s a judgment call dependent on the relative costs of false positives and false negatives
E.g. in breast screens, tolerate heaps of FPs in order to get higher TPs
Draw a histogram of scores on a diagnostic test for a disease, showing (1) the distribution of people without a particular disease and (2) the distribution of people with a disease, a situation where the test does its job but not perfectly. Mark on a potential pass mark for the test and label which parts of the curves refer to (1) correct hits (2) false positives (3) correct negatives and (4) false negatives.
Do
Sketch a ROC curve of a test that has reasonable diagnostic capability. Mark on the pass mark that is likely to yield greatest discrimination between those with and without the disease, with a note explaining why you placed the pass mark where you did.
Do
Sketch two ROC curves, one showing a “worthless” test and one showing an “excellent” test.
Do
On a ROC curve, how can we quantify the accuracy (diagnostic ability) of the test? (x2)
By calculating the area under the curve
as a proportion of the whole, so 1 = max, .5 = chance
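The area-under-the-curve idea can be checked numerically with the trapezoid rule over the ROC points. The score lists are hypothetical, invented for illustration:

```python
# Sketch: area under a ROC curve via the trapezoid rule (hypothetical scores)
diseased = [7, 8, 8, 9, 10]   # scores of people WITH the disease
healthy  = [3, 4, 5, 6, 8]    # scores of people WITHOUT the disease

points = []
for cutoff in range(11, 2, -1):   # strict to lenient, so fpr increases
    tpr = sum(s >= cutoff for s in diseased) / len(diseased)
    fpr = sum(s >= cutoff for s in healthy) / len(healthy)
    points.append((fpr, tpr))
points = [(0.0, 0.0)] + points + [(1.0, 1.0)]   # anchor the curve's endpoints

# Sum trapezoid areas between consecutive ROC points
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(points, points[1:]))
print(round(auc, 2))
```

A value near 1 indicates an excellent test; 0.5 (the diagonal) is chance, i.e. a worthless test.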
In terms of sensitivity and specificity, how would you choose the “most discriminating” pass mark for a diagnostic test? (x1)
By finding the pass mark that gives the highest sum of sensitivity + specificity
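A minimal sketch of that rule: compute sensitivity and specificity at each candidate cutoff and keep the one with the highest sum. The score lists are hypothetical, invented for illustration:

```python
# Sketch: choose the pass mark maximizing sensitivity + specificity
# (hypothetical scores)
diseased = [7, 8, 8, 9, 10]   # scores of people WITH the disease
healthy  = [3, 4, 5, 6, 8]    # scores of people WITHOUT the disease

def sens_spec(cutoff):
    sens = sum(s >= cutoff for s in diseased) / len(diseased)
    spec = sum(s < cutoff for s in healthy) / len(healthy)
    return sens, spec

best = max(range(3, 11), key=lambda c: sum(sens_spec(c)))
print(best, sens_spec(best))
```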
What is the key thing you might want to know following a positive result on a medical or behavioural test? (x1)
What are the chances you ACTUALLY have the disease?
What is often missed when people think about the accuracy of diagnostic tests? (x3 terms for same thing, plus define)
Base rate - the likelihood of the disease occurring in the population
Pre-test probability
Prior probability
What is base rate neglect bias?
Failure to consider the prior/pre-test probability of disease occurrence in calculations of test accuracy
The probability that a woman (aged 40-50, in a region with routine screening) has breast cancer is 0.8%. If a woman has breast cancer, the probability is 90% that she will have a positive mammogram. If a woman does not have breast cancer, the probability is 7% that she will still have a positive mammogram.
What is the probability that she does have breast cancer if she has a positive mammogram?
(Practice the calculation)
Pre-test probability = .008 (=.8%)
Sensitivity (% correct positives) = .90 (=90%)
Specificity (% correct negatives: 100%-7%) = .93 (=93%)
- Choose an arbitrary number of people (100,000) and put it in the grand total box.
- Multiply Grand Total by Pre-test probability to get Total With Disorder: 100,000 x .008 = 800
- Grand Total minus Total With Disorder = Total Without Disorder: 100,000 – 800 = 99,200
- Multiply Total With Disorder by Sensitivity to get Correct Positives: 800 x .90 = 720
- Multiply Total Without Disorder by Specificity to get Correct Negatives: 99,200 x .93 = 92,256
- Compute False Positives and False Negatives by subtracting Correct Positives/Correct Negatives from the column totals: FP = 99,200 – 92,256 = 6,944; FN = 800 – 720 = 80
- Compute Total Positive and Total Negative by adding up the rows: Total Positive = 720 + 6,944 = 7,664; Total Negative = 80 + 92,256 = 92,336
- Predictive value of a positive test (Positive Predictive Value) = Correct Positives ÷ Total Positive = 720/7,664 = .094
- Predictive value of a negative test = Correct Negatives ÷ Total Negatives = 92,256/92,336 = .999
Answer ≈ 9%
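The mammogram steps above can be verified in a few lines, using exactly the figures from the card:

```python
# Verify the mammogram worked example (figures from the card above)
grand_total = 100_000
pretest = 0.008   # base rate / pre-test probability
sens = 0.90       # sensitivity
spec = 0.93       # specificity (100% - 7% false positive rate)

with_disease = grand_total * pretest            # 800
without_disease = grand_total - with_disease    # 99,200
correct_pos = with_disease * sens               # 720
correct_neg = without_disease * spec            # 92,256
false_pos = without_disease - correct_neg       # 6,944
false_neg = with_disease - correct_pos          # 80

ppv = correct_pos / (correct_pos + false_pos)   # positive predictive value
npv = correct_neg / (correct_neg + false_neg)   # negative predictive value
print(round(ppv, 3), round(npv, 4))
```

Despite 90% sensitivity, the low base rate means a positive result only carries about a 9% chance of disease, which is the base rate neglect point.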
Explain what specificity is in the context of evaluating diagnostic tests (x3)
% of people WITHOUT the disorder who test negative
(correct negative rate;
i.e. correct negatives divided by total without disease)
What is the difference between sensitivity definitions in diagnostic test evaluation and signal detection theory? (x2)
In diagnostic test evaluation, sensitivity is the % of correct hits, but in
signal detection theory, sensitivity is d-prime (d’) – discriminatory ability