Evaluating the diagnosis Flashcards
Sensitivity
- the usefulness of a test in the context of truly diseased population
- a sensitive test will pick up cases even with small amounts of evidence
- makes more false positive inclusions
- highly sensitive tests are preferred if a diagnosis should not be missed but over-diagnosis is not harmful
- SN OUT: highly sensitive test if NEGATIVE helps to rule OUT a disease
Specificity
- the usefulness of a test in the context of the non-diseased population
- highly specific tests will pick up cases only if definitive evidence is noted
- highly specific tests make more false negative exclusions
- highly specific tests are preferred if missing some cases is not bad but wrongly labeling is bad/costly
- SP IN: highly specific test if POSITIVE helps to rule IN a disease
Sensitivity calculation
True positive/total diseased
A/(A+C)
Specificity calculation
True negative/total non-diseased
D/(B+D)
Accuracy
= all correct results/total tests
= (true positive+ true negative)/total population
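The three cell-based formulas above can be sketched in code. This is a minimal illustration; the 2x2 counts (A=90, B=30, C=10, D=170) are hypothetical, not from the text.

```python
def sensitivity(tp, fn):
    """True positives / total diseased = A / (A + C)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negatives / total non-diseased = D / (B + D)."""
    return tn / (tn + fp)

def accuracy(tp, tn, total):
    """All correct results / total population = (A + D) / (A + B + C + D)."""
    return (tp + tn) / total

# Hypothetical 2x2 table: A = TP, B = FP, C = FN, D = TN
A, B, C, D = 90, 30, 10, 170
print(sensitivity(A, C))              # 0.9
print(specificity(D, B))              # 0.85
print(accuracy(A, D, A + B + C + D))  # ≈0.867
```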
Receiver operator curve
- ROC curve
- useful in choosing between two diagnostic tests with different sensitivity and specificity rates, or in choosing a cut-off point for making a diagnosis
ROC curve
- plot the true positive rates (sensitivity) on Y axis and corresponding false positive rates (1- specificity) on the X axis
- the optimal cut-off is the point on the curve closest to the upper left corner
- when comparing two test curves the one with the curve closer to the left upper corner is the better screening test
Area under the ROC curve
-a measure of test accuracy
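The area under a ROC curve can be approximated from a handful of (false positive rate, true positive rate) pairs with the trapezoid rule. A minimal sketch; the curve points below are made-up illustrative values, not from the text.

```python
def roc_auc(points):
    """Trapezoidal area under a ROC curve.

    points: list of (fpr, tpr) pairs sorted by fpr,
    running from (0, 0) to (1, 1).
    """
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2  # area of one trapezoid
    return area

# Hypothetical ROC points at three cut-offs
curve = [(0.0, 0.0), (0.1, 0.7), (0.3, 0.9), (1.0, 1.0)]
print(roc_auc(curve))  # ≈0.86
```

An AUC of 1.0 would be a perfect test and 0.5 a test no better than chance, so a curve hugging the upper left corner (higher AUC) is the better screening test.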
Likelihood ratios
-more useful than specificity and sensitivity calculations as they give us a single measure to tell us how much more likely a positive or negative test result is to have come from someone with a disease than someone without it
Likelihood ratio for a positive test
LR+= likelihood of testing positive rightly/wrongly
- (A/(A+C))/(B/(B+D))
- sensitivity/(1 - specificity)
Likelihood ratio of a negative test
LR-= likelihood of testing negative rightly/wrongly
- (C/(A+C))/(D/(B+D))
- (1 - sensitivity)/specificity
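Both likelihood ratios follow directly from the sensitivity/specificity formulas above. A minimal sketch; the 90%-sensitive, 85%-specific test is a hypothetical example.

```python
def lr_positive(sens, spec):
    """LR+ = sensitivity / (1 - specificity)."""
    return sens / (1 - spec)

def lr_negative(sens, spec):
    """LR- = (1 - sensitivity) / specificity."""
    return (1 - sens) / spec

# Hypothetical test: 90% sensitive, 85% specific
print(lr_positive(0.9, 0.85))  # ≈6.0
print(lr_negative(0.9, 0.85))  # ≈0.12
```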
LR values
- LR+ is usually >1 for most tests
- LR+ above 10 makes a strong case for ruling the disease IN
- LR- below 0.1 makes a strong case for ruling the disease OUT
Pretest probability
- prevalence in the studied population
- (A+C)/(A+B+C+D)
A
True positive
B
False positive
C
False negative
D
True negative
Likelihood of a diagnosis
- depends on the prevalence or prior probability of the disease before applying a test
- one cannot multiply a probability by the likelihood ratio directly
- probabilities need to be converted to odds before the likelihood ratio can be used
Using pre-test odds to calculate post-test probability
- obtain pretest probability (A+C)/(A+B+C+D)
- convert pretest probability to pretest odds (A+C)/(B+D)
- convert pretest odds to post-test odds: post-test odds = LR x pretest odds
- convert post-test odds to post-test probability: post-test odds/(1 + post-test odds)
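The four steps above can be chained into one function. A minimal sketch; the 20% pretest probability and LR+ of 6 are hypothetical example values.

```python
def post_test_probability(pretest_prob, lr):
    """Probability -> odds -> apply LR -> back to probability."""
    pretest_odds = pretest_prob / (1 - pretest_prob)  # convert to odds
    post_odds = lr * pretest_odds                     # post-test odds = LR x pretest odds
    return post_odds / (1 + post_odds)                # back to probability

# Hypothetical: 20% pretest probability, positive test with LR+ = 6
print(post_test_probability(0.20, 6.0))  # 0.6
```

A positive result on this hypothetical test raises the probability of disease from 20% to 60%.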
Using Bayesian nomogram (Fagan) to calculate post-test probability
- obtain pretest probability
- obtain the likelihood ratio (LR+ or LR-)
- draw a line with a straight edge across the two available values
- post-test probability is the value obtained on the other side of the nomogram
Positive predictive value
- True positive/Total test positive
- can be applied to an individual patient's result and informs the chance of having the disease if tested positive
- increases with increasing prevalence
Negative predictive value
- True negative/total test negative
- informs the chance of not having the disease if tested negative
- decreases with increasing prevalence
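The two predictive values can be sketched from the same hypothetical 2x2 table used for sensitivity and specificity (A=90, B=30, C=10, D=170; illustrative counts, not from the text).

```python
def ppv(tp, fp):
    """Positive predictive value = true positives / total test positives = A / (A + B)."""
    return tp / (tp + fp)

def npv(tn, fn):
    """Negative predictive value = true negatives / total test negatives = D / (C + D)."""
    return tn / (tn + fn)

# Hypothetical 2x2 table: A = TP, B = FP, C = FN, D = TN
A, B, C, D = 90, 30, 10, 170
print(ppv(A, B))  # 0.75
print(npv(D, C))  # ≈0.944
```

Unlike sensitivity and specificity, both values shift with prevalence, which is why they can be read off directly for an individual patient in the population studied.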