Screening/Diagnostic Tests Flashcards
Sensitivity
True positives / all people with the disease = TP / (TP + FN)
Specificity
True negatives / all people without the disease = TN / (TN + FP)
False positive rate
1 - specificity = FP / (FP + TN)
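A minimal Python sketch of these first three definitions, using made-up 2x2 counts (tp, fn, fp, tn are illustrative, not from any real study):

# Hypothetical 2x2 table for a screening test (illustrative numbers only)
tp, fn = 90, 10      # people with the disease: test positive / test negative
fp, tn = 50, 850     # people without the disease: test positive / test negative

sensitivity = tp / (tp + fn)            # proportion of diseased people who test positive
specificity = tn / (tn + fp)            # proportion of disease-free people who test negative
false_positive_rate = 1 - specificity   # equivalently fp / (fp + tn)

print(sensitivity, specificity, false_positive_rate)   # 0.9, ~0.944, ~0.056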
Positive predictive value
True positives / all who tested positive = TP / (TP + FP); depends on disease prevalence
Negative predictive value
True negatives / all who tested negative = TN / (TN + FN); also depends on prevalence
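A short sketch for the predictive values, reusing the same illustrative counts; unlike sensitivity and specificity, these change with prevalence:

tp, fn, fp, tn = 90, 10, 50, 850        # same hypothetical 2x2 table as above

ppv = tp / (tp + fp)                    # probability of disease given a positive test
npv = tn / (tn + fn)                    # probability of no disease given a negative test

print(ppv, npv)                         # ~0.643, ~0.988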
Prior probability
Probability that a randomly selected person has the disease before testing; in a screening population this is essentially the prevalence
Post test probability
Probability of disease (or of no disease) once the test result is known; after a positive test it equals the PPV, and after a negative test the probability of being disease-free equals the NPV
Likelihood ratio of positive test
LR+ = sensitivity / (1-specificity)
Converts pre-test odds into post-test odds: post-test odds = pre-test odds x LR+
Should be greater than 1; the higher, the better the test is at ruling in disease
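A hedged worked sketch of the odds update for a positive result, assuming a pre-test probability of 0.10 and the illustrative sensitivity/specificity from the 2x2 sketch above:

sensitivity, specificity = 0.90, 0.944     # illustrative values carried over from above
pre_test_prob = 0.10                       # assumed prior probability (prevalence)

lr_pos = sensitivity / (1 - specificity)                 # LR+ of about 16
pre_test_odds = pre_test_prob / (1 - pre_test_prob)      # convert probability to odds
post_test_odds = pre_test_odds * lr_pos                  # odds update after a positive test
post_test_prob = post_test_odds / (1 + post_test_odds)   # convert back to a probability

print(round(post_test_prob, 2))                          # ~0.64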
LR-
(1 - sensitivity) / specificity
Should be less than 1; the lower, the better, since a small LR- is what lets a negative result rule out disease
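The same update for a negative result, with the same assumed numbers; a small LR- pushes the post-test probability well below the prior:

sensitivity, specificity = 0.90, 0.944     # illustrative values as before
pre_test_prob = 0.10

lr_neg = (1 - sensitivity) / specificity                 # LR- of about 0.11
pre_test_odds = pre_test_prob / (1 - pre_test_prob)
post_test_odds = pre_test_odds * lr_neg                  # odds after a negative test
post_test_prob = post_test_odds / (1 + post_test_odds)

print(round(post_test_prob, 3))                          # ~0.012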
Diagnostic Odds Ratio (DOR)
(TP / FN) / (FP / TN), i.e. (TP x TN) / (FP x FN), which also equals LR+ / LR-
A single summary figure, so useful for comparing multiple tests; the higher, the better
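A quick check with the same illustrative counts, just to show the arithmetic:

tp, fn, fp, tn = 90, 10, 50, 850        # hypothetical counts as above

dor = (tp / fn) / (fp / tn)             # equivalently (tp * tn) / (fp * fn)
print(dor)                              # 153.0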
Accuracy for diagnostic tests
(True positives + true negatives) / total tested
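Accuracy with the same illustrative counts; note it can look reassuringly high even when a rare disease means most people are true negatives:

tp, fn, fp, tn = 90, 10, 50, 850

accuracy = (tp + tn) / (tp + fn + fp + tn)   # proportion of all results that are correct
print(accuracy)                              # 0.94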
Importance of cut-off points for tests
Shifting the cut-off towards high specificity (at the cost of sensitivity) minimises false positives
Shifting it towards high sensitivity (at the cost of specificity) minimises false negatives
Finding a balance between sensitivity and specificity
Receiver operating characteristic (ROC) curves plot sensitivity vs (1-specificity)
The cut-off closest to the top-left corner (or the one maximising Youden's index, sensitivity + specificity - 1) balances the two; the area under the curve (AUC) summarises overall performance across all cut-offs and is used to compare tests
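A sketch of how a cut-off might be chosen from a ROC curve, assuming NumPy and scikit-learn are available and using made-up labels and scores (everything here is illustrative):

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Made-up labels and continuous test scores (illustrative only)
rng = np.random.default_rng(0)
y_true = np.concatenate([np.ones(100), np.zeros(400)])   # 100 diseased, 400 disease-free
scores = np.concatenate([rng.normal(2, 1, 100), rng.normal(0, 1, 400)])

fpr, tpr, thresholds = roc_curve(y_true, scores)   # tpr = sensitivity, fpr = 1 - specificity

youden = tpr - fpr                       # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(youden))            # cut-off that maximises J
print("AUC:", roc_auc_score(y_true, scores))
print("cut-off:", thresholds[best], "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])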
Ideal characteristics of screening test
Not too uncomfortable, repeatable, sensitive, specific, quick and easy to interpret
Bias in diagnostics
Verification (work-up) bias - e.g. the gold standard is only used to confirm those with a positive result on the new test
Review bias - the result of the new test is known when the gold standard is interpreted, or vice versa
Inconsistent handling of borderline or indeterminate results