Week 8. Evaluating Diagnostic Literature Flashcards
Validity
- Is it true? Can I believe it? Are the outcome measures trustworthy and accurate?
- Extent to which a measure assesses what it is intended to measure.
Applicability
If valid and important, can/should I apply it to my patients?
Use of Diagnostic Tests
PTs have increased access to diagnostic imaging (DI), but it should not replace clinical assessment/tests
E.g. shoulder imaging: physicians order shoulder imaging to facilitate referral to a surgeon, but after prolonged wait periods the surgeon refers the patient to PT
Diagnosis Research Goals
- Evaluate whether a test gives additional information about presence/absence of a condition
- Evaluate whether clinical test provides similar information as an invasive or radiological test
- Evaluate whether a diagnostic test is able to distinguish between patients with and w/o a specific condition
- Avoid invasive tests/x-ray exposure, more carefully define injured structures/tissues to customize treatment
Clinical Prediction Rules (CPR)
Ottawa Ankle Rules
- Used for diagnosis
- A rule or model that tries to identify the best combination of S&S, and other findings for predicting the probability of a specific outcome
OAR: sensitivity 96-99%, specificity 26-48%; if negative, low chance of fracture. Positive findings:
- point tenderness at posterior edge (of distal 6 cm) or tip of lateral malleolus
- point tenderness at posterior edge (of distal 6 cm) or tip of medial malleolus
- inability to weight bear (four steps) immediately
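A rule like this can be sketched as a simple boolean screen. This is a minimal sketch for illustration; the parameter names are assumptions, not an official OAR implementation, and the full rule has additional criteria.

```python
# Minimal sketch of an Ottawa Ankle Rules-style screen.
# Parameter names are illustrative assumptions, not an official API.
def oar_xray_indicated(tender_lateral_malleolus: bool,
                       tender_medial_malleolus: bool,
                       can_weight_bear_4_steps: bool) -> bool:
    """X-ray is indicated if any listed finding is positive."""
    return (tender_lateral_malleolus
            or tender_medial_malleolus
            or not can_weight_bear_4_steps)
```

Because the rule is highly sensitive, a negative screen (no findings) is what carries weight: it supports ruling out fracture.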
Level of evidence in diagnostic study design:
Why can’t RCT be used in Dx studies
- RCTs cannot be done in Dx studies because all subjects must undergo both the index test and the gold standard test, so there are no groups to randomize
Level 1 evidence in Dx studies - Cross-sectional
- Cohort study designs
Methodological Issues in Diagnostic Research
- Gold Standard Test
Inappropriate gold standard/reference test
Methodological Issues in Diagnostic Research
- Verification Bias
Verification Bias:
Results of the index test influence the decision to perform the gold standard test
Methodological Issues in Diagnostic Research
- Selection/referral bias
Selection/Referral Bias:
Evaluation done in a population with a high prevalence of disease, or investigators hand-pick study participants
Methodological Issues in Diagnostic Research
- Measurement Bias
Measurement Bias
- Testers are aware of the gold standard test results, which biases the outcome
- Outcomes for what constitutes positive/negative are not well-defined
- Testers unable to complete diagnostic test properly
Sensitivity (SnNout)
Sensitivity: likelihood of a +test in presence of disease (true positive rate)
SnNout:
- a negative result on a highly sensitive test is a good way to rule the condition out
- Example: airport security; if highly sensitive, it will pick up all kinds of metal, so no buzz = no metal. You don't miss things, but you get lots of false positives
Specificity (SpPin)
Specificity: likelihood of a -test in the absence of disease (true negative rate)
SpPin: a highly specific test will not falsely identify people as having a condition; a positive result on a highly specific test is likely to accurately detect the presence of the condition
- Example: airport security; if the airport sensor is turned down, it would be highly specific (buzz = metal)
Positive Predictive Value
Likelihood of disease in the presence of +test
Negative Predictive Values
Likelihood of not having a disease in the presence of a negative test
Positive/Negative predictive values table
- Rows calculate?
- Columns calculate?
Rows = predictive values; Columns = sensitivity/specificity

                 With disease       Without disease
Test positive    True+ (TP, a)      False+ (FP, b)      | Total who test positive
Test negative    False- (FN, c)     True- (TN, d)       | Total who test negative
                 Total w/ disease   Total w/o disease
Accuracy
Accuracy = (a + d) / (a+b+c+d)
= (TP + TN) / (TP + TN + FP + FN)
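The accuracy formula translates directly to code; a minimal sketch:

```python
def accuracy(tp: int, fp: int, fn: int, tn: int) -> float:
    """(TP + TN) / (TP + TN + FP + FN): proportion of correct results."""
    return (tp + tn) / (tp + tn + fp + fn)

# e.g. 50 TP, 10 FP, 15 FN, 25 TN -> 75/100 = 0.75
```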
Sensitivity calculation (true positive rate, TPR)
Sensitivity = a / (a+c)
- true positive divided by total number with disease.
- this is the probability of positive test if subject has disease, also called true positive rate
Specificity Rate (True negative rate, TNR)
Specificity = d/ (b+d)
- computed as true negatives divided by total number without disease
- defined as probability of negative test if subject does not have disease; true negative rate (TNR)
Positive Predictive Value
PPV = a / (a+b)
- computed as true positive divided by total number that tested positive
- defined as probability of disease if subject has a positive test
Negative Predictive Value
NPV: d / (c+d)
- true negative divided by total number that tested negative
- probability of no disease if subject has a negative test
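The four 2x2-table formulas above can be sketched together (cell labels a-d follow the table convention used in these cards):

```python
def sensitivity(a: int, c: int) -> float:
    """a / (a + c): true positives over all with disease."""
    return a / (a + c)

def specificity(b: int, d: int) -> float:
    """d / (b + d): true negatives over all without disease."""
    return d / (b + d)

def ppv(a: int, b: int) -> float:
    """a / (a + b): true positives over all who test positive."""
    return a / (a + b)

def npv(c: int, d: int) -> float:
    """d / (c + d): true negatives over all who test negative."""
    return d / (c + d)
```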
Specificity vs. NPV vs. -LR
Specificity (d / (b+d))
- given: does not have disease → probability of a negative test
NPV (d / (c+d))
- given: negative test → probability of no disease
-LR: ratio of the probability of a negative test result given the presence of the disease to the probability of a negative test result given its absence, i.e. (1 - sensitivity) / specificity
Which aspect is dependent on prevalence of disease?
PV are dependent on prevalence of disease, while sensitivity/specificity are not
- PV are meaningless out of context of prevalence
- Sensitivity and specificity are dependent on the diagnostic threshold, not on prevalence; more consistent BETWEEN studies
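A small sketch (with assumed numbers) shows why predictive values shift with prevalence while sensitivity/specificity stay fixed:

```python
def ppv_at_prevalence(sens: float, spec: float, prev: float) -> float:
    """PPV from fixed sensitivity/specificity at a given prevalence."""
    tp = sens * prev                 # expected true-positive fraction
    fp = (1 - spec) * (1 - prev)     # expected false-positive fraction
    return tp / (tp + fp)

# Same test (sens = spec = 0.90); PPV collapses as prevalence falls.
```

At 50% prevalence the PPV is 0.90, but at 1% prevalence the same test's PPV drops to about 0.08: most positives are false positives.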
Sensitivity and specificity
- more reliable INTER- or INTRA?
- most consistent between studies
- diagnostic threshold for a specific diagnostic test: defined as the minimum or maximum requirement to obtain a positive result
Receiver Operator Characteristic Curves (ROC Curves)
- 3-way relationship between sensitivity, specificity, and diagnostic threshold
- curve shows trade-off between sensitivity and specificity with changing diagnostic thresholds
ROC values
0.5 = no better than chance
1 = ideal
(values refer to the area under the curve, AUC)
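A toy sketch (made-up scores) of the trade-off an ROC curve plots: sweeping the diagnostic threshold trades sensitivity against specificity.

```python
# Hypothetical test scores (higher = more suspicious of disease).
diseased = [0.9, 0.8, 0.7, 0.4]
healthy = [0.6, 0.3, 0.2, 0.1]

def sens_spec(threshold: float):
    """Sensitivity and specificity when 'score >= threshold' counts as positive."""
    sens = sum(s >= threshold for s in diseased) / len(diseased)
    spec = sum(s < threshold for s in healthy) / len(healthy)
    return sens, spec

# Low threshold: sensitive but not specific; high threshold: the reverse.
```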
Positive Likelihood Ratio
- ratio indicates?
- probability that a person?
- Value indicates?
Sensitivity / (1-specificity)
- ratio of true positive rate to false positive rate
- used (with pre-test odds) to estimate the probability that a person with a positive test has the disease
- larger numbers indicate higher likelihood of disease
Negative Likelihood Ratio
- ratio indicates?
- probability that a person?
- Value indicates?
NLR: (1 - sensitivity) / specificity
- ratio of the false negative rate to the true negative rate
- used (with pre-test odds) to estimate the probability that a person with a negative test does not have the disease
- smaller numbers indicate higher likelihood of NO disease
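Both likelihood ratios follow directly from sensitivity and specificity; a minimal sketch:

```python
def positive_lr(sens: float, spec: float) -> float:
    """Sensitivity / (1 - specificity): TPR over FPR."""
    return sens / (1 - spec)

def negative_lr(sens: float, spec: float) -> float:
    """(1 - sensitivity) / specificity: FNR over TNR."""
    return (1 - sens) / spec

# e.g. sens = 0.90, spec = 0.80 -> +LR = 4.5, -LR = 0.125
```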
LR+ of 7.29 and LR- of 0.166
If individual takes test for the disease, we can update their probability of disease by multiplying odds by the likelihood ratio
If test is positive, updated odds of disease: (1/99) x 7.29 = 0.0736
If test is negative, updated odds of disease: (1/99) x 0.166 = 0.00168
Odds of disease increase from 1/99 (pre-test probability 1%) to ~0.074 with a positive test and decrease to ~0.0017 with a negative test (post-test probabilities ~6.9% and ~0.17%)
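The update above, reproduced as a sketch (1% pre-test probability, LR+ of 7.29, LR- of 0.166):

```python
def prob_to_odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1 - p)

def odds_to_prob(odds: float) -> float:
    """Convert odds back to a probability."""
    return odds / (1 + odds)

pre_odds = prob_to_odds(0.01)      # 1/99 ~ 0.0101
pos_odds = pre_odds * 7.29         # ~0.0736 after a positive test
neg_odds = pre_odds * 0.166        # ~0.00168 after a negative test

# Converting back gives post-test probabilities of roughly 6.9% and 0.17%.
```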
+LR       -LR           Interpretation
> 10      < 0.1         Almost conclusive
5 - 10    0.1 - 0.33    Useful
3 - 5     0.34 - 0.99   Marginally useful
< 3       > 1           Likely not important
Clinical Utility of DI Statistics: Sensitivity/Specificity
- most common reported values
- can calculate LR from these values
Clinical Utility of DI Statistics: PV
- limited usefulness because less stable estimates (depend on population tested/prevalence of disease)
Clinical Utility of DI Statistics: LR
- Most clinically useful because they contain both sensitivity/specificity values in 1 ratio
Example: A new ‘special test’ for the shoulder has been developed to test for the presence of a rotator cuff tear
Want to compare results of the new test to a known standard
A = 50 B = 10 C = 15 D = 25
Accuracy? Sensitivity? Specificity? PPV? NPV? +LR? -LR?
Accuracy = 75/100 = 75%; Sensitivity = 77%; Specificity = 71%; PPV = 83%; NPV = 62%; +LR = 2.7 (likely not important); -LR = 0.32 (useful)
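These answers check out when computed directly from the 2x2 counts; a quick verification sketch:

```python
a, b, c, d = 50, 10, 15, 25            # TP, FP, FN, TN

accuracy = (a + d) / (a + b + c + d)   # 0.75
sens = a / (a + c)                     # ~0.77
spec = d / (b + d)                     # ~0.71
ppv = a / (a + b)                      # ~0.83
npv = d / (c + d)                      # 0.625, i.e. ~62%
pos_lr = sens / (1 - spec)             # ~2.7
neg_lr = (1 - sens) / spec             # ~0.32
```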
Sensitivity vs. PPV vs. LR+
Sensitivity (a / a+c): probability of a positive test if subject has disease (TPR)
- given: has the disease → probability of a positive test
PPV (a / (a+b): probability of disease if subject has a positive test
- they have a positive test - probability of actually having disease
LR+ (sensitivity / (1 - specificity)): ratio of the probability of a positive test result given the presence of the disease to the probability of a positive test result given its absence