Week 8. Evaluating Diagnostic Literature Flashcards
Validity
- Is it true? Can I believe it? Are the outcome measures trustworthy and accurate?
- Extent to which a measure assesses what it is intended to measure.
Applicability
If the test is valid and important, can/should I apply it to my patients?
Use of Diagnostic Tests
PTs have increased access to diagnostic imaging (DI), but it should not replace clinical assessment/tests
E.g. shoulder imaging: physicians order shoulder imaging to facilitate referral to a surgeon, but after prolonged wait periods the surgeon refers the patient to PT
Diagnosis Research Goals
- Evaluate whether a test gives additional information about presence/absence of a condition
- Evaluate whether clinical test provides similar information as an invasive or radiological test
- Evaluate whether a diagnostic test is able to distinguish between patients with and w/o a specific condition
- Avoid invasive tests/x-ray exposure, more carefully define injured structures/tissues to customize treatment
Clinical Prediction Rules (CPR)
Ottawa Ankle Rules
- Used for diagnosis
- A rule or model that tries to identify the best combination of S&S, and other findings for predicting the probability of a specific outcome
OAR: sensitivity 96-99%; specificity 26-48%. If negative, low chance of fracture. Positive if any of:
- point tenderness at posterior edge (distal 6 cm) or tip of lateral malleolus
- point tenderness at posterior edge (distal 6 cm) or tip of medial malleolus
- inability to bear weight (four steps) immediately
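As an illustration, the OAR decision logic can be encoded as a simple function (the function name and boolean parameters are hypothetical, chosen only for this sketch):

```python
def ottawa_ankle_rule_positive(lateral_malleolus_tenderness: bool,
                               medial_malleolus_tenderness: bool,
                               can_bear_weight_four_steps: bool) -> bool:
    """Return True if ankle imaging is indicated under the OAR criteria:
    malleolar point tenderness, or inability to bear weight for four steps."""
    return (lateral_malleolus_tenderness
            or medial_malleolus_tenderness
            or not can_bear_weight_four_steps)
```

A negative result (all criteria absent, weight-bearing intact) is what makes fracture unlikely, reflecting the rule's high sensitivity.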
Level of evidence in diagnostic study design:
Why can’t RCT be used in Dx studies
- An RCT cannot be done in Dx studies because all subjects must undergo both tests (the index test and the gold standard)
Level 1 evidence in Dx studies - Cross-sectional
- Cohort study designs
Methodological Issues in Diagnostic Research
- Gold Standard Test
Inappropriate gold standard/reference test
Methodological Issues in Diagnostic Research
- Verification Bias
Verification Bias:
Results of the index test influence the decision to perform the gold standard test
Methodological Issues in Diagnostic Research
- Selection/referral bias
Selection/Referral Bias:
Evaluation done in a population with a high prevalence of disease, or investigators hand-pick study participants
Methodological Issues in Diagnostic Research
- Measurement Bias
Measurement Bias
- Testers are aware of the gold standard test results, which biases the outcome
- Criteria for what constitutes a positive/negative result are not well-defined
- Testers are unable to complete the diagnostic test properly
Sensitivity (SnNout)
Sensitivity: likelihood of a +test in presence of disease (true positive rate)
SnNout:
- a negative result on a highly sensitive test is a good way to rule out people who don’t have the condition
- Example: airport security. If highly sensitive, the scanner picks up all kinds of metal, so no buzz = no metal; there are lots of false positives, but you don't miss anything
Specificity (SpPin)
Specificity: likelihood of a -test in the absence of disease (true negative rate)
SpPin: a highly specific test will not falsely identify people as having a condition; a positive result on a highly specific test is likely to accurately detect the presence of a condition
- Example: airport security. If the scanner's sensitivity is turned down, it becomes highly specific (buzz = metal is truly present)
Positive Prediction Value
Likelihood of disease in the presence of +test
Negative Predictive Values
Likelihood of not having a disease in the presence of a negative test
Positive/Negative predictive values table
- Rows calculate?
- Columns calculate?
Rows = predictive values; Columns = sensitivity/specificity

                 Disease present      Disease absent
Test positive    True+ (TP, a)        False+ (FP, b)       Total who test positive
Test negative    False- (FN, c)       True- (TN, d)        Total who test negative
                 Total w/ disease     Total w/o disease
Accuracy
Accuracy = (a + d) / (a+b+c+d)
= (TP + TN) / (TP + TN + FP + FN)
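A minimal sketch of the accuracy formula in Python (the function name is my own; the counts in the example are hypothetical 2x2 cells):

```python
def accuracy(tp, fp, fn, tn):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN): proportion of all
    subjects correctly classified (either true positive or true negative)."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical table: 90 TP, 10 FP, 5 FN, 95 TN -> 185 correct out of 200
print(accuracy(90, 10, 5, 95))
```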
Sensitivity calculation (true positive rate, TPR)
Sensitivity = a / (a+c)
- true positive divided by total number with disease.
- this is the probability of positive test if subject has disease, also called true positive rate
Specificity calculation (true negative rate, TNR)
Specificity = d/ (b+d)
- computed as true negatives divided by total number without disease
- defined as probability of negative test if subject does not have disease; true negative rate (TNR)
Positive Predictive Value
PPV = a / (a+b)
- computed as true positive divided by total number that tested positive
- defined as probability of disease if subject has a positive test
Negative Predictive Value
NPV: d / (c+d)
- true negative divided by total number that tested negative
- probability of no disease if subject has a negative test
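The four formulas above can be collected into small helper functions (the names are my own; a/b/c/d follow the 2x2 cell labels: a = TP, b = FP, c = FN, d = TN):

```python
def sensitivity(a, c):
    """a / (a + c): true positives over everyone WITH the disease."""
    return a / (a + c)

def specificity(d, b):
    """d / (b + d): true negatives over everyone WITHOUT the disease."""
    return d / (b + d)

def ppv(a, b):
    """a / (a + b): true positives over everyone who TESTED positive."""
    return a / (a + b)

def npv(d, c):
    """d / (c + d): true negatives over everyone who TESTED negative."""
    return d / (c + d)
```

For a hypothetical table with a = 80, b = 10, c = 20, d = 90: Sn = 0.80, Sp = 0.90, PPV ≈ 0.89, NPV ≈ 0.82. Note how Sn/Sp divide down the columns (disease status) while PPV/NPV divide across the rows (test result).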
Specificity vs. NPV vs. -LR
Specificity (d / b+d)
- do not have disease - probability of negative test
NPV (d / c+d)
- negative test - probability of no disease
-LR: the probability of a negative test result given the presence of the disease divided by the probability of a negative test result given the absence of the disease, i.e. -LR = (1 - sensitivity) / specificity
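The -LR formula, sketched in Python; the function names are my own, and +LR is included as the standard companion formula (Sn / (1 - Sp)), which is not stated on this card:

```python
def negative_lr(sn, sp):
    """-LR = (1 - sensitivity) / specificity.
    Small values (<< 1) mean a negative test strongly argues against disease."""
    return (1 - sn) / sp

def positive_lr(sn, sp):
    """+LR = sensitivity / (1 - specificity): standard companion ratio.
    Large values (>> 1) mean a positive test strongly argues for disease."""
    return sn / (1 - sp)
```

E.g. for Sn = 0.90 and Sp = 0.80, -LR = 0.1/0.8 = 0.125 and +LR = 0.9/0.2 = 4.5.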
Which aspect is dependent on prevalence of disease?
PV are dependent on prevalence of disease, while sensitivity/specificity are not
- PV are meaningless out of the context of prevalence
- Sensitivity and specificity are dependent on the diagnostic threshold; they are more consistent BETWEEN studies
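To see why predictive values depend on prevalence, PPV can be recomputed via Bayes' theorem at different prevalences while sensitivity and specificity stay fixed (a sketch; the function name and example numbers are my own):

```python
def ppv_at_prevalence(sn, sp, prevalence):
    """PPV = Sn*prev / (Sn*prev + (1 - Sp)*(1 - prev)).
    Sn and Sp are properties of the test; prevalence is a property
    of the population, and PPV changes with it."""
    tp_rate = sn * prevalence              # P(test positive AND disease)
    fp_rate = (1 - sp) * (1 - prevalence)  # P(test positive AND no disease)
    return tp_rate / (tp_rate + fp_rate)

# Same test (Sn = 0.95, Sp = 0.90) in two populations:
# PPV is ~0.90 when half the population has the disease,
# but ~0.09 when only 1% does -- PPV collapses at low prevalence.
print(ppv_at_prevalence(0.95, 0.90, 0.50))
print(ppv_at_prevalence(0.95, 0.90, 0.01))
```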
Sensitivity and specificity
- more reliable INTER- or INTRA?
- most consistent BETWEEN studies (inter-study)
- the diagnostic threshold for a specific diagnostic test is defined as the minimum or maximum requirement to obtain a positive result
Receiver Operator Characteristic Curves (ROC Curves)
- 3-way relationship between sensitivity, specificity, and diagnostic threshold
- curve shows trade-off between sensitivity and specificity with changing diagnostic thresholds
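The trade-off behind an ROC curve can be sketched by sweeping a threshold over two score distributions; this toy example (all names and data hypothetical) computes one (sensitivity, 1 - specificity) point per threshold:

```python
def roc_points(scores_diseased, scores_healthy, thresholds):
    """For each threshold, a score >= threshold counts as a positive test.
    Returns (sensitivity, 1 - specificity) pairs: one ROC point per threshold.
    Lowering the threshold raises sensitivity but lowers specificity."""
    points = []
    for t in thresholds:
        sn = sum(s >= t for s in scores_diseased) / len(scores_diseased)
        sp = sum(s < t for s in scores_healthy) / len(scores_healthy)
        points.append((sn, 1 - sp))
    return points

# Toy scores for 4 diseased and 4 healthy subjects, one threshold:
print(roc_points([0.9, 0.8, 0.7, 0.4], [0.6, 0.3, 0.2, 0.1], [0.5]))
```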
ROC values
Area under the curve (AUC): 1.0 = ideal test; 0.5 = no better than chance; values near 0 = terrible