(Reliability and validity) Flashcards
Quality assurance
Prior to data collection (through the protocol and manuals of operation)
Quality control
During data collection and processing
How do we measure reliability?
- Percent agreement
- Kappa (e.g., Cohen's kappa)
- Intraclass correlation coefficient (ICC)
- Bland-Altman plots
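As a worked sketch of the first two reliability measures, percent agreement and Cohen's kappa can be computed by hand for two raters. The ratings below are hypothetical, not from the cards:

```python
# Sketch: percent agreement and Cohen's kappa for two raters
# rating the same 10 items (hypothetical binary ratings).

def percent_agreement(a, b):
    """Fraction of items on which the two raters give the same rating."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    p_obs = percent_agreement(a, b)
    cats = set(a) | set(b)
    # Chance agreement: product of the raters' marginal proportions, summed over categories.
    p_chance = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (p_obs - p_chance) / (1 - p_chance)

rater1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
rater2 = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(percent_agreement(rater1, rater2))        # 0.8
print(round(cohens_kappa(rater1, rater2), 3))   # 0.583
```

Kappa is lower than raw agreement because it discounts the agreement expected by chance alone.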
How do we measure validity?
- Sensitivity/specificity
- ROC curve/c-statistic
- Bland-Altman plots
How to adjust for self-reporting bias
With a validation study: directly adjust the mismeasured exposure using the relationship between the gold-standard measurement and the mismeasured exposure
Sensitivity=
True positives / all with the condition, i.e., TP / (TP + FN)
Specificity=
True negatives / all without the condition, i.e., TN / (TN + FP)
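The two formulas can be checked against a 2x2 table of test result vs. gold standard. The cell counts below are hypothetical:

```python
# Sketch: sensitivity and specificity from a 2x2 table
# (hypothetical counts; rows = gold standard, columns = test result).
tp, fn = 90, 10   # diseased group: test positive / test negative
tn, fp = 80, 20   # non-diseased group: test negative / test positive

sensitivity = tp / (tp + fn)  # true positives / all with the condition
specificity = tn / (tn + fp)  # true negatives / all without the condition
print(sensitivity, specificity)  # 0.9 0.8
```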
Positive likelihood ratio
Sensitivity/(1-specificity)
Negative likelihood ratio
(1-sensitivity)/specificity
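Plugging the definitions together, the likelihood ratios follow directly from Se and Sp. The values below are hypothetical:

```python
# Sketch: likelihood ratios from sensitivity and specificity
# (hypothetical values: Se = 0.9, Sp = 0.8).
se, sp = 0.9, 0.8
lr_pos = se / (1 - sp)    # ~4.5: a positive result is 4.5x more likely in the diseased
lr_neg = (1 - se) / sp    # ~0.125: a negative result is 8x less likely in the diseased
print(lr_pos, lr_neg)
```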
Limits to extrapolating Se and Sp across studies
- Se and Sp for a specific test are often not directly generalizable across populations
- They depend on the severity of the condition, the prevalence of other diseases that would also test positive, and the association between self-report and other characteristics
Which of Se and Sp is more important in determining the bias in the effect measure?
Specificity
ROC curve: what happens when we select a higher criterion value (x-axis)?
The false positive fraction decreases, so specificity increases,
but
the true positive fraction, and thus sensitivity, also decreases
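The tradeoff can be seen by computing the ROC coordinates at two cutoffs on a continuous test score. The score distributions below are hypothetical:

```python
# Sketch: raising the ROC criterion (cutoff) on a hypothetical
# continuous test score trades sensitivity for specificity.
diseased     = [3, 5, 6, 7, 8, 9]   # scores in the diseased group
non_diseased = [1, 2, 3, 4, 5, 6]   # scores in the non-diseased group

def tpf_fpf(cutoff):
    """True positive fraction (sensitivity) and false positive
    fraction (1 - specificity) at a given cutoff."""
    tpf = sum(s >= cutoff for s in diseased) / len(diseased)
    fpf = sum(s >= cutoff for s in non_diseased) / len(non_diseased)
    return tpf, fpf

print(tpf_fpf(4))  # low cutoff: high sensitivity, more false positives
print(tpf_fpf(7))  # high cutoff: no false positives, lower sensitivity
```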
Uninformative test has an AUC of…
0.5 (AUC = area under the ROC curve; the curve lies on the 45° diagonal)
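One way to see why: the AUC equals the probability that a randomly chosen diseased subject scores higher than a randomly chosen non-diseased subject (ties count as 1/2). If the test is uninformative, the two groups have the same score distribution, so that probability is 0.5. A simulation sketch with hypothetical uniform scores:

```python
# Sketch: AUC as P(diseased score > non-diseased score).
# Both groups draw from the same distribution, so the test
# carries no information and the AUC should be close to 0.5.
import random

random.seed(0)
diseased     = [random.random() for _ in range(200)]
non_diseased = [random.random() for _ in range(200)]  # same distribution

pairs = [(d, n) for d in diseased for n in non_diseased]
auc = sum(1.0 if d > n else 0.5 if d == n else 0.0 for d, n in pairs) / len(pairs)
print(round(auc, 2))  # close to 0.5 (exact value varies with the seed)
```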