Measures of Reliability and Validity Flashcards
requires constant collection, evaluation, analysis, and use of quantitative and qualitative data.
Clinical Medicine
Error
- Mistakes in the diagnosis and treatment of patients
- Mistakes due to clear negligence
Goal
minimize error in data so as to guide, not mislead
PROMOTING
Accuracy and Precision
What errors do we need to reduce
Differential and Nondifferential Errors
We need to reduce what variability?
intraobserver and interobserver
Ability of a test to give results close to the true value
Accuracy
Also known as “reproducibility” or “reliability”
•Ability of a test to give the same result or a similar result with repeated measurement of the same factor
Precision
Differential error
information errors differ between groups
information is incorrect, but is the same across groups
Nondifferential error
refers to any systematic error that may occur during the collection of baseline or follow-up data
Measurement bias
Examples of Measurement Bias
• Blood pressure values
• measuring height with shoes on
• Different laboratories using different measurement methods
• Variability and unpredictability
• results in lack of precision
• some observations are too high and some are too low
Random error
one observer examining the same results more than once
Intraobserver variability (within the observer)
Interobserver variability (between observers)
2 or more observers examining the same material
measure of the consistency of a metric or a method
Reliability
• Overall percent agreement
• Paired observation
• Multiple variables
• Kappa test ratio
MEASURES OF RELIABILITY
Common way to measure agreement
Overall Percent Agreement (OPA)
- Does not account for prevalence
- does not show how disagreement occurred
- Agreement might be due to chance alone
Drawbacks of OPA
Percent Agreement Formula
PA = [(a + d) / (a + b + c + d)] × 100
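To make the arithmetic concrete, here is a minimal Python sketch of the percent agreement calculation; the function name and the cell counts are hypothetical, with a and d as the concordant cells of a 2 × 2 agreement table.

```python
def percent_agreement(a: int, b: int, c: int, d: int) -> float:
    """Overall percent agreement for a 2 x 2 agreement table.

    a = both observers positive, d = both observers negative (concordant cells);
    b and c are the discordant cells.
    """
    return (a + d) / (a + b + c + d) * 100

# Hypothetical counts: observers agree on 40 positives and 50 negatives,
# and disagree on 10 readings.
print(percent_agreement(a=40, b=6, c=4, d=50))  # 90.0
```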
Measures the extent to which agreement exceeds that expected by chance
KAPPA TEST RATIO
Kappa = [(percent agreement observed) - (percent agreement expected by chance alone)] / [100% - (percent agreement expected by chance alone)]
FORMULA FOR KAPPA TEST RATIO
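A minimal sketch of the kappa calculation, assuming chance agreement is computed from the marginal totals of the same 2 × 2 table (the usual Cohen's kappa approach); the counts are the hypothetical ones from the percent agreement example.

```python
def cohen_kappa(a: int, b: int, c: int, d: int) -> float:
    """Kappa for a 2 x 2 agreement table (rows = observer 1, columns = observer 2)."""
    n = a + b + c + d
    observed = (a + d) / n
    # Chance agreement: sum over categories of (row total x column total) / n^2.
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (observed - expected) / (1 - expected)

print(round(cohen_kappa(a=40, b=6, c=4, d=50), 3))  # 0.798
```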
INTERPRETATION OF KAPPA
< 0 = Less than chance agreement
0.01-0.20 = Slight
0.21-0.40 = Fair
0.41-0.60 = Moderate
0.61-0.80 = Substantial
0.81-1.00 = Almost perfect agreement
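The bands above translate into a small lookup; this sketch is one possible boundary handling (values exactly on a cutoff fall in the lower band), not a standard function.

```python
def interpret_kappa(kappa: float) -> str:
    """Label a kappa value using the interpretation table above."""
    if kappa <= 0:
        return "Less than chance agreement"
    for upper, label in [(0.20, "Slight"), (0.40, "Fair"),
                         (0.60, "Moderate"), (0.80, "Substantial")]:
        if kappa <= upper:
            return label
    return "Almost perfect agreement"

print(interpret_kappa(0.798))  # Substantial
```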
Ability of a test to distinguish between
WHO HAS a disease and WHO DOES NOT
Validity
Screening tests
Performed as a preventive measure to detect disease in people without symptoms
- Able to correctly identify who has the disease
- Reliably finding a disease when it is present
- Avoids false negative results
Sensitivity
Specificity
- Correctly identifies who does not have the disease
- Reliably excluding a disease when it is absent
- Avoids false positive results
Type 1 error / false-positive error / alpha error
Finding a positive result in a patient in whom disease is absent
Finding a negative result in a patient in whom disease is present
Type II error / false-negative error / beta error
Sensitivity Formula
TP / (TP + FN)
Specificity Formula
TN / (TN + FP)
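Both formulas translate directly into code; the TP/FN/TN/FP counts below are hypothetical screening results.

```python
def sensitivity(tp: int, fn: int) -> float:
    """TP / (TP + FN): proportion of diseased people the test correctly flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """TN / (TN + FP): proportion of disease-free people the test correctly clears."""
    return tn / (tn + fp)

# Hypothetical screen of 200 people: 90 TP, 10 FN, 80 TN, 20 FP.
print(sensitivity(tp=90, fn=10))  # 0.9
print(specificity(tn=80, fp=20))  # 0.8
```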
False Positive error rate formula
b/(b+d)
False Negative Error rate formula
c/(a+c)
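In the 2 × 2 notation used here, a = TP, b = FP, c = FN, and d = TN, so each error rate is the complement of one of the measures above; a minimal sketch with the same hypothetical counts:

```python
def false_positive_error_rate(b: int, d: int) -> float:
    """b / (b + d): disease-free people wrongly flagged; equals 1 - specificity."""
    return b / (b + d)

def false_negative_error_rate(a: int, c: int) -> float:
    """c / (a + c): diseased people wrongly cleared; equals 1 - sensitivity."""
    return c / (a + c)

# Same hypothetical counts: a = 90 (TP), b = 20 (FP), c = 10 (FN), d = 80 (TN).
print(false_positive_error_rate(b=20, d=80))  # 0.2
print(false_negative_error_rate(a=90, c=10))  # 0.1
```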
Predictive Values
Describes the probability of having actual disease given the results of a test
Positive predictive value (PPV)
Indicates what proportion of the subjects with positive test results actually have the disease
Negative predictive value (NPV)
Indicates what proportion of the subjects with negative test results actually do not have the disease
Formula for PPV
PPV = number of people with gold-standard evidence of disease who test positive (a)/ number of people who test positive (a+b)
NPV FORMULA
NPV = number of people with gold-standard absence of disease who test negative (d)/ number of people who test negative (c+d)
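Both predictive values come from the same 2 × 2 cells; a minimal sketch with the same hypothetical counts. Unlike sensitivity and specificity, PPV and NPV shift with disease prevalence in the tested population.

```python
def ppv(a: int, b: int) -> float:
    """a / (a + b): proportion of test-positives who truly have the disease."""
    return a / (a + b)

def npv(c: int, d: int) -> float:
    """d / (c + d): proportion of test-negatives who are truly disease-free."""
    return d / (c + d)

print(round(ppv(a=90, b=20), 3))  # 0.818
print(round(npv(c=10, d=80), 3))  # 0.889
```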
Sensitivity & specificity versus predictive value
Sensitivity and specificity are characteristics of a test.
Positive predictive value (PPV) and negative predictive value (NPV) are best thought of as the clinical relevance of a test.