Epidemiology (IV) Multiple Test Screening Flashcards
What is a way to get around the decreased sensitivity and/or specificity of a particular test?
Provide two tests (either in sequence or in parallel); one should have a high sensitivity (screening test), and one should have a high specificity (confirmatory test)
What are the two rules of sequential two-test screenings?
- A positive test on the initial screen triggers a referral for the second test
(often used when Test 1 is less expensive, less invasive, or less uncomfortable, while Test 2 is more sensitive and specific)
- Positive finding requires both tests to be positive
(if someone tests negative on EITHER test they are considered to not have the disease)
What does sequential testing do to sensitivity and specificity?
It increases specificity and decreases sensitivity
(fewer false positives, but more cases missed)
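The net effect on sensitivity and specificity follows from requiring both tests to be positive; a minimal sketch, using hypothetical illustration values for the two tests:

```python
# Net sensitivity/specificity of a sequential (two-stage) screen,
# where a case is called positive only if BOTH tests are positive.
# The test characteristics below are hypothetical illustration values.

def sequential_net(se1, sp1, se2, sp2):
    net_se = se1 * se2                  # must be positive on both tests
    net_sp = 1 - (1 - sp1) * (1 - sp2)  # a negative on either test clears you
    return net_se, net_sp

se, sp = sequential_net(0.90, 0.80, 0.85, 0.95)
print(round(se, 4), round(sp, 4))  # 0.765 0.99
```

Note that net sensitivity (0.765) is lower than either test alone, while net specificity (0.99) is higher, matching the rule above.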
What does sequential screening do to false positive and false negative rates?
Fewer false positives (higher specificity), but more false negatives (lower sensitivity)
What type of disease is amenable to sequential screening?
One where the damage done by a false positive would be very high.
What is the rule of parallel (simultaneous) screening regarding positive outcomes?
Patient is defined as having the disease if they test positive on EITHER test
What is the rule of parallel (simultaneous) screening regarding negative outcomes?
Patient is defined as disease-free if they test negative on BOTH tests
What does parallel screening do to sensitivity and specificity?
Parallel testing increases sensitivity at the expense of specificity
(More cases are caught; but with more false positives)
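The mirror-image calculation holds for parallel screening, where a positive on either test counts; a minimal sketch with the same hypothetical test values:

```python
# Net sensitivity/specificity of a parallel (simultaneous) screen,
# where a positive on EITHER test counts as a positive result.
# The test characteristics below are hypothetical illustration values.

def parallel_net(se1, sp1, se2, sp2):
    net_se = 1 - (1 - se1) * (1 - se2)  # missed only if both tests miss
    net_sp = sp1 * sp2                  # disease-free only if both are negative
    return net_se, net_sp

se, sp = parallel_net(0.90, 0.80, 0.85, 0.95)
print(round(se, 4), round(sp, 4))  # 0.985 0.76
```

Here net sensitivity (0.985) exceeds either test alone, at the cost of specificity (0.76).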
What type of disease is amenable to parallel screening?
One where missing the disease would have severe consequences
What does parallel screening do to rates of false positives and false negatives?
Fewer false negatives (higher sensitivity), but more false positives (lower specificity)
Are sequential screenings better for sensitivity or specificity?
Are parallel screenings better for sensitivity or specificity?
Specificity (sequential)
Sensitivity (parallel)
What does the positive predictive value measure?
If the test is positive, what’s the likelihood that the patient actually has the disease?
Positive predictive value is similar to ________ in that, if it is low, there will be an increase in ________.
specificity; false positives
What is the formula for PPV?
TP / (TP + FP)
What does the negative predictive value measure?
If the test is negative, what’s the likelihood that the patient is actually disease-free?
What is the formula for NPV?
TN / (TN + FN)
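Both formulas can be checked directly from a 2×2 table of counts; the numbers below are hypothetical:

```python
# PPV and NPV from a 2x2 table of test results (hypothetical counts).
TP, FP, FN, TN = 90, 40, 10, 860

ppv = TP / (TP + FP)  # of all positive tests, fraction truly diseased
npv = TN / (TN + FN)  # of all negative tests, fraction truly disease-free

print(round(ppv, 3), round(npv, 3))  # 0.692 0.989
```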
What two factors most strongly influence PPV?
- Specificity (sensitivity only marginally)
- Disease prevalence
If the PPV is very low, what does that indicate?
Either the test is not sufficiently specific, OR the population being tested has too low a disease prevalence for the test to be useful
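The prevalence effect is easy to see by rewriting PPV in terms of sensitivity, specificity, and prevalence (Bayes' theorem); a sketch with a hypothetical test that is 95% sensitive and 95% specific:

```python
# PPV as a function of disease prevalence, for a fixed test.
# Se = Sp = 0.95 is a hypothetical illustration value.

def ppv(se, sp, prevalence):
    tp = se * prevalence              # true-positive rate in the population
    fp = (1 - sp) * (1 - prevalence)  # false-positive rate in the population
    return tp / (tp + fp)

for p in (0.001, 0.01, 0.10):
    print(p, round(ppv(0.95, 0.95, p), 3))
# 0.001 0.019
# 0.01 0.161
# 0.1 0.679
```

Even a quite accurate test yields a PPV of about 2% when prevalence is 1 in 1,000, which is why the same test can be useful in a high-risk clinic but nearly useless for mass screening.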
What are three types of variability across screening measurements?
Intra-subject, intra-observer, inter-observer
What is intra-subject variability?
The tendency of a measured variable to fluctuate within the same individual over time (e.g., blood pressure varies throughout the day)
What is intra-observer variability?
The tendency of a single observer to get different results using the same test on the same person.
What is inter-observer variability?
The tendency of two or more observers to get different results using the same test on an individual.
What are a few methods of measuring variability among observers (e.g. screening staff; researchers; etc.)?
(OPA, PPA, KS)
Overall Percent Agreement
Paired Percent Agreement
Kappa Statistic
What is an overall percent agreement?
The proportion of all tests in which the two observers provide the same rating

What is a paired percent agreement?
The number of pairs in which both tests/observers are positive, divided by the number of pairs in which at least one of the two tests/observers is positive
Since most people don’t have the disease and test negative, the percent agreement can be inflated because of the relatively large value in cell “d.”
So, this method simply ignores cell “d.”
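With the usual 2×2 agreement table (cell a = both positive, b and c = discordant, d = both negative), the two measures are one line each; the counts below are hypothetical:

```python
# Overall vs. paired percent agreement from a 2x2 agreement table.
# a = both positive, b/c = discordant pairs, d = both negative.
# Counts are hypothetical illustration values.
a, b, c, d = 20, 5, 10, 965

opa = (a + d) / (a + b + c + d)  # overall percent agreement
ppa = a / (a + b + c)            # paired percent agreement (ignores cell d)

print(round(opa, 3), round(ppa, 3))  # 0.985 0.571
```

The gap between 0.985 and 0.571 shows how the large cell "d" inflates overall agreement when most subjects are disease-free.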

What is a kappa statistic?
Evaluates how much better the agreement between two tests/observers is than chance
Normally a value between 0 (no agreement beyond chance) and 1 (complete agreement)
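Cohen's kappa compares observed agreement with the agreement expected by chance alone; a sketch using the same hypothetical 2×2 agreement table as above:

```python
# Cohen's kappa statistic from a 2x2 agreement table.
# a = both positive, b/c = discordant pairs, d = both negative.
# Counts are hypothetical illustration values.
a, b, c, d = 20, 5, 10, 965
n = a + b + c + d

po = (a + d) / n  # observed agreement
# expected chance agreement, from each observer's marginal totals
pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2

kappa = (po - pe) / (1 - pe)
print(round(kappa, 3))  # 0.72
```

Despite 98.5% observed agreement, kappa is only about 0.72, because chance agreement is already very high when nearly everyone tests negative.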
