Lecture 9: Screening & Diagnostic Tests 2 Flashcards
What are the 2 ways to combine tests (2 tests involving sn and sp like a stool and ELISA test)?
Series interpretation
-We call a test “positive” only if an individual tests positive on BOTH tests
Parallel interpretation
-We call a test “positive” if the individual tests positive to at LEAST one test
What is series testing?
Series testing aka sequential testing
-Multiple tests used one after another to determine a final decision
-1st test is cheaper or less invasive/uncomfortable
-2nd test is often more expensive, laborious, invasive/uncomfortable
-Test 1: everyone is tested → D− individuals stop here; D+ individuals go on to Test 2
-Test 2: D− individuals stop here
-Only individuals who are positive on BOTH tests are the final D+
How do Sn and Sp relate to series testing?
-Since both tests must be positive to call an individual positive, this leads to an increase in Sp for the combined tests (or look at it like decreasing your false positives)
-There is a lower chance of false positives bc you need to test positive on both tests to be called "positive"
-However, it decreases the sensitivity bc there's a higher chance of false negatives (a diseased individual who misses on either test is called "negative")
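The series trade-off above can be sketched with a small calculation. This is a hypothetical illustration (the function name and the Sn/Sp numbers for the stool and ELISA tests are made up), and it assumes the two tests are conditionally independent given disease status, which real tests may not be:

```python
# Combined Sn/Sp for SERIES interpretation: "positive" only if BOTH tests are positive.
# Assumes conditional independence of the two tests given disease status.

def series_sn_sp(sn1, sp1, sn2, sp2):
    """Combined sensitivity and specificity of two tests interpreted in series."""
    sn_series = sn1 * sn2              # a case must be detected by both tests
    sp_series = sp1 + sp2 - sp1 * sp2  # a negative on either test rules an individual out
    return sn_series, sp_series

# Hypothetical numbers: stool test (Sn 0.90, Sp 0.80), then ELISA (Sn 0.95, Sp 0.85)
sn, sp = series_sn_sp(0.90, 0.80, 0.95, 0.85)
print(round(sn, 3), round(sp, 3))  # combined Sn is lower, combined Sp is higher
```

Note how the combined sensitivity (0.90 × 0.95 ≈ 0.86) is below either test's own sensitivity, while specificity rises, matching the points above.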
What is parallel testing?
Parallel testing aka simultaneous testing
-Multiple tests used simultaneously to determine a final decision
-If at least 1 test is positive, the individual is called "positive"; therefore both tests must be negative to be called "negative"
SO
-Both tests must be negative to be called a "negative" (this increases sensitivity and gives fewer false negatives)
-Only 1 test must be positive to be called a "positive" (this decreases specificity and gives more false positives)
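The parallel case is the mirror image of series testing, and can be sketched the same way. Again this is a hypothetical illustration with made-up Sn/Sp values, assuming the tests are conditionally independent given disease status:

```python
# Combined Sn/Sp for PARALLEL interpretation: "positive" if AT LEAST ONE test is positive.
# Assumes conditional independence of the two tests given disease status.

def parallel_sn_sp(sn1, sp1, sn2, sp2):
    """Combined sensitivity and specificity of two tests interpreted in parallel."""
    sn_parallel = sn1 + sn2 - sn1 * sn2  # a case is caught if EITHER test detects it
    sp_parallel = sp1 * sp2              # must be negative on BOTH tests to be "negative"
    return sn_parallel, sp_parallel

# Same hypothetical tests as before: (Sn 0.90, Sp 0.80) and (Sn 0.95, Sp 0.85)
sn, sp = parallel_sn_sp(0.90, 0.80, 0.95, 0.85)
print(round(sn, 3), round(sp, 3))  # combined Sn is higher, combined Sp is lower
```

Here the combined sensitivity exceeds either test's own, while specificity falls, the opposite trade-off to series interpretation.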
What are some other important considerations when considering screening?
Validity of a test
-Ability of a test to distinguish b/w who has the disease and who doesn’t (where multiple tests are run, the average should be close to the true value)
-Sensitivity and specificity
Reliability of a test
-Aka: repeatability
-Ability of a test to give repeatable results
What is reliability?
Reliability can have 3 sources of variation:
-Intra-subject variation: differences within the individual
-Intra-observer variation: differences within the same observer (ex on different trials)
-Inter-observer variation: differences between observers.
What is chance agreement?
-Probability that 2 tests agree (or 2 clinicians agree) just by chance, rather than true agreement b/w them
-How can we compare the agreement b/w tests beyond chance?
KAPPA= a measure of agreement beyond what would be due to chance alone
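Kappa is computed from a 2×2 agreement table as (observed agreement − chance agreement) / (1 − chance agreement). A minimal sketch, with hypothetical cell counts for two tests applied to the same 100 individuals:

```python
# Cohen's kappa from a 2x2 agreement table between two tests (or two observers).
# a = both positive, b = test1 + / test2 -, c = test1 - / test2 +, d = both negative.

def cohens_kappa(a, b, c, d):
    n = a + b + c + d
    p_observed = (a + d) / n  # proportion of individuals the two tests agree on
    # Chance agreement: expected agreement if the tests were independent,
    # from the row and column totals (marginal proportions)
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical counts: 40 both +, 10 only test1 +, 5 only test2 +, 45 both -
k = cohens_kappa(40, 10, 5, 45)
print(round(k, 3))  # observed agreement 0.85, chance agreement 0.50, kappa 0.70
```

With these made-up counts the tests agree on 85% of individuals, but half of that agreement would be expected by chance alone, so kappa lands at 0.70 (substantial agreement on the scale below).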
How do you interpret Kappa?
Kappa values
<0.2 = slight agreement
0.2-0.4 = fair agreement
0.4-0.6 = moderate agreement
0.6-0.8 = substantial agreement
>0.8 = Excellent agreement
*Dont need to memorize exact values but should be able to comment on the agreement in general terms
What is this lectures “bottom line”?
-Tests include everything from lab tests, to clinical opinions, to survey questions
-Kappa measures potential agreement of 2 tests beyond that expected by chance alone
-When combining 2 tests in SERIES, both tests must be positive to call the individual “positive” (this increases specificity and lower false +, but reduces sensitivity and increases false -)
-When combining 2 tests in PARALLEL, only 1 test needs to be positive to call it positive (this increases sensitivity or lowers false - bc you need 2 negatives to be not diseased, but reduces specificity or increases false +)