Clinical Diagnosis Flashcards
2 parts of clinical vet med
- make diagnosis
- provision of treatment and control methods
What is the main step in working up medical problems?
-diagnosis
*needs to be accurate to be of any value
*lots of uncertainty, which needs to be quantified
Clinical data interpretation
means nothing unless interpreted in the context of expected values for the population
How do we define normal?
Gaussian: mean +/- 2 standard deviations
Percentile: 2.5th to 97.5th percentile
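A minimal Python sketch of both definitions, using hypothetical test values:

```python
import statistics

results = [4.1, 4.8, 5.0, 5.2, 5.5, 5.9, 6.3, 6.8, 7.1, 9.4]  # hypothetical results

# Gaussian definition: mean +/- 2 standard deviations
mean = statistics.mean(results)
sd = statistics.stdev(results)
print(f"Gaussian range: {mean - 2*sd:.2f} to {mean + 2*sd:.2f}")

# Percentile definition: 2.5th to 97.5th percentile (linear interpolation)
def percentile(sorted_data, p):
    k = (len(sorted_data) - 1) * p / 100
    lo = int(k)
    hi = min(lo + 1, len(sorted_data) - 1)
    return sorted_data[lo] + (sorted_data[hi] - sorted_data[lo]) * (k - lo)

ordered = sorted(results)
print(f"Percentile range: {percentile(ordered, 2.5):.2f} to {percentile(ordered, 97.5):.2f}")
```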
Issues with using Gaussian and percentile definitions
-few diagnostic test results fit a Gaussian distribution
-both methods assume all diseases have the same prevalence
-leads to the conclusion that the only normal animals are the ones that have not been tested yet
Diagnosis of the disease
95% of normal subjects fall within the reference range for a test, but 5% of normal subjects do not
Abnormal due to disease presence
-gold standard. If disease present then considered abnormal
eg. whether cows get pregnant
-lower serum values are associated with higher open rates in cows
-younger animals are at most risk of something going wrong because this is their first breeding
Uncertainty of clinical data
Imagine if a clinical finding were:
- always present in patients with the disease, so if absent = no disease
- never present in patients who do not have the disease, so if present = disease
**not always the case! Need clinical judgement
Diagnoses
-usually based on signs, symptoms and tests
-every diagnostic test has some false positive and false negatives
*therefore ruling in or out disease becomes an assessment of probabilities
Actions for diagnosing disease
- Do nothing
- Get more information (test or response to treatment)
- Treat without obtaining more information
**Choice usually depends on probability of disease
Diagnostic Test
-Any technique that differentiates healthy from diseased individuals or between different diseases
Accuracy
Degree of agreement between estimated value and the true value
-reflects validity (lack of bias) and reproducibility (precision or repeatability)
Eqn of accuracy
Accuracy = validity + reliability
Validity
Ability to measure what it is supposed to measure, without being influenced by systematic errors
**Valid=Unbiased
-does not ensure accuracy
-not always repeatable
Reliability
The tendency to give the same results on repeated measures of the same sample
-a reliable test gives repeatable results, over time, locations or populations
**does not ensure accuracy
Sources of false positive and negative results
- Lab error
- Improper sample handling
- Recording errors
What affects lab error?
-depends on both analytical accuracy and precision
-can vary between labs or within labs
-does the lab have recognized QA/QC programs?
Specific false negative results
-improper timing of test
-wrong sample
-natural or induced tolerance
-non-specific inhibitors
False positive results
-group cross-reactions: looking for one thing, detecting something else
-cross contamination
How do we know a test is VALID?
The accuracy of any diagnostic test should be established by a BLIND comparison to an independent and valid criterion for infection or disease status (GOLD STANDARD)
ex. culture of organism, post-mortem examination, biopsy, long term follow up
Pathognomonic Tests
Absolute predictor of disease or disease agent
-can have false negatives
eg.Culture of T. foetus
eg. salmonella
eg. MAP (Johne’s disease bacteria)
***these examples all involve variable timing of shedding, so false negatives can occur
Surrogate tests
Detect secondary changes that will hopefully predict the presence or absence of disease or the disease agent
*Can have false negatives and false positives
eg. Serology
eg. Serum chemistry
How to determine if the test will work for our purpose?
1. Diagnostic validity
2. Understand our test subject
Diagnostic validity
The proportion of affected or non-affected animals that will be correctly identified by the test
**the sensitivity and specificity
SOURCE: lab or test manufacturer
Understanding our test subject
-What is the prevalence of the disease in the source population for our subject
-What is the pre-test probability that our patient has the disease
SOURCES: signalment, history, clinical exam, published literature and clinical judgement
2x2 tables
TYPES:
- Examine results of a diagnostic test
- Determine how much more likely one problem is vs. another in terms of causing a disease
Diagnostic validity 2x2 table
Unique to diagnostic test interpretation
Want actual health status (disease present vs absent)
compared with Test result (positive vs. negative)
Sensitivity
-The proportion of subjects with the disease who have a positive test
>indicates how good a test is at detecting disease
>Sensitivity = 1 - false negative rate
SnNout
When using tests with very high sensitivity, negative results help to Rule out disease
Specificity
The proportion of the subjects without the disease who have a negative test result
-indicates how good the test is at identifying the non-diseased
-Specificity = 1 - false positive rate
SpPin
When using tests with high specificity, positive results rule in disease
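As a worked sketch, both values from a hypothetical 2x2 table laid out the same way as the prevalence and predictive value formulas below (a and c are the diseased column, a and b the test-positive row):

```python
# Hypothetical 2x2 counts -- rows are test result, columns are true status:
#   a = diseased, test positive     b = non-diseased, test positive
#   c = diseased, test negative     d = non-diseased, test negative
a, b, c, d = 90, 15, 10, 285

sensitivity = a / (a + c)   # proportion of diseased animals that test positive
specificity = d / (b + d)   # proportion of non-diseased animals that test negative

print(f"Sensitivity = {sensitivity:.1%} (false negative rate = {1 - sensitivity:.1%})")
print(f"Specificity = {specificity:.1%} (false positive rate = {1 - specificity:.1%})")
```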
Cut offs
-used to distinguish positive and negative test results
-will determine the sensitivity and specificity
-can be changed to get what you need out of the test
Adjustments of cut off
-To find more positives (higher sensitivity): drop the cut-off
-To find more negatives (higher specificity): raise the cut-off
**remember that sensitivity and specificity are inversely related (raising one, decreases the other)
Constant sensitivity and specificity
-usually these are assumed to be constant
**especially in this class
Prevalence
The proportion of the population who have the infection under study at ONE POINT in time
**assumes we actually know whether animal has disease or not
True prevalence eqn
TP= disease positive animals/all animals= (a+c)/n
Apparent prevalence
AP= all test positives/all animals = (a+b)/n
**includes all animals that test positive, whether they actually have it or not… real world!
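Both prevalence equations applied to the same hypothetical 2x2 counts used above:

```python
a, b, c, d = 90, 15, 10, 285   # same hypothetical 2x2 counts as above
n = a + b + c + d

true_prevalence = (a + c) / n      # animals that actually have the disease
apparent_prevalence = (a + b) / n  # animals that test positive, rightly or wrongly

print(f"True prevalence     = {true_prevalence:.1%}")   # 25%
print(f"Apparent prevalence = {apparent_prevalence:.1%}")  # ~26%
```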
Positive predictive value
-useful in clinic
-proportion of patients with positive test results who actually have the target disorder
What affects positive predictive value?
-sensitivity
-specificity
-prevalence
Positive predictive value eqn
PPV = probability that an animal is diseased given that it is test positive
=a/(a+b)
Negative predictive value
-proportion of animals that have negative test results who don’t have the target disorder
Negative predictive value eqn
NPV= probability that an animal is non diseased given that it is test negative
NPV= d/(c+d)
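Both predictive value equations on the same hypothetical 2x2 counts:

```python
a, b, c, d = 90, 15, 10, 285   # same hypothetical 2x2 counts as above

ppv = a / (a + b)   # P(diseased | test positive) = 90/105
npv = d / (c + d)   # P(non-diseased | test negative) = 285/295

print(f"PPV = {ppv:.1%}")   # ~85.7%
print(f"NPV = {npv:.1%}")   # ~96.6%
```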
Lyme Disease SNAP test Example of sensitivity and specificity
-Reported as sensitivity=88%, specificity= 97%
>means 88/100 dogs with Lyme disease should test positive, and if there were 100 dogs never exposed to Lyme then 97 would test negative
Lyme Disease SNAP test Example for clinicians with outdoor dogs
**more interested in the probability of a dog being truly positive if test is positive
1. Use manufacturer reports and population information (ex. 45% of dogs exposed to Lyme in the last year)
2. Make a table!
-45% = expected prevalence
-n = 1000 dogs
-45% of 1000 = 450 exposed
-1000 - 450 = 550 not exposed
-Sensitivity (88%) x 450 = 396 dogs exposed and test positive
-Specificity (97%) x 550 = 534 dogs not exposed and test negative
3. Calculate predictive values
-Positive = 396/412 = 96.1%
-Negative = 534/588 = 90.8%
4. Compare prevalences
-want apparent and true to be similar
-Apparent = 412/1000 = 41.2%
-True = 450/1000 = 45%
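The same arithmetic as a short Python sketch (the card rounds to whole dogs; the exact values are 412.5 test positives and 533.5 true negatives):

```python
n = 1000
prevalence = 0.45           # 45% of dogs exposed in the last year
se, sp = 0.88, 0.97         # manufacturer-reported sensitivity and specificity

exposed = prevalence * n            # 450 dogs
not_exposed = n - exposed           # 550 dogs

true_pos = se * exposed             # 396 exposed dogs test positive
true_neg = sp * not_exposed         # ~534 unexposed dogs test negative
false_pos = not_exposed - true_neg  # ~16 unexposed dogs also test positive
false_neg = exposed - true_pos      # 54 exposed dogs test negative

ppv = true_pos / (true_pos + false_pos)    # ~96%
npv = true_neg / (true_neg + false_neg)    # ~91%
apparent = (true_pos + false_pos) / n      # ~41%

print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
print(f"Apparent prevalence = {apparent:.1%} vs true prevalence = {prevalence:.0%}")
```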
Lyme Disease SNAP test Example for clinicians with indoor dogs
1. 1000 indoor dogs, less than 1% exposed = 10 dogs exposed, 990 not
2. Sensitivity = 88%, specificity = 97% = 9 true positives, 960 true negatives
3. Determine positive and negative predictive values (worked in the sketch below)
4. Determine apparent and true prevalence
**in rare disease, the apparent prevalence will overestimate the true prevalence of disease
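Steps 3 and 4 worked through in the same style of sketch (assuming a 1% true prevalence):

```python
n = 1000
prevalence = 0.01            # assume 1% of indoor dogs exposed
se, sp = 0.88, 0.97

exposed = prevalence * n               # 10 dogs
true_pos = se * exposed                # ~9 exposed dogs test positive
false_pos = (1 - sp) * (n - exposed)   # ~30 unexposed dogs also test positive

ppv = true_pos / (true_pos + false_pos)   # ~23% -- most positives are false
apparent = (true_pos + false_pos) / n     # ~3.9%, vs a true prevalence of 1%

print(f"PPV = {ppv:.1%}")
print(f"Apparent prevalence = {apparent:.1%} vs true prevalence = {prevalence:.0%}")
```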
Prevalence effects on NPV and PPV
As prevalence drops, NPV increases to high levels and PPV falls dramatically
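A sketch of this relationship using Bayes' theorem (hypothetical test with Se = 88% and Sp = 97%, as in the SNAP example):

```python
se, sp = 0.88, 0.97   # hypothetical test characteristics

for p in (0.50, 0.20, 0.05, 0.01):
    # Bayes' theorem for predictive values at prevalence p
    ppv = se * p / (se * p + (1 - sp) * (1 - p))
    npv = sp * (1 - p) / (sp * (1 - p) + (1 - se) * p)
    print(f"prevalence {p:4.0%}: PPV = {ppv:6.1%}, NPV = {npv:6.1%}")
```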
Best tests to rule out disease
Negative test with high sensitivity and NPV
Best tests to confirm/rule in disease
Positive test with high specificity and PPV
When are diagnostic tests the best?
When pre-test probability of disease is near 50% and predictive values are maximized
-roughly between 40-60%
How to optimize predictive values?
1. Use in situations where pre-test probability is around 50%
2. Use one test and then apply another more specific test to the positive animals
3. Use two tests concurrently
Parallel testing
2 or more different tests are performed and interpreted simultaneously
-animal is positive if it reacts positive to one or more tests
*better for negative test results
*increased sensitivity and NPV
Serial testing
Tests conducted sequentially based on the results of the previous test
-use one test, then apply another more specific test to those that are positive
-animal is only positive if positive on all tests
*max specificity and improves PPV
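A sketch of the usual combination formulas for two tests; note these assume the tests are conditionally independent given disease status, which real tests often are not:

```python
# Hypothetical independent tests (real tests are often correlated,
# so these formulas are an approximation)
se1, sp1 = 0.90, 0.80
se2, sp2 = 0.80, 0.95

# Parallel: animal is positive if EITHER test is positive
se_parallel = 1 - (1 - se1) * (1 - se2)   # missed only if both tests miss
sp_parallel = sp1 * sp2                   # negative only if both tests are negative

# Serial: animal is positive only if BOTH tests are positive
se_serial = se1 * se2                     # detected only if both tests detect
sp_serial = 1 - (1 - sp1) * (1 - sp2)     # false positive only on both tests

print(f"Parallel: Se = {se_parallel:.1%}, Sp = {sp_parallel:.1%}")  # Se and NPV up
print(f"Serial:   Se = {se_serial:.1%}, Sp = {sp_serial:.1%}")      # Sp and PPV up
```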
Repeat Testing
Negative re-testing
-negative test animals are retested with the same test at regular intervals
*used to eradicate disease
*improves sensitivity
eg. Johne’s disease, trichomoniasis
Ruling out disease
-use test with high sensitivity and high negative predictive value
-works best when pre-test probability of disease is low
SnNOUT
Ruling in disease
-use high specificity and high PPV
-works best with pre test probability of disease is high
SpPIN
Cost of false negative
-high consequence when missing certain diseases eg. Foot and Mouth disease
-need highly sensitive tests, even at the cost of specificity
-avoid false negatives at all costs
-use multiple tests in parallel
Cost of a false positive test
-high treatment costs, treatments that might be dangerous, and euthanasia of valuable animals
-use highly specific tests
-use multiple tests interpreted in series
Main things that happen with parallel or negative re-testing
-FN decreased
-sensitivity increases
-NPV increases
Main things that happen with serial testing
-False positives decreased
-specificity increased
-PPV increased
Cut points
-point between normal and diseased animals
-can be adjusted to improve sensitivity or specificity
Increased vs decreased cut points
Increased: increase specificity and decreased sensitivity
Decreased: increased sensitivity, decreased specificity
Selection of low cut point
-gives good sensitivity
-use when false negatives are not acceptable
-consequences of false positives are not severe
-disease can be treated but untreated cases are fatal
Selection of high cut points
-gives good specificity
-false negative consequences are not severe
-disease is severe but confirmation has little impact on terms of therapy or prevention
Cut off fuzzy zones
-animals falling within the intermediate zone need to be re-tested after a certain time period
Lead toxicity in cattle
-lead accumulates in bone and is transferred across placental barrier
-excreted in urine, bile, feces, milk
-neurological signs: blindness, jaw champing, aggression, head pressing, tonic-clonic convulsions, encephalopathy
Lead toxicity cut off point and uncertain zone
Background: <0.10 ppm
High (uncertain zone): 0.10-0.35 ppm
Toxic: >0.35 ppm
*animals can hide in this zone: they can be toxic yet show no clinical signs
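A trivial sketch of applying these cut-offs (values in ppm, taken from the card above):

```python
def classify_blood_lead(ppm):
    """Apply the lead cut-offs above to a blood result (ppm)."""
    if ppm < 0.10:
        return "background"
    if ppm <= 0.35:
        return "uncertain zone -- re-test after an interval"
    return "toxic"

for value in (0.05, 0.20, 0.40):
    print(f"{value:.2f} ppm -> {classify_blood_lead(value)}")
```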
Receiver Operator Characteristic (ROC) Curves
-Graphs true positive rate on vertical axis (sensitivity)
-False positive rate (1-specificity) on horizontal axis
**point closest to the top left corner will maximize sensitivity and specificity
BUT remember to consider the costs of false positives and false negatives
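A minimal sketch of picking the cut-off closest to the top-left corner, using hypothetical (cut-off, Se, Sp) points along one curve:

```python
import math

# Hypothetical (cut-off, sensitivity, specificity) points along one ROC curve
points = [
    (1, 0.99, 0.40),
    (2, 0.95, 0.70),
    (3, 0.88, 0.85),
    (4, 0.75, 0.95),
    (5, 0.50, 0.99),
]

# Each ROC point is (1 - Sp, Se); measure its distance to the corner (0, 1)
best = min(points, key=lambda p: math.hypot(1 - p[2], 1 - p[1]))
print(f"Cut-off {best[0]} is closest to the top-left corner "
      f"(Se = {best[1]:.0%}, Sp = {best[2]:.0%})")
```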
Mass screening
-sampling volunteers or a sample of the population to detect disease
eg. Brucellosis testing
eg. Bovine TB testing
Case finding
Seeking an early diagnosis when a client brings an animal to the vet for unrelated reasons
eg. heartworm testing
eg. meat inspection
Suitable tests for screening
-sensitivity is hard to estimate, so specificity is most important
-PPV only measures diagnostic test performance, not the efficacy of the screening program
Evaluating a screening program
Is the early detection test worth it?
-increased QOL, treatment costs decreased, etc.
-Use randomized clinical trials
Bias with diagnostic screening
Early diagnosis will almost always appear to improve survival, even if the therapy applied is useless
BUT there are many biases that can make a test appear better than it is
1. Volunteer bias
2. Zero time shift or lead time bias
3. Length time bias
Volunteer effect
-clients that bring animals for screening tests are not the same as ones that don’t
-the animals coming in will likely be the ones with better management and better health anyway
Zero Time shift or lead time bias
Comparing survival times after early diagnosis to survival times after conventional diagnosis
-the zero point for survival time is the time of diagnosis, so if early diagnosis happens before the conventional time, the extra lead time inflates measured survival
Length time bias
Diseases with a long preclinical phase tend to have a longer clinical phase
vs. diseases with a short preclinical phase have a short clinical phase
So screening is more likely to find the less aggressive diseases with longer clinical phases
Early diagnosis hazards
-marketing our treatment to clients; need to ensure efficacy
-false positive risk, especially if the treatment is debilitating
-Labeling is important
Diagnostic panels
- panel of diagnostic tests run on healthy animals
-each test has a specificity and sensitivity, so there will be false positives in healthy animals due to chance
Ex. the chance of two tests both being normal in a healthy animal = specificity^2, so the probability of at least one false positive = 1 - specificity^2
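Extending that example to an n-test panel (hypothetical per-test specificity of 95%):

```python
sp = 0.95   # hypothetical per-test specificity
for n_tests in (1, 6, 12, 20):
    p_false_pos = 1 - sp ** n_tests   # chance a healthy animal flags at least once
    print(f"{n_tests:2d} tests: P(>=1 false positive) = {p_false_pos:.1%}")
```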
Herd testing
-used to determine prevalence of infected herds and certify herds as disease negative for eradication or trade
Differences between herd and individual tests
Uncertainty in individual sensitivity and specificity is amplified in herd sensitivity and specificity
*false results impact is greater
Herd positives
A positive test does not equal positive diagnosis
because false positives can occur in herds
-if disease prevalence drops in herd, PPV gets worse
**Need high specificity tests when prevalence low
Ways to manipulate tests
1. Increase the number of animals tested
-increases herd sensitivity
-decreases herd specificity
-decreases herd PPV, increases herd NPV
2. Increase the number of reactors required to be considered positive
-decreases herd sensitivity
-increases herd specificity
-increases herd PPV, decreases herd NPV
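A sketch of the first manipulation, assuming a single reactor calls the herd positive and animals test independently (hypothetical Se, Sp, and within-herd prevalence):

```python
se, sp = 0.90, 0.98   # hypothetical individual-animal test
prev_within = 0.10    # hypothetical within-herd prevalence in infected herds

# Probability that one randomly sampled animal from an infected herd reacts
p_react = prev_within * se + (1 - prev_within) * (1 - sp)

for n in (5, 10, 30, 60):
    herd_se = 1 - (1 - p_react) ** n   # infected herd: at least one reactor
    herd_sp = sp ** n                  # uninfected herd: all n animals negative
    print(f"n = {n:2d}: herd Se = {herd_se:.1%}, herd Sp = {herd_sp:.1%}")
```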
Pooled Samples
Ideal in situations where within herd prevalence is low
Pros: decreases lab cost, increases herd sensitivity due to increased n
Cons: risk of decreased sensitivity due to dilution; logistical challenges of mixing samples; some PCRs are susceptible to inhibitors (eg. urine), so contamination can affect all samples in a pool
What to do if no gold standard?
- Compare agreement between 2 tests
- Compare agreement between 2 clinicians
- Compare agreement within clinicians (same data = same diagnosis)
Kappa Statistic
The proportion of agreement measured beyond that expected by chance alone
eg. adjusts for the fact that two raters flipping coins would agree about 50% of the time by chance
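A minimal kappa computation on a hypothetical 2x2 agreement table:

```python
# Hypothetical agreement table for two raters (or two tests):
#   a = both positive, b = rater1+/rater2-, c = rater1-/rater2+, d = both negative
a, b, c, d = 40, 10, 5, 45
n = a + b + c + d

observed = (a + d) / n   # proportion of subjects the raters agree on (0.85)

# Agreement expected by chance alone, from each rater's marginal proportions
p1, p2 = (a + b) / n, (a + c) / n
expected = p1 * p2 + (1 - p1) * (1 - p2)   # 0.50 here

kappa = (observed - expected) / (1 - expected)
print(f"kappa = {kappa:.2f}")   # 0.70 -- relatively good agreement per the scale below
```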
Interpreting kappa
0.61 and above is relatively good agreement
-above 0.81 is almost perfect agreement
Index test
-other option for no gold standard
-the test is compared to a reference standard (therefore sensitivity and specificity are obtained relative to the reference test)