Clinical Diagnosis Flashcards

1
Q

2 parts of clinical vet med

A
  1. make diagnosis
  2. provision of treatment and control methods
2
Q

What is the main thing leading to medical problems?

A

-diagnosis
*needs to be accurate to have any value
*carries lots of uncertainty, which needs to be quantified

3
Q

Clinical data interpretation

A

means nothing unless interpreted in the context of expected values for the population

4
Q

How do we define normal?

A

Gaussian: mean +/- 2 standard deviations

Percentile: 2.5th to 97.5th percentile

5
Q

Issues with using Gaussian and percentile definitions

A

-few diagnostic test results fit a Gaussian distribution

-both methods assume all diseases have the same prevalence

-leads to diagnosis of disease, BUT the only normal animals are the ones that have not been tested yet

6
Q

Diagnosis of the disease

A

95% of normal subjects fall within the reference range for a test, but 5% of normal subjects do not

7
Q

Abnormal due to disease presence

A

-gold standard: if the disease is present, the result is considered abnormal

eg. whether cows get pregnant
-lower serum values are associated with higher open rates in cows
-younger cows are at most risk of something going wrong because this is their first breeding

8
Q

Uncertainty of clinical data

A

Imagine if a clinical finding were always:

  1. present in patients with the disease, so if absent = no disease
  2. never present in patients who do not have the disease, so if present = disease

**not always the case! Need clinical judgement

9
Q

Diagnoses

A

-usually based on signs, symptoms and tests

-every diagnostic test has some false positives and false negatives
*therefore ruling disease in or out becomes an assessment of probabilities

10
Q

Actions for diagnosing disease

A
  1. Do nothing
  2. Get more information (test or response to treatment)
  3. Treat without obtaining more information

**Choice usually depends on probability of disease

11
Q

Diagnostic Test

A

-Any technique that differentiates healthy from diseased individuals, or between different diseases

12
Q

Accuracy

A

Degree of agreement between estimated value and the true value
-reflects validity (lack of bias) and reproducibility (precision or repeatability)

13
Q

Eqn of accuracy

A

Accuracy= validity+reliability

14
Q

Validity

A

Ability to measure what it is supposed to measure, without being influenced by systematic errors

**Valid=Unbiased
-does not ensure accuracy
-not always repeatable

15
Q

Reliability

A

The tendency to give the same results on repeated measures of the same sample
-a reliable test gives repeatable results, over time, locations or populations
**does not ensure accuracy

16
Q

Sources of false positive and negative results

A
  1. Lab error
  2. Improper sample handling
  3. Recording errors
17
Q

What affects lab error?

A

-depends on both analytical accuracy and precision
-can vary between labs or within labs
-does the lab have recognized QA/QC programs?

18
Q

Specific false negative results

A

-improper timing of test
-wrong sample
-natural or induced tolerance
-non-specific inhibitors

19
Q

False positive results

A

-group cross-reactions- looking for one thing, detecting something else
-cross contamination

20
Q

What is a test that is VALID?

A

The accuracy of any diagnostic test should be established by a BLIND comparison to an independent and valid criterion for infection or disease status (GOLD STANDARD)

ex. culture of organism, post-mortem examination, biopsy, long term follow up

21
Q

Pathognomonic Tests

A

Absolute predictor of disease or disease agent
-can have false negatives

eg. Culture of T. foetus
eg. Salmonella
eg. MAP (Johne’s disease bacteria)

***these examples all involve organisms shed at different times, so false negatives can occur

22
Q

Surrogate tests

A

Detect secondary changes that will hopefully predict the presence or absence of disease or the disease agent
*Can have false negatives and false positives

eg. Serology
eg. Serum chemistry

23
Q

How to determine if the test will work for our purpose?

A
  1. Diagnostic validity
  2. Understand our test subject

24
Q

Diagnostic validity

A

The proportion of affected or non-affected animals that will be correctly identified by the test
**the sensitivity and specificity

SOURCE: lab or test manufacturer

25
Q

Understanding our test subject

A

-What is the prevalence of the disease in the source population for our subject

-What is the pre-test probability that our patient has the disease

SOURCES: signalment, history, clinical exam, published literature and clinical judgement

26
Q

2x2 tables

A

TYPES:

  1. Exam results of diagnostic test
  2. Determine how much more likely one problem is than another in terms of causing a disease
27
Q

Diagnostic validity 2x2 table

A

Unique to diagnostic test interpretation

Want actual health status (disease present vs absent)
compared with Test result (positive vs. negative)

28
Q

Sensitivity

A

-The proportion of subjects with the disease who have a positive test
>indicates how good a test is at detecting disease
>sensitivity = 1 - false negative rate

29
Q

SnNout

A

When using tests with very high sensitivity, negative results help to Rule out disease

30
Q

Specificity

A

The proportion of the subjects without the disease who have a negative test result
-indicates how good the test is at identifying the non-diseased
-specificity = 1 - false positive rate

31
Q

SpPin

A

When using tests with high specificity, positive results rule in disease

32
Q

Cut offs

A

-used to distinguish positive and negative test results

-will determine the sensitivity and specificity

-can be changed to get what you need out of the test

33
Q

Adjustments of cut off

A

-To find more positives (higher sensitivity), drop the cut off

-To find more negatives (higher specificity), raise the cut off

**remember that sensitivity and specificity are inversely related (raising one decreases the other)

34
Q

Constant sensitivity and specificity

A

-usually these are assumed to be constant
**especially in this class

35
Q

Prevalence

A

The proportion of the population who have the infection under study at ONE POINT in time

**assumes we actually know whether animal has disease or not

36
Q

True prevalence eqn

A

TP= disease positive animals/all animals= (a+c)/n

37
Q

Apparent prevalence

A

AP= all test positives/all animals = (a+b)/n

**includes all animals that test positive, whether they actually have it or not… real world!

38
Q

Positive predictive value

A

-useful in clinic
-proportion of patients with positive test results who actually have the target disorder

39
Q

What affects positive predictive value?

A

-sensitivity
-specificity
-prevalence

40
Q

Positive predictive value eqn

A

PPV= probability that an animal is diseased given that it is test positive
=a/(a+b)

41
Q

Negative predictive value

A

-proportion of animals that have negative test results who don’t have the target disorder

42
Q

Negative predictive value eqn

A

NPV= probability that an animal is non diseased given that it is test negative

NPV= d/(c+d)
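
The equations on these cards can be sketched together as one Python helper (not from the lecture; the 2x2 layout a/b/c/d follows the cards' formulas, and the counts below are made up for illustration):

```python
# Minimal sketch of the quantities defined on these cards, using the usual
# 2x2 layout: a = test+/diseased, b = test+/healthy,
# c = test-/diseased, d = test-/healthy.

def two_by_two(a, b, c, d):
    n = a + b + c + d
    return {
        "sensitivity": a / (a + c),          # test+ among the diseased
        "specificity": d / (b + d),          # test- among the healthy
        "ppv": a / (a + b),                  # diseased among test+
        "npv": d / (c + d),                  # healthy among test-
        "true_prevalence": (a + c) / n,      # TP = (a+c)/n
        "apparent_prevalence": (a + b) / n,  # AP = (a+b)/n
    }

# made-up counts for illustration:
stats = two_by_two(a=90, b=30, c=10, d=870)
print(stats["ppv"])  # prints 0.75 (= 90/120)
```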

43
Q

Lyme Disease SNAP test Example of sensitivity and specificity

A

-Reported as sensitivity=88%, specificity= 97%
>means 88/100 dogs with Lyme disease should test positive, and if there were 100 dogs never exposed to Lyme then 97 would test negative

44
Q

Lyme Disease SNAP test Example for clinicians with outdoor dogs

A

**more interested in the probability of a dog being truly positive if test is positive

  1. Need to use manufacturer reports and population information (ex. 45% of dogs exposed to Lyme in the last year)
  2. Make table!
    -45% = expected prevalence
    -n = 1000 dogs
    -45% of 1000 = 450 exposed
    -1000 - 450 = 550 not exposed
    -Sensitivity (88%) x 450 = 396 = number of dogs exposed that test positive
    -Specificity (97%) x 550 = 534 dogs not exposed that test negative
  3. Make predictive values
    -Positive = 396/412 = 96.1%
    -Negative = 534/588 = 90.8%
  4. Prevalence (want apparent and true to be similar)
    -Apparent = 412/1000 = 41.2%
    -True = 450/1000 = 45%
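
The outdoor-dog arithmetic above can be re-run as a short script (rounding to whole dogs, as the card does):

```python
# Re-running the card's outdoor-dog SNAP example:
# n = 1000, expected prevalence 45%, Se = 88%, Sp = 97%.
n = 1000
exposed = round(n * 45 / 100)        # 450
not_exposed = n - exposed            # 550
tp = round(88 * exposed / 100)       # 396 exposed dogs that test positive
tn = round(97 * not_exposed / 100)   # 534 unexposed dogs that test negative
fp = not_exposed - tn                # 16 false positives
fn = exposed - tp                    # 54 false negatives

ppv = tp / (tp + fp)                 # 396/412 ~ 96.1%
npv = tn / (tn + fn)                 # 534/588 ~ 90.8%
apparent_prev = (tp + fp) / n        # 412/1000 = 41.2%
true_prev = exposed / n              # 450/1000 = 45%
```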
45
Q

Lyme Disease SNAP test Example for clinicians with indoor dogs

A
  1. 1000 indoor dogs, less than 1% exposed
    = 10 dogs exposed, 990 not
  2. Sensitivity = 88%, specificity = 97%
    = 9 exposed dogs test positive, 960 unexposed dogs test negative
  3. Determine positive and negative predictive values
  4. Determine apparent and true prevalence
    **in rare disease, the apparent prevalence will overestimate the true prevalence of disease
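
The indoor-dog exercise works out like this (a sketch assuming exactly 1% exposure; it also demonstrates the rare-disease effect stated on the card):

```python
# Same arithmetic for the indoor-dog example, assuming exactly 1% exposure:
# n = 1000, Se = 88%, Sp = 97%.
n = 1000
exposed = 10                         # 1% of 1000
not_exposed = n - exposed            # 990
tp = round(88 * exposed / 100)       # 9 (8.8 rounded, as on the card)
tn = round(97 * not_exposed / 100)   # 960 (960.3 rounded)
fp = not_exposed - tn                # 30 false positives
fn = exposed - tp                    # 1 false negative

ppv = tp / (tp + fp)                 # 9/39 ~ 23%: PPV collapses at low prevalence
npv = tn / (tn + fn)                 # 960/961 ~ 99.9%
apparent_prev = (tp + fp) / n        # 3.9%, overestimating the true 1%
```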
46
Q

Rare disease apparent and true prevalence

A

**in rare disease, the apparent prevalence will overestimate the true prevalence of disease

47
Q

Prevalence effects on NPV and PPV

A

As prevalence drops, NPV increases to high levels and PPV falls dramatically

48
Q

Best tests to rule out disease

A

Negative test with high sensitivity and NPV

49
Q

Best tests to confirm/rule in disease

A

Positive test with high specificity and PPV

50
Q

When are diagnostic tests the best?

A

When pretest probability of disease is near 50% and predictive values are maximized
-between 40-60%

51
Q

How to optimize predictive values?

A
  1. Use in a situation where pre-test probability is around 50%
  2. Use one test and then apply another, more specific test to the positive animals
  3. Use two tests concurrently
52
Q

Parallel testing

A

2 or more different tests are performed and interpreted simultaneously
-animal is positive if it is positive on any one of the tests

*better for negative test results
*increased sensitivity and NPV

53
Q

Serial testing

A

Tests conducted sequentially based on results of previous test
-use one test, then do another, more specific test on those that are positive
-animal only positive if positive on all tests

*max specificity and improves PPV
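
Assuming the two tests are conditionally independent (an idealization not stated on the cards; correlated tests gain less), the combined sensitivity and specificity for the two strategies can be sketched as:

```python
# Combined test characteristics for two conditionally independent tests.
# Parallel: positive on ANY test. Serial: positive only if positive on ALL.

def parallel(se1, sp1, se2, sp2):
    se = 1 - (1 - se1) * (1 - se2)   # misses only if both tests miss
    sp = sp1 * sp2                   # must be negative on both tests
    return se, sp

def serial(se1, sp1, se2, sp2):
    se = se1 * se2                   # must be detected by both tests
    sp = 1 - (1 - sp1) * (1 - sp2)  # false positive only if both tests err
    return se, sp

se_par, sp_par = parallel(0.9, 0.8, 0.9, 0.8)  # Se rises to 0.99, Sp falls to 0.64
se_ser, sp_ser = serial(0.9, 0.8, 0.9, 0.8)    # Se falls to 0.81, Sp rises to 0.96
```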

54
Q

Repeat Testing

A

Negative re-testing
-negative test animals are retested with the same test at regular intervals

*used to eradicate disease
*improves sensitivity
eg. Johne’s disease, trichomoniasis

55
Q

Ruling out disease

A

-use test with high sensitivity and high negative predictive value
-works best when pre-test probability of disease is low

SnNOUT

56
Q

Ruling in disease

A

-use high specificity and high PPV
-works best with pre test probability of disease is high

SpPIN

57
Q

Cost of false negative

A

-high consequence when missing certain diseases eg. Foot and Mouth disease

-need highly sensitive tests, even at the cost of specificity
-avoid false negatives at all costs

-use multiple tests in parallel

58
Q

Cost of a false positive test

A

-high treatment costs and treatments that might be dangerous, and euthanasia of valuable animal

-use highly specific tests
-use multiple tests interpreted in series

59
Q

Main things that happen with parallel or negative re testing

A

-FN decreased
-sensitivity increases
-NPV increases

60
Q

Main things that happen with serial testing

A

-False positives decreased
-specificity increased
-PPV increased

61
Q

Cut points

A

-point between normal and diseased animals
-can be adjusted to improve sensitivity or specificity

62
Q

Increased vs decreased cut points

A

Increased: increase specificity and decreased sensitivity

Decreased: increased sensitivity, decreased specificity

63
Q

Selection of low cut point

A

-gives good sensitivity
-use when false negatives are not acceptable

-consequences of false positives are not severe
-disease can be treated but untreated cases are fatal

64
Q

Selection of high cut points

A

-gives good specificity
-false negative consequences are not severe
-disease is severe but confirmation has little impact in terms of therapy or prevention

65
Q

Cut off fuzzy zones

A

animals falling within the intermediate zone
-need to be re-tested after a certain time period

66
Q

Lead toxicity in cattle

A

-lead accumulates in bone and is transferred across placental barrier
-excreted in urine, bile, feces, milk

-neuro signs: blindness, clamping jaws, aggression, head pressing, tonic-clonic convulsions, encephalopathy

67
Q

Lead toxicity cut off point and uncertain zone

A

Background: <0.10 ppm

High (uncertain zone): 0.10-0.35 ppm

Toxic: >0.35 ppm
*animals can hide in this zone because they can be toxic and have no clinical signs

68
Q

Receiver Operator Characteristic (ROC) Curves

A

-Graphs true positive rate on vertical axis (sensitivity)
-False positive rate (1-specificity) on horizontal axis

**point closest to the top left corner will maximize sensitivity and specificity
BUT remember to consider the costs of false positives and false negatives
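
One common way to formalize "closest to the top-left corner" is Youden's J (J = Se + Sp - 1), a closely related criterion; the cut-offs and values below are hypothetical, for illustration only:

```python
# Hypothetical cut-offs with their (Se, Sp); pick the one maximizing
# Youden's J = Se + Sp - 1, i.e. the best combined true-positive rate
# and (1 - false-positive rate).
candidates = [
    (0.10, 0.98, 0.60),   # low cut-off: sensitive but unspecific
    (0.20, 0.90, 0.85),
    (0.35, 0.70, 0.97),   # high cut-off: specific but insensitive
]

best = max(candidates, key=lambda c: c[1] + c[2] - 1)
print(best[0])  # prints 0.2, the cut-off with the highest Se + Sp here
```

As the card warns, if false negatives are costlier than false positives (or vice versa), the costs should be weighted into the choice rather than using J alone.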

69
Q

Mass screening

A

-sampling volunteers or a sample of the population to detect disease

eg. Brucellosis testing
eg. Bovine TB testing

70
Q

Case finding

A

Seeking an early diagnosis when a client brings an animal to the vet for unrelated reasons

eg. heartworm testing
eg. meat inspection

71
Q

Suitable tests for screening tests

A

-sensitivity is hard to estimate so specificity is most important

-PPV is only measuring diagnostic test performance, not efficacy of the screening

72
Q

Evaluating a screening program

A

Is the early detection test worth it?
-increased QOL, treatment costs decreased, etc.

-Use randomized clinical trials

73
Q

Bias with diagnostic screening

A

Early diagnosis will almost always improve survival even if therapy applied is useless
BUT there are many biases that can occur to make a test appear better

  1. Volunteer bias
  2. Zero time shift or lead time bias
  3. Length time bias

74
Q

Volunteer effect

A

-clients that bring animals for screening tests are not the same as ones that don’t
-these animals coming in will likely be the ones with better management and higher health anyway

75
Q

Zero Time shift or lead time bias

A

Comparing survival times after early diagnosis to survival times after conventional diagnosis
-the zero point for survival time is the time of diagnosis, so if early diagnosis happens before the conventional time, the extra lead time is not taken into account

76
Q

Length time bias

A

Diseases with a long preclinical phase tend to have a longer clinical phase,
vs. diseases with a short preclinical phase, which have a short clinical phase

So screening is most likely to find the diseases that are less aggressive and have longer clinical phases

77
Q

Early diagnosis hazards

A

-marketing our treatment to clients; need to ensure efficacy

-False positive risk- especially if treatment debilitating

-Labeling is important

78
Q

Diagnostic panels

A
  • panel of diagnostic tests run on healthy animals
    -each test has a sensitivity and specificity, so there will be false positives in healthy animals due to chance alone

Ex. probability that two tests are both normal = specificity^2, so probability of at least one false positive = 1 - specificity^2
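
The card's panel arithmetic as a small function (treating the tests as independent, which the specificity^n formula assumes):

```python
# If each of n independent tests has the given specificity, then in a
# healthy animal P(all tests normal) = specificity**n, so
# P(at least one false positive) = 1 - specificity**n.

def p_any_false_positive(specificity, n_tests):
    return 1 - specificity ** n_tests

print(round(p_any_false_positive(0.95, 2), 4))   # prints 0.0975
print(round(p_any_false_positive(0.95, 12), 2))  # prints 0.46
```

With 95% specificity per test, a 12-test panel flags nearly half of all healthy animals, which is why panels on healthy animals generate false positives by chance.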

79
Q

Herd testing

A

-used to determine prevalence of infected herds and certify herds as disease negative for eradication or trade

80
Q

Differences between herd and individual tests

A

Uncertainty in individual sensitivity and specificity is amplified in herd sensitivity and specificity
*the impact of false results is greater

81
Q

Herd positives

A

A positive test does not equal positive diagnosis
because false positives can occur in herds
-if disease prevalence drops in herd, PPV gets worse

**Need high specificity tests when prevalence low

82
Q

Ways to manipulate tests

A
  1. Increase number of animals tested
    -increases herd sensitivity
    -decreases herd specificity
    -decreases herd PPV, increases herd NPV
  2. Increase required reactors to be considered positive
    -decreases herd sensitivity
    -increases herd specificity
    -increases herd PPV, decreases herd NPV
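
A minimal sketch of how manipulation 1 plays out, assuming independent animals and a herd called positive at one or more reactors (the function name and example numbers are illustrative, not from the lecture):

```python
# Herd-level test characteristics, assuming independent animals and a
# cut-off of >= 1 reactor to call the herd positive.

def herd_se_sp(se, sp, within_herd_prev, n_tested):
    # probability that any single animal in an infected herd tests positive:
    p_pos = within_herd_prev * se + (1 - within_herd_prev) * (1 - sp)
    herd_se = 1 - (1 - p_pos) ** n_tested  # >= 1 reactor in an infected herd
    herd_sp = sp ** n_tested               # every animal negative in a clean herd
    return herd_se, herd_sp

# Testing more animals raises herd sensitivity but erodes herd specificity:
hse10, hsp10 = herd_se_sp(0.9, 0.98, 0.2, 10)
hse30, hsp30 = herd_se_sp(0.9, 0.98, 0.2, 30)
# hse30 > hse10 while hsp30 < hsp10
```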
83
Q

Pooled Samples

A

Ideal in situations where within herd prevalence is low

Pros: decreases lab cost, increases herd sensitivity due to increased n

Cons: risk of decreased sensitivity due to dilution; logistical challenges of mixing samples; some PCRs are susceptible to inhibitors (eg. urine), so contamination can ruin all the pooled samples

84
Q

What to do if no gold standard?

A
  1. compare 2 tests agreement
  2. Compare agreement between 2 clinicians
  3. compare agreement within clinicians (same data=same diagnosis)
85
Q

Kappa Statistic

A

The proportion of agreement measured beyond that expected by chance alone

eg. adjust for agreement of flipping a coin and getting agreement 50% of the time
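
The chance adjustment can be sketched from a 2x2 agreement table between two tests or clinicians (made-up counts; `kappa` here is an illustrative helper, not a library function):

```python
# Kappa from a 2x2 agreement table: a = both positive, d = both negative,
# b and c = disagreements. Counts below are made up for illustration.

def kappa(a, b, c, d):
    n = a + b + c + d
    p_observed = (a + d) / n
    # agreement expected by chance alone, from the marginal totals:
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

k = kappa(a=40, b=10, c=5, d=45)
print(round(k, 2))  # prints 0.7: "relatively good" agreement on the next card's scale
```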

86
Q

Interpreting kappa

A

0.61 and above is relatively good agreement
-above 0.81 is almost perfect agreement

87
Q

Index test

A

-other option for no gold standard
-test is compared to a reference standard (sensitivity and specificity are therefore obtained relative to the reference test)