PTMMD Unit 1 Flashcards

1
Q

What are the characteristics of Type 1 system thinking?

A
  • Non-analytical
  • Fast thinking / forward reasoning
    – Information is recognized -> quick to reason
  • Automatic, involuntary
  • Pattern recognition
  • Inductive reasoning
2
Q

What are the characteristics of Type 2 system thinking?

A
  • Analytical
  • Slow thinking / backward reasoning
    – Information is perceived -> analyzed for reason
  • Conscious, effortful
  • Logical
    – If -> Then
  • Deductive reasoning
3
Q

What is the definition of Reliability?

A

The extent to which a test or measurement is free from error. It is the repeatability of the measurement between clinicians, between groups of patients, and over time.
A test is considered reliable if it produces precise, accurate, and reproducible information.

4
Q

What are the 2 types of reliability?

A
  • Inter-rater Reliability: Determines whether two or more examiners can repeat a test consistently
  • Intra-rater Reliability: Determines whether the same single examiner can repeat the test consistently
5
Q

In the intraclass correlation coefficient benchmark values (Table 4-24, pg 198), what is the description when the value is less than (<) 0.75?

A

Poor to moderate agreement

6
Q

In the intraclass correlation coefficient benchmark values (Table 4-24, pg 198), what is the description when the value is greater than (>) 0.75?

A

Good agreement

7
Q

In the intraclass correlation coefficient benchmark values (Table 4-24, pg 198), what is the description when the value is greater than (>) 0.90?

A

Reasonable agreement for clinical measurements

8
Q

What are the 2 statistical coefficients most commonly used to characterize the reliability of the test and measures?

A
  • Intraclass correlation coefficient (ICC)
  • Kappa (k) statistic
9
Q

What is the Intraclass correlation coefficient?

A

A reliability coefficient calculated with variance estimates obtained through analysis of variance

(Use table 4-24 in pg 198)

The advantage of the ICC over correlation coefficients is that it does not require the same number of raters per subject, and it can be used with two or more raters or ratings
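
As a concrete illustration (not from the text), a one-way random-effects ICC can be computed from the mean squares of a one-way ANOVA; the patient scores below are hypothetical:

```python
def icc_one_way(scores):
    """One-way random-effects ICC(1,1) for an n-subjects x k-raters table,
    built from the between- and within-subject ANOVA mean squares."""
    n, k = len(scores), len(scores[0])
    grand_mean = sum(sum(row) for row in scores) / (n * k)
    subject_means = [sum(row) / k for row in scores]
    ms_between = k * sum((m - grand_mean) ** 2 for m in subject_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(scores, subject_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical example: 3 patients, each measured by 2 raters
icc = icc_one_way([[10, 12], [20, 22], [30, 28]])
print(round(icc, 2))  # 0.98
```

With very consistent raters, the ICC lands above 0.90, i.e. reasonable agreement for clinical measurements on the Table 4-24 benchmarks.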

10
Q

What is the Kappa (k) statistic?

A

It is an index of inter-rater agreement: it represents the extent to which the study’s data are correct representations of the variables measured. With nominal data, the k statistic is applied after the percentage agreement between testers has been determined.

The k statistic was developed to account for the possibility that raters guess some variables due to uncertainty. Like most correlation statistics, the kappa can range from -1 to +1
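
A minimal sketch (assumed, not from the text) of the kappa calculation for two raters and a binary finding; the agreement counts are hypothetical:

```python
def cohen_kappa(a, b, c, d):
    """Cohen's kappa for two raters on a binary rating.
    a: both raters positive, b: rater 1 positive / rater 2 negative,
    c: rater 1 negative / rater 2 positive, d: both raters negative."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # Agreement expected by chance, from each rater's marginal totals
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical example: 100 patients rated positive/negative by two clinicians
k = cohen_kappa(a=40, b=20, c=10, d=30)
print(round(k, 2))  # 0.4
```

Here raw agreement is 70%, but after removing the 50% expected by chance, kappa is 0.40 — only "fair" on the benchmark scale in the following cards.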

11
Q

With Kappa (k) Coefficient, what would be the strength of agreement with a Kappa statistic of <0.00?

A

Poor

12
Q

With Kappa (k) Coefficient, what would be the strength of agreement with a Kappa statistic of
0.00-0.20?

A

Slight

13
Q

With Kappa (k) Coefficient, what would be the strength of agreement with a Kappa statistic of
0.21-0.40?

A

Fair

14
Q

With Kappa (k) Coefficient, what would be the strength of agreement with a Kappa statistic of
0.41-0.60?

A

Moderate

15
Q

With Kappa (k) Coefficient, what would be the strength of agreement with a Kappa statistic of
0.61-0.80?

A

Substantial

16
Q

With Kappa (k) Coefficient, what would be the strength of agreement with a Kappa statistic of
0.81-1.00?

A

Almost perfect

17
Q

What is Standard Error of Measurement?

A

SEM estimates how repeated measures of an individual on the same instrument tend to be distributed around their “true” score, and it indicates how much change there might be when the test is repeated.
The SEM can be used to differentiate between real change and random measurement error.

The smaller the SEM, the more precise the measurement capacity of the instrument
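
The usual formula behind this card is SEM = SD × √(1 − r), where r is a reliability coefficient such as the ICC. A small sketch with hypothetical numbers:

```python
import math

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - r), where r is a reliability
    coefficient such as the ICC."""
    return sd * math.sqrt(1 - reliability)

# Hypothetical example: range-of-motion scores with SD = 10 degrees, ICC = 0.91
sem = standard_error_of_measurement(sd=10, reliability=0.91)
print(round(sem, 2))  # 3.0 (degrees)
```

A retest change smaller than the SEM is plausibly random measurement error rather than real change.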

18
Q

What is Validity?

A

The degree to which the test measures what it purports to measure and how well it correctly classifies individuals with or without a particular disease

19
Q

What is Construct Validity?
Convergent Vs. Divergent validity?

A

This type of validity represents the level of agreement between a measurement and the idea it purports to measure. To establish adequate construct validity, the measurement should provide similar results to other measurements intended to measure the same variable.
- Convergent validity: the agreement between two different measurements of the same variable.
- Divergent validity: the extent to which two measurements of theoretically separate ideas are unrelated.

20
Q

What is Content Validity?

A

This type of validity is the extent to which a measurement reflects the idea (content) it purports to measure.
Measurements with high content validity demonstrate a more accurate representation of the variable being measured.

21
Q

What is Criterion-related Validity?
Concurrent vs. Predictive Validity

A

This type of validity estimates the extent to which the test can substitute for another test, which may be the gold standard test or a test of a related variable.
This is commonly assessed as concurrent validity and predictive validity.
- Concurrent Validity: the measure of association between two measurements taken simultaneously
- Predictive Validity: an estimate of the ability of a measurement to forecast a future measurement or outcome.

22
Q

What is Face Validity?

A

This type of validity is the informal assumption, based on appearance, that a measurement reflects the variable it is intended to estimate.

23
Q

What is Sensitivity?
- What is Positive Predictive Value (PPV)?

A

Sensitivity represents the proportion of patients with a disorder who test positive. (A test that can correctly identify every person who has a disorder has a sensitivity of 1.0)
- SnNout, when a symptom or sign’s sensitivity is high, a Negative response RULES OUT the target disorder.

  • The positive predictive value is the proportion of patients with positive test results who are correctly diagnosed.
24
Q

What is Specificity?
- What is Negative Predictive Value (NPV)?

A

Specificity is the proportion of the study population without the disorder who test negative. (A test that can correctly identify every person who does not have the target disorder has a specificity of 1.0)
- SpPin, when specificity is extremely high, a Positive test RULES IN the target disorder

  • The negative predictive value is the proportion of patients with negative test results who are correctly diagnosed.
25
Q

What is Likelihood Ratio (LR)?

A

The index measurement that combines sensitivity and specificity values into one number and can gauge the performance of a diagnostic test, as it indicates how much a given diagnostic test result will lower or raise the pre-test probability of the target disorder.
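
The standard formulas are LR+ = sensitivity / (1 − specificity) and LR− = (1 − sensitivity) / specificity. A short sketch with hypothetical Sn and Sp values:

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = Sn / (1 - Sp); LR- = (1 - Sn) / Sp."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Hypothetical test with Sn = 0.90 and Sp = 0.80
lr_pos, lr_neg = likelihood_ratios(0.90, 0.80)
print(round(lr_pos, 2), round(lr_neg, 3))  # 4.5 0.125
```

On the benchmarks in the later cards, an LR+ of 4.5 shifts post-test probability by a small degree, while an LR− of 0.125 shifts it by a moderate degree.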

26
Q

What are the four measures that contribute to sensitivity and specificity?

A
  • True Positive: The test indicates that the patient has the disease or the dysfunction, and the Gold Standard test confirms this
  • False Positive: The clinical test indicates that the disease or the dysfunction is present, but the Gold Standard test does not confirm this
  • False Negative: The clinical test indicates the disorder’s absence, but the Gold Standard test shows that the disease or dysfunction is present
  • True Negative: The clinical and the Gold Standard tests agree that the disease or dysfunction is absent
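
The four counts map directly onto the sensitivity, specificity, PPV, and NPV definitions in the earlier cards. A sketch with a hypothetical 2x2 table:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from the 2x2 counts
    of a clinical test against the gold standard."""
    return {
        "sensitivity": tp / (tp + fn),  # diseased patients who test positive
        "specificity": tn / (tn + fp),  # non-diseased patients who test negative
        "ppv": tp / (tp + fp),          # positive results that are correct
        "npv": tn / (tn + fn),          # negative results that are correct
    }

# Hypothetical example: 200 patients, 100 with the disorder per the gold standard
stats = diagnostic_accuracy(tp=90, fp=20, fn=10, tn=80)
print(stats["sensitivity"], stats["specificity"])  # 0.9 0.8
```
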
27
Q

Why are Positive and Negative Likelihood Ratios (LR) important?

A

Both positive and negative LRs provide more useful and accurate information to the clinician than the negative predictive value (NPV) and positive predictive value (PPV) because LRs are calculated independently of the prevalence of the patient's condition in the sample population, making LRs important statistics for summarizing diagnostic accuracy

28
Q

What does it mean for a test to have a strong LR+ value?

A

This moves the clinician closer to a diagnosis

29
Q

What does it mean for a test to have a strong LR- value?

A

This moves the clinician further away from a diagnosis

30
Q

What is the explanation if you have a Positive Likelihood Ratio of 2-5 and a Negative Likelihood Ratio of 0.2-0.5?

A

This alters post-test probability of a diagnosis by a small degree

31
Q

What is the explanation if you have a Positive Likelihood Ratio of 5-10 and a Negative Likelihood Ratio of 0.1-0.2?

A

This alters post-test probability of a diagnosis by a moderate degree

32
Q

What is the explanation if you have a Positive Likelihood Ratio of more than 10 and a Negative Likelihood Ratio of less than 0.1?

A

This alters post-test probability of a diagnosis by a large degree
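
The mechanics behind these benchmarks (not spelled out in the cards): pre-test probability is converted to odds, multiplied by the LR, and converted back to a probability. A sketch with hypothetical numbers:

```python
def post_test_probability(pre_test_prob, lr):
    """Convert probability to odds, multiply by the likelihood ratio,
    and convert the post-test odds back to a probability."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical 30% pre-test probability
print(round(post_test_probability(0.30, 10), 2))  # 0.81 (LR+ > 10: large shift)
print(round(post_test_probability(0.30, 2), 2))   # 0.46 (LR+ of 2: small shift)
```
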

33
Q

What is the Diagnostic Odds Ratio?

A

The ratio of the odds of a positive test among those with the disease to the odds of a positive test among those without the disease

  • Positive (+) test: if the disease is present, a (true +ve); if absent, b (false +ve)
  • Negative (-) test: if the disease is present, c (false -ve); if absent, d (true -ve)
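
Using the a/b/c/d cells above, DOR = (a × d) / (b × c), which also equals LR+ / LR−. A sketch with hypothetical counts:

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP/FN) / (FP/TN) = (TP * TN) / (FP * FN):
    the odds of a positive test in the diseased divided by
    the odds of a positive test in the non-diseased."""
    return (tp * tn) / (fp * fn)

# Hypothetical 2x2 counts
print(diagnostic_odds_ratio(tp=90, fp=20, fn=10, tn=80))  # 36.0
```
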
34
Q

What is the Quality Assessment of studies of diagnostic accuracy (QUADAS)?
What would a score of 10 or greater of “yes” indicate?
What would a score of less than 10 of “yes” indicate?

A

An evidence-based quality assessment tool currently recommended for use in systematic reviews of Diagnostic accuracy studies (DASs).

  • It’s a list of 14 questions, each answered “yes”, “no”, or “unclear”
  • A score of 10 or more “yes” answers indicates a higher-quality study
  • A score of fewer than 10 “yes” answers suggests a poorly designed study
35
Q

What are Diagnostic accuracy studies (DASs)?

A

A DAS aims to determine how good a particular test is at detecting the target condition. It allows the calculation of various statistics that indicate “test performance”: how good the index test is at detecting the target condition

  • These statistics include sensitivity, specificity, PPV, NPV, positive and negative LRs and diagnostic odds ratio