PTMMD Unit 1 Flashcards
What are the characteristics of Type 1 system thinking?
- Non-analytical
- Fast thinking / forward reasoning
- Information is recognized -> quick to reason
- Automatic, involuntary
- Pattern recognition
- Inductive reasoning
What are the characteristics of Type 2 system thinking?
- Analytical
- Slow thinking / backward reasoning
- Information is perceived -> analyzed to reason
- Conscious, effortful
- Logical
- If -> Then
- Deductive reasoning
What is the definition of Reliability?
The extent to which a test or measurement is free from error. It is the repeatability of the measurement between clinicians, between groups of patients, and over time.
A test is considered reliable if it produces precise, accurate, and reproducible information.
What are the 2 types of reliability?
- Intra-rater Reliability: Determines whether the same single examiner can repeat the test consistently
- Inter-rater Reliability: Determines whether two or more examiners can repeat a test consistently
In the intraclass correlation coefficient benchmark values (Table 4-24, pg 198), what is the description when the value is less than (<) 0.75?
Poor to moderate agreement
In the intraclass correlation coefficient benchmark values (Table 4-24, pg 198), what is the description when the value is greater than (>) 0.75?
Good agreement
In the intraclass correlation coefficient benchmark values (Table 4-24, pg 198), what is the description when the value is greater than (>) 0.90?
Reasonable agreement for clinical measurements
What are the 2 statistical coefficients most commonly used to characterize the reliability of tests and measures?
- Intraclass correlation coefficient (ICC)
- Kappa (k) statistic
What is the Intraclass correlation coefficient?
A reliability coefficient calculated with variance estimates obtained through analysis of variance
(Use Table 4-24 on pg 198)
The advantage of the ICC over correlation coefficients is that it does not require the same number of raters per subject, and it can be used with two or more raters or ratings.
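Since the card above describes the ICC as a reliability coefficient built from ANOVA variance estimates, here is a minimal Python sketch of one common form, ICC(2,1) (two-way random effects, single measure), assuming an n-subjects x k-raters matrix of continuous measurements. The ratings data and the function name are hypothetical, chosen only to illustrate the calculation.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, single measure, from an n x k ratings matrix."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()

    # Sums of squares from a two-way ANOVA (subjects x raters)
    ss_rows = k * ((ratings.mean(axis=1) - grand_mean) ** 2).sum()   # between subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand_mean) ** 2).sum()   # between raters
    ss_total = ((ratings - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)                # mean square, subjects
    msc = ss_cols / (k - 1)                # mean square, raters
    mse = ss_error / ((n - 1) * (k - 1))   # residual mean square

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: 5 patients measured by 2 examiners (e.g., knee ROM in degrees)
ratings = np.array([
    [120, 118],
    [135, 133],
    [110, 114],
    [142, 140],
    [128, 127],
])
print(round(icc_2_1(ratings), 2))  # ~0.98 here; interpret against the Table 4-24 benchmarks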
What is the Kappa (k) statistic?
It is an index of inter-rater agreement; it represents the extent to which the study's data are correct representations of the variables measured. With nominal data, the k statistic is applied after the percentage agreement between testers has been determined.
The k statistic was developed to account for the possibility that raters guess some variables due to uncertainty. Like most correlation statistics, the kappa can range from -1 to +1.
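Because the card describes kappa as percentage agreement corrected for chance, a minimal sketch follows, assuming two raters assign nominal categories to the same set of patients and using the standard formula k = (p_o - p_e) / (1 - p_e). The patient gradings and the helper name are hypothetical, for illustration only.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning nominal categories to the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)

    # Observed agreement: proportion of cases where both raters chose the same category
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement: product of each rater's marginal proportions, summed over categories
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in set(rater_a) | set(rater_b))

    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two examiners grading 10 patients as "pos"/"neg"
a = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "neg"]
b = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "pos", "neg"]
print(round(cohens_kappa(a, b), 2))  # 0.6 here; interpret with the benchmark ranges below
```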
With Kappa (k) Coefficient, what would be the strength of agreement with a Kappa statistic of <0.00?
Poor
With Kappa (k) Coefficient, what would be the strength of agreement with a Kappa statistic of 0.00-0.20?
Slight
With Kappa (k) Coefficient, what would be the strength of agreement with a Kappa statistic of 0.21-0.40?
Fair
With Kappa (k) Coefficient, what would be the strength of agreement with a Kappa statistic of 0.41-0.60?
Moderate
With Kappa (k) Coefficient, what would be the strength of agreement with a Kappa statistic of 0.61-0.80?
Substantial