Week 12: Reliability And Validity Flashcards
What are the 3 psychometric properties?
Reliability: reproducibility
Validity: measures what it’s supposed to measure
Responsiveness: detect change
Reliability definition
The amount of error, both random and systematic, inherent in a measurement
- AKA: reproducibility, stability, agreement, precision, association, sensitivity, objectivity
- consistency of measurement
- a reliable measure can be expected to repeat the same score on 2 different occasions
Measurement error
E.g., repeated readings of temperature or body weight on a scale rarely match exactly
Systematic error
-bias
-fairly consistent error that results in an overestimate or underestimate
-predictable
-occurs in one direction
E.g., a tape measure that begins at 2 instead of zero: all the readings will be 2 in too long
Primarily a concern of validity, not reliability (the flawed tape measure can still be very consistent); see the sketch after the random error card below
Random error
-errors due to chance
-unpredictable
-the primary concern of reliability
E.g.: while measuring the patient, the patient moves slightly
-the therapist incorrectly reads the tape measure
-the tape measure may have some slack
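A minimal Python sketch (the numbers are simulated, not from the course material) contrasting the two error types: a tape measure that begins at 2 in instead of 0 adds a constant systematic offset to every reading, while patient movement, slack, and misreading add random scatter around that shifted value.

```python
import random
from statistics import mean, stdev

random.seed(1)

TRUE_LENGTH = 30.0        # quantity we are trying to measure (inches)
SYSTEMATIC_OFFSET = 2.0   # tape measure that begins at 2" instead of 0" -> bias
RANDOM_SD = 0.5           # chance variation: patient moves, slack, misreading

# 20 repeated measurements containing both error types
readings = [TRUE_LENGTH + SYSTEMATIC_OFFSET + random.gauss(0, RANDOM_SD)
            for _ in range(20)]

print(f"true value   : {TRUE_LENGTH:.2f}")
print(f"mean reading : {mean(readings):.2f}")   # ~32: shifted by the systematic offset
print(f"spread (SD)  : {stdev(readings):.2f}")  # ~0.5: the random error
```

The systematic offset shifts the mean of the readings (a validity problem), while the random error shows up as spread across repeated readings (a reliability problem).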
Sources of error in test performance
Clinician-related factors:
-random error (poor technique or interpretation)
-directional bias
Patient-related factors:
-natural fluctuations
-misunderstanding of procedures or miscommunication
Reliability types
1. Rater reliability
A) Intra-rater: degree of random variation within one rater, i.e. the variation between the repeated measures obtained by one person
B) Inter-rater: when two or more examiners measure the same thing; the variation is between the people obtaining these measures
2. Test-retest: repeated measurements that aren't influenced by the rater
3. Equivalence
4. Internal consistency
Intra-examiner reliability
- scores should match when the same examiner tests the same subjects on two or more occasions
- intra-examiner reliability is the degree to which the examiner agrees with himself or herself
Inter-examiner reliability
- when 2 or more examiners test the same subjects for the same characteristic using the same measure, scores should match
- inter-examiner reliability is the degree to which their findings agree
Quantifying inter-examiner and intra-examiner reliability
Correlation
There should be a high degree of correlation between scores if 2 examiners test the same group of subjects or 1 examiner tests the same group on 2 occasions.
- however, it is possible to have good correlation and poor agreement at the same time
- occurs when one examiner consistently scores higher or lower than the other examiner (see the sketch below)
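A minimal sketch with hypothetical range-of-motion scores illustrating this point: if examiner B consistently scores 8 degrees higher than examiner A, the correlation between their scores is perfect even though the two examiners never actually agree.

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical range-of-motion scores (degrees) for the same 6 subjects
examiner_a = [40, 45, 50, 55, 60, 65]
examiner_b = [score + 8 for score in examiner_a]   # B consistently scores 8 deg higher

diffs = [b - a for a, b in zip(examiner_a, examiner_b)]
print(f"correlation     : {pearson_r(examiner_a, examiner_b):.2f}")  # 1.00: "good" correlation
print(f"mean difference : {mean(diffs):.1f} deg")                    # 8.0: poor agreement
```

This is why agreement-based statistics are needed alongside correlation when judging rater reliability: correlation only reflects whether the scores rise and fall together, not whether they match.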
Test-retest reliability
E.g., if an examiner gave a subject a sheet to fill out on their level of disability, then gave them the same sheet 5 minutes later, we wouldn't expect a difference between the two scores. If there was a difference it would be error, but because the rater isn't involved in this error, it reflects test-retest reliability.
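A minimal sketch with hypothetical disability-questionnaire scores collected twice, 5 minutes apart: any discrepancy between the two administrations is measurement error, but since no rater produced the scores it counts against test-retest reliability rather than rater reliability.

```python
from statistics import mean

# Hypothetical disability-questionnaire scores (0-100) for 5 patients,
# self-reported twice, 5 minutes apart -- no rater involved.
first_admin  = [22, 35, 48, 60, 74]
second_admin = [24, 34, 49, 58, 75]

diffs = [b - a for a, b in zip(first_admin, second_admin)]
print(f"score differences    : {diffs}")
print(f"mean absolute change : {mean(abs(d) for d in diffs):.1f}")
# Any difference over such a short interval is measurement error, but because no
# examiner scored the sheets, it reflects test-retest (not rater) reliability.
```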