Test #2 Flashcards
PICO
Patient/problem/population
Intervention
Comparison
Outcome
types of reliability
Test-retest reliability
Internal consistency
Intra-rater reliability
Inter-rater reliability
types of validity
face validity
content validity
construct validity
criterion-related validity
responsiveness
test-retest reliability
measures the consistency of the testing instrument over time
Considerations:
test-retest intervals
Carryover and testing effects
Internal consistency
measures the extent to which the items of an instrument all measure aspects of the same characteristic; used mainly for surveys
methods:
* Correlation among all items
* test-retest
* Split half reliability
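Internal consistency is commonly summarized with Cronbach's alpha, which is closely related to the correlation-among-items and split-half methods above. A minimal sketch with made-up Likert-scale survey responses:

```python
# Hypothetical 4-item survey, one row per respondent (made-up Likert scores)
scores = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]

k = len(scores[0])  # number of items

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

# Variance of each item across respondents, and of the total score
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
total_var = variance([sum(row) for row in scores])

# Cronbach's alpha: high when items vary together (consistent instrument)
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))  # → 0.936
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency for a survey scale.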
Intra-rater reliability
measures the stability of measures recorded by one individual across 2 or more trials
Considerations:
* carry over effects, practice effect, intervals between trials, rater bias (e.g. memory)
Inter-rater reliability
Variation between 2 or more raters measuring the same group of subjects
Considerations
* Intra-rater reliability for each rater
* Simultaneous scoring often not possible
* influence of multiple trials on the same subject
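For categorical ratings, inter-rater reliability is often reported as Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with two hypothetical raters classifying the same 10 subjects:

```python
# Made-up yes/no classifications of 10 subjects by two raters
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]

n = len(rater_a)
# Observed agreement: fraction of subjects rated identically
p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: product of each rater's category proportions, summed
categories = set(rater_a) | set(rater_b)
p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)

# Cohen's kappa: 1 = perfect agreement, 0 = no better than chance
kappa = (p_obs - p_exp) / (1 - p_exp)
print(round(kappa, 3))  # → 0.583
```

For continuous measures, the ICC (see the guidelines card) is the usual inter-rater statistic instead.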
Face validity
Simple and subjective
Content validity
Does the test or measure measure what it is supposed to measure?
Often done by a group of experts
Based on subjective opinion
Construct validity
The extent to which the instrument actually measures the abstract construct it is intended to measure
It is the 1st objective validity type
Criterion related validity
Compare “new” measure to relevant criterion variable
Responsiveness
“Dynamic” quality of measure; the ability of an instrument to accurately detect change when it has occurred
ICC general guidelines
> 0.90 = excellent reliability
0.75–0.90 = good reliability
< 0.75 = poor to moderate reliability
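The ICC comes from a two-way ANOVA decomposition of the ratings. A minimal sketch computing ICC(2,1) (two-way random effects, single measure, absolute agreement) on made-up ratings from 3 raters scoring 5 subjects:

```python
# Hypothetical ratings: rows = 5 subjects, columns = 3 raters (made-up data)
ratings = [
    [9, 10, 8],
    [6, 6, 5],
    [8, 8, 8],
    [4, 5, 4],
    [7, 8, 7],
]

n = len(ratings)      # subjects
k = len(ratings[0])   # raters
grand = sum(sum(row) for row in ratings) / (n * k)
row_means = [sum(row) / k for row in ratings]
col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

# Mean squares from the two-way ANOVA decomposition
msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # subjects
msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)  # raters
sse = sum(
    (ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
    for i in range(n) for j in range(k)
)
mse = sse / ((n - 1) * (k - 1))  # residual

# ICC(2,1): between-subject variance relative to total variance,
# treating raters as a random sample of possible raters
icc = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
print(round(icc, 3))  # → 0.895
```

Here the result lands just under 0.90, i.e. in the "good" band of the guidelines above.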