Lecture 10B: Assessment of Measurement Reliability Flashcards
What is measurement reliability?
A reliable instrument performs with predictable consistency under set conditions.
What are the four types of measurement reliability?
- Test-retest
- Intra-rater
- Inter-rater
- Internal consistency
Define test-retest reliability.
Examine the same instrument in the same group of individuals at two (or more) different times.
What does intra-rater reliability concern?
The stability of measurements by one individual across one or more trials.
What is inter-rater reliability?
The consistency of findings between raters.
What two properties indicate high reliability?
- A high degree of association (correlation) between scores
- Close agreement of the scores themselves
What is the intraclass correlation coefficient (ICC)?
A statistic used to assess reliability, considering correlation and agreement of scores.
What is the magnitude range of ICC?
0.00 to 1.00; values closer to 1.00 indicate higher reliability.
What is considered acceptable reliability for ICC?
≥0.75
What does Model 1 of ICC represent?
Each subject is assessed by a different set of randomly selected raters.
What is Model 2 of ICC also known as?
Two-way random effects model: each subject is assessed by each rater, and the raters are treated as a random sample, so results can be generalized to other similar raters.
What characterizes Model 3 of ICC?
Two-way mixed effects model: Each subject is assessed by each rater; raters are not representative of a larger population.
What is the difference between Form 1 and Form k in ICC?
Form 1 uses a single measurement; Form k uses the average of several measurements.
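The two-way ICC models above can be computed directly from the ANOVA mean squares. Below is a minimal sketch for Model 2 in Python (the function name `icc_2` and the `form_k` flag are illustrative, not from the lecture):

```python
import numpy as np

def icc_2(scores, form_k=False):
    """ICC Model 2 (two-way random effects) from a subjects x raters array.

    form_k=False gives Form 1 (reliability of a single measurement);
    form_k=True gives Form k (reliability of the mean of the k raters).
    """
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # Mean squares from a two-way ANOVA without replication
    ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # raters
    ss_err = np.sum((x - grand) ** 2) - ms_rows * (n - 1) - ms_cols * (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    if form_k:
        return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

The output can be checked against the worked examples published with the original ICC formulas (Shrout & Fleiss, 1979); Form k is always at least as high as Form 1 because averaging raters smooths out their disagreement.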
What is Cronbach’s alpha used to assess?
Internal consistency.
What does internal consistency measure?
The extent to which the items of a scale measure various aspects of the same attribute (e.g., balance: static, dynamic, sitting, standing, etc.).
What does a Cronbach’s alpha value >0.8 indicate?
Excellent internal consistency.
What does a Cronbach’s alpha value <0.7 indicate?
Poor internal consistency.
What does a Cronbach’s alpha value >0.9 indicate?
Possibly too high: the items may all be measuring exactly the same thing, suggesting redundancy (a single item might suffice).
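Cronbach's alpha is computed as k/(k−1) × (1 − Σ item variances / variance of the total score). A minimal sketch in Python (the function name is illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an n_respondents x k_items score matrix."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = x.sum(axis=1).var(ddof=1)     # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars / total_var)
```

Items that rise and fall together inflate the total-score variance relative to the item variances, which is what pushes alpha toward 1.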
What is percent agreement?
A measure of how often raters agree on scores: the number of agreements divided by the number of possible agreements.
What does the Kappa statistic account for?
Agreement due to chance.
The ratio of observed non-chance agreement to possible non-chance agreement.
What is the interpretation of a Kappa value >0.80?
Excellent agreement.
What is the purpose of reliability analyses for categorical data?
To assess inter-rater reliability.
What is the formula for percent agreement (Po)?
Po = Number of agreements / Number of possible agreements.
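Both agreement statistics are a few lines of plain Python. A sketch (function names are illustrative), with chance agreement Pe estimated from each rater's category proportions:

```python
def percent_agreement(rater1, rater2):
    """Po = number of agreements / number of possible agreements."""
    agreements = sum(a == b for a, b in zip(rater1, rater2))
    return agreements / len(rater1)

def cohens_kappa(rater1, rater2):
    """Kappa = (Po - Pe) / (1 - Pe): agreement beyond what chance predicts."""
    n = len(rater1)
    po = percent_agreement(rater1, rater2)
    # Pe: expected chance agreement, from each rater's marginal proportions
    pe = sum((rater1.count(c) / n) * (rater2.count(c) / n)
             for c in set(rater1) | set(rater2))
    return (po - pe) / (1 - pe)
```

Note that kappa is always lower than Po whenever Pe > 0, which is why a high percent agreement alone can overstate inter-rater reliability.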
What does it suggest if deleting a highly correlated item leaves Cronbach's alpha unchanged or raises it?
The item may be redundant; removing it can maintain or improve internal consistency.
What should be the action if deleting an item dramatically increases Cronbach’s alpha?
Consider removing or modifying that item.
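The "alpha if item deleted" check can be sketched by recomputing alpha with each item dropped in turn (assuming the standard alpha formula; function names are illustrative):

```python
import numpy as np

def cronbach_alpha(x):
    # Standard alpha for an n_respondents x k_items array
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum()
                          / x.sum(axis=1).var(ddof=1))

def alpha_if_item_deleted(items):
    """Recompute alpha with each item removed, one at a time.

    An item whose deletion raises alpha sharply is a candidate for
    removal or modification (likely redundant or off-target).
    """
    x = np.asarray(items, dtype=float)
    return [cronbach_alpha(np.delete(x, i, axis=1)) for i in range(x.shape[1])]
```

Comparing each entry against the full-scale alpha flags the items worth reviewing.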
Summarize reliability.
Reliability is the predictable consistency of a measurement under set conditions. Rater- and time-based reliability (test-retest, intra-rater, inter-rater) is assessed with the ICC (≥0.75 acceptable); internal consistency is assessed with Cronbach's alpha (roughly 0.7–0.9 desirable); categorical agreement between raters is assessed with percent agreement and the kappa statistic, which corrects for chance.
What action is recommended if a scale has an item with very high correlation and affects Cronbach’s alpha?
Remove/modify that item.