Item Analysis and Test Reliability Flashcards
classical test theory
framework used to develop and evaluate tests; assumes an obtained score reflects both true score and random measurement error
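For reference, the classical test theory model decomposes an observed score into true score plus error, and defines reliability as the true score share of observed variance:
X = T + E, \qquad \sigma_X^2 = \sigma_T^2 + \sigma_E^2, \qquad r_{XX} = \frac{\sigma_T^2}{\sigma_X^2}
where X is the observed score, T the true score, and E random measurement error.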
true score variability
result of actual differences among examinees
measurement error
random factors that affect test performance (e.g., distractions, examinee fatigue)
test reliability
consistency of test scores; formally, the proportion of observed score variance attributable to true score variance
test-retest reliability
consistency of scores over time: administer the test, readminister it later, and correlate the two sets of scores
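The correlation used here is typically the Pearson r between the two administrations:
r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}
where x_i and y_i are examinee i's scores on the first and second administrations.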
alternate forms reliability
consistency of scores across different forms of the test
internal consistency reliability
consistency of scores across different test items. Not appropriate for speed tests, whose reliability it tends to overestimate.
coefficient alpha (Cronbach’s alpha)
administer the test to a sample and calculate the average inter-item consistency
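For reference, the standard formula for coefficient alpha on a k-item test:
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right)
where \sigma_i^2 is the variance of item i and \sigma_X^2 is the variance of total test scores.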
Kuder-Richardson 20 (KR-20)
a special case of coefficient alpha used when items are dichotomously scored (correct or incorrect)
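KR-20 is alpha with each item variance replaced by p_i q_i:
KR\text{-}20 = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right)
where p_i is the proportion of examinees answering item i correctly and q_i = 1 - p_i.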
split half reliability
administer the test to a sample, split the test in half, and correlate scores on the two halves (e.g., odd-numbered items vs. even-numbered items). Because each half is shorter than the full test, this underestimates reliability; the Spearman-Brown formula (next card) corrects for this.
Spearman Brown prophecy formula
used to estimate the effect of lengthening or shortening a test on its reliability; commonly applied to correct split-half reliability estimates
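For reference, if test length is multiplied by a factor n, the predicted reliability is:
\rho_{\text{new}} = \frac{n\rho}{1 + (n-1)\rho}
With n = 2, this corrects a split-half correlation r to 2r/(1 + r).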
Inter-rater reliability
consistency of scores across different raters for subjectively scored tests
Cohen’s kappa coefficient
consistency between two raters when ratings are on a nominal scale
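For reference, kappa corrects the observed agreement rate for agreement expected by chance:
\kappa = \frac{p_o - p_e}{1 - p_e}
where p_o is the observed proportion of agreement and p_e the proportion expected by chance.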
coefficient of concordance (Kendall’s W)
assesses consistency among three or more raters when ratings are ranks
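For reference, Kendall’s W for m raters ranking n subjects (ignoring ties) is:
W = \frac{12S}{m^2(n^3 - n)}
where S is the sum of squared deviations of each subject’s rank sum from the mean rank sum.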
consensual observer drift
two or more raters communicate while rating, which artificially increases their consistency and inflates inter-rater reliability estimates