Chapter 5 Flashcards
cohen’s kappa
statistical estimate of inter-rater reliability that is more robust than percent agreement because it adjusts for the probability that some agreement is due to random chance
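the chance correction above can be sketched in plain Python. this is a minimal illustration, not a library implementation; the function name and the two-list input format are my own choices:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater1)
    # observed agreement: proportion of items where the two raters match
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # expected chance agreement: product of each rater's marginal proportions,
    # summed over categories
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in c1) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

with identical ratings kappa is 1; with agreement no better than the marginals predict, kappa is 0, even when raw percent agreement looks respectable.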
parallel-forms reliability
correlation between two versions of the same test or measure that were constructed in the same way, usually by randomly selecting items from a common test question pool
internal consistency
correlation that assesses the degree to which items on the same multi-item instrument are interrelated
most common forms: average inter-item correlation, average item-total correlation, split-half correlation, and cronbach’s alpha
average inter-item correlation
estimate of internal consistency reliability that uses the average of the correlations of all pairs of items
average item-total correlation
estimate of internal consistency reliability where you first create a total score across all items and then compute the correlation of each item with the total. average item-total correlation is the average of those individual item-total correlations
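both averages can be computed directly from the item scores. a minimal sketch (function names and the items-as-rows data layout are illustrative assumptions):

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def average_inter_item_correlation(items):
    # items: one list of scores per item, across the same respondents;
    # average the correlation over every pair of items
    pairs = list(combinations(items, 2))
    return sum(pearson(a, b) for a, b in pairs) / len(pairs)

def average_item_total_correlation(items):
    # first build each respondent's total score across all items,
    # then average each item's correlation with that total
    totals = [sum(vals) for vals in zip(*items)]
    return sum(pearson(item, totals) for item in items) / len(items)
```

both return 1.0 when every item orders respondents identically, and drop toward 0 as items disagree.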
split-half reliability
estimate of internal consistency reliability that uses correlation between the total score of two randomly selected halves of the same multi-item test or measure
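the random split described above can be sketched as follows (the function name, the items-as-rows layout, and the fixed seed are illustrative assumptions; in practice the raw split-half correlation is often adjusted upward with the Spearman-Brown correction, which is omitted here):

```python
import random

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(items, seed=0):
    # items: one list of scores per item, across the same respondents.
    # randomly split the items into two halves, total each half per
    # respondent, and correlate the two half-totals.
    rng = random.Random(seed)
    idx = list(range(len(items)))
    rng.shuffle(idx)
    half = len(idx) // 2
    n_resp = len(items[0])
    h1 = [sum(items[i][r] for i in idx[:half]) for r in range(n_resp)]
    h2 = [sum(items[i][r] for i in idx[half:]) for r in range(n_resp)]
    return pearson(h1, h2)
```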
cronbach’s alpha
method of estimating internal consistency reliability which is analogous to the average of all possible split-half correlations
alpha = 1 indicates perfect internal consistency
poor internal consistency can be caused by inconsistent answers or by measurement error
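rather than averaging split-half correlations directly, alpha is usually computed from the item and total-score variances; a minimal sketch (function names and items-as-rows layout are my own choices):

```python
def variance(x):
    """Population variance of a list of scores."""
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def cronbach_alpha(items):
    # items: one list of scores per item, across the same respondents.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))
```

when all items covary perfectly the item variances are a small share of the total-score variance and alpha reaches 1; as items become unrelated, alpha falls toward 0.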
reliability
criterion for evaluating quality of measurement. extent to which data collection techniques or analysis procedures yield consistent findings
validity
criterion for evaluating quality of measurement. deals with the accuracy or precision of measurement