Test Construction Flashcards
test reliability
the extent to which a test provides consistent information
Test-retest reliability
administer a test to a sample of examinees twice and correlate the two sets of scores
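As a sketch, the resulting reliability coefficient is just the Pearson correlation between the two administrations; here $X_{1i}$ and $X_{2i}$ denote examinee $i$'s scores on the first and second testing (notation assumed for illustration, not from the cards):
$$ r_{xx} = \frac{\sum_{i=1}^{N} (X_{1i} - \bar{X}_1)(X_{2i} - \bar{X}_2)}{\sqrt{\sum_{i=1}^{N} (X_{1i} - \bar{X}_1)^2}\,\sqrt{\sum_{i=1}^{N} (X_{2i} - \bar{X}_2)^2}} $$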
alternate forms reliability
-administer one form of the test to a sample of examinees and an alternate form to the same examinees, then correlate the two sets of scores
internal consistency reliability
-provides info on the consistency of scores over different test items
-useful for tests that are designed to measure a single content domain or aspect of behaviour
(includes coefficient alpha, Kuder-Richardson 20 (KR-20), and split-half reliability)
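As an illustrative sketch of coefficient alpha for a $k$-item test, where $\sigma_j^2$ is the variance of item $j$ and $\sigma_X^2$ is the variance of total test scores (symbols assumed for illustration):
$$ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{j=1}^{k} \sigma_j^2}{\sigma_X^2}\right) $$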
Inter-rater reliability
-provides info on the consistency of scores over different raters; important for subjectively scored tests
(includes Cohen's kappa coefficient and Kendall's coefficient of concordance)
Which method would be used to assess the inter-rater reliability of a rating scale designed to help clinicians distinguish between children who either do or do not meet the DSM criteria for a diagnosis of ADHD?
Cohen’s Kappa coefficient
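A sketch of the kappa formula, with $p_o$ the observed proportion of agreement between raters and $p_e$ the proportion of agreement expected by chance (notation assumed):
$$ \kappa = \frac{p_o - p_e}{1 - p_e} $$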
After using the split-half method to estimate a test’s reliability, you would use which method to correct the split-half reliability coefficient?
Spearman-Brown prophecy formula
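For the split-half case, the correction estimates the reliability of the full-length test from the half-test correlation $r_{hh}$ (notation assumed):
$$ r_{xx} = \frac{2\,r_{hh}}{1 + r_{hh}} $$
More generally, for a test lengthened by a factor of $n$, the prophecy formula is $r_{new} = \dfrac{n\,r}{1 + (n-1)\,r}$.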
Name two types of construct validity
convergent and divergent (discriminant)
What method is most useful to assess convergent validity?
The multitrait-multimethod (MTMM) matrix, which displays the correlations between measures of the same and different traits obtained with the same and different methods; high correlations between different methods measuring the same trait indicate convergent validity.
What is criterion-referenced scoring?
Scoring that interprets an examinee's performance relative to a predetermined criterion (e.g., a cutoff score) rather than to other examinees; results are often reported as a yes/no or pass/fail decision.
For timed tests, which measure of reliability is most effective?
Alternate forms reliability, because measures of internal consistency are inappropriate for speeded tests, where speed of responding rather than item content is the important variable.
Kuder-Richardson Formula
Used to measure the internal consistency of tests composed of dichotomously scored (e.g., right/wrong) items.
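As a sketch of KR-20 for a $k$-item test, where $p_j$ is the proportion of examinees answering item $j$ correctly, $q_j = 1 - p_j$, and $\sigma_X^2$ is the total-score variance (notation assumed for illustration):
$$ KR\text{-}20 = \frac{k}{k-1}\left(1 - \frac{\sum_{j=1}^{k} p_j q_j}{\sigma_X^2}\right) $$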