Chapter 6: EVALUATING SELECTION TECHNIQUES AND DECISIONS Flashcards
The extent to which a score from a test or from an evaluation is consistent and free from error.
Reliability
A method in which each of several people takes the same test twice.
test-retest reliability
The scores from the first administration of the test are correlated with the scores from the second to determine whether they are similar.
test-retest reliability
The extent to which repeated administration of the same test will achieve similar results.
test-retest reliability
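The test-retest correlation described above can be sketched in plain Python. The two score lists below are hypothetical illustrations (not data from the chapter): each position holds one applicant's score on the first and second administration.

```python
# Hypothetical scores for five applicants who took the same test twice,
# separated in time (illustrative data only).
time1 = [78, 85, 62, 90, 71]
time2 = [80, 83, 65, 88, 70]

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of test scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A high correlation (near 1.0) indicates good temporal stability.
print(round(pearson_r(time1, time2), 3))
```

A correlation near 1.0 would suggest the scores are stable across the two administrations; a low correlation would suggest the test is susceptible to random daily conditions.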
The test scores are stable across time and not highly susceptible to such random daily conditions as illness, fatigue, stress, or uncomfortable testing conditions
temporal stability
The consistency of test scores across time.
temporal stability
A method in which two forms of the same test are constructed and administered.
alternate-forms reliability
A procedure designed to eliminate any effects that taking one form of the test first may have on scores on the second form.
counterbalancing
The extent to which two forms of the same test are similar.
alternate-forms reliability
A method of controlling for order effects by giving half of a sample Test A first, followed by Test B, and giving the other half of the sample Test B first, followed by Test A.
counterbalancing
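The counterbalancing procedure can be sketched in a few lines of Python. The six-person sample below is hypothetical; the point is simply that the two administration orders are split evenly across the sample.

```python
# A minimal sketch of counterbalancing with a hypothetical six-person sample:
# half take Form A first, the other half take Form B first.
sample = ["P1", "P2", "P3", "P4", "P5", "P6"]
half = len(sample) // 2

# Map each person to the order in which they take the two forms.
order = {person: ("Form A", "Form B") for person in sample[:half]}
order.update({person: ("Form B", "Form A") for person in sample[half:]})

print(order["P1"], order["P6"])
```

Because each order is used by half the sample, any practice or fatigue effect from taking one form first is balanced across the two forms.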
The extent to which the scores on two forms of a test are similar.
form stability
The consistency with which an applicant responds to items measuring a similar dimension or construct.
internal reliability
The extent to which similar items are answered in similar ways is referred to as internal consistency and measures ______
item stability
The extent to which similar items are answered in similar ways is referred to as _____ and measures item stability
internal consistency
The extent to which test items measure the same construct.
Item homogeneity
Three statistics used to determine the internal reliability of a test:
-Kuder-Richardson 20
-Spearman-Brown Prophecy Formula
-Coefficient Alpha (Cronbach’s Alpha)
A form of internal reliability in which the consistency of item responses is determined by comparing scores on half of the items with scores on the other half of the items. It is the easiest method to use, as the items on a test are simply split into two groups.
Split-half method
More popular and accurate methods of determining internal reliability, although they are more complicated to compute.
Cronbach’s coefficient alpha and the K-R 20
A formula used to correct reliability coefficients resulting from the split-half method.
Spearman-Brown prophecy formula
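Because each half of a split test contains only half the items, the split-half correlation underestimates the reliability of the full test; the Spearman-Brown prophecy formula corrects for this. A minimal sketch (the .70 input is a made-up example value, not from the chapter):

```python
def spearman_brown(split_half_r):
    """Step a split-half correlation up to an estimate of full-test
    reliability: r_corrected = 2r / (1 + r)."""
    return (2 * split_half_r) / (1 + split_half_r)

# A hypothetical split-half correlation of .70 corrects to about .82.
print(round(spearman_brown(0.70), 3))
```

Note that the corrected coefficient is always at least as large as the split-half correlation, and a perfect correlation of 1.0 stays at 1.0.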
A statistic used to determine the internal reliability of tests that use interval or ratio scales.
Coefficient alpha
A statistic used to determine the internal reliability of tests that use items with dichotomous answers (yes/no, true/false).
Kuder-Richardson Formula 20 (K-R 20)
Used for tests containing dichotomous items (e.g., yes/no, true/false).
K-R 20
Can be used not only for dichotomous items but also for tests containing interval and ratio (nondichotomous) items, such as five-point rating scales.
coefficient alpha
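Both statistics can be sketched in plain Python. Coefficient alpha compares the sum of the individual item variances with the variance of total scores; the K-R 20 is the same idea for dichotomous items, replacing each item's variance with p*q (the proportions passing and failing that item). The answer matrix below is hypothetical illustrative data, not from the chapter.

```python
def cronbach_alpha(items):
    """Coefficient alpha. items: one list of scores per test item,
    all lists in the same test-taker order."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[p] for item in items) for p in range(n)]
    return (k / (k - 1)) * (1 - sum(var(i) for i in items) / var(totals))

def kr20(items):
    """K-R 20 for dichotomous (0/1) items: each item's variance
    becomes p*q, the proportions answering correctly and incorrectly."""
    k = len(items)
    n = len(items[0])
    totals = [sum(item[p] for item in items) for p in range(n)]
    m = sum(totals) / n
    total_var = sum((t - m) ** 2 for t in totals) / n
    pq = sum((sum(i) / n) * (1 - sum(i) / n) for i in items)
    return (k / (k - 1)) * (1 - pq / total_var)

# Hypothetical answers: 4 dichotomous items x 5 test takers (1 = correct).
answers = [[1, 1, 0, 1, 0],
           [1, 0, 0, 1, 1],
           [1, 1, 1, 1, 0],
           [0, 1, 0, 1, 0]]
print(round(cronbach_alpha(answers), 3), round(kr20(answers), 3))
```

For dichotomous items the two statistics agree exactly, which illustrates the flashcards above: the K-R 20 is restricted to yes/no-type items, while coefficient alpha extends the same computation to rating-scale items.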
The extent to which two people scoring a test agree on the test score, or the extent to which a test is scored correctly.
Scorer reliability