Chapter 6: Evaluating Selection Techniques and Decision Flashcards
the extent to which a score from a selection measure is stable and free from error
reliability
four ways to determine test reliability
- Test-retest reliability
- Alternate-forms reliability
- Internal reliability
- Scorer reliability
test-retest reliability
the extent to which repeated administration of the same test will achieve similar results
the consistency of test scores across time
temporal stability
Bonus: the amount of anxiety an individual normally has all the time
trait anxiety
Bonus: the anxiety an individual has at any given moment
state anxiety
method of reliability in which two forms (Form A and Form B) of the same test are constructed; half of the sample receives Form A and the other half receives Form B
Alternate-Forms reliability
the extent to which two forms of the test are similar
form stability
the average correlation between alternate forms of tests used in industry
.89
the extent to which responses to the test items are consistent
item stability
the extent to which test items measure the same construct
item homogeneity
the statistic used to determine the reliability of tests that use items with dichotomous answers (e.g., yes/no, true/false)
Kuder- Richardson 20
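The KR-20 statistic can be computed from the number of items, each item's proportion of correct answers, and the variance of total scores. A minimal sketch, using invented response data:

```python
# Minimal sketch of the Kuder-Richardson 20 (KR-20) formula for
# dichotomously scored items (1 = correct, 0 = incorrect).
# The response data below are hypothetical.

def kr20(scores):
    """scores: list of per-person item responses, each a list of 0/1."""
    k = len(scores[0])                       # number of items
    n = len(scores)                          # number of test takers
    totals = [sum(person) for person in scores]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    # p = proportion answering the item correctly; q = 1 - p
    pq_sum = 0.0
    for item in range(k):
        p = sum(person[item] for person in scores) / n
        pq_sum += p * (1 - p)
    return (k / (k - 1)) * (1 - pq_sum / var_total)

# hypothetical responses: 4 people x 3 items
data = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]
print(round(kr20(data), 3))
```

Higher values (closer to 1.0) indicate that the dichotomous items hang together consistently.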
a form of reliability in which the consistency of item responses is determined by comparing scores on half of the items with scores on the other half
split-half method
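The split-half method correlates scores on one half of the items with scores on the other half. Because a half-length test understates full-test reliability, the correlation is usually stepped up with the Spearman-Brown correction (a standard refinement, not stated on the card). A sketch with made-up data, splitting odd- versus even-numbered items:

```python
# Minimal sketch of the split-half method with the Spearman-Brown
# correction. All data are hypothetical.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half(scores):
    """scores: per-person item responses; split into odd vs. even items."""
    half1 = [sum(p[0::2]) for p in scores]   # odd-numbered items
    half2 = [sum(p[1::2]) for p in scores]   # even-numbered items
    r = pearson_r(half1, half2)
    return (2 * r) / (1 + r)                 # Spearman-Brown correction

# hypothetical ratings: 4 people x 4 items
data = [[5, 4, 5, 4], [3, 3, 2, 2], [4, 4, 4, 5], [2, 1, 2, 1]]
print(round(split_half(data), 3))
```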
a statistic used to determine internal reliability of tests that use interval or ratio scales
coefficient alpha
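Coefficient (Cronbach's) alpha generalizes KR-20 to interval- or ratio-scaled items: it compares the sum of the individual item variances to the variance of total scores. A sketch with hypothetical ratings:

```python
# Minimal sketch of coefficient (Cronbach's) alpha for items scored on
# interval or ratio scales. The ratings below are invented.

def coefficient_alpha(scores):
    """scores: list of per-person item responses (each a list of numbers)."""
    k = len(scores[0])                       # number of items
    n = len(scores)                          # number of test takers

    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / len(values)

    totals = [sum(person) for person in scores]
    item_vars = [variance([person[i] for person in scores]) for i in range(k)]
    return (k / (k - 1)) * (1 - sum(item_vars) / variance(totals))

# hypothetical 5-point ratings: 4 people x 3 items
ratings = [[5, 4, 5], [3, 3, 2], [4, 4, 4], [2, 1, 2]]
print(round(coefficient_alpha(ratings), 3))
```

With all-dichotomous items, this formula reduces to KR-20.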
the extent to which two or more scorers agree on a test score, or the extent to which a test is scored correctly
scorer reliability
the degree to which inferences from the test scores are justified by the evidence
validity
three types of validity
- content validity
- criterion validity
- construct validity
the extent to which the items on a test are fairly representative of the entire domain the test seeks to measure
content validity
the extent to which a test score is statistically related to a criterion (e.g., a measure of job performance)
criterion validity
a form of criterion validity that correlates test scores of current employees with measures of job performance (performance appraisal)
concurrent validity
a form of criterion validity in which the test scores of applicants are correlated with their future job performance
predictive validity
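In both concurrent and predictive designs, the validity coefficient is simply the Pearson correlation between test scores and the criterion. A sketch, with all numbers invented for illustration:

```python
# Hypothetical sketch: a criterion validity coefficient is the Pearson
# correlation between selection-test scores and a criterion such as
# later performance ratings. All data below are made up.

def validity_coefficient(test_scores, criterion):
    n = len(test_scores)
    mx = sum(test_scores) / n
    my = sum(criterion) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(test_scores, criterion))
    sx = sum((x - mx) ** 2 for x in test_scores) ** 0.5
    sy = sum((y - my) ** 2 for y in criterion) ** 0.5
    return cov / (sx * sy)

applicant_tests = [72, 85, 90, 60, 78]        # selection test scores
performance = [3.1, 4.0, 4.5, 2.8, 3.5]       # later performance ratings
print(round(validity_coefficient(applicant_tests, performance), 3))
```

In a concurrent design the criterion comes from current employees, which restricts the range of scores and tends to shrink this correlation.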
why is a concurrent design weaker than a predictive design?
because there will be very few employees at the extremes of the performance scale (restriction-of-range issues)
the extent to which a test found valid for a job in one location or organization is valid for the same job in a different location
validity generalization
_______ _______ is based on the assumption that tests that predict a particular component of one job should predict performance on the same job component for a different job
synthetic validity
the extent to which a test measures the construct it intends to measure
construct validity
three types of construct validity
- convergent validity
- discriminant validity
- known-group validity
the extent to which a test appears to be job-related, which affects the applicant's motivation to do well on the test
face validity
sources of reliability and validity information
- Nineteenth Mental Measurements Yearbook
- compendium entitled Tests in Print VIII
refers to the practicality of selecting a test in terms of its cost, administration, and scoring
Cost-efficiency