Chapter 6: Evaluating Selection Techniques and Decisions (Reversed) Flashcards
reliability
the extent to which a score from a selection measure is stable and free from error
- test-retest reliability
- alternate-forms reliability
- internal reliability
- scorer reliability
four ways to determine test reliability
the extent to which repeated administration of the same test will achieve similar results
test-retest reliability
temporal stability
the consistency of test scores across time
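A minimal Python sketch of a test-retest coefficient: scores from the two administrations of the same test are simply correlated (the scores and timing below are made up for illustration).

```python
# Test-retest reliability: Pearson correlation between scores from two
# administrations of the same test (hypothetical scores for five people).
from statistics import correlation  # Pearson's r, Python 3.10+

time_1 = [82, 75, 91, 68, 88]   # first administration
time_2 = [80, 78, 89, 70, 85]   # same test, three weeks later

r = correlation(time_1, time_2)
print(round(r, 2))  # a high r indicates good temporal stability
```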
Bonus: the amount of anxiety an individual normally has all the time
trait anxiety
state anxiety
Bonus: the amount of anxiety an individual has at any given moment
alternate-forms reliability
a method of determining reliability in which two forms (Form A and Form B) of the same test are constructed; half of the sample takes Form A first and the other half takes Form B first, and scores on the two forms are correlated
form stability
the extent to which two forms of the test are similar
.89
the average correlation between alternate forms of tests used in industry
item stability
the extent to which responses to the same test items are consistent
item homogeneity
the extent to which test items measure the same construct
Kuder-Richardson 20 (KR-20)
the statistic used to determine the internal reliability of tests that use items with dichotomous answers (e.g., yes/no, correct/incorrect)
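A rough sketch of the KR-20 computation, assuming a small made-up matrix of dichotomously scored item responses (1 = correct, 0 = incorrect); KR-20 compares the sum of the item p*q values with the variance of total scores.

```python
# Kuder-Richardson 20 (KR-20) for dichotomously scored items.
# The response matrix below is illustrative, not real data.

def kr20(item_matrix):
    """item_matrix: one list of 0/1 item scores per test-taker."""
    n_people = len(item_matrix)
    k = len(item_matrix[0])                       # number of items
    totals = [sum(person) for person in item_matrix]
    mean_total = sum(totals) / n_people
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_people
    pq_sum = 0.0
    for j in range(k):
        p = sum(person[j] for person in item_matrix) / n_people  # proportion correct
        pq_sum += p * (1 - p)
    return (k / (k - 1)) * (1 - pq_sum / var_total)

answers = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 0, 1],
]
print(round(kr20(answers), 2))
```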
split-half method
a form of reliability in which the consistency of item responses is determined by comparing scores on half of the items with scores on the other half
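A sketch of the split-half approach with invented data: scores on the odd-numbered items are correlated with scores on the even-numbered items, and the correlation is then stepped up with the Spearman-Brown formula to estimate reliability for the full-length test (each half is only half as long as the whole test).

```python
# Split-half reliability: correlate odd-item scores with even-item scores,
# then apply the Spearman-Brown correction for test length.
from statistics import correlation  # Pearson's r, Python 3.10+

# Hypothetical 0/1 item responses for six test-takers on a 6-item test.
responses = [
    [1, 1, 0, 1, 1, 0],
    [1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0],
]
odd_half  = [sum(p[0::2]) for p in responses]   # items 1, 3, 5
even_half = [sum(p[1::2]) for p in responses]   # items 2, 4, 6

r_half = correlation(odd_half, even_half)
spearman_brown = (2 * r_half) / (1 + r_half)    # estimate for the full-length test
print(round(r_half, 2), round(spearman_brown, 2))
```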
coefficient alpha
a statistic used to determine internal reliability of tests that use interval or ratio scales
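A sketch of coefficient alpha (Cronbach's alpha) computed from a made-up matrix of 1-5 ratings; alpha compares the sum of the individual item variances with the variance of the total scores.

```python
# Coefficient alpha: internal reliability for items scored on
# interval/ratio scales (e.g., 1-5 ratings). Data are illustrative.

def cronbach_alpha(item_matrix):
    """item_matrix: one list of item ratings per respondent."""
    n = len(item_matrix)
    k = len(item_matrix[0])                       # number of items
    totals = [sum(row) for row in item_matrix]
    mean_t = sum(totals) / n
    var_total = sum((t - mean_t) ** 2 for t in totals) / n
    var_items = 0.0
    for j in range(k):
        col = [row[j] for row in item_matrix]
        mean_j = sum(col) / n
        var_items += sum((x - mean_j) ** 2 for x in col) / n
    return (k / (k - 1)) * (1 - var_items / var_total)

ratings = [
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [4, 4, 5, 4],
]
print(round(cronbach_alpha(ratings), 2))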
scorer reliability
the extent to which two or more scorers agree on the score a test-taker receives, or the extent to which a test is scored correctly
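One simple index of scorer agreement is the percentage of cases on which two scorers assign the same score; the hypothetical essay grades below illustrate the idea (correlations and other agreement statistics are also used in practice).

```python
# Scorer reliability: do two independent scorers assign the same scores?
# A simple index is the percentage of agreement (hypothetical grades).

scorer_a = ["pass", "fail", "pass", "pass", "fail", "pass"]
scorer_b = ["pass", "fail", "pass", "fail", "fail", "pass"]

agreements = sum(a == b for a, b in zip(scorer_a, scorer_b))
print(f"{agreements / len(scorer_a):.0%} agreement")
```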
validity
the degree to which inferences from the test scores are justified by the evidence
- content validity
- criterion validity
- construct validity
three types of validity
content validity
the extent to which the items on a test are fairly representative of the entire domain the test seeks to measure
the extent to which a test score is statistically related to a criterion, such as a measure of job performance
criterion validity
concurrent validity
a form of criterion validity in which test scores of current employees are correlated with measures of their job performance (e.g., performance appraisal ratings)
predictive validity
a form of criterion validity in which test scores of applicants are correlated with their future job performance
restriction of range: current employees have already been screened, so they tend to be a relatively homogeneous, higher-scoring group, which lowers the observed correlation
why is a concurrent design weaker than a predictive design?
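A small illustration of restriction of range, assuming made-up applicant data: the validity coefficient computed on the full applicant pool is noticeably larger than the one computed only on the high-scoring "hired" group, which is all a concurrent design can use.

```python
# Restriction of range: computing the validity coefficient only on the
# already-screened (higher-scoring) group shrinks the observed correlation.
from statistics import correlation  # Pearson's r, Python 3.10+

test_scores = [55, 60, 65, 70, 75, 80, 85, 90, 95, 100]
performance = [2.0, 2.4, 2.6, 3.1, 2.9, 3.4, 3.3, 3.8, 3.6, 4.1]

r_full = correlation(test_scores, performance)

# Keep only the "hired" group: applicants who scored 80 or above.
hired = [(t, p) for t, p in zip(test_scores, performance) if t >= 80]
r_restricted = correlation([t for t, _ in hired], [p for _, p in hired])

print(round(r_full, 2), round(r_restricted, 2))  # restricted r is noticeably lower
```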
validity generalization
the extent to which a test found valid for a job in one location or organization is valid for the same job in a different location
synthetic validity
_______ _______ is based on the assumption that tests that predict a particular component of one job should predict performance on the same job component for a different job
construct validity
the extent to which a test measures the construct it intends to measure
- convergent validity
- discriminant validity
- known-group validity
three types of construct validity
face validity
the extent to which a test appears to be job-related, which affects the applicant’s motivation to do well on the test
- Nineteenth Mental Measurements Yearbook
- Tests in Print VIII (a compendium of published tests)
sources of reliability and validity information
Cost-efficiency
a practical consideration in choosing among tests: the cost of purchasing the test and the time and expense of administering and scoring it