R&S Foundations of Recruitment and Selection I: Reliability and Validity Flashcards
The Recruitment and Selection Process
An employer’s goal is to hire an applicant who possesses the knowledge, skills, abilities, or other attributes (KSAOs) required to perform the job
Reliability
the degree to which observed scores are free from random measurement errors; an indication of the stability or dependability of a set of measurements over repeated applications of the measurement procedure
Interpreting Reliability Coefficients
True score: the average score that an individual would earn on an infinite number of administrations of the same test or parallel versions of the same test
•Error score (or measurement error): the hypothetical difference between an observed score and a true score
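A minimal sketch of this idea in classical test theory, using simulated data (all values are hypothetical and chosen only for illustration): the observed score is the true score plus a random error score, and reliability can be read as the proportion of observed-score variance that is true-score variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                   # hypothetical applicants

true_scores = rng.normal(100, 15, n)         # stable "true" ability
errors = rng.normal(0, 5, n)                 # random measurement error
observed = true_scores + errors              # observed score = true score + error score

# Reliability in classical test theory: true-score variance / observed-score variance
reliability = true_scores.var() / observed.var()
print(f"Reliability ≈ {reliability:.2f}")    # about 0.90 with these illustrative values
```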
Factors Affecting Reliability
Temporary Individual Characteristics
•Lack of Standardization
•Chance
Methods of Estimating Reliability
Test and Retest
•Alternate Forms
•Internal Consistency
•Inter-Rater Reliability
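A minimal sketch of the first and third methods above, test-retest reliability and internal consistency (Cronbach's alpha), computed on hypothetical simulated scores; the sample sizes and score distributions are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Test-retest: correlate scores from two administrations of the same test
time1 = rng.normal(50, 10, 200)
time2 = time1 + rng.normal(0, 4, 200)            # hypothetical retest scores
test_retest_r = np.corrcoef(time1, time2)[0, 1]

# Internal consistency: Cronbach's alpha across the items of a single test
ability = rng.normal(0, 1, (200, 1))
items = ability + rng.normal(0, 1, (200, 5))     # 5 hypothetical items sharing one factor
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))

print(f"test-retest r ≈ {test_retest_r:.2f}, Cronbach's alpha ≈ {alpha:.2f}")
```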
Inter-Rater or Inter-Observer Reliability
Whenever you use humans as a part of your measurement procedure, you have to worry about whether the results you get are reliable or consistent. People are notorious for their inconsistency. We are easily distractible. We get tired of doing repetitive tasks. We daydream. We misinterpret.
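A minimal sketch of an inter-rater reliability check, assuming two hypothetical interviewers rate the same 30 candidates on a 1-7 scale; consistency is summarized here as the correlation between the two sets of ratings plus the proportion of exact agreements.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical interview ratings (1-7 scale) from two raters observing the same 30 candidates
candidate_quality = rng.normal(4, 1, 30)
rater_a = np.clip(np.round(candidate_quality + rng.normal(0, 0.5, 30)), 1, 7)
rater_b = np.clip(np.round(candidate_quality + rng.normal(0, 0.5, 30)), 1, 7)

# Inter-rater reliability estimated as the correlation between the two raters' scores
inter_rater_r = np.corrcoef(rater_a, rater_b)[0, 1]
exact_agreement = (rater_a == rater_b).mean()    # proportion of identical ratings

print(f"inter-rater r ≈ {inter_rater_r:.2f}, exact agreement ≈ {exact_agreement:.0%}")
```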
Validity
the legitimacy or correctness of the inferences that are drawn from a set of measurements or other specified procedures; the degree to which accumulated evidence and theory support specific interpretations of test scores in the context of the test’s proposed use
Validation Strategies
Construct level: Cognitive ability → Job performance
Measurement level: Wonderlic test score → Job performance score
The observed correlation between scores at the measurement level is used to support inferences about the relationship between the constructs at the construct level.
Predictive and Concurrent Evidence for Test-Criterion Relationships
Predictive evidence: obtained through research designs that establish a correlation between predictor scores obtained before an applicant is hired and criterion scores obtained at a later time, usually after the applicant is employed
Concurrent evidence: obtained through research designs that establish a correlation between predictor and criterion scores based on information collected at approximately the same time from a specific group of workers
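In both designs the evidence is usually summarized as a validity coefficient, the correlation between predictor and criterion scores. A minimal sketch with hypothetical data (the test and performance values are simulated, not real):

```python
import numpy as np

rng = np.random.default_rng(3)

# Predictor scores (e.g., a cognitive ability test) and criterion scores
# (e.g., supervisor-rated job performance). In a predictive design the criterion
# is collected later; in a concurrent design both are collected at about the same
# time from current workers. The analysis of the resulting score pairs is the same.
test_scores = rng.normal(25, 5, 150)
job_performance = 0.4 * (test_scores - 25) / 5 + rng.normal(0, 1, 150)

validity_coefficient = np.corrcoef(test_scores, job_performance)[0, 1]
print(f"validity coefficient ≈ {validity_coefficient:.2f}")
```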
Validity Generalization
Validity generalization: the application of validity evidence, obtained through meta-analysis of data obtained from many situations, to other situations that are similar to those on which the meta-analysis is based
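A minimal sketch of the meta-analytic step behind validity generalization, using made-up validity coefficients and sample sizes from several local studies; the weighting follows the common bare-bones (sample-size-weighted) approach, and all numbers are illustrative assumptions.

```python
import numpy as np

# Hypothetical validity coefficients and sample sizes from five local studies
rs = np.array([0.22, 0.35, 0.18, 0.40, 0.28])
ns = np.array([60, 120, 45, 200, 90])

# Sample-size-weighted mean validity and observed variance across studies
mean_r = np.average(rs, weights=ns)
observed_var = np.average((rs - mean_r) ** 2, weights=ns)

# Variance expected from sampling error alone; if it accounts for most of the
# observed variance, the validity evidence generalizes across these situations
sampling_error_var = ((1 - mean_r ** 2) ** 2) / (ns.mean() - 1)

print(f"weighted mean r ≈ {mean_r:.2f}")
print(f"observed variance ≈ {observed_var:.4f}, "
      f"expected from sampling error ≈ {sampling_error_var:.4f}")
```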
Factors Affecting the Validity Coefficient
Range Restriction
•Measurement Error
•Sampling Error
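Of the factors above, range restriction is the one most often corrected for statistically. A minimal sketch of Thorndike's Case II correction for direct range restriction on the predictor, with hypothetical standard deviations and an observed validity of 0.25:

```python
import math

def correct_range_restriction(r_restricted: float,
                              sd_unrestricted: float,
                              sd_restricted: float) -> float:
    """Thorndike Case II correction for direct range restriction on the predictor."""
    u = sd_unrestricted / sd_restricted          # ratio of applicant-pool SD to incumbent SD
    return (r_restricted * u) / math.sqrt(
        1 - r_restricted ** 2 + (r_restricted ** 2) * (u ** 2))

# Hypothetical values: validity of 0.25 observed among hired workers, where selection
# cut the predictor SD from 5.0 in the applicant pool to 3.0 among those hired
print(f"corrected r ≈ {correct_range_restriction(0.25, 5.0, 3.0):.2f}")
```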
Bias and Fairness
Bias: systematic errors in measurement, or inferences made from those measurements, that are related to different identifiable group membership characteristics such as age, sex, or race
Fairness
Fairness: the value judgments people make about the decisions or outcomes that are based on measurements
◦Principle that every test taker should be assessed in an equitable manner
Different Views of Fairness
Fairness as equitable treatment in the testing process
◦Fairness as lack of bias
◦Fairness in selection and prediction