Lecture 6: Selection decisions Flashcards
Validity
The degree to which inferences from test scores are justified by the evidence (Does the instrument measure what it is supposed to measure?)
Construct validity
Are you measuring what you’re supposed to measure?
Content validity
The extent to which the items on a test are fairly representative of the entire domain the test seeks to measure
Criterion validity
The extent to which a test score relates to a criterion, i.e., a relevant outcome measure such as job performance
Concurrent validity (Part of criterion validity)
The test is given to people for whom criterion data are already available (e.g., current employees), and test scores should correlate with the criterion measured at the same point in time
Predictive validity (Part of criterion validity)
Your measure should be correlated with a future outcome in the way you expect it to be
Face validity
The extent to which a test is subjectively viewed as covering the concept it is supposed to measure (a subjective judgment)
Reliability
The extent to which a score from a test or from an evaluation is consistent and free from error (Does the instrument measure consistently?)
Temporal stability
Test-retest reliability: testing at different time points should give the same results
Form stability
Different forms of the same test should give the same results
Internal reliability
Whether all the items in the test measure the same construct (assessed with Cronbach’s alpha)
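Cronbach’s alpha can be computed directly from item scores. A minimal sketch, using only the Python standard library and made-up data (rows are respondents, columns are test items):

```python
from statistics import pvariance

def cronbach_alpha(rows):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    k = len(rows[0])                       # number of items
    items = list(zip(*rows))               # transpose: one tuple per item
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_vars / total_var)

scores = [  # hypothetical responses, 4 people x 4 items
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 3],
]
print(round(cronbach_alpha(scores), 2))  # 0.92 -> high internal reliability
```

A common rule of thumb is that alpha of .70 or higher indicates acceptable internal reliability, though the exact cutoff depends on the purpose of the test.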
Scorer reliability
The extent to which different scorers agree in their evaluations. In structured interviews scorer reliability is high: panel members often agree
Reliable, not valid
Consistent, but not measuring what it’s supposed to measure
Valid, not reliable
Not consistent but measures what it is supposed to measure
Neither reliable nor valid
Something that doesn’t measure what it’s supposed to and is different every time you test it
Both reliable and valid
Measures what it’s supposed to and is consistent
Utility
The degree to which a selection device improves the quality of a personnel system, above what would have occurred had the instrument not been used
Taylor-Russel tables
Designed to estimate the percentage of future employees who will be successful on the job if a particular selection method is used
Lawshe tables
Tables that use the base rate, test validity and applicant percentile on a test to determine the probability of future success for that applicant
Proportion of correct decisions
A utility method that compares the percentage of selection decisions that were accurate with the percentage of successful employees (the base rate)
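The comparison above can be sketched with a simple confusion-matrix calculation; all counts below are hypothetical:

```python
# A decision is "correct" when a hired applicant succeeds (true positive)
# or a rejected applicant would have failed on the job (true negative).
def proportion_correct(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

tp, tn, fp, fn = 40, 30, 10, 20               # hypothetical counts
base_rate = (tp + fn) / (tp + tn + fp + fn)   # successful employees overall

print(proportion_correct(tp, tn, fp, fn))  # 0.7
print(base_rate)                           # 0.6
```

Here the selection method is correct 70% of the time versus a 60% base rate, so using it improves decisions over hiring without the test.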
Utility formulas
Provide an estimate of the amount of money an organization will save if it adopts a new testing procedure
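One widely used utility formula is the Brogden-Cronbach-Gleser model (assuming this is the kind of formula the lecture refers to): savings = number hired × average tenure × test validity × SD of job performance in dollars × mean standardized test score of those hired, minus the cost of testing. A sketch with hypothetical figures:

```python
def utility(n_hired, tenure_years, validity, sd_y, mean_z,
            n_applicants, cost_per_applicant):
    # Brogden-Cronbach-Gleser: dollar gain from better hires minus test cost
    gain = n_hired * tenure_years * validity * sd_y * mean_z
    cost = n_applicants * cost_per_applicant
    return gain - cost

# 10 hires staying 2 years, validity .35, SD of job performance $20,000,
# hires average 1 SD above the applicant mean, 100 applicants at $50 each.
print(utility(10, 2, 0.35, 20_000, 1.0, 100, 50))  # 135000.0
```

Even a modest validity coefficient can translate into a large dollar gain, which is why utility analysis is used to justify investing in better selection methods.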
Measurement bias
Concerns technical aspects of the test. A test is biased if there are group differences in test scores (e.g., by race or gender) that are unrelated to the construct being measured
Predictive bias
A situation in which the predicted level of job success falsely favours one group over another
Unadjusted Top-down selection
Applicants are hired in strict rank order of their test scores: a “performance first” hiring formula
Passing scores
Who will perform at an acceptable level? (A passing score is a point in a distribution of scores that distinguishes acceptable from unacceptable performance)
Banding
A compromise between the top-down and passing-score approaches. Takes into account that tests are not perfectly reliable because of error variance
SEM Banding
Based on the concept of the Standard Error of Measurement (SEM):
- Non-sliding band: hire anyone whose score falls between the top score and the top score minus the band width
- Sliding band: start with the highest remaining score and subtract the band width from it; once the top scorer is hired, the band slides down to the next highest remaining score
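The two banding approaches can be sketched as follows. This assumes the common band width of 1.96 × SD × √(2 × (1 − reliability)), i.e., the 95% standard error of the difference between two scores; the lecture may use a different constant. All numbers are hypothetical:

```python
import math

def band_width(sd, reliability, z=1.96):
    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    return z * sem * math.sqrt(2)           # 95% SE of a score difference

def nonsliding_band(scores, width):
    # Fixed band: everyone within `width` points of the single top score.
    top = max(scores)
    return sorted((s for s in scores if s >= top - width), reverse=True)

def sliding_band(scores, width, n_hires):
    # Sliding band: hire from the current band; once the top scorer is
    # hired, the band re-anchors on the next highest remaining score.
    remaining, hired = sorted(scores, reverse=True), []
    while remaining and len(hired) < n_hires:
        top = remaining[0]
        band = [s for s in remaining if s >= top - width]
        pick = band[0]   # simplest rule; in practice any band member may
        hired.append(pick)  # be chosen (e.g., to meet diversity goals)
        remaining.remove(pick)
    return hired

w = band_width(sd=10, reliability=0.9)       # about 8.77 points
print(nonsliding_band([95, 90, 88, 80, 79], w))  # [95, 90, 88]
```

Because 90 and 88 fall within one band width of 95, their differences from the top score are treated as measurement error, and all three applicants are considered equivalent.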