Selection - Validity Flashcards
define validity
the degree to which available evidence supports inferences made from scores on selection procedures; how well are you measuring what you’re claiming to measure?
how is validity applied in the context of selection?
we want to know how well a predictor (i.e., test) is related to criteria (i.e., performance)
Discuss the relationship between reliability and validity
- it is possible to have a measure that is reliable yet does not measure what we want for selection.
- However, we cannot have high validity if we do not have high reliability. High reliability is a necessary but not a sufficient condition for high validity.
- calculated validity can't be higher than the maximum possible validity because both the predictor and criterion scores contain random error. Random error is uncorrelated across measures, so the more random error in scores from a predictor or criterion, the lower the maximum possible validity.
- if reliability of either test X or criterion Y were lower, maximum possible validity would be lower as well.
- If either our test or criterion were completely unreliable (that is, reliability 0.00), then the two variables would be unrelated, and empirical validity would be zero. (We discuss the meaning of a validity coefficient later in this chapter.)
- Thus reliability or unreliability limits, or puts a ceiling on, possible empirical validity.
- Practically speaking, to enhance maximum possible validity, reliability should be as high as possible for our predictors and our criteria.
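The ceiling described above is the classic attenuation result: observed validity can be no larger than the square root of the product of predictor and criterion reliability. A minimal sketch (the reliability values are hypothetical, chosen only for illustration):

```python
import math

def max_possible_validity(rel_x: float, rel_y: float) -> float:
    """Ceiling on the observed validity coefficient r_xy:
    it can be no larger than sqrt(rel_x * rel_y)."""
    return math.sqrt(rel_x * rel_y)

# Hypothetical reliabilities for a predictor (X) and a criterion (Y)
print(max_possible_validity(0.90, 0.81))  # ~0.854: high ceiling
print(max_possible_validity(0.90, 0.49))  # ~0.664: less reliable criterion lowers the ceiling
print(max_possible_validity(0.90, 0.00))  # 0.0: completely unreliable criterion forces validity to zero
```

Note how lowering either reliability lowers the ceiling, matching the points above.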
list sources of validity evidence
- content
- response processes
- internal structure
- relations w/ other variables
- decision consequences
content validity: describe what it is
Content validity is demonstrated to the extent that the content of the assessment process reflects the important performance domains of the job. Validity is thus built into the assessment procedures.
Content validation methods focus on content relevance and content representation. Content relevance is the extent to which the tasks of the test or assessment are relevant to the target domain. Representativeness refers to the extent to which the test items are proportional to the facets of the domain. Content relevance and representativeness are commonly assessed using subject matter expert ratings.
many companies rely heavily on content validation for various reasons.
describe the steps in content validity
1. job analysis
2. examination plan
challenges with conducting criterion related validity studies
sample size: small sample sizes are often due to many job classifications having only a small number of employees
range restriction: the selection process has already restricted the org’s workforce to a certain level of performance (the scores of hired people don’t reflect scores of entire applicant group)
criterion measures: good criterion measures are often unavailable or of poor quality
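The range-restriction point above can be illustrated with a small simulation (stdlib only; the true validity of .50, the sample size, and the top-half hiring rule are all made-up values for illustration): hiring only high scorers shrinks the variance of predictor scores among those hired, which shrinks the observed predictor-criterion correlation.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation using only the standard library."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (len(xs) * statistics.pstdev(xs) * statistics.pstdev(ys))

random.seed(42)  # reproducible illustration

# Simulate applicants: test score x, later job performance y, true r ~= .50
applicants = []
for _ in range(5000):
    x = random.gauss(0, 1)
    y = 0.5 * x + random.gauss(0, 0.866)
    applicants.append((x, y))

full_r = pearson_r([x for x, _ in applicants], [y for _, y in applicants])

# "Hire" only the top half on the test: the hired group is range restricted
cutoff = statistics.median(x for x, _ in applicants)
hired = [(x, y) for x, y in applicants if x > cutoff]
restricted_r = pearson_r([x for x, _ in hired], [y for _, y in hired])

print(round(full_r, 2), round(restricted_r, 2))  # restricted r is noticeably smaller
```

This is why a validity coefficient computed only on incumbents (already screened by the selection process) understates validity in the full applicant pool.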
advantages of content validity
- feasible
- viewed as fair
- helps ID best candidates
- practical to meet legal standards when supplemented by supporting data
- easy to explain to courts and candidates
- acceptable under the Uniform Guidelines and The Principles
disadvantages of content validity
- not easy: time, resources, expertise, documentation
- some people don’t believe in it
- difficult for entry level jobs where there isn’t any specific prior preparation required
- not appropriate when job requirements change frequently or aren’t well defined
- more resources to develop job specific test vs. use of GMA tests
content validity: direct vs indirect measures
for content validity, there is a hierarchy of assessment evidence. we can think of this evidence as being higher when direct methods are used and lower when indirect methods are used. For example, in measuring keyboard ability, a highly content valid keyboarding test would replicate the most important job tasks (text entry and data entry). The test would be a direct measure of keyboarding ability. Two indirect measures or indicators of keyboarding ability are completion of a high school keyboarding course, and having keyboarding work experience. The indirect measures do not inform us about the current keyboarding proficiency of the subject.
List and describe the guidelines for interpreting correlations according to the U.S. Dept. of Labor (note that setting an arbitrary value of a validity coefficient for determining whether a selection procedure is useful is not a wise practice).
above .35 = very beneficial
.21-.35 = likely to be useful
.11-.20 = depends on circumstance
less than .11 = unlikely to be useful
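The bands above can be written as a small lookup. Per the card's own caveat, the cutoffs are a rough guide, not hard rules; this sketch just encodes the table:

```python
def interpret_validity(r: float) -> str:
    """Rough U.S. Dept. of Labor interpretation bands for a validity
    coefficient; not a substitute for judging the selection context."""
    r = abs(r)
    if r > 0.35:
        return "very beneficial"
    if r >= 0.21:
        return "likely to be useful"
    if r >= 0.11:
        return "depends on circumstances"
    return "unlikely to be useful"

print(interpret_validity(0.51))  # structured interviews -> very beneficial
print(interpret_validity(0.18))  # years of experience -> depends on circumstances
```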
validity of work sample tests
.33
Roth, Bobko, & McFarland, 2005
Correlated w/ supervisory ratings of job performance.
validity of structured interviews
.51
Not sure where this is from.
validity of unstructured interviews
.38
Not sure where this is from.
validity of job knowledge tests
.48; job knowledge tests with high job specificity have higher levels of criterion-related validity. Job knowledge tests cannot be used for entry-level jobs: they are not appropriate for jobs where no prior experience or prior job-specific training is required.
Not sure where this is from.
validity of behavioral consistency T&E methods
.45; This method is based on the principle that the best predictor of future performance is past performance. Applicants describe their past achievements and the achievements are rated by subject matter experts.
McDaniel et al 1988
validity of self report T&E methods
.15-.20; few studies available
Not sure where this is from.
validity of years of experience
.18; Years of experience are a very indirect measure of ability.
Not sure where this is from.