Lecture 4: Selection Flashcards
Making Decisions with Multiple Assessments: DECISION MAKING STRATEGIES:
MULTIPLE REGRESSION
In multiple regression, applicants complete all assessments; their scores are weighted and then summed to create an overall score that is used to rank candidates.
This is a compensatory approach because a high score on one assessment can compensate for a low score on another assessment.
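The compensatory idea can be sketched in a few lines. The weights and applicant scores below are invented for illustration, not from the lecture:

```python
# Hypothetical compensatory (multiple regression) strategy: weighted
# assessment scores are summed into one composite, then candidates are ranked.
weights = {"cognitive_test": 0.5, "interview": 0.3, "work_sample": 0.2}

applicants = {
    "A": {"cognitive_test": 90, "interview": 60, "work_sample": 70},
    "B": {"cognitive_test": 70, "interview": 85, "work_sample": 80},
}

def composite(scores):
    # A high score on one assessment can offset a low score on another.
    return sum(weights[k] * scores[k] for k in weights)

ranked = sorted(applicants, key=lambda a: composite(applicants[a]), reverse=True)
```

Note that applicant A's weak interview is offset by the strong cognitive score, which is exactly what "compensatory" means.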
Making Decisions with Multiple Assessments: DECISION MAKING STRATEGIES:
MULTIPLE CUTOFFS:
Applicants complete all assessments and must score above a set cutoff on each one; those who pass every cutoff are then ranked in order of their scores.
This is a non-compensatory approach because a high score on one assessment cannot compensate for a low score on another assessment.
Making Decisions with Multiple Assessments: DECISION MAKING STRATEGIES:
MULTIPLE HURDLE
This is similar to the multiple cutoff strategy, but the assessments are completed sequentially: you must score above a set level on each assessment to continue in the selection process.
This is a non-compensatory approach as well.
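A non-compensatory hurdle sequence can be sketched as follows. The hurdle names, cutoffs, and scores are made up for the example:

```python
# Hypothetical multiple-hurdle strategy: assessments are taken in order,
# and an applicant must clear every cutoff to remain in the process.
hurdles = [("screening", 60), ("cognitive_test", 70), ("interview", 65)]

applicants = {
    "A": {"screening": 80, "cognitive_test": 72, "interview": 90},
    "B": {"screening": 95, "cognitive_test": 65, "interview": 99},  # fails hurdle 2
}

def survives(scores):
    # all() short-circuits, mirroring the sequential nature of hurdles:
    # a high later score cannot compensate for an earlier failure.
    return all(scores[name] >= cutoff for name, cutoff in hurdles)

finalists = [name for name, scores in applicants.items() if survives(scores)]
```

Applicant B's excellent interview never matters because B already failed the cognitive hurdle, which is the non-compensatory property.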
CUTOFFS & HURDLES CAN REDUCE ADVERSE IMPACT & ARE GOOD FOR COST AND EFFICIENCY +
LEGAL ISSUES THAT CAN ARISE THROUGH ASSESSMENTS
WHAT ARE THE 4 MAJOR US EEO LAWS
EEO is equal employment opportunity
- Civil Rights Act (1964) Title VII: states that there can be no discrimination in employment on the basis of race, color, sex, religion, or national origin
- Age Discrimination in Employment Act (ADEA; 1967): covers those over the age of 40
- Americans with Disabilities Act (ADA; 1990): requires reasonable accommodation unless it would impose undue hardship
- Civil Rights Act (1991): allows jury trials, compensatory and punitive damages.
so basically: ADA, ADEA, and two Civil Rights Acts
What are the theories of employment discrimination?
- Adverse/Disparate treatment: a form of overt discrimination where you intentionally treat protected class members differently
Ruby Tuesday Case: Ruby Tuesday wanted to hire summer help, but its ad said "females only" because of housing concerns; as a result, the EEOC brought suit. Explicit overt discrimination like this is rare to see.
- Adverse/Disparate impact: when practices/policies that appear unbiased result in a disproportionate negative impact on a certain group. This is a form of unintentional discrimination.
adverse impact:
How can you provide evidence of adverse impact?
- Stock statistics: compare "utilization rates"; e.g., compare company's % m/f in clerical jobs vs. % m/f in the relevant population
- concentration statistics: compare job category distributions; e.g., compare % of m/f in clerical vs. sales vs. management
- Flow statistics: compare “selection rates” e.g., compare % of m/f applicants hired
4/5ths or the 80% rule
If (selection rate of females) / (selection rate of males) is less than 80%, then adverse impact exists.
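The 4/5ths rule is just a ratio of selection rates (flow statistics). The applicant and hire counts below are invented for illustration:

```python
# Hypothetical 4/5ths (80%) rule check using flow statistics.
hired_f, applied_f = 20, 100   # female selection rate = 20/100 = 0.20
hired_m, applied_m = 40, 100   # male selection rate   = 40/100 = 0.40

rate_f = hired_f / applied_f
rate_m = hired_m / applied_m

impact_ratio = rate_f / rate_m          # 0.20 / 0.40 = 0.50
adverse_impact = impact_ratio < 0.80    # below the 4/5ths threshold
```

Here the ratio is 0.50, well under 0.80, so this hypothetical process shows evidence of adverse impact.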
What are some employer defenses?
- Business necessity/job relatedness: show that the process/practices: (a) are closely related to job requirements, and/or (b) predict job performance
- BFOQ (bona fide occupational qualification): necessary for safe performance or is essential to role; often difficult to prove
say BFOQ full form three times
adverse impact
What are some strategies for reducing adverse impact?
Some recommended approaches are to:
1. hire more qualified minority applicants
2. include multiple assessments that can assess a comprehensive array of skills and qualities related to both technical task performance and contextual job performance
Some things not recommended are:
1. using assessments that have low validity
2. providing orientations or prep programs to candidates (no impact)
3. removing individual test items on which majority and minority candidates differ (also shown to have no impact) **!!! can be asked, review well **
Personnel Selection
The process of determining those applicants who are selected for hire versus those who are rejected.
Multiple cutoff and hurdles
Predictor Cutoff
A score on a test that differentiates those who passed the test from those who failed; often equated with the passing score on a test.
Selection Ratio
number of job openings (n) divided by the number of job applicants (N):
SR = n/N
When the SR is equal to 1.00 (there are as many openings as there are applicants) or greater (there are more openings than applicants), the use of any selection device has little meaning.
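The formula is trivial to compute; the opening and applicant counts below are made up to show the two regimes:

```python
# Selection ratio: SR = n / N (job openings over job applicants).
def selection_ratio(openings, applicants):
    return openings / applicants

sr_tight = selection_ratio(5, 100)   # 0.05: employer can be very selective
sr_loose = selection_ratio(10, 10)   # 1.00: a selection device adds little
```

A low SR means a valid test can meaningfully improve who gets hired; at SR >= 1.00, essentially everyone must be taken anyway.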
Base Rate
The percentage of employees who would be successful if individuals are randomly hired.
If a company has a base rate of 99% (that is, 99 out of every 100 randomly-hired employees would perform their jobs successfully), it is unlikely that any new selection method can improve upon this already near-ideal condition. If a company has a base rate of 100%, obviously no new selection system can improve upon a totally satisfactory workforce. The only "improvement" that might be attained with a new test is one that takes less time to administer or one that costs less (but still achieves the same degree of predictive accuracy). On the other hand, if the base rate were 0, that means that no randomly-hired employee would be able to perform the job satisfactorily (yikes!). In this case, it is unlikely that the issue is one that will be fixed by a new predictor. The job is either impossibly hard, the criterion for what is "successful" is too stringent, or the pool of applicants is inappropriate and underqualified.
Multiple cutoff and hurdles:
Criterion cutoff
A standard that separates successful from unsuccessful job performance.
multiple correlation:
Multiple Correlation
The combined relationship between two or more predictors and the criterion is referred to as a multiple correlation, symbolized as R.
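One way to see what R means: it is the correlation between the criterion and its best linear combination of the predictors. The data below are invented, with the criterion built as an exact linear mix of the two predictors so that R comes out to 1.0:

```python
# Hypothetical multiple correlation R: correlate the criterion with the
# best linear combination of two predictors (via least-squares regression).
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])         # predictor 1
x2 = np.array([2.0, 1.0, 4.0, 3.0, 5.0])         # predictor 2
y = 2 * x1 + 3 * x2                               # criterion (exact mix -> R = 1)

X = np.column_stack([np.ones_like(x1), x1, x2])   # add intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # regression weights
y_hat = X @ beta                                  # predicted criterion

R = np.corrcoef(y, y_hat)[0, 1]                   # multiple correlation
```

With real (noisy) data, R would fall below 1.0, and R squared would give the proportion of criterion variance the predictors explain together.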
Validity Generalization
this refers to the predictor’s validity spreading or generalizing to other jobs or contexts beyond the one in which it was validated. For example, let us say that a test is found to be valid for hiring administrative assistants in a company. If that same test is found useful for hiring administrative assistants in another company, we say its validity has generalized.
The failure to demonstrate validity generalization may not be due to validity truly not generalizing.29 Rather, it appears the average sample size in typical criterion-related validity studies is too small to produce stable, generalizable conclusions, resulting in the (erroneous) conclusion that test validity is situation-specific.30 Indeed, when tests are validated in large samples, the results appear to generalize (not be situation-specific). For example, researchers examined the validities of ten predictors that were used to forecast success in 35 jobs in the army. Using a sample of more than 10,000 individuals, results indicated highly similar validity coefficients across different jobs, meaning that differences among jobs did not change the predictor-criterion relationships.31 Thus, the effects of situational moderators appear to disappear with appropriately large sample sizes.
One psychological construct that is supposedly common for success in all jobs (and which accounts for validity generalizing or spreading) is general mental ability (g) and, in particular, the dimension of g relating to information processing. **The validity of intelligence does indeed generalize across a wide variety of occupations.**