Exam 2 Flashcards
Selection as a Process
- Job Analysis
- Recruit Applicants to the Job
- Assessment
- Make a decision
Selection as a process
the measurement of mental processes
psychometrics
Test Properties
- Error: We always have error in measurement
- Reliability: The consistency of measurement. Can we reliably measure a given predictor or criterion?
- Validity: Are we accurately measuring what we want to measure? How accurate are the inferences that we are making?
Test Properties
the consistency, stability, or equivalence of a measure
reliability
4 major ways that we measure reliability:
Test-Retest Reliability
Equivalent (Parallel)-Forms Reliability
Internal Consistency Reliability
Inter-Rater Reliability
Measuring reliability by giving participants a test, giving them the same test at a later date or time, and correlating the two sets of scores (helpful in establishing personality and intelligence measures)
test-retest reliability
how much variance or error do we see in the measurement of a construct over time
coefficient of stability
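A minimal sketch of the computation, using hypothetical scores: the coefficient of stability is simply the correlation between the two administrations.

```python
# Test-retest reliability: correlate the same people's scores from two
# administrations of the same test. Scores below are hypothetical.
import numpy as np

time1 = [78, 85, 62, 90, 71, 88, 66, 74]  # first administration
time2 = [80, 83, 65, 92, 70, 85, 64, 77]  # same test, weeks later

r = np.corrcoef(time1, time2)[0, 1]
print(f"coefficient of stability: r = {r:.2f}")
```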
a way to test reliability: two tests with the same mean and standard deviation, but with separate items, are used to measure the same construct. The scores of the two tests are then correlated to get a coefficient of equivalence; also called parallel or alternate forms reliability (difficult to construct, so not often used)
equivalent forms reliability
measure of reliability; the homogeneity of the items composing a test; assessed via corrected item-total correlations and split-half reliability
internal consistency reliability
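A minimal split-half sketch, using a hypothetical item-response matrix: correlate odd and even halves, then apply the Spearman-Brown correction to estimate reliability at full test length.

```python
# Internal consistency via split-half reliability.
# Rows = people, columns = items; the data are hypothetical.
import numpy as np

items = np.array([
    [4, 5, 4, 3, 5, 4],
    [2, 3, 2, 3, 2, 2],
    [5, 5, 4, 5, 5, 4],
    [3, 2, 3, 3, 2, 3],
    [4, 4, 5, 4, 4, 5],
])

odd = items[:, 0::2].sum(axis=1)      # sum of odd-numbered items
even = items[:, 1::2].sum(axis=1)     # sum of even-numbered items
r_half = np.corrcoef(odd, even)[0, 1]
r_full = (2 * r_half) / (1 + r_half)  # Spearman-Brown correction
print(f"split-half r = {r_half:.2f}, corrected r = {r_full:.2f}")
```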
the extent to which two raters agree on their assessments of a construct (also called conspect reliability); computed as the correlation between the ratings provided by each rater
Examples would be agreement on job analysis judgements or agreement on performance evaluations
inter-rater reliability
the accuracy and appropriateness of drawing inferences from test scores
validity
validity based on the judgement of Subject Matter Experts (SMEs)
content validity
the degree to which a test forecasts or is statistically related to a criterion; validity coefficients are the correlation between a predictor and a criterion; two major types are concurrent and predictive
criterion-related validity
in the “unitary” view, this is the true form of validity; the degree to which the test is an accurate and faithful measure of the construct it purports to measure (convergent and divergent validity)
construct validity
definition of a good employee
criterion
Steps in conducting a validation study
- Conduct a job analysis
- Specify criteria
- Choose predictors
- Validate the predictors
- Cross-validate
what one knows
knowledge
what one is able to do
skill
predicts whether a person will engage in dishonest behaviors
integrity tests
problems with personality/integrity tests
- faking: intentionally misrepresenting oneself in personality inventories
- job relevance: are these dimensions job relevant?
simulation of actual job tasks; good predictors of future job performance
work samples
simulation of management and other subjective jobs
assessment centers
tasks of assessment centers
- In-basket exercise: come in to work and this is what you find
- Leaderless group exercise: leaders often emerge
- Problem-solving simulation: write up a report for the solution or give a presentation
- Role-play exercise: act out firing me
cognitive ability/general mental ability; the ability to learn and acquire information; indexed by the general factor (g); measures aptitude and achievement
intelligence
_____ ability measures are amongst the highest predictors of performance across a wide variety of jobs
cognitive
Meta-analyses suggest an average coefficient of r = ___ between cognitive ability and performance
.51
the result of using a selection method has a negative effect on a legally protected minority group compared to a majority group
adverse impact
Cognitive ability is a better predictor of performance in Caucasians than in __________ or ____________
Hispanics, African Americans
the trend that there is an increase in mean intelligence scores over time
Flynn Effect
Is adverse impact legal?
Yes, so long as the validation study demonstrates a direct connection to performance on the job
Typical meta-analytic validity of interviews estimates range from r=___ to r=___
.25 to .30
Why don’t interviews work?
- high variability in judgment (disagreement between raters, poor inter-rater reliability)
- lack of established criteria
- poor interviewing skills
- high influence of rater’s personal preference (attractiveness, ethnicity, weight, and like-me bias)
Why do we cling to interviews?
control, we overestimate our own reliability and accuracy of judgment, and we have a self-serving bias in memory
describes a problem to the test taker and requires the test taker to rate various possible solutions in terms of their feasibility or applicability; measures “practical intelligence”
Situational Judgment Tests
projective interpretation of handwriting to determine personality features
graphology
the five “protected” groups
race, sex, religion, color, national origin
What did Griggs v. Duke Power establish?
before we can use our assessments, we need to validate them; we need to statistically demonstrate that our assessment is predictive of performance on the job
current employees take the selection measures, and their scores are correlated with their performance evaluations
concurrent
all applicants are given the selection measure; months later, performance evaluations are given, and scores from the records are then correlated
predictive
validating our findings by collecting two samples: split our sample in two, or run a concurrent study first and then a predictive study later, or use validity generalization (same job, same KSAOs, should have the same relationships between predictors and criteria)
cross validation
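A minimal sketch of the split-sample approach, on simulated data: derive the predictor-criterion correlation in one half of the sample, then check that it holds in the holdout half.

```python
# Cross-validation by splitting the sample in two.
# The data are simulated with a built-in predictor-criterion link.
import numpy as np

rng = np.random.default_rng(1)
n = 400
predictor = rng.standard_normal(n)
criterion = 0.4 * predictor + rng.standard_normal(n)  # true r ≈ .37

half = n // 2
r_derive = np.corrcoef(predictor[:half], criterion[:half])[0, 1]
r_holdout = np.corrcoef(predictor[half:], criterion[half:])[0, 1]
print(f"derivation sample r = {r_derive:.2f}, holdout sample r = {r_holdout:.2f}")
```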
the minimum acceptable performance on the criterion measure
criterion cutoff
the minimum acceptable score on the predictor assessments
predictor cutoff
the number of openings divided by the number of applicants
selection ratio
the percentage of employees who are currently performing at an acceptable level
base rate
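Applying the two definitions above to hypothetical numbers:

```python
# Selection ratio: openings / applicants (hypothetical counts).
openings, applicants = 5, 100
selection_ratio = openings / applicants
print(f"selection ratio = {selection_ratio:.2f}")  # 0.05

# Base rate: proportion of current employees performing acceptably.
acceptable, employees = 60, 100
base_rate = acceptable / employees
print(f"base rate = {base_rate:.2f}")  # 0.60
```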
must pass the cutoff on each predictor to be considered; failure at any stage prevents passage upward, “bottom-up elimination”
multiple hurdles approach
advantages of multiple measures
can save the most expensive or time-consuming measure for only a few applicants
best for jobs that require sufficient levels on multiple measures, where one KSAO cannot compensate for another
disadvantages of multiple measures
range restriction makes validity of each subsequent measure more difficult to determine
uses scores from each predictor in an equation to estimate the criterion; “compensatory approach”: a low score in one area can be outweighed by a high score in another; the highest composite score marks the best applicant (“top-down approach”)
multiple regression approach
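A sketch contrasting the two decision models, with made-up cutoffs and regression weights:

```python
# Multiple hurdles: must clear every predictor cutoff.
# Multiple regression: a weighted composite lets strengths offset weaknesses.
cutoffs = {"cognitive": 50, "integrity": 40, "work_sample": 60}
weights = {"cognitive": 0.5, "integrity": 0.2, "work_sample": 0.3}

applicant = {"cognitive": 85, "integrity": 35, "work_sample": 75}

passes_hurdles = all(applicant[k] >= cut for k, cut in cutoffs.items())
composite = sum(weights[k] * applicant[k] for k in weights)

print(f"multiple hurdles: {'pass' if passes_hurdles else 'fail'}")  # fail: integrity < 40
print(f"regression composite: {composite:.1f}")  # 72.0; high scores compensate
```

The same applicant fails the hurdles (integrity below its cutoff) yet earns a strong compensatory composite, which is exactly the difference between the two approaches.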
in order to account for error, a standard error of measurement is calculated (in effect, a standard deviation of error around an obtained score); this creates a confidence interval based on the amount of error estimated to be included in the measurement
banding
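A minimal sketch of one common formulation (assumed numbers: test SD = 10, reliability = .85): SEM = SD × √(1 − reliability), and any score within the band below the top score is treated as equivalent to it.

```python
# Banding via the standard error of measurement (SEM).
# SD and reliability values here are assumptions for illustration.
import math

sd, reliability = 10.0, 0.85
sem = sd * math.sqrt(1 - reliability)  # ≈ 3.87

top_score = 92
band_low = top_score - 1.96 * sem      # 95% confidence band
print(f"SEM = {sem:.2f}; scores from {band_low:.1f} to {top_score} "
      "fall in the same band")
```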
value of selection system to the organization
utility
utility is maximized by (illustrated in the sketch below):
- Base rate of success: should be around 50%
- Selection ratio (# hired / # applicants): should be low
- Validity of the selection device: should be high
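A rough simulation of these three levers, assuming the meta-analytic validity of r = .51 cited earlier: with a valid predictor and a low selection ratio, the success rate among hires rises well above the base rate.

```python
# Utility illustration: simulate applicants whose criterion scores
# correlate r = .51 with the predictor, then hire the top 10%.
import numpy as np

rng = np.random.default_rng(0)
r, n = 0.51, 100_000
predictor = rng.standard_normal(n)
criterion = r * predictor + np.sqrt(1 - r**2) * rng.standard_normal(n)

base_rate = np.mean(criterion > 0)     # ~.50 perform acceptably
cutoff = np.quantile(predictor, 0.90)  # selection ratio of .10
hired = criterion[predictor >= cutoff]
print(f"base rate = {base_rate:.2f}, success among hires = {np.mean(hired > 0):.2f}")
```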
Valid predictors:
increase true positives and reduce false positives
affirmative action is required of all organizations:
with 50+ employees and government contracts of $50,000+
Gratz v. Bollinger
University of Michigan undergraduate admissions used a points system in which 20 of the 100 points needed for admission were automatically awarded for underrepresented minority status
Ruled against the university: you cannot have different standards for minority groups
Grutter v. Bollinger
University of Michigan Graduate School (law) considered minority status a plus, but no definitive points were added; it was treated as equivalent to other possible positive characteristics
What has changed after the Fisher case?
Colleges are now required to demonstrate that they cannot achieve sufficient diversity without taking race into account
four fifths rule
if the selection ratio for a minority group is less than 80% of what it is for a majority group, adverse impact is present
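Applying the rule to hypothetical applicant counts:

```python
# Four-fifths rule: adverse impact is flagged when the minority
# selection rate is under 80% of the majority selection rate.
minority_hired, minority_applicants = 10, 100
majority_hired, majority_applicants = 30, 150

minority_rate = minority_hired / minority_applicants  # 0.10
majority_rate = majority_hired / majority_applicants  # 0.20

impact_ratio = minority_rate / majority_rate          # 0.50
print(f"impact ratio = {impact_ratio:.2f}; adverse impact: {impact_ratio < 0.8}")
```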
Case where the company required an IQ test that was not a valid predictor of job performance; established that if there is adverse impact, the selection system must be valid and the company must prove it
Griggs v. Duke Power
Case where Black employees were less likely to receive promotion recommendations
Rowe v. General Motors
Case where a test can be used for multiple jobs only if jobs are similar
Albemarle Paper Company v. Moody
Court case where the university established a separate admissions process for Blacks, Chicanos, Asians, and American Indians (ruled reverse discrimination)
Regents of the University of California v. Bakke