Selection Flashcards

1
Q

Newman & Lyon (2009)

A

“• Examined combinations of recruiting parameters and their effects on the adverse impact ratio and job performance
• Found that recruiting focused on cognitive ability and conscientiousness increased the adverse impact ratio (i.e., reduced adverse impact) and improved (did not harm) job performance
o Particularly when recruitment targeted these characteristics within minority groups
• An organizational recruitment strategy that emphasized results-orientation attracted more conscientious applicants
• Conscientious Black applicants were more attracted (higher probability of applying) to innovation-oriented organizations
• Job postings worded in certain ways may help or deter recruitment of particular demographic groups”

2
Q

Finch et al (2009)

A

“• Trade-off problem – mean performance levels are negatively related to workforce diversity
• Monte Carlo study of multistage selection processes and their effects on adverse impact (AI)
• Found 9 predictor combinations that had no AI and met the 4/5ths rule
• All but one included integrity tests; most included biodata and conscientiousness measures – NONE included cognitive ability tests as predictors
• Multistage selection strategies outperformed single-stage strategies, with lower AI and nearly equivalent mean performance
• The common practice of including cognitive ability tests in the first stage resulted in higher AI and lower mean performance”
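A minimal sketch of the 4/5ths (80%) rule referenced above, with hypothetical selection counts (not data from Finch et al.):

```python
# 4/5ths rule: flag adverse impact when the minority selection rate falls
# below 80% of the majority selection rate. Counts below are made up.
def adverse_impact_ratio(minority_hired, minority_applicants,
                         majority_hired, majority_applicants):
    minority_rate = minority_hired / minority_applicants
    majority_rate = majority_hired / majority_applicants
    return minority_rate / majority_rate

ratio = adverse_impact_ratio(12, 50, 40, 100)  # 0.24 / 0.40 = 0.60
print(ratio, "violates 4/5ths rule" if ratio < 0.80 else "meets 4/5ths rule")
```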

3
Q

Highhouse (2008)

A

“• Reviewed why intuition in selection decisions is so popular
• Driven by the widespread beliefs that 1) it is possible to obtain near-perfect precision in predicting human performance and 2) experience/expertise increases the ability to predict human performance
• In fact, near-perfect prediction of human performance is impossible, and research does not support the claim that expertise improves skill at predicting human performance in unstructured interviews
• Common responses to the limitations of intuition-based selection:
o Analytical selection decision aids do not take into account “broken-leg” incidents/rare events
o Need to evaluate a candidate based on a configuration of traits, not just 1 trait”

4
Q

Kuncel (2008)

A

“• Ways to overcome overreliance on intuition in personnel decision-making: increase the appeal of effective predictors, incorporate human judgment into mechanical systems, and improve acceptance of more effective data-combination methods
• Present prescreened applicants to experts and let them make decisions
• Emphasize select-out or select-in methods
• Tell more narrative stories based on data
• Train raters to avoid irrelevant information
• Use data-combination methods that allow human input
• Present research findings in metrics other than correlations
• Use more high-fidelity decision aids (e.g., cognitive assessments)
• Pit experts against each other and provide hard-to-avoid feedback, making experts aware of their shortfalls”

5
Q

Colarelli & Thompson (2008)

A

“• Evolutionary view of why humans prefer intuitive decision-making over rational decision aids
• Humans rely heavily on face-to-face interaction because of our evolutionary past
• The situation isn’t as bad as Highhouse (2008) described, because people do use decision aids when making decisions about people they don’t have to interact with, or for personal advantage
• Given our understanding of human evolutionary tendencies, we need to adapt our selection decision aids accordingly”

6
Q

Klimoski & Jones (2008)

A

“• Accused Highhouse of neglecting the organizational/work context surrounding personnel decisions – which are usually made in a group setting
• Shift the problem from the individual to the systems level to address the need to further understand organizational contextual factors that might influence personnel decisions”

7
Q
A

“• Most businesses in Europe are small and medium-sized enterprises – fewer than 10 employees each
• HR functions and tools proposed by I/O psychologists aren’t relevant or applicable to these companies
o Might explain the lower use of selection decision aids
• To entice these companies into using more objective tools, use the language of money – conduct utility analysis”

8
Q

Martin (2008)

A

• Managers might also over-rely on objective tests when faced with the threat of hiring a potentially unqualified candidate, when hiring for higher-level positions, and when the interviewer/decision maker is at a senior level in the organization

9
Q

Guion (1998): ch. 1

A

“Personality: a mixture of values, temperament, coping strategies, character, and motivation
o State = temporary condition or mood vs. Trait = habitual way of thinking or acting across a variety of situations
Five Factor Model = Extraversion, Agreeableness, Conscientiousness, Emotional Stability, Openness to Experience
o Meta-analyses found positive criterion-related validities for the five dimensions (Barrick & Mount, 1991; Tett et al., 1991; Ones, Schmidt, & Viswesvaran, 1994)
o Criticisms of the Big Five exist
Non-cognitive predictors: physical characteristics, physical abilities, psychomotor abilities, experience, education and training, person-situation interaction, predictors for team selection
Developing predictive hypotheses: linking predictors to criteria is a local hypothesis based on local job and need analysis; criterion constructs are dictated by organizational needs”

10
Q

Guion (1998) Ch 4

A

“Title VII: employers may not fail or refuse to hire based on race, color, religion, sex, or national origin
Office of Federal Contract Compliance Testing Order (1968): contractors should validate tests used in selection decisions if they have an adverse impact on a protected group
• Adverse impact = discrimination that affects different groups differently; evidence that a group as a whole is less likely to be hired – requires justification of business necessity
• Disparate treatment = evidence that a candidate from a protected group is treated differently from other candidates in the employment process
Validation requirements according to the Guidelines: criterion-related, content, and construct validation
1991 Amendment to the Civil Rights Act: prohibited race norming and quotas; defined business necessity and job relatedness
Affirmative action: intended to reduce the effects of prior discrimination; can be voluntary or required
Age Discrimination in Employment Act of 1967: prohibits discrimination against anyone 40 years or older – decisions should be based on ability, NOT age
Americans with Disabilities Act of 1990: prohibits discrimination against qualified people who have disabilities”

11
Q

Guion (1998) Ch 5

A

“Classical test theory = any measure is the sum of a true score plus an error score
Reliability is the extent to which a set of measurements is free from random error variance
Reliability coefficient – correlate 2 sets of measures of the same thing from the same people
Validity = the extent to which systematic sources of variance are relevant to the purpose of measurement
Accuracy = the measure must be highly correlated with a standard measure of the same thing, and the relationship must be linear (x = y)
Sources of unreliability in measures – Thorndike (1949) pointed out that reliability depends on the reasons for individual differences in test performance
Reliability is computed by correlating 2 sets of measures presumably measuring the same thing in the same people in the same way
• 2 systematic scores are expected to be the same, whereas random error lowers the correlation
• If a source of variance is consistent across the 2 measures, it is treated as systematic
Different methods of operationalizing reliability make different procedural and mathematical assumptions and define error differently – estimates should make sense for the circumstance (coefficients of stability, equivalence, internal consistency, interrater agreement)
Standard error of measurement – the reliability of an individual score may be important in selection
• Use it to determine: whether 2 people’s scores differ significantly, whether a person’s score differs significantly from a hypothetical true score, whether scores discriminate differently in different groups of people
Psychometric validity evidence: evidence based on test development, reliability, patterns of correlates, and outcomes”
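A minimal sketch of the standard-error-of-measurement uses listed above (e.g., whether 2 people's scores differ significantly), with hypothetical SD and reliability values:

```python
import math

# SEM = SD * sqrt(1 - reliability); the SE of a difference between two
# independent scores is SEM * sqrt(2). Values are illustrative.
def sem(sd, reliability):
    return sd * math.sqrt(1.0 - reliability)

se_meas = sem(10.0, 0.85)            # ~3.87
se_diff = se_meas * math.sqrt(2)     # ~5.48
# Applicants scoring 104 vs. 96: |8| < 1.96 * 5.48, so not a significant gap
print(abs(104 - 96) > 1.96 * se_diff)  # False
```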

12
Q

Guion (1998) Ch 7

A

“Criterion related validity: relationship between predictor scores and the criterion used in the analysis
Validation as hypothesis testing - Criterion related validation tests the hypothesis that Y is a function of X
Regression: permits prediction (not always linear)
Contingency Tables: often gross categorical predictions can be sufficient for some organizational purposes
• Practical compromise between dichotomous, pass-fail use of predictors and statistical regression
Correlation: describes how closely 2 variables are related; correlations permit inferences about the degree of prediction error based on the regression function
• Residual = difference between the observed Y and the Y predicted by the regression
• Pearson/product-moment coefficient: influenced by nonlinearity, homoscedasticity, correlated error, unreliability, reduced variance, extreme skewness, group heterogeneity, outliers, and unknown error
Statistical significance: Probability that r differs from 0 by chance
• Null hypothesis is ALWAYS false in real world
• Type I Error = null is actually true but it is rejected
• Type II Error = null is false but you fail to reject it
• Statistical power is the probability that a statistical test will lead to the rejection of the null and is a function of N, effect size in the population, alpha level chosen”
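Since the card states power is a function of N, effect size, and alpha, here is a minimal sketch (Fisher-z normal approximation, illustrative values) of how power to detect a nonzero r grows with sample size:

```python
import math
from scipy.stats import norm

# Approximate power of the test that a population correlation differs from 0.
def power_r(r, n, alpha=0.05):
    z_r = 0.5 * math.log((1 + r) / (1 - r))   # Fisher z of population r
    z_crit = norm.ppf(1 - alpha / 2)
    return 1 - norm.cdf(z_crit - z_r * math.sqrt(n - 3))

print(round(power_r(0.30, 50), 2))   # ~0.56: small sample is underpowered
print(round(power_r(0.30, 150), 2))  # ~0.96: larger N, same effect and alpha
```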

13
Q

Guion 1998 C 8

A

“Linear additive model – scores are summed to form a composite, often with different weights for different variables
• Summing scores is compensatory because a person’s strength in one trait may compensate for weakness in another
• Multiple regression finds optimal weights for the several predictors to get the best correlation with the criterion
• Suppressor: a valid predictor may contain an invalid, contaminating variance component; a variable that does not predict the criterion but is correlated with the contamination may improve prediction
Noncompensatory prediction models
• Multiple cut scores/multiple hurdles – uses a cut score for each of 2 or more tests; used when each trait is so vital to performance that other strengths cannot compensate, or when its variance is too low to yield a correlation
• Conjunctive model – bases the decision on the predictor that minimizes the estimated criterion value; no other score in the set, no matter how high, can compensate for an unsatisfactory prediction based on the minimum score
• Disjunctive model – bases the decision on the predictor with the highest estimated criterion value; no matter how poorly one performs on some other variable, the decision is based on the candidate’s strength
• Multiple regression requires cross-validation (weights must hold up in an independent sample, where systematic error differs)
Synthetic validity – inferring the validity of a test in a specific situation from a logical analysis of jobs into their elements, a determination of test validity for these elements, and a combination of elemental validities into a whole
“Causal” research: quasi-experimental research – causal inference is desired but not assured because there is no random assignment”
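A minimal sketch contrasting the compensatory (linear additive) rule with a noncompensatory multiple-hurdle rule; the weights and cut scores are hypothetical:

```python
# Compensatory: strengths offset weaknesses via a weighted composite.
def compensatory(scores, weights):
    return sum(w * s for w, s in zip(weights, scores))

# Multiple hurdles: failing any single cut score cannot be compensated.
def multiple_hurdles(scores, cuts):
    return all(s >= c for s, c in zip(scores, cuts))

applicant = [85, 40, 90]                          # three predictor scores
print(compensatory(applicant, [0.5, 0.3, 0.2]))   # 72.5 composite: may pass
print(multiple_hurdles(applicant, [60, 60, 60]))  # False: fails hurdle 2
```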

14
Q

Guion (1998) Ch 11

A

“• Performance tests – used to assess proficiency, skill, or knowledge at the time of testing; they are tests of maximal performance, so it may be inappropriate to infer typical performance from them
• Work samples and simulations: high criterion-related validity; a sample of a job content domain taken under standard conditions; simulations imitate actual work but omit its trivial, time-consuming, dangerous, or expensive aspects – both high- and low-fidelity simulations are used
• Developing work samples begins with job analysis and uses only frequent tasks
• Non-cognitive performance: physical abilities, fitness testing, sensory and psychomotor proficiencies
• Assessment of basic competencies
o Competency = here-and-now performance
o Ability = aptitude for future performance
o Basic competency = acceptable performance of simple things a person must do on a job”

15
Q

Gutman (2003)

A

“Civil Rights Act of 1991 – adverse impact is unlawful if the defendant “fails to demonstrate that the challenged practice is job related and consistent with business necessity”
• Adverse impact light (Beazer): when the job clearly requires a high degree of skill and the economic and human risks are great, the employer bears a lighter burden to show that employment criteria are job related.
• Adverse impact moderate (Griggs/Albemarle):
o Duke Power relied on Title VII language making it legal to use professionally developed ability tests.
o EEOC guidelines: a test must fairly measure the knowledge or skills required by the particular job which the applicant seeks, or fairly afford the employer a chance to measure the applicant’s ability to perform a particular job or class of jobs.
• Adverse impact heavy (Dothard):
o Bona fide occupational qualification: 1) the qualification is reasonably necessary, and either 2) all individuals excluded are in fact disqualified or 3) some of the individuals so excluded possess a disqualifying trait that cannot be ascertained except by reference to the BFOQ”

16
Q
A

“• Meta-analysis: applicants who hold positive perceptions about selection are more likely to view the organization positively, report stronger intentions to accept job offers, and recommend the employer to others
• Organizational justice theory – applicants view selection procedures in terms of the 4 facets of organizational justice, and these perceptions influence reactions
• Applicant perceptions include: views about dimensions of organizational justice, thoughts and feelings about testing, test anxiety, and broader attitudes about tests and selection in general
• Applicant reactions are related to a number of organizational outcomes; companies that promote fairness and use job-related selection tools may be less likely to become targets of employment discrimination lawsuits”

17
Q

Hogan & Holland (2003)

A
18
Q

Hulin (2002)

A

“• Work provides a source of: identity, relationships outside the family, obligatory activity, autonomy, opportunities to develop skills and creativity, purpose in life, feelings of self-worth and self-esteem, income and security – and it gives other activities, such as leisure, meaning
• The most frequent reason given for delaying marriage or starting a family is work
• We spend more time engaged in work activities than in any other single activity
• There is evidence that events at work create emotions and emotional reactions that spill over into nonwork behaviors and health; there is little evidence that the spillover goes in the opposite direction
Contributions of I/O psychology to the lives of employees:
o If appropriately done, selection and training provide better person-job fit than a random selection system and unstructured on-the-job training
• Theories of work behavior should be a foundation for general theories of behavior
• Theories of behavior have emerged from I/O research (goal-setting theory, theories of affect and evaluations)”

19
Q

Lievens et al. (2005)

A
20
Q

Mael et al. (1996)

A

“Biodata - good validity, but evidence that perceived invasiveness of selection measures leads to a more global negative perception of the organization
Attributes of invasive items: Verifiable, Controllable, Negative, Transparent, Personal
Individual Difference Characteristics examined: Introversion, Self disclosure, Need for privacy, Gender, Age, Education level, Attitudes/experience
• Two studies: 1) social scientists and undergrads asked to rate 24 items 2) Army officers, students and SMEs rated 60 items (including original 24)
• Verifiable and transparent items seen as less invasive and personal items seen as more invasive; negative items seen as no more invasive than positive items
• Relatively weak contribution of individual difference variables is encouraging
• 4 categories of objections to invasive items emerged:
o Trauma – caused persons to reflect on and re-experience traumatic events
o Stigma – self incriminating items
o Religion and politics – included claims that items were biased for or against religious persons
o Intimacy – reawaken traumatic memories or could open the respondent to unwanted attention”

21
Q

Motowidlo (2003)

A

“• Job performance: the total expected value to the organization of the discrete behavioral episodes that an individual carries out over a standard period of time (refers to behavior that can impact organizational goal accomplishment)
• Campbell’s multifactor model defines 8 behavioral dimensions of performance:
• Job-specific task proficiency, non-job-specific task proficiency, written and oral communication, demonstrating effort, maintaining personal discipline, facilitating team and peer performance, supervision, and management/administration
• Task performance (activities that usually appear on formal job descriptions) vs. contextual performance (behavior that contributes to organizational effectiveness through its effects on the psychological, social, and organizational context of work)
• Organizational citizenship behavior: altruism, conscientiousness, sportsmanship, courtesy, civic virtue
• Counterproductive behavior: any intentional behavior by an organization member viewed by the organization as contrary to its legitimate interests; must be both intentional AND contrary to organizational interests
• The general performance factor: explains 49% of the total variance in supervisory ratings of performance (Viswesvaran, Schmidt, & Ones, 1996). Probably determined by general mental ability and conscientiousness
• Antecedents of job performance: personality, ability, experience, knowledge, and skill
Significant paths: 1) extraversion, ability, and experience to knowledge; 2) ability, experience, neuroticism, and knowledge to skill; and 3) skill to performance”

22
Q

Murphy & DeShon (2000)

A

“• Parallel test model is not useful for evaluating the psychometric characteristics of ratings; Generalizability theory should be used instead.
• Use of a parallel test model to estimate the reliability of ratings requires two assumptions:
a) Variability in ratings can be broken down into that due to true scores and that due to error (classical test theory isn’t wrong, but it applies to few situations)
b) Raters represent parallel measures (raters can rarely be thought of as parallel tests)
Reliability and validity are closely related concepts
• There are many systematic variables that lead raters to disagree when providing ratings of the same target
• If the errors are due in part to the systematic effects of variables other than the target construct then part of the error variance is due to sources of invalidity
• Error variance - indexed by the reliability coefficient - is in part an assessment of the validity (or invalidity) of the measure
The meaning of True Scores
• “True scores” have little to do with “true performance”
• One of the many purposes of generalizability theory was to try and disentangle the many sources of variance that are lumped together under the heading of true score in the classic theory”
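A minimal sketch of the generalizability logic: a one-facet targets x raters G study that splits rating variance into target, rater, and residual components (ratings below are made up):

```python
import numpy as np

ratings = np.array([[4, 5, 4],
                    [2, 3, 2],
                    [5, 5, 4],
                    [3, 2, 2]], dtype=float)  # 4 targets x 3 raters

n_t, n_r = ratings.shape
grand = ratings.mean()
ms_t = n_r * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n_t - 1)
ms_r = n_t * ((ratings.mean(axis=0) - grand) ** 2).sum() / (n_r - 1)
resid = ratings - ratings.mean(1, keepdims=True) - ratings.mean(0) + grand
ms_e = (resid ** 2).sum() / ((n_t - 1) * (n_r - 1))

var_target = (ms_t - ms_e) / n_r  # systematic target ("true-score-like") variance
var_rater = (ms_r - ms_e) / n_t   # systematic rater (e.g., leniency) variance
print(var_target, var_rater, ms_e)
```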

23
Q

Sacco et al. (2003)

A
24
Q

Schleicher (2002)

A

“• Examined different types of evidence to demonstrate the effectiveness of frame of reference training in improving the construct and criterion-related validity of ACs
• Frame-of-reference (FOR) training is an intervention designed to improve the validity of trait judgments in the performance appraisal context by eliminating idiosyncratic standards held by raters and replacing them with a common frame of reference for rating (Bernardin & Buckley, 1981)
• Results showed that FOR training improved the reliability and the accuracy of AC ratings
• There was also improved discriminant validity associated with the FOR assessment ratings in the form of smaller heterotrait-monomethod and heterotrait-heteromethod correlations and somewhat improved convergent validity in the form of larger correlations with external measures of the same and similar constructs
• FOR training significantly improved the criterion-related validity of the current AC for predicting supervisors’ ratings of job performance.
• The results suggest that FOR training should be useful in AC practice given that it is no more lengthy or expensive than control training, enhances the legal defensibility of the AC, and appears to enhance AC validity”

25
Q

Schippmann et al. (2000)

A

“• Growing popularity of competency modeling, due to the restructuring of work and concerns that traditional job analysis is unable to play a central role in modern human resource management
• Core competency – KSAOs that are associated with high performance
• Per SME ratings, job analysis is more work- and task-focused (“what” is accomplished) and competency modeling is more worker-focused (“how” work is accomplished)
• Competency – successful performance of a certain task or activity, or adequate knowledge of a certain domain of knowledge or skill
• Competency modeling is used more for training and development, vs. job analysis more for selection, performance appraisal, and other human resource decision-making applications
• No single type of descriptor content is best suited for all purposes; the most useful model has the right level of detail for the required use”

26
Q

Werner & Bolino (1997)

A
27
Q

Borman et al. (2002)

A

“• Overview of the steps taken in the Project A job analysis; job analysis became an established part of personnel management procedures after World War II
• Measurement issues: 1) context-specific (job-specific) versus general description, 2) unit of description: job-oriented (tasks) vs. person-oriented (KSAs), 3) level of specificity/generality at which the content of work is described
Project A task analysis for entry-level positions – purpose: to support job performance criterion development efforts (assumption that job performance is multidimensional)
• Steps in the job analysis procedure used in Project A: 1) specify the task domain: synthesized existing job descriptions provided by Soldier’s Manuals and the Army Occupational Survey Program (AOSP), 2) subject matter expert judgments, 3) selecting tasks for measurement
Critical incidents analysis – alternative method to identify critical dimensions of performance for the jobs
• Identify job-specific performance dimensions; identify Army-wide performance dimensions
Job analyses for second-tour (upper-level) positions
• Used procedures similar to those used for entry-level positions, but focused on the supervisory/leadership components of these higher-level jobs (used the Supervisory Responsibility Questionnaire and Leader Requirements Survey to understand what types of leadership tasks these positions require)
• Resulted in 9 supervisory dimensions”

28
Q

Sackett et al. (2003)

A
29
Q

Schmidt & Oswald (2006)

A
30
Q

Clevenger et al (2001)

A

“• Advantages of SJIs (situational judgment inventories):
o Relatively easy and inexpensive to develop, and display at least moderate validity
o Differences in mean scores between racial subgroups are typically smaller than those for ability tests
o Perceived to be face-valid and to have job relevance
• Measured cognitive ability, conscientiousness, job experience, situational judgment, job knowledge, and job performance in 3 samples
• SJI appears to be a valuable alternative to other measures, and also an important source of incremental validity in a test battery (SJIs were not very highly correlated with the other predictors)
• SJI displayed less adverse impact than all measures except conscientiousness”

31
Q

Gottfredson (1994)

A
32
Q

Guion (1998) 1

A

“• Cumulative effects of hiring decisions can result in substantial increases in mean performance levels and productivity
• Successive hurdles: partial assessment of all candidates, those who pass first cut are assessed further
• Simultaneous assessment: all assessment completed at roughly same time
• Five assumptions in personnel decisions: 1) people have abilities, 2) people differ in any given ability, 3) relative differences in ability remain about the same even after training, 4) different jobs require different abilities, and 5) required abilities can be measured
• Many orgs make initial selection decision based on potential for growth”

33
Q

Guion (1998) 2

A

“Factor analysis:
• Varimax rotation: place axes where the resulting loadings will be as close as possible to 0 or 1 (maximizes variance of loadings on a factor)
• Orthogonal rotation: keeps factors uncorrelated (typically preferred)
• Oblique rotation: permits interpretation of factors as correlated
• Generalizability theory ascertains how much of total variance is attributable to sources of variance including: 1) individual differences, 2) checklist items, 3) time of day, and 4) the observer (if attribute well measured most variance should be due to individual differences)
• IRT – based on the idea that people with a lot of a certain ability are more likely to give the right answer to an item requiring that ability than people with less of it”
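A minimal sketch of the IRT idea in the last bullet, using a two-parameter logistic item response function with illustrative discrimination (a) and difficulty (b) values:

```python
import math

# P(correct) rises with ability (theta) relative to item difficulty.
def p_correct(theta, a=1.2, b=0.0):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

for theta in (-2, 0, 2):
    print(theta, round(p_correct(theta), 2))  # ~0.08, 0.50, 0.92
```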

34
Q

Guion (1998) 3

A
35
Q

Guion (1998) 4

A

“• Bias: systematic group differences in item responses or test scores for reasons unrelated to the trait being assessed
• Group mean differences: not by itself evidence of bias! (e.g., groups may actually differ on the trait being measured)
Fairness models:
• The Regression Model: biased if criterion score predicted from common regression line is consistently too high/low within subgroups (Cleary Model)
• Group parity model: proportion in each group that would have been hired based on criterion should be matched by proportion actually hired based on test
• Differential Item Functioning (DIF) detection: Transformed difficulty statistic, contingency methods, IRT methods”
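A minimal sketch of the regression (Cleary) model's logic: fit one common regression line, then check whether it systematically over- or under-predicts either subgroup's criterion scores (simulated, unbiased data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(50, 10, 200)              # test scores, both groups pooled
y = 0.6 * x + rng.normal(0, 5, 200)      # criterion scores
group = np.repeat([0, 1], 100)

slope, intercept = np.polyfit(x, y, 1)   # common regression line
residuals = y - (slope * x + intercept)

# Mean residual per group near zero -> no predictive bias under the Cleary model
print(residuals[group == 0].mean(), residuals[group == 1].mean())
```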

36
Q

Guion (1998) 5

A
37
Q

Lumsden (1976)

A

“• The perfectly reliable test DOES NOT perfectly reflect the attribute
• When true score is defined as an ideal (platonic) score stripped of error, the result is a contradiction. When true score is defined in other ways so as to avoid the contradiction, the resultant statistics have no useful application:
a. the error score is not completely independent of the true score
b. the regression of true scores on obtained (observed) scores is not linear
• Reliability should be conceived as part of a wider question: how well does the test represent the attribute? True score and the reliability coefficient can play no part in providing an answer.
• A unidimensional test has a single attribute, but the attribute is complex”

38
Q

Murphy (1996)

A

“• 1965 – 1985 research concentrated mainly on relationship between cog ability and job perf
• Drawbacks of past emphasis on cognitive ability- oversimplification! – (Performance is multifaceted)
• Little understanding of the specific abilities that predict performance (the “g-ocentric” view) or of the role of cognitive ability in understanding outcomes other than job performance
• Current individual difference domains: cognitive ability, personality, affect, orientation (values/interests)
• Criteria: Task and nontask performance, individual’s experience of life in the organization, orientation toward and identity with the organization
• Must merge ‘I’ and ‘O’ when studying variables with regard to individual differences
• The goals of parsimony and efficiency may not be appropriate as range of IDs studied expands”

39
Q

Rotundo & Sackett (2002)

A
40
Q

Ryan & Ployhart (2000)

A

“• Perceptions affect how the applicant views the org, and his/her decision to join the org
• Perceptions of procedures appear to be influenced by type of procedure, the method of assessment, self-assessed performance, type of job, information provided about the procedure
• Heuristic model that categorizes types of applicant perceptions:
o perceptions of the procedure/process
o one’s affective and cognitive states during the procedures
o the procedure’s outcome
o of selection processes and procedures in general
• At this point there is not enough evidence that negative applicant perceptions actually have negative effects (i.e., relate to turning down jobs, badmouthing the organization, etc.)
• Deliver info in an interpersonally sensitive manner; regularly monitor applicant perceptions; assess perception-behavior links”

41
Q

Sackett et al (2001)

A
42
Q

Sackett & Wilk (1994)

A

“• Civil Rights Act 1991 - Individuals responding in the same way on a test should receive the same test scores and have scores interpreted similarly regardless of group membership
• Uniform Guidelines: AI present if the selection ratio for the minority group is less than four fifths (80%) of the selection ratio for the majority group
o Company must then prove job relatedness through criterion, content, construct validity
• Scores are adjusted to attain business goals (diversity) or societal goals (equal access to jobs), to alleviate test bias, or when justified by some perspective on fairness
• Substantial group differences are found on a variety of measures (cognitive, personality, and physical ability) commonly used for selection decisions → putting all candidates through all stages of a selection procedure reduces AI
• Test uses that consider group membership: bonus points and separate cutoffs for certain groups, banding, within-group norming”
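A minimal sketch of one such test use, banding: scores within a band defined by the standard error of the difference are treated as statistically indistinguishable (SD and reliability are hypothetical):

```python
import math

sd, rxx, top_score = 10.0, 0.85, 98.0
sed = sd * math.sqrt(2 * (1 - rxx))   # SE of the difference, ~5.48
band_width = 1.96 * sed               # ~10.73 points
band_floor = top_score - band_width
print(f"scores from {band_floor:.1f} to {top_score} fall within one band")
```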

43
Q

Schmidt & Hunter (1998)

A
44
Q

Schmidt & Zimmerman (2004)

A

“• It is well established that structured employment interviews have higher criterion-related validity for predicting job performance than unstructured interviews
• Structure leads to the identification and specification of more valid constructs, and to more valid and reliable measurement of the specified constructs
• Results of the study are mixed – it cannot definitively answer whether reliability differences account for the validity differences between interview types
• Validity increased with the number of interviews for both structured and unstructured interviews
• Even with 8 interviews, the validity of the unstructured interview was still not equal to that of the structured interview (.57 vs. .61)
• By increasing the number of interviews, one can raise the validity of the unstructured interview to near that of the structured interview”

45
Q

Schmidt et al (2003)

A

“• Two major individual difference determinants of performance:
1. can-do (maximal) factors (e.g.: g, physical abilities, experience)
2. will-do (typical) factors (e.g.: personality, integrity)
• Job sample tests more indicative of maximal performance (can-do)
• Interviews reflect both can-do and will-do determinants of performance
• 3 skills important to expatriate selection: Self-skills (maintain mental health), Relationship skills, and Perception skills (perceive behavior of those in host country)
• Biodata measures have incremental validity over the Big Five
• More use of computer adaptive tests – using item response theory
o May impact subgroups that do not have the same experience with technology”

46
Q

Stevens (1998)

A

“• The 4 major scenes of interviews—Greetings and establishing rapport, interviewer questions, applicant questions, & disengagement—rarely occurred in any other order
• Trained interviewers asked more screening-oriented questions than untrained interviewers
• Untrained interviewers gave more stringent ratings as their orientation shifted from recruitment to screening; trained interviewers did not differ
• Interviewers’ orientation/training influences interview process & applicant ratings
• Training led interviewers to be better at soliciting differentiating information about applicants
• Training and interviewer orientation (recruitment or screening) interacted to affect applicant ratings; untrained interviewers became harsher as orientation changed from recruiting to screening”

47
Q

Allen & Yen (1979)

A

“• Test theory / test model: a symbolic representation of the factors influencing observed test scores, described by its assumptions
• Classical true-score theory: describes how errors of measurement can influence observed scores; error in this theory is random
• Parallel tests will yield equal observed-score means, variances, and correlations with other observed test scores. However, scores on 2 parallel tests need not correlate perfectly (because errors are random, not predictable)
• Tau equivalence: tau-equivalent tests have true scores that differ by an additive constant – for every population of examinees, true scores are the same except for that constant; they can have unequal error variances; parallel tests are tau-equivalent, but tau-equivalent tests are not necessarily parallel”
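A minimal simulation sketch of the parallel-tests idea: the correlation between two parallel forms (same true scores, equal error variance) estimates the reliability var(T)/var(X). Parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
true = rng.normal(100, 8, 5000)      # true scores T
x1 = true + rng.normal(0, 4, 5000)   # parallel form 1: X = T + E
x2 = true + rng.normal(0, 4, 5000)   # parallel form 2, independent error

reliability = true.var() / x1.var()  # ~ 64 / 80 = 0.80
print(round(np.corrcoef(x1, x2)[0, 1], 2), round(reliability, 2))  # both ~0.80
```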

48
Q

Schmidt & Hunter (1981)

A
49
Q

Koenig et al. 2007

A

“• Ability to identify criteria (ATIC – identifying the criteria that are transparent in an assessment center) was related to cross-situational consistency and performance
• ATIC scores in one selection procedure predicted performance in other procedures above and beyond cognitive ability.”

50
Q

Rolland & Steiner 2007

A
51
Q

De Meijer et al. 2007

A

“• In the selection of minorities, raters required as much or more information to judge minority candidates as to judge majority candidates.
• A larger number of irrelevant cues were also used in judging minority candidates.
• Raters relied more on the ratings of others than on their own ratings when making decisions regarding minority candidates.”

52
Q

King et al. 2006

A
53
Q

Austin & Villanova 1992

A

“• Criterion problem: difficulties in the process of conceptualizing and measuring performance constructs that are multidimensional
• Criterion is now defined as a sample of performance, measured directly or indirectly, perceived to be of value to organizational constituencies for facilitating decisions about predictors or programs
• Order of appearance (first to last):
o Supervisor ratings, hard and soft criteria
o Ultimate criterion, critical incident and forced choice
o Self & peer ratings, BARS, Title VII, hard criterion
o Criterion validity, static and dynamic criterion, rater training”

54
Q

Binning & Barrett 1989

A
55
Q

Chapman & Zweig 2005

A
56
Q

Cortina 1993

A
57
Q

Dayan et al. 2002

A

“• Examined the predictive validity of the AC for entry-level police candidates
• 712 entry-level candidates to the Israel Police force
• AC performance showed a unique and substantial contribution to the prediction of future police work success.
• Peer evaluations, based on observations of candidates during the AC process, were shown to contribute uniquely beyond the contributions of professional AC ratings or written tests of cognitive ability.”

58
Q

Dierdorff & Wilson 2003

A

“• Purpose: to provide insight into the average levels of reliability that one can expect of job analysis (JA) data.
• The most common forms of reliability estimates are:
• Interrater reliability: degree to which different raters agree
• Intrarater reliability: indices of rater covariation
• Interrater sample-size-weighted mean reliability:
• for task-level JA: .77
• for General Work Activity JA: .61
• Intrarater sample-size-weighted mean reliability:
• for task-level JA: .68
• for General Work Activity JA: .73”

59
Q

Flynn 1999

A
60
Q

Gutman 2004

A

“• Nine ground rules for adverse impact, Civil Rights Act of 1991 (CRA-91)

  1. Actual or implied selection rates are the only viable sources of AI
  2. Title VII is the only viable law to sue under for AI violations
  3. Recruitment is not a viable source of AI claims
  4. Cross-job and composition disparities are disparate treatment, not AI, scenarios
  5. Subjective and discretionary decisions are not necessarily the same
  6. AI rules apply only to subjective, not discretionary, decisions
  7. The Guidelines are not equally applicable to all causes of AI
  8. Components of the selection system should be identified and defended by the employer, even if there is no “bottom line” AI
  9. Job relatedness is not the same as business necessity”
61
Q

Huffcutt et al. 2001

A
62
Q

Jawahar & Mattsson 2005

A

“• Men and women are more likely to be hired for sex-typed jobs; rater self-monitoring amplifies this effect.
• Attractive candidates were more likely to be hired, but attractiveness only mattered for sex-typed jobs.
• Applicants whose sex is incongruent with the sex-type of the job should provide as much individuating information as possible to minimize gender-stereotyped inferences about their attributes and trigger more controlled information processing.
• Organizations should educate decision makers that stereotypes can lead to poor decisions.”

63
Q

Jensen 1984

A
"• General Intelligence (g) discovered by Spearman (1904)
 • Overview of Test Bias: Most current standardized tests of mental ability yield unbiased measures for all native-born English-speaking segments of the American society today, regardless of their sex or their racial and social class background. The observed difference is generally caused by factors independent of the tests.
 • Bias means systematic errors of measurement: an obtained measurement (test score) consistently over- or underestimates the true (error-free) value of the measurement for members of one group as compared with members of another group.
 • A biased test is one that yields scores that have a different meaning for members of one group from their meaning for members of another"
64
Q

Roth et al. 2005

A

“• Work sample tests are thought to be among the most valid predictors of job performance, and are viewed positively
• Correlation with job performance was .26 uncorrected and .33 with the corrected criterion
• Work sample and general cognitive ability: r = .40 (corrected)
• Possible moderators: applicant/incumbent samples, objective/subjective criteria, criterion/predictor use, military/non-military, job complexity, publication date”

65
Q

Sackett & Wanek 1996

A
66
Q

Schmitt et al. 1997

A

“• Examined the effects on AI when multiple alternative predictors with low AI (varying the number of predictors, predictor intercorrelations, validity, and the level of subgroup differences) are used in conjunction with cognitive ability tests.
• AI ratios are a function of the effect size and the selection ratio; even when using alternative predictors, the AI ratio was sometimes above .80 only with high selection ratios”
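A minimal sketch of why AI ratios depend on effect size and selection ratio, assuming normally distributed composite scores and top-down selection (illustrative values, not the study's simulation):

```python
from scipy.stats import norm

def ai_ratio(majority_selection_ratio, d):
    cut = norm.ppf(1 - majority_selection_ratio)  # cut score in majority SD units
    minority_rate = 1 - norm.cdf(cut + d)         # minority mean sits d SDs lower
    return minority_rate / majority_selection_ratio

print(round(ai_ratio(0.50, 0.5), 2))  # ~0.62: violates the 4/5ths rule
print(round(ai_ratio(0.90, 0.5), 2))  # ~0.87: high selection ratio, above .80
```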

67
Q

Schmitt & Kunce 2002

A
68
Q

Schmitt & Ryan 1997

A

“• Examined applicant withdrawal and the role of test-taking attitudes and racial differences
• Generally, having to take a test and test-taking attitudes had only a small relation with withdrawal.
• Four evaluations made by applicants may affect their intentions to withdraw:
- Evaluation of fit
- Perceived alternatives
- Process fairness
- Significant others’ perceptions
• Candidates withdrew due to three general categories of specific problems: scheduling, preparation, and personal problems”

69
Q

Spychalski et al. 1997

A
70
Q

Tett & Burnett 2003

A
71
Q

Morgeson et al. 2005

A

“• The increase in team-based designs hasn’t been accompanied by an increase in research on HR systems that support the use of teams.
• This research shows the importance of the team setting for individual selection
• A strong relationship between task and contextual performance (r = .89) suggests that in team-based settings with high interdependence, there is little to separate task and contextual performance.
• Social skills, conscientiousness, extraversion, agreeableness, and teamwork knowledge all significantly predicted contextual performance (social skills and teamwork knowledge had the strongest relationships)”

72
Q

Barrick & Zimmerman 2005

A
73
Q

Varca & Pattison (1993)

A
74
Q

Ployhart & Hakel (1998)

A

Considering the dynamic nature of job performance, this latent growth curve modeling study of salespersons examined whether traditional performance predictors could predict trajectories of change in performance over time. Results showed that those who reported being perceived as persuasive and empathetic were likely to show a faster rate of increase in sales performance, but were also more likely to show a drop in performance around the third and sixth quarters.
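Not Ployhart & Hakel's actual model, but a minimal stand-in sketch: a random-slopes mixed model testing whether a hypothetical "persuasive" predictor moderates each person's performance trajectory over quarters:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n, t = 100, 8
persuasive = rng.normal(0, 1, n)                         # hypothetical predictor
slopes = 0.5 + 0.3 * persuasive + rng.normal(0, 0.1, n)  # per-person growth rate
rows = [(i, q, persuasive[i], 10 + slopes[i] * q + rng.normal(0, 1))
        for i in range(n) for q in range(t)]
df = pd.DataFrame(rows, columns=["person", "quarter", "persuasive", "sales"])

# Cross-level interaction: does the predictor relate to the rate of change?
m = smf.mixedlm("sales ~ quarter * persuasive", df,
                groups=df["person"], re_formula="~quarter").fit()
print(m.params["quarter:persuasive"])  # recovers the ~0.3 slope moderation
```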

75
Q

McDaniel et al (2001)

A
76
Q

Ployhart et al. (2003)

A
77
Q

King et al. (2006)

A

“• Race-typed names have an impact on resume evaluation
o Asian names benefited applicants for high-status jobs
o White and Hispanic applicants benefited from high-quality resumes
o Black applicants were evaluated more negatively regardless of resume quality
• Raters were white males”

78
Q

Ellingson et al. (2007)

A
79
Q

Hogan et al. (1996)

A

“• Main conclusions of the paper:
o Well-constructed measures of normal personality are valid predictors of performance in virtually all occupations.
o They do not result in adverse impact for job applicants from minority groups.
o Using well-developed personality measures for preemployment screening is a way to promote social justice and increase productivity.”

80
Q

Tett et al. (2003)

A

“• This paper challenges the assumption that general measures allow the highest predictive validity in selection contexts.
o 2 canonical correlation studies showed that specific measures yielded stronger validity and higher predictive value.
o Greater specificity has the advantages of:
• Improving person-job fit through the use of more points of comparison.
• Better articulating the causes, effects, and measurement of constructs.
• Allowing more powerful analysis of construct validity through finer articulation of the nomological net.”

81
Q

Judge & Bono (2001)

A

“• The purpose of this paper is to provide a quantitative review/meta-analysis of the relationship of the four core self-evaluation traits (self-esteem, generalized self-efficacy, locus of control, and emotional stability) with job satisfaction and job performance.
• All four traits had positive, non-zero relationships with job satisfaction and performance. For job satisfaction, self-efficacy and internal locus of control showed moderately strong correlations, whereas self-esteem and emotional stability had average correlations in the .20s. For job performance, all four traits had correlations in the .20s.”
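As a minimal sketch of the quantitative-review machinery behind cards like this one, the sample-size-weighted mean correlation that a bare-bones meta-analysis starts from (study values are made up):

```python
studies = [(0.25, 120), (0.31, 80), (0.18, 200)]  # (observed r, N) per study
rbar = sum(r * n for r, n in studies) / sum(n for _, n in studies)
print(round(rbar, 3))  # 0.227; psychometric meta-analysis then corrects for
                       # artifacts such as unreliability and range restriction
```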