Selection Flashcards
Cog ability def.
a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience (Gottfredson, 1997)
Cog. ability description
General Cognitive ability (McDaniel & Banks, 2011)
• Spearman (1904)
o General cognitive ability factor (g)
• Cattell’s crystallized and fluid intelligences
o Fluid intelligence is the ability to solve novel problems through reasoning
o Crystallized intelligence is the ability to rely on prior experience and knowledge to solve problems
• Carroll’s Three-Stratum Theory (1993)
o g
o fluid, crystallized, general memory and learning, broad visual perception, broad auditory perception, broad retrieval ability, broad cognitive speediness, processing speed
o lower level constructs of the above
Cog. ability from staffing
Staffing Notes
• Measure of maximum performance.
• Generally administered under considerable time pressure.
• Very good validity across a range of jobs (validity increases with job complexity).
• Predicts job performance (.51; Schmidt & Hunter, 1998).
• Predicts training success.
• Considerable adverse impact against minorities and older employees.
• But highly valid within groups.
• relatively inexpensive.
• can generally be given in-person or on-line.
• easy to score: in-person or on-line.
• Questions concerning face validity.
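The adverse-impact note above is usually operationalized with the EEOC "four-fifths" rule: a focal group's selection rate below 80% of the highest group's rate is taken as evidence of adverse impact. A minimal sketch with invented selection counts (not data from any study cited here):

```python
# Hypothetical illustration of the "four-fifths" rule for adverse impact.
# The applicant and hire counts below are made-up numbers for illustration.

def selection_rate(hired, applicants):
    """Proportion of applicants from a group who were selected."""
    return hired / applicants

def adverse_impact_ratio(focal_rate, reference_rate):
    """Focal group's selection rate divided by the highest group's rate."""
    return focal_rate / reference_rate

majority_rate = selection_rate(60, 100)   # 0.60
minority_rate = selection_rate(20, 50)    # 0.40

ratio = adverse_impact_ratio(minority_rate, majority_rate)
print(round(ratio, 2))   # 0.67
print(ratio < 0.80)      # True -> flagged under the four-fifths rule
```

With these made-up rates, 0.40 / 0.60 ≈ 0.67 falls below the 0.80 threshold, so the procedure would be flagged for adverse-impact review.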
Emotional intelligence
o Law et al. (2004) JAP article on content/construct validity issues
o Three dimensions: emotional perception, stability, and regulation
o Joseph et al. (2010) JAP article found it predicts performance above and beyond cognitive ability and personality
o Good analysis.
o The idea of being self-aware and aware of others' emotions and correctly using this information (self-regulation) is likely to be important (e.g., for social workers)
o Issue: how to measure it? How to measure with reasonable cost?
e.g., show pictures/videos, ask significant others, etc.
vs. paper-and-pencil self-report measure. (Exhibit 9.6)
o Consulting firms' claims have far exceeded the data.
Personality def.
Relatively enduring patterns of thoughts, ideas, emotions, and behaviors that are consistent over situations and time and distinguish individuals from others (Barrick & Mount)
Personality description
I. Model and structures of personality
a. Brief history and background
i. The Barrick & Mount (1991) meta-analysis was a turning point in raising interest in personality
1. Conscientiousness was a valid predictor across most jobs
2. Extraversion was a predictor for interpersonal jobs
b. Big five can be clustered into “getting ahead” and “getting along”
c. Five factor model
d. HEXACO model
e. Using personality facets instead of factors
i. Someone might want one facet of conscientiousness but not another
1. e.g., I am high on achievement (achievement-striving, self-efficacy, and self-discipline) but low on conformity (orderliness, dutifulness (rules), cautiousness)
f. Nomological-web clustering approach
i. A general approach or philosophy in which personality variables or facets are grouped together (by factor analysis, expert sorting methods, etc.)
II. Criterion-related validity
a. Small to moderately related to leadership and career success, job performance, OCB, CWB, training, team processes, and job satisfaction
b. Some argue we shouldn't be concerned about these values being small to moderate (e.g., even moderate values are important in practice: incremental validity, etc.)
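Incremental validity, mentioned above, is typically assessed by the gain in R² when a predictor is added to a hierarchical regression. A minimal sketch with simulated scores (the variable names and effect sizes are invented, not from any cited meta-analysis):

```python
# Sketch of incremental validity: does personality add to R-squared beyond
# cognitive ability? All data below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
gma = rng.normal(size=n)      # stand-in for cognitive ability scores
consc = rng.normal(size=n)    # stand-in for conscientiousness scores
perf = 0.5 * gma + 0.3 * consc + rng.normal(scale=0.8, size=n)  # criterion

def r_squared(predictors, y):
    """R-squared from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared([gma], perf)          # step 1: cognitive ability alone
r2_full = r_squared([gma, consc], perf)   # step 2: add personality
print(round(r2_full - r2_base, 3))        # delta R-squared > 0 here
```

Because the simulated criterion genuinely depends on both predictors, the ΔR² comes out positive; with real selection data the size of that gain is the empirical question.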
III. Subgroup differences - fairly minimal
IV. Faking
V. Innovations (forced choice, conditional reasoning tests (CRT; James), "other" ratings)
Biodata definition
"The core attribute of biodata items is that the items pertain to historical events that may have shaped the person's behavior and identity" (Mael, 1991; Breaugh, 2009)
Biodata description
• Important that items are discrete and verifiable
The use of biodata for employee selection: Past research and future directions (Breaugh, 2009)
• Definition: An applicant’s past behavior and experiences
• Reliability and validity
o Reliability - high, depends on the construct
o Validity - Schmidt & Hunter (1998) .35 for job performance
o Incremental validity - Beyond tenure, GMA, and big 5 (Mount et al., 2000)
• Modest adverse impact
• Negative applicant reactions
• Susceptible to faking (May be reduced through a warning, elaboration)
• Issues in biodata research
o Past studies rely heavily on concurrent validation designs, which may overestimate validity
o Better to have a scale tailored to fit the unique aspects of the position versus a generic scale
o Lack of information on biodata items, constructs measured
• Future research: what is biodata, item focus, technology
Idea: applicant reactions to constructs relevant to the job rather than generic ones
Interviews def.
a personally interactive process of one or more people asking questions orally to another person and evaluating the answers for the purpose of determining the qualifications of that person in order to make employment decisions
Interview description
• Validity paradox – criterion-related validity is present, but not clear construct validity
• Modern structured interviewing
o Situational interview
o Behavior description interview
o Couple the interview with a formal rating system
• Reliability and validity
o Reasonable reliability, .75 under the right design conditions
o Greater validity for structured vs. unstructured interviews (.44 - .62 for struc.)
o When properly designed and under the right conditions, comparable to CA
o BUT the interview is a method, so any particular interview can range in validity
• Interview construct research – need to better understand what constructs are measured in interviews; limit # of constructs (Dipboye, Macan, Shahani, 2011)
• The role of structure
o Campion et al. (1997): 15 components of structure
o Structured interviews can be developed by creating questions directly from KSAOs or by using critical incidents (job analysis)
o Structure as a continuum!
• Looking at the interview from both the interviewer's and interviewee's perspectives (Dipboye, Macan, & Shahani, 2011)
• Future research
o Interviews across different countries, intentional response distortion/faking, cognitive demands, technological advances
Work sample def.
Test in which the applicant performs a selected set of actual tasks that are physically and/or psychologically similar to those performed on the job; standardized and scored with the aid of experts
Roth et al., 2005
Work samples consist of tasks or work activities that mirror the tasks that employees are required to perform on the job. Work sample tests can be designed to measure almost any job task but are typically designed to measure technically oriented tasks, such as operating equipment, repairing and troubleshooting equipment, organizing and planning work, and so forth. (Pulakos)
Work sample other
- Validity: .33 with job perf.
- Other: difficult to fake, positive applicant reactions; adverse impact depends on the type of work sample and the constructs measured (Roth et al., 2008)
Work sample tests typically involve having job applicants perform the tasks of interest while their performance is observed and scored by trained evaluators. Similar to job knowledge tests, work sample tests should only be used in situations where candidates are expected to know how to perform the tested job tasks prior to job entry.
AC intro and background
I. Arthur and Day, 2011 summary:
II. Definition: A comprehensive standardized procedure that uses multiple techniques (exercises) and assessors to assess multiple behavioral dimensions of interest
i. A method; thus, only as good as its design and administration
b. Typically used for selection, promotion, or development, with a move toward development
AC design development scoring
Design steps: Job analysis → Determine major work behaviors → Identify KSAOs or constructs underlying major work behaviors → Identify behavioral dimensions related to the KSAOs → Select or develop exercises to measure the dimensions (usually where ACs fall short in construct validity) → Train assessors and administrators → Pilot test the assessment center → Refine as warranted → Implement the AC
Methodological and design-related characteristics and features
a. Sound planning/ job analysis, limit # of dimensions, conceptual distinctiveness of dimensions, transparency of dimensions (more consistent behavior and better differential rating of dimensions), Participant-to-assessor ratio (2:1 - less susceptible to bias and errors)
b. Scoring and rating approach (e.g., within or across exercise, behavior checklists, OAR?)
i. AEDR (across-exercise dimension ratings) = dimension factors; WEDR (within-exercise dimension ratings) = exercise factors
c. Type of assessor (I/Os > mgrs or supervisors; should have FOR training)
Validity, reliability, faking, cost, subgroup diff.
a. Fairly reliable
b. Content-related: okay
c. Criterion-related: good, incremental over CA and personality
d. Construct-related: problematic (some possible issues…)
i. Methodological design factors (e.g., # of dims), use of espoused versus actual constructs, issues with analytic approaches, specifically post-exercise ratings, differential activation of traits depending on demands of particular exercise
e. Response distortion: Weak and nonsignificant relationships
VI. Cost: Very expensive, but okay ROI
VII. Subgroup differences: Greater than originally expected
How are ACs diff. from work samples?
Work samples as stand-alone tests are designed to simulate actual job tasks, whereas AC exercises are designed to represent the general context surrounding the demands of the job
Important articles for AC
Arthur et al., 2006 (dimensions); Meriac et al., 2008/2014 (M-As; incremental over CA and pers and factor structure); Arthur et al., 2008 (why ACs don’t work as they should); Dean et al., 2008 (subgroup differences)
Integrity tests author
Berry et al., 2007
Integrity tests def.
Overt integrity tests - Measure of theft attitudes
Beliefs about the frequency and extent of theft, punitiveness toward theft, ruminations about theft, perceived ease of theft, endorsement of common rationalizations of theft, and assessments of one's own honesty
Covert integrity tests - Personality-oriented tests
Include personality items dealing with dependability, conscientiousness, social conformity, thrill seeking, trouble with authority, and hostility
Integrity tests. descrip.
• Construct understanding
o Links to personality variables
o The overall score is nonsignificantly related to cognitive ability
o Links to situational variables
• Validity
o It is difficult to determine validity because it is hard to measure CWBs
o Majority of findings support that it is positively related to CWBs
o Absenteeism (small)
• Faking
o Huge effect sizes
o Applicants can fake, the question is, do they?
o Are certain questions or tests more fakeable?
SJTs def.
Measurement methods that present respondents with work-related situations and then ask them how they would or should handle the situation
o Considered to be multidimensional measurement methods
o By definition, they are context bound
SJT development and scoring
o Situation generation/Response option generation
o SMEs will identify effective and ineffective options
o Forced choice method (best, best and worst, rank order)
o Rate the effectiveness of each option
o Stem complexity
o Fidelity - Can improve fidelity with a video simulation
o Response options
“Would do” - Measures of behavioral tendencies or typical performance
“Should do” - Measures of maximum performance
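The forced-choice scoring described above can be sketched concretely. A minimal, hypothetical example of scoring a "pick best / pick worst" SJT item against an SME key (the option labels and the 0–2 keying scheme are invented for illustration):

```python
# Hypothetical scoring of a "pick best / pick worst" SJT item against an SME key.
# Option labels and the point scheme are invented, not from any cited SJT.

sme_key = {"best": "C", "worst": "A"}   # SMEs judged option C most effective, A least

def score_item(picked_best, picked_worst, key):
    """+1 for matching the SME 'best' option, +1 for matching 'worst' (0-2 per item)."""
    return int(picked_best == key["best"]) + int(picked_worst == key["worst"])

print(score_item("C", "A", sme_key))   # 2: both picks match the key
print(score_item("C", "B", sme_key))   # 1: best correct, worst missed
print(score_item("D", "B", sme_key))   # 0: neither matches
```

Rate-the-effectiveness formats instead score each option, e.g., by the distance between the respondent's rating and the mean SME rating; the forced-choice variant above trades that granularity for simpler keying.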
SJT validity, subgroup diff., cost, future
• Sources of SJT validity evidence
o Moderate criterion related validity (.20 with job performance; McDaniel et al., 2001)
Faking can reduce the predictive validity
o Content validity - Key source of validity for SJTs because it is typically based on the job
o Construct validity
Not possible to establish the construct validity of a method
Correlated with personality, cognitive ability, job knowledge, and job experience
• Subgroup differences - lower than CA, depends on construct
• Fairly costly, time consuming to develop
• Reading requirements
• Future: long-term validity, implicit trait policies, and other topics