Selection Flashcards
Cog ability def.
a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience (Gottfredson, 1997)
Cog. ability description
General Cognitive ability (McDaniel & Banks, 2011)
• Spearman (1904)
o General cognitive ability factor (g)
• Cattell’s crystallized and fluid intelligences
o Fluid intelligence is the ability to solve novel problems through reasoning
o Crystallized intelligence is the ability to rely on prior experience and knowledge to solve problems
• Carroll’s Three-Stratum Theory (1993)
o g
o fluid intelligence, crystallized intelligence, general memory and learning, broad visual perception, broad auditory perception, broad retrieval ability, broad cognitive speediness, processing speed
o lower level constructs of the above
Cog. ability from staffing
Staffing Notes
• Measure of maximum performance.
• generally considerable time pressure.
• very good validity across range of jobs (validity increases with job complexity).
• job performance (.51; Schmidt & Hunter, 1998)
• training success.
• considerable adverse impact (minorities and older employees)
• but, highly valid within groups.
• relatively inexpensive.
• can generally be given in-person or on-line.
• easy to score: in-person or on-line.
• questions concerning face validity
Emotional intelligence
o Law et al. (2004) JAP article on content/construct validity issues
o 3 dimensions: emotional perception, stability and regulation
o Joseph & Newman (2010) JAP article found it predicts performance above and beyond cognitive ability and personality
o Good analysis.
o Idea of being self-aware and aware of others’ emotions and correctly using this information (self-regulation) is likely to be important (e.g., social workers)
o Issue: how to measure it? How to measure with reasonable cost?
e.g., show pictures/videos, ask significant others, etc.
vs. paper-and-pencil self-report measure. (Exhibit 9.6)
o Consulting firms’ claims have far exceeded the data.
Personality def.
Relatively enduring patterns of thoughts, ideas, emotions, and behaviors that are consistent over situations and time and distinguish individuals from others (Barrick & Mount)
Personality description
I. Model and structures of personality
a. Brief history and background
i. The meta-analysis by Barrick & Mount (1991) was a turning point in raising interest in personality
1. Conscientiousness was a valid predictor across most jobs
2. Extraversion was a predictor for interpersonal jobs
b. Big five can be clustered into “getting ahead” and “getting along”
c. Five factor model
d. HEXACO model
e. Using personality facets instead of factors
i. Someone might want one facet of conscientiousness but not another
1. e.g., I am high on achievement (achievement striving, self-efficacy, and self-discipline) but low on conformity (orderliness, dutifulness (rules), cautiousness)
f. Nomological-web clustering approach
i. A general approach or philosophy in which personality variables or facets are grouped together (by factor analysis, expert sorting methods, etc.)
II. Criterion-related validity
a. Small to moderate relations with leadership and career success, job performance, OCB, CWB, training, team processes, and JS
b. Some argue we shouldn’t be concerned about these relations being small to moderate (e.g., even moderate values are important in practice: incremental validity, etc.)
III. Subgroup differences - fairly minimal
IV. Faking
V. Innovations (forced choice, CRT (James), “other” ratings)
Biodata definition
“The core attribute of biodata items is that the items pertain to historical events that may have shaped the person’s behavior and identity” (Mael, 1991; Breaugh, 2009)
Biodata description
• Important that items are discrete and verifiable
The use of biodata for employee selection: Past research and future directions (Breaugh, 2009)
• Definition: An applicant’s past behavior and experiences
• Reliability and validity
o Reliability - high, depends on the construct
o Validity - Schmidt & Hunter (1998) .35 for job performance
o Incremental validity - Beyond tenure, GMA, and big 5 (Mount et al., 2000)
• Modest adverse impact
• Negative applicant reactions
• Susceptible to faking (May be reduced through a warning, elaboration)
• Issues in biodata research
o Heavy reliance in past studies on concurrent validity, which may overestimate validity
o Better to have a scale tailored to fit the unique aspects of the position versus a generic scale
o Lack of information on biodata items, constructs measured
• Future research: what is biodata, item focus, technology
Idea: study applicant reactions to constructs relevant to the job rather than generic ones
Interviews def.
a personally interactive process of one or more people asking questions orally to another person and evaluating the answers for the purpose of determining the qualifications of that person in order to make employment decisions
Interview description
• Validity paradox – criterion-related validity is present, but not clear construct validity
• Modern structured interviewing
o Situational interview
o Behavior description interview
o Couple the interview with a formal rating system
• Reliability and validity
o Reasonable reliability (.75) under the right design conditions
o Greater validity for structured vs. unstructured interviews (.44–.62 for structured)
o When properly designed and under the right conditions, comparable to CA
o BUT the interview is a method, so any particular interview can range in validity
• Interview construct research – need to better understand what constructs are measured in interviews; limit # of constructs (Dipboye, Macan, Shahani, 2011)
• The role of structure
o Campion et al. (1997): 15 components of structure
o Structured interviews can be developed by creating questions directly from KSAOs or by using critical incidents (job analysis)
o Structure as a continuum!
• Looking at interview from both interviewer’s and interviewee’s perspectives (Dipboye, Macan..)
• Future research
o Interviews across different countries, intentional response distortion/faking, cognitive demands, technological advances
Work sample def.
Test in which the applicant performs a selected set of actual tasks that are physically and/or psychologically similar to those performed on the job; standardized and scored with the aid of experts (Roth et al., 2005)
“Consist of tasks or work activities that mirror the tasks that employees are required to perform on the job. Work sample tests can be designed to measure almost any job task but are typically designed to measure technically oriented tasks, such as operating equipment, repairing and troubleshooting equipment, organizing and planning work, and so forth.” (Pulakos)
Work sample other
- Validity: .33 with job performance
- Other: difficult to fake; positive applicant reactions; adverse impact depends on the type of work sample and the constructs it measures (Roth et al., 2008)
Work sample tests typically involve having job applicants perform the tasks of interest while their performance is observed and scored by trained evaluators. Similar to job knowledge tests, work sample tests should only be used in situations where candidates are expected to know how to perform the tested job tasks prior to job entry.
AC intro and background
I. Arthur and Day, 2011 summary:
II. Definition: A comprehensive standardized procedure that uses multiple techniques (exercises) and assessors to assess multiple behavioral dimensions of interest
i. A method; thus, only as good as its design and administration
b. Typically used for selection, promotion, or development, with a move toward development
AC design development scoring
Design steps: Job analysis → Determine major work behaviors → Identify KSAOs or constructs underlying major work behaviors → Identify behavioral dimensions related to the KSAOs → Select or develop exercises to measure the dimensions (usually where ACs fall short in construct validity) → Train assessors and administrators → Pilot test the assessment center → Refine as warranted → Implement the AC
Methodological and design-related characteristics and features
a. Sound planning/job analysis; limit the number of dimensions; conceptual distinctiveness of dimensions; transparency of dimensions (more consistent behavior and better differential rating of dimensions); participant-to-assessor ratio (2:1 is less susceptible to bias and errors)
b. Scoring and rating approach (e.g., within or across exercise, behavior checklists, OAR?)
i. AEDR (across-exercise dimension ratings) yield dimension factors; WEDR (within-exercise dimension ratings) yield exercise factors
c. Type of assessor (I/O psychologists > managers or supervisors; should have frame-of-reference (FOR) training)