Job Analysis, Selection, & Individual Differences Flashcards
Knowledge
Collections of discrete but related facts about a given domain
Skills
The level of proficiency/competency in performing a task or learned activity
Abilities
Relatively enduring basic capacities for performing a range of different activities; more stable than knowledge and skills
Other Characteristics
Large category of all other potentially relevant factors for job performance, including personality, motivational traits, education/work experience, licensure & certifications, etc. (Morgeson & Dierdorff, 2011)
Types of Descriptors in JA
Work requirements: specific tasks, general work responsibilities
Worker requirements: relevant attributes such as KSAOs
Work context: task context, social context, physical context
Work Context
situational opportunities and constraints that affect the occurrence and meaning of organizational behavior as well as functional relationships between variables
(Johns, 2006)
Theories relevant to job analysis
Cognitive categorization theory (Schemas)
Role Theory
Impression management theory
Task context (Morgeson & Dierdorff, 2011)
reflects the structural and informational conditions under which work roles are enacted
- e.g., the amount of autonomy and task clarity, the consequence of error inherent in the work, level of accountability, and the resources available to perform the task
Social context (Morgeson & Dierdorff, 2011)
reflects the nature of role relationships and interpersonal contingencies that exist among workers
- e.g., social density, different forms of communication, the extent and type of interdependence with others, and the degree of interpersonal conflict present in the work environment.
Physical context (Morgeson & Dierdorff, 2011)
reflects elements of the material space or built environment within which work roles are enacted, including
- general environmental conditions (e.g., noise, lighting, temperature, air quality),
- presence of hazardous work conditions (e.g., radiation, high places, disease exposure)
- overall physiological job demands (e.g., sitting, standing, walking, climbing)
Work Analysis Decisions
- Purpose influences all of these decisions
1. Descriptor Type
2. Method of data collection - Type of Rating Scale (With questionnaires)
3. Sources of Data
Methods for collecting JA data
Observation
Group Meetings. (SMEs)
Questionnaires
Individual Interviews
Observation forms in JA
Direct observation
Critical incident collections
Video Recordings
Pros and cons of Observation in JA
Pros: not subject to selective recall or other biases related to workers providing data
Cons: subject to observer biases
Not all jobs can effectively be observed (e.g., knowledge work)
Time consuming
Interviewing in JA pros and cons
Pros: can allow for more detailed collection of data, since additional questions or clarifications can be asked
Cons: some might not be able to effectively describe what they do/what is required in sufficient detail (esp. when people have been there for a while/routinized their performance)
Interviewer biases can result in faulty recording or biased recall of the information given to them
Group Meetings of SMEs for JA
Typically conducted with several different groups of SMEs (workers, supervisors, technical experts, etc.)
- usually facilitated by the job analyst
Commonly include: brainstorming activities, listing of activities/attributes, evaluating data that has already been collected
Group Meeting Pros and cons for JA
Pros: more efficient than individual interviews
Can provide opportunities to evaluate data collected from other means
Possibility of getting consensus
Cons: a number of group process problems/biases
E.g., lack of participation, conformity
Logistical issues (scheduling, locations, etc.)
Questionnaire approach in JA
Structured surveys used to collect info on any of the relevant types of descriptors/needs
- paper and pencil or computer based (more likely now)
E.g., PAQ, O*NET
JA Questionnaire Pros and Cons
Pros: cost efficient/easier administration
Systematically gathers a large amount of relevant info that can be quantifiably summarized
Cons: can be overwhelming when they get lengthy enough to capture everything you’re interested in
Along with other common survey response biases (social desirability, leniency, etc.)
Types of JA Rating Scales
Frequency
Importance: asked directly or determined by a combination of the ratings below
- Criticality - consequences of error
- Task difficulty
Need to know/have on entry (usually more related to KSAs)
- the level of attribute required
Distinction of JA ratings
Dierdorff & Wilson (2003) found significant overlap between importance, frequency, time spent, and difficulty ratings, indicating that despite their conceptual differences, JA raters aren't typically distinguishing between them and tend to rate them similarly
JA Data Sources
Written documentation
Job incumbents
Technical experts
Supervisors
Clients/orgs
Job analysts
Examples of Written Documentation sources for JA
Previous/current job descriptions
Previously published JA info (e.g., O*NET)
Training manuals
Checklists / operating guides
Any relevant work aids
Pros and cons of Written documentation in JA
Pros: cost efficient/time saving
A great starting point to see what is known/what still needs to be learned
Cons: can be outdated, insufficient, or inaccurate
Job incumbent data pros and cons
Pros: familiar with the roles and specific aspects of day to day
Cons: may not have the verbal ability or motivation to sufficiently, accurately, and reliably describe the job
Supervisor JA data pros and cons
Pros: may have a higher verbal ability to articulate specific details
- less likely to be motivated to distort info
- higher level of hierarchy gives them a broader perspective on different attributes needed for performance in different roles
Cons: don’t do the actual work themselves which may lead to less detailed/nuanced information
Job analysts data pros and cons
Pros: tend to produce highly reliable ratings, have no motivation to distort info, able to integrate large amounts of info
Cons: may have been exposed to similar roles in the past, creating pre-existing stereotypes/schemas; may have insufficient information if they weren't able to collect enough/observe everything
JA Data Collection process
- Usually start with collecting all existing documentation
- This informs subsequent data collection from incumbents and experts
- Supervisors check/augment data collected
- Analysts compile it all and draw relevant conclusions
Purposes of JA
- selection system development
- job and team design
- performance management system design
- compensation system development
- career management system
- training and development curriculum
Differences in JA for selection vs training
Selection- emphasis on identifying KSAOs needed to effectively perform and the extent to which certain attributes are needed immediately on the job vs can be learned
Training: focus on the activities performed and the skills/knowledge needed that are able to be trained
O*NET
A comprehensive system of occupational information designed to replace the DOT (Dictionary of Occupational Titles)
Encompasses the broadest scope of work info, ranging from labor market data and wages to important KSAs and required tasks
Content Model of O*NET
Insert picture
O*NET content model pros
- A comprehensive way to conceptualize all the types of work-related data of interest to individuals and orgs
- Posits a taxonomical structure for most of the domains
  - which aids in choosing between various levels of specificity
- Establishes a common language to describe the world of work
  - aiding in cross-occupational comparisons
- Allows for occupation-specific info
  - which enables more effective within-occupation comparisons
  - helpful for a variety of HR purposes
O*NET pros for practice
- Info is nationally representative of the U.S. workforce
- Is supposed to be “fresh” and collected/updated every 5 years
- More descriptive than info typically found in the products of work analyses (e.g., job descriptions/specifications)
- Can provide generalizable data to act as a starting point/ ground and facilitate local work analysis project
- Could bolster the defensibility of decisions based on local projects when combined with O*NET
O*NET problems
Great in theory, but wasn't quite carried out the way it should have been
- a lot of job ratings are based on job analysts and not incumbents
- hasn’t been empirically validated in a long time (Reliability, discriminability, underlying factor structures) or by an independent source ever
- need to understand relationships between incumbent and analyst ratings and the redundancy in types of ratings
- doesn’t take into account org contexts
- who these job incumbents are is largely unknown
- demographic data is collected, but not publicly available
- sample sizes are not clear
O*NET significance
Most significant theoretical development in work analysis and reflects the cumulative experience of nearly 50 years of work analysis research
Work analysis quality
Under CTT, it is implied there is a true score for a given role; so data were aggregated and quality was indexed by interrater reliability
Others index rating quality by assessing careless responding, comparing responses to legitimate items and bogus items
Wilson et al. (1990) suggest repeating some items to measure intrarater consistency
Work analysis multidimensional view of accuracy
- Interrater reliability (the most commonly used measure of data quality)
- Interrater agreement
- Discriminability between jobs (reflects between-job variance)
- Dimensionality of factor structures (the extent to which factor structures are complex and multidimensional)
- Mean ratings (reflects inappropriately elevated or depressed ratings)
- Completeness (the extent to which the data are complete or comprehensive)
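The first two indicators in the list are easy to conflate. A toy sketch (illustrative numbers, not from the source) shows how interrater reliability (consistency of rank order) and interrater agreement (absolute similarity of ratings) can diverge: one rater can track another's profile perfectly while rating everything one point higher.

```python
# Toy illustration: reliability vs. agreement can diverge.
# Rater B preserves Rater A's rank order (perfect correlation)
# but inflates every rating by one point (imperfect agreement).
from statistics import mean, stdev

rater_a = [1, 2, 3, 4, 5]
rater_b = [2, 3, 4, 5, 6]  # same rank order, uniformly inflated

def pearson(x, y):
    # Sample Pearson correlation: covariance over product of sample SDs
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

reliability = pearson(rater_a, rater_b)                          # rank-order consistency
agreement_gap = mean(abs(a - b) for a, b in zip(rater_a, rater_b))  # mean absolute disagreement

print(reliability)     # 1.0 -> perfectly "reliable"
print(agreement_gap)   # 1.0 -> yet the raters never give the same rating
```

This is why the multidimensional view treats reliability, agreement, and mean elevation as separate quality indicators rather than interchangeable ones.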
Source of Variance in WA data
- Rater influences
- social and cognitive influences
- contextual influences
How cognitive ability can influence JA data
- Higher CA may provide more accurate and complete work analysis
- In WA, respondents are asked to make judgments about a large amount of information, which creates large cognitive demands where high CA provides an advantage
- Many WA measures require high reading ability
Caveats of Cognitive ability in JA ratings
- Incumbents with very high CA may generate extraneous info that could lead to analysts/supervisors rating requirements higher for these individuals than the underlying work requires
- Incumbents with higher CA may have qualitatively different work experiences because they could be assigned or take on additional/different work which could influence their ratings
Example of how personality impacts JA ratings
More conscientious individuals may make more careful/diligent efforts → higher reliability and accuracy
High extraverts may incorporate more socially oriented work elements, thereby changing the nature of the work they perform
Rater influences on JA data
familiarity with the job, job tenure, cognitive ability, personality characteristics, work experience, and performance level of workers
Social and cognitive influences on JA ratings
Social
- social influence
- self-presentation processes
Cognitive
- limitations of info-processing systems
- biases of info-processing systems
Social influences processes influencing JA ratings
- conformity
- group polarization (extremity shifts)
- motivation loss
Most relevant in group meeting collections
(Morgeson & Campion, 1997)
Self-presentation processes influencing JA ratings
Processes that reflect an individual's attempt to present themselves in a particular light
- impression management
- social desirability
- demand effects
Limitations of info-processing systems in JA ratings
- Information overload
- heuristics
- categorization
Biases in info-processing systems in JA ratings
- carelessness
- extraneous info
- inadequate info
- order (primacy and recency) and contrast effects
- halo
- leniency and severity
- method effects
Contextual influences on JA ratings
- context shapes roles
- can be examined with discrete (more specific: individual, job, team, etc.) or omnibus (broader: occupation/org level) approaches
Both types of attributes/approaches account for meaningful variance in JA ratings
Need to emphasize quality over accuracy in JA
Variance may be due to legitimate differences
- it is difficult to establish the stability or objectivity of WA data.
Work analyses are often completed based on human judgment (i.e., we are making inferences), so perhaps we should focus on the quality of the inferences made, which is influenced by how large an inferential leap is required when doing ratings
Abstract ratings vs observable
More abstract ratings that are not as visible require larger inferential leaps and result in "less accuracy," while more specific, visible descriptors (e.g., tasks) result in more accuracy; this has important implications for competency modeling approaches
Strategic work analysis
Forecasting work role requirements of new roles that are expected to exist in the future or current roles that are expected to substantially change
Dierdorff & Wilson (2003) Meta-analysis
Task data has higher interrater and intrarater reliability than generalized work activities; incumbents display the lowest reliability; frequency and importance scales are the most reliable
Harvey & Wilson (2010)
Examined the discriminant validity of the four major O*NET scales (abilities, skills, knowledge, and GWAs); found that although they may be conceptually distinct, they are certainly not empirically distinct when raters are rating them.
Indicators of work analysis data quality
INSERT PHOTO
Differences in purpose of competency modeling vs. work analysis
Competency modeling: to influence the manner in which work assignments are performed, enhancing strategic alignment between individuals and org effectiveness
Work analysis: to better understand and measure work assignments
(Sanchez & Levine, 2009; DuVernet et al., 2015)
DuVernet et al. (2015) results for type of data
Work-oriented data: higher interrater and rate-rerate reliability estimates and lower mean ratings, BUT lower interrater agreement, factor structures less likely to be confirmed, and reduced discriminability compared with worker-oriented data
DuVernet et al. (2015) results for specificity of data
Task data showed the highest interrater reliability but less intrarater reliability and more inflation in ratings than duty-level data;
Lower interrater agreement than more general activity descriptors
DuVernet et al. (2015) results for specificity of attributes
Personality had the lowest interrater reliability; ability had the highest, which was comparable to skills. Abilities showed the highest inflation, while skills and personality both showed low inflation; ability showed the highest discriminability and knowledge the lowest
DuVernet et al. (2015) data collection results
- Using more methods increased discriminability
- Competency modeling showed higher interrater agreement and discriminability, BUT lower interrater reliability and higher inflation than WA
- Rater training didn’t do much but did show a little more discriminability
- Length is tricky: some quality indicators improve with length up to a certain point, and some decline as length increases past a certain point
Generic questionnaires or customized ones?
DuVernet et al. (2015) show that specific ones may be slightly better, as they show more intrarater reliability, better factor structures, and less inflation; but generic ones tend to have higher interrater reliability, rate-rerate reliability, and discriminability
Types of rating scales on quality of data
DuVernet et al. (2015) found objective scales (frequency, time spent, performed/required by job) showed higher quality on almost all indicators, except that interrater agreement was higher with more subjective scales (importance, etc.)
Best data sources for JA
DuVernet et al. (2015) showed incumbents may provide the least quality, as they were highest only in interrater agreement and in the chance of confirming factor structures
WA purpose and data quality (DuVernet et al., 2015)
Personally relevant purposes (compensation, etc.) showed higher interrater reliability, rate-rerate reliability, and discriminability, but were more likely to be inflated and less likely to confirm factor structures
DuVernet et al. (2015) suggestions for JA for selection purposes
Could run into inflation, which could result in incorrect predictor choices; low levels of consensus could result in legal concerns. To counteract this loss in interrater reliability, one should include professional analysts and org-specific work analysts; if only incumbents are available, discuss the personal relevance for them (better selection = better coworkers)
DuVernet et al. (2015) suggestions for multipurpose JA
Multi-purpose leads to increased consensus but also inflation and reduced discriminability; to counteract this, communicate the relevance of the purpose, use supervisors and/or analysts, use objective scales, and blend standardized instruments with specific tools
Ellington and Wilson (2017) takeaways
- Significant variability remains across raters and contexts after controlling for ratee and rater-level characteristics
- Rater variance may be due in greater proportion to aspects of the work environment that influence the rating behaviors of those raters
- Rater’s mean tendency seems to account for about 20% of rater effects after controlling for ratee- level characteristics
- The large presence of rater and context effects in supervisor ratings suggests that people using ratings as criteria for validating selection instruments should be utilizing MLM to account for this
- People should be cautious when using supervisor ratings to make comparisons across supervisors or contexts for domain purposes even when attempts of rater training have been made
Individual differences
Individuals' basic tendencies, capacities, and dispositions that influence the observed range and frequency of their behavior (Motowidlo et al., 1997)
Schmidt and Hunter (1998) meta-analysis
Estimated the correlation between job performance and various individual differences (when measurement is without error and there is no restriction of range)
Cognitive ability and job performance
Correlates in excess of .50 with performance, which implies that an applicant's increase in cognitive ability score by 1 SD would predict a .50 SD improvement in performance (Schmidt & Hunter, 1998)
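A minimal arithmetic sketch of that interpretation, assuming the usual standardized-regression reading of a validity coefficient (the numbers just restate the card):

```python
# Sketch: what a validity coefficient of ~.50 implies under the
# standardized (z-score) regression reading of Schmidt & Hunter (1998).
validity = 0.50        # corrected ability-performance correlation
ability_gain_sd = 1.0  # applicant scores 1 SD higher on the ability test

# With both variables standardized, the correlation is the slope,
# so predicted performance gain (in SD units) = r * ability gain.
predicted_perf_gain_sd = validity * ability_gain_sd
print(predicted_perf_gain_sd)  # 0.5 SD expected performance improvement
```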
Spearman’s Cognitive ability
Two - factor theory
There’s g- general ability which influences all performance and the second factor is :category: meaning a specific factor unique to the domain; which is why all performance across domains is correlated to some degree (g) but not perfectly because specific ability influences each type
Thurstone cognitive ability
Found there to be several types of intelligence, but no general factor
Today’s take on intelligence
Chernyshenko et al., 2011
It is a hierarchy: g at the top, then fluid and crystallized intelligence, then fairly specific cognitive abilities
Dominance Model
The more of a trait or ability someone possesses, the better the chance that person has of answering an item correctly or earning a high test score
Cog ability and selection
Chernyshenko et al., 2011
Cog ability → job performance = .51 (Schmidt & Hunter, 1998)
Job complexity moderates the relationship, such that it is stronger for more complex jobs
One of the best predictors in selection, BUT there are concerns around differences in test scores between majority and minority group applicants; debate over whether this stems from measurement bias or true differences in the underlying trait; must be careful to avoid adverse impact or argue that it is imperative to use in selection
The big 5: the big deal
Chernyshenko et al., 2011
Most agree this is adequate coverage of personality; some argue one or two more factors should be added; but it is generally the first model about which there has been a large, shared consensus
Big 5 traits
OPENNESS CONSCIENTIOUSNESS EXTRAVERSION AGREEABLENESS NEUROTICISM
Personality facets
More contextualized manifestations of broad personality factors; allow more detailed/specific understanding of relationships of personality to behavior/performance;
Factors/dimensions are the broad umbrellas that hold all the facets
Personality assessment in selection
Chernyshenko et al., 2011
Our assessments don't quite do well with precise measurement across a wide range of trait levels, test-retest reliability, and resistance to response distortion; we need to make them better and perhaps approach them differently (IRT rather than CTT)
Ideal Point Models
Chernyshenko et al., 2011
Individuals are more likely to endorse an item when it is close to their ability or trait level, but not if it is too far above or below it.
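The contrast with the dominance model can be sketched with simple response curves. The functional forms and parameter values below are illustrative assumptions (a logistic item for dominance, a Gaussian-style unfolding item for ideal point), not equations from the source: under dominance, endorsement probability keeps rising with the trait; under an ideal point model, it peaks where person and item match and falls off on both sides.

```python
# Sketch (assumed forms/values): dominance vs. ideal point item responses.
import math

def dominance_prob(theta, b, a=1.0):
    # Logistic (2PL-style) curve: monotonically increasing in the trait theta
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ideal_point_prob(theta, b, a=1.0):
    # Simple unfolding curve: peaks at theta == b, declines with distance
    return math.exp(-a * (theta - b) ** 2)

item_location = 0.0  # hypothetical item located at an average trait level
for theta in (-2.0, 0.0, 2.0):
    print(theta,
          round(dominance_prob(theta, item_location), 3),   # keeps rising
          round(ideal_point_prob(theta, item_location), 3)) # peaks at 0, falls off
```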
Personality predictors of performance
Chernyshenko et al., 2011
Conscientiousness is the most valid predictor across nearly all occupations (.19-.26, depending on the performance criteria)
The other four predict success in specific occupations or relate to specific criteria, but are not as widespread or strong in their prediction
Foldes et al., 2008 meta-analysis
Looked at racial group differences on personality
Found only small differences that would be unlikely to cause adverse impact in selection, especially when aggregated to form composites (scale scores)
Small gender differences too (females slightly higher on agreeableness; males slightly higher on dominance a facet of extraversion)
Values in selection
Chernyshenko et al., 2011
Hard to use values in selection because measures are problematic
- Likert-type measures tend to use socially desirable items, which results in people endorsing everything highly
- using values to measure P-O fit means needing an org score to form correspondence; difference scores are commonly used for this but are unreliable and ambiguous
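The unreliability point can be made concrete with the classic reliability-of-a-difference-score formula from standard psychometrics (assuming equal component variances; the input values below are illustrative, not from the source): even when both measures are decently reliable, the difference score's reliability collapses as the two measures correlate, which is exactly the P-O fit situation.

```python
# Classic psychometric formula (equal-variance case):
#   r_DD = (r_xx + r_yy - 2*r_xy) / (2 - 2*r_xy)
# where r_xx, r_yy are component reliabilities and r_xy their correlation.
def diff_score_reliability(r_xx, r_yy, r_xy):
    return (r_xx + r_yy - 2 * r_xy) / (2 - 2 * r_xy)

# Illustrative values: two measures each with reliability .80
print(diff_score_reliability(0.80, 0.80, 0.10))  # modest overlap -> ~0.78
print(diff_score_reliability(0.80, 0.80, 0.60))  # high overlap   -> 0.50
```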
The role of individual differences on performance (model)
INSERT PHOTO
Values
Stable individual differences in what people want to do or want to have in their lives; influence how people interpret characteristics in their environment and play an important role in forming one’s specific goals
Core Self Evaluation (CSE)
A broad composite of self-esteem, generalized self-efficacy, emotional stability, and locus of control (Judge et al., 2002)
Selection using behavioral narratives or information
Incremental validity is mixed; but empirically keyed biodata measures have validities ranging from .25 to .40;
But there have been no formally developed measures of behavioral narratives… which is a shame