DONE: Job Analysis Flashcards
Morgeson et al. (2004)
Experiment testing whether incumbents give more inflated ratings to job analysis ability statements than to job analysis task statements.
- Respondents may have different self-presentational motives, including a desire to strategically influence the outcomes they receive from others or an attempt to safeguard their own self-concept
- AND - it’s easier to do so with ability statements because they’re more abstract than statements about tasks.
- To the extent that individuals are underutilized in their job, the likelihood of inflation increases. The fact that ability ratings may reflect a self-rating compared with a job rating further suggests that self-presentation will be more likely with ability statements.
Sanchez & Levine (2012)
A review of JA. Key points:
- Contrasted with JA, competency modeling (CM) seems more about specific behavioral sets/habits that help the organization reach its goals.
- Adds context to the task/ability division of JA (e.g., job context may increase or decrease the expression of certain traits).
- Reliability for personality traits has been weaker than that for abilities.
Morgeson & Campion (1997)
- Job analysis information may be subject to numerous social and cognitive sources of inaccuracy.
- The most common practice has been to rely on measures of consistency, usually either interrater reliability or agreement (Morgeson & Campion, 1997).
Weekley et al. (2019)
Tested the relationship between SME ratings of importance for different KSAOs for a particular job and the criterion-related validity of those same KSAOs for that job.
- In three of the four samples, there was a large (r = .50 range) relationship between trait importance and trait validity.
- Moderator analyses showed that the best results may come from supervisors, rather than incumbents, and from those who know the job extremely well (there were no differences due to SME job tenure, industry experience, or deletion of outliers).
Bartram (2005)
Meta-analysis on validation studies:
- Mapping the predictor domain to the Great Eight definitions rather than the Big Five accounts for more of the criterion variance and also provides a stronger practitioner focus by concentrating on what is being predicted rather than what is doing the prediction.
- Ones and Viswesvaran (2001) have reviewed the use of “criterion-focused occupational personality scales (COPS)” in selection and have also noted the higher validities associated with scales that directly address issues of relevance in the workplace compared with more general personality assessment instruments.
- Overall job performance (OJP) is predicted mainly by Organizing & Executing, Leading & Deciding, and Analyzing & Interpreting, with a negative association with Supporting & Cooperating competencies.
- The relationship between the eight predictors and eight criteria shows how much stronger validities can be obtained by aggregation of multiple criteria than by the use of single overall rating measures.
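The aggregation point follows from classical test theory: averaging k parallel criterion ratings raises the composite criterion's reliability and hence its correlation with any predictor. Below is a minimal Python sketch using the standard composite-correlation formula; the r_xy and r_yy values are invented for illustration and are not Bartram's figures.

```python
import numpy as np

def composite_validity(r_xy: float, r_yy: float, k: int) -> float:
    """Correlation of a predictor with the mean of k parallel criterion
    ratings, where each rating correlates r_xy with the predictor and
    r_yy with every other rating (classical composite formula)."""
    return r_xy * np.sqrt(k) / np.sqrt(1 + (k - 1) * r_yy)

# Invented numbers: single-rating validity .25, interrater correlation .50.
for k in (1, 2, 4, 8):
    print(f"k = {k}: validity = {composite_validity(0.25, 0.50, k):.3f}")
```

Validity rises quickly and then plateaus as k grows, consistent with Bartram's point that aggregated criteria yield stronger validities than single overall ratings.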
Ones & Viswesvaran (2001)
Reviewed the use of "criterion-focused occupational personality scales (COPS)" in selection, noting the higher validities of scales that directly address issues of relevance in the workplace compared with more general personality assessment instruments.
Dierdorff & Morgeson (2009)
Found that when ratings are rendered on descriptors of low specificity and low observability (e.g., traits), variance due to rater idiosyncrasies increases and reliability decreases.
- Inflation of importance ratings also increases, similar to that found by Morgeson et al. (2004).
Dierdorff & Wilson (2003)
Meta-analysis on work analysis revealed that task data produced higher estimates of interrater reliability than statements of broader generalized work activities (GWAs).
DuVernet et al. (2015)
Work analyses focusing on activities show greater consensus and consistency, and less inflation, than those focusing on traits.
- Results also indicate that rating scales requiring more objective judgments generally produce data of higher reliability and discriminability, and are more likely to confirm proposed factor structures, compared with those requiring more subjective judgments.
- Factors associated with data quality included descriptor choice (attributes vs. tasks), the purpose of the work analysis (compensation, training, promotion), and choices of collection method and rating scale (frequency, time spent, importance, requiredness).
- The authors also find very little support for the quality benefits of using multiple methods, a common design recommendation thought to enrich the scope of work analysis projects and to promote data quality.
Morgeson & Campion (1997)
Seminal article
6 distinct metrics of work analysis data quality: interrater reliability, interrater agreement, between-job discriminability, factor structure confirmation, mean ratings, and completeness.
1. Interrater reliability: indicates that the rank orders among raters are consistent across items.
2. Interrater agreement: an index of absolute agreement among raters; indicates that raters make similar judgments across items, speaking to both the rank order and magnitude of ratings (see the sketch after this list).
3. Between-job discriminability: reflects the extent to which raters distinguish between different jobs when analyzing work.
4. Factor structure confirmation: indicates the extent to which the predicted factor structure of a work analysis instrument or dataset is observed in the data.
5. Mean ratings: reflect the extent to which ratings are elevated or depressed and can indicate that certain types of rater biases are operating.
6. Completeness: represents the extent to which the collected work analysis data inclusively or comprehensively describe the focal role (i.e., the data are not "deficient" in describing the role).
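The distinction between items 1 and 2 is easiest to see computed side by side. Below is a minimal Python sketch with invented ratings: reliability is taken as the mean pairwise correlation between rater profiles, and agreement as the rWG index (James, Demaree, & Wolf, 1984); the data and function names are illustrative, not from the article.

```python
import numpy as np
from itertools import combinations

# Invented data: 4 raters x 6 statements, importance rated on a 1-7 scale.
# Raters 2-3 share the rank order of raters 0-1 but rate ~3 points higher,
# so rank-order consistency stays high while absolute agreement drops.
ratings = np.array([
    [1, 2, 3, 3, 4, 5],
    [1, 2, 2, 3, 4, 5],
    [4, 5, 6, 6, 7, 7],
    [4, 5, 5, 6, 7, 7],
])

def interrater_reliability(x: np.ndarray) -> float:
    """Mean pairwise Pearson correlation between raters' item profiles:
    sensitive to rank order, insensitive to mean-level differences."""
    pairs = combinations(range(x.shape[0]), 2)
    return float(np.mean([np.corrcoef(x[i], x[j])[0, 1] for i, j in pairs]))

def rwg(item: np.ndarray, n_options: int = 7) -> float:
    """Within-group agreement for one item: 1 minus the ratio of observed
    rating variance to the variance of a uniform (random) response."""
    expected_var = (n_options**2 - 1) / 12.0
    return 1.0 - np.var(item, ddof=1) / expected_var

print("reliability:", round(interrater_reliability(ratings), 2))  # high
print("mean rWG:", round(np.mean([rwg(col) for col in ratings.T]), 2))  # low
```

The same data can score well on one index and poorly on the other, which is why reliability and agreement are treated as distinct quality metrics.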
Morgeson et al. (2004) - Self-presentation processes in job analysis - Field Experiment
Using an experimental design, the authors examined job incumbent response differences across ability, task, and competency statements.
Results indicated that ability statements were more subject to inflation than were task statements across all rating scales. Greater endorsement of nonessential ability statements was responsible for the differences.
Job analysis data is perhaps the most widely gathered type of organizational information for developing human resource (HR) management systems, as it informs selection systems, training programs, performance management programs, and compensation systems.
The task-based Dictionary of Occupational Titles has been replaced with the more ability-based Occupational Information Network (O*NET; Peterson et al., 2001).
Underutilized incumbents are more likely to inflate, and because ability ratings can function as self-ratings rather than job ratings, self-presentation is especially likely with ability statements.
First, we found that incumbents endorsed more ability than task statements as being part of their job. Second, summed frequency, importance, and required-at-entry ratings were larger for ability than for task statements.
Also, bogus ability statements were endorsed more often than bogus task statements.
McCormick, Jeanneret, and Mecham (1972) made one of the most basic distinctions in JA: between job-oriented and worker-oriented information.
Weekley et al. (2019)
In a clever study, Weekley et al. (2019) tested the relationship between SME ratings of importance for different KSAOs for a particular job and the criterion-related validity of those same KSAOs for that job (a minimal sketch of the design follows this card).
This differs from the usual approach, in which SME inter-rater (or intra-rater) reliability is used to gauge the validity of JA judgments.
In three of the four samples, there was a large (r = .50 range) relationship between trait importance and trait validity.
Moderator analyses showed that the best results may come from supervisors, rather than incumbents, and from those who know the job extremely well (there were no differences due to SME job tenure, industry experience, or deletion of outliers).
HUGE point:
Future research may also examine why supervisors score higher as SMEs. While Morgeson et al. (2004) found evidence of self-presentation bias among incumbents but not supervisors, further consideration of this potential bias is needed.
Neither job tenure nor industry experience was related to accuracy, probably because describing job requirements involves a learning curve that most workers climb early on (rather than a skill only the most experienced could excel at).
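A minimal sketch of the Weekley et al. design with invented numbers (the KSAOs, importance means, and validities below are illustrative, not the study's data): each KSAO contributes one data point pairing the SMEs' mean importance rating with that KSAO's observed criterion-related validity.

```python
import numpy as np

# Invented illustration for one job: one point per KSAO, pairing mean SME
# importance ratings (1-5 scale) with observed criterion-related validities.
ksaos = ["conscientiousness", "teamwork", "numerical ability",
         "stress tolerance", "oral communication"]
importance = np.array([4.6, 3.9, 2.8, 3.4, 4.1])     # mean SME ratings
validity = np.array([0.24, 0.11, 0.09, 0.18, 0.20])  # r with performance

# The importance-validity correlation is the study's key statistic;
# Weekley et al. reported values in the .50 range in three of four samples.
r = np.corrcoef(importance, validity)[0, 1]
print(f"importance-validity correlation: r = {r:.2f}")
```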
Sanchez et al. (1997)
One challenge is that rater differences may be due to the different perspectives/approaches raters take toward the job, so it's difficult to know what is error and what is a genuine difference in perspective.
Dierdorff & Wilson (2003)
Their meta-analysis also found that incumbents provided less reliable ratings than job analysts or experts (a category that included supervisors).