Job Analysis, Selection, & Individual Differences Flashcards

1
Q

Knowledge

A

Collections of discrete but related facts about a given domain

2
Q

Skills

A

The level of proficiency/competency to perform a task or learned activity

3
Q

Abilities

A

Relatively enduring basic capacities for performing a range of different activities; more stable than K and S

4
Q

Other Characteristics

A

Large category of all other factors potentially relevant to job performance, including personality, motivational traits, education/work experience, licensure & certifications, etc. (Morgeson & Dierdorff, 2011)

5
Q

Types of Descriptors in JA

A
Work requirements
- specific tasks
- general work responsibilities
Worker requirements
- relevant attributes such as KSAOs
Work context
- task context
- social context
- physical context
6
Q

Work Context

A

situational opportunities and constraints that affect the occurrence and meaning of organizational behavior as well as functional relationships between variables
(Johns, 2006)

7
Q

Theories relevant to job analysis

A

Cognitive categorization theory (Schemas)
Role Theory
Impression management theory

8
Q
Task context
(Morgeson & Dierdorff, 2011)
A

reflects the structural and informational conditions under which work roles are enacted
- e.g., the amount of autonomy and task clarity, the consequence of error inherent in the work, level of accountability, and the resources available to perform the task

9
Q
Social context
(Morgeson & Dierdorff, 2011)
A

reflects the nature of role relationships and interpersonal contingencies that exist among workers
- e.g., social density, different forms of communication, the extent and type of interdependence with others, and the degree of interpersonal conflict present in the work environment.

10
Q
Physical context
(Morgeson & Dierdorff, 2011)
A

reflects elements of the material space or built environment within which work roles are enacted, including

  • general environmental conditions (e.g., noise, lighting, temperature, air quality),
  • presence of hazardous work conditions (e.g., radiation, high places, disease exposure)
  • overall physiological job demands (e.g., sitting, standing, walking, climbing)
11
Q

Work Analysis Decisions

A
  • Purpose influences all of these decisions:
    1. Descriptor type
    2. Method of data collection
      - type of rating scale (with questionnaires)
    3. Sources of data
12
Q

Methods for collecting JA data

A

Observation
Group meetings (SMEs)
Questionnaires
Individual Interviews

13
Q

Observation forms in JA

A

Direct observation
Critical incident collections
Video Recordings

14
Q

Pros and cons of Observation in JA

A

Pros: not subject to selective recall/other biases related to workers providing data
Cons: subject to observer biases
Not all jobs can effectively be observed (e.g., knowledge work)
Time consuming

15
Q

Interviewing in JA pros and cons

A

Pros: can allow for more detailed collection of data, since additional questions or clarifications can be asked
Cons: some might not be able to effectively describe what they do/what is required in sufficient detail (esp. when people have been there for a while/routinized their performance)
Interviewer biases can result in faulty recording or biased recall of the information given to them

16
Q

Group Meetings of SMEs for JA

A

Typically conducted with several different groups of SME types (workers, supervisors, technical experts, etc.)

  • usually facilitated by the job analyst

Commonly include: brainstorming activities, listing of activities/attributes, evaluating data that's already been collected

17
Q

Group Meeting Pros and cons for JA

A

Pros: more efficient than individual interviews
Can provide opportunities to evaluate data collected from other means
Possibility of getting consensus

Cons: a number of group process problems/biases (e.g., lack of participation, conformity)
Logistical issues (scheduling, locations, etc.)

18
Q

Questionnaire approach in JA

A

Structured surveys used to collect info on any of the relevant types of descriptors/needs
- paper and pencil or computer based (more likely now)
E.g., PAQ, O*NET

19
Q

JA Questionnaire Pros and Cons

A

Pros: cost efficient/easier administration
Systematically gathers a large amount of relevant info that can be quantifiably summarized
Cons: can be overwhelming when they get lengthy enough to capture everything you're interested in
Along with other common survey response biases (social desirability, leniency, etc.)

20
Q

Types of JA Rating Scales

A

Frequency
Importance: asked directly or determined by a combination of the ratings below
- Criticality - consequences of error
- Task difficulty
Need to know/have on entry (usually more related to KSAs)
- the level of attribute required

21
Q

Distinction of JA ratings

A

Dierdorff & Wilson (2003) found there is significant overlap between importance, frequency, time spent, and difficulty ratings, indicating that despite their conceptual differences, JA raters typically aren't distinguishing between them and tend to rate them similarly

22
Q

JA Data Sources

A
Written documentation 
Job incumbent
Technical experts
Supervisors
Clients/Orgs
Job Analysts
23
Q

Examples of Written Documentation sources for JA

A
Previous/current job descriptions 
Previously published JA info (e.g., O*NET)
Training manuals 
Checklists / operating guides
Any relevant work aids
24
Q

Pros and cons of Written documentation in JA

A

Pros: cost efficient/time saving
A great starting point to see what is known/what still needs to be learned
Cons: can be outdated, insufficient, or inaccurate

25
Q

Job incumbent data pros and cons

A

Pros: familiar with the roles and specific aspects of the day-to-day work
Cons: may not have the verbal ability or motivation to sufficiently, accurately, and reliably describe the job

26
Q

Supervisor JA data pros and cons

A

Pros: may have a higher verbal ability to articulate specific details
- less likely to be motivated to distort info
- higher position in the hierarchy gives them a broader perspective on the different attributes needed for performance in different roles
Cons: don’t do the actual work themselves which may lead to less detailed/nuanced information

27
Q

Job analysts data pros and cons

A

Pros: tend to produce highly reliable ratings, have no motivation to distort info, able to integrate large amounts of info

Cons: may have been exposed to similar roles in the past, creating pre-existing stereotypes/schemas; may have insufficient information if they weren't able to collect enough/observe everything

28
Q

JA Data Collection process

A
  1. Usually start with collecting all existing documentation
  2. This informs subsequent data collection from incumbents and experts
  3. Supervisors check/augment data collected
  4. Analysts compile it all and draw relevant conclusions
29
Q

Purposes of JA

A
  • selection system development
  • job and team design
  • performance management system design
  • compensation system development
  • career management system
  • training and development curriculum
30
Q

Differences in JA for selection vs training

A

Selection: emphasis on identifying the KSAOs needed to effectively perform, and the extent to which certain attributes are needed immediately on the job vs. can be learned

Training: focus on the activities performed and the skills/knowledge needed that are able to be trained

31
Q

O*NET

A

A comprehensive system of occupational information designed to replace the DOT (Dictionary of Occupational Titles)

Encompasses the broadest scope of work info ranging from labor market data and wages to important KSAs and required tasks

32
Q

Content Model of O*NET

A

Insert picture

33
Q

O*NET Content Model pros

A
  1. A comprehensive way to conceptualize all the types of work-related data of interest to individuals and orgs
  2. Posits a taxonomic structure for most of the domains
    - which helps one choose between various levels of specificity
  3. Establishes a common language to describe the world of work
    - aiding in cross-occupational comparisons
  4. Allows for occupation-specific info
    - which enables more effective within-occupation comparisons
    - helpful for a variety of HR purposes
34
Q

O*NET pros for practice

A
  1. Info is nationally representative of the U.S. workforce
  2. Is supposed to be “fresh” and collected/updated every 5 years
  3. More descriptive than info typically found in the products of work analyses (e.g., job descriptions/specifications)
  4. Can provide generalizable data to act as a starting point/ ground and facilitate local work analysis project
  5. Could bolster the defensibility of decisions based on local projects when combined with O*NET
35
Q

O*NET problems

A

Great in theory, but wasn't quite carried out the way it should have been

  • a lot of job ratings are based on job analysts and not incumbents
  • hasn't been empirically validated in a long time (reliability, discriminability, underlying factor structures), or ever by an independent source
  • need to understand relationships between incumbent and analyst ratings and the redundancy in types of ratings
  • doesn't take into account org contexts
  • who these job incumbents are is largely unknown: demographic data is collected but not publicly available
  • sample sizes are not clear
36
Q

O*NET significance

A

The most significant theoretical development in work analysis, reflecting the cumulative experience of nearly 50 years of work analysis research

37
Q

Work analysis quality

A

Under CTT, it is implied there is a true score for a given role, so data were aggregated and quality was indexed by interrater reliability

Others index quality by assessing careless responding with legitimate items and bogus items

Wilson et al. (1990) suggest repeating some items to measure intrarater consistency

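Since interrater reliability is the workhorse quality index above, here is a minimal sketch of how it might be computed for JA data, assuming a hypothetical tasks-by-raters matrix of ratings and using the standard Shrout & Fleiss ICC(2,1) formula (the helper and data below are illustrative, not from any cited study):

    import numpy as np

    def icc_2_1(x):
        # ICC(2,1): two-way random effects, single rater, absolute agreement
        # x is an (n targets x k raters) matrix of ratings
        n, k = x.shape
        grand = x.mean()
        ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between-target
        ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between-rater
        ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
        ms_r = ss_rows / (n - 1)
        ms_c = ss_cols / (k - 1)
        ms_e = ss_err / ((n - 1) * (k - 1))
        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

    # hypothetical example: 5 tasks rated for importance by 3 SMEs
    ratings = np.array([[4, 5, 4],
                        [2, 3, 2],
                        [5, 5, 4],
                        [1, 2, 1],
                        [3, 4, 3]], dtype=float)
    print(round(icc_2_1(ratings), 2))

The Wilson et al. (1990) repeated-items idea would instead correlate each rater's first and second responses to the same items (intrarater consistency).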
38
Q

Work analysis multidimensional view of accuracy

A
  1. Interrater reliability (the most commonly used measure of data quality )
  2. Interrater agreement
  3. Discriminability between jobs (reflects between-job variance)
  4. Dimensionality of factor structures (the extent to which factor structures are complex and multidimensional)
  5. Mean ratings (reflects inappropriately elevated or depressed ratings)
  6. Completeness (the extent to which the data are complete or comprehensive)
39
Q

Source of Variance in WA data

A
  • Rater influences
  • social and cognitive influences
  • contextual influences
40
Q

How cognitive ability can influence JA data

A
  1. Higher CA may yield more accurate and complete work analysis data
  2. In WA, respondents are asked to make judgments about a large amount of info, which creates heavy cognitive demands for which high CA provides an advantage
  3. Many WA measures require high reading ability
41
Q

Caveats of Cognitive ability in JA ratings

A
  1. Incumbents with very high CA may generate extraneous info that could lead to analysts/supervisors rating requirements higher for these individuals than the underlying work requires
  2. Incumbents with higher CA may have qualitatively different work experiences because they could be assigned or take on additional/different work which could influence their ratings
42
Q

Example of how personality impacts JA ratings

A

More conscientious individuals may make more careful/diligent efforts → higher reliability and accuracy

Highly extraverted individuals may incorporate more socially oriented work elements, thereby changing the nature of the work they perform

43
Q

Rater influences on JA data

A

familiarity with the job, job tenure, cognitive ability, personality characteristics, work experience, and performance level of workers

44
Q

Social and cognitive influences on JA ratings

A

Social

  • social influence
  • self-presentation process

Cognitive

  • limitations of info-processing systems
  • biases of info-processing systems
45
Q

Social influences processes influencing JA ratings

A
  • conformity
  • group polarization (extremity shifts)
  • motivation loss

Most relevant in group meeting collections
(Morgeson & Campion, 1997)

46
Q

Self-presentation processes influencing JA ratings

A

Processes that reflect an individual's attempt to present themselves in a particular light

  • impression management
  • social desirability
  • demand effects
47
Q

Limitations of info-processing systems in JA ratings

A
  • Information overload
  • heuristics
  • categorization
48
Q

Biases in info processing systems on jA

A
  • carelessness
  • extraneous info
  • inadequate info
  • order (primacy and recency) and contrast effects
  • halo
  • leniency and severity
  • method effects
49
Q

Contextual influences on JA ratings

A
  • context shapes roles
  • can be examined with discrete (more specific: individual, job, team, etc.) or omnibus (broader: occupation/org level) approaches

Both types of attributes/approaches account for meaningful variance in JA ratings

50
Q

Need to emphasize quality over accuracy in JA

A

Variance may be due to legitimate differences
- it is difficult to establish the stability or objectivity of WA data
Work analyses are often completed based on human judgment (i.e., we are making inferences), so perhaps we should focus on the quality of the inferences made, which is influenced by how large an inferential leap is required when making ratings

51
Q

Abstract ratings vs observable

A

More abstract ratings that are not as visible require larger inferential leaps and result in "less accuracy," while more specific, visible descriptors (e.g., tasks) result in more accuracy; this has important implications for competency modeling approaches

52
Q

Strategic work analysis

A

Forecasting work role requirements of new roles that are expected to exist in the future or current roles that are expected to substantially change

53
Q

Dierdorff & Wilson (2003) Meta-analysis

A

Task data has higher interrater and intrarater reliability than generalized work activities; incumbents display the lowest reliability; frequency and importance scales are the most reliable

54
Q

Harvey & Wilson (2010)

A

Examined the discriminant validity of the four major O*NET scales (abilities, skills, knowledge, and GWAs); found that although they may be conceptually distinct, they are certainly not empirically distinct when raters rate them

55
Q

Indicators of work analysis data quality

A

INSERT PHOTO

56
Q

Differences in purpose of competency modeling vs. work analysis

A

Competency modeling: to influence the manner in which work assignments are performed, to enhance strategic alignment between individuals and org effectiveness

Work analysis: to better understand and measure work assignments

(Sanchez & Levine, 2009; DuVernet et al., 2015)

57
Q

DuVernet et al. (2015) results for type of data

A

Work-oriented data: higher interrater and rate-rerate reliability estimates and lower mean ratings, BUT lower interrater agreement, factor structures less likely to be confirmed, and reduced discriminability when using work-oriented vs. worker-oriented data

58
Q

DuVernet et al. (2015) results for specificity of data

A

Task data showed the highest interrater reliability but lower intrarater reliability and more inflation in ratings than duty-level data;
lower interrater agreement than more general activity descriptors

59
Q

DuVernet et al. (2015) results for specificity of attributes

A

Personality had the lowest interrater reliability and ability the highest (comparable to skills); abilities showed the highest inflation, while skills and personality both showed low inflation; ability showed the highest discriminability and knowledge the lowest

60
Q

DuVernet et al. (2015) data collection results

A
  1. Using more methods increased discriminability
  2. Competency modeling showed higher interrater agreement and discriminability, BUT lower interrater reliability and higher inflation than WA
  3. Rater training didn't do much, but did show a little more discriminability
  4. Length is tricky: some quality indicators rise with questionnaire length up to a certain point, while others decline as length increases
61
Q

Generic questionnaires or customized ones?

A

DuVernet et al. (2015) show that customized questionnaires may be slightly better, as they show more intrarater reliability, better factor structures, and less inflation; but generic ones tend to have higher interrater reliability, rate-rerate reliability, and discriminability

62
Q

Types of rating scales on quality of data

A

DuVernet et al. (2015) found that objective scales (frequency, time spent, performed/required by job) showed higher quality on almost all indicators except interrater agreement, which was higher with more subjective scales (importance, etc.)

63
Q

Best data sources for JA

A

DuVernet et al. (2015) showed that incumbents may provide the lowest quality data, as they were highest only in interrater agreement and in the likelihood of confirming factor structures

64
Q

WA purpose and data quality (DuVernet et al., 2015)

A

Personally relevant purposes (compensation, etc.) showed higher interrater reliability, rate-rerate reliability, and discriminability, but were more likely to be inflated and less likely to confirm factor structures

65
Q

DuVernet et al. (2015) suggestions for JA for selection purposes

A

Could run into inflation, which could result in incorrect predictor choices; low levels of consensus could result in legal concerns; to counteract this loss in interrater reliability, one should include professional analysts and org-specific work analysts; if only incumbents are available, discuss the personal relevance of the JA for them (better selection = better coworkers)

66
Q

DuVernet et al. (2015) suggestions for multipurpose JA

A

Multi-purpose JA leads to increased consensus but also inflation and reduced discriminability; to counteract this, communicate the relevance of the purpose, use supervisors and/or analysts, use objective scales, and blend standardized instruments with specific tools

67
Q

Ellington & Wilson (2017) takeaways

A
  1. Significant variability remains across raters and contexts after controlling for ratee- and rater-level characteristics
  2. Rater variance may be due in greater proportion to aspects of the work environment that influence the rating behaviors of those raters
  3. A rater's mean tendency seems to account for about 20% of rater effects after controlling for ratee-level characteristics
  4. The large presence of rater and context effects in supervisor ratings suggests that people using ratings as criteria for validating selection instruments should be using MLM to account for this (see the sketch after this list)
  5. People should be cautious when using supervisor ratings to make comparisons across supervisors or contexts for domain purposes, even when rater training has been attempted
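For takeaway 4, a minimal sketch of what using MLM could look like, assuming hypothetical supervisor-rating data (all column names and values here are invented for illustration). A random intercept per rater absorbs rater mean tendency, and the estimated group-level variance component quantifies the rater effect:

    import pandas as pd
    import statsmodels.formula.api as smf

    # hypothetical criterion data: performance ratings of ratees nested within supervisors
    df = pd.DataFrame({
        "rating":       [4, 3, 5, 2, 4, 3, 5, 4, 2, 3, 4, 5],
        "ratee_tenure": [2, 1, 6, 1, 4, 3, 7, 5, 1, 2, 4, 6],
        "rater_id":     ["s1", "s1", "s1", "s1", "s2", "s2",
                         "s2", "s2", "s3", "s3", "s3", "s3"],
    })

    # random intercept for each supervisor; fixed effect for a ratee-level covariate
    model = smf.mixedlm("rating ~ ratee_tenure", data=df, groups=df["rater_id"])
    result = model.fit()
    print(result.summary())  # the "Group Var" row is the rater variance component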
68
Q

Individual differences

A

Individuals' basic tendencies, capacities, and dispositions that influence the observed range and frequency of their behavior (Motowidlo et al., 1997)

69
Q

Schmidt & Hunter (1998) meta-analysis

A

Estimated the correlation between job performance and various individual differences (when measurement is without error and there is no restriction of range)

70
Q

Cognitive ability and job performance

A

Correlates in excess of .50 with performance, which implies that a 1 SD increase in an applicant's cognitive ability score would correspond to a .50 SD improvement in predicted performance (Schmidt & Hunter, 1998)

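A worked version of that claim, treating the validity coefficient as a standardized regression weight:

\[ \hat{z}_{\text{performance}} = \rho \, z_{\text{ability}}, \qquad \rho = .50,\; z_{\text{ability}} = 1 \;\Rightarrow\; \hat{z}_{\text{performance}} = .50 \]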
71
Q

Spearman’s Cognitive ability

A

Two-factor theory

There's g (general ability), which influences all performance, and a second, specific factor (s) unique to each domain; this is why performance across domains is correlated to some degree (because of g) but not perfectly (because a specific ability influences each type)

72
Q

Thurstone cognitive ability

A

Found there to be just several types of intelligence (primary mental abilities), but no general factor

73
Q

Today’s take on intelligence

Chernyshenko et al., 2011

A

A hierarchy: g at the top, then fluid and crystallized intelligence, then fairly specific cognitive abilities

74
Q

Dominance Model

A

The more of a trait or ability someone possesses, the better the chance they have of answering an item correctly or earning a high test score

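The card doesn't name a specific model, but a standard formalization of the dominance idea is the two-parameter logistic IRT model, where the probability of a correct/keyed response rises monotonically with trait level \(\theta\) (with item discrimination \(a_i\) and difficulty \(b_i\)):

\[ P(X_i = 1 \mid \theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}} \]

More trait always means a higher endorsement probability; contrast this with the ideal point models that appear later in the deck.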
75
Q

Cog ability and selection

Chernyshenko et al., 2011

A

Cog ability → job performance = .51 (Schmidt & Hunter, 1998)

Job complexity moderates the relationship, such that it is stronger for more complex jobs

One of the best predictors in selection, BUT there are concerns around differences in test scores between majority and minority group applicants; debate continues on whether this stems from measurement bias or a true difference in the underlying trait; must be careful to avoid adverse impact or argue that its use in selection is imperative

76
Q

The big 5: the big deal

Chernyshenko et al., 2011

A

Most agree this provides adequate coverage of personality; some argue one or two more factors should be added; but it is generally the first model of personality around which people have shared a large consensus

77
Q

Big 5 traits

A
Openness
Conscientiousness
Extraversion
Agreeableness
Neuroticism
78
Q

Personality facets

A

More contextualized manifestations of broad personality factors; allow a more detailed/specific understanding of the relationships of personality to behavior/performance
Factors/dimensions are the broad umbrellas that hold all the facets

79
Q

Personality assessment in selection

Chernyshenko et al., 2011

A

Our assessments don't quite do well with precise measurement across a wide range of trait levels, test-retest reliability, and resistance to response distortion; we need to make them better and perhaps approach them differently (IRT rather than CTT)

80
Q

Ideal Point Models

Chernyshenko et al., 2011

A

Individuals are more likely to endorse an item when its location is close to their ability or trait level, but not if it is too far above or too far below.

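A small sketch contrasting the two response processes. The squared-distance ideal point curve below is a deliberate simplification for illustration (operational ideal point measures typically use more elaborate models such as the GGUM), and all parameter values are made up:

    import numpy as np

    def p_dominance(theta, a=1.5, b=0.0):
        # monotonic: endorsement keeps rising as theta moves above item location b
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def p_ideal_point(theta, loc=0.0, spread=1.0):
        # single-peaked: endorsement is highest when theta is near the item's
        # location and falls off in BOTH directions (too far above or below)
        return np.exp(-((theta - loc) ** 2) / (2 * spread ** 2))

    for t in [-3.0, -1.5, 0.0, 1.5, 3.0]:
        print(f"theta={t:+.1f}  dominance={p_dominance(t):.2f}  ideal point={p_ideal_point(t):.2f}")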
81
Q

Personality predictors of performance

Chernyshenko et al., 2011

A

Conscientiousness is the most valid predictor across nearly all occupations (.19-.26, depending on the performance criteria)

The other 4 predict success in specific occupations or relate to specific criteria, but are not as widespread or strong in their prediction

82
Q

Foldes et al., 2008 meta-analysis

A

Looked at racial group differences on personality

Found only small differences that would be unlikely to cause adverse impact in selection, especially when aggregated to form composites (scale scores)

Small gender differences too (females slightly higher on agreeableness; males slightly higher on dominance, a facet of extraversion)

83
Q

Values in selection

Chernyshenko et al., 2011

A

Hard to use values in selection because measures are problematic

  • Likert-type measures tend to use socially desirable items, which results in people endorsing everything highly
  • using values to measure P-O fit requires an org score to form a correspondence; when done, difference scores are common, but they are unreliable and ambiguous
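The unreliability of difference scores can be made concrete with the classic formula for the reliability of \(D = X - Y\) (here, say, X = person value score and Y = org value score); reliability collapses as X and Y correlate more highly:

\[ \rho_{DD'} = \frac{\sigma_X^2 \rho_{XX'} + \sigma_Y^2 \rho_{YY'} - 2\sigma_X \sigma_Y \rho_{XY}}{\sigma_X^2 + \sigma_Y^2 - 2\sigma_X \sigma_Y \rho_{XY}} \]

For example, with equal variances, reliabilities of .80 for both scores, and \(\rho_{XY} = .60\), the difference score's reliability drops to (.80 + .80 - 1.20) / (2 - 1.20) = .50.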
84
Q

The role of individual differences on performance (model)

A

INSERT PHOTO

85
Q

Values

A

Stable individual differences in what people want to do or want to have in their lives; influence how people interpret characteristics in their environment and play an important role in forming one’s specific goals

86
Q

Core Self Evaluation (CSE)

A

A broad composite of self-esteem, generalized self-efficacy, emotional stability, and locus of control (Judge et al., 2002)

87
Q

Selection using behavioral narratives or information

A

Incremental validity is mixed, but empirically keyed biodata measures have validities ranging from .25 to .40;
there have been no formally developed measures of behavioral narratives… which is a shame
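A minimal sketch of what "empirically keyed" means for biodata (everything here is hypothetical fake data, and it skips the cross-validation a real keying study would require): items are weighted by how strongly they relate to the criterion in a development sample, and new applicants are then scored with those weights.

    import numpy as np

    rng = np.random.default_rng(0)

    # hypothetical development sample: 200 applicants x 10 biodata items (1-5 responses)
    items = rng.integers(1, 6, size=(200, 10)).astype(float)
    # fake criterion built so that item 0 is genuinely predictive
    criterion = 0.4 * items[:, 0] + rng.normal(size=200)

    # empirical key: weight each item by its correlation with the criterion
    weights = np.array([np.corrcoef(items[:, j], criterion)[0, 1]
                        for j in range(items.shape[1])])

    # score a new applicant by applying the key
    new_applicant = rng.integers(1, 6, size=10).astype(float)
    print(np.round(weights, 2), round(float(new_applicant @ weights), 2))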