I/O Psychology Flashcards
Job Analysis
Used to obtain info about the nature & requirements of a job; KSAO's (Knowledge, Skills, Abilities, & Other characteristics) used to devel. criterion measures & predictors.
Conducted to ID the essential characteristics of a job, & may be 1st step in a job evaluation.
Provides info. to:
- Facilitate workforce training & planning programs
- Assist w/decisions about job redesign
- Help ID causes of accidents & other safety related probs.
Methods for Conducting a Job Analysis
Info. about a job can be obtained in a few ways, including:
- Observing EE’s perform the job
- Reviewing company records
- Interviewing EE's, sups., & others familiar w/the job
- Having EE’s keep a job diary
Methods include:
- Job-oriented techniques: Focus on work activities/tasks & conditions of work.
- Worker-oriented techniques: Focus on KSAO's required for the job.
  - Position Analysis Questionnaire (PAQ)
A systematic process of determining how a job differs from other jobs in terms of required responsibilities, activities, & skills.
The Position Analysis Questionnaire (PAQ)
A frequently used structured job analysis questionnaire w/194 questions that provides info on 6 dimensions of worker activity:
- Info. input
- Mental processes
- Work output
- Relationships with other persons
- Job context
- Other job characteristics
A quantitative worker-oriented method of collecting data for purposes of job analysis.
More helpful for designing training prog. & deriving criterion measures that provide useful EE feedback.
Job Evaluation
Job evaluation may begin with a job analysis but is conducted for the purpose of setting wages and salaries.
Primary purpose of a job evaluation is to obtain detailed info. about job requirements in order to facilitate decisions related to compensation.
Involves ID'ing compensable factors & assigning a dollar value to them, such as:
- Skill & ED req.
- Consequences of error
- Degree of autonomy & responsibility
Also used to determine the relative worth of jobs in order to set wages & salaries & to establish comparable worth.
Comparable Worth
(aka pay equity) Refers to the principle that jobs that require the same education, experience, skills, & other qualifications should pay the same wage/salary regardless of the employee’s age, gender, race/ethnicity, etc.
Criterion Measures
Measure of job performance used to provide EE’s w/performance feedback & help make decisions about salary increases & bonuses, training needs, promotions & termination.
Types:
- Objective (direct) measures: Include quantitative measures of production & certain types of personnel data. (Not available for many jobs & may not provide a complete pict. of an EE's perf.)
- Subjective measures: Rely on judgement of the rater. More useful for eval. complex contributors to job perf. such as motivation, leadership skills, & decision-making ability.
  - Absolute measures
    - Critical incidents
    - Forced choice
    - Graphic rating scale
    - BARS
  - Relative measures
    - Paired comparison
    - Forced distribution
Ultimate (Conceptual) Criterion
In the devel. of a job perf. measure, it is a measure of perf. that is theoretical & cannot actually be measured.
- A construct that cannot be measured directly but instead is measured indirectly.
- Ex: Ultimate Criterion = “Effective EE”
- Actual Criterion = Dollar amt. of sales in a 3 mo. period
Subjective Criterion Measures
Rely on judgement of the rater. More useful for eval. complex contributors to job perf. such as motivation, leadership skills, & decision-making ability.
- Absolute measures: Subjective perf. assessments that indicate a ratee's perf. in absolute terms. Involve rating an EE w/out considering the perf. of other EE's & often take the form of a graphic, Likert-type scale.
  - Critical Incident Technique (CIT)
  - BARS
- Relative measures (techniques): Involve comparing EE's to each other on various aspects of job perf., & help reduce rater biases; less useful than absolute measures for EE feedback. Includes:
  - Paired comparison
  - Forced distribution
Relative Techniques; Types of Criterion Measures
Relative measures (techniques): Involve comparing EE’s to each other on various aspects of job perf., & help reduce rater biases; less useful than absolute measures for EE feedback. Includes:
- Paired comparison: The rater compares each EE to every other EE performing the same job. → Disadvantage is that it becomes time-consuming as the number of EE's increases (see the sketch below).
- Forced distribution: The rater categorizes EE's in terms of a predefined normal distribution. → Disadvantage is that it produces misleading info when perf. is not actually normally distributed.
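A minimal sketch of why paired comparison becomes time-consuming: each pair of EE's must be compared once, so the number of comparisons grows as n(n-1)/2. The count formula is standard combinatorics & the employee counts below are illustrative, not from the flashcards:

```python
from itertools import combinations

def num_paired_comparisons(n_employees: int) -> int:
    """Each EE is compared to every other EE once: n(n-1)/2 pairs."""
    return n_employees * (n_employees - 1) // 2

# The count grows quadratically, which is the disadvantage noted above.
for n in (5, 10, 20, 40):
    print(n, "EE's ->", num_paired_comparisons(n), "comparisons")
# 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780

# Sanity check against an explicit enumeration of all pairs.
assert num_paired_comparisons(10) == len(list(combinations(range(10), 2)))
```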
Rater Bias
4 Types of rater bias that limit validity & reliability of rating scales:
- Leniency Bias: Occurs when a rater consistently assigns high ratings to all ee’s, regardless of how they actually do on the job.
- Strictness Bias: Occurs when a rater consistently assigns low ratings to all ee's, even when they are good workers.
- Central Tendency Bias: Occurs when a rater consistently assigns average ratings to all ee’s.
- Halo Bias: Occurs when the rater judges all aspects of an ee’s perf. on the basis of a single aspect of perf.
Leniency Bias
Type of rater bias that occurs when a rater consistently assigns high ratings on each dimension of performance to all employees, regardless of how they actually do on the job.
Can be alleviated by using relative rating scales such as the forced distribution scale that categorizes ee’s in terms of a predefined normal distribution.
Central Tendency Bias
Occurs when a rater consistently assigns average ratings to all ee’s.
Halo Bias
Occurs when the rater judges all aspects of an ee’s perf. on the basis of a single aspect of perf.
Methods for Reducing Rater Bias
Best way is to provide raters w/adequate training, especially training that helps them observe & distinguish btwn levels of performance such as:
- Critical Incident Technique (CIT)
- Behaviorally Anchored Rating Scales (BARS)
- Frame-of-reference Training
Critical Incident Technique (CIT)
Involves using a checklist of critical incidents (descriptions of successful & unsuccessful job behaviors) to rate each employee’s job performance.
The supervisor observes EE's & records behaviors. These are then used to provide EE's w/feedback about perf. or compiled into a checklist.
When incorporated into rating scales, can help reduce rater biases.
Behaviorally Anchored Rating Scales (BARS)
A graphic rating scale that requires the rater to choose the one behavior for each dimension of job performance that best describes the employee.
Incorporates critical incidents which improves graphic rating scales by using anchor points on the scale w/descriptions of specific behaviors representing poor to excellent perf.
Distinguishing characteristic is that it is devel. via a multi-step process that involves a team of sups., managers, & other ppl familiar w/the job.
- Advantage: Involvement of managers/sups. may increase motivation & accuracy when they use the scales
- Disadvantage: Requires substantial time & effort to develop.
Frame-of-Reference Training
A type of rater training that emphasizes the multidimensional nature of job performance & focuses on the ability to distinguish between good & poor work-related behaviors. (Training focuses on helping raters become good observers of behavior)
Helps ensure that the raters have the same idea about what constitutes successful & unsuccessful job perf.
It is useful for reducing rater biases.
Criterion Deficiency
The degree to which an actual criterion does NOT measure all aspects of the ultimate (conceptual) criterion & is one of the factors that limits criterion relevance.
A criterion measure can have high reliability, but low validity (It can give consistent results but measure only some aspects of the ultimate criterion).
Criterion Deficiency = Low Validity
Criterion Contamination:
A bias that occurs when a rater's knowledge of an indiv.'s perf. on a predictor affects how the rater rates him/her on the criterion; i.e., the criterion measure assesses factors other than those it was designed to measure.
Ex: contamination is occurring when a rater’s knowledge of a ratee‘s performance on a predictor affects how the rater rates the ratee on the criterion. It can artificially inflate the criterion-related validity coefficient.
Identifying & Validating Predictors
- Conduct a Job Analysis: Determine what knowledge, skills, abilities, & other characteristics (KSAO's) the job requires. This info. indicates the type of predictors that would be useful & the best criterion measures to eval. job perf.
- Select/Devel. the Predictor & Criterion Measures
- Obtain & Correlate Scores on the Predictor & Criterion: Admin. both measures to a sample of ppl & correlate scores on the predictor w/scores on the criterion to determine a criterion-related validity coefficient (see the sketch after this list).
- Check for Adverse Impact: Determine if the predictor unfairly discriminates against members of a legally protected grp.
- Evaluate Incremental Validity: Determine if use of the predictor increases decision-making accuracy.
- Cross-Validate: Admin. the predictor & criterion to a new sample.
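A minimal sketch of the correlation step above, assuming simple numeric scores; the predictor/criterion data are hypothetical & the Pearson formula is standard statistics, not anything specific to these flashcards:

```python
from math import sqrt

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between predictor scores (xs) & criterion scores (ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: selection test scores & dollar amt. of sales in a 3-mo. period.
predictor = [72, 85, 60, 90, 78, 66]
criterion = [30_000, 42_000, 25_000, 47_000, 38_000, 29_000]

print(f"Criterion-related validity coefficient: {pearson_r(predictor, criterion):.2f}")
```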
Adverse Impact
Occurs when use of a selection test or other employment procedure results in substantially higher rejection rates for members of a legally protected (minority) group than for the majority group.
The result of discrimination against indivs. protected by Title VII & related legislation due to the use of an employment practice.
Methods to ID adverse impact:
- 80% Rule
- Differential Validity
- Unfairness
80% Rule
The 80% rule can be used to determine if adverse impact is occurring.
Under EEOC guidelines, the hiring rate for the majority group is multiplied by 80% to determine the min. hiring rate for the minority group.
Ex: If the hiring rate is 70% for men & 40% for women, then .70 x .80 = .56
- This means the min. hiring rate for women is 56% which is less than the actual rate of 40% & indicates the selection test is having an adverse impact on women.
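A minimal sketch of the 80% rule check above, assuming hiring rates expressed as proportions (the function name & return convention are illustrative, not an official EEOC tool):

```python
def violates_80_percent_rule(majority_rate: float, minority_rate: float) -> bool:
    """Flag potential adverse impact: minority hiring rate below 80% of the majority's."""
    minimum_allowed = 0.80 * majority_rate
    return minority_rate < minimum_allowed

# Flashcard example: hiring rate of 70% for men & 40% for women.
# Min. allowed rate for women: 0.80 * 0.70 = 0.56 (56%); the actual 40% falls short.
print(violates_80_percent_rule(majority_rate=0.70, minority_rate=0.40))  # True
```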
Differential Validity
Differential validity exists when the validity coefficient of a predictor is significantly different for one subgroup than for another subgroup (e.g., lower for African American job applicants than for White applicants) & results in a larger proportion of 1 grp being hired.
Potential cause of Adverse Impact
Method for responding to adverse impact: When it’s due to differential validity, use a diff. predictor that’s equally valid for both grps
Unfairness
Refers to unfair hiring, placement, or related discrimination against a minority grp that occurs when members of the minority group consistently score lower on a predictor but perform approximately the same on the criterion as members of the majority group. (EEOC)
Potential cause of Adverse Impact bc members of the grp obtaining lower predictor scores will be hired less often.
Method for responding to adverse impact: When it's due to unfairness, use different predictor cutoff scores for members of different grps.

