Industrial Organizational Project Flashcards
Job Analysis
Job Analysis: Job analysis is a systematic procedure for “identifying how a job is performed, the conditions under which it is performed, and the personal requirements it takes to perform the job” (Aamodt, 2013, p. 599). A job analysis serves several functions in organizations including obtaining the information needed to write a job description, develop or identify appropriate job performance and selection measures, determine training needs, and make decisions about job design and redesign. Methods of obtaining information include observing employees while they perform the job; interviewing employees and supervisors about the job; having employees, supervisors, and others familiar with the job complete surveys and questionnaires; and using electronic performance monitoring.
A job analysis may be work-oriented or worker-oriented: A work-oriented job analysis focuses on the tasks that must be accomplished to achieve desired job outcomes. Task analysis is a work-oriented approach that involves having employees and supervisors develop a comprehensive list of job tasks, having subject matter experts rate the identified tasks in terms of frequency and importance, and then including tasks with high ratings in the job description. A worker-oriented job analysis focuses on the knowledge, skills, abilities, and other characteristics (KSAOs) that are required to accomplish job tasks. The Position Analysis Questionnaire (PAQ) is a worker-oriented job analysis questionnaire that addresses six categories of work activity: information input, mental processes, work output, relationships with other people, job context, and other characteristics.
Competency Modeling
Competency Modeling: Competency modeling is similar to job analysis but is somewhat different in focus. While job analysis focuses on the tasks and/or worker characteristics required to perform a specific job successfully, competency modeling is always worker-oriented and focuses on the core competencies (attributes) that are required to successfully perform all jobs or a subset of jobs within an organization and that are linked to the organization’s values, goals, and strategies. “Exhibiting the highest level of professional integrity at all times” and “staying current with the latest technological advances” are examples of core competencies (Muchinsky, 2012, p. 78). Competency modeling, like job analysis, serves several functions in organizations, including identifying appropriate job selection and performance measures, determining the content of training programs, and identifying future job requirements.
Job Evaluation
Job Evaluation: Job analysis is ordinarily the first step in job evaluation, but a job evaluation is conducted specifically to facilitate decisions related to compensation. It’s often used to establish comparable worth, which is the principle that workers performing jobs that require the same skills and responsibilities or that are of comparable value to the employer should be paid the same. Comparable worth has been applied primarily to the gender gap in wages.
The point system is a commonly used method of job evaluation. It involves determining the monetary value of a job by assigning points to the job’s “compensable factors” (e.g., effort, skill, responsibility, working conditions), summing the points to derive a total score, and using the total score to determine the appropriate compensation for the job.
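The point-system arithmetic described above can be sketched in a few lines of Python. The factor names, point assignments, and dollars-per-point rate below are all hypothetical, invented for illustration; an actual plan would derive them from the organization's pay policy.

```python
# Hypothetical point-system job evaluation. Every number here is an
# assumed value for illustration, not a standard.
compensable_factors = {
    "effort": 30,
    "skill": 45,
    "responsibility": 40,
    "working_conditions": 15,
}

# Sum the points across compensable factors to get the job's total score.
total_points = sum(compensable_factors.values())

# Translate the total score into compensation via an assumed pay rate.
dollars_per_point = 500
annual_salary = total_points * dollars_per_point

print(total_points)   # 130
print(annual_salary)  # 65000
```

Jobs with higher total scores would be mapped to higher pay grades, which is how the method supports comparable-worth comparisons across different jobs.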
Performance Assessment
Performance Assessment – Criterion Measures: Measures of job performance are also referred to as criterion measures. They serve several functions in organizations, including providing employees with feedback about their performance and evaluating employee performance to obtain the information needed to make decisions about raises, promotions, etc.
Types of Performance Appraisal Measures
Types of Performance Appraisal Measures: Performance appraisal measures are categorized as objective or subjective: Objective measures usually provide quantitative information and include direct measures of productivity and number of errors, accidents, and absences. Objective measures can provide important information, but they’re not available for certain jobs, they don’t always provide complete information about employee performance, and they can be affected by situational factors such as inadequate resources or support. Subjective measures take the form of performance ratings and are the most commonly used performance measures in organizations. Advantages of subjective measures are that they can provide information on aspects of performance that cannot be assessed with an objective measure, they allow raters to take situational factors that affect performance into account, and they can provide information that’s useful for giving employees feedback about their performance. A major disadvantage is that they can be affected by rater biases and errors.
Subjective Rating Scales
Subjective Rating Scales: There are two types of subjective rating scales – relative and absolute. Relative rating scales require the rater to evaluate an employee by comparing the employee to other employees, while absolute rating scales require the rater to evaluate an employee without considering the performance of other employees. Relative rating scales include the paired comparison technique and the forced distribution method:
(a) When using the paired comparison technique, the rater compares each employee to all other employees in pairs on each dimension of job performance (e.g., work quality, job knowledge, communication) by indicating which employee is best. An advantage of this technique is that it alleviates central tendency, leniency, and strictness rater biases; a disadvantage is that it can be very time-consuming to use when there are many employees to rate.
(b) The forced distribution method requires the rater to assign a certain percent of employees to prespecified performance categories for each dimension of job performance. For example, it might require that 10% of employees be assigned to the poor performance category, 20% to the below average performance category, 40% to the average performance category, 20% to the above average performance category, and 10% to the excellent performance category. The forced distribution method alleviates central tendency, leniency, and strictness rater biases, but it provides inaccurate information when the performance of employees does not match the prespecified categories (e.g., when all employees are performing at the average or above average level).
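The forced distribution method amounts to ranking employees on a dimension and slicing the ranking into the prespecified percentages. The sketch below uses the 10/20/40/20/10 split from the example; the employee names and raw scores are invented.

```python
# Sketch of the forced distribution method for one performance dimension.
# Names and raw scores are hypothetical.
scores = {"A": 91, "B": 84, "C": 77, "D": 70, "E": 66,
          "F": 62, "G": 58, "H": 55, "I": 49, "J": 40}

# Prespecified categories and the percent of employees forced into each.
categories = [("excellent", 0.10), ("above average", 0.20),
              ("average", 0.40), ("below average", 0.20), ("poor", 0.10)]

# Rank employees from best to worst, then fill each category in order.
ranked = sorted(scores, key=scores.get, reverse=True)
assignments, start = {}, 0
for label, pct in categories:
    n = round(pct * len(ranked))
    for name in ranked[start:start + n]:
        assignments[name] = label
    start += n

print(assignments["A"])  # excellent
print(assignments["E"])  # average
```

Note how the sketch exposes the method's weakness: even if employees E through G were all performing well, the distribution would still force some of them into the middle and bottom categories.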
Absolute rating scales include the critical incident technique, graphic rating scales, and behaviorally anchored rating scales:
(a) The critical incident technique (CIT) is a method of both job analysis and performance assessment. It involves identifying employee behaviors that are associated with exceptionally poor and exceptionally good performance by observing employees while they work or by interviewing people familiar with the job. The list of “critical incidents” is then used to evaluate performance by checking those that apply to each employee. An advantage of CIT is that it provides useful information for employee feedback because it focuses on observable behaviors. Disadvantages are that it can be time-consuming to develop, it focuses on extreme (rather than typical) behaviors, and it’s job-specific, which means that new critical incidents must be identified for different jobs.
(b) When using a graphic rating scale, the rater rates an employee’s performance on several performance dimensions on a Likert-type rating scale – e.g., from 1 (poor performance) to 5 (excellent performance). An advantage of graphic rating scales is that they’re easy to construct; a disadvantage is that they’re vulnerable to rater biases.
(c) Behaviorally anchored rating scales (BARS) are a type of graphic rating scale in which each point on a scale is “anchored” with a description of a specific behavior. A distinguishing characteristic of BARS is their development, which involves having job incumbents, supervisors, and other subject matter experts identify essential dimensions of job performance and specific behaviors for each dimension that are associated with good, average, and poor performance. Advantages of BARS are that the behavioral anchors help reduce rater biases and provide information that’s useful for employee feedback. Disadvantages are that they’re time-consuming to develop and job-specific.
Ultimate versus Actual Performance Measures
Ultimate versus Actual Performance Measures: Descriptions of job performance measures often distinguish between ultimate and actual criteria. The ultimate criterion is an ideal measure that assesses all of the important contributors to job performance, while the actual criterion is what a job performance measure actually measures. Criterion deficiency and criterion contamination are two reasons for the gap between ultimate and actual criteria: Criterion deficiency refers to aspects of performance that are not assessed by the criterion. For instance, a job knowledge test for clinical psychologists would be deficient if it includes questions on psychopathology and clinical psychology but not ethics. Criterion contamination occurs when the criterion measure is affected by factors unrelated to job performance – for example, when a supervisor’s ratings of employees on the criterion are affected by an employee’s gender or race or by the supervisor’s knowledge of how well the employee did on the predictors that were used to hire him or her.
Performance Assessment – Rater Biases
Performance Assessment – Rater Biases: Distribution errors, the halo error, the contrast error, and the similarity bias are rater biases that can affect the accuracy of subjective performance ratings.
(a) Distribution errors occur when raters consistently use only one part of the rating scale when rating all employees: The central tendency bias occurs when the rater consistently gives all employees average ratings regardless of their actual performance. The leniency and strictness biases occur when the rater consistently gives all employees high ratings or low ratings, respectively, regardless of their actual performance.
(b) The halo error is also known as the halo effect and halo bias. It occurs when a rater’s rating of an employee on one dimension of job performance affects how the rater rates the employee on all other dimensions, even when they’re unrelated to that dimension. The halo error can be positive or negative: A supervisor’s ratings are affected by a positive halo error when the supervisor highly values cooperation and rates employees who are very cooperative high on all dimensions of job performance and is affected by a negative halo error when the supervisor rates employees who are uncooperative low on all dimensions of performance.
(c) The contrast error occurs when a rater’s ratings of an employee are affected by the performance of a previously evaluated employee. For example, a supervisor’s ratings are affected by the contrast error when the supervisor gives an average employee below average ratings because she rated an excellent employee immediately before rating the average employee.
(d) The similarity bias occurs when raters give higher ratings to ratees they perceive to be similar to themselves.
Methods for reducing rater biases include using relative rating scales, anchoring points on an absolute rating scale with descriptions of specific job behaviors, and providing raters with adequate training. Using relative (rather than absolute) rating scales is most useful for eliminating distribution errors (central tendency, leniency, and strictness biases) since relative scales require raters to give some employees higher or lower ratings than they give to other employees. Anchoring points on an absolute rating scale with descriptions of specific job behaviors helps reduce distribution errors and other biases by clarifying the meaning of each point on the scale. And providing adequate rater training is the best way to reduce rater biases as well as other factors that decrease rater accuracy on relative and absolute rating scales. Note that research has found that focusing only on rater biases during training can actually reduce overall accuracy and that a better approach is to provide frame-of-reference (FOR) training (Guion, 1998). FOR training includes ensuring that trainees understand the multidimensional nature of job performance and the organization’s definition of successful and unsuccessful performance and giving trainees opportunities to practice assigning ratings and receive feedback about their rating accuracy.
Types of Selection Techniques
Types of Selection Techniques: Selection techniques are often referred to as predictors. Commonly used predictors in organizations include interviews, general mental ability tests, personality tests, integrity tests, work samples, assessment centers, and biographical information.
Interviews
Interviews: Interviews can be unstructured or structured. When conducting an unstructured interview, interviewers can ask whatever questions they want to ask and do not necessarily ask all applicants the same questions. When conducting a structured interview, all interviewees are asked the same questions, which may be derived from the results of a job analysis, and their answers are scored using a standardized scoring key. Behavioral and situational interviews are types of structured interviews: Behavioral interviews are based on the assumption that past behavior is the best predictor of future behavior and consist of questions that ask interviewees how they responded to specific job-related situations in the past. Situational interviews are future-oriented and consist of questions that ask interviewees how they would respond to hypothetical situations. Not surprisingly, structured interviews have been found to be better than unstructured interviews for predicting job performance, with some studies finding that behavioral interviews are more valid than situational interviews (e.g., Taylor & Small, 2002).
General Mental Ability Tests
General Mental Ability Tests: General mental ability tests are also known as cognitive ability tests and are among the most frequently used selection techniques. They have been found to be the most valid predictors of job performance across a variety of jobs, performance criteria, and organizations and good predictors of training success and level of occupational attainment (Hunter & Schmidt, 2004; Schmidt & Hunter, 1998). A disadvantage of these tests is that they are associated with a greater risk than other valid predictors of job performance for adverse impact for members of some racial/ethnic minority groups.
Personality Tests
Personality Tests: Many personality tests used to facilitate selection decisions in organizations assess the Big Five personality traits – i.e., conscientiousness, openness to experience, extraversion, agreeableness, and emotional stability. Of these traits, conscientiousness has been found to be the best predictor of job performance across different jobs and different performance criteria (Schmidt & Hunter, 1998).
Integrity Tests
Integrity Tests: Integrity tests are used to predict whether an applicant is likely to engage in counterproductive behaviors. There are two basic types of integrity tests: Overt integrity tests ask directly about attitudes toward and previous history of dishonesty and theft, while personality-based integrity tests assess aspects of personality that have been linked to dishonesty, disciplinary problems, sabotage, and other counterproductive behaviors. Integrity tests do not seem to have an adverse impact for racial/ethnic minorities, and they have been found to be good predictors of counterproductive behavior and, to a somewhat lesser degree, job performance (Heneman, Judge, & Kammeyer-Mueller, 2014).
Work Samples
Work Samples: Work samples require job applicants to perform tasks they would actually perform on the job. They have been found to have even higher validity coefficients than general mental ability tests do but, unlike general mental ability tests, they’re job-specific (Schmidt & Hunter, 1998). While traditional work samples are useful for experienced applicants, trainability work sample tests incorporate periods of training and evaluation and are useful for determining if inexperienced applicants are likely to benefit from training. A work sample is often included as part of a realistic job preview (RJP), which involves informing job applicants about the positive and negative aspects of the job in order to reduce the risk for turnover after they’re hired by ensuring they have realistic job expectations.
Assessment Centers
Assessment Centers: Assessment centers are most often used to evaluate candidates for managerial-level jobs and involve having multiple raters rate candidates on several performance dimensions using multiple methods. Methods include personality and ability tests, structured interviews, and simulations (work samples). The in-basket exercise and leaderless group discussion are two of these simulations: The in-basket exercise is used to assess decision-making skills and requires participants to respond to memos, phone messages, and other communications that are similar to those they would encounter on the job. The leaderless group discussion is used to evaluate the leadership potential of participants and requires a small group of participants to work together without an assigned leader to solve a job-related problem.
Biographical Information
Biographical Information: Biographical information is obtained by most organizations as part of the selection process and, when it’s obtained from empirically derived biographical information blanks (BIBs), is referred to as biodata. The questions included in BIBs ask not only about an applicant’s education and work history but also about other issues that predict job performance (e.g., family history, health history, interests, social relationships). Questions are presented in a multiple-choice format or other format that can be easily scored, and an applicant’s scores are used to predict productivity and other job outcomes. A disadvantage of BIBs is that, while questions have been found to be job-related, some may lack face validity. As a result, applicants may consider these questions to be irrelevant to job performance and refuse to answer them.
Combining Selection Techniques
Combining Selection Techniques: No single predictor is likely to be adequate for making accurate hiring decisions, and organizations ordinarily use multiple predictors. The methods for combining information obtained from multiple predictors are divided into two types – compensatory and noncompensatory: Compensatory methods are appropriate when a high score on one or more predictors can compensate for a low score on another predictor. Included in this category are clinical prediction and multiple regression: Clinical prediction relies on the subjective judgment of decision makers, who use their familiarity with job requirements to determine if an applicant’s predictor scores qualify the applicant for the job. The major disadvantage of this method is that it’s susceptible to biases and errors, and studies have confirmed that statistical methods for combining scores are more accurate than clinical prediction for predicting job performance. Multiple regression is a statistical method for combining scores. When using this method, each predictor is weighted on the basis of its correlations with the other predictors and with the criterion, and the weighted predictor scores are combined to obtain an estimated criterion score.
Noncompensatory methods are used when a low score on one predictor cannot be compensated for by a high score on another predictor. Included in this category are multiple cutoff and multiple hurdles. When using multiple cutoff, all of the predictors are administered to all applicants, and applicants must obtain a score that’s above the cutoff score on each predictor to be considered for the job. Multiple hurdles is similar to multiple cutoff except that predictors are administered in a prespecified order and the applicant must obtain a score above the cutoff on each predictor in order for the next predictor to be administered. Multiple hurdles is preferable to multiple cutoff when it would be too costly to administer all of the predictors to all applicants. Multiple cutoff and multiple hurdles can be combined with multiple regression by using multiple regression to predict the criterion scores of applicants who score above the cutoff score on all of the predictors.
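The contrast between multiple cutoff and multiple hurdles comes down to when each predictor is administered. The sketch below makes that concrete; the predictor names, cutoff scores, and applicant scores are all hypothetical.

```python
# Sketch contrasting multiple cutoff with multiple hurdles.
# Predictor names, cutoffs, and applicant scores are invented.
cutoffs = {"cognitive_test": 70, "interview": 60, "work_sample": 65}

def multiple_cutoff(applicant_scores):
    """All predictors are administered; the applicant must clear every cutoff."""
    return all(applicant_scores[p] >= c for p, c in cutoffs.items())

def multiple_hurdles(applicant, administer):
    """Predictors are given in a fixed order; stop at the first failed cutoff."""
    for predictor, cutoff in cutoffs.items():
        if administer(applicant, predictor) < cutoff:
            return False  # remaining predictors are never administered
    return True

scores = {"cognitive_test": 75, "interview": 55, "work_sample": 80}

# Multiple cutoff: everything was administered, applicant fails the interview.
print(multiple_cutoff(scores))  # False

# Multiple hurdles: the work sample is never given, which is where the
# cost savings come from when predictors are expensive to administer.
administered = []
def give_test(applicant, predictor):
    administered.append(predictor)
    return scores[predictor]

print(multiple_hurdles("applicant X", give_test))  # False
print(administered)  # ['cognitive_test', 'interview']
```

The same structure shows why the two methods are noncompensatory: the high work-sample score (80) never rescues the failed interview, whereas a compensatory method such as multiple regression would let it offset the low score.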
Reliability and Validity
Reliability and Validity: Reliability and validity are two standards that are used to judge the adequacy of a predictor. Reliability refers to the degree to which a predictor is free from the effects of measurement (random) error and, as a result, provides consistent scores. The various methods for evaluating reliability assess the consistency of scores over time, across different forms or items, or across different scorers, and most produce a reliability coefficient, which is a type of correlation coefficient. The reliability coefficient ranges from 0 to 1.0 and, the closer the coefficient is to 1.0, the less the effect of measurement error and the greater the consistency of scores.
Knowing that a predictor is reliable only indicates that, whatever the predictor measures, it does so consistently. To determine if the predictor actually measures what it was designed to measure, its validity must be assessed. There are three main types of validity and each type is evaluated using different methods. For many predictors used to make selection and other employment decisions, more than one type of validity is evaluated.
(a) Content validity refers to the extent to which a predictor adequately samples the knowledge or skills it’s intended to measure. Basing a predictor’s content on the results of a job analysis and having subject matter experts review the content help ensure that it has an acceptable level of content validity. Job knowledge tests and work samples should have adequate content validity.
(b) Construct validity refers to the extent to which a predictor measures the construct (hypothetical trait) it was designed to measure. A predictor’s construct validity is assessed in several ways including correlating scores on the predictor with scores on valid measures of the same, similar, and different constructs. Intelligence tests and personality tests should have adequate construct validity.
(c) Criterion-related validity refers to the degree to which scores on the predictor correlate with scores on the criterion. It’s evaluated by correlating predictor and criterion scores obtained by individuals in a tryout sample to obtain a criterion-related validity coefficient. This coefficient ranges from -1.0 to +1.0 and, the closer it is to 0, the lower the predictor’s criterion-related validity. When an organization’s goal is to use applicants’ scores on a predictor to estimate or predict their scores on the criterion to facilitate hiring decisions, it would be important to assess the predictor’s criterion-related validity.
Additional information about reliability and validity is provided in the Test Construction questions and content summaries.
Incremental Validity
Incremental Validity: Incremental validity refers to the increase in decision-making accuracy that occurs by adding a new selection technique (predictor) to the existing selection procedure. A predictor is most likely to increase decision-making accuracy when its criterion-related validity coefficient is large. However, even when a predictor’s validity coefficient is low to moderate, it can have incremental validity when the selection ratio is low and the base rate is moderate:
(a) The selection ratio is the percent of job applicants the company plans to hire and is calculated by dividing the number of applicants that will be hired by the total number of applicants. For example, a selection ratio of .10 is low and means that one of 10 applicants will be hired, while a selection ratio of .90 is high and means that nine of 10 applicants will be hired. A low selection ratio is best because it means the company has more applicants to choose from.
(b) The base rate is the percent of employees who were hired using the current selection procedure and are considered successful. A moderate base rate (around .50) is associated with the greatest increase in decision-making accuracy because, when the base rate is high, adding a new predictor probably won’t have much effect since the current procedure is adequate. And when the base rate is low, this suggests that something other than the selection procedure (e.g., inadequate training) is the problem because it’s not likely that use of the current procedure results in choosing the least suitable applicants.
The Taylor-Russell tables are used to obtain an estimate of a predictor’s incremental validity for various combinations of criterion-related validity coefficients, base rates, and selection ratios. For example, when a predictor’s criterion-related validity coefficient is .30 (a fairly low coefficient), the base rate is .50 (50% of current employees are successful), and the selection ratio is .10 (one of every 10 applicants will be hired), the Taylor-Russell tables indicate that the addition of the new predictor will result in 71% of hired employees being successful. This means that there will be a 21% increase in successful employees when the new predictor is added to the current selection procedure (71% - 50% = 21%).
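The arithmetic in the Taylor-Russell example can be laid out explicitly. Note that the .71 success rate is a value looked up in the published tables for that combination of inputs, not something computed from a formula here.

```python
# Worked arithmetic from the Taylor-Russell example above.
validity_coefficient = 0.30  # criterion-related validity of the new predictor
base_rate = 0.50             # proportion of current hires considered successful
selection_ratio = 0.10       # 1 of every 10 applicants will be hired

# Value read from the Taylor-Russell tables for (.30, .50, .10):
table_success_rate = 0.71

# Incremental validity expressed as the gain in the success rate.
incremental_gain = table_success_rate - base_rate
print(f"{incremental_gain:.2f}")  # 0.21 -> a 21% increase in successful hires
```

This is why a modest validity coefficient can still pay off: with a low selection ratio, the organization only hires applicants at the very top of the predictor distribution, where even a weak predictor separates successes from failures reasonably well.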
Adverse Impact
Adverse Impact: Adverse impact is also referred to as disparate impact and is “a type of unfair discrimination in which the result of using a particular personnel selection method has a negative effect on protected group members compared with majority group members” (Muchinsky, 2012, p. 138). Adverse impact is addressed in the Uniform Guidelines on Employee Selection Procedures, which was adopted in 1978 by the Equal Employment Opportunity Commission (EEOC) and other government agencies responsible for enforcing equal employment opportunity laws. The Uniform Guidelines on Employee Selection Procedures (1978) and Uniform Guidelines on Employee Selection Procedures Interpretation and Clarification (1979) describe test unfairness and differential validity as situations that can cause adverse impact:
(a) Test unfairness occurs when members of one group consistently obtain lower scores on a selection test or other employment procedure but the score difference is not reflected in differences in scores on a measure of job performance. Test unfairness is occurring when men and women receive similar ratings on a measure of job performance but, for some reason, women consistently obtain lower scores on the selection test and, as a result, are hired less frequently than men are when the test is used to make hiring decisions.
(b) Differential validity occurs when a selection test or other employment procedure has significantly different validity coefficients for members of different groups. A selection test has differential validity, for instance, when its criterion-related validity coefficient is .70 for men but .20 for women.
The Uniform Guidelines describes the 80% rule (also known as the four-fifths rule) as a method for determining if a selection test is having an adverse impact. When using the 80% rule, adverse impact is occurring when the hiring rate for a legally protected group is less than 80% of the hiring rate for the majority group. As an example, if the hiring rate for White applicants is 70%, the minimum hiring rate for African-American applicants is 56% (.70 times .80 = .56).
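The 80% rule check from the example is easy to express directly. The hiring rates below are the ones from the example; the function name is made up for the sketch.

```python
# Sketch of the 80% (four-fifths) rule using the example's hiring rates.
def adverse_impact(protected_rate, majority_rate):
    """True when the protected group's hiring rate falls below
    four-fifths (80%) of the majority group's hiring rate."""
    return protected_rate < 0.80 * majority_rate

majority_rate = 0.70  # hiring rate for White applicants in the example

# Minimum permissible hiring rate for the protected group: .70 x .80 = .56
threshold = 0.80 * majority_rate
print(round(threshold, 2))  # 0.56

print(adverse_impact(0.50, majority_rate))  # True  -- .50 is below .56
print(adverse_impact(0.60, majority_rate))  # False -- .60 clears .56
```

Failing the four-fifths check does not by itself prove unlawful discrimination; it is the trigger for the employer to justify the procedure, as the options below describe.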
When a court determines that a selection test or other employment procedure is having an adverse impact, the employer has several options: The employer can use another procedure that doesn’t have an adverse impact, can modify the procedure so it no longer has an adverse impact, or can demonstrate that use of the procedure is a business necessity or a bona fide occupational qualification: An employment procedure is a business necessity when it’s job-related and necessary for the safe and efficient operation of the business. For example, a company may discriminate against people with certain physical disabilities if, because of the nature of the job, their disabilities are likely to cause safety risks for employees or customers. An employment requirement is a bona fide occupational qualification (BFOQ) when it’s necessary to maintain normal business operations. Religion is a BFOQ, for example, when a religious high school requires its faculty to be members of its denomination. BFOQ may apply to gender, age, religion, and national origin but not race.