I/O Flashcards
What are the 2 types of subjective measures of performance appraisal?
relative rating scales- require the rater to evaluate the employee by comparing them to other employees
absolute rating scales- rater evaluates employee without considering performance of other employees
What are 2 types of relative ratings scales for performance appraisal?
paired comparison technique- rater compares the employee to every other employee in pairs on each dimension of job performance (quality, knowledge, communication) by indicating which employee is best (advant: alleviates central tendency, leniency, and strictness biases; disadvant: very time consuming)
forced distribution method- rater assigns a certain percentage of employees to prespecified performance categories for each dimension of performance (advant: alleviates rater biases; disadvant: gives inaccurate info when employees don't match the prespecified categories)
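As a rough illustration of why the paired comparison technique is so time consuming, the number of required judgments grows quadratically with the number of employees. This is a hypothetical Python sketch, not part of any appraisal standard:

```python
def paired_comparisons(n_employees: int, n_dimensions: int) -> int:
    """Total judgments a rater must make: n*(n-1)/2 pairs per dimension."""
    pairs = n_employees * (n_employees - 1) // 2
    return pairs * n_dimensions

# 20 employees rated on 3 dimensions already requires 570 judgments.
print(paired_comparisons(20, 3))  # 570
```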
What are the 3 types of absolute rating scales for performance appraisal?
critical incident technique- identifying behaviors that are exceptionally poor or exceptionally good, by observing employees while they work or asking people familiar with the job; the list of critical incidents is then used to evaluate performance (advant: useful info for employee feedback because it is focused on observable behaviors; disadvant: time-consuming to develop, focuses on extremes, job specific)
graphic rating scale- uses a Likert scale to rate performance on several dimensions (advant: easy to construct; disadvant: vulnerable to rater biases)
behaviorally anchored rating scales (BARS)- a type of graphic rating scale in which each point is anchored with a description of a specific behavior (advant: reduces rater biases, provides useful info for employee feedback; disadvant: time consuming to develop, job specific)
Define ultimate and actual criterion for performance measures
ultimate criterion- ideal measure that assesses all important contributors to job performance
actual criterion- what a job performance measure actually measures
What are criterion deficiency and contamination and what do they contribute to?
they contribute to the gap between ultimate and actual criteria for performance measures
criterion deficiency- aspects of performance not assessed by the criterion
criterion contamination- criterion measure is affected by factors unrelated to job performance (like gender or race)
What are the 4 rater biases that occur in performance assessment?
distribution errors- raters consistently use only one part of the rating scale for all employees:
a. central tendency- gives everyone average ratings regardless of actual performance
b. leniency or strictness biases - consistently gives all employees high or low ratings
halo error- rater’s rating of employee on one dimension affects how they rate the employee on all other dimensions even when unrelated
contrast error- ratings of 1 employee are affected by the performance of a previously rated employee
similarity bias- raters give higher ratings to employees they perceive as similar to themselves
What methods can be used to reduce rater biases?
use relative rating scales (to reduce distribution errors), anchor the points on absolute rating scales (as in BARS), and provide raters with adequate training
Define behavioral interviews and situational interviews for employee selection
behavioral- based on assumption that past behavior is best predictor of future behavior, asks how they responded to job-related situations in the past
situational interviews- future-oriented, asks how they would respond to hypothetical situations
situational interviews have been found to be more predictive of job performance, suggesting that intentions are more predictive than past behaviors
What is the most valid predictor of job performance?
general mental ability tests, but they have a greater risk than other predictors of having adverse impact on minority applicants
(followed by interviews, job knowledge tests, integrity tests)
combining a general mental ability test with an integrity test gives the greatest gain in validity
What personality trait is most predictive of job performance?
conscientiousness
What is an integrity test?
used to predict if applicant is likely to engage in counterproductive behaviors. two types:
overt integrity tests- asks directly about attitudes towards dishonesty/theft and past history of these
personality-based integrity tests- assess personality aspects linked to dishonesty, etc
don't seem to have adverse impact on minorities; good predictors of counterproductive behaviors and job performance
overt better at predicting counterproductive behaviors
personality better at predicting job performance
What are assessment centers used for?
evaluating candidates for managerial level jobs
methods used: personality and ability tests, structured interviews, simulations
most commonly used simulation techniques: in-basket exercise and leaderless group discussion
Define biodata
biographical info that predicts job performance, including things like family history, health history, etc
good predictor of performance for variety of jobs
disadvant: some items lack face validity; applicants may view them as irrelevant or an invasion of privacy and not answer
Define compensatory and noncompensatory methods of combining info from multiple predictors of job performance
compensatory- appropriate when a high score on 1 or more predictors can compensate for a low score on another predictor (clinical prediction, multiple regression)
noncompensatory- used when a low score on one predictor cannot be compensated for by a high score on another predictor
a. multiple cutoff- all predictors are administered to all applicants, who must obtain a score above the cutoff on all predictors to be considered
b. multiple hurdles- measures are administered in order; applicant must score above the cutoff on each predictor to be administered the next one
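The contrast between the three approaches can be sketched in code. This is an illustrative Python sketch with made-up scores, weights, and cutoffs, not an actual selection battery:

```python
def compensatory(scores, weights):
    # High scores on one predictor can offset low scores on another:
    # each applicant gets a single weighted composite (as in multiple
    # regression), and hiring is based on that composite.
    return sum(s * w for s, w in zip(scores, weights))

def multiple_cutoff(scores, cutoffs):
    # Noncompensatory: every predictor is administered, and the
    # applicant must clear the cutoff on each one.
    return all(s >= c for s, c in zip(scores, cutoffs))

def multiple_hurdles(scores, cutoffs):
    # Noncompensatory and sequential: predictors are given in order,
    # and failing any hurdle ends the process (later predictors are
    # never administered).
    for s, c in zip(scores, cutoffs):
        if s < c:
            return False
    return True

applicant = [85, 60, 90]  # e.g., ability test, interview, work sample
print(compensatory(applicant, [0.5, 0.2, 0.3]))  # 81.5: strong composite
print(multiple_cutoff(applicant, [70, 65, 70]))  # False: interview below cutoff
```

Note how the same applicant passes under a compensatory rule but is screened out by the noncompensatory ones.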
How is incremental validity increased?
what are selection ratio and base rate?
a predictor is most likely to add to decision-making accuracy when its criterion-related validity coefficient is large
but even when it is low to moderate, a predictor can have incremental validity when the selection ratio is low and the base rate is moderate
selection ratio- percent of job applicants the company plans to hire; a low ratio is best because it means the company has more applicants to choose from
base rate- percent of employees who were hired using current selection procedures and are considered successful
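The two quantities are simple proportions. A sketch with hypothetical numbers (not from any real company):

```python
# 10 openings, 200 applicants; 50 of 80 current hires rated successful.
selection_ratio = 10 / 200  # 0.05 - low, so the company can be selective
base_rate = 50 / 80         # 0.625 - moderate
print(selection_ratio, base_rate)
```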
How are Taylor-Russell tables used to determine incremental validity?
gives an estimate of incremental validity for different combos of criterion-related validity, selection ratios, and base rates
Define test unfairness
when members of one group consistently obtain lower scores on a selection test but the score difference is not reflected in differences in scores on a measure of job performance
define differential validity and the 80% rule
when a selection test has different validity coefficients for members of different groups
80% rule- adverse impact is occurring when the hiring rate for a legally protected group is less than 80% of the hiring rate for the majority group
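The 80% rule is a ratio check. A minimal sketch with hypothetical hiring data (50 of 100 majority applicants hired vs. 15 of 50 protected-group applicants):

```python
def adverse_impact(protected_rate, majority_rate):
    # The 80% (four-fifths) rule: adverse impact is indicated when the
    # protected group's hiring rate is less than 80% of the majority's.
    return protected_rate / majority_rate < 0.80

print(adverse_impact(15 / 50, 50 / 100))  # True: 0.30 / 0.50 = 0.60 < 0.80
```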
What options does an employer have when the court determines their selection test is having an adverse impact?
replace the procedure, modify the procedure, or demonstrate that the procedure is job-related and that no alternative procedure would avoid an adverse impact. job relatedness is demonstrated by showing the procedure is valid, a business necessity, or a bona fide occupational qualification (BFOQ- necessary for maintaining normal business operations; a BFOQ can be gender, age, or religion, but not race)
What are the 4 analyses conducted as part of a needs analysis for developing a training program?
- organizational analysis- identify organizational goals, determine if performance probs due to lack of training, bad selection procedures or something else
- task analysis- identify the tasks required to perform the job and the knowledge, skills, abilities, and other characteristics (KSAOs) required to successfully perform each task
- person analysis- identify which employees have deficiencies that need training
- demographic analysis- identify the training needs of specific groups of workers (e.g., younger vs. older)
What are the 3 factors that affect learning?
- distributed vs. massed practice
- whole-task vs part-task (whole training more effective when task is highly organized and highly interrelated)
- overlearning- learning or practicing beyond point of mastery and results in automaticity
What are the 3 factors that affect transfer of training?
- identical elements- the more similar the training and work situations are, the greater the transfer of training
- stimulus variability- transfer is maximized when a variety of stimuli are used during training (multiple examples, practice in a variety of conditions)
- support- amount of support received for using new skills on the job
What are the 2 types of training evaluation and their components according to Scriven's model?
- formative evaluation- assists with development and improvement of a program
- summative evaluation- determine whether program outcomes met the program’s goals
components of formative eval:
1. needs assessment
2. evaluability assessment (determine if evaluation is possible and practical)
3. structured evaluation- define the program, its participants, and potential outcomes
4. implementation evaluation- monitor fidelity of the program
5. process eval- eval how program was delivered
components of summative eval:
1. outcome eval- if program achieved its goals
2. impact eval- identify intended and unintended effects of the program on the org
3. cost-effectiveness and cost-benefit analysis- assess outcomes in terms of costs and benefits
4. secondary analysis-identify new questions or methods not previously addressed
5. meta-analysis- integrate all outcome estimates and derive a summary evaluation
What is the Dessinger-Moseley Full-Scope Evaluation Model?
expands on Scriven's dichotomy with 4 types of eval:
1. formative- conducted during development of the training program to determine what changes are needed
2. summative- conducted soon after the training program to determine immediate effects
3. confirmative- conducted at later time to evaluate long-term effects
4. meta-evaluation- ongoing process done during and after the first 3 evals to assess reliability and validity