Performance management Flashcards
Definition of PA
PA = a managerial evaluation of an employee’s performance, often conducted annually, in which the manager assesses the extent to which desired behaviors have been observed or achieved. Done for a variety of reasons, such as decision making, feedback, and providing a basis for pay decisions.
Taxonomies
• Most cited is Campbell’s 8-factor taxonomy (1993). Seems to be the granddaddy of performance taxonomies
o Job-specific task proficiency
o Non-job-specific task proficiency
o Written and oral communication
o Demonstrating effort
o Maintaining personal discipline
o Facilitating peer and team performance
o Supervision
o Management/administration
• Then Borman & Brush’s (1992) taxonomic structure – 18 factors of managerial performance, developed from critical incidents
• Viswesvaran (1996) - general factor (p)
o Interrater reliability of supervisory ratings of overall performance is only ~.52 (see the attenuation sketch at the end of this list)
• Borman & Motowidlo (1993; 1997) - they are the contextual performance boys
Spector and Fox = CWB (counterproductive work behavior)
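The .52 figure is worth a quick worked example because it is the value typically plugged into the correction for attenuation when observed validities are corrected for criterion unreliability. A minimal sketch (standard psychometric formula, not something specific to Viswesvaran’s paper; the arithmetic is mine):
r_corrected = r_observed / \sqrt{r_yy}; with r_yy = .52, r_corrected = r_observed / \sqrt{.52} \approx 1.39 \times r_observed
e.g., an observed predictor–rating correlation of .30 corrects to roughly .42, and criterion unreliability alone caps observed validities at about \sqrt{.52} \approx .72.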
Importance of PM
We should care about performance management because it provides the basis for our criterion measures used in the evaluation of employees, as well as in the evaluation of the programs we design (e.g., training) and of selection systems (Austin & Villanova, 1992). Performance management has a rich history that has progressed from a narrow focus on rating formats to reduce rater error, to rater training, to a broadening of the criterion space (e.g., OCB and CWB), to a world that requires managing teams and cross-cultural groups more than ever before (Austin & Crespin). While there may be no one best performance management system, there are many aspects that can be used to evaluate a system and make it more effective (Bowman). Performance is most typically measured using supervisory ratings, which are subjective in nature. Multisource ratings (i.e., 360-degree ratings) have also become more prevalent.
How do orgs feel about PA?
Regardless of the system used, most organizations are unhappy w/ it
Employees often feel they are not fairly assessed and many believe ratings are biased
Managers feel they receive inadequate training and don’t like being placed in the role of “judge.”
Often a feeling that appraisals are done b/c they have to be done but that nothing productive comes out of the process (Murphy and Cleveland; Pulakos)
Trends
o Drive to concentrate on the context of the PA rather than just looking at raters as a source of variance.
o Adaptive performance (Pulakos, 2000)
o Drive to look at ratee reactions and motivation to improve rather than looking @ errors or accuracy
o Other?
How do we do PAs?
No one best way. Context determines what you should use. MBO, checklists, different rating scales,
critical incidents, BARS, BOS (behavior observation scales), compare individuals, ranking, & diaries.
New! FORS and CAT
The criterion problem (long, sorry)
Austin and Villanova, 1992
Has always been prevalent (should we update this to, “1917-2014”???)
Issues with semantics – what does the term “criterion” actually mean?
-We focus on measuring our predictors but not sufficiently on measuring the criteria, and then we just assume it’s fine.
We need to consider the entire process
Need to consider how to evaluate performance:
(absolute vs. comparative; judgmental vs. objective; hard vs. soft criteria; multidimensional nature)
Different methods: supervisor, self, 360 deg
Subjective – prone to biases, takes time, involves things that can’t be directly observed, requires inferences. May capture things that objective measures are deficient in.
Contamination (measured variance unrelated to the conceptual criterion), deficiency (parts of the conceptual criterion the measure misses), relevance (the overlap with actual job performance – important at a fundamental level)
o Making decisions that are based on incorrect info
o Legal issues with devices loaded w/ biases and incorrect info (Title VII of Civil Rights Act)
o Different ways of looking @ it, examine at multiple levels of analysis
- Other: halo, leniency, central tendency, recency & first impression, contrast effects (tendency to rate away from the previous ratee’s ratings), & assimilation effects (tendency to rate toward them).
Critical considerations (1) - the “Why”: purpose
a. Research
b. Feedback development
c. Training development (similar to feedback/development)
d. Performance evaluation (can also be at team level)
e. Organization planning (big picture performance that can lead to high-level decisions)
Critical considerations (2) - the “What”: content
a. Conceptual
b. Criteria
c. Multiple dimensions
Critical considerations (3) - the “When”: timing
a. Mid perf.
b. Post perf.
c. Repeated measures
* Performance is dynamic and changes over time
* End of year performance reviews and archival data may provide an inaccurate or outdated view of performance
Critical considerations (4) - the “Where”: fidelity
a. Fidelity = “similarity between the . . . situation and the operational situation which is simulated” (p. 50). In the case of PM, this refers to how closely the measurement setting replicates the actual performance situation it is intended to represent.
b. Two dimensions:
(a) the physical characteristics of the measurement environment (i.e., the look and feel of the equipment and environment)
(b) the functional characteristics (i.e., the functional aspects of the task and equipment).
c. Depending on the purpose and nature of the performance measures, different levels of fidelity are more or less appropriate.
i. Example: If measurement is intended to capture day-to-day performance and job is highly dependent on various changes that occur in a fast-paced dynamic environment, a strictly controlled laboratory setting with a low level of fidelity may result in misleading findings
Critical considerations (5) - the “How”
a. Questionnaires
b. Observations
c. Simulations
Categories of errors
- Distributional errors – incorrect representations of perf. distributions across employees
a. Rating means: severity, leniency
b. Rating variance: range restriction, central tendency
- Illusory halo – correlations between ratings of two different dimensions being higher (or lower) than the correlation between the actual behaviors reflecting those dimensions (see the sketch after this list)
- Other errors (e.g., similar-to-me error, first impression error)
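A minimal formalization of illusory halo (notation is mine, not from a particular source): with observed ratings R_A, R_B on dimensions A and B and the true behavior levels T_A, T_B,
\text{true halo} = r(T_A, T_B), \quad \text{observed halo} = r(R_A, R_B), \quad \text{illusory halo} = r(R_A, R_B) - r(T_A, T_B)
Correlated ratings are only “illusory” to the extent that they exceed (or fall short of) the correlation the underlying behaviors actually share – one reason halo isn’t automatically an error (cf. the Murphy and Balzer point under the accuracy question below).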
Why is context important?
- Other variables, such as situational factors, may constrain the translation from predictors to behaviors to results (Wildman, 2011)
- Also, see Murphy IOP
Has there been an overemphasis on accuracy? Explain. What else should we look @?
• Accuracy is important, but by focusing on it we’ve introduced difficult-to-understand systems and designed bias-reduction schemes that leave raters confused about what they’re rating. Perceived fairness (pull in the justice lit.; Colquitt) is a more important goal (DeNisi and Sonesh)
o Issues with our rabbit-hole expedition into rating errors (halo isn’t always bad! Murphy and Balzer)
o Landy and Farr (1980) say changing rating formats doesn’t do much to improve ratings (hence their call for a moratorium on format research)
o All of these efforts (and more) were focused on perf. appraisal, not on performance management and actual performance improvement (little attention paid to affect, motivation, etc.)
o But even where motivation and affect got attention, the goal was still accuracy (i.e., how motivation and affect bear on rating accuracy)
o Ilgen (1993) says accuracy might be the wrong idea altogether (also DeNisi)
o This led to a shift of focus toward perf. improvement and whether Es are motivated to improve; thus a shift to whether Es perceive PAs as accurate and fair
Name the 4 operationalizations of accuracy and define them.
Elevation accuracy - avg for a rater across ratings (rater diffs); overall accuracy
Diff. elevation - avg for each target across dimensions (differentiate b/w ratees, averaging over dimensions)
* above two about ratees (elevation in title; ppl on elevators) * most important in administrative ratings (Hoffman et al., 2012)
Stereotype accuracy - avg for each dimension across targets (differentiate b/w dimensions, averaging over ratees)
*above about dimensions
Diff. accuracy - sensitivity to patterns of performance (diff both b/w and w/in ratee scores); ratee*dimension interaction; rater identifies patterns of strengths and weaknesses
* combo of the two (ratee × dimension) * most important for developmental ratings (Hoffman et al., 2012)
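These four map onto Cronbach’s (1955) decomposition of a rater’s squared discrepancy from true scores. A sketch of the usual distance-score form (notation is mine; some papers report the square roots of these components instead): let x_{td} be the rating of target t on dimension d, T_{td} the matching true score, with n targets, k dimensions, and bars denoting means over the dotted index:
E = (\bar{x}_{..} - \bar{T}_{..})^2
DE = (1/n) \sum_t [(\bar{x}_{t.} - \bar{x}_{..}) - (\bar{T}_{t.} - \bar{T}_{..})]^2
SA = (1/k) \sum_d [(\bar{x}_{.d} - \bar{x}_{..}) - (\bar{T}_{.d} - \bar{T}_{..})]^2
DA = (1/nk) \sum_{t,d} [(x_{td} - \bar{x}_{t.} - \bar{x}_{.d} + \bar{x}_{..}) - (T_{td} - \bar{T}_{t.} - \bar{T}_{.d} + \bar{T}_{..})]^2
These are orthogonal pieces of the overall mean squared discrepancy, (1/nk) \sum_{t,d} (x_{td} - T_{td})^2 = E + DE + SA + DA, so lower = more accurate on that component: E tracks the rater’s overall level, DE the ordering of ratees, SA the ordering of dimensions, and DA the ratee × dimension patterning.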