Performance management Flashcards
Definition of PA
PA = managerial evaluation of an employee’s performance, often annual, in which the manager assesses the extent to which desired behaviors have been observed or achieved. Done for a variety of reasons, such as decision making, feedback, and providing a basis for pay decisions.
Taxonomies
• Most cited is Campbell’s 8 factors (1993). Seemed to be the granddaddy
o Job-specific task proficiency
o Non-job-specific task proficiency
o Written and oral communication
o Demonstrating effort
o Maintaining personal discipline
o Facilitating peer and team performance
o Supervision
o Management/administration
• Then Borman & Brush (1992) taxonomic structure – 18 factors of managerial performance developed from critical incidents
• Viswesvaran (1996) - general factor (p)
o Reliability only .52 for performance
• Borman & Motowidlo (1993; 1997) - they are the contextual perf boys
Spector and Fox = CWB
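The .52 interrater reliability figure above matters for validation work: observed predictor–criterion correlations are attenuated by criterion unreliability. A minimal sketch of the classical correction for attenuation (all numbers hypothetical except the .52):

```python
from math import sqrt

# Correction for attenuation due to criterion unreliability.
# r_yy = .52 is the interrater reliability reported for supervisory
# performance ratings; r_xy is a hypothetical observed validity.
r_xy = 0.30              # observed predictor-criterion correlation (hypothetical)
r_yy = 0.52              # criterion reliability (interrater reliability of ratings)

# Estimated operational validity if the criterion were measured without error:
rho = r_xy / sqrt(r_yy)  # ~.42
```

The point of the sketch: unreliable criterion ratings put a ceiling on observed validities, which is one reason the criterion problem matters for selection research.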
Importance of PM
We should care about performance management because it provides the basis for our criterion measures used in the evaluation of employees, but also in the evaluation of programs we design (e.g., training) as well as selection systems (Austin & Villanova, 1992). Performance management has a rich history that has progressed from looking specifically at rating formats to reduce rater error, to rater training, to a broadening criterion space (e.g., OCB and CWB), to a world that requires management of teams and cross-cultural groups more than ever before (Austin & Crespin). While there might not be one best performance management system, there are many aspects that can be used to evaluate a system and make it more effective (Bowman). Performance is most typically measured using supervisory ratings, which are subjective in nature. Multisource ratings (i.e., 360-degree ratings) have also become more prevalent.
How do orgs feel about PA?
Regardless of the system used, most are unhappy w/ it
Employees often feel they are not fairly assessed and many believe ratings are biased
Managers feel they receive inadequate training and don’t like being placed in the role of “judge.”
Often a feeling PAs are done b/c they have to be done, but that nothing productive comes out of the process (Murphy and Cleveland; Pulakos)
Trends
o Drive to concentrate on the context of the PA rather than just looking at raters as a source of variance.
o Adaptive performance (Pulakos, 2000)
o Drive to look at ratee reactions and motivation to improve rather than looking @ errors or accuracy
o Other?
How do we do PAs?
No one best way. Context determines what you should use. MBO, checklists, different rating scales,
critical incidents, BARS, BOS (behavior observation scales), compare individuals, ranking, & diaries.
New! FORS and CAT
The criterion problem (long, sorry)
Austin and Villanova, 1992
Has always been prevalent (should we update this to “1917-2014”???)
Issues with semantics – what does the term mean
-We focus on measuring our predictors but not sufficiently on measuring the criteria, and then we just assume it’s fine.
We need to consider the entire process
Need to consider how to evaluate performance:
(absolute/comparative; Judgment vs objective, hard vs soft criteria, multidim. nature)
Different methods: supervisor, self, 360 deg
Subjective – biases, time, things that can’t be seen, inferences. May gather things that objective may be deficient in.
Contamination, Deficiency, Relevance (to job performance– important at a fundamental level)
o Making decisions that are based on incorrect info
o Legal issues with devices loaded w/ biases and incorrect info (Title VII of Civil Rights Act)
o Different ways of looking @ it, examine at multiple levels of analysis
- Other: Halo, leniency, central tendency, recency & first impression, contrast effects (tendency to go away from ratings of the previous person), & assimilation effects (opposite of contrast effects).
Critical considerations (1) - the “Why”: purpose
a. Research
b. Feedback development
c. Training development (similar to feedback develop)
d. Performance evaluation (can also be at team level)
e. Organization planning (big picture performance that can lead to high-level decisions)
Critical considerations (2) - the “What”: content
a. Conceptual
b. Criteria
c. Multiple dimensions
Critical considerations (3) - the “When”: timing
a. Mid perf.
b. Post perf.
c. Repeated measures
* Performance is dynamic and changes over time
* End of year performance reviews and archival data may provide an inaccurate or outdated view of performance
Critical considerations (4) - the “Where”: fidelity
a. Fidelity = the “similarity between the . . . situation and the operational situation which is simulated” (p. 50). In the case of PM, this refers to how closely the measurement setting replicates the actual performance situation it is intended to represent.
b. Two dimensions:
(a) the physical characteristics of the measurement environment (i.e., the look and feel of the equipment and environment)
(b) the functional characteristics (i.e., the functional aspects of the task and equipment).
c. Depending on the purpose and nature of the performance measures, different levels of fidelity are more or less appropriate.
i. Example: If measurement is intended to capture day-to-day performance and job is highly dependent on various changes that occur in a fast-paced dynamic environment, a strictly controlled laboratory setting with a low level of fidelity may result in misleading findings
Critical considerations (5) - the “How”
a. Questionnaires
b. Observations
c. Simulations
Categories of errors
- Distributional errors – incorrect representations of perf. distributions across employees
a. Rating means: severity, leniency
b. Rating variance: range restriction, central tendency
- Illusory halo – correlations between ratings of two different dimensions being higher (or lower) than the correlation between the actual behaviors reflecting those dimensions
- And other (e.g., similar-to-me error, first impression error)
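These distributional errors can be made concrete with simple descriptive indices on a rater’s score matrix. A minimal sketch (data hypothetical; the flags are illustrative, not standard cutoffs):

```python
import numpy as np

# Hypothetical data: one rater's scores for 4 ratees on 3 dimensions, 1-7 scale.
ratings = np.array([[6, 6, 7],
                    [5, 6, 6],
                    [6, 7, 6],
                    [6, 6, 6]], dtype=float)

scale_midpoint = 4.0

# Leniency/severity: mean rating far above/below the scale midpoint.
leniency = ratings.mean() - scale_midpoint   # > 0 here, a leniency flag

# Central tendency / range restriction: very little spread in the ratings.
spread = ratings.std()

# Illusory halo: inter-dimension correlations computed across ratees.
halo_corr = np.corrcoef(ratings.T)           # 3x3 dimension intercorrelations
```

Note the halo caveat from the card above: high dimension intercorrelations are only *illusory* halo if they exceed the true intercorrelations among the behaviors, which is why halo is not automatically an error.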
Why is context important
- Other variables, such as situational factors, may constrain the translation from predictors to behaviors to results (Wildman, 2011)
- Also, see Murphy IOP
Has there been an overemphasis on accuracy? Explain. What else should we look @.
• Accuracy is important, but by focusing on it we’ve introduced difficult-to-understand systems designed to reduce bias that leave raters confused about what they’re rating. Perceived fairness (pull in justice lit.; Colquitt) is a more important goal (DeNisi and Sonesh)
o Issues with our rabbit-hole expedition of errors (halo isn’t always bad! Murphy and Balzer)
o Landy and Farr say rating formats don’t change
o All of these efforts (and more) were focused on perf. appraisal, not on performance management and actual performance improvement (little attn paid to affect, motivation, etc.)
o But, these goals were still on accuracy (mot. and affect accuracy)
o Ilgen, 1993 says accuracy might be the wrong idea altogether (also DeNisi)
o This led to shift of focus on perf. improvement and whether Es are motivated to improve; thus shift to whether Es perceive PAs as accurate and fair
Name the 4 operationalizations of accuracy and define them.
Elevation accuracy - avg for a rater across ratings (rater diffs); overall accuracy
Diff. elevation - avg for each target (ratee) across dimensions (differentiate b/w ratees)
* above two about ratees (elevation in title; ppl on elevators) * most important in administrative ratings (Hoffman et al., 2012)
Stereotype accuracy - avg for each dim. across targets (differentiate b/w dims across ratees)
*above about dimensions
Diff. accuracy - sensitivity to patterns of performance (diff both b/w and w/in ratee scores); ratee*dimension interaction; rater identifies patterns of strengths and weaknesses
* combo * most important in developmental (Hoffman et al., 2012)
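The four operationalizations come from Cronbach’s decomposition of rating-minus-true-score differences into grand-mean, ratee, dimension, and interaction pieces. A minimal sketch of how the components separate (ratings and “true scores” hypothetical, as in an accuracy study with expert target scores):

```python
import numpy as np

# Hypothetical: one rater's ratings of 4 ratees on 3 dimensions, plus
# "true scores" (e.g., expert-consensus targets from an accuracy study).
ratings = np.array([[5, 5, 3],
                    [2, 3, 2],
                    [5, 4, 4],
                    [3, 3, 3]], dtype=float)
true_scores = np.array([[4, 4, 3],
                        [3, 3, 2],
                        [5, 5, 4],
                        [3, 2, 3]], dtype=float)

def cronbach_components(x, t):
    """Squared-difference accuracy components (lower = more accurate)."""
    elevation = (x.mean() - t.mean()) ** 2            # grand-mean difference
    dx, dt = x - x.mean(), t - t.mean()               # center both matrices
    ratee_fx = dx.mean(axis=1) - dt.mean(axis=1)      # ratee main effects
    diff_elevation = np.mean(ratee_fx ** 2)
    dim_fx = dx.mean(axis=0) - dt.mean(axis=0)        # dimension main effects
    stereotype_acc = np.mean(dim_fx ** 2)
    # Differential accuracy: the ratee-by-dimension interaction residual.
    rx = dx - dx.mean(axis=1, keepdims=True) - dx.mean(axis=0, keepdims=True)
    rt = dt - dt.mean(axis=1, keepdims=True) - dt.mean(axis=0, keepdims=True)
    diff_accuracy = np.mean((rx - rt) ** 2)
    return elevation, diff_elevation, stereotype_acc, diff_accuracy

components = cronbach_components(ratings, true_scores)
```

The split mirrors the cards: elevation and differential elevation concern overall score levels and who out-ranks whom (administrative use), while differential accuracy captures each ratee’s profile of strengths and weaknesses (developmental use).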
Definition of CWB
Voluntary behavior that violates organizational norms and threatens the wellbeing of the org, its members or both (Robinson and Bennett)
2 dimensions:
o Abuse against others (emotion-based; e.g., incivility)
o Production deviance (sabotage, theft, withdrawal)
Spector and Fox some of first to do work in this area
What are the two main approaches to CWB?
Justice approach—CWB are a form of retaliation - Greenberg and colleagues
Emotion—events that create negative emotions lead to revenge - Spector and colleagues
***Note - Still need to look at CWB from a cognitive focus too (like Organ & Konovsky with OCBs)
What are the two ways of dealing with CWB?
o Non-punitive: alignment, corrective/constructive feedback, self-management training, positive discipline, EAPs
o Punitive—generally viewed as effective, but moderated by perceptions of control, just world beliefs, and negative affect
Also can lead to bad feelings, power play perceptions, etc.
Effects on observers can be negative depending on situation
Progressive discipline
Terminate as a last resort
• Consider at-will status
Common measurement issues in PA (Kline & Sulsky, 2009)
o Absolute vs comparative judgments
• Depends on purpose – developmental vs. promotions; expect diff user reactions
o Two ways efforts have gone to improve ratings?
• 1. Rater training
Rater error training, behavioral observation training, frame-of-reference (FOR) training
• 2. Rating formats – lots in the BARs format family
Behaviorally based, graphic
o Meaning of work perf – the criterion problem; cites back to A&V, 92
o Likes Campbell et al formulation or taxonomy of relevant perf dims
o Most theory/research focused on
Delineation of perf dims
Expected associated perf standards
o Rating Quality
Cronbach’s accuracy indices
o Includes team perf appraisal – messy and difficult
OCB guide
o Barnard, 1938 - willingness to contribute
o Organ, 1983 - narrow def. of perf –> OCB
o Williams and Anderson, 1991 - O vs. I; scale
o Borman and Motowidlo, 1993 - contextual perf. distinguished from task perf.
o Organ, 1997 - ok, OCB can be rewarded. Same as contextual
o Hanson and Borman (2006) say they’re identical
o Using same toothbrush
o Chiaburu - in role vs. out role distinction
OCB and context. P original defs.
o OCB definition: “individual behavior that is discretionary, not directly or explicitly recognized by the formal reward system, and that in the aggregate promotes the effective functioning of the org.” (Organ, 1988)
o CP definition: known for its more proximal effects, enhancing and sustaining the social, psychological, and org context of the cooperative system (Borman & Motowidlo, 1993)
o CP uses the terms interpersonal facilitation and job dedication to denote OCB-I/OCB-O; tends to have more consistent factorial structure than OCB
Name a couple of Williams and Anderson’s dimensions
e.g., courtesy, sportsmanship, peacemaking, civic virtue; and they did the OCB-I/O distinction
Lowdown on CWB- can you think of any studies we read??!
- In 2000, Bennett and Robinson created a scale to measure workplace deviance. Deviance happens!
- So, what causes it? We know that OCBs are more related to cognitive than affective (thank you Organ & Konovsky), but Spector & Fox say that we have to consider emotions in the formation of CWBs, and they proposed a model (but didn’t test it). But what about raters? Do they have any influence on ratings of CWB?
- Why yes, they do. Lievens et al found that culture and type of rater affected ratings. So raters rate differently (for task, OCB, and CWB) and this could partially be due to the culture and role they have.
- We also know that there are problems when people rate themselves. Stewart et al (2009) developed an other-report, multi-rater instrument, established its validity, and compared it to the self-report it was based on. Most employees are engaging in some type of subtle deviant behavior, but not frequently.
- We’ve learned that OCBs have a lot to do with social exchange, and CWBs do too, particularly with the breach of psychological contracts. Jensen et al were able to expand on which types of breaches were related to which types of CWB. Additionally, while we know that breach relates to CWBs, it appears that org policies have no effect on them.