L2: Job Performance Flashcards

1
Q

define criteria

A

evaluative standards or yardsticks used to assess an employee's
- success & failure (performance on the job)
- attitude
- motivation

2
Q

what are criteria used for?

A

selection, placement, promotion, performance evaluation, and succession planning
- predictive purposes (does high openness to experience lead to higher creativity?)
- evaluative (how effective was the course?)

3
Q

what’s the difference between predictors and criteria?

A
  • predictors: measured before an employment decision (ex: aptitude tests)
  • criteria: measured after an employment decision (ex: job performance ratings)
4
Q

what are some necessary characteristics of criteria?

A
  • relevance: measures important aspects of performance
  • sensitivity: differentiates between effective & ineffective employees
  • practicality: feasible to measure without excessive time & costs
5
Q

what are the 3 dimensions of criteria?

A
  • static
  • dynamic
  • individual
    the 3 ways in which ppl should be evaluated
6
Q

what is the static (fixed) dimensionality of criteria?

A

job performance at a single point in time (task performance, contextual performance, counterproductive work behaviours)
ex: call center evaluates employees based on how many calls they handled today. this is their performance criteria.

7
Q

what is the dynamic or temporal dimensionality of criteria?

A

looks at how performance changes over time. this change can be due to
- validity shifts (eg tech changes job requirements)
- rank ordering shifts (top performers may change due to learning, burnout etc)
ex: company evaluates salesperson over the years and sees growth from year 1 to year 3

8
Q

what is the individual dimensionality of criteria?

A

same job done by 2 ppl, yet unique contributions
not everyone excels in the same way
employees with the same job may contribute differently: some may be excellent planners, while others are great at execution

9
Q

define contextual behaviours/performance

A

behaviours that contribute to organizational effectiveness by providing a good environment in which task performance can occur (teamwork, initiative, prosocial behaviour, willingness to help train new employees, willingness to work late etc)

10
Q

define counterproductive work behaviours CWB

A

Actions that harm the organization (e.g., absenteeism, theft, workplace aggression)

11
Q

how do you measure counterproductive behaviours?

A

objective data (like sales volume) and subjective assessments (eg performance appraisals)

12
Q

what is typical performance?

A

average level of performance
day-to-day kind of performance (what employees do)

13
Q

what is maximum performance?

A

peak level of performance that can be achieved. highly motivated
what employees can do
depends on the context (ex: in an interview you would see max performance)

14
Q

what is the correlation between typical & max performance

A

.33

15
Q

how much do ability & motivation influence max & typical performance?

A

ability influences both max & typical performance
but motivation only influences typical performance

since at max performance, levels of motivation are similar between ppl so what differentiates performance levels is ability

16
Q

what are 3 major challenges in developing criteria that address challenges in performance measurement?

A
  • job performance unreliability
  • observation unreliability
  • multidimensionality of performance aka the criterion problem (which leads to criterion contamination & criterion deficiency)
17
Q

define criterion contamination

A

when non performance-related factors influence the measure
due to error (random variation), or bias (systematic variation)

18
Q

define criterion deficiency

A

when a criterion does not address all critical aspects of successful job performance (misses key performance indicators)
ex: a professor assessed only on research output (while they also teach & do administration)

19
Q

what is job performance unreliability?

A

employee performance varies due to internal factors like motivation & training (intrinsic unreliability) or external factors (extrinsic unreliability)

20
Q

what is observation unreliability?

A

supervisors may not always accurately observe or record performance

21
Q

what is multidimensionality of performance?

aka criterion problem

A

Difficulties in conceptualizing and measuring performance accurately due to its multidimensional and dynamic nature.
this leads to criterion deficiency & criterion contamination

22
Q

how do you develop criteria that address multidimensionality of performance?

A

using multiple raters, training evaluators, and employing both objective and subjective performance indicators
- use multiple criteria for research (different skills require different criteria & scores) - to understand which skills are required in which jobs
- use composite scores for estimating "overall success" & for most administrative decisions

23
Q

how do you address lack of reliability in measuring performance?

A

aggregate scores over time (evens out fluctuations)
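As a minimal sketch (all numbers are invented), aggregating over review periods can be as simple as averaging each period's rating:

```python
# Hypothetical quarterly ratings for one employee on a 1-5 scale.
quarterly_ratings = [3.0, 4.5, 3.5, 4.0]

# Any single quarter is noisy; the mean over the year evens out fluctuations.
annual_score = sum(quarterly_ratings) / len(quarterly_ratings)
print(annual_score)  # 3.75
```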

24
Q

what are some situational determinants of performance?

A
  • Organizational factors (e.g., leadership, policies).
  • Environmental factors (e.g., economic conditions).
  • Job characteristics (e.g., location, work schedule)
25
how do you evaluate criteria using relevance?
does it measure important aspects of performance?
26
how do you evaluate criteria using sensitivity?
can it distinguish between high & low performers
27
how do you evaluate criteria using practicality?
can it be implemented efficiently without excessive costs
28
what is criterion deficiency?
Failing to measure important aspects of a job (e.g., evaluating teachers only on student grades but not engagement).
29
what is criterion contamination?
Including irrelevant factors in measurement (e.g., evaluating employees based on personal bias)
30
when do you use composite criteria vs multiple criteria?
composite: when you want to combine various aspects into one score (useful for decision making)
multiple: when you want to treat each aspect separately (useful for research)
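A composite criterion can be sketched as a weighted sum of the separate dimension scores (the dimension names and weights below are purely illustrative):

```python
# Hypothetical dimension scores (1-5 scale) and weights chosen by the organization.
scores = {"task": 4.0, "contextual": 3.0, "cwb_avoidance": 5.0}
weights = {"task": 0.5, "contextual": 0.3, "cwb_avoidance": 0.2}

# Multiple criteria: keep each dimension score separate (useful for research).
# Composite criterion: collapse them into one number for administrative decisions.
composite = round(sum(scores[d] * weights[d] for d in scores), 2)
print(composite)  # 3.9
```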
31
should you use composite or multiple criteria to solve problems like criterion deficiency & contamination?
use a mix of both to ensure accurate performance assessment
32
whats the difference between observed and unobserved criteria and their antecedents?
Observed criteria: measurable aspects of performance (e.g., sales numbers, performance ratings)
Unobserved criteria: aspects that contribute to success but are difficult to measure (e.g., creativity, adaptability)
Organizations should ensure indirect indicators (e.g., supervisor evaluations) do not introduce bias
33
what are nonnormal distributions of performance?
Performance distributions are not always normal; they often follow a Pareto distribution (20% of employees contribute 80% of results).
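To illustrate (figures invented), the Pareto pattern just means the output distribution is heavily skewed toward a few people:

```python
# Hypothetical yearly output per employee, sorted from highest to lowest.
output = [500, 460, 40, 40, 35, 30, 30, 25, 20, 20]

top_20_percent = output[: len(output) // 5]   # top 2 of 10 employees
share = sum(top_20_percent) / sum(output)
print(f"top 20% produce {share:.0%} of total output")  # 80%
```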
34
what are star performers & their implications for nonnormal distributions of performance?
star performers: highly productive employees responsible for a disproportionate share of success
implications: identifying and retaining star performers should be a priority; reward systems should account for high variance in employee output rather than assuming equal contributions
35
What is performance management?
a continuous process of evaluating, developing, and improving employee performance - helps align individual goals w organizational objectives
36
why is performance management important?
- increases motivation & productivity
- identifies strengths & areas for improvement
- supports HR decisions (promotions, pay raises, terminations)
ex: A company that gives yearly appraisals only will struggle to track day-to-day performance. A good performance management system gives frequent feedback and helps employees grow.
37
define performance appraisal
a one-time evaluation of an employee's performance
38
What is the difference between performance appraisal and performance management?
appraisal:
- one-time evaluation
- aim: decision making (promotions, raises etc)
- focus: past performance
- timing: usually 1x or 2x a year
management:
- continuous cycle of setting expectations, measuring progress, coaching
- aim: development & performance improvement
- focus: future performance
- timing: ongoing throughout the year
ex: a supervisor who only gives feedback during annual reviews is using performance appraisal; a manager who holds monthly coaching sessions is practicing performance management
39
What are the types of performance measures?
- objective measures (hard data)
- subjective measures (judgment based)
40
what are some examples of objective measures?
- sales revenue
- production output
- error rates
41
what are the pros/cons of objective measures vs subjective measures
objective: less biased but may not capture effort/teamwork
subjective: measures soft skills (teamwork, communication) but prone to bias
42
what are some examples of subjective measures?
- supervisor ratings
- customer feedback
43
what are some common biases in performance appraisals?
- halo effect
- horns effect
- leniency bias
- central tendency
- recency bias
44
what is the halo effect?
Letting one positive trait (e.g., being likable) influence the overall rating.
45
what is the horns effect?
Letting one negative trait (e.g., poor punctuality) overshadow everything else.
46
what is leniency bias?
giving everyone high scores to avoid conflict
47
what is central tendency bias?
Rating everyone average to avoid extremes
48
what is recency bias?
basing ratings only on recent events, not the full review period
49
how do you reduce bias in performance appraisals?
- Train raters on how to evaluate fairly.
- Use multiple raters (peers, subordinates, customers).
- Keep performance logs throughout the year.
50
what are some performance rating methods?
- graphic rating scales
- behaviourally anchored rating scales (BARS)
- 360-degree feedback
51
what are graphic rating scales?
- uses a numerical scale (like 1-5) to rate employee performance
- simple but prone to subjective bias
52
what are behaviourally anchored rating scales (BARS)
Uses real examples to define different performance levels. More accurate than simple rating scales.
📌 EXAMPLE (Customer Service Agent):
1 (Poor): Hangs up on customers.
3 (Average): Answers politely but doesn’t resolve problems.
5 (Excellent): Listens carefully and solves customer complaints quickly.
53
what is 360 degree feedback?
Collects performance ratings from multiple sources: supervisor, peers, subordinates, customers.
📌 EXAMPLE: A manager may think an employee is performing well, but subordinates rate them poorly due to lack of leadership. 360-degree feedback helps uncover these blind spots.
✅ Best for: leadership roles, teamwork-focused jobs.
54
who should rate performance?
1. Supervisors (most common, but may be biased)
2. Peers (great for teamwork evaluation)
3. Subordinates (good for leadership roles)
4. Self-evaluations (helps employees reflect on performance)
5. Customers (important for service-based jobs)
55
what are 5 steps for an effective review meeting?
1. Prepare – Gather performance data before the meeting.
2. Start Positive – Highlight the employee’s strengths first.
3. Discuss Areas for Growth – Use constructive feedback (e.g., “You can improve your teamwork by…”).
4. Set SMART Goals – Define clear performance goals for the next period.
5. Encourage Employee Input – Let the employee share their thoughts.
56
why is goal setting important in performance management?
- helps employees know whats expected
- increases motivation when goals are challenging but achievable
57
what are SMART goals?
S – Specific: “Increase customer satisfaction scores”
M – Measurable: “Improve scores from 75% to 85%”
A – Achievable: “Take a customer service training”
R – Relevant: “Better service helps our brand”
T – Time-bound: “Achieve this in 3 months”
58
why is measurement of performance important?
- between-person decisions (eg promotion, termination, permanent contract, salary administration)
- within-person decisions (eg identification of training needs, feedback, diagnosis of weaknesses & strengths, self dev)
- systems maintenance (eg evaluation of personnel systems, identification of the organization's dev needs)
- documentation (eg compliance w legal requirements, criterion measures for validation of personnel selection processes)
59
how is measuring performance helpful in recruitment?
- shows success of recruitment
- quality of applicants determines feasible performance standards
60
how is measuring performance helpful in selection?
- validates the selection function (= criterion)
- selection should produce high-performing workers
61
how is measuring performance helpful in training & dev?
- determines training needs; feedback supports personal dev
- T&D helps reach performance standards
62
how is measuring performance helpful in compensation management?
factor in determining pay
63
how is measuring performance helpful in labor relations & strategy?
justifies administrative personnel actions (promotion, termination, transfer, disciplinary action...)
64
define performance | called performance domain in the schema
actions & behaviours that are under the control of the individual & contribute to the goals of the organization
- multidimensional
- should distinguish between behaviours & the outcomes/results of these behaviours
- criterion = measure/operationalization of performance
65
in which cases should you focus more on results (outcome) in performance measurement?
when
- workers are skilled
- behaviours & results are clearly related
- there are many ways to do the job right
BUT keep in mind that outcomes could be influenced by factors outside of the employee's control
66
what is task performance?
activities that are formally recognized as part of the job and that contribute to the organization's technical core (ex: IKEA support staff provide info to customers)
67
what are the 2 types of performance measures?
- objective: production data (like number of sales) & employment data (turnover, absenteeism etc)
- subjective: depends on human judgment
68
what is the problem with subjective performance measures?
humans are prone to biases, both relative & absolute
69
what are the pros & cons of objective measures?
pro: helps under certain conditions (for highly skilled workers)
con: can be unreliable, contaminated by situational characteristics/factors beyond the employee's control; focuses on the OUTCOME of behaviour, not the behaviour itself
-> useful as a supplement to subjective measures
70
who could rate subjective measures?
- generally the supervisor
- peers
- subordinates (anonymity important here)
- self
- clients
-> 360 DEGREE SYSTEMS
71
why should the supervisor do subjective measures?
- controls consequences (rewards/punishments)
- research shows feedback from supervisors is more highly related to performance than feedback from other sources
- 360 degree systems provide more info, but the supervisor makes the ultimate decision
72
what are the downsides of peers doing subjective measures of performance?
- negative feedback has strong effects on group behaviour (reduces satisfaction, cohesiveness; perceived friendship bias)
- also an issue w common method variance
73
what are the pros & cons of self rating subjective measures of performance?
- can increase motivation if combined w goal setting & decrease fear around appraisal
- but more leniency, less variability, less agreement w others
74
who should rate performance (subjective measures category)?
360 DEGREE SYSTEMS (manager, peers, customers, direct reports etc)
- improved reliability because of multiple sources
- broader range of performance info (minimizes criterion deficiency)
- not only task performance but usually also contextual performance + counterproductive work behaviours
- multiple sources also mean biases are reduced
75
how does criterion contamination occur?
when external factors (like biases, luck) affect performance measures
76
what are the rating biases often seen in subjective measures of performance?
1. leniency & severity
2. central tendency
3. halo
4. primacy/recency
5. contrast
6. overvaluing dramatic events
7. similar-to-me effect
77
what is halo bias in subjective measures of performance?
ratings based on a general impression of the employee (no distinction between performance dimensions). not as common & bad as believed
78
what is central tendency bias in subjective measures of performance?
- everybody is average
- useless appraisal because it fails to discriminate
79
what is leniency & severity bias in subjective measures of performance?
some raters are "easy", some "difficult"; lack of in-between
80
what is primacy/recency bias in subjective measures of performance?
first & last impressions of the person weigh heavier in influencing rating
81
what is contrast bias in subjective measures of performance?
an employee's evaluation is biased up- or downward due to comparison w another employee evaluated just previously
82
how can you reduce judgmental biases in subjective measures of performance?
- consider the type of rating scale used (each has pros & cons)
- reduce the amount of freedom exercised by the rater -> provide structure
- train raters to make sure they are competent in making the judgments
83
how can you train raters to reduce judgmental biases in subjective measures of performance?
- improve observational skills
- reduce/eliminate judgmental biases
- improve ability to communicate performance info to ratees in an objective & constructive manner
84
what rater characteristics improve accuracy of rating?
- high self-monitoring
- accountability
- own performance
- length of relationship w ratee
85
what rater characteristics worsen accuracy of rating?
- stress
- delayed rating
- limited data
86
what rater characteristics change the rating direction of the performance rating?
- low self-confidence (-ve)
- high agreeableness (+ve)
- high conscientiousness (-ve)
- if the supervisor got a positive evaluation (+ve)
87
which ratee characteristics improve performance rating?
- dependability
- low performance of others
- perceived similarity (vs actual) between rater & ratee
88
which ratee characteristics decrease performance rating?
- age
- obnoxiousness (peer raters)
- gender (females for promotion)
89
what are the 2 subtypes of subjective performance measures?
- relative: employees are compared to one another
- absolute: description w/o reference to other ratees -> rated against an absolute standard
90
what are some relative rating systems?
1. rank ordering
2. paired comparisons
3. forced distribution scale
4. relative percentile method (RPM)
91
what is relative percentile method (RPM)?
- rater is asked to compare the performance of an individual to a reference group (the average employee)
- based on social comparison theory (in the absence of objective criteria, social stimuli provide raters with standards in the form of natural schemas, which are simpler)
- ex: please rate the person in the video compared to other psych students by drawing a line through the scale from 0 to 100
92
does the relative percentile method (RPM) work well? in what cases?
- accuracy is similar or superior to absolute rating systems
- perceived as fairer than other relative rating measures (like forced distribution) by raters & ratees
- also useful for self-assessment conditions
- good for broad/global dimensions
- uses a group rather than an individual referent, so the rater must be familiar with the referent group
93
what is rank ordering?
- simple ranking method: rank order all the employees
- alternation ranking: best, worst, second best, second worst etc
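Alternation ranking can be sketched as alternating picks of the best and worst remaining employee (the names and scores below are hypothetical stand-ins for the rater's judgment):

```python
# Hypothetical overall judgments the rater would make about each employee.
ratings = {"Ana": 72, "Ben": 95, "Caro": 60, "Dev": 88, "Eli": 45}

remaining = dict(ratings)
top, bottom = [], []
while remaining:
    best = max(remaining, key=remaining.get)       # pick the best remaining
    top.append(best)
    del remaining[best]
    if remaining:
        worst = min(remaining, key=remaining.get)  # then the worst remaining
        bottom.append(worst)
        del remaining[worst]

final_ranking = top + bottom[::-1]
print(final_ranking)  # ['Ben', 'Dev', 'Ana', 'Caro', 'Eli']
```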
94
what is pair comparisons?
- pair every worker w every other worker and compare their performance
- nr of comparisons = n(n-1)/2
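The n(n-1)/2 count is just the number of unordered pairs; a quick check (worker names invented):

```python
from itertools import combinations

workers = ["A", "B", "C", "D", "E"]     # n = 5 hypothetical workers
pairs = list(combinations(workers, 2))  # every worker paired with every other

n = len(workers)
print(len(pairs), n * (n - 1) // 2)  # 10 10
```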
95
what is forced distribution scale?
- assumes a certain (usually normal) distribution as "true" and forces people into categories (like top 5% or 10%)
- controls for leniency, severity, and central tendency biases
- but what if the ratees as a group do not conform to a normal distribution? (ie it's a high-performing group?)
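A forced distribution can be sketched by sorting ratees and slicing fixed shares into categories (the category shares and scores below are illustrative, not a standard):

```python
# Hypothetical scores; the "true" distribution is assumed, not observed.
scores = {"A": 91, "B": 85, "C": 78, "D": 70, "E": 66,
          "F": 62, "G": 55, "H": 50, "I": 44, "J": 30}
shares = {"top": 0.2, "middle": 0.6, "bottom": 0.2}

ranked = sorted(scores, key=scores.get, reverse=True)

# Force fixed percentages of ratees into each category, best to worst.
buckets, start = {}, 0
for label, share in shares.items():
    end = start + round(share * len(ranked))
    buckets[label] = ranked[start:end]
    start = end
print(buckets)
```

Note how employees near a boundary (here "C" vs "B") land in different categories even when their scores are close; that is the fairness complaint the cons card raises.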
96
whats the pros of relative rating systems?
- easy to understand
- help w discriminating among ratees (differential accuracy)
- control for certain biases (eg central tendency)
97
what are the cons of relative rating systems?
- provides no indication of the relative distance between individuals (ordinal)
- difficult to compare across groups, departments ...
- reliability questionable: ok for the very high & low performers but not in the middle
- not behaviourally specific
- rewards members of poor groups, punishes members of superior groups
- can be perceived as unfair (especially forced distribution)
98
what are some absolute rating systems?
1. narrative essay
2. behavioural checklists
3. forced-choice system
4. critical incidents
5. graphic rating scales
6. behaviourally anchored rating scale (BARS)
99
what are narrative essays?
- rater describes in writing the individual's strengths, weaknesses, & potential, making suggestions for improvement
100
what are the pros & cons of narrative essays?
pros: detailed feedback
cons:
- unstructured
- only qualitative
- impossible to compare across individuals, groups etc
- pretty useless as a criterion (qualitative format; needs to be quantified)
101
what are behavioural checklists?
- rater is provided w a series of descriptive (rather than evaluative) statements of job-related behaviour
- can be combined w a likert scale
- overall numeric rating for an employee
102
what are the pros & cons of behavioural checklists?
pros
- easy to use & understand
- raters are reporters of job behaviour rather than evaluators (reduces cognitive demand on the rater)
con
- difficult to give diagnostic feedback, as it is not given in terms of evaluations
103
what is a forced choice system?
- special type of checklist
- need to choose among given statements which fit best
104
what are the pros & cons of a forced choice system?
pro
- harder to distort ratings (so should reduce leniency)
cons
- removes control from the rater, so cannot be sure how the person was assessed (less helpful for feedback)
- unpopular method
105
what are critical incidents?
Reports of employee actions that were especially effective / ineffective in accomplishing the job. Each supervisor records CIs for each employee as they occur.
106
what are the pros & cons of critical incidents?
pros
- forces attention to situational & personal determinants & uniqueness in doing the job
- absolutely job related -> focus on job behavior
- ideal for feedback & development
- can be used to develop more standardized methods of performance appraisal (like BARS)
cons
- time-consuming & burdensome -> could delay feedback
- qualitative, difficult to compare employees
107
what are graphic rating scales?
- common title for differing formats that use anchors (verbal, numeric, both) on a continuum
- better if the response categories are clearly defined
108
what are the pros & cons of graphic rating scales?
pros
- quick & easy -> liked by raters, popular!
- standardized and so comparable across individuals (quantitative)
- consider more than one performance dimension
cons
- maximum control to the rater, so less control over biases (e.g. central tendency, halo, leniency)
- poorly defined anchors and descriptions of dimensions -> lead to interrater differences
- do not have as much depth of information as narrative essays & CIs
109
what are BARS?
behaviourally anchored rating scale
- another modification of the CI; the idea was to improve on the graphic rating scales
- goes through several steps:
1. identify dimensions of effective performance for a job
2. for each dimension -> collect critical incidents (CIs)
3. another group is given the dimensions + randomised CIs & needs to sort them back into dimensions -> retranslation (eliminate a dimension if there is no clear agreement)
4. identify scale values
110
what are pros & cons of BARS?
pro
- behaviourally based, clear (each numerical point is explained) & easy to use
- good face validity
- greater ratee satisfaction w BARS info than w graphic rating scales
- greater participation potential in development
cons
- long & painstaking process
- in some studies not shown to be superior to other performance measurement systems (although other studies did show higher accuracy & lower rater error)