Job Performance Flashcards

1
Q

How is job analysis related to performance?

A

Job analysis starts with job descriptions, so performance is evaluated based on how well the employee performs the listed duties and tasks. Job specifications are the KSAOs (knowledge, skills, abilities, and other characteristics) needed to perform well.

2
Q

What is criterion development?

A

using different measures to see how well a criterion predicts behaviour

3
Q

Why is measurement of performance important?

A
  • between-person decisions → promotion, termination, contract, salary
  • within-person decisions like training needs, feedback, diagnosis of weaknesses and strengths
  • systems maintenance like evaluation of personnel systems
  • documentation like compliance with legal requirements
4
Q

What is the purpose of performance management in the job in general?

A
  • recruitment (quality of applicants determines performance)
  • selection (should produce high-performing workers)
  • training and development (determines needs and feedback to reach performance standards)
  • compensation management (determines pay)
  • labor relations and strategy (justifies administrative personnel actions)
5
Q

What is the performance domain?

A
  • performance consists of the actions and behaviours that are under the control of the individual and contribute to the goals of the organization
  • it is multidimensional
  • it distinguishes between behaviour and outcomes/results
  • the criterion is the measure of performance
6
Q

What is a more comprehensive definition of criterion?

A

It is an evaluation standard used to measure a person’s performance, attitude, motivation etc.

7
Q

When is there more focus on outcome?

A
  • when workers are skilled, behaviours and results are clearly related, and there are many ways to do the job right
  • but outcomes can be influenced by factors outside the worker's control
  • so measures should still capture the full criterion
8
Q

What are some examples of outcomes?

A
  • output measures (like how many units produced/sold)
  • quality measures (like how many errors made)
  • lost time (like number of absent days)
9
Q

What are some examples of behaviours?

A
  • ratings of performance like personal traits
  • counterproductive behaviours like aggression, substance abuse
10
Q

What is meant by the term ultimate criterion?

A

The full domain of performance, both behaviours and results that define success on the job

11
Q

What are criteria used for?

A

predictive purposes and evaluative purposes

12
Q

What are the dimensions of criteria?

A
  • static: multidimensionality at any point in time (task, context, typical vs maximum performance)
  • dynamic: when to measure — validity and rank ordering can change over time
  • individual: the same job done by two people can involve different contributions
13
Q

What does static criteria consist of?

A

At any point in time, performance has several dimensions, and a person can be high on one facet but low on another. Includes: task performance (activities recognized as part of the job that contribute to the organization’s core), contextual performance (contributes to organizational effectiveness and provides a good environment for task performance to occur) and counterproductive behaviours (violate norms and threaten the wellbeing of the organization)

14
Q

How to distinguish between typical and maximum performance?

A

Typical is the average level of performance, while maximum is the peak level of performance that can be achieved under high motivation. General mental ability is strongly correlated with maximum performance (“can do”); the correlation between “can do” and “will do” (typical performance) is low

15
Q

What is contextual performance?

A

Prosocial behaviours or organizational citizenship behaviours; these do not always go hand in hand with task performance. Includes putting in extra effort, helping others, following rules and procedures, and doing tasks with enthusiasm

16
Q

What is task performance?

A

Task performance includes activities directly involved in producing goods and services, as well as supporting tasks like resource replenishment, product distribution, and functions such as planning, coordination, supervision, and staff support to ensure organizational efficiency.

17
Q

What is temporal dimensionality?

A

Performance is not constant over time, as individual difference variables can influence an individual’s performance over time, and average performance changes. Validity can also change: in the changing-task model, people stay the same but the demands of the task change; in the changing-subjects model, the requirements stay the same but levels of ability change over time. Rank ordering therefore also changes over time.

18
Q

How can we capture changes in performance?

A

Through employee monitoring systems (e.g. wearable sensors), as these capture employee performance on an ongoing basis and capture fluctuations at different time points → intraindividual performance fluctuations. This also allows for the collection of big data

19
Q

What is individual dimensionality?

A

When 2 people in the same job perform equally well but the nature of their contributions differs. Criterion needs to include all relevant aspects of performance

20
Q

What is important for criterion development?

A
  1. developing good criteria is needed to construct selection procedures to predict criteria
  2. all major aspects of performance domain should be captured (identified through factor analysis)
  3. objective outcome measures should be used with behavioural measures
  4. development of reliable measures
  5. determination of predictive validity
21
Q

What are good criteria?

A
  1. relevance- logically related to the performance domain in question
  2. sensitivity/discriminability- must be capable of differentiating between effective and ineffective employees
  3. practicality- should be feasible in terms of time and costs
22
Q

How is lack of reliability an issue for criteria?

A

Reliability is the consistency or stability of job performance over time. Unreliability can be intrinsic (personal inconsistency in performance), extrinsic (sources of variability external to job demands), or due to rater inconsistency. Solution: aggregate scores over time
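The aggregation solution can be seen in a small simulation (illustrative sketch with simulated data, not from the coursework): each observed score mixes true performance with noise, so averaging over many occasions tracks true performance much better than any single occasion.

```python
import random

random.seed(42)

# Simulated example: one measurement occasion = true performance + noise
# (intrinsic/extrinsic unreliability); aggregating over occasions helps.
n_employees, n_occasions = 200, 12
true_perf = [random.gauss(0, 1) for _ in range(n_employees)]

def observe(t):
    # a single noisy observation of true performance t
    return t + random.gauss(0, 1.5)

single = [observe(t) for t in true_perf]
aggregated = [sum(observe(t) for _ in range(n_occasions)) / n_occasions
              for t in true_perf]

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

print(corr(true_perf, single))      # noticeably lower
print(corr(true_perf, aggregated))  # higher: noise averages out over time
```

The numbers depend on the assumed noise level, but the ordering does not: aggregated scores always correlate more strongly with true performance.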

23
Q

How is criterion contamination an issue?

A

When the criterion includes variance not related to actual performance, due to error (random variation that should not correlate with predictors) or bias (systematic variation, e.g. from knowledge of predictor scores or in ratings)

24
Q

What are some examples of biases?

A

Bias from prior knowledge of predictor scores, such as rating someone as performing better because they were rated that way previously. There can also be bias due to group membership and biases in the ratings themselves

25
Q

How is criterion deficiency an issue?

A

When a criterion does not address all critical aspects of successful job performance so key performance indicators are missing

26
Q

What are some other issues?

A
  • reliability of job performance observation (high reliability in judging performance is needed)
  • dimensionality of job performance (more than one specific criterion is needed)
27
Q

What are the situational characteristics of performance?

A
  • environmental and organizational characteristics like interpersonal factors, shift work, policies, practices etc
  • environmental safety
  • life-space variables like interactions with organizational factors, task demands, supervision and conditions of the job
  • job and location
  • extra-individual differences like influences beyond control
  • leadership
28
Q

In-situ performance

A

the specification of the broad range of effects—situational, contextual, strategic, and environmental—that may affect individual, team, or organizational performance

29
Q

What is the criterion problem?

A

Difficulties in conceptualizing and measuring multidimensional and dynamic constructs. It is not possible to measure all aspects of performance, so there will always be some criterion deficiency. External factors are likely to influence the measure, so there can be criterion contamination. There is also a tendency to measure what is convenient and easy rather than what is desirable and important

30
Q

Multiple criteria

A
  • argues that performance should be treated as multidimensional
  • different job skills require different scores, and criteria reflect the required behaviours
  • measures of different variables should not be combined
  • this provides understanding in a non-ambiguous way
31
Q

Composites for estimating overall success

A
  • global measures of work summed into one score with weighted components
  • this is used for most administrative decisions
32
Q

How do these positions differ?

A
  • the nature of the underlying constructs
  • primary purpose of validation itself
  • some argue that the criterion should measure the overall contribution to the organization, while others argue that the criterion should represent a behavioural construct
  • composite should be used when decision-making is objective, while multiple should be used for understanding predictor-criterion relationships
33
Q

How can inference 9 be justified?

A

An operational criterion measure exists (inference 5), and that operational criterion measure should be related to the performance domain

34
Q

What inferences is construct validity related to?

A

Inferences 6 and 7: the measure assesses a specific construct that is important for job performance, so inferences about job performance are justified

35
Q

Which inference is job specification and job description?

A

7: based on evidence, the constructs underlying performance have been identified-> job spec
10: the extent to which actual job demands are analyzed adequately-> job description

36
Q

What is the criterion problem?

A

The criterion problem refers to the tendency to overlook strong evidence when validating performance measures. This often results in performance criteria that are less precise and less theoretically grounded compared to predictor measures. As a result, it weakens theories, construct validation, and our ability to make accurate conclusions about workplace behavior. However, if organizations properly develop and validate these performance measures, they can improve hiring, career development, and overall effectiveness.

37
Q

What is the distribution of performance and star performers?

A

In a heavy-tailed distribution, many star performers are expected, unlike in the normal distribution → research supports that performance in most cases follows a heavy-tailed distribution
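The contrast between the two distributions can be illustrated with a small simulation (assumed distributions for illustration only, not data from the source): under a normal distribution the top performers contribute only a little more than their headcount share, while under a heavy-tailed distribution a small group of stars contributes a disproportionate share of total output.

```python
import random

random.seed(1)

# Simulated performance under a normal vs a heavy-tailed distribution.
n = 10_000
normal_perf = [max(0.0, random.gauss(50, 10)) for _ in range(n)]
heavy_perf = [random.paretovariate(1.5) for _ in range(n)]  # heavy-tailed

def top_share(xs, frac=0.05):
    """Share of total output produced by the top `frac` of performers."""
    xs = sorted(xs, reverse=True)
    k = int(len(xs) * frac)
    return sum(xs[:k]) / sum(xs)

print(top_share(normal_perf))  # only a little above 0.05
print(top_share(heavy_perf))   # far above 0.05: star performers dominate
```

This is why the consequences on the next card (rotating, retaining and investing in stars) follow: in a heavy-tailed world, a few individuals account for much of the collective result.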

38
Q

What are the consequences of heavy-tailed distributions?

A

  • minimize constraints to allow high achievers to emerge
  • rotate stars across teams to expand networks and knowledge sharing
  • invest in top talent to align with strategic goals
  • retain stars by considering their personal and professional needs
  • prioritize stars during financial challenges to prevent a decline in performance
  • offer fair but preferential treatment to motivate high performance
  • allocate more resources to stars for greater overall gains
  • avoid non-performance-based incentives that discourage excellence

39
Q

What are the types of performance measures?

A

Objective: production data and employment data
Subjective: depends on human judgement which is prone to biases (absolute or relative)

40
Q

What is the evaluation of objective measures?

A
  • helpful under certain conditions, e.g. highly skilled workers and different ways to achieve the same result
  • can be unreliable and contaminated by situational characteristics
  • focus on the outcome of behaviour rather than the behaviour itself
  • the focus of performance appraisal is to judge performance
  • useful as supplements to subjective measures
41
Q

Who should rate?

A
  • usually the supervisor, as they control the consequences, and feedback from supervisors is more highly related to performance
  • peers, but negative feedback can affect group behaviour, and there are issues with common method variance
  • subordinates, but anonymity and averaging across raters are important
  • self, which can increase motivation through goal setting and reduce fear around appraisal, but shows more leniency, less variability, more bias and less agreement with others
  • clients
42
Q

Common method variance

A

Variance in a performance measure that is not relevant to the behaviours assessed but due to the method of measurement used
-> procedural remedies (using variables from different sources and separating their measurement)
-> statistical remedies (Harman’s single-factor test: loading all variables onto one common factor)
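The logic of the statistical remedy can be sketched in code (a hypothetical illustration: real analyses use factor-analysis software, and here the single factor is approximated by the largest eigenvalue of the item correlation matrix on simulated ratings). If one factor accounts for most of the variance across all items, common method variance is suspected.

```python
import random

random.seed(0)

# Simulate ratings where a shared rater/method effect drives all items.
n = 300
method = [random.gauss(0, 1) for _ in range(n)]  # common method effect
items = [[1.2 * m + random.gauss(0, 1) for m in method] for _ in range(6)]

def corr(x, y):
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

k = len(items)
R = [[corr(items[i], items[j]) for j in range(k)] for i in range(k)]

# power iteration for the largest eigenvalue of the correlation matrix
v = [1.0] * k
for _ in range(100):
    w = [sum(R[i][j] * v[j] for j in range(k)) for i in range(k)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]
lam = sum(v[i] * sum(R[i][j] * v[j] for j in range(k)) for i in range(k))

share = lam / k  # proportion of total variance on the single factor
print(share > 0.5)  # a large single-factor share flags method variance
```

With data drawn from separate sources (the procedural remedy), the single-factor share would drop, which is exactly what the test looks for.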

43
Q

What are 360 degree systems?

A

This is when all of the rater perspectives are considered. It improves reliability, provides a broader range of performance information, covers contextual performance and counterproductive work behaviours, and the multiple sources mean that biases are reduced. Relevant content, data credibility, accountability and participation are important in this case

44
Q

Convergent validity

A

The degree of interrater agreement within rating dimensions

45
Q

Discriminant validity

A

The ability of raters to make distinctions in performance across dimensions

46
Q

How are self-ratings compared against actual test performance?

A

High performers slightly underestimate their performance, but their self-ratings are close to actual performance. For low performers there is a big gap between actual and perceived performance (so they do not realize they are performing poorly)

47
Q

What are the rating biases?

A
  • leniency and severity: some raters are systematically easy or harsh; ratings for administrative purposes show greater leniency than ratings for research
  • central tendency: everybody is rated average, which fails to discriminate
  • halo: ratings are based on a general impression and do not distinguish between performance dimensions, but this is not as common as believed
  • primacy/recency: first and last impressions of a person weigh more heavily in the rating
  • contrast: the evaluation is biased upward or downward by comparison with another employee
  • overvaluing dramatic effects
  • similar-to-me effect
48
Q

How can leniency be controlled for?

A

Allocating ratings into a forced distribution, requiring supervisors to rank order their subordinates, and encouraging raters to provide feedback on a regular basis

49
Q

How to reduce judgemental biases?

A
  • looking at the type of rating scale used
  • reduce amount of discretion exercised by rater through structure
  • improve the competency of raters in making judgements like improving observational skills, reducing biases, improving ability to communicate performance info
50
Q

How can the characteristics of the rater influence performance ratings?

A
  • no effects of gender, age interests, GMAs
  • low self-confidence and high conscientiousness results in lower performance ratings
  • high agreeableness and positive supervisors result in higher performance ratings
  • high self-monitoring, accountability and own good performance increases accuracy
  • accuracy decreases with stress, delayed ratings and limited data
  • in general, greater length of relationship the better the accuracy
51
Q

How can characteristics of the ratee influence performance ratings?

A
  • no effects of education and tenure
  • performance rating increases with dependability, low performance of others and perceived similarity between rater and ratees
  • performance rating decreases with age, obnoxiousness and gender (females for promotion)
  • but depends on proportions of gender and job satisfaction
52
Q

Relative subjective measures

A

When employees are compared to one another

53
Q

How are relative ratings seen as more accurate?

A

Relative ratings show greater differential accuracy and stereotype accuracy

54
Q

Absolute subjective measures

A

Description without reference to other ratees, against absolute standards. Builds in structure to minimize biases and asks raters to judge only what they know

55
Q

Relative rating systems

A
  • simple ranking method to order all employees
    -paired comparison: pair workers with other workers; total comparisons is n(n-1)/2
  • forced distribution scale, which assumes the normal distribution is true and forces people into categories, but controls for many biases
  • relative percentile method
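The n(n-1)/2 count for paired comparisons can be verified with a quick sketch (illustrative Python, not part of the coursework):

```python
from itertools import combinations

# With n workers, every unordered pair is compared once,
# giving n*(n-1)/2 comparisons in total.
def n_comparisons(n: int) -> int:
    return n * (n - 1) // 2

workers = ["A", "B", "C", "D", "E"]
pairs = list(combinations(workers, 2))  # each unordered pair once
print(len(pairs))         # 10
print(n_comparisons(5))   # 10 — matches n(n-1)/2
print(n_comparisons(20))  # 190 — pairing grows fast with group size
```

The quadratic growth is why full paired comparison becomes impractical for large groups, and simpler ranking methods are used instead.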
56
Q

Different simple ranking systems?

A

Simple is ranking all ratees from highest to lowest. Alternation is when the rater lists all ratees on paper, then chooses the best, worst then second best and second worst etc

57
Q

Evaluation of relative rating systems

A

  • easy to understand; help with discriminating among ratees (differential accuracy); control for certain biases (e.g. central tendency)
  • provide no indication of the relative distance between individuals (ordinal)
  • difficult to compare across groups, departments…
  • questionable reliability (try re-ranking); works for the very high and low performers but not in the middle
  • not behaviourally specific
  • reward members of poor groups, punish members of superior groups
  • can be perceived as unfair (especially forced distribution)

58
Q

What statistical properties are considered for forced-choice scales?

A
  • discriminability (measure of the degree to which an item differentiates effective from ineffective workers)
  • preference (an index of the degree to which the quality expressed in an item is valued by people)
59
Q

What is the relative percentile method?

A

Rater is asked to compare performance of an individual to a reference group which consists of the average employee, which invokes natural schemas. Accuracy is similar to absolute rating systems and perceived as more fair. Good for global dimensions, and use group rather than individual referents

60
Q

Narrative essay

A

The rater describes the individual’s strengths, weaknesses and potential, and makes suggestions for improvement. This gives detailed feedback but is unstructured and only qualitative, so it is difficult to compare across groups and of little use as a criterion (it needs to be quantified anyway)

61
Q

Behavioural checklists

A

The rater is provided with a series of descriptive statements of job behaviour; these can be combined with a Likert scale to provide a numeric rating. This is easy to use and understand, and raters act as reporters of job behaviour, which reduces cognitive demand. But it is difficult to give diagnostic feedback without evaluation

62
Q

Forced choice system

A
  • is a special type of checklist in which you are forced to choose a statement
  • makes it harder to distort ratings and so should reduce leniency
  • removes control from rater and unclear how person was assessed
  • is unpopular
63
Q

Critical incidents

A
  • reports on effective/ineffective actions in accomplishing the job
  • can be used to develop standardized methods of performance appraisal like BARs
64
Q

Evaluation of critical incidents

A
  • forces attention to situational & personal determinants & uniqueness in doing the job
  • absolutely job related-> focus on job behavior
  • ideal for feedback & development
  • time-consuming & burdensome -> could delay feedback
  • qualitative, difficult to compare employees
65
Q

Graphic rating scales

A

Common title for many different formats on a continuum with examples of behaviours (response categories should be defined clearly)

66
Q

How do scales differ?

A
  • the degree to which meaning is defined
  • the degree to which the individual who is interpreting the ratings can interpret the intended response
  • the degree to which the performance dimension is defined for the rater
67
Q

Evaluation of graphic rating scales

A
  • quick & easy, so liked by raters and popular!
  • standardized and so comparable across individuals (quantitative)
  • consider more than one performance dimension
  • maximum control to the rater, so less control over biases (e.g. central tendency, halo, leniency)
  • poorly defined anchors and descriptions of dimensions → lead to interrater differences
  • do not have as much depth of information as narrative essays & CIs
68
Q

Behaviourally Anchored Rating Scale

A

Includes: identifying dimensions of effective performance with critical incidents; another group is given the dimensions and CIs and sorts the CIs back into the dimensions (retranslation); then the scale values can be identified.

69
Q

Anchoring

A

Making scale points unambiguous using qualitative, or verbal and numerical anchors

70
Q

BARS evaluation

A
  • Behaviorally based, clear (each numerical point is explained) & easy to use.
  • Good face validity; greater ratee satisfaction with BARS information than with graphic rating scales. Greater participation potential in development.
  • Long and painstaking process, and in some studies not shown to be superior to other performance measurement systems (although other studies did show higher accuracy and lower rater error).
71
Q

Performance management

A

Continuous process of identifying, measuring and developing the performance of individuals and teams, and aligning performance with the strategic goals of the organization

72
Q

Performance appraisal

A

Systematic description of job-relevant strengths and weaknesses within and between employees or groups. This is a key component of any PMS, known as the Achilles heel of HR. It can have serious consequences for the individual, like stress, mistrust and fear, and it increases the power distance between supervisor and subordinate. Common problems are overemphasis on uncharacteristic performance and rating personality rather than performance behaviour

73
Q

Purposes of performance management systems?

A
  • serve a strategic purpose to link employee activities with the organization’s mission and goals
  • serve communication purpose
  • serve as bases for employment decisions like promotion, training etc
  • employee performance can serve as criteria in HR research
  • establish objectives based on feedback + personal development
  • facilitates organizational diagnosis, maintenance and development
  • allows organizations to keep records of HR decisions etc
74
Q

What are the realities of performance management?

A
  • important to know if individuals are performing competently
  • appraisal has consequences for individuals
  • job increases with complexity
  • political consequences play a role in decisions
  • implementation of management systems take time and effort
75
Q

Organizational challenges

A

e.g., management shortcomings such as inadequate preparation,
lack of follow-up & coaching, too many forms to complete,
bureaucracy. As a result, employees are held responsible for organizational errors

76
Q

Political challenges

A

Stem from attempts by raters to enhance or protect self-interests; managers value motivating and rewarding employees more than accuracy (reward allies and punish enemies)

77
Q

Interpersonal challenges

A

Face-to-face encounters between subordinate and superior can result in the subordinate feeling judged against a set of standards, due to lack of communication. There can be serious consequences for the individual, and formal performance appraisal raises this power distance

78
Q

What are important characteristics for performance management systems to function successfully?

A
  • congruence with strategy (achieving organizational goals)
    -thoroughness (employees should be evaluated, responsibilities should be measured etc)
  • practicality (available, plausible acceptable)
  • meaningfulness (implementation is important)
  • specificity
  • discriminability
  • reliability and validity
  • inclusiveness (active participations of raters and ratees)
  • fairness and acceptability
79
Q

What are the human aspects of performance management?

A
  • not only technical process, but a personal development tool
  • feedback can have negative effects if the person focuses on themselves rather than the task at hand
  • fear about giving and receiving information (supervisors avoid confronting issues, subordinates tend to rationalize it away to maintain self-esteem)
80
Q

What are the different types of teams and their recommendations?

A
  • work or service teams which are engaged in routine tasks
  • project teams which are assembled for a specific purpose and disband when their task is complete-> need performance evaluation and feedback during project to make corrections
  • network teams whose membership is not constrained by time or space
81
Q

Objectives of rater training?

A
  • improving observational skills of raters
  • reduce judgemental biases like halo, central tendency, leniency
  • improve ability of raters to communicate info constructively
82
Q

Types of trainings?

A

Rater error training: exposing raters to the different errors and their causes
Frame-of-reference training: providing a theory of performance so raters understand the dimensions, match them to ratee behaviours, and judge the effectiveness of those behaviours

83
Q

How does FOR training work?

A
  1. Task: evaluate performance of 3 ratees on 3 dimensions
  2. Information about rating scales & dimensions
  3. Discuss behaviours showing different levels on each scale
    -> create common performance theory (‘frame of reference’)
  4. Rate ratee in a videotaped vignette on dimensions
  5. Collect & discuss ratings, identify behaviours that were used for
    ratings → ratings & discrepancies
  6. Feedback, explain why ratee should get certain score on certain
    dimension
    Behavioural examples and providing rating scales will increase rating accuracy
84
Q

What does fairness consist of?

A
  • process facets or interactional justice: the interpersonal exchanges between supervisor and employees → explains more variance than system facets
  • system facets or procedural justice: the structure, procedures and policies of the system
85
Q

What issues should be explored further?

A

Effective performance management depends on social power, trust, social exchange, and group dynamics. Supervisors’ perceived power influences employees’ engagement with the system, while collective trust among stakeholders is crucial for its success. Social exchange theory highlights the role of fairness in workplace relationships and performance management. Given the importance of teams, understanding group dynamics and interpersonal relationships, including workplace romances, can help improve performance evaluation and implementation. Future research should explore these factors to enhance performance management systems.

86
Q

Why should a formal system for giving feedback be implemented?

A

Employees under stereotype threat are less likely to seek feedback, which presents issues; a formal system addresses this and is facilitated by electronic performance monitoring, since records can be stored easily online

87
Q

What should supervisors do before the appraisal interview?

A

Communicate frequently with subordinates about performance, get training in appraisal (to observe behaviour more accurately and fairly), judge own performance before others, encourage subordinate preparation (by analyzing performance beforehand), use primes to trigger information

88
Q

What should supervisors do during the appraisal interview?

A

Warm up and encourage participation to ensure helpful and constructive feedback; judge performance instead of personality; be specific about positive and negative behaviours (first positive, then minor issues, then major ones). Be an active listener by reflecting on what was said; avoid destructive criticism, as this can cause employees to attribute poor performance to internal causes; set mutually agreeable, formal goals to provide direction, proportional effort and persistence

89
Q

What should supervisors do after appraisal interviews?

A

Should continue to communicate and assess progress towards goals regularly, and make organizational rewards contingent on performance

90
Q

When are employees more likely to accept appraisal?

A
  • Performance is evaluated frequently (close to action)
  • Supervisors appear familiar with employees’ performance
  • Employees have opportunity to voice their own feelings during the appraisal interview
  • New performance goals, based on the appraisals, are set during the interview
  • They receive a good appraisal interview