exam 2 study guide Flashcards

1
Q

JA definition

A

the systematic process that determines the "essence" or identity of a job: the job tasks and responsibilities, KSAOs, and critical incidents faced on the job

2
Q

JA purposes/functions

A

Basis of HR functions;
Helps one understand the job

3
Q

JA roles

A
  • determining QUALIFICATIONS (skills and experience) required to do the job
  • providing important info about the job before trying to RECRUIT the best people (increasing applicant pool)
  • choosing and developing valid SELECTION procedures (tests, interviews) to hire new people
  • developing strong CRITERION MEASURES (measures of job performance)
  • designing PERFORMANCE APPRAISAL tools and systems to evaluate employee performance
  • developing TRAINING interventions by conducting thorough needs assessments
  • JOB DESIGN
  • JOB EVALUATION
4
Q

job design

A

systematic analysis of the org of work, which often includes job analysis to identify the best way to allocate various tasks and responsibilities among different jobs

5
Q

job evaluation

A

a particular type of job analysis used to determine the relative value that jobs have within an organization

6
Q

job description def

A

an overview of a job, typically one to two pages outlining what the job entails

7
Q

sources of info for JA

A

SMEs, task-oriented analysis, and worker-oriented analysis

8
Q

task-oriented JA

A

involves generating a list of critical job tasks, and the KSAOs needed to do them, through observations and SME interviews

9
Q

worker-oriented JA

A

a job analysis method in which the primary unit of analysis is the characteristics of the EMPLOYEE

10
Q

critical incidents technique

A

a worker-oriented method of job analysis focused on documenting examples of critical situations faced by job incumbents, including good and poor ways of handling them, and the results

11
Q

task vs worker -oriented JA (differences, examples)

A

task-oriented (task-KSA) approaches involve generating a list of critical tasks and the KSAOs needed to do the job, while worker-oriented approaches focus on the characteristics of the employee

12
Q

KSAOs

A

used to describe the characteristics an employee needs to do the job

13
Q

KSAOs: KNOWLEDGE

A

collection of discrete, related facts and info about a particular domain; generally something that people can learn from a book

14
Q

KSAOs: SKILL

A

practiced acts or capacity to perform specific task or job duty (computer or interpersonal skills)

15
Q

KSAOs: ABILITY

A

stable capacity to engage in specific behaviors; more innate, brought by someone to the job

16
Q

KSAOs: OTHER CHARACTERISTICS

A

interests, personality, training, experience, etc (even political views can sometimes fall under this…)

17
Q

how is JA done: OBSERVATION

A

…of SMEs doing the work is one of the most basic ways to learn about the job (important in court, intrusive)
RIDE ALONG = common method used when *much of the work is done in the field

18
Q

how is JA done: INTERVIEWS/FOCUS GROUPS

A

job analysts meet with SMEs to ask questions about the job regarding typical responsibilities/tasks, KSAOs, critical incidents faced on the job, and qualifications and experiences necessary (conducted individually or with groups of two or more SMEs)
*incumbents, supervisors, trainers, experts

19
Q

how is JA done: CRITICAL INCIDENTS & WORK DIARIES

A
20
Q

how is JA done: QUESTIONNAIRES/SURVEYS

A

large number of employees complete questionnaire about the job
*critical that a large number of employees are given the survey to improve the representativeness of the job information

21
Q

how is JA done: REVIEW OF DOCUMENTS

A

observation descriptions, training manuals, performance appraisals, previous JAs, strategic plans, charts, ONET

22
Q

what is ONET?

A

website run by the federal government that displays thousands of job analyses done since 1939
(can be updated instantaneously when online vs on paper)

23
Q

criterion def and importance

A

outcome variables that capture performance or effectiveness of an employee or group of employees

(importance) criterion is critical to evaluate the effectiveness of org interventions such as selection procedures, training programs, safety practices

24
Q

conceptual criterion

A

a concept or abstract idea of the ‘essence’ of a job—not something we can measure directly
- we can use job analysis to get a better understanding of a job by collecting KSAOs and job behaviors, but we can never fully capture the entire essence of a job
**hypothetical and IDEAL based on job analysis

25
Q

actual criterion

A

the measure actually used to try and capture the conceptual criterion
- all actual criterion measures are flawed in some way because they have some degree of error
**the actual criterion is measured from ratings, attendance records, etc

26
Q

criteria relevance

A

degree to which the actual criterion overlaps with the conceptual criterion (THIS IS WHAT WE WANT)
- helps avoid unfairness issues and legal issues
- we want actual criterion to capture a unique aspect of the conceptual criterion

27
Q

criteria deficiency

A

degree to which actual criterion fails to overlap with the conceptual criterion (the amount of conceptual leftover that does not have overlap with actual)

EX. missing a section of information in assessments and/or not fully covering an entire subject that is the goal

28
Q

criteria contamination

A

ACTUAL CRITERION includes something it should not, leading to error
EX. evaluating an employee on something that doesn't actually relate to their job/job responsibilities in performance evals
**asking anything IRRELEVANT

29
Q

how to choose good criterion measures, with examples

A

When there is great overlap between conceptual and actual criterion

30
Q

typical performance

A

level of job performance a person usually exhibits
*Long term positions should look for this

31
Q

maximum performance

A

level of job performance a person is capable of carrying out
*Short term positions should look for this

32
Q

characteristics of good criteria

A
  • high reliability
  • detects differences among employees
  • accepted by employees and supervisors
  • not too costly or disruptive
33
Q

multiple criteria; def, examples, when to use

A

each criterion measure is treated separately
- one specific element of grading determines final grade (or criterion)
EX. one exam score is the only thing that determines it
EX. number of units produced
EX. supervisor performance ratings of motivation and effort
EX. number of customer service calls

34
Q

composite criteria; def, examples, when to use

A

combination of multiple criteria
- lots of grades or exams or assignments make up a final grade (or criterion)
EX. all exams and assignments together determines it
EX. average of: number of units produced, supervisor performance ratings of motivation and effort, number of customer service calls

35
Q

task performance

A

behaviors aimed at completing the core tasks that make up a job (required, specified in the job contract; may bring punishment if not done)

36
Q

contextual performance

A

behaviors that support the social environment in the workplace (not required or even expected, but can create major benefits)

37
Q

counterproductive work behaviors (CWB)

A

theft, derailment of others, abusive leadership, property destruction

38
Q

creative performance

A

finding problems, ideation (flexibility and originality), and evaluation of ideas

39
Q

proactive (ADAPTIVE) performance

A

how well an employee adapts to the task and social environments

40
Q

subjective measure (strengths and limitations)

A

if some judgment is required to assign a grade or a numeric value to the thing being measured
EX. supervisor ratings, coworker ratings, other individuals' judgments of a person's performance

41
Q

actual/OBJECTIVE measure (strengths and limitations)

A

requires no judgement

EX. like a card swipe with a time on it to know exactly when someone came in late or something—there is no judgement
EX. multiple choice questions are more objective (when written accurately)
EX. number of units produced or sold, number of sales, amount of waste generated, absenteeism

42
Q

performance appraisal (PA) def

A

SUBJECTIVE measurement of employee performance based on pre-established criteria, and communication of this information to the employee, often conducted by supervisors; an ongoing process of observing behavior, providing feedback, and setting goals

43
Q

stages of PA

A
  1. performance appraisal systems determined
  2. rater/supervisor observes the performance of the employee during the appraisal period and gives frequent feedback/coaching
  3. rater/ratee meet to conduct an interview in which the level of performance is discussed with the employee and an action plan is created for improvement
44
Q

dual nature of PA

A

PA can be thought of as a cognitive process and a social/relational process

45
Q

purposes of PA

A
  • development—conducted to develop future performance
  • administrative—make decisions based on past performance
  • research—evaluate training effectiveness or validate selection tool
  • legal—defense for organization faced with employment discrimination claim
46
Q

characteristics of an effective PA system

A
  • strategic fit
  • content validity
  • accuracy
  • ratee reactions
  • practicality
47
Q

strategic fit

A

align PA with org’s strategy

48
Q

content validity

A

PA should cover the job domain, and be neither contaminated nor deficient (measuring what people are supposed to do in the job)

49
Q

accuracy

A

degree to which PA reflects the true level of employee performance

50
Q

ratee reactions

A

perceived fairness and usefulness of the PA by ratees

51
Q

practicality

A

ease of use and practicality of the PA

  • cannot be too expensive or disruptive
  • when there is no budget for existing performance measures, the company must develop a cheaper way to do it themselves
52
Q

trait appraisal approach (understanding, strengths, limitations)

A

focus on measuring KSAOs
- reliability, honesty, punctuality, friendliness (all fairly subjective)
**potential limitations:
personality does not say anything about whether you can do the job correctly!; focus on the person rather than performance; focus on factors that are intrinsic to the person and harder to change; tend to increase common rating biases and errors due to the vagueness of traits

53
Q

behavior appraisal approach (understanding, strengths, limitations)

A

measure the frequency with which specific observable work behaviors occur (frequency allows for things to be more quantifiable) (simply looking at BEHAVIORS, not the indiv perception of behaviors)
EX. frequency of employee greeting customers in friendly manner and address customer with their name at the register (more objective)

**while behavior appraisals provide more specific feedback and point out specific behaviors…
- time consuming to develop
- specific behaviors can result in assumption that there exists only one way to be a high performer (attendance in class is only one behavior that contributes to getting a good grade in a class)

54
Q

outcome appraisal approach (understanding, strengths, limitations)

A

focus on quantitative metrics (what employees produce)
EX. sales figures, number of units produced, level of absenteeism (objective measures)
**less subject to many rating errors and perceived as fair when tied to most important elements of a job (there is no judgement occurring)

(loss of context in understanding why the outcome is what it is; tend to be less under employee control; may not cover all aspects of job; can create ethical dilemmas, as they may incentivize or motivate problematic behavior)

55
Q

absolute rating scales (def, strengths, limitations)

A

compare performance to pre-established criteria

56
Q

relative rating scales (def, strengths, limitations)

A

compare performance of focal ratee to other ratees’ performance

57
Q

(abs) graphic rating scale (def, strengths, limitations)

A

most rating scales used by orgs are variations of this (simple, straight-forward, quantitative)

EX. smiley faces to understand a pain scale
- fails to define the performance dimensions being rated and the performance levels (subjective and capricious, not applied consistently across raters)
*not as common nowadays

58
Q

(abs) behaviorally anchored ranking scale (BARS)(def, strengths, limitations)

A

more objective and focused on specific behaviors (the most professional, reliable, objective scale used today—THIS IS THE BEST)

EX. on a scale, what is the performance of the given behavior?
- as specific elements of the behavior are completed, you know which rating to give
can give very detailed feedback while being objective
**can be very time-consuming to develop, though, because of how comprehensive the job analysis must be

59
Q

(abs) behavioral observation scale (BOS) (def, strengths, limitations)

A

**more simplified version of BARS
presents a rater with a list of behaviors and asks how frequently he or she has observed each

*easier to develop but has more room for error and subjectivity
*another limitation: what does 'frequently' actually mean? hard to make this equitable across all raters; lots of room for different interpretations, but easier to evaluate (made up of behaviors that are relevant to the job, and the rater's task is simplified)

60
Q

(abs) essay appraisal form (def, strengths, limitations)

A

takes the form of an essay—beneficial when the primary purpose is to provide feedback to employees

*often paired with a BARS or BOS to be as comprehensive as possible

61
Q

(rel) straight ranking (def, strengths, limitations)

A

rater ranks employees from strongest to weakest in terms of performance (ranking helps eliminate inflation in comparison to rating)

  • becomes more difficult as the number of employees to be rated increases
    *assumes that the performance difference between those ranked 1 and 2 is equal to the difference between 5 and 6 (but this is often not the case)
  • can be useful for decision-making purposes because it forces raters to differentiate between and identify top performers
62
Q

(rel) forced distribution (def, strengths, limitations)

A

rater places employees into different categories (excellent, average, needs improvement)

  • raters must differentiate so that only a specific percentage of employees can be placed in each category (often distributed by percentiles)
  • makes raters differentiate between employees—may be important if a company is really failing and needs a lot of reformation
    *can disincentivize teamwork and force differentiation in performance where there is no meaningful difference
63
Q

(rel) practical issue (def, strengths, limitations)

A

replace the poorest performing employees with new hires (cruel system)

  • negatively impacts members in minority groups
  • more likely to get into legal trouble
    *‘rank and yank systems’ in GE during tenure of Jack Welch
64
Q

strictness (severity) (def, examples, how to minimize the effects of appraisals)

A

giving all employees very low ratings

65
Q

leniency error (def, examples, how to minimize the effects of appraisals)

A

giving all employees a very high rating

66
Q

central tendency error (def, examples, how to minimize the effects of appraisals)

A

giving all employees mid-range ratings (everyone got very similar scores)

  • trying to avoid extremely low AND extremely high ratings
  • trying to make everyone safe by avoiding instigating unethical competition
67
Q

contrast error (def, examples, how to minimize the effects of appraisals)

A

judging in comparison to what’s around (small fish in a big pond/big fish in a small pond)

68
Q

halo effect (def, examples, how to minimize the effects of appraisals)

A

rater’s first impression of ratee may drive the entire assessment of the employee, regardless of what questions are being asked

*very problematic if the performance appraisal form consists of specific and independent performance dimensions that are not necessarily correlated with each other

69
Q

similarity and liking (def, examples, how to minimize the effects of appraisals)

A

employees who are viewed as more similar to the manager end up developing a higher quality relationship with the manager

EX. when managers have a high quality relationship with an employee, he/she rates the performance of that employee at high levels regardless of objective performance metrics
**people tend to trust others from a similar background more

70
Q

recency error (def, examples, how to minimize the effects of appraisals)

A

positive and negative events that occur just before an assessment may have undue influence on the rating

**problematic when performance appraisals are only conducted annually or semi-annually

71
Q

strategic issues: RECRUITMENT

A

increasing number of applicants, increasing the number of qualified applicants, increasing and maintaining workforce diversity

72
Q

strategic issues: METHODS (how to recruit)

A
  • career websites
  • social networking sites
  • employee referral programs
  • career fairs
73
Q

strategic issues: FACTORS AFFECTING APPLICANT ATTRACTION TO THE ORG

A
  • characteristics of the job and of the org
  • behavior of the recruiter
  • indivs perceptions of their potential fit in the org
  • image of the org (psychological and physical)
74
Q

validity importance

A

accuracy of inferences made based on test or performance data

  • there is no point in using a selection procedure with unknown or low validity
  • predictors with high validity enable orgs to hire the best talent
    **in the US, proof of validity is essential for defending tests and other measures in case they are found to have adverse impact
75
Q

content validity, ex

A

making sure your assessment will assess the abilities, skills, and knowledge that are relevant to the job; degree to which a selection procedure has been developed to sample the job in terms of the required KSAOs

  • relies heavily on detailed job analysis and SME’s opinion (make a committee to assess)
    • comes in handy in situations where there are only small samples, as it is a legally defensible method
76
Q

face validity, ex

A

(subtype of content validity) the degree to which a selection procedure seems job-related to a job applicant

EX. low face validity means people do not know the purpose of the assessment while taking it—this can help get honest answers
**face validity can be influenced by content validity, but they are NOT synonymous
- content validity involves the judgment of SMEs that the procedure is job-related and thorough documentation of that judgment; content validity is REQUIRED

77
Q

construct validity, ex

A

demonstrated by showing that a test has an expected pattern of relationships with other measures and, in the case of selection, that it measures a construct needed for the job as documented through a job analysis

EX. if an org wants to test mechanical ability to hire mechanics, they need to first ensure that the test measures mechanical ability

78
Q

criterion-related validity

A

most important type; involves showing the empirical relationship between a test and some outcome that you care about

79
Q

criterion-related validity: predictive validity (strengths and limitations)

A

(more than one time point); give tests to a group of applicants; then at a later time point, collect criterion data from the applicants you hired and correlate test scores with criterion data

  • using assessment scores to predict future job performance
    **during recruitment: involves validating the test on the population for which you plan to use it (the applicants); less susceptible to a statistical issue called range restriction, wherein there is reduced variability in test scores
80
Q

criterion-related validity: concurrent validity (strengths and limitations)

A

(once); give tests to current employees, correlate test scores with their current job performance

  • when testing an assessment for validity that will be used for recruitment in future
    + easier and faster, good for solutions in which the org is not comfortable with the idea of not immediately using test scores to make decisions
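
Both designs end with the same correlation step. A minimal sketch (all scores and ratings below are made up for illustration): compute the validity coefficient as the Pearson correlation between test scores and the performance criterion.

```python
# Sketch of the correlation step in a criterion-related validity study.
# Data are hypothetical: test scores at hiring, performance ratings later.
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

test_scores = [55, 60, 65, 70, 75, 80, 85, 90]          # selection test
performance = [2.1, 2.4, 3.0, 2.9, 3.5, 3.6, 4.0, 4.4]  # criterion data

r = pearson_r(test_scores, performance)
# a strong positive r (the validity coefficient) supports using the test
```
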
81
Q

range restriction

A

reduced variability in test scores; results from eliminating people who did not score well on the test and, later, employees who did not perform well on the job

think of the graph
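
The "graph" can be simulated in code. A sketch with made-up numbers: the correlation computed only on applicants above a hiring cutoff comes out weaker than in the full pool, which is the range restriction effect.

```python
# Demonstration of range restriction with hypothetical data: restricting
# the sample to high scorers shrinks the observed validity coefficient.
import statistics

def pearson_r(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

scores = [50, 55, 60, 65, 70, 75, 80, 85, 90, 95]            # full pool
perf   = [2.0, 2.6, 2.3, 3.1, 2.9, 3.4, 3.3, 3.8, 3.6, 4.1]  # performance

r_full = pearson_r(scores, perf)

# restricted sample: only applicants above a hiring cutoff (here, 75)
hired = [(s, p) for s, p in zip(scores, perf) if s >= 75]
r_restricted = pearson_r([s for s, _ in hired], [p for _, p in hired])
# r_restricted comes out smaller than r_full
```
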

82
Q

cross-validation

A

used to increase confidence that the validity coefficient, regression weights, or R^2 value found in one study was in fact accurate

*the larger the original validation sample used, the more accurate the study will be

83
Q

cross-validation: empirical

A
  1. conduct the criterion-related validity study
  2. using second sample, the regression weights established in the first sample are tested out to see how accurately they predict the job performance criterion in the second sample
  3. if the weights from both samples are similar, then the original findings cross-validate
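
The steps above can be sketched for a single predictor (all data hypothetical): derive regression weights in sample 1, then check how accurately those weights predict the criterion in sample 2.

```python
# Sketch of empirical cross-validation with one predictor and made-up data.

def fit_ols(xs, ys):
    """Least-squares slope and intercept for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# sample 1: derive the weights
x1 = [50, 60, 70, 80, 90]
y1 = [2.0, 2.5, 3.1, 3.4, 4.0]
slope, intercept = fit_ols(x1, y1)

# sample 2: apply the sample-1 weights, check the prediction error
x2 = [55, 65, 75, 85]
y2 = [2.3, 2.8, 3.2, 3.8]
preds = [intercept + slope * x for x in x2]
mean_abs_error = sum(abs(p - y) for p, y in zip(preds, y2)) / len(y2)
# a small error suggests the original weights hold up in the new sample
```
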
84
Q

cross-validation: statistical

A
  • MORE CONVENIENT
  • when calculating a regression equation, software uses a formula to estimate how much the R^2 value would shrink in a new sample, based on the characteristics of the original sample
85
Q

validity generalization

A

assumption that a test that is valid for one job will be valid for other, similar jobs

86
Q

situation specificity

A

belief that just because a test has been shown to be valid in one setting, you cannot assume that it will be valid in other settings, even if the two situations are similar

87
Q

civil rights act

A

Civil Rights Act of 1964 (Title VII): prohibits discrimination based on race, color, sex, religion, and national origin; the 1991 amendment outlawed 'within-group norming,' or score adjustments to reduce adverse impact

88
Q

ADEA (age discrimination in employment act)

A

employers can no longer discriminate against people aged 40 and over

89
Q

EEOC (equal employment opportunity commission)

A

the US government agency charged with monitoring employers’ activities in relation to providing equal opportunity to all groups, such as through selection procedures and pay

90
Q

adverse impact

A

does not imply intention on the part of the employer, but simply that a test or predictor favors one group over another

91
Q

disparate treatment

A

intentional discrimination on the part of an org or decision maker

EX. an overt preference of males in management jobs

92
Q

adverse impact step 1

A

plaintiff (applicants) demonstrates adverse impact (submit a claim or sue the company for what they did)

  • use the 4/5ths rule to demonstrate adverse impact against the group to which they belong (each group's selection rate = number hired ÷ number of applicants from that group; compare the minority group's rate to the majority group's)
93
Q

adverse impact step 2

A

employer must demonstrate test validity (show how their assessment is valid)

94
Q

adverse impact step 3

A

plaintiff demonstrates other predictors available that could have been used instead without adverse impact

  • if employer is able to show that the selection procedure is valid, the last recourse for the plaintiff is to show that other equally valid selection procedures with lower adverse impact were available for the employer to use
95
Q

calculating adverse impact

A

calculates the unintended effects of a selection procedure on a minority group

adverse impact ratio = SR(minority) / SR(majority); must be at least 80% (4/5ths)
**SR for a group = number hired from that group / number of applicants from that group
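
The 4/5ths calculation can be sketched in code; all applicant and hire counts below are hypothetical, for illustration only.

```python
# Sketch of the 4/5ths (80%) rule with made-up numbers.

def selection_rate(hired, applicants):
    """Selection rate for one group: number hired / number who applied."""
    return hired / applicants

def adverse_impact_ratio(minority_rate, majority_rate):
    """Ratio of the minority group's selection rate to the majority's."""
    return minority_rate / majority_rate

# Example: 20 of 100 minority applicants hired; 40 of 100 majority hired
minority = selection_rate(20, 100)   # 0.20
majority = selection_rate(40, 100)   # 0.40
ratio = adverse_impact_ratio(minority, majority)

# Below 0.80, so this procedure shows evidence of adverse impact
print(ratio, ratio >= 0.80)  # 0.5 False
```
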

96
Q

selecting predictor data

A

using integrity tests, structured interviews, and extraversion tests (but these should not overlap)

97
Q

multiple cutoff

A

all of the predictors are given at the SAME TIME

  • set cutoff scores for the procedures
    *good option for situations where administering the test and providing feedback in a timely manner is important

EX. one interview with an assessment as part of it

98
Q

multiple hurdle

A

predictors given in a sequence

**usually the more inexpensive selection procedures are given first
- applicants only proceed to the next ‘hurdle’ if they pass the previous hurdle
- this is an efficient and economic option, but generally takes longer

EX. multiple stages of interviews and assessments to get a job
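
The sequence can be sketched as a loop over hurdles; the stage names and cutoff scores below are hypothetical.

```python
# Sketch of a multiple-hurdle sequence: cheapest procedures first, and an
# applicant advances only by passing each hurdle in order.

hurdles = [
    ("resume screen", 3),     # (stage, minimum passing score) — made up
    ("phone interview", 4),
    ("work sample", 4),
]

def passes_all(scores):
    """scores: dict mapping stage -> applicant's score on that stage."""
    for stage, cutoff in hurdles:
        if scores[stage] < cutoff:
            return False      # stop at the first failed hurdle
    return True

applicant = {"resume screen": 5, "phone interview": 4, "work sample": 3}
print(passes_all(applicant))  # False: fails the final hurdle
```
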

99
Q

banding

A

treating ranges of scores as similar; saying that scores between 65 and 70 will be considered the same because they are so close

  • because all selection procedures contain some unreliability, test scores that are not far apart are essentially equivalent—in other words, such scores are not meaningfully different
    **banding can be controversial in the US because of its use as a method to reduce some differences between different ethnic groups
    **decreases adverse impact and can increase the number of minorities in the final stages of selection because those that may score lower are no longer penalized for it
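
A minimal sketch of the idea, with a hypothetical band width (in practice the width is often derived from the test's reliability, e.g., the standard error of the difference):

```python
# Sketch of score banding: scores that differ by less than the band width
# are treated as essentially equivalent. The width of 6 is made up here.

def in_same_band(a, b, width=6):
    """True if two scores are close enough to be treated as equivalent."""
    return abs(a - b) < width

print(in_same_band(65, 70))  # True: treated the same
print(in_same_band(65, 75))  # False: meaningfully different
```
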
100
Q

FALSE

A

made wrong decision (performance)

101
Q

TRUE

A

made right decision (performance)

102
Q

POSITIVE

A

scored well on test and indicated being a good candidate

103
Q

NEGATIVE

A

scored poorly on test and indicated being a poor candidate

104
Q

TEST/OUTCOME SCORE

A

FAIL = predicting failure (no further interviews offered); PASS = predicting success (more interviews or a position offered)

105
Q

false negatives

A

performance = wrong decision
outcome score = bad

didn't hire someone who would have been good; we regret not hiring them

106
Q

false positives

A

performance = wrong decision
outcome score = good

hired but turned out bad

107
Q

true positives

A

performance = right decision
outcome score = good

hired, turned out good

108
Q

true negatives

A

performance = right decision
outcome score = bad

chose not to hire and it was the right decision
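
The four cards above form a 2x2 table of selection outcomes. A minimal sketch (the boolean inputs are hypothetical decisions): the "positive/negative" half comes from the test score, the "true/false" half from how the person actually performed.

```python
# Classify a selection decision into the four true/false positive/negative
# outcomes. predicted_hire: passed the test; performed_well: actual outcome.

def classify(predicted_hire, performed_well):
    if predicted_hire and performed_well:
        return "true positive"    # hired, turned out good
    if predicted_hire and not performed_well:
        return "false positive"   # hired, turned out bad
    if not predicted_hire and performed_well:
        return "false negative"   # rejected, would have been good
    return "true negative"        # rejected, and rightly so

print(classify(True, True))   # true positive
print(classify(False, True))  # false negative
```
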

109
Q

factors affecting the utility of a selection procedure

A

monetary value of a personnel selection procedure
- Cost of selection procedure
- Validity
- Selection ratio (number of vacancies available relative to the number of job applicants)

110
Q

applicant reactions

A
  • Most preferred by applicants: work samples and interviews
  • Moderately preferred: resumes, cognitive tests, references, biodata, personality tests
  • Least preferred by applicants: integrity/honesty tests, personal contacts, graphology
111
Q

different factors that affect applicant reactions

A
  • Perceived fairness
  • Org attractiveness
  • Applicants’ perceptions about fairness of the hiring process