Selection Flashcards

1
Q

The process through which organizations make decisions about who will or will not be allowed to join the organization

A

Personnel selection

2
Q

Selection begins with

A

The candidates identified through recruitment

3
Q

Selection ends with

A

The selected individuals placed in jobs within the organization

4
Q

Some feature of a person you are hiring

A

Predictor

5
Q

Some organizationally relevant outcome

A

Criterion

6
Q

Some feature you want to assess

A

Construct

7
Q

The actual score you observe

A

Measure

8
Q

These are the tools used to make predictions about job applicants

A

Selection methods or devices

9
Q

The goal of selection is to

A

Legitimately discriminate among applicants

10
Q

The goal of this is to legitimately discriminate among applicants

A

Selection

11
Q

Different selection methods may be more/less appropriate to use depending on

A

The specific job for which you are selecting

12
Q

5 criteria for evaluating any selection method

A
Reliability
Validity
Generalizability
Utility
Legality
13
Q

The extent to which a measurement is free from random error

A

Reliability

14
Q

The more random error associated with a measure,

A

The less reliable it will be

15
Q

The less reliable a measure is,

A

The less precise we can be in interpreting the scores it provides

16
Q

What measures reliability

A

A correlation coefficient (standardized measure of association, r); the upper limit of the correlation between two measures is the square root of the product of their reliabilities
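This attenuation bound can be sketched in Python; the reliabilities below are hypothetical, chosen only to illustrate r_xy ≤ √(r_xx · r_yy):

```python
import math

def max_observed_correlation(rxx: float, ryy: float) -> float:
    """Upper bound on the observed correlation between two measures,
    given their reliabilities (the classic attenuation formula)."""
    return math.sqrt(rxx * ryy)

# Hypothetical example: a predictor with reliability .80 and a
# criterion with reliability .60 can correlate at most ~.69
bound = max_observed_correlation(0.80, 0.60)
print(round(bound, 3))  # 0.693
```

Unreliability in either measure therefore caps validity before the selection device is even evaluated.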

17
Q

In the selection context, this refers to the extent to which performance on the selection device/test is associated with performance on the job

A

Validity

18
Q

_____ is necessary for validity but not sufficient

A

Reliability

19
Q

Selection device does not measure all important aspects

A

Deficient

20
Q

Selection device measures some irrelevant aspects

A

Contaminated

21
Q

Three ways of measuring validity (accepted by the government’s Uniform Guidelines on Employee Selection Procedures)

A

Criterion-related
Content-related
Construct-related

22
Q

This involves empirically assessing the relationship between scores on a selection device and scores on a “criterion”

A

Criterion related validity

23
Q

The correlation between the two sets of scores assessed in criterion-related validation is referred to as

A

A validity coefficient

24
Q

This was invented by Karl Pearson and Sir Francis Galton, who conducted research on genetics

A

Correlation (validity) coefficient

25
Q

Correlation (validity) coefficient ranges from

A

-1 to 1
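A short sketch of how such a coefficient is computed; the predictor and performance scores below are hypothetical:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance term: co-deviation of each pair from its mean
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # Standard-deviation terms (unscaled; the constants cancel)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical selection-test scores vs. later performance ratings
test = [10, 20, 30, 40, 50]
perf = [2, 3, 4, 4, 5]
print(round(pearson_r(test, perf), 2))  # 0.97
```

A value near +1 or −1 is a strong relationship; the sign gives the direction (see the later cards on effect size and direction).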

26
Q

The strength of the relationship

A

Effect size (absolute value)

27
Q

Strong correlation r

A

.5

28
Q

Moderate correlation r

A

.3

29
Q

weak correlation r

A

.1

30
Q

Whether the relationship is positive or negative

A

Direction

31
Q

Three relationships with strong correlation (r=.5)

A

Intelligence-job performance relationship
Knowledge-job performance relationship
Structured interview score-job performance relationship

32
Q

Three relationships with moderate correlation (r=.3)

A

Conscientiousness-job performance relationship
School grades-job performance relationship
Integrity tests-job performance relationship

33
Q

This uses the test scores of all applicants and looks for a relationship between the scores and the future performance of the applicants who were hired

A

Predictive validation

34
Q

Drawback of predictive validation

A

Takes a long time, cannot collect DV for a while, may not want to wait to use a “great” test

35
Q

This consists of administering a test to people who currently hold a job, and then comparing their scores to existing measures of job performance

A

Concurrent validation

36
Q

Drawbacks of concurrent validation

A
May not be representative of applicants (may learn things on the job, may be less motivated to perform well on the test)
Restricted range (may predict better with more variance)
37
Q

This involves using expert opinions/judgements that the items, questions, or tasks used in a selection test are representative of the kinds of situations, problems, or tasks that occur on the job

A

Content-related validity

38
Q

When developing content validity measures (4)

A

Look at the job analysis you have already done
Compare your current and proposed methods of assessment to the KSAO or job competency matrix
Try to develop new measures of assessment that are especially relevant for each of the job components
Reject all measures that are not demonstrably related to documented KSAOs or competencies

39
Q

Consistency between a high score on a test and a high level of a construct (e.g. intelligence or leadership ability), as well as between mastery of this construct and successful performance of the job

A

Construct-related validity

40
Q

Criterion-related validity connects what?

A

Measures: the predictor measure and the criterion measure

41
Q

Content related validity connects what

A

Predictor measure and criterion construct

42
Q

Construct related validity connects what

A

Constructs and measures:
(Predictor construct and predictor measure)
(Criterion construct and criterion measure)

43
Q

These apply not only to the conditions in which the method was originally developed (a specific job, organization, industry, etc.) but also across other settings

A

Generalizable selection methods

44
Q

These often measure stable traits (e.g. GMA and personality) or generic skill sets (e.g. interviews and situational judgement tests)

A

Generalizable selection methods

45
Q

This takes all of the correlations found in studies of a particular relationship and calculates a weighted average (such that correlations from studies with large samples are weighted more)

A

Meta-analysis
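A minimal sketch of that weighted average, assuming each study is summarized as a (sample size, correlation) pair; the study values below are hypothetical:

```python
def weighted_mean_r(studies):
    """Sample-size-weighted average correlation across studies.
    Each study is an (n, r) pair; larger samples get more weight."""
    total_n = sum(n for n, _ in studies)
    return sum(n * r for n, r in studies) / total_n

# Three hypothetical studies of the same predictor-performance relationship
studies = [(100, 0.30), (400, 0.50), (500, 0.20)]
print(round(weighted_mean_r(studies), 3))  # 0.33
```

Note how the estimate sits closer to the correlations from the larger studies than a simple unweighted mean would.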

46
Q

This is a quantitative, rather than qualitative, review of studies

A

Meta-analysis

47
Q

_____ of a selection device is the degree to which its use improves the quality of the individuals selected

A

Utility

48
Q

Procedures for this offer organizational decision-makers useful information regarding the relative values of different selection tools

A

Utility analysis
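One widely used utility-analysis formula is the Brogden-Cronbach-Gleser model; the sketch below assumes that model, and all the dollar figures and parameter values are hypothetical:

```python
def utility_gain(n_hired, tenure_years, validity, sd_y, mean_z_hired, total_cost):
    """Brogden-Cronbach-Gleser utility estimate: dollar gain from using a
    selection device, relative to random selection, minus testing costs."""
    return n_hired * tenure_years * validity * sd_y * mean_z_hired - total_cost

# Hypothetical: hire 10 people staying ~2 years, test validity .50,
# SDy (dollar value of 1 SD of performance) = $20,000, average standardized
# predictor score of those hired = 1.0, total testing cost = $50,000
gain = utility_gain(10, 2, 0.50, 20_000, 1.0, 50_000)
print(gain)  # 150000.0
```

This kind of estimate lets decision-makers compare the payoff of alternative selection tools in dollar terms.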

49
Q

What’s the best way to show job-relatedness

A

Through criterion-related validation

50
Q

This is when elements of the selection system appear valid (job-related) to applicants

A

Face validity

51
Q

High ______ results in less negative reactions and increased motivation to perform on the test/exercise

A

Face validity

52
Q

These are hands-on simulations of part or all of the job that must be performed by applicants

A

Work sample tests

53
Q

Work samples consist of (3)

A

Actual physical mock-ups of job tasks
In-basket exercises for managerial tasks
Examples of similar work done for another organization

54
Q

Work sample content validity and criterion-related validity?

A

Highest level of content validity possible

High criterion-related validity (.54)

55
Q

2 negatives of a work sample

A

Work samples can only be used with applicants who already know the job or have been trained for the occupation or job
Work samples are costly to develop and run, with costs generally increasing as job-complexity increases

56
Q

General information processing capacity that facilitates reasoning, problem solving, decision making, and other higher order thinking skills

A

General Mental Ability (g, IQ, Intelligence)

57
Q

Not the amount of information people know, but rather their ability to recognize, acquire, organize, update, select, and apply it effectively

A

GMA, g, IQ, Intelligence

58
Q

This is the major key to GMA and the major distinction among jobs

A

Complexity

59
Q

GMA and cognitive tests validity?

A
High-complexity jobs (.58)
Medium-complexity jobs (.53)
Low-complexity jobs (.23)
Counter-productive work behaviors (-.33)
GPA (.41)
Income (.2)
60
Q

This posits that individuals, over the course of their labor market experiences, will sort themselves into jobs that are compatible with their interests, values, and abilities

A

The gravitational hypothesis

61
Q

3 Negatives of using GMA in selection

A

Has been shown to have severe group differences and lead to adverse impact
Managers may use score banding method in HR selection process
Standard Error of the Differences (SED) bands are created such that differences in the same band may be by chance (logically flawed)

62
Q

Solution for negatives of using GMA

A

Non-cognitive measures should also be used in the selection process

63
Q

The five-factor model (FFM) of personality consists of

A
Conscientiousness
Emotional Stability
Extraversion
Agreeableness 
Openness
64
Q

Taken together, this provides a comprehensive yet parsimonious framework to examine the relationship between specific personality traits and job outcomes

A

The Five-Factor Model

65
Q

Described as being dependable, careful, thorough, responsible, and organized
Achievement-oriented, hardworking, and persevering
Predicts job performance (.28) and leadership (.28)
Sample items include:
I am always prepared
I pay attention to details
I make plans and stick to them

A

Conscientiousness

66
Q

Described as being relaxed, unenvious, tranquil, secure, and content
Often referred to as the opposite pole of neuroticism
Predicts job performance (.16) and job satisfaction (.29)
Sample items include:
I seldom feel blue
I feel comfortable with myself
I am not easily bothered by things

A

Emotional stability (neuroticism)

67
Q

Described as being sociable, gregarious, assertive, active, and dominant
Predicts sales performance (.28) and leadership (.31)
Sample items include:
I feel comfortable around people
I make friends easily
I am the life of the party

A

Extraversion

68
Q

Described as being curious, flexible, trusting, cooperative, and forgiving
These individuals prefer tasks calling for helping but dislike tasks calling for conflict (negotiation)
Predicts workplace defiance (-.44) and teamwork (.34)
Sample items include:
I make people feel at ease
I trust what people say
I treat all people equally

A

Agreeableness

69
Q

Described as being imaginative, cultured, curious, original, and broad-minded
Prefer self-direction and flexibility of idea organization
Predicts leadership (.24) and workplace accidents (.5)
Sample items include:
I have a vivid imagination
I enjoy hearing new ideas
I enjoy thinking about things

A

Openness

70
Q

Negative of a personality assessment

A

Self-reported personality, and subsequent response distortion

71
Q

Structured interviews consist of(4)

A

Evaluations standardization
Question consistency
Question sophistication
Rapport building

72
Q

The use of a formal rating system applied to each candidate

A

Evaluations standardization

73
Q

The consistent wording and ordering of questions asked by the interviewer

A

Question consistency

74
Q

The types of questions (behavioral or situational) given

A

Question sophistication

75
Q

The questions asked at the beginning of the structured interview to get to know each candidate

A

Rapport building

76
Q

5 things interviews are measuring

A
Mental capability
Declarative job knowledge and skills
Personality traits (FFM)
Applied social skills
Fit with the values of the organization
77
Q

4 negatives of interviews

A

Self-presentation tactics
Evidence of applicant misinformation and over-preparing
Interviewers were very confident they could identify the best candidates
Pre-interview impressions and confirmation bias

78
Q

These are high-fidelity simulations where assessees (current or future employees) are rated on a number of job-based exercises with the intent of predicting actual behavior on a job

A

Assessment Centers (ACs)

79
Q

In these, participants work through a series of behavioral exercises (e.g. Job simulations, in-baskets, and role plays)

A

Assessment centers

80
Q

Predictive validity of assessment centers

A

Job performance (.36), managerial potential (.53), training (.35), career advancement (.36)

81
Q

Negatives of assessment centers

A

Can only be used for certain jobs, typically managerial jobs
Considerations must be given to:
High cost
Time to create proper AC
time to conduct assessment
Pre-selected individuals must “go away” to participate

82
Q
These are designed to directly assess attitudes regarding dishonest behaviors
Job performance (.14) CWB (.38)
A

Overt integrity tests (clear purpose tests)

83
Q

These tests specifically ask about past illegal and dishonest activities

A

Overt integrity tests (clear purpose tests)

84
Q
These use composite measures of personality dimensions, such as reliability, conscientiousness, and trustworthiness
Job performance (.18) CWB (.27)
A

Personality based measures (disguised purpose tests)

85
Q

These present applicants with a work related situation and multiple possible responses to the situation
Applicants are then forced to evaluate and pick from the alternative courses of action

A

Situational Judgement Tests (SJT)

86
Q

Items on SJTs with behavioral tendency instructions (what would you do?) have higher correlations with

A

Personality constructs and are reflective of typical performance

87
Q

Items with knowledge instructions (what should one do?) have higher correlations with

A

Cognitive ability and are reflective of maximal performance

88
Q

Why are grades used in selection

A

They reflect intelligence, motivation, and other abilities applicable to the job

89
Q

This contains questions about past life experiences

A

Biographical data measures (or Biodata)

90
Q

Hire sequentially based on the first applicants who score above the cut score for the job

A

Minimum qualification

91
Q

the minimum level of performance that is acceptable for an applicant to be considered minimally qualified

A

Cut scores

92
Q

Give job offers starting from the most qualified and progressing to the lowest score

A

Top-down hiring
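Top-down hiring is just a ranking rule; a minimal sketch, with hypothetical applicants and scores:

```python
def top_down_offers(applicants, n_openings):
    """Rank applicants by score and extend offers from the top down."""
    ranked = sorted(applicants, key=lambda a: a[1], reverse=True)
    return [name for name, _ in ranked[:n_openings]]

# Hypothetical applicant pool, two openings
print(top_down_offers([("Dee", 82), ("Eli", 91), ("Flo", 77)], 2))
# ['Eli', 'Dee']
```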

93
Q

Arriving at a selection decision in which a very high score on one type of assessment can make up for a low score on another

A

Compensatory model
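The compensatory model amounts to a weighted sum across assessments; the candidates, scores, and 60/40 weighting below are hypothetical:

```python
def compensatory_score(scores, weights):
    """Weighted sum across assessments: a high score on one device
    can offset a low score on another."""
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical candidates scored (0-100) on a GMA test and an interview
alice = compensatory_score([90, 50], [0.6, 0.4])  # strong test, weak interview
bob = compensatory_score([70, 70], [0.6, 0.4])    # even profile
print(alice, bob)
```

Here Alice’s high test score compensates for her weak interview, so she outranks Bob despite the lower interview score.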

94
Q

Process of arriving at a selection decision by eliminating some candidates at each stage of the selection process

A

Multiple-Hurdle System
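In contrast to the compensatory model, a multiple-hurdle system eliminates candidates at each stage; a sketch with a hypothetical three-stage pipeline and cut scores:

```python
def multiple_hurdle(candidates, hurdles):
    """Eliminate candidates stage by stage: only those meeting every
    stage's cut score, in order, remain in the pool."""
    remaining = list(candidates)
    for stage, cut in hurdles:
        remaining = [c for c in remaining if c["scores"][stage] >= cut]
    return [c["name"] for c in remaining]

# Hypothetical pipeline: resume screen, then work sample, then interview
pool = [
    {"name": "Ana", "scores": {"resume": 8, "sample": 9, "interview": 7}},
    {"name": "Ben", "scores": {"resume": 9, "sample": 4, "interview": 9}},
    {"name": "Cal", "scores": {"resume": 5, "sample": 8, "interview": 8}},
]
print(multiple_hurdle(pool, [("resume", 6), ("sample", 5), ("interview", 6)]))
# ['Ana']
```

Cheap screens go first and expensive assessments last, so costly stages are run only on the surviving candidates.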

95
Q

Multiple Hurdle Systems allow organizations to balance the trade-off between

A

cheap or generic tests that may miss important characteristics, and extensive tests/assessments that are costly and time-consuming