L5: Selection (part 2) Flashcards

1
Q

What are cognitive abilities grounded in?

A

The psychometric approach to intelligence, which has focused on understanding individuals’ ability to reason, plan, solve problems, think abstractly, learn and adapt, and process and comprehend complex ideas and information. These are not pure measures of innate ability, however.

2
Q

What is an effective manager?

A

An optimizer who uses both internal and external resources to sustain, over the long term, the unit for which the manager bears some degree of responsibility. The emphasis is on managerial actions or behaviours to optimize resources.

3
Q

Organizational responsibility

A

Context-specific organizational actions and policies that take into account stakeholders’ expectations and economic, social and environmental performance.

4
Q

What has research found on the predictive validity of measures?

A
  • test scores are positive predictors of diverse indices of academic performance, but are less strongly correlated with motivationally determined outcomes
  • specific test scores tend to be better predictors
  • cognitive ability scores predict job performance, objective leadership effectiveness and creativity
  • those with very high scores may not perform better than those with merely high scores, suggesting a ceiling effect
5
Q

What is the source of this predictive power?

A
  • cognitive abilities influence knowledge and skill acquisition during training, leading to a better job of applying skills and performing on the job; job skills have been found to mediate this relationship
  • some research argues that cognitive ability is just a proxy for SES, but this has been disconfirmed
  • validity can be incremented with other measures of personality, values, interests and habits, like the Big 5 (especially conscientiousness), which show little to no correlation with cognitive ability
6
Q

When is a test not biased?

A

A test is not biased if it reflects a capability difference between groups and if the nature of the relationship between capability and performance is similar for all groups. It would be biased if there were a systematic difference between groups and the test itself were the source of that difference. Cognitive tests have not been found to be biased, as test scores reflect differences in skills relevant for performance on the job.

7
Q

Why can’t work samples be used to measure performance?

A

It is a ‘sample’ of work behaviour, or a snapshot (i.e., how they might perform on the job), but it does not measure how they actually perform on the job over a period of time.

8
Q

What is the importance of g?

A

g has moderate validity overall and is a strong predictor of performance in learning settings, becoming stronger as the job becomes more complex. It also predicts organizational citizenship behaviour, but with smaller validity.

9
Q

What are the problems with g?

A
  • can lead to adverse impact, which is the degree to which the selection rate for one group differs from another, driven by socioeconomic and societal factors and the impact of stereotypes
  • does not include knowledge from everyday experiences (practical intelligence)
  • does not predict counterproductive work behaviour; the Big 5 predicts it better
  • scores can improve over time with repeated testing, due to memory effects
  • better at predicting maximum than typical performance
10
Q

What are the best predictors from the Big 5?

A
  • conscientiousness is the most consistent predictor of task performance and the most generalizable, with a positive relationship with OCB, but its relationship with performance is curvilinear
  • this is due to status striving (exerting effort to attain status and power) and accomplishment striving (exerting effort to complete tasks and goals)
  • extraversion is a predictor of managerial performance and adds incremental validity, but its predictive ability depends on the job type
11
Q

Core self-evaluations

A

A broad, higher-order latent construct indicated by self-esteem (the overall value one places on oneself as a person), generalized self-efficacy (one’s evaluation of how well one can perform across a variety of situations), neuroticism (one of the Big Five traits) and locus of control (one’s beliefs about the causes of events in one’s life).

12
Q

Dark triad of personality

A

Machiavellianism—lack of empathy, low affect, willingness to manipulate
Narcissism—grandiosity, entitlement, dominance, and superiority
Psychopathy—impulsivity and thrill-seeking combined with low empathy and anxiety

13
Q

When is extraversion a particularly good predictor?

A

When a significant portion of the job involves interacting with others; when combined with conscientiousness, it has strong predictive validity for managerial performance and leadership.

14
Q

When is agreeableness a particularly good predictor?

A

If the job involves interacting with others, particularly helping, cooperating with and nurturing others. High conscientiousness combined with low extraversion and low agreeableness can negatively affect performance.

15
Q

What are some other traits that can be useful but not captured by Big 5?

A
  • HEXACO -> which also includes honesty/humility in addition to Big 5
  • Self-efficacy (expectation of one’s performance across situations)
  • Dark triad for predicting counterproductive work behaviour -> psychopathy, narcissism, Machiavellianism
  • Emotional intelligence
  • Affective and cognitive empathy
16
Q

What are the issues with personality inventories?

A

Distortion of responses by applicants (faking). This can be controlled by correcting for faking, using other-rated rather than self-rated personality measures, and supplementing with other selection procedures.

17
Q

What methods address response distortion?

A

Unlikely Virtues (UV) Scale – Detects overly virtuous responses. Scores can be adjusted by penalizing extreme UV scores or used to disqualify applicants exceeding a cutoff. These methods reduce distortion without harming validity but have limitations
Idiosyncratic Response Patterns – Identifies faking through unusual response patterns rather than simple score inflation, successfully detecting 20–37% of faked responses with minimal false positives.
Observer Ratings – Personality assessments from coworkers or supervisors predict job performance better than self-reports, with incremental validity even from a single observer.
Avoiding Sole Reliance on Personality Tests – Since multiple selection methods (e.g., leadership tests) are typically used, the impact of faking is reduced.

18
Q

What are some leadership ability tests?

A

Leadership ability measures, particularly providing consideration (building trust and rapport) and initiating structure (goal-oriented leadership), are relevant to managerial success but have shown mixed predictive validity. A meta-analysis (Judge et al., 2004) found that providing consideration correlated more with job satisfaction and leader effectiveness, while initiating structure correlated more with performance. Identifying high-potential leaders requires assessing not just cognitive abilities and personality but also learning, motivation, leadership, and technical skills. Predicting managerial success improves when using specific predictors (e.g., conflict resolution) for specific outcomes rather than relying on general traits.

19
Q

Motivation to lead

A

An individual-differences construct that affects a leader’s decisions to assume leadership training, roles and responsibilities, and that affects their intensity of effort at leading and persistence as a leader.

20
Q

Work samples

A

Focus on meaningful samples of behaviour relevant for the job, through a simulation of characteristic job behaviour, as signs/indicators of how someone might act on the job. They should be related to observable job behaviour in order to understand individual behaviour.

21
Q

How can work samples be evaluated?

A
  • good validity
  • high face validity and acceptance
  • less adverse impact than GMA
  • some can be time-consuming and costly if there are many applicants
  • good for managerial level positions where costs are justified
  • tend to measure several constructs rather than one
22
Q

What is important to think about when designing work samples?

A
  • group vs individual exercises
  • bandwidth (how much of the job is covered)
  • fidelity (the extent to which tasks performed in the work sample are physically and psychologically similar to those performed on the job)
  • necessary experience (what kinds of KSAs are needed)
  • task type (psychomotor, verbal, social)
  • mode of delivery (behavioural is high fidelity; verbal and written are low fidelity)
23
Q

What are types of work samples?

A
  • leaderless group discussions
  • in-basket test
  • role-plays
  • situational judgment tests
  • structured interviews based on the critical incident technique (CIT)
24
Q

Leaderless group discussions

A

To observe applicants in a group setting and measure many aspects such as leadership, communication, cooperation, persuasion, initiating structure and consideration. Applicants are asked to discuss a topic for a period of time, which gives good face validity. There is good validity for predicting job performance and training success.

25
Q

In-basket test

A

Allows applicants to demonstrate their ability to solve problems they would encounter on the job (measuring managerial, administrative, declarative and procedural skills). They respond to a number of items accumulated in their in-basket. The items present a number of specific issues that need responses and underlying problems that need to be addressed. So the test looks at an applicant's ability to handle lots of info in a limited timeframe, prioritize workload and make decisions.
26
Q

What are some positives about the in-basket test?

A
  • good record of behaviour, allowing direct observation of behaviour
  • good validity
  • high face validity (high fidelity)
  • discriminates well
  • together with the leaderless group discussion, a good predictor of managerial success
  • ease of administration and scoring
27
Q

How can role-plays be used?

A

These simulate critical interpersonal challenges of a job. Applicants are given info about their role and a relevant scenario, and need to interact with a trained confederate or assessor. A well-designed role-play elicits a rich sample of interpersonal behaviours that can be evaluated on a number of competencies.
28
Q

What are some common challenges for role-plays?

A
  • resolving customer problems/issues
  • resolving interpersonal conflicts between employees
  • reaching compromise solutions
  • persuading others to take actions
  • coaching others
29
Q

What are role-player prompts?

A

Verbal and non-verbal cues that a role-player provides during the exercise, standardized across candidates, to elicit job-related behaviour. This leads to greater observability of behaviour and more opportunities to show relevant behaviour.
30
Q

Situational judgement tests

A
  • job applicants evaluate the effectiveness of possible responses for addressing the problem described in a scenario, similar to situational interviews
  • but low fidelity: low correspondence between the testing and work situations, as the situation is hypothetical rather than actual
31
Q

Evaluation of SJTs

A
  • inexpensive to develop, administer and score
  • good validity
  • less adverse impact in video format
  • good incremental validity for predicting job performance
  • but they measure several dimensions, so lower internal consistency
  • not always clear what the underlying construct is
32
Q

Is combining predictors better than individual predictors?

A

Yes, but it is unclear which combination is optimal. The top 5 predictors had a high mean validity, higher than any predictor individually.
33
Q

What are expats?

A

Nationals of one country sent to work in another country by the parent company, so there is a unique selection process. The focus is on context, as the technical aspects are assumed (content is less important).
34
Q

How can different contexts lead to different focuses?

A

The domestic context has a larger focus on knowledge, skills and abilities, whereas the international context focuses more on psychological aspects and biodata such as personality, language fluency, international experience and cross-cultural adjustment (the process through which an expat becomes comfortable with the job tasks).
35
Q

How is premature termination of the assignment related to cross-cultural adjustment?

A

There is a negative relationship between cross-cultural adjustment and premature termination. Premature termination is questionable as a measure of performance, since non-work satisfaction, lack of clear goals, or factors at other locations could have played a role.
36
Q

What are important predictors?

A

Personality: conscientiousness is related to work performance, and personality can help predict motivation; extraversion (learning the social culture) and agreeableness (managing conflict) can help form stronger bonds; emotional stability links to adaptability, as does openness.
Language skills: local language ability positively predicts success, and can interact with openness.
International experience: individuals who have been in other cultures before tend to adjust better.
37
Q

What are the best practices for international assignee selection?

A

1. Realistic previews: having accurate expectations before going abroad, which increases self-efficacy. This info should be given during the selection phase.
2. Self-selection: candidates assess their own fit with the personality and lifestyle requirements of the assignment, such as personality, career issues and family issues.
3. Candidate assessment: should start with assessing technical competence, followed by evaluating job-context factors like language skills and cultural adaptability; involve the family early. In practice, firms usually rely on informal recommendations.
38
Q

How are non-work predictors a challenge?

A

International assignee selection is often unstructured, despite the impact of family readiness and early international experiences on success. Organizations should support family adjustment and consider biodata predictors like past international exposure and cultural adaptability in candidate selection.
39
Q

How is the strategic importance of assignments a challenge?

A

Staffing decisions should balance autonomy, coordination, and control to enhance competitiveness. The mix of expatriates (parent-country nationals and third-country nationals) affects subsidiary performance, influenced by environmental factors like business strategy and culture. Companies must balance standardization and localization, with selection processes aligning to global vs. multidomestic strategies. More strategic roles require stricter selection criteria.
40
Q

How is the role of selection a challenge?

A

International assignments develop global competence, but individuals benefit differently based on traits like extraversion, openness, and motivation to learn. Research suggests structured selection can identify candidates best suited for developmental assignments, improving leadership effectiveness in global roles.
41
Q

How is a lack of structured selection a challenge?

A

Despite their strategic importance, firms rarely use formal selection methods for international assignments. HR is often excluded from decision-making, focusing instead on administrative tasks. However, companies increasingly recognize the need for structured selection to prevent costly placement failures and improve global talent management.
42
Q

Classical validity approach

A

Looks at the accuracy and predictive efficiency of measures.
43
Q

Decision theory

A

Focuses on utility, or the added value, of a selection method.
44
Q

What are the steps of the classical validity approach?

A

1. Select a criterion, then measure scores on the criterion.
2. Select a predictor and measure scores on the predictor (do not use it for selection until validated).
3. Assess how strong the predictor-criterion relationship is.
4. If strong, tentatively accept the predictor and cross-validate by checking periodically afterwards.
5. If not, reject the predictor and select a new one.
45
Q

What are the types of prediction models?

A

1. Multiple regression approach
2. Multiple cutoff approach
3. Multiple hurdle approach
46
Q

What is the multiple regression approach?

A

It minimizes errors in prediction and combines predictors optimally to get the most efficient estimate of criterion status. It allows you to test different combinations of predictors and to check for curvilinear relations to find inflection points. But it assumes that high scores on one predictor can substitute for low scores on another predictor: a compensatory model.
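The compensatory logic can be sketched in a few lines of Python; the intercept, regression weights and applicant scores below are purely illustrative, not values from the text:

```python
# Compensatory (multiple regression) model sketch: predictors are combined
# linearly, so a high score on one predictor can offset a low score on another.
# Intercept and weights are hypothetical, chosen only for illustration.

def predicted_criterion(cognitive, conscientiousness,
                        intercept=1.0, b_cog=0.5, b_con=0.3):
    """y-hat = a + b1*x1 + b2*x2 (illustrative regression equation)."""
    return intercept + b_cog * cognitive + b_con * conscientiousness

# Applicant A is high on cognitive ability but low on conscientiousness;
# Applicant B shows the reverse pattern. Both end up with similar predicted
# performance, which is exactly what the compensatory assumption implies.
applicant_a = predicted_criterion(cognitive=9, conscientiousness=2)  # 6.1
applicant_b = predicted_criterion(cognitive=4, conscientiousness=9)  # 5.7
```

A multiple-cutoff model would instead reject applicant A outright if 2 fell below the conscientiousness cutoff, regardless of the high cognitive score.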
47
Q

When would a multiple cutoff be necessary?

A

When proficiency on one predictor cannot compensate for a deficiency on another. For example, where minimum proficiency standards apply, applicants below the cutoff on any predictor are rejected.
48
Q

How can cutoff scores be set?

A

a) Angoff method: expert judges rate the likelihood that a minimally competent candidate would answer each item correctly; the mean percentage across judges gives the passing score.
b) Expectancy charts: show the likelihood of successful criterion performance at different predictor scores.
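As a worked example of the Angoff method (the judge ratings are made up), the passing score is the expected number of items a minimally competent candidate would answer correctly, averaged over judges:

```python
# Angoff method sketch with hypothetical ratings: each judge estimates the
# probability that a minimally competent candidate answers each item correctly.
ratings = [
    [0.7, 0.5, 0.9, 0.6],  # judge 1, items 1-4
    [0.8, 0.4, 0.8, 0.7],  # judge 2
    [0.6, 0.6, 1.0, 0.5],  # judge 3
]

n_judges = len(ratings)
n_items = len(ratings[0])

# average probability per item across judges
item_means = [sum(judge[i] for judge in ratings) / n_judges
              for i in range(n_items)]

# passing score = expected number of correct items for the minimally
# competent candidate (here 2.7 out of 4, i.e. a 67.5% cutoff)
passing_score = sum(item_means)
```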
49
Q

What is the multiple hurdle approach?

A

Sequential passing of each predictor, which is less time-consuming and less expensive in the earlier stages. Applicants could first be provisionally accepted and assessed further to determine whether they should be accepted permanently, but this additional cost for accuracy needs to be taken into account.
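The hurdle sequence can be sketched as a chain of filters; the applicant scores and cutoffs below are invented for illustration:

```python
# Multiple-hurdle sketch: applicants must clear each stage's cutoff before
# being assessed at the next (typically more expensive) stage.
applicants = {
    "A": {"screen": 7, "test": 8, "interview": 6},
    "B": {"screen": 4, "test": 9, "interview": 9},  # fails the first hurdle
    "C": {"screen": 8, "test": 5, "interview": 9},  # fails the second hurdle
}

# stages ordered cheapest-first, each with its own (hypothetical) cutoff
hurdles = [("screen", 5), ("test", 6), ("interview", 6)]

remaining = sorted(applicants)
for stage, cutoff in hurdles:
    # only survivors of the earlier hurdles take this stage, saving cost
    remaining = [a for a in remaining if applicants[a][stage] >= cutoff]

# note the non-compensatory logic: B's excellent test and interview scores
# never compensate for the failed screening stage
```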
50
Q

What are the implications of the multiple hurdle approach?

A

Multiple hurdles can restrict the validity of predictor tests because, as candidates progress through each stage, the pool of applicants becomes more homogeneous. This results in restricted range in predictor scores, leading to a smaller observed validity coefficient than if the entire applicant pool were used. To correct for this, researchers like Mendoza et al. (2004) proposed methods to estimate what the population-level validity would be without the restrictions caused by the multiple-hurdle process.
51
Q

What does the utility (added benefit/value) of a predictor depend on?

A

1. The accuracy or validity of the predictor (criterion validity)
2. The selection ratio: the % of applicants from the applicant population that are accepted
3. The base rate of success: the % of employees who are effective under the current/old selection procedures
4. The costs and benefits of selection decisions (e.g., recruitment, selection, training)
52
Q

How can the base rate and selection ratio affect decision-making?

A
  • if the base rate is very high, it is more difficult for a selection procedure to improve it, as the applicant pool is already high quality
  • if the base rate is very low, there are many unqualified applicants for a vacancy, so there is not much room for improvement
  • if the selection ratio is very high, you need to hire nearly everyone, so even a highly predictive selection method adds little (the lower the SR, the more selective you can be)
53
Q

What is the utility of a selection procedure?

A

The degree to which its use improves the quality of the individuals selected beyond what would have occurred had that selection procedure not been used (i.e., relative to an old selection procedure). Quality is defined differently depending on the model.
54
Q

Why is criterion validity still important?

A

The higher the validity, the more accurately/precisely you can find the best future performers, so there is a bigger increase in the performance of new employees hired with the procedure. With lower validity, the increase in performance is smaller, as there are more errors.
55
Q

What is the purpose of utility models?

A

To determine the added benefit of using a selection procedure over selecting randomly. They usually give tables to calculate success rates, performance gains and financial gains of selection procedures. This depends on: the validity of the selection method, the selection ratio, the base rate, the criterion cutoff and the costs of testing.
56
Q

What is the difference between the classical validity approach and the decision-theory (utility) approach?

A

The classical validity approach is focused on improving predictive accuracy but does not factor in the differing costs or benefits of erroneous acceptances and rejections. The utility-based approach adds a layer of decision-making that weighs these outcomes based on their practical impact on the organization, which is crucial for making optimal selection decisions. By incorporating both the accuracy of predictions and the value of outcomes, organizations can improve their selection processes to better meet their specific needs and goals.
57
Q

What is the main criticism of the decision-theory approach?

A

Errors of measurement are not considered in setting cutoff scores, so some applicants can be treated unfairly, especially those who just missed the cutoff mark. Cutoffs are appropriate for predicting attributes, but with measurements it is important to know by how much we have missed the mark; the standard error of estimate tells us this.
58
Q

Taylor-Russell model

A

Utility is defined as the % of selected applicants who are judged successful: the success ratio. Performance is a dichotomous variable, either success or no success. As validity increases and the selection ratio decreases, the proportion of successful performers increases.
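The Taylor-Russell logic can be illustrated with a small Monte Carlo simulation (the parameters are arbitrary, and the published model uses bivariate-normal tables rather than simulation):

```python
import random

def success_ratio(validity, selection_ratio, base_rate, n=100_000, seed=1):
    """Estimate the Taylor-Russell success ratio by simulation.

    Predictor x and criterion y are bivariate normal with correlation
    `validity`; applicants are selected top-down on x, and 'success' is
    defined so that `base_rate` of the whole pool would succeed.
    """
    rng = random.Random(seed)
    pool = []
    for _ in range(n):
        x = rng.gauss(0, 1)
        y = validity * x + rng.gauss(0, (1 - validity ** 2) ** 0.5)
        pool.append((x, y))

    # success threshold on the criterion so that base_rate of everyone passes
    threshold = sorted(p[1] for p in pool)[int(n * (1 - base_rate))]

    # select the top selection_ratio fraction on the predictor
    pool.sort(key=lambda p: -p[0])
    selected = pool[: int(n * selection_ratio)]
    return sum(y > threshold for _, y in selected) / len(selected)

# holding SR and base rate fixed, higher validity -> higher success ratio
low = success_ratio(validity=0.1, selection_ratio=0.2, base_rate=0.5)
high = success_ratio(validity=0.6, selection_ratio=0.2, base_rate=0.5)
```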
59
Q

Naylor-Shine model

A

Utility is seen as the average increase in criterion performance for selected applicants (performance gain). Performance is treated as a continuous variable, as utility is the average criterion score; it depends on validity and the SR.
60
Q

Brogden-Cronbach-Gleser (BCG) model

A

Extends the N-S model by expressing utility in financial terms. Utility is the financial payoff to the organization from the increased performance of selected employees (financial gain). It depends on validity, the SR, the costs of testing and the financial value of applicants' performance. Gains increase as the SR decreases, and gains from the selection method are related to its validity.
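The BCG equation is commonly written as ΔU = N · r · SDy · Z̄x − C. A quick sketch with invented figures:

```python
# Brogden-Cronbach-Gleser utility sketch. All figures below are hypothetical.
def bcg_utility(n_hired, validity, sd_y, mean_z_selected, total_cost):
    """delta-U = N * r * SDy * Zx_bar - C

    n_hired:          number of people hired
    validity:         criterion validity of the selection method (r)
    sd_y:             standard deviation of job performance in dollars
    mean_z_selected:  average standardized predictor score of those hired
                      (larger when the selection ratio is lower)
    total_cost:       total cost of testing all applicants
    """
    return n_hired * validity * sd_y * mean_z_selected - total_cost

# hire 10 people with a method of validity .40, SDy = $20,000, an average
# hired z-score of 1.0, and $5,000 in total testing costs
gain = bcg_utility(n_hired=10, validity=0.40, sd_y=20_000,
                   mean_z_selected=1.0, total_cost=5_000)
# gain = 10 * 0.40 * 20000 * 1.0 - 5000 = 75,000 dollars
```

The mean_z_selected term is where the selection ratio enters: being more selective raises the average predictor score of those hired, and with it the utility.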
61
Q

How to improve understanding of paths to executive success?

A

1. Describe the components of executive success in behavioural terms.
2. Develop behaviourally based predictor measures to show different aspects of managerial success.
3. Map the interrelationships among individual behaviours.
62
Q

What is the role of assessment centers?

A

Bringing together many instruments and techniques of managerial selection. Consider using assessment center ratings in managerial selection contexts.
63
Q

What are the most important aspects of personnel selection?

A

Personnel selection involves measuring and predicting job performance to assign individuals to roles. Traditional approaches focus on maximizing measurement accuracy and predictive efficiency. However, decision theory emphasizes evaluating selection outcomes based on their real-world impact on individuals and organizations. Thus, measurement and prediction are seen as tools within a broader decision-making system rather than ultimate goals.
64
Q

What are the prediction strategies based on how data is collected and combined?

A
  • Pure clinical strategy – both data collection and combination are judgment-based (e.g., an unstructured interview where the interviewer makes an open-ended prediction)
  • Behavior/trait rating – data is collected judgmentally but summarized in standardized rating forms
  • Profile interpretation – data is collected mechanically (e.g., personality tests), but a decision-maker interprets the results judgmentally
  • Pure statistical strategy – both data collection and combination are mechanical (e.g., using test scores and statistical models)
  • Clinical-composite strategy – data is collected both judgmentally and mechanically but combined judgmentally (common in hiring decisions)
  • Mechanical-composite strategy – data is collected both ways but combined mechanically using prespecified rules (e.g., regression equations)
65
Q

Which data combination strategies are more effective?

A

Mechanical methods (e.g., statistical models, regression equations) consistently outperform clinical (judgment-based) methods in predicting outcomes such as job performance, advancement, training success, and GPA.
66
Q

How can utility estimates reduce management support for selection programs?

A

Research found that managers were less likely to support a proposed selection system when presented with utility analysis, even if it showed substantial financial benefits. Managers who received utility analysis along with other supporting information were still less likely to commit resources to the proposed system, suggesting that utility analysis may not always be effective in gaining management support.
67
Q

What is the classical validity approach?

A

The classical validity approach to employee selection emphasizes measurement accuracy and predictive efficiency. Within this framework, multiple regression is used to forecast job success. In some situations compensatory models are inappropriate, in which case non-compensatory models, such as the multiple-cutoff or multiple-hurdle approaches, are more suitable.
68
Q

What is an issue with classical validity?

A

The classical validity approach is incomplete, for it ignores the effects of the selection ratio and base rate, makes unwarranted utility assumptions, and fails to consider the systemic nature of the selection process. Decision theory, which forces the decision maker to consider the utility of alternative selection strategies, is a more suitable alternative.
69
Q

Why should a single-attribute utility analysis not be used?

A

It focuses mainly on the validity coefficient and may not be sufficient to convince top management of the value added by a proposed selection system. Consider strategic business issues by conducting a multi-attribute utility analysis.