I/O Psych - recall cites Flashcards

1
Q

Leslie et al., 2014

A

This is a meta-analysis on the stigma surrounding affirmative action policies (AAPs).

  • The presence of an AAP influenced others’ perceptions of the beneficiaries’ competence and warmth, which in turn predicted their evaluations of the beneficiaries’ performance.
  • When examining self-stigma, the authors demonstrated that beneficiaries were likely to doubt their own competence, which led to poorer performance, both objectively and as evaluated by others.
2
Q

The Uniform Guidelines debate.

A

Debate about Uniform Guidelines

  • McDaniel et al. (2011) argued the Uniform Guidelines (1978) are scientifically inaccurate and inconsistent with professional practice due to their focus on situational specificity, their lack of discussion of differential validity/prediction and the diversity-validity dilemma, and the arbitrariness of the 4/5ths rule.
  • Outtz et al. (2011) responded that there is no readily available replacement and that the case law generated by the UG would remain. Additionally, there is no need to abolish or revamp the entire document when a) some of it doesn’t need to have its basis in science, and b) some of it has actually been the impetus for science.
3
Q

Adler et al., 2016

A

This is an IOP (Industrial and Organizational Psychology) focal article debate on performance management vs. performance appraisal.

  • Arguments against performance appraisal include failed interventions, rater disagreement, weak evaluative criteria, conflicting purposes.
  • Arguments for keeping PA include the need for performance mgmt, the inherent evaluation of performance regardless of PA, and the merits of ratings
  • PM should:
    • Enable employees to align their efforts to org goals
    • Provide guideposts to monitor behavior and make real-time performance adjustments
    • Help employees remove barriers to success
  • If managers engaged in effective day-to-day PM behavior as needed, there should be less (if any) need for formal PM systems, such as appraisals/ratings
4
Q

Campbell et al., 1993

A

Basic model of work performance

  • It’s an expansion of Project A (the Army studies) to make it more appropriate for non-military jobs.
  • Argued that there had been so much focus on predictors but not enough on the criterion.
  • 8 components of performance, 3 of which are an essential part of all jobs: demonstrating core task proficiency, effort, and personal discipline. Others may or may not apply, e.g., management, facilitating team performance, written/oral comms.
  • Valuable middle ground approach to viewing performance, rather than being one single broad thing that applies to all orgs, or being something that’s unique in each job. Also helps us focus on what parts of behavior are under employee control.
  • Empirically supported
  • Was updated in 2012 - revised to define the 8 factors as concretely as possible
5
Q

Borman & Motowidlo, 1993; 1997

A

Distinguishes between task & contextual performance behaviors, & presents a taxonomy of contextual performance (containing elements of OCB & prosocial behavior).

  • Influenced by Project A and other studies, they proposed a 2-factor model of performance (1993); the 1997 paper clarified some vague definitions and spelled out 5 categories of contextual performance
  • Task performance: the proficiency with which job incumbents perform activities that are formally recognized as part of their job
  • OCB/contextual performance: behaviors that go beyond task perf and instead support the organizational, social, and psychological context in which work is performed.
  • 5 categories of contextual perf: 1) Persisting with enthusiasm & extra effort to complete own tasks; 2) Volunteering for extra-role activities; 3) Helping & cooperating with others 4) Following org rules & procedures; 5) Endorsing, supporting & defending org objectives
  • Supervisors consider contextual performance when making overall performance ratings and weight it approximately as highly as task
  • Cog ability predicts task perf; Personality predicts contextual performance.
6
Q

Podsakoff et al., 2000

A

This is a meta on OCBs.

  • OCBs uniquely accounted for much more of performance evaluations than task performance (43% for OCBs; 10% for task). Together they accounted for 62%.
  • There are 6 factors that reflect OCB content and they overlap with subfactors of contextual performance.
  • Antecedents (causes) of OCBs fall into 4 categories:
    • Employee characteristics (e.g., conscientiousness, agreeableness, emotional maturity)
    • Task characteristics (e.g., feedback, intrinsically satisfying tasks (job design), low role conflict and role ambiguity) - so jobs can be redesigned to encourage OCBs
    • Org characteristics (i.e., org justice, org support, goal setting, org climate) - orgs can change these to encourage OCBs
    • Leadership behaviors: communicating vision, role modeling, fostering acceptance of group goals, intellectual stimulation
7
Q

Types of Behavior in Performance

A
Task Performance: primary facet- how well someone does their job
Org Citizenship (OCB): voluntary helping behaviors that support work context (aka contextual performance, Borman & Motowidlo, 1997; Podsakoff et al., 2000: explains more variance in ratings than task)
Adaptive performance: how an employee responds to changes (existing or anticipated) in the environment (Pulakos et al., 2000) - e.g., handling work stress, learning new procedures
Counterproductive work behaviors (CWB): harmful behaviors; orgs try to avoid selecting people prone to them
8
Q

4 Appropriate Characteristics of Job Performance Measures

A

4 appropriate characteristics of job performance measures:
-Individualization - must be data about performance that the individual controls
-Relevance - must be focused on critical parts of the job - i.e., being prompt may not be critical
-Measurability - must be able to generate a # that represents the amount or quality of work performed.
-Variance - Scores must have differences among them to be able to distinguish high/low performers
Gatewood et al., 2019

9
Q

Forms of Rater Error, how to avoid

A

4 types of rater error (introduce bias, usually unintentional).

1) Halo: rating the subordinate equally on diff performance items (scales) based on overall impression of worker
2) Leniency: when a disproportionate number of workers get high ratings
3) Severity: when a disproportionate number get low ratings
4) Central tendency: when a large number get ratings in the middle of the scale

Best way to overcome them is to train supervisors how to avoid them.
Gatewood et al., 2019

10
Q

Griffin et al., 2007

A

This is a theory paper with empirical support on a 3x3 model of work role performance.

  • Does not focus directly on latent structure of performance.
  • Instead posits 3x3 classification of work role behaviors in which one dimension represents org level (indiv, team, org) and the other goes from proficiency on prescribed tasks to proficiency in adapting to changes in indiv, team, or org requirements, to being proactive in instituting new methods or solutions at indiv, team, or org level.
  • Three items (i.e., rating scales) assess proficiency within each of the nine cells. The level dimension seems to assess individual task performance, support in teams, and the management role.
  • Proactivity resembles OCBs; adaptivity parallels Pulakos et al., 2000.
11
Q

Heilman & Chen, 2005

A

This was an empirical study on OCBs and gender.

Men engaging in OCBs were viewed positively. Women displaying the same behavior were seen as simply doing their jobs.

12
Q

Van Iddekinge, et al., 2017

A

This is a meta-analysis on interaction of cog ability and motivation on performance.

  • Cog ability and motivation are weakly correlated. They are largely independent, and both relate to performance. This suggests that orgs should measure both variables to predict future job performance rather than emphasizing one over the other.
  • Additionally, the effects of ability and motivation on performance are additive rather than multiplicative. This suggests applicants should be allowed to compensate for lower ability scores with higher motivation scores and vice versa. Instead of multiple hurdles/cutoffs for each, it could be more effective to set a minimum total score for the two measures combined (see the sketch below).
  • Higher-motivation employees could be more impacted by ability training (the interaction was present, it just didn’t account for much of the variance in performance).
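
A minimal sketch of the additive/compensatory point above (mine, not from the article): the applicant scores and cutoffs are hypothetical, standardized values, used only to contrast a multiple-hurdle rule with a combined minimum total score.

```python
# Illustrative sketch: multiple-hurdle vs. compensatory (combined-cutoff) selection.
# All scores and cutoffs are hypothetical.

applicants = {
    "A": {"ability": 0.9, "motivation": 0.4},   # strong ability, modest motivation
    "B": {"ability": 0.6, "motivation": 0.7},   # balanced
    "C": {"ability": 0.3, "motivation": 0.9},   # compensates with motivation
}

HURDLE_CUT = 0.5    # per-measure cutoff (hypothetical)
TOTAL_CUT = 1.2     # combined-score cutoff (hypothetical)

for name, s in applicants.items():
    passes_hurdles = s["ability"] >= HURDLE_CUT and s["motivation"] >= HURDLE_CUT
    passes_total = s["ability"] + s["motivation"] >= TOTAL_CUT
    print(f"{name}: multiple-hurdle={'pass' if passes_hurdles else 'fail'}, "
          f"compensatory={'pass' if passes_total else 'fail'}")
```

Applicant C illustrates the compensatory logic: they fail the ability hurdle but clear the combined cutoff.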
13
Q

Job Analysis Cites

A

Gatewood et al. 2019
Job analysis is a purposeful, systematic process for collecting information on the important work-related aspects of a job.

Cascio & Aguinis, 2018

  • JA is used to define the job in terms of the behaviors necessary to perform it and to develop hypotheses about what personal characteristics are needed to perform them.
  • JA comprises a job description (what the work is) and job specifications (what personal characteristics are needed to do it). Specs should include minimally acceptable standards for selection and later performance (e.g., must it be performed at entry? must barely acceptable employees be able to do the task/have the KSA? how important is it?)
  • MQ profiles created from this method that meet the rating criteria for the descriptions/specs are reviewed and rated on level (quality of applicant) and clarity to applicants, as well as on linkage back to the tasks/KSAs underlying the MQs.
  • No single type of job-analysis data can support all HR activities. Critical to align method with purpose
  • Methods: critical incident, interviews, SME panels, direct observation, questionnaires
  • CMs focus on identifying broader characteristics of individuals & using these chars to inform HR practices. Linked more to biz strategy and prescriptive not descriptive. But rigor and documentation of JA is more likely to withstand legal challenge.

DuVernet et al., 2015
-Meta on JA data quality - when rating scales require objective judgements, the data quality is higher. Using multiple methods and training raters did not really increase quality.

Campion et al. 2011
-CM does not inherently lack rigor and can be more manager friendly. Can be used to align all HR systems. CM Best practices: linking CM to org goals, start with top execs, use rigorous JA methods.

14
Q

Cascio & Aguinis, 2018 - Job Analysis

A

This is a chapter on Job Analysis.

  • Job analysis is used to define the job in terms of the behaviors necessary to perform it (job description) and to identify the personal characteristics needed to perform them (job specifications)
  • JA is used for work/org design, selection, performance appraisal, and other uses
  • No single type of job-analysis data can support all HR activities. It is critical to align method with purpose (i.e., interviewing incumbents or having observers; activities or attributes)
  • Define the job in terms of both TASK (becomes job description) and PEOPLE requirements
  • Elements may include job title, job activities, working conditions/physical envir, social envir, conditions of employment (hours/week, wage, benefits)
  • Job specs: KSAOs deemed necessary to perform a job. Bc they exclude people, must set minimally acceptable standards (MQs).
  • Job specs based on task and KSA criteria: perform at entry, barely acceptable, importance of correct performance, difficulty.
  • Reliability: higher IRR from analysts than incumbents;
  • Competency models focus on identifying broader characteristics of individs & using these chars to inform HR practices.
15
Q

Levine et al., 1997

A

This refers to a methodology for developing minimal qualifications (MQs).

  • MQs must be set carefully because they exclude some people based on education, experience, etc.; this is done for legal defensibility purposes.
  • A court approved it and it is consistent with sound professional practice.
  • Collect info and then create draft list of tasks and KSAs, and get separate groups of SMEs to rate each on a set of 4 scales that indicate the degree to which employees must have/be able to perform the task/KSAO upon entry, the importance to the job that it’s done correctly, and its relative difficulty
  • Ratings are aggregated with means or percentages; criteria are set at a majority of ratings above certain scores for each scale
  • SMEs then provide specific input in terms of # years experience, type of experience, etc.
  • Job Analysts then create draft MQ profiles (education, training, work experience to perform the job at satisfactory level).
  • A NEW set of SMEs then reviews the MQ profiles to see if they need editing, rates the final profiles on 2 scales (level and clarity), and establishes a description of the barely acceptable employee
  • Profiles that meet the criteria on clarity/level are linked back to tasks/KSAs, to ensure each profile provides an employee with what is needed to perform at a barely acceptable level. Profiles linked to more than half of either the tasks or the KSAs, or linked to all 5 of the most important tasks or KSAs, are considered content valid.
16
Q

Job Analysis Methods

A
  1. Direct observation (of incumbents) and performing the job (by analyst). Observation best for manual, standardized, short-cycle activities (i.e., barista). Performing is best for jobs analyst can learn readily. Pros: objective. Cons: assumes jobs are static, can be intrusive.
  2. Interviews: most common; should be checked for appropriateness in terms of wording and objectivity. Pros: best for reliability, depth of info, immediately clarifies ambiguities. Cons: depends on interviewer skill, distortion of info due to falsification/misunderstanding, time-consuming, social desirability, mistrust.
  3. SME panels: 6-10 people develop info on job tasks/KSAOs to be used in developing questionnaires and establish links between tasks and KSAOs, test items and KSAOs and tasks and test items. Panels should be representative of work unit and broad level of experiences. Pros: experienced workers provide info, workers able to discuss issues and resolve disagreement. Cons: could be inaccurate (conformity, pressures, etc.)
  4. Questionnaires: Pros: cheap to administer, efficient, standardized, easily quantifiable for stats. Cons: expensive and time-consuming to develop, ambiguities can’t be resolved in real time, no ability to build rapport.
  5. Fleishman Job Analysis Survey - one of the most researched. Describes jobs in terms of the abilities required to perform them, aiming to list the fewest independent categories that describe performance of the widest variety of tasks.
  6. Critical incidents: collection of anecdotes of job behavior (from superv, employees, others) that describe good, bad performance. Each includes what led to performance, what was so effective/not, whether it was under control of employee. Pros: static and dynamic dimensions covered; cons: time consuming to collect and categorize upwards of 100 incidents; difficult to analyze qual data. (Cascio & Aguinis, 2018)
  7. Other sources: ONET, Training manuals, diaries of work tasks
17
Q

DuVernet et al., 2015

A

This is a meta-analysis on job analysis validity and reliability.

  • Work analysis data varies as a function of design choices. When rating scales require objective judgements, the data quality is higher.
  • Using multiple methods did not really increase quality
  • Competency modeling can produce higher-quality data than previously thought (lower IRR but higher discriminability between jobs when analyzing work)
  • Training raters only had minimal effects on quality of data.
18
Q

Campion et al., 2011

A

This is an article on best practices in competency modeling.

  • Competency modeling refers to the collection of KSAOs that are needed for effective performance in the job in question. (The KSAOs are the competencies, and a set of them is a model).
  • CM does not inherently lack rigor. But need to use rigorous JA methods.
  • CMs are more worker-focused and JA is more job task focused.
  • CM is more “manager friendly” in that the KSAOs are usually linked to business objectives & strategies. It’s best to have fewer C’s (8-12) per role; a hierarchy can also help.
  • CMs used to align HR systems: hire, train, appraise, promote, and pay in terms of the same KSAOs. Those KSAOs are linked to high job performance, biz strategies and goals, and future requirements. Alignment helps promote those goals.
  • Best practices include linking the CM to org goals, starting with top execs, using rigorous JA methods to develop competencies, using both cross-job and job-specific competencies, using competency libraries, and using the CM for legal defensibility (test validation).
  • Unique CM techniques that lend rigor: behavioral event interviews (more in-depth than CIs), rating future importance, and rating how well the competency distinguishes high from average performers
19
Q

Austin & Villanova, 1992

A

This article describes the “criterion problem.”

-

20
Q

Campbell & Wiernik, 2015

A

This is a review article on performance appraisal and management. 8 dimensions per Campbell et al., 1993, 1997

  • Performance is conceptualized as in-role and extra-role.
  • The latent structure of individ work performance is multidimensional, and their 8 factors represent a consensus developed over several decades
  • Leadership performance includes: support, initiation, goal emphasis, empowerment, training, and role modeling.
  • Management performance includes: problem-solving, goal setting, coordination, monitoring, external representation, staffing, admin, and compliance. (Campbell, 2012)
  • Measuring performance must be construct-valid and not tainted by outcomes over which performers have no control. (Campbell, 2012)
  • The general factor, “g”, exists in virtually all performance covariance matrices, esp in performance ratings, but is not a single latent variable that can be specified. It must be “formed” as a sum score of diff components for decision making (thru weighting)
21
Q

DeNisi & Murphy, 2017

A

This is a review on 100 years of performance mgmt and appraisal in JAP.
-Performance management: a wide variety of activities, policies, procedures, and interventions designed to help employees improve performance (i.e., they begin with appraisal, but include feedback, goal setting, training, and reward systems).
-Performance appraisal: a formal process, which occurs infrequently, by which employees are evaluated by some judge (typically a supervisor) who assesses the employee’s performance along a given set of dimensions, assigns a score to that assessment, and then usually informs the employee of his or her formal rating.

22
Q

Cascio & Aguinis, 2018 - Performance Mgmt/Appraisal

A

Performance management: a continuous process of identifying, measuring, and developing the performance of individuals and teams and aligning performance with the strategic goals of the organization. PM serves a strategic purpose: it helps link employee activities with the org mission/goals.
Performance appraisal: the systematic description of job-relevant strengths and weaknesses; it is a key component of performance mgmt.
Appraisal involves both observation and judgment. Both processes are prone to bias.
-Objective data such as production data can avoid rater bias but create other problems: they often measure things beyond the employee’s control and capture outcomes of behavior rather than the behavior itself.
-Subjective measures (i.e., supervisory ratings) are prone to other forms of bias.
-Biases can be associated with raters (lack of 1st hand knowledge), ratees (gender, tenure), interaction (race, gender), or situational/organizational factors
-Bias can be reduced sharply through training on both technical and human aspects of the rating process.

23
Q

Sanchez and Levine, 2009

A

This is an article on competency modeling.
Reasons to use competency models:
-They’re a solution to companies needing employees to adapt to changing circumstances. Hire people based on adaptability and alignment with the org mission
-Working and living in a more complex world
-Jobs have been re-defined and more varied than ever
-Results of a JA hold only as long as the job remains the same.
-Job analysis and Competency modeling supplement each other, do not replace each other.

24
Q

Morgeson et al., 2016

A

This is an empirical study on job analysis.

  • Having too much experience is not good for JA ratings - highly experienced incumbents may find the job “too hard to explain”
  • This argues in favor of data quality checks: build in attention-check questions, or break up long surveys (e.g., 30 minutes, then a break, then 30 more minutes).
  • Use this if you want to say something in comps about why you might include something beyond task analysis.
  • Also, set cutoffs for who completes the JA (for example, experience, given their finding that people with more experience provide lower-quality ratings)
25
Q

Pulakos et al., 2000

A

This refers to adaptive performance as performance criteria (study and measurement development).

  • Campbell 1999 had suggested that a 9th performance dimension for their 1993 model of performance could be adaptability.
  • How an employee responds to changes in the work environment (e.g., a change in work procedures, managers, or team)
  • Has 8 dimensions, e.g., handling work stress, solving problems creatively, dealing with unpredictable situations, learning new technology and procedures, etc.
26
Q

Rotundo & Sackett, 2002

A

This is an experimental study on the relative importance of task, OCB and CWB to ratings of overall performance.
-Although all three components influence ratings of overall job performance, raters demonstrate unique implicit rating policies that can be grouped into three distinct clusters. The patterns weren’t based on type of job (nursing vs. accounting) or org, so they are likely based more on raters’ implicit policies:
-task performance weighted highest
-counterproductive behavior weighted highest
-equal and large weights given to task and CWB
They define performance as those actions and behaviors that are under the control of the individual and contribute to the goals of the organization.

27
Q

Harari et al., 2016

A

This is a meta on creative and innovative performance.
This study aimed to clarify the distinction between creative-innovative performance (CIP) and other performance dimensions, due to past research showing some overlap.
-Creative-innovative performance is positively related to task performance and OCB, and negatively related to CWB.
-They argue that CIP has emerged as an important component of performance, just as adaptability did. Innovation and creativity are critical to the success of modern orgs, even in orgs that are not focused on introducing new technologies (Zhou, 2008; Zhou & Shalley, 2011)
-Creativity: generation of new ideas; Innovation: new ideas + implementation
-CIP refers to the proficiency with which employees generate and implement novel ideas in the workplace - both the outcomes and the behaviors are included in the conceptualization; the underlying processes would be predictors of CIP.

28
Q

Meinecke et al., 2017

A

This is an empirical study on manager-employee communication during performance appraisals.
-Employee and supervisor perceptions of performance appraisal meetings/interviews were positively impacted by a communication pattern characterized by relational communication followed by employee active participation.
-These patterns were linked to higher interview success ratings by both supervisors and employees.

29
Q

Levy & Williams, 2004

A

This is a review and framework on the importance of the social context in the performance appraisal process.

  • Takeaway: PA literature is more cognizant of the importance of the social context
  • Distal factors: i.e. org culture/climate; economic conditions, legal climate, HR strategies, Org goals and others.
  • Process proximal variables: have a direct impact on how the appraisal process is conducted (e.g., rater issues - affect, motivation, accountability; ratee issues - how PA affects motivation; LMX - higher LMX leads to higher ratings (mixed results); politics; feedback environment)
  • Structural proximal variables: have direct effects on rater and ratee behavior; directly affected by distal factors (e.g., multi-source feedback systems; performance appraisal purpose - ratings are more lenient when used for administrative purposes than for developmental; rater training)
30
Q

Motro and Ellis, 2017

A

This is an empirical study of feedback responses and gender.

  • Men who cried in response to negative feedback were seen as atypical and were rated lower on performance and leadership capability. The same was not true for women.
  • This is explained by role congruity theory (Eagly)
  • Bias can affect feedback providers and their evaluations of employees.
31
Q

Alliger et al., 1997

A

This is a meta-analysis on training criteria (outcomes that should be evaluated).

  • Expanded Kirkpatrick’s (1994) taxonomy of training criteria.
  • Reactions split into affective reactions and utility judgements
  • Learning split into behavior/skill demonstration (during training), immediate knowledge (right after training), knowledge retention (over longer period of time after)
  • Behavior renamed as transfer, which is a useful clarification
  • Results stayed as “results” (org level results)
32
Q

Huang et al., 2017

A

This is a longitudinal study on training transfer.

  • The degree to which training transfers for an individual is dependent on motivation to transfer.
  • Initial transfer attempts are dependent on post-training self-efficacy
33
Q

Keith and Frese, 2008

A

This is a meta of error management training.

  • EMT leads to better outcomes than non-error management training.
  • EMT leads to better transfer than non-EMT, especially for adaptive transfer (transfer to novel tasks).
  • EMT may be better suited than error-avoidant training methods for promotion of transfer to novel tasks (useful in today’s rapidly changing work environment)
34
Q

Colquitt et al., 2000

A

This is a meta-analysis of training motivation.

  • Training Motivation predictors: individual (internal LoC, achievement motivation, conscientiousness, anxiety, pre-training SE, supervisor and peer support, job involvement, org commitment) and situational (climate)
  • Training Motivation predicted: knowledge, skill acquisition, transfer, reactions, post-training SE
  • Cog ability mediated the effects on learning outcomes, but training motivation explained incremental variance beyond cog ability.
35
Q

Salas et al., 2012

A

This is a review on practical training.

  • Pre-training needs: perceived support, motivation, anticipated opportunity to use skills
  • Needs analysis: job, organizational, and person analyses
  • Good learning climate requires realistic expectations, prep, training framed as an opportunity, and reinforcement
  • Individual differences: SE, goal orientation (learning), motivation
  • Key features of training: information, demonstration, practice, feedback
  • Transfer influences: transfer climate, post-training environment, supervisors, debriefing (refresh), communities of practice, opportunities to use learnings
  • Training evals: precise ABC (affective, behavioral, cognitive) measures that reflect outcomes and tie to the evaluation purpose.
36
Q

Blume et al., 2010

A

This is a meta of training transfer.

  • Open (generalizable) skills are more transferable than closed (single-use) skills
  • Strongest predictors of transfer: cog ability, conscientiousness, supervisor and peer support, post-training SE, utility reactions.
  • Transfer is highest immediately after training and requires maintenance training to avoid skill decay.
37
Q

Grand, 2017

A

This is an empirical study on the effects of stereotype threat on training (learning) effects.

  • Stereotype threat introduces an extra regulatory strain, which taxes working memory that is responsible for coordinating attention and info processing.
  • Learning can be impacted by eliciting irrelevant affective and cognitive stimuli that hijack portions of working memory, leaving fewer cog resources for encoding domain relevant KSAs and procedures during training.
  • Stereotyped women had decreased knowledge acquisition, spent less time reflecting on learned activities, and developed less efficiently organized knowledge structures than non-stereotyped women.
38
Q

Lacerenza et al., 2017

A

This is a meta on leadership training.

  • Leader training is much more effective than previously thought when evaluated against Kirkpatrick’s (1994) criteria.
  • Results effect size (org results) = .72, transfer = .82
  • Moderators show that effective leader training should include a needs analysis, feedback, spaced sessions, an on-site location, and face-to-face delivery, and should not be self-administered
39
Q

Goldstein & Ford, 2002

A

This is a textbook on training in orgs.

  • 4 steps to good training practice:
    1) Training Needs Analysis
    2) Training Design and Delivery
    3) Evaluation (Development of criteria and use of evaluation models)
    4) Assessment of validity (training validity, transfer, intra and inter-org validity)
  • Needs analysis includes: Org analysis, KSA analysis & Person analysis
  • Design: the Solomon 4-group design is the gold standard but has limitations (needs a large sample)
  • 4 types of validity: training validity, transfer, intra and inter-org validity
  • Utility analysis method: minimum $ to meet goals; break even analysis; payoff analysis
  • Kirkpatrick criticisms: atheoretical, overly simplistic (multidimensional constructs such as reactions and learning treated as unidimensional)
40
Q

Berry, 2015

A

This is a review article on differential prediction and validity by race.

  • Validities for cognitive tests are 10-20% lower for Black and Hispanic test takers than White test takers.
  • Test bias: When the test itself is biased against the lower-scoring group and, therefore, is the cause of the score difference and adverse impact. Some aspect of the test causing it to work systematically differently across racial/ethnic subgroups. (“construct irrelevant variance leading to systematically higher or lower scores for groups”). Can manifest in measurement bias and predictive bias.
    • Measurement bias: individuals who are identical on the construct measured by the test but who are from different subgroups have different probabilities of attaining the same observed score. (Factor structure differs by group). This affects whether cog ability tests would predict job performance equally for all groups because it wouldn’t measure ability well for that group, which leads to lower prediction for unbiased performance criterion.
    • Predictive bias: when cognitive ability test scores do not relate to, or predict, job performance equally for White versus non-White subgroups. Examined through differential validity or differential prediction.
      • Differential validity refers to differences between subgroups in the validity coefficients (i.e., correlations between cognitive ability tests and job performance) for a cognitive ability test (when validity is lower for one subgroup, the test is less valid for that group). Differential prediction refers to subgroup differences in the regression of performance on test scores (intercept and/or slope differences), typically examined with moderated regression (see the sketch below).
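
A minimal sketch (mine, not from Berry, 2015) of the standard moderated-regression check for predictive bias, using simulated data: a nonzero group coefficient suggests intercept differences, a nonzero interaction suggests slope differences. All values are made up.

```python
# Illustrative sketch: moderated-regression (Cleary-type) test of differential prediction.
# Performance is regressed on test score, subgroup, and their interaction. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 2, n)              # 0 = majority, 1 = minority (hypothetical coding)
test = rng.normal(0, 1, n)
perf = 0.5 * test + rng.normal(0, 1, n)    # simulated criterion with no true bias

# Design matrix: intercept, test, group, test x group
X = np.column_stack([np.ones(n), test, group, test * group])
b, *_ = np.linalg.lstsq(X, perf, rcond=None)
print(dict(zip(["intercept", "test", "group", "test_x_group"], np.round(b, 3))))
```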
41
Q

Pyburn et al., 2008

A

This is an article on diversity-validity dilemma.

  • The diversity-validity dilemma refers to a situation that requires orgs to choose between workforce diversity and optimal prediction in selection
  • Traditional selection procedures rely on the ID of most relevant KSAOs to perform well on the job. They positively and linearly relate to performance.
  • BUT some of the most valid procedures, such as cognitive ability, produce varying degrees of mean subgroup differences, with racial/ethnic minorities typically scoring lower than majority groups. This creates disproportionate selection rates, known as adverse impact. The dilemma comes from wanting to make the most VALID choice (best candidate), so you treat people the same, but the outcome is disproportionate selection rates.
  • Because the different rates are based on an occupational requirement (e.g., cog ability, physical ability), the dilemma is: do we go with the most valid assessment and accept that we will likely have fewer minority/under-represented people in the org, OR do we reduce reliance on cog ability tests to ensure more equal proportions of people across demographic groups are hired?
  • AI is typically established via the 4/5ths rule (a protected group’s selection rate less than 80% of the highest group’s rate).
  • Orgs can reduce AI by either minimizing subgroup differences (weighting, etc.) or increasing employment opportunities for minorities using preferences (i.e., AA; those can only be used when the plan has a limited duration, the group can be shown to have had historical exclusion, and the majority group’s rights are not trammeled). Within-group norming is ILLEGAL under the 1991 CRA; score adjustments are ILLEGAL under Title VII.
42
Q

Ployhart and Holtz, 2008

A

This is an article on reducing adverse impact for women and minorities through changing selection practices (methods and scoring).
-Effective types of strategies involve 1) using predictors with smaller subgroup differences, and 2) combining predictor scores. They most directly address the problem of subgroup scores diffs.
-Most effective specific strategies involve using alternative predictor measurement methods such as structured interviews and assessment centers; assessing the entire range of KSAOs; banding (but this demands using preferences in final selection, so it’s controversial); and minimizing verbal requirements (but only to extent supported by JA).
-The only strategy that does not involve a validity tradeoff, even if minimal, is assessing full range of KSAOs (adding non-cog predictors to cog). It enhances validity and balances out diffs. Ex: Multi-predictor composites of personality & cog ability substantially lowered AI; structured interviews and cog ability increased criterion validity and decreased AI
Caveats: no silver bullet, e.g., SJTs reduce White-Black diffs, but are not universally favorable in mean differences across groups. Assessment centers: AI differs as a function of the cog loading of exercises. BUT if we can go from a 40% ratio to a 70% ratio, that is much closer to no AI and easier to defend.
Recommends:
-Best alt predictors are interviews, SJTs, assessment centers. Costly but effective bc they reduce reading requirements, higher face validity (better app reactions), & measure multiple KSAOs. i.e., Structured interviews best for reducing diffs for Black-White.
-Weighting predictors or criterion: more weight to predictors with less AI (i.e., structured interviews; or personality, which would be a bigger cost to validity). BUT sometimes weighting doesn’t help or can help some groups to expense of others.
-Pareto-optimally derived predictor weights (DeCorte et al., 2007) have better diversity and validity outcomes than unit weighting
-Banding: controversial among I-Os. Rationale: no one predictor is perfectly reliable, and bands recognize this by treating certain “bands” of scores as indistinguishable (based on the standard error; or a top-10% cut score adjustment). But it requires using preferences within bands, otherwise AI reductions are small or non-existent, so it is “hard to justify” (see the banding sketch below).
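
A minimal sketch of standard-error-of-the-difference (SED) banding, assuming a hypothetical set of scores, an illustrative reliability of .80, and the conventional 1.96 multiplier (none of these values come from the article): scores within one band width of the top score are treated as indistinguishable.

```python
# Illustrative sketch: SED-based banding. Scores, SD, and reliability are hypothetical.
import math

scores = [92, 90, 88, 85, 81, 76]   # hypothetical test scores
sd = 10.0                            # hypothetical test SD
rxx = 0.80                           # hypothetical reliability

sem = sd * math.sqrt(1 - rxx)        # standard error of measurement
sed = math.sqrt(2) * sem             # standard error of the difference between two scores
band_width = 1.96 * sed              # 95% confidence band

top = max(scores)
band = [s for s in scores if s >= top - band_width]
print(f"band width = {band_width:.1f}; scores treated as equivalent: {band}")
```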

43
Q

Kravitz, 2008

A

This is an article on how affirmative action procedures can be used to increase representation of disadvantaged groups.

  • AA started through an Executive Order by President Johnson in 1965 (“equal opportunity”)
  • AA plans can also come from a court order to resolve lawsuits from EEOC. Each plan is specific to org.
  • Orgs have to detect and eliminate practices that make it more difficult for groups to have equal opportunity, and file reports (a “utilization analysis” that compares the % of women and minorities in the workforce by job group to the corresponding proportions in the relevant labor market, i.e., those in the recruitment area who meet minimum requirements). If a group is under-represented, the org must set goals to eliminate the gap and make a good-faith effort.
  • AAPs vary in strength; from weakest to strongest: eliminating discrimination, enhancing opps, tiebreaking, quotas and strong preferences (usually illegal; even treating as a plus is only allowed rarely).
  • AAPs have a positive impact on employment rates and no impact on org performance.
  • Whites are least supportive; Blacks most; others in the middle. Women are more supportive than men.
  • AAPs lead to stigmatization only if the perceiver believes the AAP is based on preferences (which is usually false because of their illegality) and if they have no other information about the person’s performance.
44
Q

Roth et al., 2001

A

Cognitive tests have an average Black-White mean difference of d = .72, but in terms of performance, differences are less than half as large.

45
Q

Roth et al., 2011

A

Cognitive tests and performance are related - r = .52

46
Q

Giumetti et al., 2015

A

This is an empirical study on adverse impact in layoffs when using forced distribution rating systems.
-The 6/5ths rule is the same as 4/5ths rule but for layoffs (120% rather than 80%).
(# of minority employees laid off / # of minorities originally in the org; if this proportion is more than 120% of the equivalent proportion for the majority group, then AI has occurred)
-4/5ths rule looks at whether the proportion of retained minorities is less than 80% of the equivalent majority.
-Larger orgs were at greater risk of AI violations.
-Forced distribution rating systems helped preserve workforce quality in layoff situations but at the expense of org diversity and AI violations.
-Retention rule (4/5) led to fewer AI violations than layoff rule (6/5)
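
A minimal sketch of the 4/5ths (retention) and 6/5ths (layoff) checks described above, using made-up headcounts; the thresholds come from the card, the numbers do not.

```python
# Illustrative sketch: 4/5ths rule for retention and 6/5ths rule for layoffs.
# All counts are hypothetical.

minority = {"before": 50, "laid_off": 12}
majority = {"before": 200, "laid_off": 30}

def rates(group):
    layoff_rate = group["laid_off"] / group["before"]
    return layoff_rate, 1 - layoff_rate

min_layoff, min_retain = rates(minority)
maj_layoff, maj_retain = rates(majority)

# 6/5ths (layoff) rule: minority layoff rate should not exceed 120% of the majority's
layoff_ratio = min_layoff / maj_layoff
# 4/5ths (retention) rule: minority retention rate should be at least 80% of the majority's
retention_ratio = min_retain / maj_retain

print(f"layoff ratio = {layoff_ratio:.2f} (flag if > 1.20)")
print(f"retention ratio = {retention_ratio:.2f} (flag if < 0.80)")
```

With these made-up numbers the layoff (6/5ths) check flags a violation while the retention (4/5ths) check does not, consistent with the finding that the retention rule led to fewer AI violations.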

47
Q

Walton et al., 2015

A

This is an article on ways to reduce stereotype threat in selection for women and minorities.
To reduce salience of one’s group and associated stereotypes and by extension help buffer against stereotype threat:
-Place identity related questions at end of test
-Present tests in ways that assure test takers their performance will not be viewed as evidence for a negative group stereotype
-Conduct item sensitivity analyses and remove problematic items

48
Q

Finch et al., 2009

A

This was a study on adverse impact in multi-stage selection strategies.
Found that adverse impact was lower in multi-stage (multi-hurdle) selection strategies when the first stage used the least biased predictor, i.e., when the first stage did not use cog ability or predictors correlated with cog ability.

49
Q

Song et al., 2017

A

This is a simulation study on Pareto-optimal weighting.

-Pareto optimal weights outperform unit weights and yield greater diversity outcomes.
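
A minimal sketch of the idea behind Pareto-optimal predictor weighting, with hypothetical validities, subgroup d values, and predictor intercorrelation; this is a simplified grid search for illustration, not De Corte's actual algorithm. Each weighting implies a composite validity and a composite subgroup difference, and the non-dominated weightings trace the validity-diversity trade-off.

```python
# Illustrative sketch: tracing a validity-diversity trade-off across predictor weights.
# All input values are hypothetical.
import numpy as np

r_crit = np.array([0.50, 0.25])   # criterion validities: cognitive test, personality test
d_sub = np.array([1.00, 0.10])    # standardized subgroup differences on each predictor
r12 = 0.20                        # predictor intercorrelation

def composite(w):
    """Composite validity and subgroup d for weights w (standardized predictors assumed)."""
    var = w[0]**2 + w[1]**2 + 2 * w[0] * w[1] * r12
    return (w @ r_crit) / np.sqrt(var), (w @ d_sub) / np.sqrt(var)

points = []
for w1 in np.linspace(0, 1, 21):
    w = np.array([w1, 1 - w1])
    if w.sum() > 0:
        points.append((w1, *composite(w)))

# Keep weightings not dominated by another with both higher validity and lower subgroup d
pareto = [p for p in points
          if not any(q[1] >= p[1] and q[2] <= p[2] and q != p for q in points)]
for w1, val, d in pareto:
    print(f"w_cog={w1:.2f}  validity={val:.3f}  composite d={d:.3f}")
```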

50
Q

Dahlke and Sackett, 2017

A

This is a meta on cognitive saturation and race-based test score differences.

  • Cognitive saturation is the extent to which a predictor overlaps with g (cognitive ability). It helps explain Black-White group differences on selection tools.
  • Cognitive saturation correlated highly with Black-White (r = .84) and Hispanic-White (r = .95) difference scores on performance.
  • A company can use their procedure to forecast mean differences on a new predictor based on its cognitive saturation and other attributes
51
Q

Roth et al., 2017

A

This is a meta on racial differences in outcomes on selection procedures, comparing White, Asian, Hispanics.

  • Generally, Whites performed better than Hispanics and similarly to or worse than Asians.
  • Asians and Hispanics are likely to perform worse than Whites when verbal ability, work experience, job knowledge, work samples, and biodata are scored.
  • Asians and Hispanics are likely to perform similar to Whites when structured interviews and physical ability tests are scored.
52
Q

Cottrell et al., 2015

A

This is an article on cultural factors that help explain the Black-White gap on cognitive tests.

  • Systemic racism has created maternal advantage gaps. Disadvantage leads to differential parenting factors (e.g., lack of resources, education, and verbal ability limits Black parents’ capacity, skill, and knowledge to provide a learning-oriented, safe, sensitive, and accepting parenting environment), which leads to the gap.
  • Significant Black–White cognitive test score gaps throughout early development that did not grow significantly over time (i.e., significant intercept differences, but not slope differences). So the gap already existed at age 4 and remains stable in magnitude afterward.
  • Racially disparate conditions listed above can (together as covariates) account for the relation between race and cognitive test scores.
53
Q

Schmidt and Hunter, 1998

A

This is a meta-analysis on the validity of predictors of job performance. The goal was to determine which among 19 selection procedures would be the best supplementary procedures to GMA (cog ability).

  • Prior to this meta, it was thought that because validities for cog ability varied across studies, there were unknown org factors at play, so each org needed to do a validation study for each selection program. This meta showed that the differences are actually due to methodological artifacts that can be statistically corrected.
  • Recruiting and selecting high-quality employees is critical to an organization’s competitive advantage. To select the most competent, they utilize selection tools that best predict future performance on the job.
  • Cognitive ability is the most valid predictor of job performance (r=.51) and training performance (.56).
  • Integrity tests (.65) and structured interviews (.63) provide the greatest incremental validity over cognitive ability in predicting performance AND training performance.
  • Integrity tests measures mostly conscientiousness (and predicts lower CWBs).
  • Work samples add high incremental validity, but can only be used for existing employees because they require already knowing the job. Exceptions are jobs that require specific schooling, like cosmetology or carpentry.
  • The smaller the correlation between other measures and GMA, the larger the increase in overall validity (see the worked example below).
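
A worked illustration of why low predictor-GMA correlation boosts incremental validity, using the standard two-predictor multiple correlation formula. The GMA validity of .51 comes from the card; the supplement validity and the intercorrelations are hypothetical.

```python
# Illustrative sketch: multiple R for GMA plus a supplement, varying their intercorrelation.
# r_gma is from the card; r_supp and the intercorrelations are hypothetical.
import math

r_gma = 0.51    # GMA-performance validity (from the card)
r_supp = 0.40   # supplement-performance validity (hypothetical)

for r12 in (0.0, 0.3, 0.6):   # GMA-supplement correlation (hypothetical)
    R2 = (r_gma**2 + r_supp**2 - 2 * r_gma * r_supp * r12) / (1 - r12**2)
    print(f"r12 = {r12:.1f} -> multiple R = {math.sqrt(R2):.3f} "
          f"(gain over GMA alone = {math.sqrt(R2) - r_gma:+.3f})")
```

The gain over GMA alone shrinks as the supplement overlaps more with GMA, which is the point the bullet above makes.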
54
Q

McDaniel et al (2011); Outtz et al. (2011)

A

Uniform Guidelines Debate
Uniform Guidelines on Employee Selection Procedures (EEOC, 1978): U.S. federal guidelines “designed to assist employers […] to comply with requirements of law prohibiting employment practices which discriminate on grounds of race, color, religion, sex, and national origin.[…] framework for determining the proper use of tests and other selection procedures.”
-McDaniel et al. (2011) argued that the Uniform Guidelines are inconsistent with professional practice for reasons of practicality (they call for things that most employers can’t accomplish due to sample size issues), e.g., the requirement for local validation studies (impractical), the preference against relying on content validity (also impractical because employers don’t always have the numbers to do criterion-related validation), the lack of discussion of differential validity/prediction, and silence on the diversity-validity dilemma. They also argue the 4/5ths rule is arbitrary.
-Outtz et al. (2011) and others argued back, saying that revising the Guidelines would be a huge effort and there is no readily available replacement. Also, they are not a scientific document and were never really meant to be one.

55
Q

Reliability - definition and types

A

Reliability is the degree of consistency or agreement between two sets of scores that were collected independently. Measured by a correlation. Estimates precision of a measurement tool.
-Test-retest: same form of test to same group on 2 different occasions a few months apart ideally. Most simple and direct.
-Parallel/Alternate Forms: each form has the same number and difficulty of items and similar means. Correlate them to see how equivalent they are, which is the reliability. Administer ideally at the same time or within a few days of each other. Content sampling error is inevitable, which lowers the correlation, so it’s a conservative estimate.
-Internal consistency: a technique that’s not about forms or time (as above), but about the degree to which items on the test correlate with each other. Cronbach’s alpha is the usual index (see the sketch below).
Gatewood et al., 2019; C&A, 2018
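
A minimal sketch of computing Cronbach's alpha from an item-score matrix; the data are made up and are only there to show the formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
# Illustrative sketch: Cronbach's alpha from an item-score matrix
# (rows = people, columns = items). Data are hypothetical.
import numpy as np

items = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [2, 2, 1, 2],
    [5, 4, 5, 5],
    [3, 4, 3, 3],
], dtype=float)

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)          # variance of each item
total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```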

56
Q

Interrater reliability - define and methods

A
IRR: degree to which ratings are free from unsystematic error variance arising from ratee/rater. Hard to measure in real life bc need strict design. 3 methods:
-interrater agreement: agreement between raters on their ratings of some dimension. (use % agreement or kappa)
-interclass correlation: when 2 raters are rating multiple objects or indivs (uses r)
-intraclass correlation: how much of the diffs among raters is due to diffs in indivs vs measurement error.
C&A, 2018
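
A minimal sketch of the interrater agreement indices mentioned above (percent agreement and Cohen's kappa) for two raters categorizing the same ratees; the ratings are made up.

```python
# Illustrative sketch: percent agreement and Cohen's kappa for two raters.
# Ratings are hypothetical.
from collections import Counter

rater_a = ["high", "high", "low", "med", "low", "high", "med", "low"]
rater_b = ["high", "med",  "low", "med", "low", "high", "low", "low"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: product of each rater's marginal proportions, summed over categories
pa, pb = Counter(rater_a), Counter(rater_b)
expected = sum((pa[c] / n) * (pb[c] / n) for c in set(rater_a) | set(rater_b))

kappa = (observed - expected) / (1 - expected)
print(f"% agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```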
57
Q

Validity of Individual Difference Measures - importance, types of evidence

A

Validity: what a test measures (the underlying trait/construct) and how well it measures it (how much it relates to some external criterion measure like performance); i.e., the degree to which a measure measures what it is designed to measure. Validation is the process of gathering validity evidence across multiple studies. Need a statistically significant validity coefficient, ideally .30+.

  • Importance: must provide validity evidence if procedure has adverse impact on a protected group; also just fundamental for useful and competent HR practice.
  • Diff kinds of evidence: content, criterion (predictive and concurrent), and construct
    • Content: degree to which a test samples the content domain of the job associated with successful performance. Useful in small biz/situations with small numbers where stats are not poss; also can help with app reactions / face validity. Use job analysis and SMEs. Assess overlap b/w content of selection measure & job (as rated by SMEs).
    • Criterion: is the test predicting performance (or another criterion)? Predictive = collect applicant data and hold it for later; select candidates without using the results of the measurement procedure. The weakness is the time delay, but if we use it in selection before validating first, it will have range restriction. Concurrent = test current employees and correlate with performance. Range restriction can happen; correct statistically (see the correction sketch after this list); useful for cog ability tests (equivalent to predictive). Criterion evidence is considered best but not always feasible - low sample sizes won’t work.
    • Construct: demonstrating and testing relationships between measures and constructs (convergent and discriminant evidence). Use many different sources/studies; MTMM matrix.
  • Synthetic validity: breaking jobs down into components and pooling the components for validity analyses in order to get larger sample sizes. Validity is established for components rather than whole jobs.
  • Test transportability - using test that’s been validated elsewhere. i.e., using test publisher
  • Generalization: using meta-analyses to demonstrate validity. called VG studies. Good for small biz. But should have other forms of evidence too.
  • Small biz: use content validity, validity generalization, or synthetic validity
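
A minimal sketch of the standard correction for direct range restriction mentioned under criterion validity (Thorndike Case 2); the observed correlation and SD ratio are hypothetical.

```python
# Illustrative sketch: correcting a validity coefficient for direct range restriction
# (Thorndike Case 2). The observed r and SD ratio are hypothetical.
import math

r_restricted = 0.25   # validity observed in the selected (range-restricted) sample
u = 1.5               # ratio of unrestricted SD to restricted SD on the predictor

r_corrected = (r_restricted * u) / math.sqrt(1 + r_restricted**2 * (u**2 - 1))
print(f"corrected r = {r_corrected:.2f}")
```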
58
Q

Cascio & Aguinis, 2018 recruiting

A

This is a textbook chapter on recruiting.

  • Targeting a diverse AND qualified applicant pool will reduce adverse impact in subsequent hiring decisions. The pool must also be qualified; otherwise, the effort is undermined.
  • Yield Ratios: ratios of leads to invites, invites to interviews, interviews to offers, offers to hires over some specific period
  • Time-lapse data: the average interval between events i.e. b/w offer extension and acceptance or acceptance and payroll start.
  • Use yield and time-lapse data to estimate recruiting staff and time requirements (see the sketch after this list).
  • Use existing data to figure out yield ratios and time lapse data, but if that doesn’t exist, use “best guesses” as hypotheses and then monitor for future. These things vary by role and company - they’re not universal
  • Source analysis: cost per hire, time lapse, and source yield from various sources
  • Delays in timing are perceived very negatively by applicants, esp high quality ones, and can cost acceptances.
  • A positive org image/reputation influences intent to apply, because it enhances applicants’ self-esteem to bask in its glory, or it may signal the org can provide good pay and benefits.
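
A minimal sketch of using yield ratios to plan recruiting effort, working backward from a hiring target; the pipeline stages follow the bullet above, but every ratio and count is made up.

```python
# Illustrative sketch: working backward through hypothetical yield ratios to estimate
# how many leads are needed for a hiring target. All ratios are made up.

yield_ratios = {           # stage -> proportion advancing to the next stage
    "lead -> invite": 0.50,
    "invite -> interview": 0.40,
    "interview -> offer": 0.25,
    "offer -> hire": 0.70,
}

hires_needed = 10
needed = hires_needed
for stage, ratio in reversed(list(yield_ratios.items())):
    needed = needed / ratio
    print(f"{stage}: need ~{needed:.0f} at the earlier stage")
```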
59
Q

Breaugh, 2013

A

This is a review article on employee recruitment.

  • Relevant theories: Hovland et al.’s (1953) model of persuasion, Schneider’s (1987) ASA framework
  • Recruits’ initial impressions are hard to change due to confirmation bias and information processing bias.
  • Employee referrals yield better applicants in terms of KSAOs, work experience, job performance, selection measure scores, and offer receipt and acceptance.
  • Providing fit information can result in a smaller, more qualified applicant pool. More specific info yields more self-screening, higher levels of interest and attention, and better P-O fit.
  • When employee expectations are lowered or met, turnover is lower and job satisfaction is higher. RJPs can promote accurate expectations (but should be balanced in pos/neg)
  • Delays in the recruitment process result in less attraction and lower likelihood of acceptance, especially for high quality applicants.
  • An employer’s reputation regarding diversity is more important than its recruitment messaging.
60
Q

Allen et al., 2007

A

This is an empirical article on applicant reactions to job information shared on organizations’ websites.
-Employer Web sites that provided more information about a job opening were viewed more positively and resulted in greater likelihood of applying for a job.
-It may be due to perceptions that a lack of info creates a state of uncertainty for individuals, which they would prefer to avoid in making a job choice decision.
-Mediated by attitudes and attraction toward the org (info -> attitudes -> attraction -> intentions to apply).

61
Q

Avery, 2003

A

This was an empirical study on diversity messaging (ads showing Black employees) in recruiting messages (on websites).

  • No effect on Whites for org attractiveness.
  • Diverse ads had an effect for Black viewers, but only when the diversity extended to supervisory-level positions.
62
Q

Schmitt, 2014

A

This is a review article on personality and cognitive ability as predictors of performance.

  • Facets of FFM are better than FFM as a whole. (Judge et al., 2013)
  • Contextualizing personality increases validity (Shaffer & Postlethwaite, 2012)
  • Personality and cog ability tests are typically evaluated favorably by applicants, though work samples and interviews are MOST favored (face validity)
  • Computer and written versions of tests: equivalent for cog ability (unless speed is something that is being assessed), biodata, and personality tests. The most common approach is to use unproctored versions online for screening purposes then give proctored version to smaller group later.
63
Q

Judge et al., 2013

A

This is a meta on personality and work performance

  • Lower-order (subfacet level - 6 for each of the FFM) traits can be better predictors of work performance
  • Moving from broader to narrower traits produced significant gains in predicting overall job performance, task performance, and contextual performance
  • Example: 2 extraversion facets are linked to two diff facets of performance: assertiveness to task performance and enthusiasm to contextual performance
  • Implication for practice: Use faceted approach given the gains in prediction.
64
Q

Gonzalez-Mule et al, 2014

A

This is a meta on predictors of task vs contextual performance.

  • Cog ability is a better predictor of task performance than of contextual performance overall (cog ability can’t predict CWBs).
  • Cog ability better predicts task performance than FFM personality does
  • Personality (FFM) is better for CWB than cog ability (cog ability does not predict CWB at all)
  • Cog ability CAN predict OCB; FFM equally as good
  • Manager-rated CWBs were lower for those with high GMA, but self-reported CWBs were no different for low and high GMA. So this means managers should monitor their smart, high performing employees as they would other employees because it’d be an error to think smart people engage less in CWBs.
  • Reaffirms using non-cog predictors (which also helps reduce AI) - bc it helps predict OCBs also.
65
Q

Shaffer & Postlethwaite, 2012

A

This is a meta on validity of personality measures in selection.

  • Contextualizing measures and providing frame of reference gives respondents a referent (i.e., “at work”)
    • Person situation interaction theory: People’s behavior is an interaction of person and situation (explains why)
  • Contextualized measures are more valid than non-contextualized measures and therefore more likely to predict performance
  • Workplace specific scales had higher validity than general purpose
  • These effects found for all personality traits, smallest for conscientiousness.
  • Contextualize both instructions and scale items
66
Q

Morgeson et al., 2007

A

This refers to a debate on personality testing in selection (IOPs).

  • Concerns about low criterion validity and faking
  • Faking should be expected, it prob can’t be avoided. Some people view faking as a bigger issue than others. Despite faking, they have validity
  • Faking or ability to fake might not always be bad. In fact it might be socially adaptive or even desired for some jobs
  • Remember that personality has low validity for performance; many self-report tests should not be used in selection. Combine with cog predictors because personality provides incremental validity (it doesn’t overlap).
  • Good for predicting CWBs, whereas cog doesn’t
67
Q

Hough et al., 2015

A

This is a review article on alternatives to the five factor model.

  • They argue the FFM is method-bound, not comprehensive, abysmally valid, and not modifiable.
  • Alternatives include HEXACO (similar to OCEAN but has honesty), circumplex models, and nomological-web clustering.
68
Q

Shoss et al., 2018

A

This is an empirical article on resilience as a buffer of the effects of job insecurity on strain outcomes (exhaustion, cynicism, psych contract breach, CWBs).

  • The relationships between job insecurity and negative outcomes were attenuated when the employee was highly resilient
  • Research suggests that resilience is a process of coping, rather than just a trait.
  • Recent research suggests that resilience can be developed and promoted within employees and that resulting gains in resilience can persist over time (see Robertson, Cooper, Sarkar, & Curran, 2015)
69
Q

Van Iddekinge et al., 2011

A

This is an empirical article on vocational interests
-Vocational interests related positively to job knowledge, performance, and intentions to continue.
-Vocational interests provided incremental validity beyond GMA and FFM
-Small to medium subgroup differences in favor of under represented groups when examining vocational interests.
This means Vocational interests may be useful to reduce adverse impact.

70
Q

Levashina et al., 2014

A

This is a review article on structured interviews.

  • Campion et al., 1997 is the most comprehensive taxonomy of interview structure: job analysis, same questions, better types of questions (behavior), anchored rating scales, rating each question, using multiple interviewers, and training interviewers.
  • Past behavior questions slightly more valid than situational questions, but both are.
  • Behavior questions measure experience and personality; situation questions measure job knowledge & cog ability
  • Demographic and impression mgmt (IM) bias exist in interviews but are lower with more structure
  • Other-focused IM more likely in situational questions (try to match interviewer); self-promotion IM more likely in past behavior
  • Applicants consistently prefer interviews over other selection methods.
  • Structured interviews are best from validity and legal perspectives.
71
Q

Swider et al., 2016

A

This is an empirical study on rapport-building in interviews.

  • Initial impressions of the applicant during rapport-building were positively correlated with interview scores.
  • Initial impressions led to higher scores on initial questions and lower scores on subsequent questions.
  • Tips: use a structured rapport building script; keep rapport building focused on superficial topics; score rapport building as a skill itself, if related to job perf; keep an open mind throughout interview
72
Q

The integrity test meta-analytic debate.

A

Debate about the use of integrity tests.

  • Van Iddekinge et al. (2012) aimed to improve upon Ones et al.’s (1993) meta-analysis of integrity test validities, and found drastically smaller validity coefficients. Overt integrity tests were a better predictor of CWB than personality-based tests; personality-based tests yielded somewhat larger validity estimates for task performance.
  • Ones et al. (2012) attributed Van Iddekinge et al’s (2012) results to a) including only a partial database of integrity test validities, b) miscoding data and including misleading data, c) committing analytic errors and failing to identify moderators
  • Sackett and Schmidt (2012) - the peacemakers. Said both sides have problems and outlined how to move forward: the lack of detailed effect-size information in both meta-analyses makes it impossible to consolidate them and figure out why the results are so different. “We should report more rigorous detail going forward and share all the info we can.”
73
Q

Van Iddekinge et al., 2016

A

This is an empirical article on AI and social media assessment.

  • Recruiters who looked at Facebook pages favored female and White applicants.
  • Recruiter ratings of Facebook pages were unrelated to job performance, turnover intentions, and turnover.
74
Q

Bernerth et al., 2012

A

This is an empirical article on credit scores in selection.

  • Credit score is a biodata-type measure of financial responsibility thought to reflect personality traits
  • Related to some personality chars (conscientiousness (+); agreeableness (-))
  • Predictive of task performance and OCB, but NOT CWBs - which calls into question claims that employees with poor credit will engage in harmful acts
  • Credit scores may have adverse impact
  • Applicants are likely to be upset, viewing credit checks as unjust and invasive
  • Credit scores also fluctuate often and due to factors outside the person’s control
75
Q

Weekley et al., 2015

A

This is a review article on low-fidelity simulations (i.e., SJTs, inbox)

  • Low-fidelity assessment simulations are typically composed of a stem presenting a challenging work scenario & options that vary in effectiveness. Closed-ended response options, i.e., text and multi-media SJTs.
  • SJTs usually exhibit poor reliability due to being multi-dimensional, but have incremental validity over other selection methods.
  • Multi-media SJTs are typically animated. These are costly but more easily adapted for global use. Work has found equivalence between animation and live action.
  • Cut scenes and branching have been incorporated into mSJT development.
  • Challenges with low-fi SJTs: lack of complexity in elicited behavior, social desirability (consistent with traditional SJTs), and lack of social interaction
  • Low-fi in-baskets often adapted to online delivery and equivalent to traditional AC exercises.
76
Q

Gatewood et al., 2019 - Simulation tests

A

Simulation tests: Selection test with content that replicates job activities
Fidelity: the degree to which the simulation replicates the demands of the job for which it is designed
Three main types: Work samples (high-fidelity), Assessment centers (high-fi), Situational judgment tests (low-fi).
Types of simulations differ in level of fidelity (how realistic they are). The appeal of less realistic simulations is that they’re less costly.
Work samples: Pros: higher validity and higher face validity; can also help reduce the applicant pool by showing applicants the job. Cons: hard to make relate to the job; doesn’t really lower adverse impact
Assessment center: work samples for execs/managers. Measure oral comm, planning/org, delegation, decisiveness, and stress tolerance as dimensions (WRCs). Interviews, simulation tests (in-basket tests, leaderless discussions). Each dimension measured by 2+ activities & each activity measures 2+ dimensions. 3 levels of ratings (dimension, activity, overall) - all related to perf. Pro: more valid than cog ability when looking at the broad performance domain (appropriate for leaders) - Sackett et al., 2017! Cons: super expensive; doesn’t lower AI.
SJT: low-fi b/c verbal simulations - not really replications of actual work situations. MC test of how they’d approach the situation. Edit critical incidents to get a mix that covers all important aspects; reword them to be stems. Pros: high face validity; valid (.34); as valid as ACs and cheaper. Note: should NOT be considered a way to reduce adverse impact.

77
Q

Roth et al., 2013

A

This is a review article on social media in selection decisions.

  • Although SM assessments may come with several benefits (i.e., being free, already in existence, and based on publicly available information), they may also come with disadvantages for organizations, their staffing professionals, and candidates themselves - they may even be a liability.
  • Most popular uses: gather info about personality/values to see if they align with job requirements or culture fit; and as informal “reference check” to see if they present themselves professionally. (“cybervetting”)
  • SM assessments will lead to adverse impact through the digital divide and increased availability of demographic information.
  • Neg info will be weighed more heavily than positive information in SM assessments. Missingness will have a negative influence on assessments.
  • SM assessments will be more valid with less information, greater structure and mathematical combination, and professional social media websites (LinkedIn)
78
Q

Van Iddekinge et al. 2016

A

This is an empirical article on adverse impact and social media.

  • Recruiters who looked at Facebook pages favored female and White applicants.
  • Recruiter ratings of Facebook pages were unrelated to job performance, turnover intentions, and turnover.
79
Q

Roulin & Levashina, 2019

A

This is an empirical study on the use of LinkedIn in selection.

  • LinkedIn is a professional social media site created to facilitate job searches and career development.
  • FB is not reliable for skills or job suitability, but could be for personality; LinkedIn has higher reliability for KSAOs because its info is job related
  • As such, it should provide more job-related information to employers about applicants than other social media sites such as Facebook
  • Disadvantages of SM: reliability and validity concerns, bias/adverse impact, and negative applicant reactions.
  • Validity of LinkedIn: Found it has the potential to predict a variety of relevant job-related outcomes
  • Must establish construct validity using SMEs beforehand and establish guidelines (e.g., mapping Instagram photos that hair stylists post of their work onto specific, related KSAOs).
80
Q

Koch et al., 2015

A

This is a meta on gender bias in selection.

  • Men were preferred for male-dominated jobs (i.e., gender-role congruity bias), whereas no strong preference for either gender was found for female-dominated or integrated jobs
  • Male raters exhibited greater gender-role congruity bias than did female raters
  • Gender-role congruity bias did not decrease when decision makers were provided with additional info about those they were rating, but WAS reduced when info clearly indicated high competence of those being evaluated
  • Decision makers who were motivated to make careful decisions tended to exhibit less gender-role congruity bias for male-dominated jobs
  • Bias declined with work experience
81
Q

Campion et al., 2019

A

This is an article on practice tests in selection.
-Those who took the practice tests scored higher on the actual tests;
-Score gains were greater for Blacks and Hispanics when compared to Whites;
-Those with higher scores tended to apply (self-selection)
-Score gains were similar to scores observed for those retesting on the actual tests.
-Practice tests may thus both enhance organizational outcomes (e.g., increased quality of applicants, reduced cost of testing unqualified applicants, and reduced adverse impact) and applicant outcomes (e.g., increased human capital, increased chances of eventual employment, and reduced disappointment and wasted effort from unsuccessful application).

82
Q

Veale & Binns, 2017

A
Artificial intelligence (AI) and machine learning (ML) algorithms organize, analyze, and use job information, applications, and other relevant, data-rich sources to help managers and recruiters make efficient and effective decisions aimed at acquiring the most promising employees.
-Used for: 1) resume screening, by automating scanning and scoring through identifying key words (training, education); 2) finding patterns in unstructured data (natural language processing); 3) automatic scoring of essays
-ML is an example of mechanical scoring (see the keyword-scoring sketch at the end of this card)
-Can be used for just the earlier steps, like selecting out the bottom third, or to transform inputs to make them better for modeling
-Pros: standardization, quick results, cost effective; can allow for quick, automatic rejection notices for people eliminated at that stage; may reduce other factors that influence human perception (judgments based on appearance or missing info).
-Limitations: can't control the level of detail in inputs (from applicants); some methods are better suited to ML than others (i.e., achievement records); makes inferences based on previous human ratings, which contained bias; difficult to design code that does not have innate biases. The model will learn the undesired discrimination exhibited in society and encode the same patterns, which increases the possibility of reproducing historical disparities (Veale & Binns, 2017)
-3 strategies to make ML fairer:
1) Use trusted third parties to selectively store the data necessary for performing discrimination discovery, and incorporate fairness constraints into model-building in a privacy-preserving manner.
2) Collaborative online platforms would allow diverse organisations to record, share, and access contextual and experiential knowledge to promote fairness in machine learning systems.
3) Unsupervised learning and pedagogically interpretable algorithms might allow fairness hypotheses to be built for further selective testing and exploration.
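
To make the mechanical-scoring idea above concrete, here is a minimal Python sketch of keyword-based resume screening. The keyword list, weights, and cutoff are invented for illustration only (they do not come from Veale & Binns or any other cited article); a real system would typically learn its weights from past data, which is exactly where historical bias can creep in.

```python
# Hypothetical sketch of mechanical (algorithmic) resume screening.
# Keywords, weights, and the cutoff are invented for illustration only.
import re

KEYWORD_WEIGHTS = {"training": 1.0, "education": 1.0, "leadership": 0.5}

def score_resume(text: str) -> float:
    """Apply the same weighted keyword count to every applicant."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return sum(KEYWORD_WEIGHTS.get(tok, 0.0) for tok in tokens)

def screen(resumes: dict[str, str], cutoff: float = 1.5) -> list[str]:
    """Keep applicant IDs whose mechanical score meets the cutoff
    (e.g., screening out the bottom of the pool before human review)."""
    return [app_id for app_id, text in resumes.items()
            if score_resume(text) >= cutoff]

if __name__ == "__main__":
    pool = {
        "A1": "Completed leadership training and continuing education courses.",
        "A2": "Worked two summers as a lifeguard.",
    }
    print(screen(pool))  # ['A1'] under these invented weights
```

If the weights were instead estimated from past human hiring ratings, this same mechanical formula would faithfully reproduce whatever bias those ratings contained - the concern raised in the limitations bullet above.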
83
Q

Kuncel et al., 2013

A

This is a meta on the validity of data combination methods.

  • Mechanical approaches involve applying an algorithm or formula to each applicant’s scores, e.g., aggregating scores using unit weights, estimating optimal weights, or using complex decision trees (see the sketch after this list).
  • Holistic methods are more common (aka clinical, expert judgment, intuitive, subjective) and include individual judgments of data or group consensus meetings. Data are combined using judgment, insight, or intuition, rather than an algorithm or formula applied the same way for each decision.
  • Mechanical data combination out-performed holistic data combination when predicting academic and work performance.
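
As an illustration of the simplest mechanical approach named above, here is a minimal Python sketch of unit-weight combination: each predictor is standardized across the applicant pool and then summed with equal (unit) weights, so the identical formula is applied to every applicant. The predictor names and scores are hypothetical.

```python
# Hypothetical sketch of mechanical combination via unit weights:
# z-score each predictor across the pool, then sum with equal weights.
from statistics import mean, stdev

def unit_weight_composite(applicants: dict[str, dict[str, float]]) -> dict[str, float]:
    """Return one composite score per applicant; assumes complete data."""
    predictors = sorted({p for scores in applicants.values() for p in scores})
    composites = {a: 0.0 for a in applicants}
    for p in predictors:
        vals = [applicants[a][p] for a in applicants]
        m, s = mean(vals), stdev(vals)
        for a in applicants:
            composites[a] += (applicants[a][p] - m) / s  # unit weight of 1
    return composites

if __name__ == "__main__":
    pool = {  # invented predictor scores for three applicants
        "A1": {"gma": 105, "interview": 4.0, "conscientiousness": 3.5},
        "A2": {"gma": 118, "interview": 3.2, "conscientiousness": 4.1},
        "A3": {"gma": 96,  "interview": 4.5, "conscientiousness": 3.0},
    }
    comps = unit_weight_composite(pool)
    print(sorted(comps, key=comps.get, reverse=True))  # rank applicants
```

A holistic decision maker would instead eyeball the same three profiles and form an overall judgment; the meta-analysis found that approach to underperform mechanical combination.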
84
Q

Tippins et al., 2015

A

This is a review article on computer-based testing

  • CBT advantages: enhanced security, dynamic reporting, interactive responding, tracking response times and changes. Mobile is easily transportable but concerns about error due to testing conditions, Internet connection, and UI. Video can create more realistic simulations, easier interviews, response recording.
  • Simulations allow measurement to be integrated into the selection task. Gamification can increase engagement and applicant reactions.
  • Although the opportunity for cheating is high, rates of cheating are low.
  • Research has generally found Web-based and paper-and-pencil testing for cog and non-cog abilities to be equivalent.
85
Q

Gilliland, 1993

A

This refers to 10 justice rules and a model for justice in selection.

  • Selection systems are viewed favorably by applicants (considered fair) to the extent they comply with, rather than violate, justice rules (procedural and distributive). The rules represent how employees expect to be treated and how selection procedures should be used.
  • The rules indirectly relate to applicant intentions and behavior through FAIRNESS perceptions.
  • The most important rule is job relatedness (the extent to which the content of a test reflects the content of the job). It influences fairness perceptions and test performance.
  • Other rules include opportunity to perform, reconsideration opportunity, feedback, selection info, honesty, interpersonal effectiveness of the administrator, two-way communication, and propriety of questions (vs. offensive ones)
  • Rules have implications for reactions such as intentions to accept the job and for post-hire behaviors like performance.
  • Led to the development of the selection procedural justice scale and a large body of literature on applicant reactions.
86
Q

McCarthy et al., 2017

A

This is a review on applicant reactions.
-How job candidates perceive and respond to selection tools (i.e., perceptions of fairness and justice, feelings of anxiety, levels of motivation, etc.) has important consequences for performance on selection procedures, self-perceptions, and other attitudes and behavior.
-Procedural and distributive justice each influence employer attractiveness, applicants’ intentions of accepting a job offer, and whether job candidates would recommend the employer to others.
-Gilliland 1993’s 10 rules of org justice fueled this research
-Smaller, but not trivial, effect of test reactions on job performance
-Positive reactions to web-based recruitment, job previews, and feedback
-Face to face interactions better than video-conferencing
-Adding pretest info and increasing choice over aspects of the testing process can positively influence reactions.
-Indiv differences (FFM, affect, CSE) pre-dispose people to have certain reactions

87
Q

Principles for Selection - job analysis

A

SIOP
A less detailed analysis may be appropriate when prior research about the job requirements allows the generation of sound hypotheses concerning the predictors or criteria across job families or organizations. When a detailed analysis of work is not required, the researcher should compile reasonable evidence establishing that the job(s) in question are similar in terms of work behavior and/or required knowledge, skills, abilities, and/or other characteristics, or falls into a group of jobs for which validity can be generalized. (p. 11)

88
Q

Pros and Cons of Selection Tests

A

Use grant slides and the cost chart for this
Assessment Centers: The average assessment center includes seven exercises or assessments and lasts 2 days. Leaderless group discussions, business games, ability and personality tests, in-depth structured interviews. For managers.
Biodata: Can predict absenteeism and job tenure; correlates highly with GMA (Schmidt & Hunter, 1998)

89
Q

Hysong et al., 2020

A

This is a handbook chapter on epidemics’ impact on orgs and I-O’s role.

  • Epidemic: sudden increase in confirmed cases of a contagious disease in a large geographic area; pandemic: epidemic spread over several countries or continents
  • Orgs are exposed to risk/impact of them & can play integral role in controlling spread
  • Organizations must make critical decisions to protect their customers and employees against the spread of disease and to ensure a healthy workplace
  • Vast majority of companies have no plan for pandemics.
  • Preventive measures: 1) require vaccinations, if available, for some industries (controversial because it pits safety against autonomy); 2) in less high-risk industries, fold prevention into benefits campaigns - free vaccinations, flexible leave and telecommuting policies (cheaper than lost productivity); 3) provide PPE and hand sanitizer and encourage a culture of hygiene
  • Acute measures (when an epidemic hits): emergency planning, checklists for employees and organizations, and points of contact for reporting cases; hospitals stockpile meds
  • I-Os can play roles in placement, updating training, leadership coaching in handling emergencies, burnout mgmt, safety procedures and culture, expats
90
Q

Org support cite from Limeade white paper

A

Eisenberger et al., 2019 - org support rx is lower/steady but would improve if employees perceived more support
Rhoades & Eisenberger, 2002 - POS antecedents and outcomes