Selection Flashcards

1
Q

Aiden & Hanges (2017)

A

Data from selection assessments can be judgmental (e.g., interviews) or mechanical (e.g., written tests), or both combined.

2
Q

Cascio & Aguinis (2019)

A

It is important to weigh the value gained from selection against its costs (e.g., work time lost to administering selection, the risk of relying on a poor assessment such as an informal interview).

3
Q

Martin et al. (2019)

A

Avoiding the race-based subgroup differences of cognitive ability tests by using working memory (WM) measures instead.

The authors describe the development of Global Adaptive Memory
Evaluation (G.A.M.E.) – a working memory assessment – along with three studies focused on refining and
validating G.A.M.E., including examining test-taker reactions, reliability, subgroup differences, construct and
criterion-related validity, and measurement equivalence across computer and mobile devices.

Findings – Evidence suggests that G.A.M.E. is a reliable and valid tool for employee selection. G.A.M.E.
exhibited convergent validity with other cognitive assessments, predicted job performance, yielded smaller
subgroup differences than traditional cognitive ability tests, was engaging for test-takers, and upheld
equivalent measurement across computers and mobile devices.

First, many cognitive ability tests predated or were developed
with little regard to the well-accepted Cattell–Horn–Carroll (CHC) theory of cognitive
abilities, focusing instead on learned knowledge (Gc), which does not necessarily generalize
and is sensitive to socioeconomic status.

Second, many cognitive ability tests cannot be administered flexibly across modalities
(e.g. computer, mobile phone). Test-takers tend to score lower on traditional cognitive ability
assessments when using mobile devices (Impelman, 2013) due to greater distractions (Morelli et al., 2014).

We next examined mean score differences and score distribution properties between majority and minority groups. Observed subgroup differences for gender (Cohen's d = −0.36, favoring males) and race/ethnicity (|d|s ranging from 0.09 to 0.38) were small, and were moderate for age (r = −0.25 and d = −0.63, favoring test-takers younger than 40 years). Importantly, White–Black (d = −0.38, favoring Whites) and White–Hispanic differences (d = 0.09, favoring Hispanics) were substantially smaller than the effect sizes of 1.00 and 0.72, respectively, typically found for cognitive ability tests (Roth et al., 2001).
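For reference, the d values above are Cohen's d, the standardized mean difference between two groups; a minimal statement of the standard formula (general convention, not specific to Martin et al.):

```latex
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```

So d = 1.00 (the typical White–Black gap on cognitive ability tests) means the group means sit a full pooled standard deviation apart, versus 0.38 for G.A.M.E.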

G.A.M.E. demonstrated incremental validity in
predicting performance ratings over a custom composite of ADEPT-15 personality
dimensions that was tailored to each role.

Only about half of the criterion-related validity coefficients exceeded .20, and about half fell between .10 and .20, notably lower than the coefficient of about .30 typically found for cognitive ability.

4
Q

Pyburn et al. (2008)

A

Described the "diversity–validity tradeoff dilemma": the selection procedures with the highest criterion-related validity (e.g., cognitive ability tests) also tend to produce the largest subgroup differences and thus the greatest adverse impact.

5
Q

Bosco et al. (2015b)

A

Tested whether executive attention (EA) and GMA predict simulation performance and supervisory ratings of performance, and the extent to which EA and GMA are associated with subgroup differences.

Results indicate
that, like GMA, EA positively predicts managerial simulation and supervisory ratings of performance. In addition, although reaching statistical
significance in only 1 of our 4 studies, EA was generally associated with
smaller subgroup differences than GMA, and meta-analysis across our
samples supports this reduced subgroup difference.

key attribute of EA is that, unlike GMA, measures of EA are relatively uninfluenced by learned knowledge (Kyllonen, 2002).

Employers have tried to reduce differences in hiring rates through a variety of strategies: adjusting test scores to minimize between-group differences (though such score adjustments are now illegal), assigning more weight to predictors associated with less adverse impact, and using more non-cognitive selection methods. None of these strategies has been very effective.

Put differently, attention refers
to a state in which cognitive representations (e.g., goals, chunks of information) are held active and ready for processing - an underlying ability
to manage cognitive representations (i.e., information, goals) in temporary storage.

We observed comparable validity coefficients for the EA–performance and GMA–performance relations, with nearly identical validity coefficients, corrected and uncorrected, for EA and GMA.

Subgroup diffs: GMA usually produced the standard ~1.0 SD difference, whereas EA tended to give a Cohen's d of ~.85.

*In 3 of the 4 studies, this reduction in the subgroup difference was not statistically significant.

And comparable validities: each about .20 for predicting supervisor-rated performance, and each about .40 for predicting simulation performance.
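To see why a drop from d ≈ 1.0 to d ≈ 0.85 matters in practice, here is a minimal sketch (not from Bosco et al.) of how a subgroup difference translates into minority/majority hiring-rate ratios under top-down selection; the normality assumption, the 30% selection ratio, and the 80/20 applicant mix are illustrative assumptions only.

```python
# Hypothetical illustration (not from Bosco et al.): how a subgroup difference of
# d = 1.0 (typical GMA) vs d = 0.85 (EA in these studies) translates into
# minority/majority hiring-rate ratios under a single top-down cutoff.
# Assumes normal scores within each group and an 80/20 majority/minority pool.
from scipy.stats import norm
from scipy.optimize import brentq

def hiring_rate_ratio(d, overall_selection_ratio=0.30, p_minority=0.20):
    """Ratio of minority to majority selection rates (adverse impact ratio)."""
    # Majority mean = 0, minority mean = -d, both SD = 1.
    def overall_pass_rate(cutoff):
        return (1 - p_minority) * norm.sf(cutoff) + p_minority * norm.sf(cutoff, loc=-d)
    # Find the cutoff that yields the desired overall selection ratio.
    cutoff = brentq(lambda c: overall_pass_rate(c) - overall_selection_ratio, -5, 5)
    return norm.sf(cutoff, loc=-d) / norm.sf(cutoff)

print(f"d = 1.00 -> AI ratio ~ {hiring_rate_ratio(1.00):.2f}")
print(f"d = 0.85 -> AI ratio ~ {hiring_rate_ratio(0.85):.2f}")
```

Larger d values push the minority selection rate further below the majority rate at any fixed cutoff, which is the adverse-impact mechanism both constructs are being compared on.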

6
Q

Schmidt et al. (2016)

A

This paper is an update of Schmidt and Hunter (1998), which summarized 85 years of
research findings on the validity of job selection methods up to 1998.

this paper presents the validity of 31
procedures for predicting job performance and the validity of paired combinations of general
mental ability (GMA) and the 29 other selection procedures. Similar analyses are presented for
16 predictors of performance in job training programs. Overall, the two combinations with the
highest multivariate validity and utility for predicting job performance were GMA plus an
integrity test (mean validity of .78) and GMA plus a structured interview (mean validity of .76)

Similar results were obtained for these two combinations in the prediction of performance in job
training programs. A further advantage of these two combinations is that they can be used for
both entry level hiring and selection of experienced job applicants.

During this time, a new and
more accurate procedure for correcting for the downward bias caused by range restriction has
become available (Hunter, Schmidt, & Le, 2006). This more accurate procedure has revealed that
the older, less accurate procedure had substantially underestimated the validity of general mental
ability (GMA) and specific cognitive aptitudes (e.g., verbal ability, quantitative ability, etc.;
Schmidt, Oh, & Le, 2006)

For example, an expanded meta-analysis shows that job sample or work sample tests are somewhat less valid than had been indicated by the older data. Also, meta-analytic results are now available for some newer predictors not included in the 1998 article.
These include Situational Judgment Tests (SJTs), college and graduate school grade point
average (GPA), phone-based structured employment interviews, measures of “emotional
intelligence”, person-job fit measures, person-organization fit measures, and self-report measures
of the Big Five personality traits.

Results show that many procedures that are valid predictors of job performance
nevertheless have little or no incremental validity over that of GMA. The rank order for zero
order validity is different from the rank order for incremental validity.

Also, the incremental
validity of most procedures is smaller than reported in Schmidt and Hunter (1998). This
reduction in apparent incremental validity results from the increase in the estimated validity of
GMA resulting from use of the more accurate correction for range restriction.

The validity of a hiring method is a direct determinant of its practical value, but it is not
the only determinant. Another direct determinant is the variability of job performance. At one
extreme, if variability were zero, then all applicants would have exactly the same level of later
job performance if hired.

At the other extreme, if performance variability is very large, it then becomes
important to hire the best performing applicants and the practical utility of valid selection
methods is very large. As it happens, this “extreme” case appears to be the reality for most jobs.

The variability among applicants (as opposed to among current employees) is called the applicant pool variability, and in hiring this is the variability that operates to determine practical value.

Another determinant of the practical value of selection methods is the selection ratio—the
proportion of applicants who are hired. At one extreme, if an organization must hire all who
apply for the job, no hiring procedure has any practical value. At the other extreme, if the
organization has the luxury of hiring only the top scoring 1%, the practical value of gains from
selection per person hired will be extremely large. But few organizations can afford to reject
99% of all job applicants.

Actual selection ratios are typically in the .30 to .70 range, a range that
still produces substantial practical utility.
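The three determinants just described (validity, performance variability, selection ratio) combine in the classic Brogden–Cronbach–Gleser utility model. A minimal sketch, not taken from the paper; the input values are illustrative, reusing figures cited elsewhere in this deck (GMA validity ≈ .66 for medium-complexity jobs, SDy ≥ 40% of a $40,000 salary, selection ratio .30).

```python
# Brogden-Cronbach-Gleser utility per hire per year (standard textbook form):
#   gain = validity * SDy * mean standardized predictor score of those hired.
# Illustrative inputs only; SDy uses the ">= 40% of salary" lower bound cited in this deck.
from scipy.stats import norm

def utility_per_hire(validity, sd_y_dollars, selection_ratio):
    cutoff = norm.ppf(1 - selection_ratio)                 # top-down hiring cutoff (z-score)
    mean_z_selected = norm.pdf(cutoff) / selection_ratio   # E[z | z > cutoff]
    return validity * sd_y_dollars * mean_z_selected

# e.g., validity .66, SDy = 40% of a $40,000 salary, selection ratio .30:
print(round(utility_per_hire(0.66, 16_000, 0.30)))  # roughly $12,000 per hire per year
```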

7
Q

Morelli et al. (2014)

A

Test-takers tend to score lower on traditional cognitive ability assessments when using mobile devices (Impelman, 2013), which Morelli et al. (2014) attribute to greater distractions.

8
Q

Hunter et al. (2006)

A

After applying the new range correction procedure, found that GMA validity
ranged from .74 for professional and managerial jobs down to .39 for unskilled jobs. The mean
validity for medium complexity jobs (62% of all jobs in the U.S.) was .66.

The medium complexity category includes skilled blue collar jobs and mid-level white collar jobs, such as upper level clerical and mid to lower level administrative and managerial jobs.
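For intuition about the correction being referred to, here is the classic direct range-restriction (Thorndike Case II) formula as a minimal sketch; Hunter, Schmidt, and Le's (2006) procedure additionally handles indirect range restriction and is more involved, so this is only the simplest case:

```latex
r_c = \frac{U\,r}{\sqrt{1 + r^2\,(U^2 - 1)}},
\qquad
U = \frac{SD_{\text{applicant pool}}}{SD_{\text{restricted (hired) sample}}}
```

Because hired samples are less variable than applicant pools (U > 1), observed validities understate operational validity; accounting for indirect restriction raises the estimates further, which is why these GMA values exceed earlier ones.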

9
Q

Schmidt et al. (1979)

The meanings of validity

A

The predictive validity coefficient is directly proportional to the practical economic
value (utility) of the assessment method.

As for differential validity per se, the general finding has been that validities (the focus of this study) do not differ appreciably across subgroups.

That is, given similar scores on selection procedures, later job
performance is similar regardless of group membership and regardless of how job performance is
measured (objectively or via supervisor ratings).

On other selection
procedures (in particular, personality and integrity measures), subgroup differences are rare or
nonexistent.

10
Q

Hunter et al. (1990)

A

Use of hiring methods with increased predictive validity leads to substantial
increases in employee performance as measured in percentage increases in output, increased
monetary value of output, and increased learning of job-related skills.

Research has shown that the variability of performance and output among (incumbent) workers is very large, and that it would be even larger if all job applicants were hired or if applicants were selected randomly from among those who apply.

Employee output can also be measured as a percentage of mean output; that is, each
employee’s output is divided by the output of workers at the 50th percentile and then multiplied
by 100. Research shows that the standard deviation of output as a percentage of average output
varies by job level. For unskilled and semi-skilled jobs, the average figure is
19%. For skilled work, it is 32%, and for managerial and professional jobs, it is 48%.

11
Q

Schmidt et al. (2016) on: The variability of employee job performance can be measured in a number of ways, but
two scales have typically been used: dollar value of output and output as a percentage of mean
output

A

The variability of employee job performance can be measured in a number of ways, but
two scales have typically been used: dollar value of output and output as a percentage of mean
output. The standard deviation across individuals of the dollar value of output has been found to be at minimum 40% of the mean salary of the job (Schmidt & Hunter, 1983;
Schmidt et al., 1979; Schmidt, Mack, & Hunter, 1984). The 40% figure is a lower bound value;
actual values are typically considerably higher. Thus, if the average salary for a job is $40,000,
then 1SD is at least $16,000. If performance has a normal distribution, then workers at the 84th
percentile produce output worth $16,000 more per year than average workers (i.e., those at the
50th percentile). And the difference between workers at the 16th percentile (“below average”
workers) and those at the 84th percentile (“superior” workers) is twice that: $32,000 per year.
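The worked example generalizes: under normality, the dollar gap between any two performance percentiles is the gap in z-scores times SDy (a standard result, not specific to this paper):

```latex
\Delta_{\$}(p_1, p_2) = (z_{p_1} - z_{p_2})\, SD_y,
\qquad
\Delta_{\$}(84\%, 16\%) \approx \bigl(1 - (-1)\bigr) \times \$16{,}000 = \$32{,}000
```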

Employee output can also be measured as a percentage of mean output; that is, each
employee’s output is divided by the output of workers at the 50th percentile and then multiplied
by 100.

a superior
worker in a lower level job produces 19% more output than an average worker, a superior skilled
worker produces 32% more output than the average skilled worker, and a superior manager or
professional produces output 48% above the average for those jobs.

12
Q

Schmidt et al. (2016) - History of “the theory of situational specificity of validity”

A

However, as early as the 1920s it became apparent that different
studies conducted on the same assessment procedure did not appear to agree in their results.
Validity estimates for the same method and same job were quite different for different studies.

During the 1930s and 1940s the belief developed that this state of affairs resulted from subtle
differences between jobs that were difficult or impossible for job analysts and job analysis
methodology to detect. That is, researchers concluded that the validity of a given procedure
really was different in different settings for what appeared to be basically the same job, and that
the conflicting findings in validity studies were just reflecting this fact of reality.

This belief, called the theory of situational specificity of validity, remained dominant in
personnel psychology until the late 1970s when it was discovered that most of the differences
across studies were due to statistical and measurement artifacts and not to real differences in the
jobs (Schmidt & Hunter, 1977).

The largest of these
artifacts was simple sampling error variation, caused by the use of small samples in the studies.

Studies based on meta-analysis provided more accurate estimates of the average
operational validity and showed that the level of real variability of validities was usually quite
small and might in fact be zero (Schmidt, 1992)
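A minimal sketch of the logic behind that claim, using the standard Schmidt–Hunter validity-generalization decomposition (not quoted from the paper): the observed variance of validities across studies is compared with the variance expected from sampling error alone, with mean observed validity r̄ and average sample size N̄.

```latex
\hat{\sigma}^2_{\rho} = \sigma^2_{r,\,\text{observed}} - \sigma^2_{e},
\qquad
\sigma^2_{e} \approx \frac{\bigl(1 - \bar{r}^{\,2}\bigr)^2}{\bar{N} - 1}
```

If little or no variance remains after subtracting sampling error (and other artifacts such as differences in range restriction and criterion unreliability), the apparent study-to-study validity differences were never real, which is what overturned situational specificity.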

In addition, the findings indicated that the variability of validity was not only small or zero across settings for the same type of job, but was also small across different kinds of jobs of similar complexity (Hunter, 1980).

13
Q

Neter & Ben-Shakhar (1989) - on handwriting validity!

A

When the writers/participants are required to
copy the same material from a book to create their handwriting sample, the evidence indicates
that neither graphologists nor non-graphologists can infer any valid information about personality traits or job performance from the handwriting samples.

So any apparent validity lies not in the handwriting itself but in the written content: style of writing, range of vocabulary, expression of emotions, verbal fluency, grammatical skills, and general knowledge.

14
Q

Schmidt et al. (2016)

cost of application, best predictor of job learning.

When some selection measures can’t be used.

Age and performance

A

First, GMA has the highest validity and the lowest application cost, and it is the best available predictor of job-related learning.

Like work sample measures, job
knowledge tests cannot be used to evaluate and hire inexperienced workers.

A drawback of GMA: adverse impact (large subgroup differences).

Some measures can't always be used; e.g., executives often resist testing, preferring to be judged on their accomplishments.

Table 1 shows that applicant age has essentially no validity for predicting job performance. Age is about as totally unrelated to job performance as any measure can be.

15
Q

Schmidt et al. (2016)

Juicy points

  • a striking finding, in contrast to Bartram's (2005) 'Great Eight' competencies analogue
A

validity of GMA is higher than that of specific aptitudes—even when specific aptitudes are
chosen to match the most important aspects of job performance (i.e., spatial perception for
mechanical jobs; cf. Schmidt, 2011).

from a purely economic standpoint, research shows that the value of the
increases in job performance from good selection practices overshadows any potential costs
stemming from defending against such suits. Thus, there is little legal risk stemming from the use of GMA assessments.

In court, when defending such assessments, it is now more common to rely on general findings (e.g., Schmidt & Hunter's 1998 summary of 100 years of research and meta-analyses).

Such demonstrations are increasingly based on summaries of these kinds of research findings.

16
Q

Ployhart & Holtz (2008)

A

Reviewed 16 strategies, including using educational attainment as a proxy for GMA, test score banding, predictor and criterion weighting, and retesting.

Category I - Use different predictors or predictor combinations

  1. Use alternative predictor measurement methods (e.g., interviews, work samples, assessment centers, situational judgment tests, biodata).
    - Predictors with smaller cognitive loadings have smaller subgroup differences.
    - Some methods decrease differences for one group but increase them for another.
  2. Use educational attainment as a GMA proxy - but subgroup differences increase as educational attainment increases.
    - Educational attainment may be more useful than GPA.
    - Applicant faking is a concern.
    - GPA reduces differences somewhat because it partly depends on conscientiousness.
  3. Use more precise measures of ability (dovetails with Bartram's 2005 meta-analysis on the 'Great Eight' competencies).
    - Small to moderate reduction in subgroup differences.
    - May again help one group's difference but worsen another's.

Category II - Strategies that combine and manipulate scores

  4. Assess the full range of KSAOs: adding noncognitive predictors that are related to performance but engender smaller subgroup differences may reduce the overall subgroup difference (see the composite formulas sketched at the end of this card).
    - Typically effective, but the magnitude of the reduction in adverse impact depends on the predictor validities, with diminishing returns after adding about four predictors.
    - The predictor with the highest validity will most determine the composite subgroup difference (when using regression-based weights).
    - Including a full battery of predictors usually produces higher validity.

  • Banding and score adjustments: no predictor is perfectly reliable; acknowledging this unreliability by creating "bands" within which scores cannot be empirically distinguished may increase racioethnic minority hiring.
    - Racioethnic minority or female preference within bands is usually illegal.
    - May lower validity.

Category III - Strategies that reduce construct-irrelevant variance in predictor scores

  • Minimize verbal ability and reading requirements: by assessing verbal ability only to the level supported by a job analysis, construct-irrelevant variance that drives subgroup differences is reduced.
    - Must demonstrate equivalence when developing the "lower verbal ability" alternative.
    - Must ensure verbal ability is not contributing to the validity of the alternative.
    - Verbal ability requirements cannot be lower than the minimum level identified in the job analysis.

  • Use "content free" items that are not more (un)familiar to, and do not advantage, any particular cultural subgroup. Effects are small and inconsistent.
  • Differential item functioning (DIF): removing items that demonstrate DIF will reduce subgroup differences - small and inconsistent results.
  • Extend time limits: White and racioethnic minority groups both improve; the White group sometimes improves more.
  • Retesting: predictor scores tend to improve with retesting for both Whites and Blacks.

The most effective categories of strategies involve using predictors with smaller subgroup differences (Category I) and combining/manipulating predictor scores (Category II).

Among the most effective strategies, the only one that does not also reduce validity is assessing the full range of KSAOs (Strategy 4); in fact, this strategy tends to enhance validity.

Recommendations:
  1. Use job analysis to carefully define the nature of performance on the job, being sure to recognize both technical and non-technical aspects of performance.
  2. Use cognitive and non-cognitive predictors to measure the full range of relevant cognitive and non-cognitive KSAOs, as much as practically realistic.

Decrease the cognitive loading of predictors and minimize verbal ability and reading requirements to the extent supported by a job analysis. For example, items and directions shouldn't be at a college reading level if the job doesn't require it.
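A sketch of the standard composite formulas behind the Category II logic (textbook results, not quoted from Ployhart & Holtz): for a unit-weighted composite of two standardized predictors with intercorrelation r12, criterion validities r1y and r2y, and subgroup differences d1 and d2,

```latex
r_{\text{comp},y} = \frac{r_{1y} + r_{2y}}{\sqrt{2 + 2 r_{12}}},
\qquad
d_{\text{comp}} = \frac{d_1 + d_2}{\sqrt{2 + 2 r_{12}}}
```

Adding a noncognitive predictor with near-zero d pulls the composite d down, but how far depends on the predictor intercorrelation and on the weights, which is why the reduction in adverse impact varies with the predictor validities.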

17
Q

Sackett et al. (2001) showed that...

A

Creating a composite battery of IQ & structured interviews & integrity tests produces only very small reductions in differential hiring rates (racially).

18
Q

DeCorte et al. (2007)

Pareto optimization for…

A

Provide an innovative Pareto-optimization approach to combining predictors that maximizes validity while minimizing subgroup differences. The approach is useful for practicing managers attempting to balance diversity and predictive-validity goals.
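A minimal sketch of the idea, not the authors' actual algorithm (De Corte et al. use a formal normal-theory optimization rather than this brute-force grid, and the predictor statistics below are made-up assumptions): sweep the weight on a cognitive versus a noncognitive predictor, compute composite validity and composite d, and keep the non-dominated (Pareto-optimal) weightings.

```python
# Hypothetical sketch of the validity/diversity Pareto tradeoff for a two-predictor
# composite (w on GMA, 1-w on a noncognitive predictor). Illustrative inputs only.
import numpy as np

r12 = 0.20                       # predictor intercorrelation (assumed)
val = np.array([0.50, 0.30])     # criterion validities: GMA, noncognitive (assumed)
d   = np.array([1.00, 0.20])     # standardized subgroup differences (assumed)

def composite(w):
    """Validity and subgroup d of the composite w*GMA + (1-w)*noncog."""
    wts = np.array([w, 1.0 - w])
    R = np.array([[1.0, r12], [r12, 1.0]])
    sd = np.sqrt(wts @ R @ wts)              # SD of the weighted composite
    return (wts @ val) / sd, (wts @ d) / sd  # (validity, subgroup difference)

points = [(w, *composite(w)) for w in np.linspace(0, 1, 101)]
# Pareto-optimal: no other weighting has both higher validity and lower d.
pareto = [p for p in points
          if not any(q[1] > p[1] and q[2] < p[2] for q in points)]
for w, v, dd in pareto[::20]:
    print(f"w_GMA={w:.2f}  validity={v:.3f}  d={dd:.3f}")
```

Each retained weighting is a defensible compromise: moving along the frontier trades predicted performance for reduced subgroup difference.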

19
Q

Wonderlic Personnel Test (WPT; Wonderlic & Associates, 2002)

A

Standard Wonderlic does not include measures of personality traits.

20
Q

Ryan et al. (1998) demonstrated that,

A

relative to selecting solely on cognitive ability scores,
selecting solely on personality scores would reduce adverse impact
against Blacks and Hispanics but would simultaneously increase adverse impact against women.

21
Q

Murphy et al. (2003)

A

In contexts in which there are a large
number of applicants for a small number of positions, the use of
cognitive ability tests in selection can virtually eliminate Black and
Hispanic applicants from consideration.

*The use of tests that overestimate performance differences
among racial groups (i.e., because groups differ more on the test
than they do on job performance) may contribute to an artificial or
overstated stratification of organizations and society by race.

About 3/4 of surveyed I/O psychologists disagree with statements such as:

General cognitive ability tests show levels of validity too low to justify the negative social consequences of those tests

The validity of cognitive ability tests for real-life outcomes is low.

General cognitive ability is little more than academic
intelligence

Nearly 2/3 agree cog ability tests are fair!

For example, 41% of respondents disagreed with the statement “General cognitive ability is the most
important individual difference variable,” but 44% agreed.

Schmidt (2002) suggested that many students,
researchers, and users of cognitive ability tests are uncomfortable
with the policy implications of the g-ocentric point of view.

To compound this fact, it is also likely that ethnic–racial differences on ability tests are considerably larger than ethnic–racial differences in job performance (Ford, Kraiger, & Schechtman, 1986; Outtz, 2002; Schmidt, 2002).

(Note to self: but didn't another source say that validity is the same for two individuals with the same score from two different racial groups?)

22
Q

Sternberg & Hedlund (2002)

Is tacit knowledge what Campion refers to as procedural knowledge?

Sternberg & Hedlund (2002) state: This knowledge tends to be procedural in nature and to operate outside of focal awareness

A

Creative and practical intelligence (Sternberg, 1985, 1997, 1999). These broader conceptualizations of intelligence recognize that individuals have different strengths and that
these strengths may not be identified through traditional approaches to measuring
intelligence.

Practical intelligence is defined as the ability that individuals use to find a more
optimal fit between themselves and the demands of the environment through adapting to the environment, shaping (or modifying) the environment, or selecting a new
environment in the pursuit of personally-valued goals.

Such notions about the
tacit quality of the knowledge associated with everyday problem solving also are reflected in the common language of the workplace as people attribute successful performance to “learning by doing” and to “professional intuition” or “instinct.”

It has three main features, which correspond to the conditions under which
it is acquired, its structural representation, and the conditions of its use:

  1. First, tacit knowledge is viewed as knowledge that generally is acquired with
    little support from other people or resources. In other words, the individual is not
    directly instructed as to what he or she should learn, but rather must extract the
    important lesson from the experience

Second, tacit knowledge is viewed as procedural in nature. It is knowledge about
how to perform various tasks in various situations. Drawing on Anderson’s (1983)
distinction between procedural and declarative knowledge, tacit knowledge can be
considered a subset of procedural knowledge that is drawn from personal experience.

Part of the difficulty in articulating tacit knowledge is that it typically reflects a set of complex, multicondition rules (production systems) for how to pursue particular goals in particular situations (e.g., rules about how to judge people accurately for a variety of purposes and under a variety of circumstances).

  3. The third characteristic feature of tacit knowledge is that it has direct relevance to the individual's goals: knowledge that is based on one's own practical experience.

Measuring tacit knowledge is often done through situational judgement tests!

These
types of tests generally are used to measure interpersonal and problem-solving skills.

In a situational judgment test (SJT) or tacit knowledge (TK) test, each question presents a problem relevant to the domain of interest (e.g., a manager intervening in a dispute between two subordinates) followed by a set of options (i.e., strategies) for solving the problem.

Respondents are asked either to choose the best and worst alternatives from among a few options, or to rate on a Likert-type scale the quality or appropriateness of several potential responses to the situation.
The development of TK tests, like many SJTs, begins with the identification of
critical incidents in the workplace (Flanagan, 1954).

TK tests have been scored in one of four ways: (a) by correlating participants’
ratings with an index of group membership (i.e., expert, intermediate, novice), (b)
by judging the degree to which participants’ responses conform to professional
“rules of thumb,”

The correlation between job experience and job performance falls in the range of
.18 to .32. Additional research suggests that this relation is
mediated largely by the direct effect of experience on the acquisition of job knowledge.

Wagner and Sternberg
(1985) found a significant correlation between tacit knowledge scores of 54 business managers and the manager’s level within the company, r(54) = .34, p < .05,
and years of schooling, r(54) = .41, p < .01. In a follow-up study with 49 business
professionals, Wagner (1987) found significant correlations between tacit knowledge scores and years of management experience, r(49) = .30, p < .05.

The relation
between g and performance is attributed largely to the direct influence of g on the acquisition of job-related knowledge (Borman et al., 1993; Hunter, 1986; Schmidt et
al., 1986).

In the research reviewed here, TK tests exhibit trivial to moderate correlations
with measures of g.

In general, job knowledge tests have been found to predict performance fairly consistently, with an average corrected validity of .48 (Schmidt & Hunter, 1998). As indicated earlier, much of this prediction is attributed to the relation between job knowledge and g (Borman et al., 1993; Hunter, 1986).

Simply put, individuals who learn
the important lessons of experience are more likely to be successful. However, because tacit knowledge is a form of practical intelligence, it is expected to explain
aspects of performance that are not accounted for by tests of g.

TK measures tend to show discriminant validity with personality traits, tend to correlate among themselves, and show a general factor among themselves, yet they do not appear to exhibit the same group differences found on traditional intelligence tests.
The research reviewed earlier spans more than 15 years and lends support to
several assertions regarding tacit knowledge. First, tacit knowledge generally increases with experience. Second, tacit knowledge is distinct from general intelligence and personality traits. Third, TK tests predict performance in several domains and do so beyond tests of general intelligence. Fourth, practical
intelligence may have a substantial amount of generality that is distinct from the
generality of psychometric g. Finally, scores on TK tests appear to be comparable across racial and gender groups.

In-basket tests were designed to assess individuals' ability to deal with job-related tasks under job-like constraints (e.g., deadlines) (Sternberg & Hedlund, 2002).

Assessment centers also are used to observe an individual’s performance in situations that have been created to represent aspects of the actual job situation. Assessment centers typically present small groups of individuals with a variety of tasks, including in-basket tests, simulated interviews, and simulated group
discussions (Bray, 1982; Thornton & Byham, 1982).

23
Q

Campion et al. (2001) - on banding

A

Selection devices contain measurement error, so strict rank-order selection of applicants' scores ignores the error that is partly responsible for score differences.

Technically, strict rank-order selection is itself a form of banding (just with very narrow, e.g., 1-point, bands).

Reasons some consider banding: it is done in accordance with the reliability of the test (the standard error of measurement), and it considers the consequences of measurement and prediction error.

One form: treat scores as equivalent unless their difference is statistically significant (Cascio et al., 1991).
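A sketch of the statistical-significance band just mentioned (the standard SED approach associated with Cascio et al., 1991; the 1.96 multiplier is the usual convention rather than a detail from this card):

```latex
SEM = SD_x\sqrt{1 - r_{xx}},
\qquad
SED = SEM\sqrt{2},
\qquad
\text{band width} = 1.96 \times SED
```

Candidates whose scores fall within one band width of the top remaining score are treated as statistically indistinguishable.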

As of Campion et al. (2001), the premise and logic of banding have actually been upheld in court; what has been successfully challenged is how candidates are selected from within the band - in particular, explicit preference for minority candidates within bands has not been supported.

2 things are clear about banding:

  • if banding is used without minority preferences, no problem.
  • But the courts clearly reject banding with systematic minority prefs.

But banding without minority preference does little to reduce AI.

24
Q

Carpini et al. (2017)

A

Five clusters of work performance literature have emerged over the previous 40 years:

Management – largest cluster; focus on individual performance in relation to achieving organizationally relevant outcomes
Personnel selection perspective – 2 pursuits: 1) identify and reliably measure individual performance (e.g., “criterion” measure), 2) reliably predict future performance using selection tests
Motivation - role of motivation in facilitating task performance
Good citizen - OCBs and related concepts
Job attitudes – perpetuates the belief that a happy worker is most likely a productive one