Week 4, Selection, Key Terms Flashcards

1
Q

Why is Validity so Important?

A

Scientific perspective
-Impossible to understand the dynamics of an organization if we don’t have accurate measurement

Applied perspective
-Why test at all if it doesn’t predict desired behaviors?
-Important legal defense in the case of a lawsuit alleging discrimination where adverse impact is found.

2
Q

Whole person approach

A

Cognitive Ability
Specific Knowledge
Personality
Integrity Tests
Interview
Physical Ability
Etc.

3
Q

Predicting variance

A

Variance = people differ (e.g., in job performance)

Predicting variance means using a measure to explain why people differ in job performance and to predict who will perform well and who will perform less well

A predictor accounts for X% of the variance in the DV (job performance)

4
Q

Incremental Variance

A

Cognitive Ability
R = .50
V = R² = 25%

Interview
R = .35
V = 12%

Personality
R = .20
V = 4%

The degree of incremental variance each predictor adds depends on the correlations among the predictors (see the sketch below).
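A minimal sketch (simulated data, not course material) of how variance explained relates to the correlation, and of how the incremental variance a second predictor adds shrinks as its correlation with the first predictor grows:

```python
# Hypothetical illustration: variance explained = r^2, and incremental
# variance from adding a second predictor that is correlated with the first.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
g = rng.normal(size=n)                      # cognitive ability
interview = 0.5 * g + rng.normal(size=n)    # interview score, correlated with g
perf = 0.5 * g + 0.2 * interview + rng.normal(size=n)

r_g = np.corrcoef(g, perf)[0, 1]
print(f"r(g, performance) = {r_g:.2f}; variance explained = {r_g**2:.0%}")

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit (intercept included in X)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

ones = np.ones(n)
r2_g_only = r_squared(np.column_stack([ones, g]), perf)
r2_both = r_squared(np.column_stack([ones, g, interview]), perf)
print(f"Incremental variance from adding the interview: {r2_both - r2_g_only:.1%}")
```

If the interview were uncorrelated with g, it would contribute its full share of variance; the more it overlaps with g, the smaller its incremental contribution.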

5
Q

Ability Tests

A

An ability or aptitude is a person’s capacity to do or learn to do a task.

Often referred to as “G” or “GMA” (general mental ability)

“G” stands for the general factor of mental ability
-It subsumes multiple subcategories of more specific abilities

Cognitive abilities involve information processing and learning (e.g., intelligence).

Probably the best predictor we have in terms of criterion validity (r = .5).

6
Q

Ability

A

Cognitive ability tests include general tests of intelligence and tests of specific abilities (e.g., mathematical).

They are usually paper-and-pencil group tests (now administered online, both proctored and unproctored).

They tend to be efficient, valid, and low-cost predictors of job performance for many jobs.

Best used in jobs with at least moderate complexity

Cognitive abilities can be defined as hypothetical attributes of individuals that manifest when those individuals perform tasks involving the active manipulation of information.

Top overall predictor of job performance, r = 0.40–0.50 (Hunter & Hunter, 1984)

We believe ability predicts performance

G is relatively stable over time.

Every individual has the ability to do something or some class of things, but there are differences between individuals in the degree of their abilities.

G is involved when individuals are performing tasks that require the active manipulation of information.

May be nonverbal where test takers aren’t proficient in the local language.

6
Q

Achievement Tests

A

A knowledge and skill (achievement) test assesses present level of proficiency, not merely the capacity to learn; it emphasizes acquired knowledge and skills more than ability tests do.

More content valid

Some knowledge and skill tests are general (math or reading), some specific (SOPs).

May be pencil-and-paper or performance tests.

These tests are much more tied to the content validity approach

7
Q

Adverse impact

A

Adverse impact refers to the impact of a given selection practice on a protected class

Defined in terms of selection ratios of the protected class and the majority group.

Adverse impact occurs when the selection ratio for the protected class is less than 80% of the selection ratio of the group with the largest selection ratio.

That is, if 60% of male applicants were offered a job, there would be adverse impact against females if fewer than 48% of them (80% of 60%) were offered a job.

It is not illegal to use a selection device with adverse impact. But to be legal it must be job-relevant, assessing a KSAO necessary for job success.

The organization should be prepared to defend itself legally.

8
Q

Adverse Impact example

A

Caucasian
100 applicants
60 are hired
SR = 60/100 = .6

Latino
100 applicants
20 are hired
SR = 20/100 = .2

.2 / .6 = .33 < .80 (the four-fifths rule), so there is adverse impact (see the sketch below)
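A minimal sketch of the same four-fifths check; the function name and dictionary layout are just for illustration:

```python
# Four-fifths (80%) rule check for the example above.
def selection_ratios(hired, applicants):
    """Selection ratio per group, plus a four-fifths-rule flag."""
    ratios = {grp: hired[grp] / applicants[grp] for grp in applicants}
    highest = max(ratios.values())
    flags = {grp: (sr / highest) < 0.80 for grp, sr in ratios.items()}
    return ratios, flags

ratios, flags = selection_ratios(
    hired={"Caucasian": 60, "Latino": 20},
    applicants={"Caucasian": 100, "Latino": 100},
)
print(ratios)  # {'Caucasian': 0.6, 'Latino': 0.2}
print(flags)   # {'Caucasian': False, 'Latino': True} -> .2/.6 = .33 < .80, adverse impact
```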

9
Q

Self-Report Measures

A

Self-report measures are favored by organizations for several reasons:

Self-report measures have demonstrated considerable validity with respect to personnel-related decisions (Barrick & Mount, 1991)

Self-report measures such as personality measures and biodata inventories exhibit less adverse impact than do alternative selection devices such as cognitive ability tests (Sackett & Wilk, 1994)

10
Q

Personality Measures

A

A personality trait is a predisposition or tendency to behave in a particular way across different situations.
-Example: sociability.

Useful in predicting typical performance

Personality traits can be relevant for job performance and other behaviors on the job
-(Example: sociability in a salesperson).

General term that may subsume many other individual differences variables.

Set of characteristics of a person that account for consistent patterns of responses to situations.

Cognitive ability is the “can do.”
Personality is the “will do.”

11
Q

Personality

A

A personality inventory assesses personality traits

May assess one trait or many.
-Single construct
-Multiple construct
-Profile

May group people into types (trait combinations; e.g., extroverted vs. introverted).

Generally is a paper-and-pencil test (or online).

Have been popular in organizations but have some problems.

Most personality variables have significantly less adverse impact against protected classes than cognitive abilities.

12
Q

Personality Alternative Perspective

A

Mischel (1968) heavily criticized the use of personality measures
He argued that stable personality traits don’t really exist
They are just a set of labels for similar behaviors

Which comes first, the chicken or the egg? This is a theoretical argument: the trait perspective vs. the behaviorist perspective.

The answer is both!

Genetic support of personality + stability over time

Environmental influences and malleability

Situational constraints
-Ex. Extraversion is expressed differently at church vs. at a bar

Tett & Guterman (2000): trait activation theory
-Use of a frame of reference (FOR)

13
Q

Frame of Reference (FOR)

A

Many personality measures ask questions across a wide range of situations

The average of the responses across these situations is considered the true score of the trait

Organizations don’t care about true score, just work performance

Situational specificity

Better psychometric properties (Schmitt et al., 1995; Bing et al., 2004)

14
Q

The “Big 5”

A

The factors that comprise the big five model of personality are:

Extroversion
-I really enjoy talking to people

Agreeableness
-I try to be courteous to everyone I meet

Conscientiousness
-I work hard to accomplish my goals

Neuroticism (Emotional Stability)
-I am not a worrier (reverse scored)

Openness to Experience
-I enjoy thinking about theories & abstract ideas

15
Q

Big 5 in studies

A

Barrick and Mount (1991) found that the personality dimension of conscientiousness had moderate predictive validity (mean corrected R = 0.22) with all job performance criteria across a wide variety of occupational settings.
The personality factor of extroversion was found to be a valid predictor of job performance for managers and sales personnel.
The same study also demonstrated that openness to experience was a significant predictor of training performance.
Barrick and Mount (1996) and Tett et al. (1991) found that the personality construct of agreeableness was a valid predictor of job performance.

16
Q

History of Big Five

A

Birth: Sir Francis Galton (1884) used dictionary words to describe people.

Cattell reduced the larger list (17,953 terms) to 16 factors using factor analysis.

Tupes and Christal (1961) are credited with the Big Five.

Costa and McCrae developed the NEO-PI, which brought attention to the Big Five.

Replicability has been the Big Five’s biggest selling point: it has been replicated across many samples.

17
Q

Prediction and personality

A

Guion & Gottier (1965) concluded that personality tests were of little use because they didn’t predict behavior well
-Validities seemed to hit a .30 ceiling

Problem: too many scattered traits over too many scattered studies

A consistent taxonomy improved prediction

Criticisms remain (Morgeson et al., 2007)…

Murphy (2005)

17
Q

Research on the big five

A

Conscientiousness (C) is a consistent predictor of job performance (Barrick et al., 1991, 2002; Hurtz & Donovan, 2000; Tett et al., 1991).

Traits interact
-Workers high in both agreeableness and C received higher job performance ratings than those high in C but low in agreeableness (Witt et al., 2002); a sketch of testing such an interaction follows this card

Openness – Training

Extraversion – Sales and management
Barrick et al. (1991)

Workers high in conscientiousness are predisposed to be organized, exacting, disciplined, diligent, dependable, methodical, and purposeful.
They are more likely than low-conscientiousness workers to perform work tasks thoroughly and correctly, to take initiative, and to remain committed to their work performance.

High levels of agreeableness tend to give conscientious workers the boost they need to be effective in the workplace.

Job performance is multidimensional; to succeed, you may need a combination of g and personality traits.
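A minimal sketch (simulated data and hypothetical variable names, not Witt et al.’s actual analysis) of how an Agreeableness × Conscientiousness interaction can be tested with moderated regression:

```python
# Moderated regression sketch: does agreeableness moderate the
# conscientiousness-performance relationship? Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
consc = rng.normal(size=n)
agree = rng.normal(size=n)
perf = 0.30 * consc + 0.10 * agree + 0.20 * consc * agree + rng.normal(size=n)

df = pd.DataFrame({"consc": consc, "agree": agree, "perf": perf})
model = smf.ols("perf ~ consc * agree", data=df).fit()

# The consc:agree coefficient is the interaction test; a positive value
# means C pays off more for workers who are also high in agreeableness.
print(model.params[["consc", "agree", "consc:agree"]])
```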

18
Q

C as Universal predictor?

A

“C” has been purported to predict across all jobs
Those high in “C” should have higher job performance

Validity generalization
-Accountant…yes.
-Commercial Artist…no!

Personality-based job analysis

Linear vs. curvilinear relationships

19
Q

Self report measures

A

The largest disadvantage of self-report measures is that the accuracy of the data collected is completely dependent on the willingness of the respondent to give accurate information.

In organizational settings it is often not in the best interest of the respondent to answer honestly.

20
Q

Factors contributing to the elevation of applicant scores

A

Situational variance
Cognitive biases
Ability
Integrity
Motivation
Faking-related constructs

21
Q

Does Faking Matter?

A

Several studies have reported minimal effects on criterion-related validity.
-Christiansen et al. (1994)
-Hough et al. (1990)
-Ones et al. (1996)

Mueller-Hanson et al. (2003) suggested social desirability (SD) is not likely to be synonymous with faking

Griffith & Peterson (2008).

The argument has logically proceeded to whether or not faking “matters.” First of all, any grad student can tell you that “does faking matter?” is a poor research question. To progress in our understanding of the phenomenon, we must be more precise in our investigation. Many who examine this research area are concerned with the effects of faking on criterion-related validity or, more implicitly, with the job performance of the faker.

22
Q

Faking may matter

A

Other researchers have found faking to lower criterion-related validity coefficients
-Douglas et al. (1996)
-Komar et al. (2008)
-Zickar, Rosse, & Levin (1996)

Studies have been criticized for unrealistic settings

Some sales and service jobs require a form of self-presentation that is very similar to faking (or SD). The issue of cross-situational specificity would be particularly relevant for this argument. More recent views of faking suggest that it is a complex process where the applicants’ responses may require an understanding of the job, and an ability to formulate a schema of a “desirable” candidate (Vasilopoulos et al., 2000). To the extent that this situation is duplicated in the employment environment, the skill of faking may transfer and manifest itself in improved performance. Therefore, for certain jobs, or certain job activities, this argument is plausible.

23
Q

Personality & Prediction

A

Researchers have called for a cessation of faking research because personality is a bad predictor to start with (Morgeson et al., 2004)

Perhaps faking is one of the culprits

When fakers were removed from the applicant pool, observed validities were r = .40

Corrected validities were r = .46 (a simulation of this logic appears below)

Getting close to “G” range…
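A minimal sketch of the logic only; the data, the 20% faking rate, and the faking flag are entirely hypothetical:

```python
# Simulated illustration: score inflation by fakers adds criterion-irrelevant
# variance, so removing flagged fakers can raise the observed validity.
import numpy as np

rng = np.random.default_rng(7)
n = 2000
true_consc = rng.normal(size=n)
faker = rng.random(n) < 0.20                       # 20% suspected fakers
observed = true_consc + np.where(faker, 1.5, 0.0) + 0.5 * rng.normal(size=n)
performance = 0.4 * true_consc + rng.normal(size=n)

r_all = np.corrcoef(observed, performance)[0, 1]
r_clean = np.corrcoef(observed[~faker], performance[~faker])[0, 1]
print(f"validity with fakers included: {r_all:.2f}")
print(f"validity with fakers removed:  {r_clean:.2f}")
```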

24
Q

Integrity Tests

A

An integrity test is designed to predict whether or not an employee will engage in counterproductive or dishonest behavior on the job.

-Examples include cheating, poor performance, sabotage, and theft; sometimes absence and turnover.

In the last decade the use of integrity tests has increased drastically (Ones et al., 1993).

Integrity tests are self-report measures that allow the scorer to make inferences about the respondent’s honesty (Murphy, 1993)

25
Q

Overt Integrity Test

A

Assesses attitudes and prior behavior.

Test taker is asked to agree/disagree with statements about moral behavior and to say how often they have done particular behaviors, such as theft.

These tests typically contain items related to past illegal behaviors, hypothetical situations related to dishonest behavior, or opinions about illegal activities (Murphy, 1993)

Overt tests are designed to directly assess attitudes about illegal behavior.

26
Q

Personality-based (covert) integrity test

A

A personality integrity test assesses personality characteristics found to predict counterproductive behavior. Purpose of the test is hidden to the testee.

Integrity tests can predict counterproductive behavior and job performance.

They do a better job of predicting absence, general counterproductive behavior, and job performance than they do predicting theft.

Designed to predict a wide range of counterproductive work behaviors.
These tests are usually composites of personality dimensions such as conscientiousness, reliability, and adjustment.

27
Q

Stats of personality-based integrity test

A

Integrity tests seem to predict counterproductive behavior

These tests are usually composites of personality dimensions such as conscientiousness, self-control, and adjustment.

At first examination these tests appear to be measuring different constructs.

However, some researchers (Ones et al., 1993; 1995) assert that both categories of these tests broadly measure the construct of conscientiousness.

Integrity tests also correlate with counterproductive work behaviors in the 0.29 to 0.39 range.

A meta-analysis investigating integrity test validities found moderate validity for predicting job performance, with a mean corrected r of 0.41 (Ones et al., 1993).

28
Q

Biographical Information

A

The biographical inventory asks about experiences at school and work, and other areas of life.

Some questions ask about objective, verifiable facts; others ask about opinions or subjective experiences (the focus is on past reactions and experiences).

Biographical information assesses relevant prior experience, such as job skills, education, work history, and personal characteristics

Often called biodata

Based on the assumption that the best predictor of future behavior is past behavior

Less fakable (Usually verifiable)

Empirical nature of the tests (items are often empirically keyed against a criterion; see the sketch below)
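A minimal sketch (simulated data) of empirical keying, one common way the empirical nature of biodata shows up in scoring:

```python
# Empirical keying sketch: weight each biodata item by its item-criterion
# correlation in a development sample, then score new respondents.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_items = 500, 6
items = rng.integers(1, 6, size=(n_people, n_items)).astype(float)
performance = 0.6 * items[:, 0] - 0.3 * items[:, 3] + rng.normal(size=n_people)

# Item weights = item-criterion correlations from the development sample
weights = np.array([np.corrcoef(items[:, j], performance)[0, 1]
                    for j in range(n_items)])

def biodata_score(responses):
    """Weighted sum of a respondent's item answers."""
    return float(responses @ weights)

print(np.round(weights, 2))      # items 0 and 3 carry the largest absolute weights
print(biodata_score(items[0]))   # score for the first respondent
```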

29
Q

Biodata

A

Biodata measures are based on the measurement principle of behavioral consistency, that is, past behavior is the best predictor of future behavior.

Biodata items reflect external actions that were observable by others.

Objective in the sense that there is a factual basis for responding to each item.
-“How many books have you read in the last 6 months?”
-“How often have you put aside tasks to complete another, more difficult assignment?”

Biodata measures have been shown to be effective predictors of job success

Moderate degree of criterion-related validity in numerous settings and for a wide range of criterion types (e.g., overall performance, customer service, teamwork)

Biodata measures also appear to add validity (i.e., incremental validity) to selection systems employing traditional ability measures

30
Q

Interview

A

An interview is a face-to-face meeting between an interviewee and one or more interviewers who are collecting information or making hiring decisions.

The most common selection method

Managers love it because it allows them to use their “gut feelings”

Interviews can be used:
-in place of an application form or to supplement it.
-as a sample of interpersonal behavior.
-to test for specific knowledge.
-to assess interpersonal fit.

Interviewers may make ratings on job-relevant dimensions.

31
Q

Interview Advantages and Disadvantages

A

Advantages of interviews:
-They allow for more detailed answers than questionnaires.
-They reduce confusion from misunderstandings, because both parties can ask the other to clarify unclear questions.

Disadvantages of interviews:
-Interviewer can affect the answers given.
-Time consuming.
-Bias.

32
Q

Bias

A

When individuals conduct interviews they form an initial impression, and this impression biases all of the information that follows.
-Decision in the first four minutes of the interview
-These impressions are based on traits of the job candidate.

These impressions happen immediately and are effortless (Krull & Erickson, 1995).

Krull and Erickson state that we can attend to other information, but it is not easy to do so.

The authors state that it is particularly difficult when the interviewer is under cognitive load, which is likely.

33
Q

Types of Interviews

A

In an unstructured interview, the interviewer asks whatever comes to mind.
-Superficial or shoot-from-the-hip questions
-Negative evaluations from applicants
-Signal low preparation (poor applicant reactions)

In a structured interview, the interviewer asks a fixed set of questions of every interviewee; gets same information about each.

34
Q

Interview Studies

A

Hunter and Hunter (1984) performed a meta-analysis of various employment predictors and how well they predict job performance.

The relationship between the employment interview and job performance was r = 0.14.

However, there are some methodological concerns that may have led to a smaller coefficient.

Hunter and Hunter’s 0.14 coefficient was based on only 10 studies.

Meta-analyses with few studies have a stronger likelihood of being “swamped” by a few influential studies (McDaniel, 1989).

This small relationship led many researchers to believe that the employment interview had questionable utility.

The second methodological concern is that Hunter and Hunter did not attempt to look at potential moderators.

Moderators that could have possibly been included in the meta analysis are content of the interview and degree of structure.

Further examinations of the employment interview reveal a much stronger relationship to job performance (McDaniel et al., 1991; McDaniel et al., 1994).

35
Q

Interview Structure

A

McDaniel et al. (1994) examined the degree of structure in the employment interview.

The authors found a validity of 0.44 for structured interviews.

A structured interview is standardized and it is usually based, in part, on a job analysis (Cascio, 1991).

Assures that the interview is content valid (the interview is a test and should be held to the same standards)

Earlier research concerning the employment interview had painted a bleak picture regarding its ability to predict job performance.

This was especially troubling since the employment interview is the most widely used selection technique (Cascio, 1991)

36
Q

Situational interview

A

“What if” scenarios

The purpose of the interview is to discover the applicant’s intentions with regard to the various situations that will be encountered in the organization.

Questions asked in a situational interview are based on a job analysis conducted within the organization.

A strong relationship to job performance was uncovered when the organization used situational interviews.

Situational interviews are conducted in such a way that the job candidate is given a scenario that he or she is likely to encounter on the job and is asked how they would respond to the situation.

37
Q

Assessment Centers generally

A

An assessment center involves a series of exercises that measure how well a person can perform a sample of tasks from a job.

Generally used for managerial and other white-collar jobs. High level of realism (fidelity).

Exercises can take days and are usually given to several individuals at a time. Candidates are rated on job-relevant dimensions by a panel of trained assessors.

Dimensions used in rating might include decision making, delegation, leadership, organizing and planning

Can use many different activities and exercises, including an interview or a battery of psychological tests.

38
Q

Assessment Center activities

A

Exercises often include an in-basket
-An in-basket exercise requires the tester to deal appropriately with the items in a simulated in-basket—memos, letters, etc.

A leaderless group exercise gives several testers a problem to solve together; it may be competitive (division of resources) or cooperative (a marketing decision).
-In addition, the person being tested may also be asked to role play a particular management position.
-Ex. Army leadership exercises at OCS

In the problem-solving simulation, the tester is given a problem and must come up with a solution.

High degree of content validity

39
Q

Assessment Center process

A

Assessors rate each tester on each dimension by observing behavior and reviewing materials produced during each exercise.

Dimension scores can be used to give a person feedback about his/her strengths and weaknesses.

The overall score is useful when assessment center performance is used to make a hiring or promotion decision.

Scores in the assessment center are correlated with job performance.

Costly ($)

Lots of problems with halo error

Reliability/construct validity: although the overall scores in an assessment center have been shown to be valid and to predict job performance, individual dimension scores seem to lack construct validity. Dimension scores are too highly correlated within an exercise, and scores on the same dimensions across different exercises aren’t correlated enough. Assessors are overloaded with information. Thus, feedback on individual dimensions probably shouldn’t be given.

Research on improving the validity of dimension ratings shows that using checklists, offering frame-of-reference training, selecting assessors more carefully, and conducting the centers more carefully all improve assessment center results.

40
Q

Use of Computers in Assessment

A

Computers are increasingly used in assessment for administering and scoring tests. Paper-and-pencil tests are easily adapted, and performance tests like typing tests are also adaptable.

-Advantage: automatic scoring; people may complete personality tests more honestly.

-Disadvantage: only one test taker at a time if the test is on screen; more are possible if printed tests and computer-scanned answer sheets are used.

41
Q

Computers

A

Computers allow easier tailored testing, in which specific items given to a test taker vary depending on the answers already given.

CAT (computerized adaptive testing)

Allows adapting the test to the individual’s ability (miss an item and you get an easier one; answer correctly and you get a harder one).

Requires fewer items to reliably measure whatever is being measured.
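A minimal sketch of the adaptive idea; operational CAT systems select items and estimate ability with item response theory rather than this toy step rule:

```python
# Toy computerized adaptive testing (CAT) loop: step difficulty up after a
# correct answer and down after an incorrect one.
import math
import random

def run_cat(true_ability, n_items=10, step=0.5):
    difficulty = 0.0                       # start at medium difficulty
    for _ in range(n_items):
        # Chance of a correct response drops as difficulty exceeds ability
        p_correct = 1.0 / (1.0 + math.exp(difficulty - true_ability))
        correct = random.random() < p_correct
        difficulty += step if correct else -step
    return difficulty                      # rough ability estimate

print(f"estimated level: {run_cat(true_ability=1.0):.1f}")
```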

42
Q

AI and resume screening

A

Machine learning can be used to identify key words, phrases, etc. in resumes

Also used to screen candidates on social media sites

Vendors claim these tools reduce unconscious bias…

Machine learning approximates human ratings (a keyword-scoring sketch follows)
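A minimal sketch (hypothetical keyword list and resumes) of the keyword-matching idea; production screeners train models on past ratings rather than using a fixed list:

```python
# Toy keyword screening: rank resumes by how many job-analysis keywords appear.
KEYWORDS = {"python", "sql", "forecasting", "stakeholder", "budget"}

def score_resume(text: str) -> int:
    """Count distinct keywords present in the resume text."""
    words = {w.strip(".,;:").lower() for w in text.split()}
    return len(KEYWORDS & words)

resumes = {
    "candidate_a": "Built SQL dashboards and Python forecasting models.",
    "candidate_b": "Managed the budget and led stakeholder communication.",
}
ranked = sorted(resumes, key=lambda c: score_resume(resumes[c]), reverse=True)
print([(c, score_resume(resumes[c])) for c in ranked])
# [('candidate_a', 3), ('candidate_b', 2)]
```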

43
Q

Physical Ability Testing

A

For physically demanding jobs
-Police
-Fire fighters

Identify the critical job tasks and the abilities necessary to perform them

Specify performance tests

Specify Criteria

44
Q

Content-valid physical tests

A

Content-valid physical tests sample content of the job.

The key for validity is the judgment that the content of the test is a high fidelity representation of the critical job tasks

These test types (e.g., subduing and rescuing victims, dragging hoses, loading boxes) are used widely by police and fire departments, although the tests themselves differ across organizations.

45
Q

Construct-valid physical tests

A

All physical ability tests are, to some degree, measures of constructs.

For example, tests of lifting, pushing, pulling, and carrying are measures of upper body strength

Determine which constructs are necessary and then develop tests of those constructs

The underlying construct provides the theoretical link between the predictor and the criterion measure.

In our judgment, all physical ability test validation research is a form of construct validity.

46
Q

Applicants

A

Applicants are not passive receptors of selection procedures

Applicants can react very strongly to what they are asked to say or do in order to get a job.

Applicants favor selection procedures with a strong relationship to the job content (face validity).

Important consideration in terms of potential legal action by the applicant and the applicant’s subsequent consumer behavior