Chapter 6 - Selection (14 MC, 2 SA) Flashcards

1
Q

selection and selection ratio

A

Selection
The process of choosing from among the individuals who have relevant qualifications in the applicant pool to fill existing or projected job openings.
Success of selection decisions is dependent on successful recruitment efforts.

Selection ratio = # of positions / # of qualified applicants
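For example (hypothetical numbers): 2 openings and 40 qualified applicants give a selection ratio of 2/40 = .05, meaning the employer can be highly selective.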

2
Q

reliability vs. validity

A

**Reliability:**
The degree to which interviews, tests, and other selection procedures yield consistent data

Reliability takes into account the effect of error in measurement
We always expect some degree of error when it comes to measuring psychological characteristics
e.g., intelligence v. height
Reliability – consistency of measurement
1. Can a test be reliable and not valid?
2. Can a test be valid but not reliable?

**Validity:**
Degree to which the inferences we make from a test score are appropriate

e.g., an applicant gives you a wimpy handshake
Should you infer that the person is timid and shy?
Is this an appropriate inference? NO
Validity depends on the reliability of the test (rxx)
Maximum possible validity of a test = square root of its reliability estimate
So if the reliability (rxx) of a test is .81…
…what is the maximum validity (rxy) you could obtain from the scores on that test?
Max rxy = √.81 = .90

3
Q

sources of error in measurements

A

Environmental factors
Examples: Room temperature, lighting, noise

Personal factors
Examples: Anxiety, mood, fatigue, illness

Interviewer/recruiter behavior
Example: smiling during an interview with one applicant but not with another

Test item difficulty level
What difficulty level is most reliable?

Item response format
MC v. T/F – which is more reliable?

Length of the test
Longer or shorter test – which is more reliable?

4
Q

interpreting reliability coefficients

A

Reliability coefficient (rxx)
x = selection test or assessment
y = the thing we are trying to predict (usually job performance)
r = correlation

So, rxx = the correlation of the test “X” with itself (the test itself)
rxy = correlating the test with something else
Y can be job performance etc.

Shows the % of score variance that is thought to be due to true differences on the attribute being measured
rxx = 0 – no reliability (100% measurement error)
rxx = 1.0 – perfect reliability (0% measurement error)
rxx = .90 – 90% of the variance in individuals’ scores is due to true differences on the attribute being measured (10% due to error)

How high is high enough? Depends…
rxx = .80 or higher is one rule of thumb
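For instance (hypothetical value): rxx = .75 would be read as 75% true-score variance and 25% measurement error, just below the .80 rule of thumb.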

5
Q

types of reliability estimates: TEST-RETEST

A

Test-retest = How consistent are scores on a test over time?
Give the same people the same test at two different times, then correlate the two sets of scores

Test-retest reliability – estimates the degree to which individuals’ test scores stay consistent (rather than vary) over time on the test
Time 1 (Test X) ———— Time 2 (Test X)

Two person-related factors that affect test-retest:
1) Memory – when the test taker simply recalls how they responded to a question at Time 1 and answers the same way at Time 2
Will memory inflate or deflate reliability of a test?
2) Learning – Learning means that the test taker has changed (e.g., learned new information) between Times 1 and 2 and therefore answers the questions differently
Will learning inflate or deflate the reliability of a test?

IMAGE IN NOTES
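A minimal sketch of the calculation, assuming hypothetical scores for the same five people on Test X at both administrations (all numbers invented for illustration):

```python
import numpy as np

# Hypothetical Test X scores for the SAME five people at Time 1 and Time 2
time1 = np.array([22, 31, 27, 40, 35])
time2 = np.array([24, 30, 29, 41, 33])

# Test-retest reliability estimate = correlation between the two administrations
r_xx = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest rxx = {r_xx:.2f}")
```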

6
Q

types of reliability estimates: PARALLEL FORMS

A

Parallel Forms = How consistent are scores across different forms of a test?
Form A and Form B contain different questions but should measure the same thing

Parallel forms - Examines the consistency of measurement of an attribute across forms
Similar to test-retest but controls the effect of memory by using 2 different versions of a test and seeing how well the scores correlate:
Time 1 (Form A) ———— Time 2 (Form B)

Can administer both forms at same time (Time 1 – Form A and B)
Example tests with multiple forms? SAT, ACT, GMAT, etc…

IMAGE IN NOTES

7
Q

types of reliability estimates: INTERNAL CONSISTENCY

A

Internal Consistency = How consistent are the items on a test in measuring a construct?
Administer one test at one time, then run analyses on how each person responded to each item

Internal consistency – measures whether all items act in a consistent manner (i.e., do item responses hang together as you would predict?)
Most commonly used reliability estimate in HR research

Two types of internal consistency estimates (IMAGE IN NOTES):
1. Split half reliability estimate
2. Cronbach’s alpha

**1. Split half reliability estimate**
Administer one test, one time, then divide the items into 2 halves (e.g., odd vs. even items) and correlate the two halves’ scores

**2. Cronbach’s alpha**
Administer one test, one time, then divide the items into every possible split half and calculate the average reliability across all ways to split the items in half (computer programs do this calculation for you)
Multiple split halves give a better estimate than calculating reliability off of one split half
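A hedged sketch of both estimates, using invented item responses; note that stepping the half-test correlation up to full length (Spearman-Brown) is standard practice but goes beyond what the card states:

```python
import numpy as np

# Hypothetical item responses: rows = test takers, columns = items (one test, one time)
items = np.array([
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 4, 5, 5, 4, 5],
    [1, 2, 1, 2, 2, 1],
])

# 1. Split half: correlate odd-item totals with even-item totals, then step the
#    half-test correlation up to full test length (Spearman-Brown; an assumption here)
odd = items[:, 0::2].sum(axis=1)
even = items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
split_half_rxx = 2 * r_half / (1 + r_half)

# 2. Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"split-half rxx = {split_half_rxx:.2f}, Cronbach's alpha = {alpha:.2f}")
```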

8
Q

types of reliability estimates: INTER-RATER

A

Inter-rater = How consistent are scores across raters?
Multiple raters rate the same people; we then ask how consistently the raters evaluate each person

Inter-rater - Measures the degree to which multiple raters give similar ratings to applicants
Are some raters biased? Too Lenient? Too strict?
Need multiple raters so we can know how reliable any one rater is
Inter-rater reliability estimate assesses the degree of objectivity in ratings

IMAGE IN NOTES

9
Q

3 ways to validate a test (validation approaches)

A
  1. Criterion-related validity – do test scores predict future job performance?
  2. Content validity – does the test adequately measure KSAs needed on the job?
  3. Construct validity – does the test adequately measure a particular theoretical construct?
10
Q

4 ways to see if a test is RELIABLE

A

Test-retest = How consistent are scores on a test over time?
Parallel Forms = How consistent are scores across different forms of a test?
Internal Consistency = How consistent are the items on a test relative to one another in measuring the construct of interest?
Inter-rater = How consistent are scores across raters?

11
Q

Criterion-related Validity

A

The extent to which a selection test (x) predicts, or significantly correlates with, important elements of work behavior (y).

directionality of relationship
If a test has criterion-related validity, a high test score indicates high job performance potential and a low test score predicts low job performance, OR
a high test score could indicate low job performance and a low test score could indicate high job performance
What we are looking for is a relationship b/t the two

Two options for determining the criterion-related validity of a selection test:
1. **Concurrent validation** - Use current employees as sample
2. **Predictive validation** - Use applicant pool as sample

Directionality (or sign) of the correlation (judge strength by the size of the number; it doesn’t matter whether it’s positive or negative)

IMAGE IN NOTES

12
Q

Criterion-related validity - concurrent validation

A

Concurrent validation: examining the extent to which test scores correlate with job performance data obtained from current employees

Steps:
1. Have employees take the test
2. Collect most recent job performance ratings on these employees
3. Correlate the two measures to obtain validity estimate

IMAGE IN NOTES
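A minimal sketch of the three steps, using invented employee IDs, test scores, and performance ratings:

```python
import numpy as np

# Step 1: employees take the test (hypothetical scores keyed by employee ID)
test_scores = {"e01": 38, "e02": 22, "e03": 30, "e04": 45, "e05": 27}
# Step 2: most recent job performance ratings for the same employees
perf_ratings = {"e01": 4.2, "e02": 2.8, "e03": 3.5, "e04": 4.8, "e05": 3.0}

# Step 3: correlate the two measures to obtain the validity estimate
ids = sorted(test_scores.keys() & perf_ratings.keys())
x = np.array([test_scores[i] for i in ids], dtype=float)
y = np.array([perf_ratings[i] for i in ids], dtype=float)
r_xy = np.corrcoef(x, y)[0, 1]
print(f"concurrent validity estimate rxy = {r_xy:.2f}")
```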

Advantage
Can be done quickly because all the pieces of data can be collected simultaneously (e.g., by sending out a link)

Disadvantages
Motivation to take the test
Job tenure
Sample may not generalize demographically to our applicant pool – do our employees look like our applicants? They might not (US v. Georgia Power)

13
Q

Criterion-related validity - predictive validation

A

Predictive validation: examination of the extent to which applicants’ test scores match criterion data obtained from those applicants/employees after they have been on the job for some indefinite period (the x variable comes from applicants taking the test)

Steps:
1. Administer the test to your applicants (as part of the selection process but don’t use the scores for hiring purposes)
2. File the test scores away
3. Collect job performance measures on the applicants you ended up hiring (6 months to 1 yr later)
4. Correlate the test scores with the job performance measures

Advantages
High motivation for applicants to try hard, since they believe the test affects their chances of being hired
No sample generalizability problem (applicant pool is the sample)
Equal job tenure

Disadvantages
Time interval required between test and job performance data collection
Amount of time needed to get a large enough sample to be statistically meaningful (N = 300)

14
Q

validity coefficient

A

rxy can range from -1.0 to +1.0
rxy = 0 = no validity
rxy = 1.0 = perfectly positively correlated
rxy = -1.0 = perfectly negatively correlated

How high is high enough? Depends on the type of test you are looking at…
Unlike reliability coefficients, there is no minimum accepted rule of thumb for validity coefficients

IMAGE IN NOTES

15
Q

reliability coefficient vs validity coefficient

A
  • reliability coefficent (rxx)
    x = selection test or assessment
    y = the thing we are trying to predict (usually job performance)
    r = correlation
    So, rxx = the correlation of the test “X” with itself (the test itself)
    rxy = correlating the test with something else
    Y can be job performance etc.
    Shows the % of score variance that is thought to be due to true differences on the attribute being measured
    rxx = 0 – no reliability (100% measurement error)
    rxx = 1.0 – perfect reliability (0% measurement error)
    rxx = .90 – 90% of the variance in individuals’ scores is due to true differences on the attribute being measured (10% due to error)
    How high is high enough? Depends…
    rxx = .80 or higher is one rule of thumb
  • validity coefficient (rxy)
    rxy can range from -1.0 to +1.0
    rxy = 0 = no validity
    rxy = 1.0 = perfectly positively correlated
    rxy = -1.0 = perfectly negatively correlated

How high is high enough? Depends on the type of test you are looking at…
Unlike reliability coefficients, there is no minimum accepted rule of thumb for validity coefficients

16
Q

selection test validity correlation magnitudes

A

Low validity
rxy = .00 to +/-.15

Moderate validity
rxy = +/-.16 to +/-.30

High validity
rxy = +/-.31 and higher

17
Q

selection test validity correlation directionality

A

rxy can range from -1.0 to +1.0

rxy = 0 = no validity
rxy = 1.0 = perfectly positively correlated
rxy = -1.0 = perfectly negatively correlated

How high is high enough? Depends on the type of test you are looking at…
Unlike reliability coefficients, there is no minimum accepted rule of thumb for validity coefficients

18
Q

interpreting a validity coefficient (rxy) INCLUDING P VALUE

A

Validity Coefficient
Three main components:
1. size of the correlation (magnitude)
2. sign of the correlation (directionality)
3. p-value (statistical significance)

Generally we want the probability of this value occurring by random chance alone to be less than 5% (p < .05). If the study is well constructed (e.g., a large sample), the likelihood that our result is due to chance is under 5%, which is why statistical significance matters

rxy = .35 (p < .05) …if we conducted this study 100 times on different groups of people (of the same size) from the same population, in 95 of the studies we would find a similar relationship

19
Q

statistical significance - ON EXAM

A

Statistical significance = generalizability of our experiment to other samples; driven primarily by sample size
i.e., whether the result would hold up in other samples in the real world

  • STATISTICALLY significant if the p-value is equal to or less than .05 (5%)
20
Q

practical significance - ON EXAM

A

Practical significance = size of the correlation—is correlation large enough to be useful? Looks at the magnitude of correlation.

  • PRACTICALLY significant if VALIDITY is equal to or larger than .31
    –> High validity
    –> rxy = +/-.31 and higher
21
Q

Are the following statistically and/or practically significant? (ON EXAM) - LOOK AT NOTES

rxy = .35 (p = .04)
rxy = .05 (p = .03)
rxy = .32 (p = .08)
rxy = .13 (p = .09)

A

rxy = .35 (p = .04)
statistically significant (less than 0.05)
practically significant (equal/larger than 0.31)

rxy = .05 (p = .03)
Statistically significant (less than 0.05)
Not practically significant (less than 0.31)
(careful: the correlation here is .05, not .5)

rxy = .32 (p = .08)
Not statistically significant (larger than 0.05)
Practically significant (larger than 0.31)

rxy = .13 (p = .09)
Not statistically significant (larger than 0.05)
Not practically significant (less than 0.31)
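A small sketch that applies the two cut-offs from the cards (p ≤ .05 for statistical significance, |rxy| ≥ .31 for practical significance) to these four examples:

```python
# Thresholds taken from the cards above
def interpret(r_xy: float, p: float) -> str:
    stat = "statistically significant" if p <= 0.05 else "NOT statistically significant"
    prac = "practically significant" if abs(r_xy) >= 0.31 else "NOT practically significant"
    return f"rxy = {r_xy:.2f} (p = {p:.2f}): {stat}, {prac}"

for r_xy, p in [(0.35, 0.04), (0.05, 0.03), (0.32, 0.08), (0.13, 0.09)]:
    print(interpret(r_xy, p))
```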

22
Q

types of validity: content validity

A

The extent to which a selection instrument, such as a test, adequately samples the knowledge and skills needed to perform a particular job.

Focus is on description rather than prediction
This assessment is performed on tests measuring more visible KSAs using SME judgment, rather than the statistics (criterion-related validity) used for psychological traits we can’t see
Example: typing tests, driver’s license examinations

23
Q

types of validity: construct validity

A

The extent to which a selection tool measures a theoretical construct or trait.

Most difficult validation method – have to prove that your test really measures the concept it says it measures
EX: How do you know your intelligence test measures intelligence?
Look at how your test compares to other similar and different tests: honesty tests, intelligence tests, personality tests

Dr. Dean’s intelligence test construct validity experiment
Have a large number of individuals take a number of similar and dissimilar assessments:
- Dr. Dean’s intelligence test (my new test)
- Established IQ test (measuring similar construct)
- Established other intelligence test (similar construct)
- Established integrity test (dissimilar construct)
- Established personality test (dissimilar construct)
Look at the correlations among the test scores—how would I be able to show that my test is in fact measuring intelligence?
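A hedged simulation of the logic with invented data; “convergent” and “discriminant” are standard construct-validity terms not named on the card. If the new test correlates highly with the similar tests and weakly with the dissimilar ones, that pattern supports construct validity:

```python
import numpy as np

tests = ["Established IQ", "Other intelligence", "Integrity", "Personality"]

# Simulate 300 people: the three intelligence measures share a common factor g
rng = np.random.default_rng(0)
g = rng.normal(size=300)
scores = np.column_stack([
    g + rng.normal(scale=0.5, size=300),  # Dr. Dean's intelligence test
    g + rng.normal(scale=0.5, size=300),  # established IQ test (similar construct)
    g + rng.normal(scale=0.5, size=300),  # other intelligence test (similar)
    rng.normal(size=300),                 # integrity test (dissimilar construct)
    rng.normal(size=300),                 # personality test (dissimilar construct)
])

# Correlate every test with every other; look at Dr. Dean's test (column 0):
# high r with similar tests = convergent evidence, low r with dissimilar = discriminant
r = np.corrcoef(scores, rowvar=False)
for name, value in zip(tests, r[0, 1:]):
    print(f"Dr. Dean's test vs {name}: r = {value:.2f}")
```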

24
Q

measurement recap!!!

A

Legally and practically ALL selection devices need to be reliable and valid

We need to be able to show that:

  1. The tests we use yield consistent data either…
    Across test administrations (test-retest reliability),
    Across test forms (parallel forms reliability),
    Across items within a test (internal consistency reliability), or
    Across raters/interviewers (inter-rater reliability)
    Interpret rxx = .60
    Is this an acceptable level of reliability?
  2. The inferences we make from our tests are valid
    Test scores are correlated with job performance (criterion-related validity),
    The test looks like it captures important KSAs needed on the job (content validity), or
    The test measures the construct we think it is measuring (construct validity)
    Interpret rxy = .60 (p < .05)…
    Is this an acceptable level of validity?
25
Q

application forms and form issues

A

Application forms – purposes served?
- Quickly determine if candidate has minimum qualifications
- Standardized format (unlike resumes)
- Source of references
- Requires signature
- Employment-at-will (EAW) statement

Application Form Issues
- Application forms are considered a “test” under Title VII (must be validated, etc.)
- EEOC advises employers not to ask non-job related questions
- Some states strictly prohibit some questions on application forms
- Federal law doesn’t prohibit any questions
- Should be tailored to fit specific jobs
- Job analysis should drive the questions asked
- Should be easy to fill out
- Have a statement that tells the applicant not to put information on the application form that is not requested

26
Q

application forms - inappropriate questions

A

Potentially inappropriate and/or illegal questions:

Age? Race? Religion?
Date graduated from high school?
Maiden name?
Disabilities?
Are you a U.S. citizen?
What language do you most commonly use?
Title (Mr, Mrs, Miss)?
Credit history – only if job related (see later slide on 2012 CA law)
Arrests/convictions? – depends on state law (illegal in CA as of 2018, see next slide)
Previous salary history? (illegal in CA as of 2018, see next slide)

27
Q

Recent CA Employment Laws Related to Application Forms

A

CA Assembly Bill 1008 – “Ban the Box” Bill
ERs may not…
Include any questions seeking conviction history on initial application
Consider conviction history until after conditional offer of employment is made

CA Assembly Bill 168 – Applicant salary history
ERs may not ask for or use previous salary history to determine whether to offer a job or to determine offer amount
If applicant voluntarily discloses without prompting, ER may use the information to determine offer amount but not to determine whether or not to make an offer

28
Q

background checks/investigations

A

Critical that employers perform background investigations on their applicants (concern over negligent hiring lawsuits)

Types of information sought in background investigations:
- educational credentials
- verification of former employment
- criminal record
- driving records
- credit history – CA Assembly Bill 22 (2012)
Prohibits ERs and prospective ERs from obtaining and using credit info on applicants or EEs unless job-related (OK for MGT, DOJ, law enforcement officers, jobs with access to info needed for credit card processing, proprietary info, or access to cash)

29
Q

reference checks

A

Checking References – low validity – Why?
Most personal references are going to say nice things about you – no differentiation
Former employers aren’t likely to give any information at all
Is it important to check references?
Are references useful for prediction?

Checking references is seen more as a “negative selection” device – weed out problem individuals

QUALIFIED PRIVILEGE ON ANOTHER CARD

30
Q

letters of recommendation

A

Generally low validity because you won’t ask someone to write a letter who would say negative things about you!

Interesting studies:
Study #1: Content analyzed adjectives used in 625 letters
Two categories of letters:
1. Adjectives focused on candidate’s intelligence
2. Adjectives focused on candidate’s personality
Which category do you think was most predictive of job performance?

Study #2
Performed a word count of each letter of recommendation
Which do you think were more predictive of who ended up performing well on the job – longer or shorter recommendation letters?

Disadvantages of letters:
1. Communication ability of letter writer
2. Don’t get same info across candidates

31
Q

reference checks and qualified privilege

A

Qualified Privilege - Some states have legislation that protects former employers against lawsuits and encourages them to share information without fear of retribution by the former employee (defamation suits)

qualified privilege:
Conditions that must be met for an employer to invoke qualified privilege:
1. Info was given in good faith, without malicious intent
2. Info can be substantiated/proven (opinions and false statements are not covered)
3. Info given was limited to the inquiry (can’t volunteer info that wasn’t directly asked for)
4. Info was communicated to the proper parties

32
Q

integrity tests

A

Passage of the Employee Polygraph Protection Act led to the development of paper-and-pencil integrity tests
What employers are most likely to use integrity tests?
Cost of employee (EE) theft = $40-50 billion/year
These are weed out devices

Two types of integrity tests:
1. Clear purpose (overt) integrity tests
Direct questions on attitudes toward theft and deviant behavior
attitudes/previous behavior → future deviant behavior

2. Personality-based (veiled) integrity tests
General personality questions
Indirect questions on theft
personality traits → future deviant behavior

Integrity Tests
Which type would be more palatable to applicants?
Which type of test would applicants be more likely to try to “fake good?”
More social stigma is attached to “failing” an overt test v. general personality test

33
Q

cognitive ability tests

A

Research suggests cognitive ability is one of the best predictors of job performance across a wide range of jobs

Wonderlic – developed in 1938 – test used by Duke Power
12 minute, 50 item test
Ascending difficulty level
Scored based on # correct out of 50
Parallel forms reliability (6 forms) rxx = .94

High criterion-related validity (rxy = .45-.50)
Low cost (online or paper exam with MC/fill in the blank responses)
High adverse impact potential (relative to other selection devices)

34
Q

personality tests (early and later)

A

Tests designed to measure dispositional traits

  1. Early personality tests:
    Very low validity (rxy = .10 - .15)
    Why? Early tests were developed to diagnose psychological disorders! e.g., schizophrenia
    Minnesota Multiphasic Personality Inventory (MMPI)
    Example item:
    “I often feel as if one of my limbs will fall off.”
  2. Later personality tests:
    Higher validity (rxy = .30)
    Tests are now designed to predict job performance
    Example - “Big Five” personality factors:
    Extroversion
    Agreeableness
    Conscientiousness
    Emotional stability
    Openness to experience
35
Q

personality tests: big five personality test/factors

A

ACEEO

Agreeableness
Trust—I believe people are usually honest with me.
Teams, customer service

Conscientiousness – rxy = .31
Attention to detail—I like to complete every detail of tasks according to the work plans.
Good across all jobs

Extroversion
Adaptability — For me, change is exciting.
Sales, management jobs

Emotional Stability
Self-confidence — I am confident about my skills and abilities.
Public safety, all jobs

Openness to Experience
Independence — I tend to work on projects alone, even if others volunteer to help me.
Jobs requiring innovative thinking and creativity; expatriate assignments

36
Q

situational interview

A

Situational Interview
- An interview in which an applicant is given a hypothetical incident and asked how he or she would respond to it.
- How would you lead a group if given the opportunity?
- How would you go about disciplining an employee?
Which would you guess would have higher validity (situational or behavior descriptive interviews)? Why?

Sample:
QUESTION:
It is the night before your scheduled vacation. You are all packed and ready to go. Just before you get into bed, you receive a phone call from the plant. A problem has arisen that only you can handle. You are asked to come in to take care of things. What would you do in this situation?

RECORD ANSWER:
_____________________________________________________________

SCORING GUIDE:
Very Good: “I would go in to work and make certain that everything is O.K. Then I would go on vacation.”
Good: “There are no problems that only I can handle. I would make certain that someone qualified was there to handle things.”
Fair: “I would try to find someone else to deal with the problem.”
Poor: “I would go on vacation.”

37
Q

behavioral description interview (bdi)

A

Behavioral Description Interview (BDI)
- An interview in which an applicant is asked questions about what he or she actually did in a given situation.
- Tell me about a time when you had to lead a group…
- Tell me about a time when you had to discipline an employee…

38
Q

structured vs. unstructured interview

A

Unstructured Interview
No predetermined set of questions is developed (the interviewer “wings it,” so to speak)

Structured Interview
An interview in which a set of standardized questions having an established set of answers is used
Much higher validity than typical unstructured interview

**Two types of structured interviews:**
Behavioral descriptive interview
Situational interview