Exam 2 Flashcards

1
Q

Selection as a Process

  1. Job Analysis
  2. Recruit Applicants to the Job
  3. Assessment
  4. Make a decision
A

Selection as a process

2
Q

the measurement of mental processes

A

psychometrics

3
Q

Test Properties

  1. Error: We always have error in measurement
  2. Reliability: The consistency of measurement. Can we reliably measure a given predictor or criterion?
  3. Validity: Are we accurately measuring what we want to measure? How accurate are the inferences we are making?
A

Test Properties

4
Q

the consistency, stability, or equivalence of a measure

A

reliability

5
Q

4 major ways that we measure reliability:

A

Test-Retest Reliability
Equivalent (Parallel)-Forms Reliability
Internal Consistency Reliability
Inter-Rater Reliability

6
Q

Measuring reliability by giving participants a test, administering the same test again at a later date or time, and correlating the two sets of scores (helpful in establishing personality and intelligence measures)

A

test-retest reliability
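As a minimal sketch, the test-retest computation is just a Pearson correlation between the two administrations; the scores below are hypothetical illustration data:

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for the same five people at time 1 and time 2.
time1 = [82, 75, 90, 68, 88]
time2 = [80, 78, 92, 70, 85]

# A high correlation indicates a stable (reliable) measure over time.
coefficient_of_stability = pearson_r(time1, time2)
```

With these made-up scores the correlation comes out above .9, which would suggest good stability.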

7
Q

how much variance or error do we see in the measurement of a construct over time

A

coefficient of stability

8
Q

a way to test reliability: two tests with the same mean and standard deviation, but with separate items, are used to measure the same construct. The scores on the two tests are then correlated to obtain a coefficient of equivalence; also called parallel- or alternate-forms reliability (difficult to construct, so not often used)

A

equivalent forms reliability

9
Q

a measure of reliability; the homogeneity of the items composing a test; assessed via corrected item-total correlations and split-half reliability

A

internal consistency reliability
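A minimal sketch of one internal-consistency estimate, split-half reliability with the Spearman-Brown correction (the item responses are hypothetical):

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def split_half_reliability(item_scores):
    """item_scores: one row per respondent, one column per test item.
    Split items into odd/even halves, correlate the half-scores, then
    step the correlation up with the Spearman-Brown formula."""
    odd = [sum(row[0::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)  # Spearman-Brown correction

# Hypothetical 4-item test taken by five people (1-5 agreement scale).
responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 1],
]
rel = split_half_reliability(responses)
```

High homogeneity among items (as in these made-up responses) yields a reliability near 1.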

10
Q

the extent to which two raters agree on their assessments of a construct (also called conspect reliability); the correlation between the ratings provided by each rater
Examples would be agreement on job analysis judgments or agreement on performance evaluations

A

inter-rater reliability

11
Q

the accuracy and appropriateness of drawing inferences from test scores

A

validity

12
Q

validity based on the judgement of Subject Matter Experts (SMEs)

A

content validity

13
Q

the degree to which a test forecasts or is statistically related to a criterion; validity coefficients are the correlation between a predictor and a criterion; two major types are concurrent and predictive

A

criterion-related validity

14
Q

in the “unitary” view, this is the true form of validity; the degree to which the test is an accurate and faithful measure of the construct it purports to measure (convergent and divergent validity)

A

construct validity

15
Q

definition of a good employee

A

criterion

16
Q

Steps in conducting a validation study

A
  1. Conduct a job analysis
  2. Specify criteria
  3. Choose predictors
  4. Validate the predictors
  5. Cross-validate
17
Q

what one knows

A

knowledge

18
Q

what one is able to do

A

skill

19
Q

predicts whether a person will engage in dishonest behaviors

A

integrity tests

20
Q

problems with personality/integrity tests

A
  1. faking: intentionally misrepresenting oneself in personality inventories
  2. job relevance: are these dimensions job relevant?
21
Q

simulation of actual job tasks; good predictors of future job performance

A

work samples

22
Q

simulation of management and other subjective jobs

A

assessment centers

23
Q

tasks of assessment centers

A
  1. In-basket exercise: come in to work and this is what you find
  2. Leaderless group exercise: leaders often emerge
  3. Problem-solving simulation: write up a report on the solution or give a presentation
  4. Role-play exercise: act out firing me
24
Q

cognitive ability/general mental ability; the ability to learn and acquire information; measured by (g); measures aptitude and achievement

A

intelligence

25
Q

_____ ability measures are amongst the highest predictors of performance across a wide variety of jobs

A

cognitive

26
Q

Meta-analyses suggest an average coefficient of r = ___ between cognitive ability and performance

A

.51

27
Q

the result of using a selection method that has a negative effect on a legally protected minority group compared to a majority group

A

adverse impact

28
Q

Cognitive ability is a better predictor of performance in Caucasians than in __________ or ____________

A

Hispanics, African Americans

29
Q

the trend that there is an increase in mean intelligence scores over time

A

Flynn Effect

30
Q

Is adverse impact legal?

A

Yes, so long as the validation study demonstrates a direct connection to performance on the job

31
Q

Typical meta-analytic validity of interviews estimates range from r=___ to r=___

A

.25 to .30

32
Q

Why don’t interviews work?

A
  • high variability in judgment (disagreement between raters, poor inter-rater reliability)
  • lack of established criteria
  • poor interviewing skills
  • high influence of rater’s personal preference (attractiveness, ethnicity, weight, and like-me bias)
33
Q

Why do we cling to interviews?

A

control, we overestimate our own reliability and accuracy of judgment, and we have a self-serving bias in memory

34
Q

describe a problem to the test taker and require the test taker to rate various possible solutions in terms of their feasibility or applicability; measures “practical intelligence”

A

Situational Judgment Tests

35
Q

projective interpretation of handwriting to determine personality features

A

graphology

36
Q

the five “protected” groups

A

race, sex, religion, color, national origin

37
Q

What did Griggs v. Duke Power establish?

A

before we can use our assessments, we need to validate them; we need to statistically demonstrate that our assessment is predictive of performance on the job

38
Q

current employees take the selection measures; their scores are correlated with their performance evaluations

A

concurrent

39
Q

all applicants are given the selection measure; months later, performance evaluations are conducted, and the two sets of scores are then correlated from records

A

predictive

40
Q

validating our findings by collecting two samples: split our sample in two, run a concurrent study first and then a predictive study later, or use validity generalization (same job, same KSAOs, should have the same relationships between predictors and criterion)

A

cross validation

41
Q

the minimum acceptable performance on the criterion measure

A

criterion cutoff

42
Q

the minimum acceptable score on the predictor assessments

A

predictor cutoff

43
Q

the number of openings divided by the number of applicants

A

selection ratio

44
Q

the percentage of employees who are currently performing at an acceptable level

A

base rate

45
Q

must pass each cutoff to be considered; failure at any stage prevents advancement (“bottom-up elimination”)

A

multiple hurdles approach
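A minimal sketch of the multiple-hurdles screen; the applicants, measures, and cutoffs below are all made up for illustration:

```python
# Hypothetical applicants with scores on three selection measures.
applicants = {
    "Avery": {"cognitive": 78, "work_sample": 85, "interview": 70},
    "Blake": {"cognitive": 55, "work_sample": 90, "interview": 88},
    "Casey": {"cognitive": 80, "work_sample": 62, "interview": 91},
}
# Minimum acceptable score on each measure (predictor cutoffs).
cutoffs = {"cognitive": 60, "work_sample": 70, "interview": 65}

def passes_all_hurdles(scores, cutoffs):
    """Failure at any stage eliminates the applicant (bottom-up elimination)."""
    return all(scores[measure] >= cut for measure, cut in cutoffs.items())

survivors = [name for name, s in applicants.items()
             if passes_all_hurdles(s, cutoffs)]
```

Note that strong scores elsewhere do not rescue an applicant who misses any single cutoff, which is exactly the non-compensatory property of this approach.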

46
Q

advantages of multiple measures

A

can save the most expensive or time-consuming measure for only a few applicants
best for jobs that require sufficient levels on multiple measures, where one KSAO cannot compensate for another

47
Q

disadvantages of multiple measures

A

range restriction makes validity of each subsequent measure more difficult to determine

48
Q

uses scores from each predictor in an equation to estimate the criterion; a “compensatory approach”: a low score in one area can be outweighed by a high score in another, and the highest score is the best applicant (“top-down approach”)

A

multiple regression approach
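A minimal sketch of the compensatory scoring idea; in practice the predictor weights would come from a regression in a validation study, but here both the weights and the scores are made up:

```python
# Hypothetical regression weights from a validation study.
weights = {"cognitive": 0.5, "work_sample": 0.3, "interview": 0.2}

# Hypothetical applicants with scores on each predictor.
applicants = {
    "Avery": {"cognitive": 78, "work_sample": 85, "interview": 70},
    "Blake": {"cognitive": 55, "work_sample": 90, "interview": 88},
    "Casey": {"cognitive": 80, "work_sample": 62, "interview": 91},
}

def predicted_criterion(scores):
    """Weighted sum of predictors: a low score on one predictor can be
    offset by a high score on another (compensatory)."""
    return sum(weights[p] * scores[p] for p in weights)

# Top-down: rank applicants by predicted criterion, best first.
ranked = sorted(applicants,
                key=lambda name: predicted_criterion(applicants[name]),
                reverse=True)
```

Unlike the multiple-hurdles screen, Blake's weak cognitive score does not eliminate him here; it only lowers his composite.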

49
Q

in order to account for error, a standard error of measurement is calculated (a standard deviation around the regression line); this creates a confidence interval based on the amount of error estimated to be included in the measurement

A

banding
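A minimal sketch of score banding using the standard error of measurement, SEM = SD * sqrt(1 - reliability); scores inside the band are treated as statistically indistinguishable from the top score. All numbers are illustrative:

```python
sd = 10.0           # standard deviation of the test scores (assumed)
reliability = 0.84  # reliability estimate of the test (assumed)

# Standard error of measurement.
sem = sd * (1 - reliability) ** 0.5

top_score = 92.0
# 95% confidence band extending below the top score.
band_floor = top_score - 1.96 * sem

scores = [92.0, 88.5, 85.0, 80.0]
# Applicants whose scores fall within the band are treated as equivalent.
within_band = [s for s in scores if s >= band_floor]
```

With these numbers the SEM is 4.0, so the band reaches down to about 84.2 and the first three scores are treated as tied.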

50
Q

value of selection system to the organization

A

utility

51
Q

utility is maximized by:

A
  1. Base rate of success: should be 50%
  2. Selection ratio (# hired / # applicants): should be low
  3. Validity of the selection device: should be high
52
Q

Valid predictors:

A

increase true positives and reduce false positives

53
Q

affirmative action is required of all organizations:

A

with 50+ employees or government contracts of $50,000+

54
Q

Gratz v. Bollinger

A

University of Michigan used a points system for undergraduate admissions; 20 points (out of the 100 needed) were given for underrepresented minority status
*Ruled against: you can’t have different standards for minority groups

55
Q

Grutter v. Bollinger

A

University of Michigan Law School considered minority status a plus, but no definitive points were added; it was treated as equivalent to other possible positive characteristics

56
Q

What has changed after the Fisher case?

A

Colleges are now required to demonstrate that they cannot achieve sufficient diversity without taking race into account

57
Q

four fifths rule

A

if the selection ratio for a minority group is less than 80% of what it is for a majority group, adverse impact is present
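A minimal sketch of the four-fifths check with hypothetical hiring counts:

```python
# Hypothetical hiring outcomes for a majority and a minority group.
majority_hired, majority_applicants = 40, 100   # selection rate 0.40
minority_hired, minority_applicants = 12, 50    # selection rate 0.24

majority_rate = majority_hired / majority_applicants
minority_rate = minority_hired / minority_applicants

# Adverse impact is indicated when the minority selection rate is
# less than 80% (four fifths) of the majority selection rate.
adverse_impact = minority_rate < 0.8 * majority_rate
```

Here .24 is below four fifths of .40 (= .32), so the rule flags adverse impact.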

58
Q

Case where the company required an IQ test that was not a valid predictor of performance; established that if adverse impact is present, the selection system must be valid, and the company must prove it

A

Griggs vs. Duke Power

59
Q

Case where Black employees were not as likely to receive promotion recommendations

A

Rowe vs. General Motors

60
Q

Case establishing that a test can be used for multiple jobs only if the jobs are similar

A

Albemarle Paper Company vs. Moody

61
Q

Court case where a separate admissions process was established for Blacks, Chicanos, Asians, and American Indians (reverse discrimination)

A

Regents of the University of California vs. Bakke