Lecture 6: Selection decisions Flashcards

1
Q

Validity

A

The degree to which inferences from test scores are justified by the evidence (Does the instrument measure what it is supposed to measure?)

2
Q

Construct validity

A

The extent to which the test actually measures the underlying construct it is intended to measure (are you measuring what you’re supposed to measure?)

3
Q

Content validity

A

The extent to which the items on a test are fairly representative of the entire domain the test seeks to measure

4
Q

Criterion validity

A

The extent to which an operationalization of a construct (such as a test) relates to, or predicts, a theoretically related outcome: the criterion (e.g., job performance)

5
Q

Concurrent validity (Part of criterion validity)

A

Your measure should correlate highly with other related measures that it is expected to correlate with, and should not correlate with measures of constructs it is not supposed to be related to

6
Q

Predictive validity (Part of criterion validity)

A

Your measure should be correlated with a future outcome in the way you expect it to be

7
Q

Face validity

A

The extent to which a test is subjectively viewed as covering the concept it is supposed to measure (a subjective judgment)

8
Q

Reliability

A

The extent to which a score from a test or from an evaluation is consistent and free from error (Does the instrument measure consistently?)

9
Q

Temporal stability

A

Test-retest reliability: administering the test at different points in time should give the same results

10
Q

Form stability

A

Different (parallel) forms of the same test should give the same results

11
Q

Internal reliability

A

The extent to which all the items in the test measure the same construct (assessed with Cronbach’s alpha)
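
As an illustration (not from the lecture), here is a minimal Python sketch of the standard Cronbach’s alpha computation for a respondents-by-items score matrix; the function name is mine and numpy is assumed to be available:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```

Values near 1 mean the items hang together; a common rule of thumb treats an alpha of about .70 or higher as acceptable.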

12
Q

Scorer reliability

A

The extent to which scores are evaluated objectively, i.e., different scorers agree. In structured interviews scorer reliability is high: panel members often agree

13
Q

Reliable, not valid

A

Consistent, but not measuring what it’s supposed to measure

14
Q

Valid, not reliable

A

Not consistent but measures what it is supposed to measure

15
Q

Neither reliable nor valid

A

Something that doesn’t measure what it’s supposed to and is different every time you test it

16
Q

Both reliable and valid

A

Measures what it’s supposed to and is consistent

17
Q

Utility

A

The degree to which a selection device improves the quality of a personnel system, above what would have occurred had the instrument not been used

18
Q

Taylor-Russell tables

A

Designed to estimate the percentage of future employees who will be successful on the job if a particular selection method is used
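
The published tables come from the source literature; purely as an illustration, here is a hedged sketch of the bivariate-normal model they are based on (the function name and example values are mine, and scipy is assumed to be available):

```python
from scipy.stats import norm, multivariate_normal

def success_ratio(validity, selection_ratio, base_rate):
    """Expected proportion of selected applicants who succeed on the job
    (the quantity Taylor-Russell tables tabulate), assuming test scores and
    job performance follow a standard bivariate normal distribution."""
    x_cut = norm.ppf(1 - selection_ratio)   # test-score cutoff implied by the selection ratio
    y_cut = norm.ppf(1 - base_rate)         # performance cutoff implied by the base rate
    joint = multivariate_normal(mean=[0.0, 0.0],
                                cov=[[1.0, validity], [validity, 1.0]])
    # P(selected and successful); for a centered normal, P(X > a, Y > b) = F(-a, -b)
    p_selected_and_successful = joint.cdf([-x_cut, -y_cut])
    return p_selected_and_successful / selection_ratio

# e.g. success_ratio(0.40, 0.30, 0.50) is roughly 0.69: with validity .40 and a
# 30% selection ratio, about 69% of those hired should succeed, versus a 50%
# base rate without the test.
```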

19
Q

Lawshe tables

A

Tables that use the base rate, test validity and applicant percentile on a test to determine the probability of future success for that applicant
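
Again purely as an illustration (my own sketch, not the published tables), the same bivariate-normal idea gives the expectancy for an applicant whose score falls in a given percentile band:

```python
from scipy.stats import norm, multivariate_normal

def expectancy(validity, base_rate, pct_low, pct_high):
    """Probability of job success for an applicant whose test score falls
    between two percentiles (0 < pct_low < pct_high < 1), the kind of value
    a Lawshe expectancy table reports, under a bivariate-normal model."""
    y_cut = norm.ppf(1 - base_rate)                     # performance cutoff from the base rate
    x_lo, x_hi = norm.ppf(pct_low), norm.ppf(pct_high)  # score band boundaries
    joint = multivariate_normal(mean=[0.0, 0.0],
                                cov=[[1.0, validity], [validity, 1.0]])
    # P(score in band and successful), computed from the bivariate CDF
    p_band_and_success = ((pct_high - pct_low)
                          - joint.cdf([x_hi, y_cut])
                          + joint.cdf([x_lo, y_cut]))
    return p_band_and_success / (pct_high - pct_low)

# e.g. expectancy(0.40, 0.50, 0.60, 0.80): chance of success for an applicant
# scoring in the second-highest fifth of the test-score distribution.
```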

20
Q

Proportion of correct decisions

A

Refers to a utility method that compares the percentage of times a selection decision was accurate with the percentage of successful employees (the base rate)
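
A small worked example of the comparison (all counts are hypothetical):

```python
# Hypothetical validation sample of 100 hiring decisions
true_positives  = 40   # predicted to succeed and did succeed
true_negatives  = 25   # predicted to fail and did fail
false_positives = 15   # predicted to succeed but failed
false_negatives = 20   # predicted to fail but would have succeeded

total = true_positives + true_negatives + false_positives + false_negatives
proportion_correct = (true_positives + true_negatives) / total   # 0.65
base_rate = (true_positives + false_negatives) / total           # 0.60 successful overall
# The selection method adds value because 0.65 > 0.60 (better than the base rate).
```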

21
Q

Utility formulas

A

Provide an estimate of the amount of money an organization will save if it adopts a new testing procedure
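
One widely cited version is the Brogden-Cronbach-Gleser formula; a minimal sketch, with hypothetical example numbers and a function name of my own choosing:

```python
def utility_gain(n_hired, avg_tenure_years, validity, sd_performance_dollars,
                 mean_z_score_of_hired, n_applicants_tested, cost_per_applicant):
    """Brogden-Cronbach-Gleser utility estimate: projected dollar gain from
    using the test, minus the cost of testing all applicants."""
    savings = (n_hired * avg_tenure_years * validity
               * sd_performance_dollars * mean_z_score_of_hired)
    testing_cost = n_applicants_tested * cost_per_applicant
    return savings - testing_cost

# e.g. 10 hires staying 2 years, validity .40, SDy $10,000, mean z of hires 1.0,
# 100 applicants tested at $50 each:
# utility_gain(10, 2, 0.40, 10_000, 1.0, 100, 50) -> about $75,000 saved
```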

22
Q

Measurement bias

A

Concerns technical aspects of the test: a test is biased if there are group differences in test scores (e.g., by race or gender) that are unrelated to the construct being measured

23
Q

Predictive bias

A

A situation in which the predicted level of job success falsely favours one group over another

24
Q

Unadjusted Top-down selection

A

A “performance first” approach: applicants are hired in strict rank order of their test scores, from the highest score down

25
Q

Passing scores

A

Who will perform at an acceptable level? (A passing score is a point in the distribution of scores that distinguishes acceptable from unacceptable performance)

26
Q

Banding

A

A compromise between the top-down and passing scores approach. Takes into account that tests are not perfectly reliable because of error variance

27
Q

SEM Banding

A

Based on the concept of the Standard Error of Measurement (SEM); see the band-width sketch after this list:
- Non-sliding band: hire anyone whose score falls between the top score and the bottom of the band (top score minus the band width)
- Sliding band: start with the highest score and subtract the band width from it; as the top scorer is hired, the band slides down to the next highest remaining score
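
As an illustration (not from the lecture), one common way to set the band width is 1.96 times the standard error of the difference between two scores; a minimal sketch, with a function name and example values of my own:

```python
import math

def sem_band_width(sd, reliability, z=1.96):
    """Width of a score band based on the Standard Error of Measurement (SEM):
    scores within this distance of the top score are treated as statistically
    indistinguishable."""
    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    sed = sem * math.sqrt(2)                # standard error of the difference between two scores
    return z * sed

# e.g. sem_band_width(10, 0.90) is about 8.8 points: with a top score of 95,
# everyone scoring above roughly 86.2 falls inside the (non-sliding) band.
```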