Ch. 4 Predictors: Psychological Assessments Flashcards
1
Q
What is a predictor?
A
- Any variable used to forecast/predict a criterion
- For example, you want someone who will show up on time for work (criterion)
2
Q
What is reliability?
A
- Consistency and stability of measurement
- Three types of reliability, used for different reasons; they are NOT interchangeable
- Regardless of type, good scores are r = .70 and above
3
Q
Test-retest Reliability: Coefficient of stability
A
- If a person took the assessment again in a month, would they get the same scores?
- If we think the assessment is tapping into something enduring or trait-like, it should not vary wildly across time
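A minimal sketch of the coefficient of stability, assuming hypothetical scores for the same ten people tested a month apart (all values invented for illustration):

```python
import numpy as np

# Hypothetical scores for the same 10 people, one month apart
time1 = np.array([72, 85, 90, 65, 78, 88, 70, 95, 60, 82])
time2 = np.array([75, 83, 92, 63, 80, 85, 72, 94, 58, 84])

# Coefficient of stability = Pearson correlation between the two administrations
r_stability = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: r = {r_stability:.2f}")  # want r >= .70
```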
4
Q
Internal-Consistency Reliability: Homogeneous content
A
- Degree to which individual items of an assessment relate to one another
- Split-half Reliability: Divide the test into two halves and see how well the halves relate to one another
- Cronbach’s Alpha or Kuder-Richardson 20 (KR-20): Each item is correlated with every other item, and the degree of agreement among items is assessed (see the sketch below)
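A short sketch of Cronbach’s Alpha computed straight from its formula, using a made-up item-response matrix (rows are respondents, columns are items):

```python
import numpy as np

# Hypothetical responses: 6 people x 4 items (e.g., 1-5 Likert scale)
items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
])

k = items.shape[1]                          # number of items
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of the total scores

# Cronbach's alpha = (k / (k - 1)) * (1 - sum(item variances) / total variance)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")    # want alpha >= .70
```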
5
Q
Inter-Rater Reliability: Conspect reliability
A
- If three separate interviewers rate the performance of an interviewee, you can evaluate the degree to which they agree with one another
- Did everyone see the candidate in the same way?
- Disagreement needs to be discussed and understood
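One simple way to put a number on inter-rater agreement is the average pairwise correlation among raters; a sketch with invented ratings from three interviewers:

```python
import numpy as np
from itertools import combinations

# Hypothetical ratings of 8 interviewees by three separate interviewers
raters = [
    np.array([4, 3, 5, 2, 4, 3, 5, 1]),  # interviewer 1
    np.array([4, 2, 5, 3, 4, 3, 4, 2]),  # interviewer 2
    np.array([5, 3, 4, 2, 5, 3, 5, 1]),  # interviewer 3
]

# Correlate every pair of raters, then average the correlations
pairwise_r = [np.corrcoef(a, b)[0, 1] for a, b in combinations(raters, 2)]
print(f"Mean inter-rater correlation: r = {np.mean(pairwise_r):.2f}")
```

A low value flags rater pairs whose disagreement needs to be discussed and understood.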
6
Q
Validity
A
- Accuracy of measurement
- Are we measuring what we seek to measure?
7
Q
Construct Validity
A
- The degree to which a test is an accurate measure of the construct it is trying to measure
8
Q
Convergent validity
A
- The degree to which our test relates to what it should theoretically relate to
- Example: Happiness should relate to optimism (positively) and to negative affect (inversely)
9
Q
Discriminant validity
A
- The degree to which the construct does not relate to things it should not theoretically relate to
- Example: Happiness should not relate to intelligence (both patterns are sketched below)
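A sketch of how the convergent and discriminant patterns above might be checked, assuming hypothetical happiness, optimism, negative-affect, and intelligence scores for the same sample (the data are simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
happiness = rng.normal(size=n)

# Simulated related measures: optimism tracks happiness, negative affect opposes it
optimism = 0.6 * happiness + rng.normal(scale=0.8, size=n)
neg_affect = -0.5 * happiness + rng.normal(scale=0.8, size=n)
intelligence = rng.normal(size=n)  # theoretically unrelated to happiness

for name, scores in [("optimism", optimism),
                     ("negative affect", neg_affect),
                     ("intelligence", intelligence)]:
    r = np.corrcoef(happiness, scores)[0, 1]
    print(f"happiness vs {name}: r = {r:+.2f}")

# Expect: positive r with optimism and negative r with negative affect (convergent),
# and near-zero r with intelligence (discriminant)
```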
10
Q
Criterion-Related Validity
A
- Another way of assessing construct validity
- The degree to which a predictor relates to a criterion
- Concurrent criterion-related validity: predictor and criterion are measured at the same time
- Predictive criterion-related validity: predictor is measured first and the criterion later
- Both are determined in a sample of employees for whom we have these scores
11
Q
Validity Coefficient
A
- The correlation between predictor scores and a criterion
- Desired (and common) range is .30 to .40
- Squaring the correlation tells us the variance we can explain in the criterion variable
- If r = .40, we are explaining 16% of the variance (.40² = .16)
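The validity coefficient and variance-explained arithmetic in plain code, assuming hypothetical predictor scores (a selection test) and criterion scores (job performance ratings):

```python
import numpy as np

# Hypothetical data: selection-test scores and later performance ratings
predictor = np.array([55, 62, 70, 48, 80, 66, 73, 59, 85, 64])
criterion = np.array([3.1, 3.4, 4.0, 2.8, 4.2, 3.3, 3.9, 3.0, 4.5, 3.6])

# Validity coefficient = correlation between predictor and criterion
r = np.corrcoef(predictor, criterion)[0, 1]
print(f"Validity coefficient: r = {r:.2f}")

# Squaring r gives the proportion of criterion variance explained
print(f"Variance explained: r^2 = {r**2:.0%}")  # e.g., r = .40 -> 16%
```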
12
Q
Content Validity
A
- Another way to assess construct validity
- No statistics
- An evaluation of how well the test represents the domain you seek to assess
- Only assessing knowledge of one chapter would lead to poor content validity if the criterion is knowledge of I/O psychology in general
- Typically assessed by Subject Matter Experts
- Content of the assessment needs to relate to the content of the job (as outlined by the Work Analysis)
13
Q
Face Validity: A similar type of “validity”
A
- Items appear appropriate for purpose of assessment
- Book says this is assessed by test-takers
14
Q
Predictors: Measured via a test
A
- Give potential mechanics various questions assessing knowledge of cars and how to fix them
15
Q
Predictors: Measured via a sampling of behavior
A
- Give potential mechanics a broken car to assess and fix