Ch. 4 Predictors: Psychological Assessments Flashcards

1
Q

What is a predictor?

A
  • Any variable used to forecast/predict a criterion

- For example, showing up on time for work (criterion) might be forecast from a score on a conscientiousness test (predictor)

2
Q

What is Reliability?

A
  • Consistency and stability of measurement
  • Three types of reliability used for different reasons
    NOT interchangeable
  • Regardless, good scores are r = .70 and above
3
Q

Test-retest Reliability: Coefficient of stability

A
  • If a person took the assessment again in a month, would they get the same scores?
  • If we think the assessment is tapping into something enduring or trait-like, it should not vary wildly across time
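
A minimal sketch of the computation, assuming two hypothetical arrays of scores for the same people measured one month apart (the numbers are made up for illustration):

```python
# Test-retest reliability: correlate the same people's scores at two time points.
import numpy as np

time1 = np.array([12, 18, 15, 22, 9, 17, 20, 14])   # scores at Time 1
time2 = np.array([13, 17, 16, 21, 10, 18, 19, 15])  # same people, one month later

r_stability = np.corrcoef(time1, time2)[0, 1]        # coefficient of stability
print(f"Test-retest r = {r_stability:.2f}")          # r >= .70 suggests acceptable stability
```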
4
Q

Internal-Consistency Reliability: Homogeneous content

A
  • Degree to which individual items of an assessment relate to one another
  • Split-half Reliability: Divide test into two and see how well their parts relate to one another
  • Cronbach’s Alpha or Kuder-Richardson 20 (KR20): Each individual item is related to all other items and degree of agreement among items is assessed
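
Both approaches can be sketched from an item-response matrix. The data below are hypothetical; the formulas (Cronbach's alpha and the Spearman-Brown-corrected split-half) are the standard ones:

```python
# Internal consistency on a hypothetical matrix (rows = respondents, columns = items).
import numpy as np

scores = np.array([
    [4, 5, 4, 5, 3, 4],
    [2, 3, 2, 2, 3, 2],
    [5, 5, 4, 5, 5, 4],
    [3, 2, 3, 3, 2, 3],
    [4, 4, 5, 4, 4, 5],
])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores)
k = scores.shape[1]
alpha = (k / (k - 1)) * (1 - scores.var(axis=0, ddof=1).sum()
                         / scores.sum(axis=1).var(ddof=1))

# Split-half: correlate odd-item and even-item totals, then apply Spearman-Brown.
odd, even = scores[:, ::2].sum(axis=1), scores[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
split_half = (2 * r_half) / (1 + r_half)

print(f"Cronbach's alpha = {alpha:.2f}, split-half (Spearman-Brown) = {split_half:.2f}")
```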
5
Q

Inter-Rater Reliability: Conspect reliability

A
  • If three separate interviewers rate the performance of an interviewee, you can evaluate the degree to which they agree with one another
  • Did everyone see the candidate in the same way?
  • Disagreement needs to be discussed and understood
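
A simple sketch, assuming three hypothetical interviewers each rated the same five candidates; the average pairwise correlation shown here is only one rough index (an intraclass correlation or kappa statistic is often reported in practice):

```python
# Inter-rater reliability sketch: how similarly did three raters see the candidates?
import numpy as np
from itertools import combinations

ratings = np.array([
    [4, 3, 5, 2, 4],   # interviewer A's ratings of candidates 1-5
    [5, 3, 4, 2, 4],   # interviewer B
    [4, 2, 5, 3, 5],   # interviewer C
])

pairwise_r = [np.corrcoef(ratings[i], ratings[j])[0, 1]
              for i, j in combinations(range(len(ratings)), 2)]
print(f"Mean inter-rater r = {np.mean(pairwise_r):.2f}")
```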
6
Q

Validity

A
  • Accuracy of measurement

- Are we measuring what we seek to measure?

7
Q

Construct Validity

A
  • The degree to which a test is an accurate measure of the construct it is trying to measure
8
Q

Convergent validity

A
  • The degree to which our test relates to what it should theoretically relate to
  • Happiness should relate positively to optimism and inversely to negative affect
9
Q

Discriminant validity

A
  • The degree to which the construct does not relate to things it should not theoretically relate to
  • Happiness does not relate to intelligence
10
Q

Criterion-Related Validity

A
  • Another way of assessing construct validity
    The degree to which a predictor relates to a criterion
  • Concurrent criterion-related validity: predictor and criterion scores are collected at the same time (e.g., test current employees and correlate scores with their current performance)
  • Predictive criterion-related validity: predictor scores are collected first and the criterion is measured later (e.g., test applicants, then measure performance after hiring)
  • Both are determined in a sample of employees for whom we have predictor and criterion scores
11
Q

Validity Coefficient

A
  • The correlation between predictor scores and a criterion
    Desired (and common) range is .30 to .40
  • Squaring the correlation tells us variance we can explain in the criterion variable
  • r = .40, we are explaining 16% of the variance
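
A minimal sketch with hypothetical predictor and criterion scores, showing the correlation and the squared value described above:

```python
# Validity coefficient: correlation between predictor scores and a criterion.
import numpy as np

predictor = np.array([55, 62, 70, 48, 81, 66, 59, 74])          # e.g., test scores
criterion = np.array([3.1, 3.4, 3.9, 2.8, 4.2, 3.2, 3.5, 3.8])  # e.g., performance ratings

r = np.corrcoef(predictor, criterion)[0, 1]
print(f"Validity coefficient r = {r:.2f}")
print(f"Variance explained r^2 = {r**2:.1%}")  # e.g., r = .40 -> 16% of criterion variance
```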
12
Q

Content Validity

A
  • Another way to assess construct validity
  • No statistics
  • An evaluation of how well the test represents the domain you seek to assess
  • Only assessing knowledge of one chapter would lead to poor content validity if the criterion is knowledge of I/O psychology in general
  • Typically assessed by Subject Matter Experts
    Content of assessment needs to relate to the content of a job (as outlined by the Work Analysis)
13
Q

Similar type of “validity” - Face Validity

A
  • Items appear appropriate for purpose of assessment

- Book says this is assessed by test-takers

14
Q

Predictors: Measured via a test

A
  • Give potential mechanics various questions assessing knowledge of cars and how to fix them
15
Q

Predictors: Measured via a sampling of behavior

A
  • Give potential mechanics a broken car to assess and fix
16
Q

Predictor: Present-oriented

A
  • Job interview may assess level of interpersonal skills
17
Q

Predictor: Past-oriented

A
  • Job interview may ask “tell me about a time in which you…”

- Letters of recommendation

18
Q

Intelligence

A
  • Concept of g vs. multiple intelligences
  • Racially charged history and controversy
  • Book says it is the “single best predictor of job performance,” which is true numerically, but likely not practically or ethically
19
Q

Mechanical Aptitude

A
  • Assess the recognition of mechanical principles like sound and heat conductance, velocity, gravity, and force.
  • Predictive of success in manufacturing/production type jobs
  • Women tend to perform worse
  • Example: Bennett Mechanical Comprehension Test
    Series of pictures that illustrate various mechanical concepts and principles
20
Q

Personality

A
  • No “right v. wrong” answers

- Scale scores used to predict job success

21
Q

Big 5 Personality Theory

A

Openness to experience, Conscientiousness, Extraversion, Agreeableness, Neuroticism (OCEAN or CANOE)

22
Q

Dark Triad

A
  • Machiavellianism, narcissism, psychopathy
23
Q

Integrity

A
  • Overt (transparent) tests
    Ask questions about attitudes toward theft and other forms of dishonesty, endorsement of common rationalizations of theft, or admission of theft
    Incentive for employee to lie
  • Personality-based tests
    Use a personality measure that makes no overt reference to theft, but has been found to be predictive of theft
  • More predictive of counterproductive work behavior than job performance or turnover
24
Q

Situational Judgment

A
  • Candidates are presented with realistic work scenarios and choose among possible responses
  • Responses are not scored simply as right or wrong

- Designed to reflect reality of making decisions in life

25
Q

Computerized Adaptive Testing (CAT)

A
  • Automated testing in which item difficulty is pre-calibrated and later questions are selected based on answers to earlier ones

- Used by the military and other large volume assessments (SAT, GRE)
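
A toy sketch of the adaptive idea only, assuming a made-up item bank with pre-calibrated difficulties; operational CATs such as those used by the military and the GRE rely on item response theory models rather than this simple closest-difficulty rule:

```python
# Toy CAT loop: pick the unused item whose pre-calibrated difficulty is closest
# to the current ability estimate, then nudge the estimate based on the answer.
# Everything here (item bank, update rule, response model) is invented for illustration.
import random

item_bank = {f"item_{i}": d for i, d in enumerate([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])}

def run_cat(true_ability=0.8, n_items=5, step=0.5):
    ability, unused = 0.0, dict(item_bank)
    for _ in range(n_items):
        item = min(unused, key=lambda k: abs(unused[k] - ability))  # closest difficulty
        difficulty = unused.pop(item)
        # Crude response model: items easier than the examinee's level are usually answered correctly.
        correct = random.random() < (0.9 if true_ability >= difficulty else 0.3)
        ability += step if correct else -step
        print(f"{item}: difficulty={difficulty:+.1f}, correct={correct}, estimate={ability:+.1f}")
    return ability

run_cat()
```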

26
Q

Testing on the Internet

A
  • Many tests moving from paper-and-pencil testing to online computer testing
  • Proctored v. unproctored web-based testing
27
Q

Factors to Consider When Using Psychological Assessments

A
  • Tyranny of testing: Critical decisions based on a single test
  • Does it systematically select certain groups over others?
  • Cheating/lying on employment tests
  • Also anxiety associated with taking tests
  • Each test should be re-validated on each job
  • Some tests may predict certain jobs well but not others
  • All in all, psychological assessments are moderately predictive of job performance
  • Thus, need other methods of assessments to triangulate information
28
Q

Interviews

A
  • Universal in its use
  • Social exchange between at least two individuals
  • Can be subject to biases inherent in all social processes (e.g., we tend to like people similar to ourselves)
  • More structure increases validity and fairness
  • “Structured interview” is an interview in which all candidates are asked the same questions
  • Questions should be driven by content of work analysis
29
Q

Situational interviews

A
  • ask candidates to state what they would do in a future (hypothetical) situation
30
Q

Behavioral interviews

A
  • ask candidates to describe a past time in which they have exhibited certain behaviors
31
Q

Work Samples

A
  • Candidate performs a sample of the work they would be doing
  • Typing, driving a forklift, running analyses on a dataset
    High validity in blue-collar jobs that involve specific skills
    Less effective/useful in people-oriented jobs
  • Can be time consuming
  • Effective at predicting “can do,” but not potential
  • Physical abilities testing also used to assess strength, endurance, and coordination
32
Q

Situational Exercises

A
  • Similar to work samples, but not an exact simulation of the job
  • In-basket test – candidates given a basket of things to do and are rated on productivity and problem-solving
  • Leaderless group discussion – group of 8-10 people with no assigned leader; watch how they handle social interactions and getting stuff done
  • Modest validity among managerial positions and very costly
  • Not as similar to actual work as the work samples were, but could still be a reasonable compromise
33
Q

Biographical Information

A
  • Past life experiences used to predict future behavior
    Education and past employment (asked for on an application or included in a resume)
  • Book mentions constructs I would consider personality assessments (e.g., need for achievement, satisfaction with life, optimism)
  • Also mentions questions I wouldn’t ask about parents and family
34
Q

Usefulness of Biographical Information

A
  • Typically high validity
  • Reveals consistent patterns of behavior in our lives
  • Often locates unique criterion variance
  • Legally defensible, but be careful with types of questions
35
Q

Letters of Recommendation

A
  • Commonly used selection method but least valid
  • Candidate only asks people who will speak well of them
  • Writer only agrees to write if they can speak well of candidate
  • Letter-readers might read between the lines (which is not good)
36
Q

Drug Testing

A
  • Substance abuse is a major global problem
  • Increases danger to employee, coworker, and clients/patients/patrons

Two types of assessments:

  • Screening test
  • Confirmation test

Practical issues

  • Cost savings in the end, costly in the beginning
  • Controversial – some say infringing on privacy

Laws differ across state and local governments

37
Q

Polygraphy (Lie Detection)

A
  • Validity in question
  • There is no specific physiological reaction to lying
  • People may be aroused just because they are in that situation
  • Try to establish baseline, but arousal may rise just simply by asking more difficult or crime-related questions, not because of lie
  • President Reagan signed a 1988 ban (the Employee Polygraph Protection Act) on most private-sector use
    But increasing use in government and security agencies
38
Q

Test of Emotional Intelligence

A
  • Highly controversial and still not well-established

- But it has intuitive appeal and is predictive of success in jobs with high emotional labor

39
Q

Four major evaluative standards:

A
  • Validity: Predictive accuracy
  • Fairness: Should not have differential predictive accuracy across different groups
  • Applicability: Can the method be used across job types? Interviews used in almost all jobs.
  • Cost: How expensive the selection method is to administer has a bearing on its use