Chapter 14: Occupational Setting Flashcards

1
Q

Purpose of Testing

A
  1. To determine potential for success in a program.
  2. To place individuals into programs: matching candidates’ abilities and competencies with the requirements of specific training programs.
  3. To match applicants with specific job openings.
  4. To counsel individuals (for career advancement or career changes, for example).
  5. To provide information for program planning and evaluation.
2
Q

Preemployment Testing

A
  1. To elicit a candidate’s desirable and undesirable “traits.”
  2. To identify those characteristics of the candidate that most closely match the requirements of the job.

A test can be given before the interview to screen out candidates, or after the interview to confirm the interviewer’s findings.

3
Q

Government regulations

A
  • Before the 1960s, employment discrimination was not illegal
  • The Civil Rights Act of 1964 (Title VII) prohibited employment discrimination on the basis of race, color, religion, sex, or national origin
  • In 1978, the EEOC published the “Uniform Guidelines on Employee Selection Procedures”
  • In 1987 (revised 2018), SIOP published the Principles for the Validation and Use of Personnel Selection Procedures
4
Q

Predictors

A
  • Cognitive ability tests
  • Psychomotor tests
  • Personality tests
  • Integrity tests
  • Work samples
  • Assessment centers
  • Biographical information
  • Interviews
5
Q

The Criterion Problem

A
  • Difficulty or complexity in measurement of performance criteria
  • Performance: Criterion in which most organizations, employees,
    managers, and I/O psychologists are interested

Nathan and Alexander (1988) conducted meta-analyses of validity coefficients from tests of clerical abilities for five criteria: supervisor ratings, supervisor rankings, work samples, production quantity, and production quality.

6
Q

How Job Success is Measured

A
  1. Quantity and/or quality of production
  2. Personnel records
  3. Administrative actions
  4. Performance ratings
  5. Job samples
7
Q

Criteria are Dynamic

A

Ghiselli and Haire (1960) suggested that the criteria against which tests are validated are dynamic; that is, they change over time.

  • The taxi-driver example used in the book may have had more to do with the amount of learning on the job.
  • You may need one criterion for when a person is first learning a job and a different criterion for performance after the person has been on the job for a certain amount of time.
  • The desired level of performance should be determined BEFORE the validation study.

8
Q

Ratings

A

Another meta-analysis was conducted by Conway & Huffcutt (1997) on multi-source ratings.
They found results similar to Harris & Schaubroeck (1988):
  • Subordinates had the lowest mean reliability (.30), followed by peers (.37); supervisors were the highest (.50).
  • Correlations between the ratings from the different groups:
    * Subordinate ratings: .22 with supervisor ratings, .22 with peer ratings, .14 with self-ratings
    * Self-ratings: .22 with supervisor ratings, .19 with peer ratings
    * Supervisor–peer correlation: .34

9
Q

Rating Errors

A

Cognitive processing: supervisors need to recall information about employee performance, and errors can occur anywhere in that process.
  • Observe behavior: miss important behaviors, or see only what one wants to see
  • Encode information about behavior: label information incorrectly or not precisely enough
  • Store information: fail to store information, or store the wrong information
  • Retrieve information: fail to retrieve, or retrieve irrelevant information
  • Integrate information: make a poor or biased decision

For all types of rating errors, a rating can contain both a true component and an invalid component.
Comparative rating methods (forced distribution, alternation ranking, and paired comparisons) are not popular with raters or ratees.
Ratings using behavioral anchors have more face validity with ratees, and raters also tend to give better ratings with them.

10
Q

Rating Errors: Halo

A

Two types:
1. Using a single global evaluation to assess the performance of an employee
2. Being unwilling to distinguish levels of performance across independent dimensions
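One symptom of halo is unusually high correlation among a single rater's scores on supposedly independent dimensions. A minimal sketch of that check; the ratings are hypothetical and no standard cutoff is implied:

```python
import statistics
from itertools import combinations

def pearson(xs, ys):
    """Pearson correlation between two equal-length rating lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

def mean_interdimension_r(ratings):
    """Average correlation between every pair of rating dimensions.

    `ratings` maps dimension name -> one rating per employee. A high
    average across supposedly independent dimensions is one symptom of halo.
    """
    rs = [pearson(ratings[a], ratings[b]) for a, b in combinations(ratings, 2)]
    return statistics.mean(rs)

# Hypothetical rater whose ratings barely vary across dimensions
ratings = {
    "quality":     [5, 2, 4, 3, 5],
    "teamwork":    [5, 2, 4, 3, 4],
    "punctuality": [4, 2, 4, 3, 5],
}
print(round(mean_interdimension_r(ratings), 2))  # a high value suggests halo
```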

11
Q

Rating Errors: Leniency

A

Mean ratings are higher than the mean ratings of other raters
Mean ratings are higher than the midpoint of the scale

12
Q

Rating Errors: Central Tendency

A

Only use the midpoint of the rating scale

13
Q

Rating Errors: Severity

A

Use only the low end of the rating scale
Give ratings that are lower than those of other raters
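Leniency, central tendency, and severity are all distributional errors, so they can be illustrated by comparing a rater's mean and spread to the scale midpoint. A minimal sketch; the 0.5-point and 1-point thresholds are arbitrary assumptions for illustration, not standards from the text:

```python
import statistics

def flag_rating_error(ratings, scale_min=1, scale_max=5):
    """Heuristic check for distributional rating errors on a Likert scale.

    Thresholds are illustrative assumptions only.
    """
    midpoint = (scale_min + scale_max) / 2          # e.g., 3 on a 1-5 scale
    mean = statistics.mean(ratings)
    spread = statistics.pstdev(ratings)
    if spread < 0.5 and abs(mean - midpoint) < 0.5:
        return "central tendency"                   # only the middle of the scale used
    if mean > midpoint + 1:
        return "leniency"                           # mean well above the midpoint
    if mean < midpoint - 1:
        return "severity"                           # mean well below the midpoint
    return "no obvious distributional error"

print(flag_rating_error([5, 5, 4, 5, 5]))  # leniency
print(flag_rating_error([3, 3, 3, 3, 3]))  # central tendency
print(flag_rating_error([1, 2, 1, 1, 2]))  # severity
```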

14
Q

Standardized tests

A
  • What do you think about the use of standardized tests in employment contexts?
  • Evidence suggests that general cognitive ability accounts for a large proportion of variance in criterion performance
  • Validity coefficient ≈ .53 (r² ≈ .28)
  • Predicts performance similarly across countries
  • Example: Wonderlic Personnel Test
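A validity coefficient is simply the Pearson correlation between predictor scores and a criterion measure; squaring it gives the proportion of criterion variance the predictor accounts for. A minimal sketch with made-up test scores and performance ratings:

```python
import statistics

def validity_coefficient(test_scores, performance):
    """Pearson r between a predictor and a criterion."""
    n = len(test_scores)
    mx, my = statistics.mean(test_scores), statistics.mean(performance)
    cov = sum((x - mx) * (y - my) for x, y in zip(test_scores, performance)) / n
    return cov / (statistics.pstdev(test_scores) * statistics.pstdev(performance))

scores = [10, 14, 19, 22, 25, 30]          # hypothetical cognitive-ability scores
ratings = [2.1, 2.4, 3.0, 3.2, 3.9, 4.4]   # hypothetical performance ratings
r = validity_coefficient(scores, ratings)
print(round(r, 2), round(r ** 2, 2))       # r, and variance accounted for (r squared)
```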
15
Q

Types of standardized tests

A

Specific Cognitive Ability Tests
Predict the likelihood that an individual will do well in a particular job given his or her specific abilities; validity coefficients range from .40 to .50
  • Clerical tests
    * Minnesota Clerical Test (number and name comparison)
  • Mechanical comprehension
    * Bennett Mechanical Comprehension Test

17
Q

Employment interviews: what is crucial

A

Standardization (structured interviews) and training are
crucial to the utility of interviews

18
Q

Employment interviews: validity

A
  • Initial evidence suggested that interviews have low validity: r = .14
  • More recent evidence: r = .37
  • Structured interviews were more predictive (r = .44) than unstructured ones (r = .33)
19
Q

Employment interviews: Behavioral-based questions vs. situational interview

A
  • Behavioral-based questions: focus on past behavior; interviewees are asked to describe specific ways in which they have addressed past situations
  • Situational interview: focuses on future behavior; interviewees are asked how they would handle work dilemmas or situations
20
Q

Employment interviews: Legality

A

Interview questions that shouldn’t be asked:
* How old are you?
* Have you ever been arrested?
* Do you plan on having children? Are you pregnant?
* Are you a U.S. citizen?
* Do you have a disability?
* Do you have children? Day care?
* Are you actively involved in the NAACP?
* Have you ever been treated by a psychologist/psychiatrist?
* Have you ever been hospitalized? For what?
* How many days were you absent from work due to illness in the past year?
* Are you taking any prescribed drugs?

21
Q

Work-sample tests

A
  • Take a sample of present performance to predict future
    performance
  • Range in complexity
  • 5 minute typing test
  • Operating a flight simulator
  • Intended to maximize validity
  • Why/how? What part?
22
Q

Work-sample tests: Situational Judgement Tests (SJTs)

A
  • Paper-and-pencil tests or video scenarios that measure applicants’ judgment in work settings
  • “Which of the following ways would you be most likely to respond?” scored by subject matter experts (SMEs)
  • Meta-analytic evidence shows incremental validity over personality, job experience, and cognitive ability
  • r = .38 when developed from a job analysis, r = .29 without a job analysis
23
Q

Assessment centers: what are they?

A
  • Multiple raters (assessors) evaluate applicants or incumbents on a standardized set of predictors (exercises) that simulate the job
  • Involve multiple methods of assessment, multiple assessors, and multiple assessees
  • Last two to three days
  • Used by many large companies
  • Expensive in both time and money
24
Q

Assessment centers: Two popular AC exercises

A
  • In-basket: Assessee responds to a series of job-related scenarios and information that would typically appear in a manager’s in-basket, takes action, and makes decisions about how to proceed
  • Leaderless group discussion (LGD): Group exercise designed to tap managerial attributes, requires small group interaction
    * Given an issue to resolve, no roles assigned
    * Observed by assessors
25
Q

The Role of Personality

A

The Big Five have shown correlations with job performance.
Some believe the constructs are too broad.
Hough proposed that personality constructs should be broken down into narrower facets.

26
Q

Biodata: what is it?

A

Assess past behavior with biographical data.
Items may be:
  • Factual and verifiable
  • Subjective and less verifiable
  • Centered on a particular domain or criterion (job-related knowledge and skills)
  • Reflective of previous life experiences (prior job experiences or attitudes)
Different methods are used to weight and score the items.
Biodata have been used since the 1920s.

27
Q

Biodata: Scaling procedures

A
  1. Empirical keying
  2. Rational scales
  3. Factorial scales
  4. Subgrouping
28
Q

Biodata: Scoring

A
  • Empirical keying method
    * Administer items
    * Find the pool of items that differentiates groups on the criterion of interest
    * Score items in the direction that predicts performance
    * Cross-validate the scoring system on a new sample
  • Option keying
    * Each response option is analyzed separately and is scored only if it correlates significantly with the criterion
    * Uses contrasted groups: items are scored plus, minus, or zero depending on how they differentiate the groups
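The contrasted-groups option-keying steps above can be sketched as follows; the sample data, the 20% endorsement-rate threshold, and the ±1 weights are all illustrative assumptions, not values from the text:

```python
def build_key(responses, high_group, low_group, threshold=0.2):
    """Assign +1/-1/0 weights to each (item, option) pair via contrasted groups.

    `responses` maps person -> list of chosen options (one per item).
    An option is weighted +1 if high performers endorse it notably more
    often, -1 if low performers do, and 0 otherwise (assumed threshold).
    """
    n_items = len(next(iter(responses.values())))
    key = {}
    for item in range(n_items):
        options = {responses[p][item] for p in responses}
        for opt in options:
            hi = sum(responses[p][item] == opt for p in high_group) / len(high_group)
            lo = sum(responses[p][item] == opt for p in low_group) / len(low_group)
            diff = hi - lo
            key[(item, opt)] = 1 if diff > threshold else (-1 if diff < -threshold else 0)
    return key

def score(person_responses, key):
    """Sum the keyed weights for one person's responses."""
    return sum(key.get((i, opt), 0) for i, opt in enumerate(person_responses))

# Hypothetical data: two biodata items, response options 'a'/'b'
responses = {"p1": ["a", "b"], "p2": ["a", "a"], "p3": ["b", "b"], "p4": ["b", "a"]}
key = build_key(responses, high_group=["p1", "p2"], low_group=["p3", "p4"])
print(score(["a", "b"], key))
```

In practice the key would then be cross-validated on a new sample, as the empirical keying steps above require.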
29
Q

Biodata: Empirical questionnaires

A
  • Developed using criterion validity
  • Multiple choice
  • Scoring weights are based on the empirical relationship between the item and criterion
30
Q

Biodata: Rational questionnaires

A
  • Developed using content validity
  • Items require narrative responses
  • Responses are evaluated by raters using predetermined standards

Which is better? The empirical approach has shown better validity.
31
Q

Biodata: Reliability and validity

A

Research has shown mixed results.
Reliability:
  • Can be enhanced with thoughtful development
  • Shows good test-retest reliability
  • Shows low internal consistency due to heterogeneous items
Validity:
  • Good levels of validity have been shown in a variety of settings and samples
  • Validity has held up over time
  • Does not show black-white differences

32
Q

Integrity Tests: what are they?

A

Attempt to predict whether an employee will engage in
counterproductive or dishonest work-related behaviors (e.g., cheating, stealing, sabotage)

33
Q

Integrity Tests: two types

A
  • Overt integrity test: measures attitudes toward theft and actual theft behaviors
    * There is nothing wrong with telling a lie if no one suffers any harm (True or False)
    * How often have you arrived at work under the influence of alcohol?
    * Do your friends ever steal from their employers?
  • Personality-type integrity test: measures personality characteristics believed to predict counterproductive behavior
    * Do you like taking risks?
    * Would your friends describe you as impulsive?
    * Would you consider challenging an authority figure?