Exam 1 Review Flashcards
Munsterberg, Taylor, and the Gilbreths were the first to focus on which aspect of I/O Psychology?
A focus on the “I” side; concerned with personnel issues.
What is the significance of the Hawthorne Studies?
Shifted interest towards the “O” side and became concerned with the work environment.
Describe the Hawthorne Studies.
Workers increased production regardless of illumination levels.
What is the Hawthorne Effect?
Change in behavior due to novel treatment, attention, etc.
True or False: In practice, “I” and “O” overlap considerably.
True
Explain the components of Experimental Strategies.
- True experiments have several defining characteristics:
- Manipulation of an independent variable.
- Control over confounding variables.
- Random assignment to conditions.
- Without these our ability to determine causality is hindered.
- We want to be able to say that changes in the IV (and nothing else) caused changes in the DV.
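A minimal sketch of that logic in Python, using entirely hypothetical data: workers are randomly assigned to two IV conditions and the DV means are compared.

```python
# Sketch of a true experiment's logic: random assignment to IV conditions,
# then compare the DV across conditions. All data here are hypothetical.
import random
import statistics

random.seed(1)
workers = list(range(40))
random.shuffle(workers)                 # random assignment to conditions
new_procedure = workers[:20]            # IV level 1
old_procedure = workers[20:]            # IV level 2

# Hypothetical DV (e.g., units produced); a real study would measure this
# after the manipulation, with confounds held constant.
dv = {w: random.gauss(50, 5) + (3 if w in new_procedure else 0) for w in workers}

print("new procedure mean:", round(statistics.mean(dv[w] for w in new_procedure), 1))
print("old procedure mean:", round(statistics.mean(dv[w] for w in old_procedure), 1))
```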
Why is experimentation rare in I/O Psychology?
- Difficult to randomly assign workers to conditions, i.e., to other jobs or other procedures
- Employers fear loss of productivity or disruption to their operations
- Can be costly, time-consuming, etc.
- Quasi-experiments may be acceptable
- Like a true experiment, but lacking a defining element
- ex: maybe no random assignment, or maybe IV not under the experimenter’s control
- Lab experiments may be acceptable
- Issues of generalizability from the lab to the field become a consideration
Explain the components of Correlational Strategies.
Examining relationships between variables.
- Correlational strategies include surveys, interviews, naturalistic observation, and questionnaires.
What is the Correlation Coefficient?
The statistic that describes the relationship between two variables.
- A relationship has both magnitude (strength) and direction
- Variables can be related to each other, strongly, weakly, or anywhere in between
- Variables can be positively related, negatively related, or unrelated to one another
- Positive correlation: high scores on one variable are associated with high scores on the other
- Negative correlation: high scores on one variable are associated with low scores on the other
Correlation coefficient ranges between -1.00 and +1.00
Correlation only indicates that two variables are associated in some way
Causality is unknown with correlations
The correlation coefficient is the basis for estimating both reliability and validity of measures
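A minimal sketch of computing a Pearson correlation coefficient for two hypothetical variables; the numerator and denominator both omit the 1/n factor, which cancels.

```python
# Pearson correlation coefficient for two hypothetical variables,
# e.g., a test score (x) and a performance rating (y).
from math import sqrt

x = [10, 12, 14, 15, 18, 20]
y = [3.0, 3.2, 3.9, 4.1, 4.4, 4.8]

n = len(x)
mx, my = sum(x) / n, sum(y) / n

num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))       # co-variation
den = sqrt(sum((xi - mx) ** 2 for xi in x)) * sqrt(sum((yi - my) ** 2 for yi in y))
r = num / den
print(f"r = {r:+.2f}")   # sign gives direction, absolute value gives magnitude
```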
What are the three ways to explain the correlation between two variables?
- The first variable caused the second
- The second variable caused the first
- the problem of directionality
- An unmeasured third variable is responsible for the relationship between the other two
- the third-variable problem
Define Reliability.
The consistency of a measure; how well scores on the same subject are replicable across repeated measurements of the same variable.
- A person’s observed score equals the true score plus a component of error
- The reliability of a test is the ratio of true-score variance to observed-score variance
- Whatever portion is left over is due to error variance
- That ratio of true variance to observed variance is estimated by computing a correlation coefficient, here called a reliability coefficient
- A reliability of .70 means that 30% of whatever’s being measured is due to error
- crummy items, misinterpretations, faking good by the respondents, etc.
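A minimal numeric sketch of that ratio, with made-up variance components:

```python
# Reliability as the ratio of true-score variance to observed-score variance.
# The variance figures below are made up for illustration.
true_variance = 7.0
error_variance = 3.0                                  # crummy items, faking, etc.
observed_variance = true_variance + error_variance    # observed = true + error

reliability = true_variance / observed_variance
print(f"reliability = {reliability:.2f}")  # 0.70 -> 30% of observed variance is error
```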
What are the three types of Reliability?
- Test-Retest: same individuals, same test, two separate testing sessions
- consistency of responses over time
- Internal Consistency: consistency of multiple items in a single measuring instrument
- correlate responses to every item with responses to every other item
- averaging those intercorrelations gives the average inter-item correlation, on which Coefficient Alpha is based (see the sketch after this card)
- Inter-Rater Reliability: consistency of ratings made by independent scorers
A test must be reliable before it can be valid
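For the internal-consistency case above, a minimal sketch of the standardized form of Coefficient Alpha, which is built from the average inter-item correlation (the numbers are hypothetical):

```python
# Standardized Coefficient Alpha from the average inter-item correlation.
# k and avg_r are hypothetical values for a 10-item scale.
k = 10          # number of items
avg_r = 0.30    # average inter-item correlation

alpha = (k * avg_r) / (1 + (k - 1) * avg_r)
print(f"alpha = {alpha:.2f}")   # about 0.81 here; more items or higher avg_r raise alpha
```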
What is Face Validity?
- Do the items on this test or the procedures we use look like they measure the domain of interest?
- “Public-relations” validity: test should look like it measures what it claims to measure
What is Criterion Validity?
The correlation between the predictor (the test score) and the criterion (the performance, behavior, or other test score that you want to predict) is the basis of criterion validity
The square of any validity coefficient gives the percentage of variance in the criterion accounted for by the predictor (and vice versa)
r = .50 means that 25% of the variance in the criterion is accounted for by the predictor
25% of variability in the criterion can be predicted by knowing scores of the predictor variable
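A quick check of that arithmetic:

```python
# Squaring a validity coefficient gives the proportion of criterion
# variance accounted for by the predictor.
r = 0.50
print(f"{r ** 2:.0%} of the variance in the criterion is accounted for")  # 25%
```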
Explain Multiple Regression.
Combining predictors to account for the maximum amount of variability in a criterion.
- Determining the optimal weighted combination of predictors to account for the most variability in a criterion
- a set of correlations between the predictors and the criterion
- the unique contribution made by each predictor is taken into account, along with the correlations among the predictors themselves
- each predictor should contribute independently of the others
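A minimal sketch of that idea using ordinary least squares (assumes numpy is available; all scores are hypothetical):

```python
# Multiple regression: find the weighted combination of two predictors
# (e.g., a cognitive test and an interview rating) that best accounts
# for variability in a criterion (e.g., a performance rating).
import numpy as np

X = np.array([
    [1, 55, 3.0],
    [1, 62, 3.5],
    [1, 70, 2.8],
    [1, 75, 4.0],
    [1, 81, 4.2],
    [1, 90, 3.9],
])                                        # leading column of 1s = intercept
y = np.array([2.9, 3.4, 3.1, 4.0, 4.3, 4.1])

weights, *_ = np.linalg.lstsq(X, y, rcond=None)   # optimal weights
predicted = X @ weights

# R^2: proportion of criterion variance accounted for by the combination.
r_squared = 1 - np.sum((y - predicted) ** 2) / np.sum((y - y.mean()) ** 2)
print("weights:", np.round(weights, 3), "R^2 =", round(float(r_squared), 2))
```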
What is Job Analysis?
- Comprehensive description of a job and the attributes needed to perform it.
- Used for career development, selection and training, performance appraisal, and as a hedge against legal challenges
- Information about jobs comes from many different sources
- Job analysts, job incumbents, supervisors, and observers contribute to defining aspects of a job
What are the two approaches to job analysis?
- Job-oriented approach:
- examining the nature of the tasks to be performed
- Person-oriented approach:
- examining characteristics of a person necessary to perform the job
- KSAOs:
- Knowledge, skills, abilities, and other personal characteristics
O*NET shows the multifaceted nature of most jobs
Define Performance Appraisal.
The process of providing feedback to employees regarding their performance
A proper job analysis can identify the necessary elements of a job; this aids the appraisal process
- Performance appraisal should involve matching employee performance against standards or criteria
Describe Theoretical and Actual Criterion.
- Theoretical: the concept of what acceptable performance should be
- Actual: the way in which the theoretical criterion is assessed
Describe Criterion Relevance, Contamination, and Deficiency.
- Relevance: the extent to which the actual and theoretical criteria overlap
- Contamination: actual criterion measures something other than the theoretical criterion
- Deficiency: elements of the theoretical criterion not tapped by the actual criterion
What are some examples of Objective Performance Indicators?
Production records, absenteeism, sales, turnover rate, number of accidents, etc.
Who uses Subjective Performance Indicators and what are some examples?
Managers and supervisors making appraisals.
Examples: graphic rating forms (GRF), mixed standard scales (MSS), behaviorally anchored rating scales (BARS), and behavioral observation scales (BOS)
What are some shortcomings when using Subjective Ratings?
- Negativity effect: negative information weighted more heavily than positive in evaluations
- Halo effect: strong performance in one area inflates ratings of otherwise weak areas
- Mood effects: positive moods lead to favorable judgments; negative moods are the opposite
- Attractiveness: a physically attractive employee may be rated more favorably
What are the two types of criterion validity?
- Predictive Validity: predict future performance
- predictor is administered now; criterion is measured later
- Concurrent Validity: predict present performance
- both predictor and criterion are given at about the same time
What are the testing options for criterion validity?
- Group vs. individual administration
- Objective vs. open-ended format
- Speed vs. power test
- Written vs. performance test
What are the common predictors and typical measures?
- Interest Inventories: Self-Directed Search, Strong Vocational Interest Blank, Jackson Vocational Interest Survey
- Cognitive Abilities: Wechsler Adult Intelligence Scale, Wonderlic Personnel Test, specific tests of specific abilities
- Personality Tests: Self reports (MBTI, CPI, MMPI) vs. projective techniques (TAT)
- Skill and Ability Tests: Psychomotor skills (Crawford Small Parts Dexterity Test), mechanical aptitude
- Integrity, Lie Detection, Drug Testing: controversial issues surround these methods, especially for selection
- Interviews and biographical information: structured vs. unstructured interviews / objective vs. subjective background info
- Work samples and assessment centers: sample of actual job to be performed / in-basket, leaderless group activities