Mid-Term Study Guide Flashcards
Industrial-organizational (I-O) psychology
The application of psychological principles, theory, and research to the work setting.
Society for Industrial and Organizational Psychology (SIOP)
An association to which many I-O psychologists, both practitioners and researchers, belong. Designated as Division 14 of the American Psychological Association (APA)
Personnel psychology
Field of psychology that addresses issues such as recruitment, selection, training, performance appraisal, promotion, transfer, and termination
Human Resources Management (HRM)
Practices such as recruitment, selection, retention, training, and development of people (human resources) in order to achieve individual and organizational goals.
Organizational psychology
Field of psychology that combines research from social psychology and organizational behavior and addresses the emotional and motivational side of work.
Human engineering or human factors psychology
the study of the capacities and limitations of humans with respect to a particular environment
Scientist-practitioner model
A model that uses scientific tools and research in the practice of I-O psychology
TIP (The Industrial-Organizational Psychologist)
Quarterly newsletter published by the Society for Industrial and Organizational Psychology; provides I-O psychologists and those interested in I-O psychology with the latest relevant information about the field
Telecommuting
Accomplishing work tasks from a distant location using electronic communication media
Virtual team
Team that has widely dispersed members working together towards a common goal and linked through computers and other technology
Title VII of Civil Rights Act of 1964
Federal legislation that prohibits employment discrimination on the basis of race, color, religion, sex, or national origin, which defines what are known as protected groups. Prohibits not only intentional discrimination but also practices that have the unintentional effect of discriminating against individuals because of their race, color, national origin, religion, or sex
American Psychological Association (APA)
The major professional organization for psychologists of all kinds in the United States.
Experimental design
Participants are randomly assigned to different conditions
Quasi-experimental design
Participants are assigned to different conditions, but random assignment to conditions is not possible
Nonexperimental design
Does not include any “treatment” or assignment to different conditions.
Observation design
The researcher observes employee behavior and systematically records what is observed
Survey design
Research strategy in which participants are asked to complete a questionnaire or survey
Quantitative methods
Rely on tests, rating scales, questionnaires, and physiological measures and yield numerical results
Qualitative methods
Rely on observations, interviews, case studies, and analysis of diaries or written documents and produce flow diagrams and narrative descriptions of events or processes
Triangulation
Approach in which researchers seek converging information from different sources
Experimental control
Characteristic of research in which possible confounding influences that might make results less reliable or harder to interpret are eliminated; often easier to establish in laboratory studies than in field studies
Statistical control
Using statistical techniques to control for the influence of certain variables. Such control allows researchers to concentrate exclusively on the primary relationships of interest.
Descriptive statistics
Statistics that summarize, organize, and describe a sample of data
Measure of Central Tendency
Statistic that indicates where the center of a distribution is located. Mean, median, and mode are measures of central tendency
Variability
The extent to which scores in a distribution vary
Skew
The extent to which scores in a distribution are lopsided or tend to fall on the left or right side of the distribution
mean
The arithmetic average of the scores in a distribution; obtained by summing all of the scores in a distribution and dividing by the sample size
Mode
The most common or frequently occurring score in a distribution
Median
The middle score in the distribution
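The three measures of central tendency above can be illustrated with Python's standard library (the sample of scores is invented for illustration):

```python
from statistics import mean, median, mode

# Hypothetical sample of seven test scores
scores = [2, 3, 3, 4, 5, 7, 11]

print(mean(scores))    # 5 -> sum of scores (35) divided by sample size (7)
print(median(scores))  # 4 -> middle score of the sorted distribution
print(mode(scores))    # 3 -> most frequently occurring score
```

Note how the mean is pulled toward the extreme score of 11 (a positively skewed distribution), while the median and mode are not.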
Inferential statistics
Statistics used to aid the researcher in testing hypotheses and making inferences from sample data to a larger sample or population
Statistical Significance
Indicates that the probability of the observed statistic is less than the stated significance level adopted by the researcher (commonly p<.05). A statistically significant finding indicates that the results found are unlikely to have occurred by chance, and thus the null hypothesis (hypothesis of no effect) is rejected
Statistical power
the likelihood of finding a statistically significant difference when a true difference exists
Correlation coefficient
Statistic assessing the bivariate, linear association between two variables. Provides information about both the magnitude (numerical value) and the direction (+ or -) of the relationship between two variables
Regression line
Straight line that best “fits” the scatter plot and describes the relationship between the variables in the graph; can also be presented as an equation that specifies where the line intersects the vertical axis and what the angle or slope of the line is.
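The correlation coefficient and the regression line can both be computed directly from their definitions; a minimal Python sketch, with invented predictor scores (x) and criterion scores (y):

```python
# Pearson correlation and least-squares regression line, computed by hand
# from hypothetical predictor scores (x) and criterion scores (y)
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / n
var_x = sum((xi - mean_x) ** 2 for xi in x) / n
var_y = sum((yi - mean_y) ** 2 for yi in y) / n

r = cov / (var_x ** 0.5 * var_y ** 0.5)  # magnitude and direction (+ or -)
slope = cov / var_x                      # angle of the regression line
intercept = mean_y - slope * mean_x      # where the line crosses the vertical axis

# For these data: r is approximately .77, and the line is roughly y = 2.2 + 0.6x
```

The slope and intercept are exactly the two quantities the definition mentions: where the line intersects the vertical axis and the angle of the line.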
Linear
Relationship between two variables that can be depicted by a straight line
Nonlinear
Relationship between two variables that cannot be depicted by a straight line; sometimes called “curvilinear” and most easily identified by examining a scatter plot
Meta-analysis
statistical method for combining and analyzing the results from many studies to draw a general conclusion about relationships among variables.
Reliability
consistency or stability of a measure
Validity
The accuracy of inferences made based on test or performance data; also addresses whether a measure accurately and completely represents what was intended to be measured.
Test-retest reliability
A type of reliability calculated by correlating measurements taken at time 1 with measurements taken at time 2
Equivalent forms reliability
a type of reliability calculated by correlating measurements from a sample of individuals who complete two different forms of the same test
Internal consistency
form of reliability that assesses how consistently the items of a test measure a single construct; affected by the number of items in the test and the correlations among the test items
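Internal consistency is commonly estimated with Cronbach's alpha, which rises with the number of items and the correlations among them. A minimal sketch of the standard formula (the function name and data layout here are our own, not from any particular package):

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
    `items` is a list of item-score lists, one inner list per test item."""
    k = len(items)                      # number of items in the test

    def var(xs):                        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each test taker's total score across all items
    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three perfectly consistent items -> alpha of (approximately) 1.0
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```

Items that do not all measure the same construct pull the total-score variance down relative to the item variances, and alpha drops accordingly.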
Generalizability Theory
A sophisticated approach to the question of reliability that simultaneously considers all types of error in reliability estimates (e.g., test-retest, equivalent forms, and internal consistency).
Predictor
The test chosen to assess attributes identified as important for successful job performance
Criterion
An outcome variable that describes important aspects or demands of the job; the variable that we predict when evaluating the validity of a predictor
Criterion-related validity
Validity approach that is demonstrated by correlating a test score with a performance measure; improves the researcher's confidence in the inference that people with higher test scores have higher performance
Validity coefficient
correlation coefficient between a test score (predictor) and a performance measure (criterion)
Predictive validity design
Criterion-related validity design in which there is a time lag between collection of the test data and the criterion data
Concurrent validity design
Criterion-related validity design in which the test data and criterion data are collected at approximately the same time, with no time lag between the two
Content-related validation design
A design that demonstrates that the content of the selection procedure represents an adequate sample of important work behaviors and activities and/or worker KSAOs defined by the job analysis
Construct validity
Validity approach in which investigators gather evidence to support decisions or inferences about psychological constructs; often begins with investigators demonstrating that a test designed to measure a particular construct correlates with other tests in the predicted manner
Construct
psychological concept or characteristic that a predictor is intended to measure; examples are intelligence, personality, and leadership
Individual differences
Dissimilarities between or among two or more people
Psychometrics
Practice of measuring a characteristic such as mental ability and placing it on a scale or metric
Intelligence test
Instrument designed to measure the ability to reason, learn, and solve problems
Psychometrician
psychologist trained in measuring characteristics such as mental ability
Cognitive ability
Capacity to reason, plan, and solve problems; mental ability
“g”
Abbreviation for general mental ability
Personality
An individual’s behavioral and emotional characteristics, generally found to be stable over time and in a variety of circumstances; an individual’s habitual way of responding
Americans with Disabilities Act
Federal legislation enacted in 1990 requiring employers to give applicants and employees with disabilities the same consideration as other applicants and employees, and to make certain adaptations in the work environment to accommodate disabilities.
Big 5
A taxonomy of five personality factors; the Five-Factor Model (FFM)
Five-Factor Model (FFM)
A taxonomy of five personality factors, composed of conscientiousness, extraversion, agreeableness, emotional stability, and openness to experience
Integrity
Quality of being honest, reliable, and ethical
O*NET
Collection of electronic databases, based on well-developed taxonomies, that has updated and replaced the Dictionary of Occupational Titles (DOT)
Procedural knowledge
Familiarity with a procedure or process; knowing “how”
Declarative knowledge
Understanding what is required to perform a task; knowing information about a job or job task
Test battery
Collection of tests that usually assess a variety of different attributes
Bias
Technical and statistical term that deals exclusively with a situation where a given test results in errors of prediction for a subgroup
Fairness
Value judgment about actions or decisions based on test scores
Screen-out test
A test used to eliminate candidates who are clearly unsuitable for employment; tests of psychopathology are examples of screen-out tests in the employment setting
Screen-in test
A test used to add information about the positive attributes of a candidate that might predict outstanding performance; tests of normal personality are examples of screen-in tests in the employment setting
Self-presentation
A person’s public face or “game face”
Emotional intelligence (EI)
A proposed kind of intelligence focused on people’s awareness of their own and others’ emotions
Structured interview
Assessment procedure that consists of very specific questions asked of each candidate; includes tightly crafted scoring schemes with detailed outlines for the interviewer with respect to assigning ratings or scores based on interview performance
Situational interview
An assessment procedure in which the interviewee is asked to describe in specific and behavioral detail how he or she would respond to a hypothetical situation
Unstructured interview
An interview format that includes questions that may vary by candidate and that allows the candidate to answer in any form he or she prefers
Assessment center
Collection of procedures for evaluation that is administered to groups of individuals; assessments are typically performed by multiple assessors.
Work sample test
Assessment procedure that measures job skills by taking samples of behavior under realistic job-like conditions
Situational judgment test
Commonly a paper-and-pencil test that presents the candidate with a written scenario and asks the candidate to choose the best response from a series of alternatives
Incremental validity
The value in terms of increased validity of adding a particular predictor to an existing selection system
Biodata
information collected on an application blank or in a standardized test that includes questions about previous jobs, education, specialized training, and personal history; also known as biographical data
Computer Adaptive Testing (CAT)
A type of testing that presents a test taker with a few items that cover the range of difficulty of the test, identifies the test taker's approximate level of ability, and then asks only those questions needed to further refine the test taker's position within that ability level
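The adaptive loop described above can be sketched as a toy algorithm. Everything here (the function names, the 0-to-1 difficulty scale, the halving step size) is an invented simplification for illustration, not a real CAT scoring model such as item response theory:

```python
def adaptive_test(item_difficulties, answers_correctly):
    """Toy CAT: present the unused item nearest the current ability estimate,
    then nudge the estimate up (correct) or down (incorrect) by a shrinking step.
    `answers_correctly(difficulty) -> bool` simulates the test taker."""
    remaining = sorted(item_difficulties)
    estimate, step = 0.5, 0.25          # difficulties scaled 0..1
    while remaining and step > 0.01:
        item = min(remaining, key=lambda d: abs(d - estimate))
        remaining.remove(item)
        estimate += step if answers_correctly(item) else -step
        step /= 2
    return estimate

# Simulated taker whose true ability is 0.7: answers correctly whenever
# the item is no harder than that ability. The estimate converges toward
# the taker's level after only a handful of items.
print(adaptive_test([d / 10 for d in range(1, 10)], lambda d: d <= 0.7))
```

The efficiency gain of CAT is visible even in this sketch: the loop settles on an ability estimate after presenting only five of the nine items.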
Objective performance measure
Usually a quantitative count of the results of work, such as sales volume, complaint letters, and output
Judgmental performance measure
Evaluation made of the effectiveness of an individual’s work behavior; judgment most often made by supervisors in the context of a performance evaluation
Performance management
System that emphasizes the link between individual behavior and organizational strategies and goals by defining performance in the context of those goals; jointly developed by managers and the people who report to them
Performance
Actions or behaviors relevant to the organization’s goals; measured in terms of each individual’s proficiency
Effectiveness
Evaluation of the results of performance; often controlled by factors beyond the actions of an individual
Productivity
ratio of effectiveness (output) to the cost of achieving that level of effectiveness (input)
Declarative knowledge (DK)
Understanding what is required to perform a task; knowing information about a job or job task
Procedural knowledge and skill (PKS)
knowing how to perform a job or task; often developed through practice and experience
Motivation (M)
Concerns the conditions responsible for variations in intensity, persistence, quality, and direction of ongoing behavior
Determinants of performance
Basic building blocks or causes of performance, which are declarative knowledge, procedural knowledge, and motivation
Performance components
Components that may appear in different jobs and result from the determinants of performance; John Campbell and colleagues identified eight performance components, some or all of which can be found in every job
Criterion deficiency
A situation that occurs when an actual criterion is missing information that is part of the behavior one is trying to measure
Criterion contamination
a situation that occurs when an actual criterion includes information unrelated to the behavior one is trying to measure
Ultimate criterion
ideal measure of all the relevant aspects of job performance
Actual criterion
actual measure of job performance obtained
Adaptive performance
performance component that includes flexibility and the ability to adapt to changing circumstances
Expert performance
Performance exhibited by those who have been practicing for at least 10 years and have spent an average of four hours per day in deliberate practice
Personnel measure
Measure typically kept in a personnel file, including absences, accidents, tardiness, rate of advancement, disciplinary actions, and commendations of meritorious behavior
Task-oriented job analysis
Approach that begins with a statement of the actual tasks as well as what is accomplished by those tasks
Worker-oriented job analysis
Approach that focuses on the attributes of the worker necessary to accomplish the tasks
KSAOs
Individual attributes of knowledge, skills, abilities, and other characteristics that are required to successfully perform job tasks
Subject matter expert (SME)
Employee (incumbent) who provides information about a job in a job analysis interview or survey
Critical incident technique
Approach in which subject matter experts are asked to identify critical aspects of behavior or performance in a particular job that led to success or failure
Cognitive task analysis
A process that consists of methods for decomposing job and task performance into discrete, measurable units, with special emphasis on eliciting mental processes and knowledge content
Think-aloud protocol
Approach used by cognitive psychologists to investigate the thought processes of experts who achieve high levels of performance; an expert performer describes in words the thought process that he or she uses to accomplish a task
Job evaluation
method for making internal pay decisions by comparing job titles to one another and determining their relative merit by way of these comparisons
Compensable factors
Factors in a job evaluation system that are given points that are later linked to compensation for various jobs within the organization; factors usually include skills, responsibility, effort, and working conditions.
Comparable worth
notion that people who are performing jobs of comparable worth to the organization should receive comparable pay
Equal Pay Act of 1963
Federal legislation that prohibits discrimination on the basis of sex in the payment of wages or benefits, where men and women perform work of similar skill, effort, and responsibility for the same employer under similar working conditions
Critical incidents
Examples of behavior that appear “critical” in determining whether performance would be good, average, or poor in specific performance areas
Graphic rating scale
Graphic display of performance scores that runs from high on one end to low on the other end
Checklist
List of behaviors presented to a rater, who places a check next to each of the items that best (or least) describe the ratee
Weighted checklist
A checklist that includes items that have values or weights assigned to them that are derived from the expert judgments of incumbents and supervisors of the position in question
Behaviorally anchored rating scales (BARS)
Rating format that includes behavioral anchors describing what a worker has done, or might be expected to do, in a particular duty area
Behavioral observation scale (BOS)
Format that asks the rater to consider how frequently an employee has been seen to act in a particular way
Employee comparison methods
form of evaluation that involves the direct comparison of one person with another
360-degree feedback
Process of collecting and providing a manager or executive with feedback from many sources, including supervisors, peers, subordinates, customers, and suppliers.
Central tendency error
Error in which raters choose a middle point on the scale to describe performance, even though a more extreme point might better describe the employee
Leniency error
error that occurs with raters who are unusually easy in their rating
Severity error
Error that occurs with raters who are unusually harsh in their ratings
Halo error
Error that occurs when a rater assigns the same rating to an employee on a series of dimensions, creating a halo or aura that surrounds all of the ratings, causing them to be similar
Psychometric training
Training that makes raters aware of common rating errors (central tendency, leniency/severity, and halo) in the hope that this will reduce the likelihood of errors
Frame-of-reference (FOR) training
Training based on the assumption that a rater needs a context or “frame” for providing a rating; includes (1) providing information on the multidimensional nature of performance, (2) ensuring that raters understand the meaning of anchors on the scale, (3) engaging in practice rating exercises, and (4) providing feedback on practice exercises.