I - General Flashcards

1
Q

Writing recruitment ads

A
  • Providing information about the job increases applicant attraction to the organization
  • Providing information about the selection process affects the probability that applicants will apply
2
Q

Increasing applicant diversity

A
  • Recruiting at historically black colleges
  • Developing targeted intern positions
  • Highlighting org’s commitment to diversity in recruitment materials
  • Key to recruiting diverse applicants is how they perceive the org's diversity during site visits
3
Q

best recruitment evaluation strategy

A

-Look at the # of qualified applicants and the # of successful employees generated by each recruitment method in the recruitment program
-Effective because not every applicant will be qualified, and not every qualified applicant will become a successful employee

4
Q

RJPs in recruitment

A

-RJPs should be used alongside other recruitment methods to most effectively recruit successful candidates
-An applicant gets an honest preview of the job
-Focuses on a specific job
-Telling the truth does scare away many applicants, especially qualified ones
-The ones who stay will not be met with surprises on the job
-The prime driver of RJP success is the perception of company honesty
-A variation of RJPs is the expectation-lowering procedure (ELP), which is more general than an RJP

5
Q

Designing a recruitment plan

A
  1. Determine who to target
    - Star performers, hires from competitors or clients, executives from poorly performing companies
  2. Determine the method of recruitment
    - Employee referrals
    - Organization website
  3. Design the recruitment message
    - RJPs can be used in advertisements and then again during the selection process
    - Highlight special organizational factors such as image, reputation, and diversity
6
Q

Future directions for recruitment

A

-using social media to recruit applicants

7
Q

when does personality most influence performance?

A

when the traits are aligned with the specific behaviors associated with high job performance and when those behaviors are also elicited by the situational cues or demands of the work environment

8
Q

Big 5 relationship to performance

A

Conscientiousness and emotional stability are universal predictors: conscientiousness was predictive across all occupations and specific criteria; emotional stability was predictive across all occupations and some specific criteria.

Extraversion is valid for only some occupational groups and specific criteria.

Agreeableness (related to teamwork) and openness to experience (related to training performance) are modestly valid, but each relates to specific criteria.

The moral of the story is that niche predictors work best when carefully matched to relevant criteria or work situations, and that the validity of personality improves when theory is used to identify which traits to include in the selection battery.

9
Q

personality tests work best in a selection battery when combined with what

A

cognitive ability, biodata, SJTs, and interviews

10
Q

faking with personality tests

A

Approach it as if applicants are going to fake: include a warning that faking may be detected when administering the test, and/or use scales that can detect lying and screen people out that way.

11
Q

the interview and personality

A

Interviews can help assess personality, namely conscientiousness, via the assessment of interpersonal skills and work habits.
Use JA information to identify and measure social interaction and work patterns, which limits the number of personality traits that need to be judged.

12
Q

key guidelines when using personality data in selection

A

Define personality in terms of job behaviors: think of traits as another type of KSA identified via job analysis. JA information can help generate solid definitions of personality to use for measurement; task-based or worker-attribute approaches such as the PAQ can be used.

Broadness: measure traits that affect a wide set of behaviors rather than a narrow set, and examine data on the reliability and item statistics of the scales.

Nature of job performance: personality may be less important for technical jobs with highly structured behaviors (remember personality x situation), though hard work and dependability can still matter; for jobs with a number of acceptable ways of approaching performance, personality is even more important. Since jobs are moving toward less structure with the changing nature of work (teams, technology, offshoring, globalization), personality is only going to become more important.

13
Q

work sample tests

A

Work sample tests ask the applicant to complete some job activity, either behavioral or verbal, under structured testing conditions. They provide direct evidence of applicant skill and are representative of job tasks. Such tests become more difficult to develop as the complexity of the job increases, and they should not focus on trainable or specialized knowledge.

14
Q

steps to develop a work sample test

A
  1. Perform a JA: choose people who perform the job well
  2. Identify important job tasks: collect SME ratings on frequency, importance, difficulty, and/or frequency of error
  3. Develop testing procedures: select the tasks to be tested; consider the time needed to perform each task; avoid tasks that only a small minority or a large majority can do, to ensure discrimination between ability levels; prefer tasks that require fewer resources; and choose tasks with standardized operations. Specify testing procedures (instructions, scoring, materials and equipment to be used, etc.), establish independent test sections or provide a new set of materials for each sequence of a work activity, minimize contaminating factors, and select the number of test problems
  4. Develop scoring procedures: clearly define scoring and the standards against which to compare scores to determine success on the test; train judges
15
Q

the effectiveness of performance tests

A

Evidence is uniformly positive and performance tests have several benefits: validity of about .33 for predicting performance, with especially strong validity when used alongside GMA tests. These tests have not historically shown adverse impact, have high face validity, are perceived as fair by applicants, and can also serve as a realistic job preview.

16
Q

assessment centers: used for what and what are they in general

A

used for the selection of managers, executives, and professionals
it’s a standardized evaluation of behavior/KSAs using multiple trained observers and techniques, mostly verbal and performance tests
each dimension must be measured by more than one exercise and each exercise should focus on more than one dimension

17
Q

Assessment center development steps

A
  1. JA
  2. identify clusters (dimensions) of job activities that are important to the job; each dimension should be specific, observable, and consist of job tasks that can be logically related
  3. translate the job activities into test activities
18
Q

list assessment center exercises

A

Traditional devices such as interviews, GMA tests, or personality inventories

Performance tests: in-baskets, leaderless group discussions (LGDs), and case analyses

19
Q

assessment center exercises: the interview

A

In an AC, the purpose of the background interview is to gather information from the candidate about job activities that represent the behavioral dimensions being evaluated in the AC.
Interviews should be structured, focus on previous job behaviors, be limited in scope, and contain multiple questions for each dimension. Use trained interviewers and a formal scoring system to evaluate the candidate on the behavioral dimensions.

20
Q

ACs: in-basket exercises

A

A paper-and-pencil test designed to replicate the administrative tasks of the focal job.
Use JA information to develop in-basket content containing administrative issues and memos that are representative of the actual administrative tasks of the position.
The in-basket is completed individually and usually takes 2-3 hours.
The candidate sits at a desk in a private area on which the written in-basket material is found. There are usually no directions given by the AC staff and no interaction between staff and candidate.
The in-basket includes an introductory document describing the situation, usually a variation of: "You have recently been placed in this position because of the resignation of the previous incumbent. A number of memos describing a variety of problems have accumulated and must be addressed. You have plans to go on vacation in a few days, so you must rectify all of these issues prior to your departure by indicating what actions should be taken via written memos left in the out-basket." Supporting documents may also include an org chart or other resources to refer to.
Memos can be handwritten or typed to add realism. Candidates read the memos and then write their recommended action steps and who should be involved. Afterward, the AC staff may interview the candidate, who explains the overall philosophy used in addressing the memos and the reasoning behind the recommendations.
The AC staff uses the written and oral information to evaluate behavioral dimensions such as decision making, planning and organizing, ability to delegate, decisiveness, independence, and initiative.

21
Q

ACs: common dimensions to evaluate in an in basket

A

decision making, planning and organizing, ability to delegate, decisiveness, independence, and initiative

22
Q

ACs: leaderless group discussions (LGDs)

A

An interaction of about six candidates in which no one is named leader of the discussion. AC assessors sit around the room recording notes and behaviors. The candidates, as equals, address a common problem that emphasizes either cooperation or competition among them; group member roles are either defined (usually used for competitive problems) or undefined (usually used for cooperative ones). The group is provided a description of the issue to discuss as well as supporting documents.

23
Q

ACs: common dimensions to evaluate using a LGD

A

oral communication, tolerance for stress, adaptability, resilience, leadership, persuasiveness

24
Q

ACs: case analysis

A

Each participant is provided with a long description of an organizational problem that varies according to the job being considered. For higher-level positions, the case may describe the history of certain events within the company, with relevant financial data, marketing strategy, and org structure. The case focuses on a dilemma the participant is asked to resolve; in doing so, specific recommendations must be given, supporting data presented, and any changes in company strategy detailed.

Middle-level management jobs: focus on designing and implementing systems
First-level management: subordinate conflict resolution or nonconformity with policies

The candidate's output may include a written report or a presentation.

25
Q

ACs: dimensions to evaluate with a case analysis

A

oral and written communication, planning and organizing, control, decisiveness, resilience, analysis

26
Q

ACs: the role of assessors

A

The role of assessors is crucial to an AC's utility. ACs are hard to score, and scoring takes considerable cognitive resources.
The number of assessors is usually half the number of candidates, and assessors are managers in the org one level above the position of interest; thus they are assumed to be very familiar with the job and job behaviors.
All of the assessors come together after candidates complete the AC and discuss their observations, ratings, and data. Consensus judgments are made about candidates based on this discussion to develop an overall rating for each candidate.

27
Q

ACs: assessor training

A

Assessor training programs fill gaps managers may have in skills related to observing behaviors systematically and developing ratings. They usually train six key abilities:

  1. Understanding the behavioral dimensions: provide clear and detailed dimension definitions and discuss them as a group to reach a common understanding
  2. Observing behaviors: train assessors to observe and record rather than make immediate judgments about candidates; focus on recording actual behaviors, explain the difference, and provide examples of each. Then have them practice recording behaviors using live or videotaped examples of candidate behavior.
  3. Categorizing participant behaviors:

(card incomplete)

28
Q

how to design a selection system: general steps

A
  1. workforce planning
  2. job analysis
  3. develop and select measurement/recruiting strategy
  4. determine how to best make selection decisions
  5. validation
  6. utility analysis
29
Q

selection system how to: workforce planning & JA

A

Both processes inform each other; thus, if no JA information is available, do workforce planning first and revise it during the JA period via the feedback loop.

30
Q

workforce planning objectives

A

Adapt to uncertainties in the demand for talent; long-term forecasting isn't reliable

Improve ROI for employee development

Balance employee and employer interests

31
Q

workforce planning: important components/considerations

A

Talent inventories: only useful when combined with workforce requirements. ID performance stars through PM.

Workforce requirements: select or train? It is more cost effective to train, though the economy and labor market influence the decision: train during a boom, select during a recession. Conduct a SWOT analysis of the environment to get a big-picture view, followed by a gap analysis.

Action planning: invest resources into strategic jobs to maximize ROI.

Evaluation via feedback loops: evaluate and continuously improve the process.

32
Q

job analysis for low qualification jobs

A

Use the job elements method for cost effectiveness and reliability, and SMEs to ensure face validity.

33
Q

job analysis for more complex jobs

A

Higher risk and ROI: use CJAM; flexibility and adaptation to environmental constraints are important to consider.

  1. Start with O*NET: derive task statements, but make them more specific
  2. Compose an SME group
  3. Have SMEs evaluate the task statements, rating them on frequency and importance
  4. Calculate an importance value for each task statement (frequency x criticality); see the sketch after this list
  5. Use task statements with importance values of 50% or more for further development
  6. Have SMEs cluster the tasks into dimensions and weight them
  7. Establish minimum qualifications (MQs)
  8. Create evaluation criteria identifying expected performance against which future incumbents may be evaluated
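A minimal sketch of steps 4-5 in Python; the task statements, the 1-5 rating scales, and the 50% retention cutoff are illustrative assumptions, not prescribed values:

```python
# Sketch of CJAM steps 4-5: compute importance values from hypothetical
# SME ratings and retain high-importance tasks for further development.

tasks = {
    # task statement: (mean frequency rating, mean criticality rating), 1-5 scales
    "Responds to customer billing inquiries": (4.5, 3.0),
    "Escalates unresolved complaints": (2.0, 4.5),
    "Files routine paperwork": (4.0, 1.5),
}

max_value = 5 * 5  # highest possible frequency x criticality product

for task, (freq, crit) in tasks.items():
    importance = freq * crit
    pct = importance / max_value * 100  # express as a % of the maximum
    keep = pct >= 50  # retain tasks at or above the cutoff
    print(f"{task}: {pct:.0f}% -> {'retain' if keep else 'drop'}")
```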
34
Q

general steps for establishing minimum qualifications

A
  1. SMEs identify and discuss MQ task statements; they should think from the perspectives of other referents and consider certifications needed
  2. Bracket MQs with easier and harder statements
  3. Have SMEs rate MQs on practicability
  4. Link MQs back to KSAs to check completeness
  5. Select based on ratings and consider adverse impact potential
35
Q

considerations/steps for developing and selecting measurement and recruiting strategy

A
  • Derive necessary KSAOs from the JA
  • GMA tests are better for more complex jobs, but remember the possibility of adverse impact
  • Personality: can be used, but not as the sole factor
  • Personal history/biodata: look for achievements and experience, as they predict performance
36
Q

developing SJTs

A

Generate situations using CJAM or JEM.

Generate response options and scoring using SMEs. Continuous scoring options are more popular for SJTs than correct/incorrect scoring.

37
Q

training: benefits to the org

A

competitive advantage in terms of performance as well as attraction/retention

38
Q

primary ways that learning at work is conceptualized

A

behavioral: focuses on changes in the form or frequency of behavior; correct behavioral response = learning
cognitive: learning involves several interrelated changes in how people process information. Declarative and procedural knowledge.
situational: learning embedded in the context in which it occurs and individuals construct meaning from their own experiences

39
Q

transfer of training: importance and components of it

A

Training is useless if the skills derived cannot be transferred to the job.

components:
maintenance: remembering what was learned over time

generalization: applying what was learned in training to the job context

near transfer: applying training to on-the-job tasks that closely resemble the training tasks

far transfer: applying general training principles to contexts/tasks that differ from those in training

40
Q

best practices for training transfer

A

supervisor support and reinforcement

coaching and opportunities to practice

conduct post training evaluation of transfer

use interactive activities during training sessions

41
Q

general steps for developing a training program

A
  1. needs analysis: organizational, task, and person analysis
  2. establish objectives: behaviors, conditions, and standards
  3. design training: plan events and pre-training interventions
  4. training evaluation: determine purpose, decide on criteria, develop outcome measures, choose evaluation strategy
42
Q

training: future directions

A

focus on outcomes like work performance and ROI

shift towards online delivery

using adaptive technology to train

learning on the job

increase in collaborative learning such as forums (in person or online)

43
Q

determinants of effective training

A
  • top management commitment/support
  • individual readiness for training
  • 3 conditions must be present: capability of individual (can do), motivation (will do), support from above
44
Q

characteristics of good written training objectives

A

desired behavioral outcome

specific conditions under which the behaviors will be performed or demonstrated during training

the criterion of acceptable training performance

45
Q

training: instructional methods

A
lecture/discussion
behavior modeling 
simulated work settings
web based 
future: VR
46
Q

training methods: lecture/discussion

A

perceived as boring but can be effective for training cognitive or interpersonal skills

47
Q

training methods: behavior modeling

A

based on social learning theory, good for procedural and declarative knowledge outcomes. greater transfer when positive and negative examples are presented to trainees.

  • describe the set of skills to be learned
  • provide a model for effective behavior display
  • give them the chance to practice
  • provide feedback and reinforcement
48
Q

training methods: simulated work settings

A

Must have psychological fidelity; good as a supplement to other methods.

49
Q

training methods: online delivery/web based

A

As good as or better than classroom instruction, and more cost effective.

50
Q

best practices for maximizing training effects on performance

A

Structure training as multiple events

Provide accurate feedback that is under the trainee's control

space out practice sessions

ensure trained behavior is rewarded on the job

have trainees set transfer goals after training

51
Q

individual characteristics that affect training transfer

A

motivation
self-efficacy
locus of control
conscientiousness
cognitive ability

52
Q

barriers to training program evaluation

A
  • top management doesn’t emphasize it
  • lack of skills to actually conduct the evaluation
  • objectives being unclear to HR individuals
  • may uncover that the training failed and wasted resources
53
Q

training validity

A

internal: did the treatment make a difference
external: is the result generalizable

54
Q

training validity: threats to internal validity

A

history: events that occur between pretest and posttest that influence results

learning (testing) effects: taking the pretest affects posttest scores

instrumentation: changes in the assessment instrument from pretest to posttest

regression to the mean: a threat because participants are often chosen on the basis of extreme scores

55
Q

training validity: threats to external validity

A

reactive effects of pretesting: the pretest may sensitize trainees to the training, so results may not generalize to people who did not take a pretest

56
Q

training: directions for research

A
  • finding the optimal combo of face to face and online instruction
  • role of instructor in training
  • types of KSAs best learned online vs. face to face vs. blended approach
57
Q

discuss the idea of FDS for performance

A

In terms of differentiating between levels of performance, relative rating systems are more effective; an individual's performance is determined by comparing people to each other. A common form of such a system is the forced distribution system (FDS), which is meant to deal with rater leniency problems. When deciding whether to use an FDS, it is important to consider the consequences it has for low performers. There is not much evidence of its overall practical value to orgs, yet many orgs use it. It usually works by either sorting employees into predetermined performance categories or ranking them on the basis of relative performance; thus "low" performers may not actually be low performers at all. It is also perceived as unfair in general, especially when organizations move from a different system to an FDS and people who were identified as good or high performers before are suddenly considered low performers.

58
Q

Multi factor model of job performance ratings

A

Delineates the potential influences of variables on ratings; suggests that the relationship between job performance and job performance ratings is weak because many of the factors that influence ratings are not related to actual job performance. The practical implication is to recognize the importance of, and understand, the nonperformance factors that affect performance ratings. This offers a possible solution to the criterion problem: identify nonperformance factors that affect ratings and correct for or otherwise reduce their influence.

59
Q

mediated model of job performance ratings

A

Focuses on the motivational aspects of performance ratings: some supervisors are motivated to rate in certain ways for certain reasons, such as appearing to be a tough boss by giving lower ratings or motivating their own employees via inflated ratings. Thus one solution to the criterion problem is to develop an understanding of the conditions under which raters will or will not attempt to provide accurate ratings. Practical implications include training or persuading raters to cooperate with the proper use of performance appraisals.

60
Q

FOR training: how to

A

Three of the key steps in FOR training for JA raters include describing the behaviors indicative of each dimension, allowing respondents to practice their rating skills, and providing feedback to respondents.

These FOR techniques may help job analysis respondents overcome the use of heuristics and automatic information processing.

61
Q

brief history of personality research

A

The 1990s saw an explosion of personality research and practice after a two-decade silence following Guion's 1960s conclusion that personality variables do not relate to work-related criteria. Project A by the Army was extremely impactful in renewing this interest and presented nearly opposite conclusions to Guion's. It changed the way we think about the predictor and criterion space, treating it as multidimensional and construct-oriented in nature. Project A, a large concurrent validation study, found meaningful validity coefficients when the literature relating personality variables to job-related criteria was summarized according to both personality constructs and work-related criterion constructs taken together. From this emerged the groundbreaking Barrick and Mount (1991) meta-analysis of the Big Five and job performance.

62
Q

personality variables: list what they predict

A

Overall job performance, objective performance, getting ahead, task performance, training performance, learning & skill acquisition, contextual performance and CWBs, managerial effectiveness, leader emergence, transformational leadership, goal setting, procrastination, creativity & innovation, team related variables, job satisfaction, “will do” behaviors

63
Q

Personality/FFM in selection and general criticisms

A

Provides incremental validity over GMA in predicting performance, and does not have adverse impact.
Criticisms: overlooks important constructs and is not comprehensive; combines constructs that are better left separate; is method-bound and dependent upon factor analysis of lexical terms; constructs are so broad and heterogeneous that criterion-related validity is sacrificed.
Personality constructs are better predictors of job performance when the work situation is incorporated into the item; high-autonomy jobs also show higher validities for personality than low-autonomy jobs; and using others' vs. self-ratings results in higher correlations between personality and work criteria.

64
Q

The future of personality practice and research

A

Practitioners face the challenge of predicting performance in an ever-changing world of work where tasks, teams, and context constantly vary. We need new models for validation, such as building a database that synthesizes validities for combinations of work activities and responsibilities.

Also, research on applicant faking on personality tests needs a theoretical overhaul that incorporates several dimensions simultaneously, such as test setting, test medium, proctored vs. unproctored, and timed vs. untimed. A primary reason is that even screening out those with high social desirability scores does little to significantly increase criterion validity coefficients; it works better in controlled lab studies, so practically it is not as useful as we would think.

65
Q

ASA model

A

Integrates individual and org theory to propose that the outcome of three interrelated dynamic processes, attraction, selection, and attrition, determines the kinds of people in an org, which consequently defines the nature of the org: its structures, processes, and culture. Org behavior depends on the people who make up the org and their characteristics, especially top management and founders. At the core of the model are the goals articulated by the founder; the achievement of these goals is reflected in the processes put into place and the characteristics of the founders. Over time, this determines the kinds of people who are attracted to, selected by, and stay with the org. These people in turn shape the culture, structures, and processes of the org, hence its label as the ASA "cycle". Important to recognize is that this theory has organizational behavior as its criterion: persons make the environment. The framework purports that individuals and situations are not independent; the situation is the people there behaving as they do. Structure, processes, and culture are outcomes, not antecedents, of organizational behavior. An implication of this model is that JA as a basis for selection should also include an org diagnosis, which would help incorporate personality issues into selection programs: the "other" in KSAOs normally pertains only to the job, not to the org in which that job is embedded. Sometimes it may be useful to have good fit (early in an org's life), and at other times it may be useful to promote heterogeneity.

66
Q

General individual performance theory

A

Three determinants: declarative knowledge, procedural knowledge, and skill. Performance is determined by the level of skill the learner has. This theory is helpful because it is not always clear what behaviors performance consists of in the context of specific tasks; this is very complex.

67
Q

Trends and future directions in performance research

A

Quantitative vs. qualitative designs/measures
Job performance investigated as DV
Objective measures preferred over subjective measures
Future directions: organizational factors that contribute to performance, focus on other methods such as qualitative focus groups
Making more connections between performance constructs

68
Q

trends: definitions of performance

A

Job performance should be defined in terms of behavior, not results. Performance behaviors include only those behaviors that contribute to organizational goals. Contextual variables matter in part because they determine how an individual will apply skills and competencies toward completing the task. Theoretical definitions emphasize behaviors
Empirical definitions and measures are based on organizational goals
Definitions usually focus on the individual level, even though theory discusses performance from a multilevel perspective

69
Q

positive manifold

A

Closely related to intelligence research and factor analysis: the phenomenon in which the intercorrelations of a set of test items are all positive. Spearman proposed that every ability is divisible into two factors, one universal and one specific: each test item reflects a general ability factor g, common to all test items, and a specific factor that is uncorrelated with g and with all other specific factors. The degree to which g and specific factors are reflected in items varies. If you gave people many different kinds of cognitive ability tests, all the correlations would be positive; this is what Spearman called positive manifold. It implies that it is not so important which particular tests are used to assess g, since they all correlate highly. Because all of the variables are positively correlated, they will all have positive loadings on the g factor.
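A small simulation can make this concrete. The sketch below assumes Spearman's one-factor model with made-up loadings; with enough simulated examinees, every off-diagonal correlation comes out positive:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000  # simulated examinees

g = rng.normal(size=n)                     # general ability factor
loadings = np.array([0.8, 0.7, 0.6, 0.5])  # each test's g loading (assumed)

# Each test = loading * g + an uncorrelated specific/error factor,
# scaled so every test has unit variance (Spearman's two-factor model).
tests = np.column_stack([
    lam * g + np.sqrt(1 - lam**2) * rng.normal(size=n) for lam in loadings
])

# All off-diagonal correlations are positive (approx. lam_i * lam_j):
# this all-positive pattern is the positive manifold.
print(np.round(np.corrcoef(tests, rowvar=False), 2))
```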

70
Q

rise of technology and our position as I-Os

A

Orgs will choose technology over scientific best practices if they perceive benefits. There is debate over how the rise of technology and AI will affect our field: one side says it is a huge threat that will take over our functions, and the other says these advancements won't be long-term but "fads". A middle-ground position is that some trends and technologies may actually make our jobs better and help us do them more effectively. Right now technology is outpacing our ability to do the science we need to do our jobs. Our seat at the table during big data discussions rests on our ability to use and apply data analytics, influence decision making, and handle programming, database design, and data visualization. Academics can play a role in determining the validity of methods, building theory, and producing research to advise product developers and providers. Consultants can play a role in the legal aspects of big data systems.

71
Q

What method for combining applicant information is best?

A

  • Pure statistical and mechanical composite methods for collecting and combining data were always either equal or superior to all other methods
  • A more recent view concluded that the major issue is how data are combined in making a prediction rather than how the data are collected
  • Mechanical methods of combining predictor data improved the ability to predict work performance by more than 50 percent relative to clinical or judgmental combination methods
  • The rate of identifying acceptable hires was reduced by more than 25 percent when judgmental rather than mechanical data combination methods were used
(Gatewood & Feild)
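As an illustration of what "mechanical combination" means in practice, here is a minimal sketch; the predictors, scores, and weights are hypothetical, and real weights would come from a regression on validation data:

```python
# Hypothetical applicant scores on three standardized predictors.
applicants = {
    "Applicant A": {"gma": 1.2, "conscientiousness": 0.3, "interview": 0.5},
    "Applicant B": {"gma": 0.4, "conscientiousness": 1.0, "interview": 1.1},
}

# Mechanical combination: the same fixed (e.g., regression-derived) weights
# are applied to every applicant, rather than judgmentally weighing each case.
weights = {"gma": 0.50, "conscientiousness": 0.25, "interview": 0.25}

for name, scores in applicants.items():
    composite = sum(weights[p] * scores[p] for p in weights)
    print(f"{name}: composite = {composite:.2f}")
```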

72
Q

discuss the alternative proposed for banding

A

In the last 10 years, supplementing valid cognitive predictors with noncognitive predictors has been a primary strategy for addressing the validity-adverse impact trade-off. When noncognitive predictors are relevant to the job and/or have smaller subgroup differences, they can increase validity. Pareto optimal composites are an alternative to banding and other approaches that better balances the trade-off between loss of utility and diversity: a composite of selection predictors with different effect sizes is weighted to obtain a better trade-off between selection quality and adverse impact. In the past this was done by trial and error (in both research and practice), just entering predictors into the regression and looking at the resulting weights.

When selection parameter estimates are known (validity estimates, predictor intercorrelations, subgroup differences, etc.) from a local validation study or meta-analyses, a procedure can be used to address the adverse impact-quality problem: it determines values for the predictor weights such that the resulting composite is Pareto optimal, i.e., the rate of minority selection and utility are jointly maximized. There is minimal restriction on the selection scenarios this method can address; the minorities can come from several different populations, for example, or individuals can be selected based on pre-hire training proficiency.
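The sketch below illustrates the underlying validity-diversity trade-off with a toy grid search over composite weights; the validities, subgroup ds, and intercorrelation are assumed values, and this is not the formal Pareto-optimization algorithm itself:

```python
import numpy as np

# Assumed inputs (in practice from a local validation study or meta-analysis):
r = np.array([0.50, 0.30])   # criterion validities: GMA, noncognitive predictor
d = np.array([1.00, 0.20])   # standardized subgroup differences per predictor
r12 = 0.20                   # predictor intercorrelation
R = np.array([[1, r12], [r12, 1]])

# Sweep the weight on GMA; heavier GMA weighting raises composite validity
# but also raises the composite subgroup difference (more adverse impact).
for w1 in np.linspace(0, 1, 6):
    w = np.array([w1, 1 - w1])
    sd = np.sqrt(w @ R @ w)          # composite standard deviation
    validity = (w @ r) / sd          # composite criterion validity
    impact = (w @ d) / sd            # composite subgroup difference
    print(f"w_GMA={w1:.1f}: validity={validity:.3f}, subgroup d={impact:.3f}")
```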

73
Q

ways of calculating adverse impact

A

4/5 rule: the simplest and most popular method; preferred per the Uniform Guidelines, but it inflates the risk of Type I error. Calculate the selection rate for each group and identify the group with the highest rate. Impact ratio = each group's selection rate / the highest group's selection rate. If the ratio is less than .8, adverse impact is flagged (illustrated in the sketch after these methods). Because of sampling error, you can't be sure whether the result reflects chance or real adverse impact.

Statistical tests: such as chi-square (compares observed vs. expected frequencies) or Fisher's exact test. Designed to control Type I error; most accurate when the sample is large and balanced, which is often not feasible when comparing minority to majority groups. Statistical tests also tend to increase the chance of Type II error.

Practical tests: use when samples are small; they balance the Type I error of the 4/5 rule against the power issues of statistical tests in small samples. Two main types: the N-of-1 rule compares the ratios of the minority and majority groups; the one-person rule takes the difference between actual minority hires and the expected frequency of minority hires, and if the difference is less than 1, the disparity may be due to small sample size.
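A short sketch of the 4/5 rule plus a Fisher's exact check, using made-up hiring counts:

```python
from scipy.stats import fisher_exact

# Hypothetical hiring outcomes: (hired, not hired) per group.
majority = (48, 52)   # selection rate = 48/100 = .48
minority = (12, 28)   # selection rate = 12/40  = .30

rate_maj = majority[0] / sum(majority)
rate_min = minority[0] / sum(minority)

# 4/5 rule: lower group's rate divided by the highest group's rate.
impact_ratio = rate_min / rate_maj
print(f"Impact ratio = {impact_ratio:.2f} -> "
      f"{'adverse impact flagged' if impact_ratio < 0.8 else 'passes 4/5 rule'}")

# Fisher's exact test on the 2x2 table guards against chance findings.
_, p = fisher_exact([majority, minority])
print(f"Fisher's exact p = {p:.3f}")
```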

74
Q

discuss the idea of needs assessment for training, why its important

A

NA is the process used to determine whether training is necessary and involves:

Organizational analysis: determining the appropriateness of training given the company's business strategy, its resources available for training, and support by managers and peers for training activities.

Task analysis: identifies the important tasks and the knowledge, skills, and behaviors that need to be emphasized in training for employees to complete their tasks. Should only be done if the org analysis reveals that training is appropriate. Select jobs, develop a task list using SME ratings of importance, frequency, and difficulty, and identify the KSAs needed to perform the tasks identified for training.

Person analysis: involves (1) determining whether performance deficiencies result from a lack of knowledge, skills, or ability (a training issue) or from a motivational or work-design problem; (2) identifying who needs training; and (3) determining employee readiness for training.

Needs assessment is necessary because it answers why training is needed; pressure points don't necessarily mean that training is needed or is the correct solution. Employees, managers, and trainers should all participate in the NA process. Methods used in NA include observation, questionnaires, interviews, focus groups, and historical data reviews. Use multiple methods, because different methods provide different types of information and levels of detail.