Assessment Centers Flashcards

1
Q

• Ratings of different dimensions within the same exercise correlate more highly (r = .58) than ratings of the same dimension (or trait) across exercises (r = .25)
• Competency demand hypothesis: when individuals are motivated to meet a situational demand and have the competencies to do so, most will engage in the dominant response and there will be few individual differences
• When a situation demands responses that are more difficult (not everyone possesses the competency), differences are more easily observed
• Perhaps we need to abandon the idea that something is wrong when traits measured in different situations (exercises) do not converge, as these exercises were designed to tap relatively distinct aspects of the job
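The within-exercise vs. across-exercise comparison can be sketched numerically. Everything below (the rating model, factor weights, and sample size) is a hypothetical illustration of an exercise-dominated MTMM pattern, not Haaland & Christiansen's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_exercises, n_dims = 200, 3, 3

# Simulate ratings with a strong exercise factor and a weak dimension factor,
# mimicking the classic AC finding that exercise effects dominate.
exercise_f = rng.normal(size=(n_candidates, n_exercises))
dimension_f = rng.normal(size=(n_candidates, n_dims))
ratings = np.empty((n_candidates, n_exercises, n_dims))
for e in range(n_exercises):
    for d in range(n_dims):
        ratings[:, e, d] = (0.8 * exercise_f[:, e]
                            + 0.3 * dimension_f[:, d]
                            + 0.5 * rng.normal(size=n_candidates))

# Flatten to one column per (exercise, dimension) pair and correlate.
flat = ratings.reshape(n_candidates, -1)
corr = np.corrcoef(flat, rowvar=False)

same_ex, same_dim = [], []
for i in range(n_exercises * n_dims):
    for j in range(i + 1, n_exercises * n_dims):
        e1, d1 = divmod(i, n_dims)
        e2, d2 = divmod(j, n_dims)
        if e1 == e2 and d1 != d2:
            same_ex.append(corr[i, j])    # different dimensions, same exercise
        elif d1 == d2 and e1 != e2:
            same_dim.append(corr[i, j])   # same dimension, different exercises

print(f"within-exercise mean r = {np.mean(same_ex):.2f}")
print(f"across-exercise mean r = {np.mean(same_dim):.2f}")
```

With these (made-up) weights the within-exercise mean correlation comes out well above the across-exercise mean, reproducing the r = .58 vs. r = .25 pattern in direction, not in exact values.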

A

Haaland & Christiansen (2002)

2
Q

Assessor-related factors in assessment centers:
• Model 1 (limited capacity model): assessors possess limited information-processing capacities and therefore are not always able to meet the cognitive demands of the assessment center process
• Model 2 (expert assessor model): differences between novices and experts account for differences in rating quality
Assessee-related factors: cross-exercise consistency and dimensional (within-exercise) differentiation
• Evidence for convergent validity is established when assessors rate candidates who perform consistently across exercises; evidence for discriminant validity is found when assessors rate candidates who perform differently across dimensions
• Careful AC design and assessor reliability are necessary but insufficient for establishing evidence of convergent and discriminant validity in ACs, because candidate performance may itself be a limiting factor

A

Lievens (2002)

3
Q

• Goal: examine the criterion-related validity of ACs at the dimension level
• Past research collapsed AC dimensions to predict performance => deflated validity
• Mean validity of 6 AC dimensions = .36
• Problem-solving, influencing others, organizing & planning, and communication => best predictors of performance; drive and consideration/awareness of others were not good predictors
• Found high intercorrelations among dimensions => need to increase discrimination among dimensions
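The mean dimension validity reported here is a meta-analytic estimate. The basic "bare-bones" step is a sample-size-weighted mean correlation, sketched below with made-up study inputs (the N/r pairs are illustrative, not Arthur et al.'s data, chosen only so the result lands near .36):

```python
# Hypothetical study inputs: (sample size N, observed validity r).
studies = [(120, 0.30), (340, 0.38), (90, 0.25), (210, 0.41)]

total_n = sum(n for n, _ in studies)
# Sample-size-weighted mean validity: studies with larger N count more.
mean_r = sum(n * r for n, r in studies) / total_n
print(round(mean_r, 2))  # → 0.36
```

Weighting by N (rather than averaging the r values directly) downweights small, noisy studies; full psychometric meta-analysis would additionally correct for artifacts such as range restriction and criterion unreliability.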

A

Arthur et al (2003)

4
Q

• Model of performance comprising the "Great Eight" competencies
• Leading & deciding; interacting & presenting; analyzing & interpreting; supporting & cooperating; creating & conceptualizing; adapting & coping; enterprising & performing; organizing & executing
• Cognitive ability and (to a lesser extent) personality predict the Great Eight
• Best predictors of performance => analyzing & interpreting, organizing & executing, leading & deciding

A

Bartram (2005)

5
Q

• Goal: identify factors influencing the quality of assessor decisions
• More than 4-5 dimensions will overburden raters
• Assessee characteristics: White assessees rated higher in exercises with a cognitive component
• Variance in ratings can be attributed to exercise format
• Organizational culture and context influence the rating process too
• Rating models: rational model, limited capacity model, expert model
• Poor-quality ratings can be due to 1) lack of opportunity to observe, 2) assessor biases, 3) limits on assessor capability to make accurate judgments

A

Lievens & Klimoski (2001)

6
Q

• Goal: examine how assessor type and assessee performance profiles influence the construct validity of AC ratings
• Assessor type => limited capacity model vs. expert model
• Assessee performance profiles: 1) consistent & differentiated, 2) consistent & undifferentiated, 3) inconsistent & undifferentiated, 4) inconsistent & differentiated
• Convergent validity increased when rating assessees with profiles (1) and (2)
• Discriminant validity increased when rating assessees with profile (1)
• Careful AC design and increased interrater reliability are not sufficient => construct validity is also influenced by assessee performance profiles

A

Lievens (2002)

7
Q

book on ACs

A

Thornton & Rupp (2006)

8
Q

• Exercises can be presented to different participants at different times; the order in which exercises are presented had trivial effects on the ratings

A

Bycio & Zoogah (2002)

9
Q

• Despite various design fixes, "construct validity" as conceived in the MTMM (multitrait-multimethod) framework cannot be salvaged
• Exercise effects are robust
• Exercise effects represent cross-situational specificity, not method bias
• Assessors can be quite accurate judges of behavior
• Exercise effects do not indicate assessor halo error but represent accurate overall judgments of performance
• ACs work better as exercise-based evaluation tools than as dimension-based ones

A

Lance (2008)

10
Q

• The evidence clearly confirms that the AC method possesses validity for assessing and developing job-related performance dimensions
• Dimensions should remain the focal construct
• Exercise effects may be meaningful, but dimensions add value beyond them

A

Rupp et al (2008)

11
Q

• ACs are not valid because they are psychometrically unsound
• Should instead use 1) tests of traits, 2) simulation exercises, 3) biographical indicators

A

Schuler (2008)

12
Q

• Data from ACs vary in quality
• ACs work when the judgments made reflect useful insights that both the individual and the organization find valuable…

A

Moses (2008)

13
Q

• Lance (2008) ignores the multiple functions that the AC method fulfills and the reasons why ACs have been around for so long.

A

Jones & Klimoski (2008)

14
Q

• Assessment center purposes: employee selection, identification of managerial talent, development planning, identification of training needs, promotion, management succession
• Selection centers: assessors serve more often, skills stay more current, assessors are asked to create overall performance ratings, center data are validated more frequently
• Development centers: fewer candidate selection mechanisms used, heavy reliance on supervisor data, many female assessees, varied exercises, assessors conduct long discussion sessions, center data are infrequently validated
• Promotion centers: candidates selected from a wider variety of sources, assessors asked to observe many more assessees, assessees complete fewer types of exercises

A

Spychalski et al. (1997)

15
Q

• Arthur et al.: we shouldn't abandon the dimension-based approach, because the real issue is a failure to engage in appropriate tests of theory
• The lack of appropriate tests of the espoused constructs underlying dimensions explains the poor construct validity of ACs
• PEDRs are item-level ratings, whereas PCORs are composite, scale-level indicators of dimensions; PEDRs reflecting exercise effects and small dimension effects is typical of items within a scale
• Future directions: hold AC developers to the same psychometric standards as test developers; design and implement ACs that properly represent their constructs

A

Arthur et al.: Response to Lance (2008)

16
Q

• Article: "Why Assessment Centers Do Not Work the Way They Are Supposed To," in Industrial and Organizational Psychology: Perspectives on Science and Practice
• Despite AC design fixes, the construct validity of ACs is still questionable
• "Exercise effects" are robust and won't go away
• ODSE correlations are greater than SDOE correlations
• Cross-situational specificity in candidate performance, not method bias → a function of trait activation theory?
• Assessors are capable of accurate performance judgments; ratings reflect relatively true performance levels
• Conclusions: ACs seem to work better as exercise-based evaluation tools: you end up with ratings of effectiveness in each exercise, which reduces rater cognitive load and provides more concrete feedback to ratees

A

Lance (2008)

17
Q

• Need greater understanding of the individual difference variables affecting candidates' behavior
• Need greater understanding of the exercise characteristics that act as "incidentals" and "radicals" in eliciting desired traits/behaviors
• Examine the intersection between exercises and individual difference variables: which exercise factors trigger trait-relevant behavior

A

Lievens: Response to Lance (2008)

18
Q

• MTMM has been misleading AC practice and research
• It treats cross-situational specificity as error variance
• It assumes all exercises are equally capable of measuring behavior
• It incorrectly leads to the conclusion that ACs measure traits, when ACs actually measure behavior
• Dimensions (clusters of behavior) are the necessary focus in ACs, not tasks or roles
• Dimensions provide information about readiness for future responsibilities
• Tasks may quickly become obsolete in a changing work environment
• Four ways to measure dimensions: role-oriented, cumulative activities, basic interpersonal skills, cross-situational dimensions
• Future research: develop exercises that better elicit behavior

A

Howard: Response to Lance (2008)