Selection (General) Flashcards

1
Q

Discuss structured interviews and their advantages

A

- The source of the questions is based on job analysis (job-related)
- All applicants are asked the same questions
- There are standardized methods of scoring answers

Advantages
- Job relatedness is high, which is necessary to predict job performance
- Substantially lower adverse impact
- Tap into job knowledge, job skills, applied mental skills, and interpersonal skills
- Not as affected by the use of non-verbal cues

2
Q

Unstructured interviews: advantages

A

Interviewers can ask anything they want
Consistency is not required
Interviewers can assign points at their own discretion

3
Q

Unstructured interviews: disadvantages

A
  • Reliance on intuition (gut reactions), which is not an effective method of evaluating applicant ability
  • Lack of job relatedness; questions asked may be illegal
  • Primacy effects (first impressions)
4
Q

Reducing primacy effects in interviews

A

interviewers should make repeated judgments throughout the interview rather than one overall judgment at the end (e.g., rating response after each question or series of questions and not waiting until very end to rate)

5
Q

List/discuss the biases associated with interviews (particularly unstructured)

A
  • gut reactions
  • primacy effects (first impressions)
  • contrast effects (interview performance of one applicant may affect interview score given to next applicant)
  • Negative information bias (negative information weighs more heavily than positive, especially when interviewers aren’t aware of job requirements)
  • Interviewer-interviewee similarity (Higher scores for higher similarity)
  • Interviewee appearance
  • non-verbal cues (use of appropriate non-verbal communications is highly correlated with interview scores)
6
Q

Types of interviews (in terms of structure)

A
  • Highly structured (all 3 criteria met)
  • Moderately structured (2 criteria met)
  • Slightly structured (1 criterion met)
  • Unstructured (no criteria met)
7
Q

Steps to creating structured interviews

A
  1. Conduct JA
  2. write detailed job description
  3. determine which KSAOs interview should address
  4. design interview questions
  5. incorporate questions into interview form
  6. create a scoring key for interview answers
8
Q

Creating structured interviews: how to determine which KSAOs the interview should address

A

through the JA, determine the best way to approach measurement for each of the KSAOs; if an interview is an appropriate method of doing so, consider tapping into that KSAO via the interview.

9
Q

Creating structured interviews: types of interview questions

A
  • Clarifiers: the interviewer clarifies information in the resume/application and fills in gaps
  • Disqualifiers: a wrong answer disqualifies the applicant
  • Skill-level determiners: tap an applicant’s knowledge or skill
  • Future-focused: the applicant is given a situation and asked how they would approach it
  • Situational: the applicant is presented with a series of situations and asked how they would handle each one
  • Past-focused: taps past experience
  • Patterned-behavior description interview (PBDI): focuses on behaviors in previous jobs
  • Organizational fit: how well an applicant’s personality and values align with the org’s culture
10
Q

Creating structured interviews: approaches to creating a scoring key for interview answers

A
  • Right/Wrong: good for skill-level determining questions or disqualifiers
  • Typical-answer approach: compares an applicant’s answer with benchmark answers
  • Key issues approach: provides points for each part of an answer that matches the scoring key
11
Q

creating a scoring key for interview answers: how to apply the typical answer approach

A
  • Create list of all possible answers to each question
  • Have SMEs rate favorableness of each answer
  • Use these ratings as benchmarks for each point on a five-point scale
  • A greater number of benchmark answers yields higher scoring reliability
  • Problem: there can be many possible answers to a question; the key issues approach remedies this
12
Q

creating a scoring key for interview answers: key issues approach

A
  • Remedies the typical-answer approach’s problem of having too many possible answers
  • SMEs create a list of key issues they think should be included in the perfect answer
  • For each key issue that is included, the interviewee gets a point
  • Key issues can also be assigned weights (see the sketch below)
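A minimal sketch of how a weighted key-issues key might be scored; the issues, weights, and keyword matching below are hypothetical illustrations, not part of any standard instrument:

```python
# Hypothetical key-issues scoring: the interviewee earns the SME-assigned
# weight for each key issue their answer covers.

key_issues = {
    # key issue (hypothetical) -> weight assigned by SMEs
    "apologize to the customer": 2.0,
    "offer a replacement or refund": 1.0,
    "document the complaint": 1.0,
}

def score_answer(answer: str) -> float:
    """Sum the weights of all key issues mentioned in the answer."""
    answer_lower = answer.lower()
    return sum(weight for issue, weight in key_issues.items()
               if issue in answer_lower)

answer = ("I would apologize to the customer, document the complaint, "
          "and pass it along to my supervisor.")
print(score_answer(answer))  # 3.0 -- two of the three key issues are covered
```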
13
Q

Conducting structured interview: list of steps

A
  1. Build rapport = more positive feelings about the interview
    - Don’t begin until applicants have had time to settle their nerves and collect their thoughts
  2. Explain the process
  3. Ask interview questions
    - Score each answer as it’s given
  4. Provide job and organization information
  5. Answer applicant questions
  6. End with compliment and information on when they’ll be contacted
  7. Sum the scores from the questions; the resulting figure is the applicant’s interview score
14
Q

Reasons for using references and LORs in selection

A
  • Confirming resume details
  • Checking for discipline problems (Protecting the org from negligent hiring)
  • Discovering new information about the applicant
15
Q

Problems with using LORs in selection

A
  • Leniency: most LORs are positive; applicants choose their own references
  • Knowledge of applicant: person writing reference hasn’t observed all aspects of applicant’s behavior or doesn’t know them well
  • Reliability: lack of agreement between 2 people who provide references for the same person
16
Q

How to increase validity of references and LORs

A

Increase the structure of the reference check by conducting a JA and then creating a reference checklist directly tied to those results

17
Q

Predicting performance using applicant knowledge: Job knowledge tests, what are they

A
  • Used primarily in public sector for promotions
  • Measure how much a person knows about a job
  • Typically multiple-choice for ease of scoring, but can also be in written/essay format or given orally during an interview
18
Q

Advantages and disadvantages to job knowledge tests

A

Advantages:

  • Good predictors of training performance (.27) and on-the-job performance (.22)
  • High face validity, accepted by applicants

Disadvantages:

  • Adverse impact
  • Can be used only for jobs in which applicants are expected to have job knowledge at time of hire or promotion
19
Q

List ways to predict performance (In context of selection methods)

A
  • LORs and references
  • Applicant training & Education
  • Applicant KSAs
  • Prior experience
  • personality/character
20
Q

Applicant ability for predicting performance (brief overview)

A
  • Ability tests measure the extent to which an applicant can learn or perform job-related skills
  • Used primarily for jobs in which applicants aren’t expected to know how to perform the job at time of hire
21
Q

Using cognitive ability to predict performance; how does it predict?

A

Allows people to quickly learn job-related knowledge through information-processing and decision-making skills

22
Q

Using cognitive ability to predict performance: pros and cons

A

Pros

  • Excellent predictors of performance across all jobs
  • Easy to administer, inexpensive
  • When properly developed and validated, cognitive ability tests survive legal challenges

Cons

  • High adverse impact (the most of any selection method), and often lack face validity
  • Difficult to set passing scores
  • Job-specific meta-analyses cast some doubt on the assumption that cognitive ability tests predict equally well across jobs
23
Q

Cognitive ability testing: Discuss possible alternative

A

Sienna Reasoning Test (SRT): a potential breakthrough

  • Theory: race differences in scores are due to the knowledge needed to answer the questions rather than actual intelligence
  • Uses commonly known words
  • The SRT predicts college grades and work performance as well as traditional cognitive ability tests, while nearly eliminating racial differences in test scores
24
Q

SJTs to predict performance (overview)

A
  • Related to cognitive ability
  • SJTs correlate highly with cognitive ability tests (.46) and job performance (.34)
  • Highest validity is combining cognitive ability tests with SJTs
25
Q

Job simulations for selection

A
  • Applicants demonstrate actual job behaviors
  • Highly content valid, but impractical

26
Q

Methods of measuring applicant skill

A
  • Work sample tests
  • Assessment centers

27
Q

Work sample tests : pros and cons

A

Pros

  • High content validity: directly relate to job tasks
  • Have great criterion validity
  • Great face validity, challenged less often in court
  • Lower racial differences in test scores than written cognitive ability tests

Cons

  • Expensive to construct and administer

28
Q

Assessment Centers : what are they and what is the advantage

A

- Use multiple assessment methods and multiple trained assessors to observe applicants performing simulated job tasks

Pros
- Job-related methods and multiple trained assessors help protect against many types of selection bias

29
Q

Creating assessment centers: developing exercises

A
  • Develop exercises that measure different aspects of the job
  • In-baskets: modest support for usefulness
  • Simulations: lower adverse impact than traditional paper-and-pencil tests
  • Leaderless group discussions
  • Business games
30
Q

Evaluating assessment centers

A
  • Good at predicting performance, but there are cheaper methods to achieving the same or better predictive results
  • Adverse impact relatively high
31
Q

Predicting performance using prior experience: measurement methods

A
  • Experience ratings
  • Biodata

32
Q

Biodata:
what is it?
Relationship to job performance?
measured how/using what?

A
  • Considers applicant’s life, school, military, community, and work experience
  • Good predictor of job performance, and best predictor of future employee tenure
  • The instrument is usually an application blank or questionnaire, with the items weighted
33
Q

Biodata: advantages

A
  • Predict work behaviors well in many jobs
  • Predict various performance criteria such as supervisor ratings, absenteeism, accidents, theft, tenure
  • Easy to use, quick to administer, and inexpensive
  • Not as subject to biases as other methods
34
Q

Steps to developing a biodata instrument

A
  1. Obtain employee information
    - File approach: obtain information from personnel files on employees’ previous employment, education, etc.; but information is often missing or incomplete
    - Questionnaire approach: obtain information directly from current employees via a questionnaire; but information cannot be obtained on employees who quit or were fired
  2. Choose a criterion
  3. Form criterion groups: employees are split into two groups (high vs. low) based on their criterion scores
  4. Compare employee information: each piece of employee information is compared with criterion group membership to determine which pieces of information distinguish members of the high group from the low group (see the sketch below)
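A minimal sketch of step 4, assuming simple yes/no application items; the items and response data are hypothetical:

```python
# Hypothetical comparison of application items across criterion groups:
# items whose endorsement rates differ most between high and low performers
# are candidates for inclusion (and weighting) in the instrument.

high_group = [  # responses of high-criterion employees (1 = yes, 0 = no)
    {"worked_while_in_school": 1, "lived_in_city": 1},
    {"worked_while_in_school": 1, "lived_in_city": 0},
    {"worked_while_in_school": 1, "lived_in_city": 1},
]
low_group = [
    {"worked_while_in_school": 0, "lived_in_city": 1},
    {"worked_while_in_school": 1, "lived_in_city": 1},
    {"worked_while_in_school": 0, "lived_in_city": 0},
]

def endorsement_rate(group, item):
    return sum(person[item] for person in group) / len(group)

for item in high_group[0]:
    diff = endorsement_rate(high_group, item) - endorsement_rate(low_group, item)
    print(f"{item}: difference in endorsement rates = {diff:+.2f}")
# Larger absolute differences suggest the item distinguishes the two groups.
```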
35
Q

Criticisms of biodata

A

Validity may not be stable; ability to predict performance may decrease over time
Some biodata items may not meet legal requirements in Uniform Guidelines
Can be faked

36
Q

Personality inventories for predicting performance

A

predict performance well and have less adverse impact than ability tests

37
Q

discuss the best interpretation across meta analyses in regards to using personality in selection to predict performance

A

The best interpretation across meta-analyses is:

  • Personality can predict performance at low but significant levels
  • It can add incremental validity to the use of other selection tests
  • Conscientiousness is the best predictor across most jobs and performance criteria
  • The validity of the other four traits depends on the type of job and criterion
38
Q

Criticisms of using personality to predict performance

A
  • They are self-reports and easy to fake, BUT research indicates that applicants fake less often than thought, and when they do, it doesn’t really affect the validity of the test results
  • Personality inventories that ask about personality at work are more valid than those that ask about personality in general
39
Q

list of selection methods that have low validity

A

Unstructured interviews
Education
Interest inventories
Some personality traits

40
Q

list of selection methods that have decent validity

A

Ability
Work samples
Biodata
Structured interviews

41
Q

best combination of selection methods is what?

A

Cognitive ability test, plus either a work sample, integrity test, or structured interview; a well-constructed selection battery will use a variety of testing methods in order to tap into various dimensions of the job

42
Q

face validity is highest for which types of selection methods?

A

Interviews, work samples, resumes are perceived as most job related/fair

43
Q

face validity lowest for which types of selection methods?

A

Graphology, integrity tests, and personality tests perceived as least job related/fair

44
Q

What’s the deal with GMA in selection?

A

It’s highly valid, and its validity increases as the job gets more complex (i.e., its relationship to job performance increases with job complexity; Schmidt & Hunter, 1988). It can be used for basically all jobs, and it’s cheap to administer. HOWEVER, it often results in adverse impact and has the largest White-Black subgroup difference, so many orgs are moving away from it and toward things like content-valid, job-specific ability tests that have much lower group differences. GMA is also less likely to be viewed as fair by candidates.

45
Q

Angoff method of determining cut off scores

A

Judges estimate the probability that a minimally qualified candidate would answer each item correctly (or the percentage of minimally qualified candidates who would respond correctly to the item).
These estimates serve as predicted item difficulties. For each judge, the item difficulties are summed to obtain a predicted test score; the mean of these predicted test scores across all judges is used as the cutoff score.
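A minimal worked sketch of the computation described above; the judges’ probability estimates are hypothetical:

```python
# Hypothetical Angoff computation: each judge estimates, per item, the
# probability that a minimally qualified candidate answers it correctly.
judge_estimates = {
    "judge_1": [0.8, 0.6, 0.7, 0.5],   # one estimate per test item
    "judge_2": [0.7, 0.5, 0.6, 0.6],
    "judge_3": [0.9, 0.6, 0.8, 0.5],
}

# Each judge's predicted test score is the sum of their item estimates.
predicted_scores = {j: round(sum(est), 2) for j, est in judge_estimates.items()}
print(predicted_scores)  # e.g., judge_1 -> 2.6, judge_2 -> 2.4, judge_3 -> 2.8

# The cutoff score is the mean of the judges' predicted scores.
cutoff = sum(predicted_scores.values()) / len(predicted_scores)
print(round(cutoff, 2))  # 2.6 (out of 4 items)
```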

46
Q

Describe the categories of cut off score methods

A

Criterion-referenced: the cut score is set independently of test results and is used to judge performance against a set benchmark. Usually involves SME input, which is labor intensive, costly, and subjective.
Norm-referenced: cut scores are set based on test results, and norms are created. This is usually used to rank people.

47
Q

Modified Angoff method of cut off scores

A

Restricts the probability choices for SMEs to 8 options. The purpose is to apply the Angoff method without overburdening the SMEs.

Subject matter experts are generally briefed on the Angoff method and allowed to take the test with the performance levels in mind. SMEs are then asked to provide estimates for each question of the proportion of borderline or “minimally acceptable” participants that they would expect to get the question correct. The estimates are generally in p-value type form (e.g., 0.6 for item 1: 60% of borderline passing participants would get this question correct). Several rounds are generally conducted with SMEs allowed to modify their estimates given different types of information (e.g., actual participant performance information on each question, other SME estimates, etc.). The final determination of the cut score is then made (e.g., by averaging estimates or taking the median). This method is generally used with multiple-choice questions.

48
Q

Nedelsky method of determining cut off scores

A

SMEs make decisions on a question-by-question basis regarding which of the question distracters they feel borderline participants would be able to eliminate as incorrect. This method is generally used with multiple-choice questions only.

49
Q

discuss basing selection cut off scores on applicant or others’ performance

A

Local norms are developed to help set cutoff scores.
There are three main methods:

Thorndike’s predicted yield: use the percentage of applicants that need to be hired to fill the positions to determine the corresponding cutoff score (see the sketch below).

Expectancy chart method: use the expected percentage to identify the score associated with that percentile, minus one SE.

Using job incumbents: give them the measure and then use their scores as the basis for deriving a cutoff score.

Whatever method you use, make sure it is legally justifiable in that it represents a useful and meaningful selection standard with regard to job performance.
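A minimal sketch of the predicted-yield idea under hypothetical numbers: if 2 of an expected 20 applicants must be hired, the cutoff is set at the score that passes roughly the top 10%:

```python
# Hypothetical predicted-yield cutoff: set the cut score so that the
# proportion of applicants passing equals the proportion that must be hired.

applicant_scores = [52, 55, 58, 60, 61, 63, 64, 66, 68, 70,
                    71, 72, 74, 75, 77, 79, 81, 84, 88, 93]  # local norm sample
openings, expected_applicants = 2, 20          # need to hire 10% of applicants

selection_ratio = openings / expected_applicants           # 0.10
cut_index = int(len(applicant_scores) * (1 - selection_ratio))
cutoff = sorted(applicant_scores)[cut_index]               # score at the 90th percentile
print(cutoff)  # 88: roughly the top 10% of this sample scores at or above it
```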

50
Q

discuss basing cutoff scores for selection measures on SME judgments

A

SMEs judge the content of the selection procedures, and those judgments help determine the cutoff score. Mostly used for multiple-choice or written tests.

Ebel method: uses item difficulty; SMEs judge what percentage of the items a borderline test taker would be able to answer correctly.

The Angoff and modified Angoff methods are discussed in other flashcards.

51
Q

what are the most critical considerations when choosing SMEs for judgmental methods of determining cut off scores for selection measures?

A

At least 10-20% of the sample of SMEs should be representative of the race, gender, shift, etc. of the employee group; the SMEs’ experience on the job under study is critical.

52
Q

Weighted application blanks (WABs)
what are they
when to use
how to develop

A

Used to determine which parts of an application distinguish successful from unsuccessful performers by scoring and weighting individual application items

Use in these situations:
- jobs in which a large number of employees perform similar activities
- jobs in which good personnel records are available
- jobs that require long and costly training programs
- jobs with a high turnover rate
- jobs with a large number of applicants relative to position openings
- jobs in which it is expensive to bring applicants into the organization for interviewing and testing
Usually used for lower-skilled jobs

how to develop:

  1. Choose the criterion: could be job tenure, training program success, or job performance; it is preferred that these be behavioral measures to increase reliability
  2. Identify criterion groups: two groups of high- vs. low-criterion employees are distinguished; you want as many people as you can get for this step
  3. Select application blank items: depends on the content and number of items on the application form itself, which should be derived from JA information
  4. Specify item response categories: developed for each WAB item to serve as a way of scoring applicant responses
  5. Determine item weights: based on the degree of relationship of each WAB item with the criterion of success
  6. Apply weights to holdout groups: you can’t use the same group to evaluate the success of the WAB, so you must cross-validate; new applicants can be used for this, but you must wait until they have been at the org for a considerable amount of time
  7. Evaluate the holdout group’s WAB scores: plot their total WAB scores; high-success employees should have higher WAB scores than low-success employees
  8. Set cutoff scores to use for selection: maximize differentiation between successful vs. unsuccessful employees

Keep in mind that a statistical relationship between the WAB and success must also practically relate to the job in order to withstand legal scrutiny; just because it’s statistically significant doesn’t mean it should actually be used.

53
Q

biodata

what is it, when to use and how to develop

A

Concerns personal background and life experiences, presented in a standardized, self-report, multiple-choice format usually called a biographical information blank (BIB). It rests on the assumption that past behavior predicts future behavior, and measuring it helps indirectly uncover applicants’ motivational characteristics. A lot of information can be gathered from applicants in a short amount of time. BIBs have very good criterion-related validity and minimize adverse impact; even just a few biodata items can make a positive difference. Biodata can be especially useful for prescreening applicants when the selection system involves expensive procedures such as assessment centers.

how to develop:

  1. Select a job: can be applied to higher- or lower-level jobs, but keep in mind that ROI may be lower for lower-level jobs
  2. Analyze the job and define the criterion life history domain: conduct a job analysis and do a literature search to determine what antecedent behaviors and life experiences may account for performance; FJA is especially useful for developing BIBs
  3. Form hypotheses of life history experiences: in relation to the criterion
  4. Develop a pool of biodata items: select or construct biodata items to reflect the experiences hypothesized in the previous step; items can also be drawn from published biodata item sources
  5. Prescreen and pilot test items: have SMEs review the items for bias and try them out on a representative group of respondents (at least 300, preferably 500); remove items as necessary
  6. Score the biodata form: decide how items will be scored; any method used should be cross-validated. A single overall score or multiple scores for dimensions of items on the inventory can be used.
54
Q

biodata items: general guidelines for formatting BIBs

A
  1. deal with past behaviors/experiences
  2. avoid items of personal nature
  3. be specific but brief as possible in question and response options
  4. escape option should be given
  5. measure unique and discrete external events to avoid social desirability in responding
55
Q

biodata items: what might indicate that an item shouldn’t be used?

A

Items exhibiting little response variance
Items with skewed response distributions
Items correlated with protected group characteristics
Items having no correlation with other items thought to be measuring the same life history construct
Items not having validity with the criterion measure

56
Q

developing situational interview questions

A

use critical incidents JA technique to develop questions

incidents are sorted into themes of similar behaviors by SMEs

57
Q

developing behavioral interview questions

A

Use the critical incidents JA technique, but also identify each dimension as essentially describing either maximum or typical performance. Maximum performance deals with technical skills and knowledge; typical-performance dimensions should be used exclusively for behavioral questions, since the interview can be a primary way of measuring OCBs.
Use probes/follow-up questions, and recognize the distinction that should be made between applicants who have work experience related to the job and those who don’t. For applicants without experience, questions should be framed more generally and broadly.

To score, each dimension is scored separately and given a weight reflecting its importance to overall job performance; equal weights should generally be used unless a dimension is at least twice as important as the others to job performance. Place applicants into one of five rank-order groups for each dimension, then sum their weighted dimension scores.

58
Q

using interviews for selection: general rules

A

restrict the use of the interview to the most job relevant KSAs relating to: job knowledge, applied social and interpersonal skills, and personality behaviors

limit the use of preinterview applicant data: better for validity and objectivity. Information that would be okay to look at beforehand includes info relating to the KSAs to be covered in the interview or inconsistencies on the application blank or resume

adopt a structured format: formulate set of questions for each KSA in the interview

use job related questions

use multiple questions for each KSA

rely on multiple independent interviewers: increases reliability; use the same interviewers across candidates

apply formal scoring format: preferably behaviorally anchored rating scales based on critical incidents representative of the dimensions and sum them

train the interviewer: receiving information, evaluating information, and regulating behavior

59
Q

legal issues in physical abilities testing

A

Concerns for female, disabled, and older workers; adverse impact is an issue for these groups.

The test must clearly be linked to critical job tasks that require physical abilities for their completion. If tasks can be modified to reduce or eliminate the physical demands, they should be, and the test should be reevaluated for continued use.

60
Q

considerations for use of ability tests in selection

A

Review reliability data: make sure the reliability studies done on the test are sound, have a large enough sample, etc. A good reliability coefficient is about .85-.90; this means test scores are likely to be closer to an applicant’s true score and the test results are more precise (see the sketch below).

Review validity data: correlations of test scores with other measures and factor analyses of test items.
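To make the “closer to the true score” point concrete, here is a small sketch using the standard error of measurement, a standard psychometric relationship rather than something stated on this card; the test SD is hypothetical:

```python
# Standard error of measurement: SEM = SD * sqrt(1 - reliability).
# Higher reliability -> smaller SEM -> observed scores cluster more tightly
# around the applicant's true score.
import math

test_sd = 10.0  # hypothetical standard deviation of test scores

for reliability in (0.70, 0.85, 0.90):
    sem = test_sd * math.sqrt(1 - reliability)
    print(f"reliability = {reliability:.2f} -> SEM = {sem:.1f} points")
# reliability = 0.70 -> SEM = 5.5 points
# reliability = 0.85 -> SEM = 3.9 points
# reliability = 0.90 -> SEM = 3.2 points
```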

61
Q

personality in selection

A

conscientiousness and emotional stability predict performance across most jobs

One problem is that their predictive validity is still generally quite low, rarely exceeding .20, and our theoretical understanding of why some traits are important for some jobs is still inadequate.

But evidence has been offered that supports using personality to make selection decisions, especially with the growing consensus around the Big Five dimensions. Also, managers like using personality and view it as nearly as important as GMA. Even if personality only moderately predicts performance, it can help with a range of other outcomes managers care about, such as identifying people likely to turn over, engage in OCBs/CWBs, and contribute to teamwork and leadership. Personality traits contribute incremental validity because they don’t correlate highly with other useful selection tools, and they have little to no adverse impact. The effects of personality are longitudinal, predicting outcomes years later, which demonstrates their usefulness.

62
Q

SJTs: current research trends and findings

A

Measurement methods: reflect multiple KSAOs simultaneously
Criterion related validity: is strong
Subgroup differences: are small to moderate

63
Q

SJTS: directions for future research

A

Construction: new ways besides CIT to develop situations and items
Reliability: better methods than Cronbach’s alpha

64
Q

SJTs: issue of reliability

A

Coefficient alpha is the most utilized and reported form, but it is not the most appropriate for SJTs. There is currently a contradiction in the literature regarding the appropriate reliability method for SJTs: some believe that internal consistency is inappropriate given that SJTs are usually multidimensional in nature, yet every published study on SJTs reports internal consistency. The issue of reliability for SJTs is important given that they are often used for high-stakes decision making. The lower internal consistency reliability common to SJTs makes it hard to develop cutoff scores/minimum standards when using them. Thus, test-retest reliability is often considered the more appropriate approach because multidimensionality does not need to be considered.

65
Q

SJTs: how to develop situations

A

Use CIT to gather incidents from SMEs using the antecedents-behavior-consequences method. The antecedents are used to construct the situation, while the behaviors and consequences are used in the construction of alternative response options. Each item is created to represent actual instances of behaviors in the workplace.

66
Q

SJTs: how to score/assign numerical values to responses

A

The rational approach to designing the scoring key uses the judgment of SMEs and the test developer.

The continuous method of scoring the key allows for a range of possible scores per item: a scheme ranging from -1 (worst answer) to +1 (best answer) is assigned to the response options for each item (see the sketch below).
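A minimal sketch of continuous scoring with a hypothetical scoring key and responses:

```python
# Hypothetical continuous SJT scoring: each response option carries a key
# value from -1 (worst option) to +1 (best option); an applicant's test score
# sums the key values of the options they chose.

scoring_key = {
    "item_1": {"a": 1.0, "b": 0.5, "c": -0.5, "d": -1.0},
    "item_2": {"a": -1.0, "b": 1.0, "c": 0.0, "d": -0.5},
}

applicant_responses = {"item_1": "b", "item_2": "b"}

total = sum(scoring_key[item][choice]
            for item, choice in applicant_responses.items())
print(total)  # 1.5 (0.5 for item_1 + 1.0 for item_2)
```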

67
Q

SJTs: instruction format

A

To elicit more knowledge-based responses, item stems ask what the respondent should do, or which answer is the best.
To elicit a more behavioral response, ask what the respondent would do or would most likely do or has done.

68
Q

SJTs: discuss directions and ideas for future research

A

Most SJT research has followed Motowidlo et al.’s (1990) approach to development: the use of CIT to develop situations, a rational key with a multiple-choice response format, and continuous scoring. Continuously using this approach means we may be missing out on something better, so we need to figure out whether there are better ways to develop SJTs. Further, using CIT can limit generalizability to other settings because the incidents are inherent to a specific job/situation. Would the same SJT with identical items have validity that generalizes to different settings? Such validity generalization is possible for context-free measures of KSAOs such as cognitive ability or personality, but would it occur for an SJT? Additionally, most SJT research is conducted in field contexts that apply concurrent designs. Directly related to this issue is the fact that a primary concern in SJT research is the reporting of reliability as internal consistency; the critical incidents techniques that are so often applied are bound to produce high item-specific variance, thus lowering reliability. Using more theory-based approaches to SJT development is a good way to reduce this issue and enhance item homogeneity. Lab studies need to be done more often in SJT research, perhaps for testing theoretical frameworks to complement field-setting research. Research could also look at organizational influences on SJT scores (such as org culture), since SJTs are contextualized by nature.

69
Q

strategies to reduce adverse impact in selection

A

Use JA to identify technical and non-technical aspects of performance
Use cognitive and non-cognitive predictors to measure the full range of KSAOs
Supplement the cognitive predictor with alternative predictor methods (ones that measure more than one construct)
Minimize verbal/reading requirements to reduce the cognitive load of predictors, to the extent supported by the JA
Enhance applicant reactions by using sensitivity review panels of SMEs or simply giving explanations for why a certain measure or procedure is being used
Consider banding (see the sketch below)
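One common way to implement banding is standard-error-of-difference (SED) banding; the sketch below assumes that approach (the card itself does not specify one), with hypothetical reliability and SD values:

```python
# Hypothetical SED banding: scores within a band of the top score are treated
# as statistically indistinguishable, which allows secondary criteria to be
# considered without strict top-down rank ordering.
import math

test_sd, reliability, c = 10.0, 0.85, 1.96   # hypothetical values

sem = test_sd * math.sqrt(1 - reliability)   # standard error of measurement
sed = sem * math.sqrt(2)                     # standard error of the difference
bandwidth = c * sed                          # ~95% confidence band

top_score = 92
band_floor = top_score - bandwidth
print(f"Scores from {band_floor:.1f} to {top_score} fall in the same band")
# SEM = 3.9, SED = 5.5, bandwidth = 10.7 -> band spans roughly 81.3 to 92
```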

70
Q

structured interview typology

A

Interview structure typology consists of content (things like standardization of questions, basing questions on JA, etc.) and evaluation dimensions (using anchored rating scales, taking notes, using the same interviewers across applicants, etc.)

71
Q

Structured interviews: predictive validity

A

Incremental validity over personality and GMA tests because they are only weakly related to each other
Can predict ethical behaviors, or maximum or typical performance

72
Q

structured interviews: Impression management

A

Not all impression management is bad; if it is job related, such as with customer service jobs, it could be assessed during an interview and be useful information

73
Q

Types of structured interview questions

A

Situational: based on the premise that intentions predict future behavior; applicants are asked to describe what they’d do in a hypothetical job-related situation, usually a dilemma; these get at job knowledge or even GMA
Past-behavior questions: based on the premise that past behavior predicts future behavior; applicants are asked to describe what they did in past job-related situations; these get at past experiences
The two have similar reliability and validity estimates and can complement each other if used correctly

74
Q

current state of interview research

A

There is agreement that structured interviews are more valid than unstructured ones, but it is still unclear how to define "structure," and when it is defined, it is unclear whether certain applications of structure adequately reflect its operationalization. More research is needed on alternative structured interview techniques such as phone or video, since these would likely benefit from even more structure than in-person interviews. Impression management also needs more research in terms of when it is job relevant and how it affects the validity of the interview. Research should also look at structured interviews as an alternative to self-report measures for assessing personality.

75
Q

discuss issues and limitations of ACs and how to ameliorate these issues

A

ACs have issues with construct validity. Twenty-five years of research have indicated that the ratings completed at the end of each exercise do not reflect the dimensions they were intended to evaluate, but rather the exercises themselves. Evidence suggests that this discrepancy is due to the lack of cross-situational consistency in candidate behavior. One suggestion to alleviate this is to design ACs as task- or role-based rather than traditional dimension-based. ACs have demonstrated strong criterion and content validity, but the issue of construct validity is pervasive; they tend not to measure what they were designed to measure. Lance (2008) points out that this is largely due to a misapplication and misuse of the MTMM matrix in demonstrating construct validity. The way post-exercise dimension ratings are used to form final dimension ratings and summary ratings resembles an MTMM design, with dimensions representing traits and exercises representing methods. This approach has formed the basis of ACs and AC research in the last few decades, and it operates under the theory that AC dimensions are stable categories that are distinct within exercises and consistent across exercises; thus SDDE (same dimension, different exercise) correlations should be stronger than DDSE (different dimension, same exercise) correlations. But often this is not the case, as DDSE correlations are almost always larger than SDDE correlations. We should focus less on behavioral dimensions and more on roles or tasks in exercises via simulations or work samples. This allows more accurate assessment of behaviors, requires fewer inferences, and reduces the complexity of the rating process and the cognitive load on raters, because the behaviors are more proximal to the actual job.