L4: Selection (part 1) Flashcards
What can help with job postings?
Talent acquisition is difficult, but technology can help, for example through customized online job postings and targeted advertising on LinkedIn
How to increase diversity for recruitment?
- emphasize the availability of training and career development programs
- make contacts and gather information
- develop results-oriented programs
- invite representatives
- select a diverse set of contacts and recruiters
- get management approval and support
- develop procedures for monitoring and follow-up
- pay attention to the messages the organization sends
What is an applicant-tracking system?
- the hiring manager authorizes the filling of the position
- the posting goes out through a variety of hiring channels
- candidates receive acknowledgments as they apply
- a rough screening is done on a pass/fail basis
- hiring managers interview the most promising candidates and select those with the highest ratings (see the sketch below)
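The pass/fail screening stage can be pictured as a filter on minimum qualifications followed by a ranking step. The sketch below is a hypothetical illustration; the field names, cut-offs, and applicants are invented and not taken from any particular ATS product.

```python
# Hypothetical illustration of the pass/fail screening stage of an
# applicant-tracking system: filter on minimum qualifications, then
# rank the survivors so hiring managers see the most promising first.

applicants = [
    {"name": "A", "years_experience": 4, "has_degree": True,  "screening_score": 78},
    {"name": "B", "years_experience": 1, "has_degree": False, "screening_score": 90},
    {"name": "C", "years_experience": 6, "has_degree": True,  "screening_score": 85},
]

def passes_screen(applicant, min_years=2, degree_required=True):
    """Rough pass/fail screen on minimum qualifications (cut-offs are invented)."""
    return (applicant["years_experience"] >= min_years
            and (applicant["has_degree"] or not degree_required))

shortlist = sorted(
    (a for a in applicants if passes_screen(a)),
    key=lambda a: a["screening_score"],
    reverse=True,
)

for a in shortlist:
    print(a["name"], a["screening_score"])  # prints C 85, then A 78
```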
What is important for attracting applicants?
Person-job fit, i.e., the matching of the applicant's abilities and skills to the job, and person-organization fit, i.e., the matching of the applicant's values and personality to the organization. Recruiters usually use the internet, company websites, and online networking sites
What do applicants find most important in an advertisement?
Job description
Salary
Key responsibilities
Career prospects
Closing date
Company details
Location
Experience needed
What organizational characteristics influence applicants’ attraction to firms?
- work environment
- organizational image
- employer image (broader), which can be improved by providing more information
Types of biographical items?
Historical, future/hypothetical, external, internal, objective, secondhand, discrete, summative etc
What does biodata mean?
- items pertaining to historical events that may have shaped the person’s behaviour and identity
- involves deducing abilities based on education and then making hypotheses
- a rational approach can be taken by asking more indirectly; asking about specific developmental periods can help elicit accurate responses
What affects applicants’ perceptions of organizational attractiveness?
- the content of the job advertisement affects this more for experienced than inexperienced job seekers, due to central processing of the message
- peripheral cues, such as the attractiveness of employees in advertisements, influence inexperienced job seekers more
- the resulting attitude depends on whether the cues are positive or negative
What are the consequences of inflated applicant expectations?
Employers try to present themselves positively, but employees hired under inflated expectations are more likely to become dissatisfied and quit
Why is it important to have realistic job previews?
- reduces turnover and attrition during selection, and is associated with better performance (although it can reduce applicant acceptance)
- works better earlier in the selection process
- while retention rates improve with RJPs, the improvement is smaller in low-complexity jobs
How to improve completion rate?
Most candidates use smartphones to search and apply for jobs, but 60% quit in the middle of filling out an application. This is due to length, complexity, and overly brief job descriptions, and it can result in losing top talent. Solution: ask fewer questions in online applications, and use AI and the applicant-tracking system to obtain information about the application process (dropout fell by 50%).
How can negative views and attitudes of applicant perceptions affect recruitment?
Negative views mean that good candidates might be lost
Negative attitudes affect motivation and performance at the interview, as well as the propensity to respond truthfully.
Feelings of injustice can result in negative feelings towards the selection process
Which selection procedures do candidates like?
Like: Work samples + unstructured interviews + interviewers who show high positive non-verbal behaviour (e.g. smiles & nods)
Dislike: Tests
What can affect perceptions of fairness?
- the outcome (e.g., a job offer) shapes fairness judgments via self-interest
- whether the selection decision is in favour of the candidate
- inherent unfairness, such as having too little time for the test, is seen as unfair especially if the candidate does not get the job
- judging the process as unfair can buffer the threat to self-esteem from rejection (externalizing the failure)
Methods of screening
- Social Networking Websites (SNWs)
- Recommendations
- Reference checks
- Biodata
- Curricula vitae (CVs)/résumés
- Honesty/integrity tests
What is the role of social networking websites?
Relevant for young graduates, as recruiters have limited background information about them. SNWs can be professional (information about person-job fit) or personal (information about person-organization fit). LinkedIn is now the most common.
Which KSAOs can be measured from screening?
Big 5, narcissism, cognitive ability -> high construct validity. Job relevant background, language fluency, network ability and social capital, communication skills, leadership, persuasion
What are some issues with SNWs?
- not all info is useful
- screening with SNWs is not structured (no criteria used for assessing the content, content presented is not consistent, some applicants have no SNWs)
- information distortion occurs when people do not present themselves honestly; information might also be job-irrelevant
- personal information can be found on SNWs, which could activate stereotypes
- criterion-related validity has been shown for academic performance but not for job performance
What is the role of recommendations?
- these are opinions of relevant others to evaluate how well an applicant did in the past (employment, education, character, personality and interpersonal skills, job performance ability)
- the referee needs to have observed the applicant enough and be competent enough to make evaluations
- mean criterion-related validity is low (.14) because recommendations rarely include unfavourable information, making it hard to discriminate between candidates
How can recommendations be made more meaningful?
Specify the referee's familiarity with the candidate and with the job, give specific examples of performance, and indicate the group to which the candidate is compared
What are the recommendations to make reference checks most useful?
- consistent
- relevant
- written
- based on public records like compensation
- also implementing a structured telephone reference check
What is important to keep in mind about reference checks?
- mean criterion validity = .26
- one of the most frequently used screening methods
- need to be: consistent, relevant, written, based on public records if available
- provides valuable info
Important aspects of biodata?
- usually self-report in an application form
- assumes a causal relationship between prior life events and subsequent behaviour (focussed on historical items and present values, attitudes, and interests)
- good when hiring large numbers of employees and when turnover is high
- items should be job-related and overall score should be given
- cross-validated biodata items exist for many occupations
What are the statistical properties of biodata?
- mean criterion validity = .38
- high incremental validity: explains an additional 6% of variance in quality/quantity of work, 7% for interpersonal relationships, and 9% for retention
Issues with biodata?
Biodata may be used on a different population than the one it was developed for, and there is potential for faking or distortion. Faking can be reduced with lie scales, verification, and requiring elaboration on answers, which reduces impression management and forces applicants to remember more accurately.
What are some characteristics of CVs/resumes?
- criterion validity= .25
- involves competency statements which are self-evaluations made by candidates
- photographs can increase probability of interview offer for average CVs but not starting salary
- video CVs can help applicants better express themselves
What are some issues with CVs?
prone to cognitive biases and heuristics, placed in stereotype-based categories, applicant distortion
What is important to overcome these issues?
train raters to make sure they focus on job-related factors, assess interrater reliability, create structured rating schema to enhance consistency and accuracy
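As a minimal illustration of checking interrater reliability when using a structured rating schema, the correlation between two raters' scores on the same CVs gives a simple consistency index. The ratings below are invented; in practice intraclass correlations (ICCs) are more commonly reported.

```python
# Minimal sketch: consistency between two raters scoring the same CVs.
# A Pearson correlation is a simple consistency index; intraclass
# correlations (ICCs) are the more usual choice in selection research.
import numpy as np

rater_1 = np.array([4, 3, 5, 2, 4, 3])  # hypothetical ratings of six CVs
rater_2 = np.array([5, 3, 4, 2, 4, 2])

interrater_r = np.corrcoef(rater_1, rater_2)[0, 1]
print(f"Interrater correlation: {interrater_r:.2f}")  # ~0.79 for these invented ratings
```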
What are honesty/integrity tests?
- mean criterion validity is .31
- types: overt integrity tests (attitudes towards theft and dishonesty) and personality-oriented measures (assess broader dispositional traits that predict counterproductive behaviour)
Conditional reasoning testing
On the surface it focuses on how people solve inductive-reasoning problems, but the true intention is to determine, based on the solutions they choose, what their biases and preferences are
What is the main issue with honesty/integrity tests?
There can be distortion/faking. The solution is to make the test more implicit, for example by embedding it in a situational judgment test (SJT)
What is Google’s recruitment process?
- a series of interviews with different interviewers, with many rounds
- many interview topics covered like general logic puzzles
- the focus is on skills and intelligence
- unusual questions are often asked that are not geared towards specific skills or experiences, to measure how well a candidate works out the next steps
What are the functions of interviews?
- useful for selection and for applicants to learn more about the organization
- fills information gaps left by other selection devices (e.g., communication skills)
- assesses aspects measured through face to face interaction
- answers whether there is a fit with organizational values and other employees
Impression management
Controlling information to steer others’ opinions in the service of personal or social goals
Functions of impression management?
- self-promotion: presenting yourself as highly competent by guiding the conversation and highlighting job-relevant abilities, experiences, etc. → positively related to interview ratings, especially when initial perceptions are low
- image creation: deceptive tactics, i.e., intentional misrepresentation to influence perceptions through distortion and false impressions → reduces ratings
- ingratiation: making others like you by showing interest, complimenting, smiling, and opinion conformity → higher ratings
How good are interviewers at detecting impression management?
Not that good, but above chance, and experience does not help detection
What social/interpersonal factors can influence interview decision-making?
- interviewer-applicant similarity, which leads to attraction, more positive affect, and higher interview ratings; it raises expectations of performance, but its effect can be reduced through multiple interviewers and structure
- non-verbal behaviour (e.g., handshakes) has more positive effects, especially when the verbal content is good
What is the role of handshakes?
A handshake increases immediacy (interaction between individuals involving proximity and perceptual availability), which leads to greater attribution of liking. A firm handshake involves a strong, complete grip, vigorous shaking of lasting duration, and eye contact (these tend to be consistent, with good reliability). The handshake mediates the relationship between extraversion and hireability evaluations
Hypotheses into the role of handshakes?
- Individuals with a firm handshake will receive more positive evaluations during employment interviews
- Extraversion will correlate positively with handshake ratings
- The handshake is a behavioral mediator of the relationship between extraversion and hirability evaluations in employment interviews
- Handshakes from women will be rated less favourably, resulting in lower interviewer assessments for women
How were handshakes investigated?
This study examined 98 undergraduates in mock interviews conducted by HR professionals. Handshakes were rated by five independent raters on grip, strength, duration, vigor, and eye contact (ICC = .85). Participants also completed a Big Five personality test, and interviewers provided hiring recommendations (α = .90). Physical attractiveness and professional dress were rated separately (ICC = .79, .89)
What are the gender differences in handshakes?
- women showed less handshake firmness and less eye contact, but this did not affect evaluations
- weak handshakes led to similar employment-suitability ratings for both genders
- women with a firm handshake had higher ratings than men with a handshake of the same firmness (the cue was more salient for women)
How can pre-interview impressions influence recruitment?
Behavioural bias is when an interviewer acts in a way that confirms their impressions. Cognitive bias is when interviewers either distort information to support impressions or use selective attention and recall of information. Both are examples of a self-fulfilling prophecy: the initial impression is confirmed during the interview.
How can first impressions influence recruitment?
100 ms is enough to form an impression. It involves judgements about attractiveness, likeability, trustworthiness, competence, and aggressiveness based on facial appearance. More time increases confidence in the judgement, and interviewers then seek information to confirm it.
How can prototypes and stereotypes influence recruitment?
Interviewers develop a prototype of a good candidate and seek to accept people who match the prototype. Stereotypes (e.g. gender-based, ethnic-based) can also distort evaluations.
How can contrast effects influence recruitment?
Other candidates are used as a standard, so an average candidate can be rated favourably if the previous candidates were evaluated unfavourably. The effect is very persistent and can be reduced through intensive workshops.
How can info recall influence recruitment?
Interviewers who were least accurate in recall assumed the interview was favourable and adopted a halo strategy (so there was less variability in their ratings). Those who took notes were more accurate and differentiated better between candidates.
How can individual differences influence recruitment?
- ethnicity: in combination with accent and interviewer ethnicity, it can be perceived less positively; applicants with natural hairstyles are seen as less competent and less professional
- conscientiousness leads to follow-up interview invitations, while extraversion and emotional stability lead to job offers
- positive bias towards attractive other-sex applicants and negative bias towards attractive same-sex applicants (to buffer threats to the self)
- disability has mixed findings
- scents led to higher ratings when women were assessing
What is question sophistication?
- basing questions on the results of job analysis and the critical incident technique (CIT; turning incidents into questions)
- involves longer interviews and more questions
- can use BARS for rating to enhance accuracy
- high criterion validities and high interrater reliability
What are the types of sophisticated questions?
Experience-based Qs are past-oriented and ask about situations relevant to the current job.
Situational Qs are future-oriented: candidates are given a set of circumstances and show how they would respond to the situation. These are highly valid and resistant to contrast error and gender or race bias. They can be rated using BARS.
What is questioning consistency?
- involves asking the same Qs across candidates, limiting follow-up Qs and elaboration
- candidates only ask Qs at the end
- same interviewers across candidates
What is evaluation standardization?
Involves rating answers on scales which are behaviourally anchored, not discussing applicants between interviews and interviewer training
Characteristics of structured interviews?
- more valid and reliable than unstructured interviews
- reduces mean differences between groups based on race, gender, disability
- reduces the effect of impression management on interview ratings
- taps into knowledge, skills, organizational fit, interpersonal skills
- greater degree of job relatedness than unstructured
What are asynchronous video interviews (AVIs)?
Candidates video-record their answers to a predefined set of interview questions, which are reviewed and rated by the hiring organization
Evaluation of AVIs?
Pros: more flexible, faster, and cheaper to use than face-to-face or video conference interviews. Recent study shows that performance is similar to traditional interviews. Becoming increasingly popular
Cons: often perceived more negatively by applicants, including as less fair, less user-friendly, and less valid, leading to lower organizational attraction than traditional modalities and synchronous video interviews
How is media richness increased in AVIs?
By replacing text-based with video-based company introductions and interview questions which enhanced social presence. This reduced anxiety and improved performance
How is computer-based testing being used?
Advancements in technology enable computer-based screening (CBS) for hiring, offering efficiency, standardization, and accessibility. Computer-adaptive testing (CAT) personalizes difficulty, reducing cheating risks. CBS benefits include automated data recording and disability accommodations, but challenges include cost, cheating, and the digital divide (gap between those who have access to tech). Research supports Web-based testing’s reliability, but success depends on organizational readiness.
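A minimal sketch of the adaptive idea behind CAT is shown below: item difficulty moves up after a correct answer and down after an incorrect one. Real CAT systems select items and estimate ability with item response theory; this toy loop and its numbers are only illustrative.

```python
# Toy illustration of computer-adaptive testing: difficulty is adjusted
# after each response. Real CAT uses item response theory (IRT) to pick
# items and estimate ability; this sketch only shows the adaptive loop.

def run_adaptive_test(responses, start_difficulty=0.5, step=0.1):
    """responses: list of booleans (True = answered correctly), in order."""
    difficulty = start_difficulty
    trajectory = [difficulty]
    for correct in responses:
        # harder item after a correct answer, easier after an incorrect one
        difficulty += step if correct else -step
        difficulty = round(min(max(difficulty, 0.0), 1.0), 2)  # clamp to [0, 1]
        trajectory.append(difficulty)
    return trajectory

print(run_adaptive_test([True, True, False, True]))
# -> [0.5, 0.6, 0.7, 0.6, 0.7]
```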
What is AI?
a system’s ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation. Includes machine learning approaches like deep neural networks but also simple algorithms like regression, natural language processing or voice recognition
How is AI used in recruitment and selection?
Used for higher speed and efficiency gains. Examples include: chatbots to guide applicants through the process, match talent to most relevant vacancies, analyse social media, CV scanning, interview bots and rating of responses
How can AI be evaluated?
Benefits: reduction in human bias and error -> AI capable of processing a vast amount of data in a more standardized, consistent, and objective manner, provides timely feedback to applicants
Drawbacks: bias could be introduced into AI; it may be seen as less fair; it is seen as a black box due to low transparency and explainability; privacy issues, accountability issues, and limited validation of AI (limited accuracy)
What are some suggestions for improving the interview process and outcome?
- link interview questions tightly to job analysis results, ensure that behaviours and skill in the interview are similar to the job
- ask the same questions of each candidate
- anchor rating scales
- interview panels are no more valid than individual interviews
- use a well-designed and properly evaluated training program to communicate this information to employees
- document job analysis and interview development procedures
- planned system of feedback to interviewers to let them know who succeeds and fails
What are some steps of following a structured interview?
- open the interview-> explain purpose and structure
- preview the job
- ask questions about minimum qualifications
- ask experience-based questions
- ask situational questions
Computer scoring of texts
It analyses texts, but there are issues with information retrieval: scoring relies on a lexical match between words in the query and words in the document being analysed
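A minimal sketch of the lexical-match idea: the score is the proportion of query words (e.g., from a job requirement) that also appear in the document (e.g., a CV). Real systems use far richer information-retrieval and NLP methods; the texts below are hypothetical.

```python
# Minimal sketch of lexical matching: score = proportion of query words
# that also appear in the document. Real resume-scoring systems use much
# richer information-retrieval and NLP techniques.

def lexical_match(query: str, document: str) -> float:
    query_words = set(query.lower().split())
    doc_words = set(document.lower().split())
    if not query_words:
        return 0.0
    return len(query_words & doc_words) / len(query_words)

job_requirement = "project management experience with budgeting"
cv_text = "five years of project management and budgeting experience"
print(f"{lexical_match(job_requirement, cv_text):.2f}")  # 0.80 (4 of 5 query words matched)
```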
What is important about polygraph testing?
It is likely to lead to errors, and physiological indicators can be altered by conscious effort on the part of applicants
What is the general view on using technology?
The availability of big data and technological advancements (e.g., social media, mobile and Web-based selection, and virtual reality technology) create new opportunities, but there is much we need to understand about their validity and reliability before we can recommend their use more widely.
What is AI recruiting?
Any procedure that makes use of AI for the purposes of assisting organizations during the recruitment and selection of job candidates
AI
A system’s ability to interpret external data correctly, to learn from data and use those learnings to achieve specific goals and tasks through flexible adaptation
What kind of literature was excluded?
Those with a sole focus on a technical assessment of algorithmic fairness and technology-enhanced recruiting. Only if an article discussed the application of recruiting and ethical implications was it included. 5 different perspectives for categorizing the articles emerged: theoretical, practitioner, legal, technical and descriptive
Findings for theoretical perspective?
One study applied ethical principles from other disciplines (e.g., medicine, AI) to HR, identifying five key principles: privacy, opt-out options, institutional review, transparency, and respect for personal development. Another study advocated for a feminist design justice framework to prevent AI hiring systems from perpetuating historical inequalities. A humanistic approach was also suggested, emphasizing the integration of technology with human-centered hiring.
Findings for practitioner perspective?
Some studies argued that AI can improve candidate assessment by addressing biases in traditional hiring methods. Others warned about unresolved ethical, legal, and privacy issues. Several papers provided guidelines for managers on ethical AI implementation in recruitment
Findings for legal perspective?
Title VII of the US Civil Rights Act protects against employment discrimination, but scholars argue that it is outdated and fails to address AI’s potential for implicit bias. One study suggested shifting the burden of proof onto employers to ensure AI-generated hiring data is non-discriminatory. Studies assessed bias mitigation in AI recruiting under US and UK legal frameworks.
Findings for technical perspective?
Some papers examined ethical concerns through algorithmic mechanisms. Others proposed technical solutions, such as AI auditing frameworks and computational methods to reduce bias. One study highlighted challenges in making AI hiring fairer and more transparent.
Findings for descriptive perspective
Experimental studies assessed candidate reactions to AI-based hiring. Some studies found AI-driven decisions perceived as less fair, while others found no difference compared to human decisions. Factors like information transparency and applicants’ computer experience influenced perceptions of AI fairness.
How can AI be applied in each step of recruiting?
1. outreach: identifying possible candidates through job ads
2. screening: creating a shortlist of promising candidates through matching
3. assessment: identifying which candidate is best for the job
4. facilitation: coordinating with applicants, sending job offers, scheduling interviews, answering questions
What are some ethical considerations?
AI in hiring presents a complex landscape of opportunities and risks. It promises to reduce human bias and promote diversity through objective assessments, yet potentially introduces algorithmic bias and reinforces existing homogeneity. Privacy and ethical concerns are significant, with risks of discrimination and reduced applicant autonomy. While AI enhances recruitment efficiency by automating tasks, its accuracy and transparency remain contested. The technology’s impact hinges on careful implementation, ongoing ethical oversight, and balancing technological capabilities with human judgment.
How to mitigate ethical risks?
- Governmental regulation: address classification bias, track and monitor selection processes, prevent potential legal infringements
- Organizational standards: protect and secure data, ensure transparency about AI analysis, maintain human oversight in algorithmic decisions
- Technical due diligence: develop data literacy, remove potentially discriminatory predictors, create interpretable AI models, continuously check technologies for bias
- Awareness: recruit professionals who understand the risks and opportunities of AI recruiting
What should future research focus on?
Future research on AI-enabled recruiting should integrate ethical frameworks like utilitarianism, deontology, and contract theory to assess privacy, transparency, and fairness. Business ethics, such as social contract theory, can further explore AI’s impact. Aligning AI recruiting with broader AI ethics guidelines is essential. Empirical studies should examine AI’s accuracy, bias reduction, informed consent, and applicant perceptions. Comparing AI to traditional hiring, assessing privacy concerns, and analyzing applicant reactions can refine ethical and practical guidelines for AI-driven recruitment.
What are the practical implications?
- firms should think and act beyond regulation and establish organizational standards to ensure the ethical use of AI recruiting tools like compliance with privacy laws, transparency on AI and human oversight
- multi-tier approach is needed for all measures
What has been found about validity findings from previous studies?
- widely used predictors exhibit lower operational criterion-related validity with regard to overall performance
- relative predictive power of cognitive ability changed
- structured interviews had the highest mean validity
- validity is lower in restricted samples
How can range restriction be adjusted for?
The common approach has been to compute the average criterion unreliability and range restriction from available samples and use these as correction factors for observed validity. This is only valid if the samples providing these estimates are random or representative, but these samples are usually non-random, leading to systematic overcorrections
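For reference, the standard psychometric corrections involved (not spelled out on these cards) are the correction for criterion unreliability and the Thorndike Case II correction for direct range restriction, with u the ratio of the restricted to the unrestricted predictor standard deviation:

```latex
% Correction for criterion unreliability (r_{yy} = criterion reliability):
r_{c1} = \frac{r_{xy}}{\sqrt{r_{yy}}}

% Thorndike Case II correction for direct range restriction,
% with u = s_{restricted} / S_{unrestricted}:
r_{c2} = \frac{r_{c1}/u}{\sqrt{1 + r_{c1}^{2}\left(\frac{1}{u^{2}} - 1\right)}}
```

Applying a correction factor derived from heavily restricted samples to a sample with little restriction is what produces the overcorrection described above.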
What are the key insights regarding overcorrection?
- correction factors come from predictive validity studies
- the degree of range restriction is substantial in predictive studies
- applying a large correction factor results in overcorrection
- yet about 80% of validity studies are concurrent, where range restriction is much smaller
Why do predictors specific to individual jobs fare better?
- supports the end of the sign-sample dichotomy, as it involves conducting careful job analysis
- shows value of an investment in a custom-designed selection system
- some predictors are not applicable for entry-level hiring
- some predictors reflect changing domains in terms of time and effort
How do new developments alter previous findings?
New evidence has led to updates in predictor validity rankings. One study found that the validity of cognitive ability for job performance is lower than previously estimated, likely due to shifts in job structures and performance criteria. This lowers cognitive ability’s ranking from 5th to 12th. Meanwhile, another study reported lower criterion reliability for managerial roles, leading to a revised estimate for assessment centers, increasing their validity and ranking them alongside work samples in 4th place.
What is the main issue with meta-analyses?
the overemphasis on mean validity values while ignoring variance across studies. They caution against assuming the meta-analytic mean is universally applicable, as standard deviations reveal considerable variability. Previous studies have found large credibility intervals, meaning their effectiveness can vary significantly depending on implementation quality and study design. This is due to: broad predictor categories, quality of selection procedures and job differences. Practitioners should consider: moderator analyses, look at credibility intervals and local validity studies.
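For reference, in Hunter–Schmidt-style meta-analysis the 80% credibility interval mentioned above is typically computed from the mean corrected validity and the estimated standard deviation of true validities (a standard formula, not stated on these cards):

```latex
% 80% credibility interval around the mean corrected validity:
\bar{\rho} \pm 1.28 \times SD_{\rho}
```

A large SD of true validities therefore means the interval is wide and the meta-analytic mean alone is a poor guide to any single setting.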
What is the verdict on criterion validity?
The role of cognitive ability in predicting job performance has diminished based on recent findings, which show lower correlations than previously thought. But it is still important in: practicality for selection, as they are widely available and can be administered easily. They are also the best predictor for performance in educational and job training settings, especially when significant training investments are involved. Cognitive ability predicts task performance better than other job behaviors and is more relevant for maximum performance (short-term, high-effort tasks) than typical performance (sustained tasks). Organizations should choose selection strategies based on whether they prioritize maximum or typical performance.
Operational validity
refers to the criterion-related validity of a selection procedure in the applicant pool (so, free from range restriction introduced via selection) for predicting a criterion free of measurement error.
How should practitioners estimate operational validity?
- correct for criterion unreliability first, then for range restriction (see the sketch after this list)
- correcting for unreliability is important for all validity studies
- use an estimate of interrater reliability not internal consistency (preferably local or if not then highly similar settings with similar measures)
- use many reliability estimates to triangulate
- lower reliability estimates produce larger corrections
- if objective performance measures are used then consistency is a basis for reliability
- to correct for range restriction, the standard deviations of applicants and employees are needed
- range restriction is important if predictor was used to select the validation sample, otherwise will not have a large influence
- use local applicants and incumbents standard deviation
- be cautious about using publisher norms and formulas that convert selection ratios into U-ratio
- do not use mean range restriction correction for concurrent studies, and use with extreme caution for predictive studies
- make no correction unless confident in the standard deviation information
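A minimal sketch of the two-step order recommended above (criterion unreliability first, then range restriction) is given below; all input values are invented for illustration only.

```python
# Minimal sketch of estimating operational validity in the recommended
# order: (1) correct the observed validity for criterion unreliability,
# (2) correct for direct range restriction (Thorndike Case II).
# All input values below are invented for illustration.
import math

def correct_for_criterion_unreliability(r_obs: float, r_yy: float) -> float:
    """Disattenuate the observed validity for criterion unreliability."""
    return r_obs / math.sqrt(r_yy)

def correct_for_range_restriction(r: float, u: float) -> float:
    """Thorndike Case II; u = restricted SD / unrestricted (applicant-pool) SD."""
    return (r / u) / math.sqrt(1 + r**2 * (1 / u**2 - 1))

r_obs = 0.25   # observed validity in the validation sample
r_yy = 0.60    # interrater reliability of the performance criterion
u = 0.80       # restriction ratio (incumbent SD / applicant SD)

r_step1 = correct_for_criterion_unreliability(r_obs, r_yy)
r_operational = correct_for_range_restriction(r_step1, u)
print(f"After unreliability correction: {r_step1:.3f}")       # ~0.323
print(f"Estimated operational validity: {r_operational:.3f}")  # ~0.392
```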