problem 2 - selection Flashcards
what is the design & validation process in selection?
- job analysis to define a competency model & create a person specification
- used to identify the selection & assessment criteria
- used to decide which selection methods to use
- selection methods are then piloted & validated
- prospective candidates engage in self-selection: they make judgements about whether the role suits them & their abilities/skills
- once candidates apply, a selecting-out process may take place based on eligibility, followed by the selecting-in process
- after some time, information on workers' job performance can be used to examine the validity of the selection methods and decisions
what is validity?
The extent to which the observed test score is a good indicator of the construct you intend to measure/predict
- i.e., based on the test scores, to what extent can you make statements about the construct you intend to measure?
what are criteria?
a measurement used to analyze the performance of a worker/employee
- e.g. creativity, task performance, organizational citizenship behavior (OCB), counterproductive work behavior (CWB)
what are predictors?
a variable used to estimate, forecast, or project future job performance
- interviews, work-sample tests, biodata, psychological tests, etc.
- Used to decide whether or not a candidate is suitable for the job
what is criterion-related validity?
the extent to which a measure is related to an outcome
can be concurrent validity or predictive validity
what is concurrent validity?
predictor & criterion are measured at the same point in time. Can be done in 2 ways:
- In job incumbents: correlate predictor and criterion in a sample of individuals already working for the org
- Use a proxy for future job performance (e.g., work-sample test)
what is predictive validity?
The predictor is measured at time 1 and the criterion at time 2 (a later point), i.e. predictor & criterion are measured at different points in time
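A minimal sketch of how a criterion-related validity coefficient is typically obtained: correlate predictor scores with criterion scores. The numbers below are hypothetical; for predictive validity the criterion scores would be collected later (e.g., supervisor ratings after some months on the job), for concurrent validity at the same time from job incumbents.

    # Criterion-related validity as a predictor-criterion correlation (hypothetical scores)
    from statistics import correlation  # Pearson r, available in Python 3.10+

    predictor_scores = [72, 85, 90, 60, 78, 88, 65, 95]          # e.g., selection test scores (time 1)
    criterion_scores = [3.1, 4.0, 4.2, 2.8, 3.5, 4.1, 3.0, 4.6]  # e.g., job performance ratings (time 2)

    validity_coefficient = correlation(predictor_scores, criterion_scores)
    print(f"criterion-related validity (Pearson r): {validity_coefficient:.2f}")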
what is construct validity?
whether the test measures what it should measure – use several methods to measure the same thing, then check whether the results agree
e.g. does an intelligence test actually measure someone’s intelligence?
can be convergent or divergent (discriminant)
convergent vs divergent (discriminant) validity
convergent = the degree to which the measure is related to other measures of a similar construct
- e.g. a correlation between conscientiousness and related concepts like punctuality
divergent = the degree to which the measure is unrelated to measures of distinct constructs
- e.g. the absence of a correlation between conscientiousness scores and unrelated concepts like creativity
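An illustrative sketch with made-up scores: convergent validity shows up as a substantial correlation between measures of similar constructs, divergent (discriminant) validity as a near-zero correlation with measures of distinct constructs.

    # Convergent vs divergent validity with hypothetical questionnaire scores
    from statistics import correlation

    conscientiousness = [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.8, 3.0]
    punctuality = [3.0, 4.3, 2.6, 4.4, 3.7, 2.9, 4.6, 3.2]  # similar construct
    creativity = [3.5, 3.9, 2.8, 3.6, 2.9, 3.8, 3.4, 3.1]   # distinct construct

    print("convergent (conscientiousness ~ punctuality):", round(correlation(conscientiousness, punctuality), 2))
    print("divergent  (conscientiousness ~ creativity): ", round(correlation(conscientiousness, creativity), 2))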
What is content validity?
a test is content valid when it covers all the components of the behavior being measured
- whenever an important component is missing, the test is no longer content valid
- i.e. whether the content of the test overlaps with what must be measured
- E.g. a driving exam without a parking exercise is incomplete
can be faith or face validity
what is faith validity?
when an organization believes that a selection method is valid because, for example, it is sold by a well-known organization or because it's framed in a way that sounds valuable
what is face validity?
the extent to which a test is perceived as measuring what it is supposed to measure
- e.g., if at the end of an exam the students feel it measured their knowledge of organizational psychology, the exam has high face validity
Important for applicant reactions in job selection
what is reliability?
the consistency or repeatability of the measures, the extent to which a measurement tool gives consistent results
- tests should give the same results when used multiple times
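A minimal sketch of one common reliability estimate, internal consistency (Cronbach's alpha), using hypothetical questionnaire data; test-retest reliability, which matches the "same results when used multiple times" idea most directly, would instead correlate scores from two administrations of the same test.

    # Cronbach's alpha: internal-consistency reliability (hypothetical data)
    from statistics import variance

    def cronbach_alpha(item_scores):
        """item_scores: one list of respondents' scores per item."""
        k = len(item_scores)                                    # number of items
        totals = [sum(resp) for resp in zip(*item_scores)]      # scale score per respondent
        item_var_sum = sum(variance(item) for item in item_scores)
        return k / (k - 1) * (1 - item_var_sum / variance(totals))

    # 4-item questionnaire answered by 6 respondents (made-up scores)
    items = [
        [4, 5, 3, 4, 2, 5],
        [4, 4, 3, 5, 2, 4],
        [3, 5, 2, 4, 3, 5],
        [4, 5, 3, 4, 2, 4],
    ]
    print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")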
what is adverse impact?
an unwanted side effect – e.g. a test that (unintentionally) distinguishes between races, genders or ethnicities
can be direct or indirect discrimination
direct vs indirect discrimination
Direct discrimination = rejecting someone on the basis of their characteristics, intentionally
Indirect discrimination = rejecting someone without the intention to discriminate, e.g. by setting criteria that (unintentionally) exclude certain groups
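A sketch of a common heuristic for flagging potential adverse impact (the "four-fifths rule", not named on the cards): compare selection rates across groups, and flag any group whose rate falls below 80% of the highest group's rate. The applicant and hire counts below are hypothetical.

    # Four-fifths rule check for potential adverse impact (hypothetical counts)
    applicants = {"group_a": 100, "group_b": 60}
    hired = {"group_a": 40, "group_b": 12}

    rates = {g: hired[g] / applicants[g] for g in applicants}  # selection rate per group
    highest = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / highest
        flag = "potential adverse impact" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.2f}, ratio to highest {ratio:.2f} -> {flag}")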