Week 4 - Validity Flashcards
1
Q
Binning, J.F., & Barrett, G.V. (1989). Validity of personnel decisions: A conceptual analysis of the inferential and evidential bases. Journal of Applied Psychology, 74(3), 478–494.
A
- Psychological constructs and operational measures are inferentially linked in personnel selection.
- Validation involves accumulating judgmental and empirical evidence to support inferences.
- Construct, content, and criterion-related validity are unified within a conceptual framework.
- Validation misconceptions and the importance of validating performance criteria are addressed.
- Calls for a shift in behavioral scientists’ roles in personnel selection, emphasizing programmatic research.
- Validity is about the soundness of inferences made from test or assessment information.
- Psychological constructs serve as labels for clusters of covarying behaviors, simplifying information exchange.
- Constructs are hypotheses about which behaviors covary and are used to describe behavioral domains (see the simulation sketch after this list).
- Four core inferences in construct validation link predictor measures and predictor constructs to criterion measures and the underlying performance domain.
- Construct validity encompasses evidence supporting any inference about construct-measure or construct-construct links.
- Traditional validity concepts (construct, content, criterion) represent different evidential bases for supporting validity inferences.
- Distinctions between predictor construct domains and performance domains, emphasizing their conceptual and operational differences.
- Criterion-related validity concerns are often neglected, distorting broader conceptions of validity.
- Construct-related and content-related evidence offer different approaches for justifying validity inferences, emphasizing the need for multi-faceted validation strategies.
- The importance of rigorous criterion development and validation, often overlooked, is highlighted.
- Performance domains involve behavior-outcome units, requiring delineation of valued outcomes and behaviors.
- Job analysis provides critical evidence for justifying validity inferences, though standard practices are lacking.
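A minimal simulation sketch of the "clusters of covarying behaviors" idea referenced above; all variable names and numbers are hypothetical, not from the paper. Behaviors driven by the same latent construct correlate strongly with one another, which is what licenses summarizing the cluster with a single construct label.

```python
import numpy as np

# Hypothetical ratings of five observed behaviors for 200 employees.
# Behaviors 1-3 share one latent construct; behaviors 4-5 share another.
rng = np.random.default_rng(42)
n = 200
construct_a = rng.normal(size=n)  # latent trait behind behaviors 1-3
construct_b = rng.normal(size=n)  # latent trait behind behaviors 4-5

behaviors = np.column_stack([
    construct_a + rng.normal(scale=0.5, size=n),  # behavior 1
    construct_a + rng.normal(scale=0.5, size=n),  # behavior 2
    construct_a + rng.normal(scale=0.5, size=n),  # behavior 3
    construct_b + rng.normal(scale=0.5, size=n),  # behavior 4
    construct_b + rng.normal(scale=0.5, size=n),  # behavior 5
])

# Correlation matrix: within-cluster correlations are high, between-cluster
# correlations are near zero, so each cluster behaves like one construct.
print(np.corrcoef(behaviors, rowvar=False).round(2))
```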
2
Q
Landy, F. J. (1986). Stamp collecting versus science: Validation as hypothesis testing. American Psychologist, 41(11), 1183–1192.
A
- Validation in psychology equates to traditional hypothesis testing.
- Trinitarian view (content, criterion-related, construct validity) deemed overly simplistic and restrictive.
- Emphasizes a unitarian approach to validation, advocating for a broader, more integrated understanding.
- Critiques the rigid adherence to predefined models of validity, suggesting flexibility and adaptability are key.
- Advocates for validation as a multidimensional, inferential process rather than confined to specific models.
- Stresses the role of constructs in psychological measurement, urging a move away from narrow definitions.
- Suggests validation involves collecting evidence to support or refute hypotheses about what test scores imply (see the correlation-test sketch after this list).
- Calls for an end to the artificial distinction between behavior and mental processes in test validation.
- Encourages psychologists to leverage their expertise in hypothesis testing over conforming to restrictive guidelines.
- Critiques the Uniform Guidelines on Employee Selection Procedures for limiting validation approaches and undermining the role of constructs in measurement.
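A minimal sketch of Landy's framing of validation as ordinary hypothesis testing, using simulated data (all names and numbers are illustrative): the hypothesis is that test scores carry information about job performance, and the validity coefficient is the estimate tested against the null of no relationship.

```python
import numpy as np
from scipy import stats

# Simulated data: test scores and performance ratings for 150 hires.
rng = np.random.default_rng(0)
test_scores = rng.normal(size=150)
performance = 0.3 * test_scores + rng.normal(scale=0.95, size=150)

# Validation as hypothesis testing: H0 says scores imply nothing about
# performance (rho = 0); r estimates the validity, p weighs the evidence.
r, p = stats.pearsonr(test_scores, performance)
print(f"validity coefficient r = {r:.3f}, p = {p:.4f}")
```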
3
Q
Whetzel, D. L., & Wheaton, G. R. (Eds.). (2007). Applied measurement: Industrial psychology in human resources management, Chapter 13.
A
4
Q
Sackett, P. R., Zhang, C., Berry, C. M., & Lievens, F. (2021). Revisiting meta-analytic estimates of validity in personnel selection: Addressing systematic overcorrection for restriction of range.
A
- Focus: Revisiting the validity of personnel selection procedures, specifically the impact of range restriction corrections on these validity estimates.
- Issue Identified: Systematic overcorrection in meta-analytic estimates due to flawed approaches in accounting for range restriction.
- Range Restriction: Affects validity estimates in personnel selection meta-analyses; traditional correction methods often overestimate validity (see the correction sketch after this list).
- Methodological Critique: Five common approaches for estimating range restriction artifact distributions are critically evaluated, revealing significant flaws.
- Meta-analytic Reassessment: Revised validity estimates of selection procedures with adjusted range restriction corrections, including cognitive ability tests, structured interviews, and integrity tests.
- Findings: Most selection procedures remain highly ranked but with lower mean validity estimates, suggesting previous overestimation.
- Structured Interviews: Emerged as the top-ranked selection procedure in the revised analysis.
- Inclusion of Diversity Considerations: Analysis includes Black-White subgroup differences in selection procedures, addressing validity-diversity trade-offs.
- Consequential Implications: Revised validity estimates impact understanding of the effectiveness of various selection procedures.
- Recommendations: Apply no range restriction correction, or only a revised one, in concurrent validation studies; calls for more accurate and representative artifact distributions.
- Broader Implications: Highlights the need for more cautious and accurate approaches in meta-analytic estimations, particularly in personnel selection research.
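A minimal sketch, with made-up numbers, of the standard Thorndike Case II correction for direct range restriction (the u values here are illustrative assumptions, not figures from the paper). Its sensitivity to u shows how overstated artifact distributions produce the overcorrection Sackett et al. describe.

```python
import math

def correct_range_restriction(r: float, u: float) -> float:
    """Thorndike Case II correction for direct range restriction.

    r: correlation observed in the restricted (selected) sample
    u: ratio of unrestricted to restricted predictor SDs (u >= 1)
    """
    return (u * r) / math.sqrt(1 + (u ** 2 - 1) * r ** 2)

# The same observed validity of .25 corrected under two assumed u values:
# a larger assumed u yields a much larger "corrected" validity, so
# artifact distributions that overstate u inflate meta-analytic estimates.
observed_r = 0.25
for u in (1.1, 1.5):
    print(f"u = {u}: corrected r = {correct_range_restriction(observed_r, u):.3f}")
```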
5
Q
Kell (2022). The criterion problem.
A
- The Criterion Problem: Challenges in conceptualizing and measuring success in organizational settings.
- Criterion: Defined as success in behaviors or outcomes valuable to influential organizational constituencies.
- Conceptual Criterion: Abstract concept of success, context-dependent, often multidimensional.
- Operational Criterion: Practical measures defining the conceptual criterion.
- Criteria Types: Behaviors and outcomes, with varying levels of measurement judgment.
- Multidimensionality: Success is complex, varying across situations and jobs.
- Behavior vs. Results: Focus on actions or their outcomes, influenced by organizational goals and scientific understanding.
- Timeframe: Immediate, proximal, or distal criterion measurement, affecting inferences and validity.
- Relevance, Deficiency, Contamination: Key quality aspects of operational criteria.
- Hard and Soft Criteria: Objective (hard) and subjective (soft) measures, each with unique challenges.
- Criterion Dimensionality: Balancing multidimensional aspects with decision-making needs.
- Criterion Distortion and Unreliability: Risks in criterion measurement and interpretation (see the attenuation sketch after this list).
- Hard Criteria Challenges: Objective measures’ narrow scope and context-dependence.
- Soft Criteria Challenges: Subjectivity and biases in evaluative judgments.
- Negotiation in Criteria Definition: Balancing stakeholder interests and scientific insight.
- Criterion Measurement Over Time: Impact of time on performance assessment and validity.
- Specific Job-Related Criteria: Variability in relevance and application across different jobs.
- Human Judgment in Criteria: Inherent in both hard and soft criteria decision-making.
- Scaling Behaviors and Results: Frequency, quality, and importance in performance evaluation.
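A minimal sketch, with made-up numbers, of the standard correction for attenuation due to criterion unreliability, one concrete way the unreliability noted above distorts conclusions drawn from operational criteria (the reliability value is an illustrative assumption).

```python
import math

def correct_criterion_unreliability(r_xy: float, r_yy: float) -> float:
    """Correct an observed validity coefficient for criterion unreliability.

    r_xy: observed predictor-criterion correlation
    r_yy: reliability of the criterion measure (0 < r_yy <= 1)
    Only the criterion is corrected, as is conventional in validation work.
    """
    return r_xy / math.sqrt(r_yy)

# An observed validity of .30 against a rating with assumed reliability .60
# implies an estimated true-score validity near .39: an unreliable
# operational criterion understates validity for the conceptual criterion.
print(round(correct_criterion_unreliability(0.30, 0.60), 3))
```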