Validity Types Flashcards
Construct validity
A test has construct validity if it accurately measures a theoretical, non-observable construct or trait (e.g., intelligence, motivation). Two methods of establishing a test’s construct validity are convergent/divergent validation and factor analysis; the multitrait-multimethod matrix is a common tool for evaluating the convergent and divergent components.
Cross-validation
Shrinkage is associated with cross-validation and refers to the tendency of the validity coefficient to be smaller than the original coefficient when the predictor and criterion are administered to another (cross-validation) sample. Shrinkage occurs because the chance factors that contributed to the relationship between the predictor and criterion in the original sample are not present in the cross-validation sample.
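A minimal Python sketch of shrinkage (all data simulated; the variable names are hypothetical): regression weights derived in one sample partly fit chance variation, so the validity coefficient typically drops when the same weights are applied to a fresh sample.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 40, 10  # small sample, many predictors: chance inflates the fit

    def draw_sample():
        X = rng.normal(size=(n, k))
        y = 0.5 * X[:, 0] + rng.normal(size=n)  # only predictor 0 truly matters
        return X, y

    X1, y1 = draw_sample()  # original (derivation) sample
    X2, y2 = draw_sample()  # cross-validation sample

    # Derive the regression weights in the original sample only.
    A1 = np.column_stack([np.ones(n), X1])
    w, *_ = np.linalg.lstsq(A1, y1, rcond=None)

    # Validity coefficient = correlation of predicted scores with the criterion.
    r_original = np.corrcoef(A1 @ w, y1)[0, 1]
    A2 = np.column_stack([np.ones(n), X2])
    r_crossval = np.corrcoef(A2 @ w, y2)[0, 1]

    print(f"original sample:         r = {r_original:.2f}")
    print(f"cross-validation sample: r = {r_crossval:.2f}")  # usually smaller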
Convergent/Divergent Validation
A test has convergent validity if it has a high correlation with another test that measures the same construct. A test’s divergent validity, by contrast, is demonstrated through a low correlation with a test that measures a different construct. The multitrait-multimethod matrix is one way to assess a test’s convergent and divergent validity.
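A minimal Python sketch with simulated data (the construct names are hypothetical): two measures of the same trait correlate highly with each other (convergent), and each correlates weakly with a measure of an unrelated trait (divergent).

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200

    anxiety = rng.normal(size=n)      # latent trait 1
    sociability = rng.normal(size=n)  # latent trait 2, independent of trait 1

    # Two tests of the same construct plus one test of a different construct.
    anxiety_test_a = anxiety + 0.4 * rng.normal(size=n)
    anxiety_test_b = anxiety + 0.4 * rng.normal(size=n)
    sociability_test = sociability + 0.4 * rng.normal(size=n)

    r = np.corrcoef([anxiety_test_a, anxiety_test_b, sociability_test])
    print(f"convergent (anxiety A with anxiety B):   {r[0, 1]:.2f}")  # high
    print(f"divergent  (anxiety A with sociability): {r[0, 2]:.2f}")  # near zero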
Content validity
A test has content validity to the extent that its items adequately and representatively sample the content area to be measured. For example, a final examination in a math course has content validity to the degree that it measures knowledge of what was taught in that course as a whole.
Criterion-related validity
A test has criterion-related validity if it is useful for predicting an individual’s behavior in specified situations. To determine criterion-related validity, scores on the predictor test are correlated with an outside criterion.
Concurrent and predictive validity are the two types of criterion-related validity
A college admissions committee might be interested in using scores on the SAT to predict college success. In this situation, the SAT would be called the predictor, a direct measure of college success (e.g., college GPA) would be the criterion, and the correlation between the SAT and college GPA would be a measure of the SAT’s criterion-related validity.
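A minimal Python sketch of how a criterion-related validity coefficient is computed (the scores below are invented for illustration, not real SAT data):

    import numpy as np

    # Hypothetical predictor (SAT) and criterion (college GPA) scores.
    sat = np.array([1050, 1190, 1280, 1340, 1420, 1480, 1530, 1580])
    gpa = np.array([2.4, 2.9, 3.0, 3.1, 3.4, 3.3, 3.7, 3.8])

    # The validity coefficient is simply the Pearson correlation.
    validity = np.corrcoef(sat, gpa)[0, 1]
    print(f"criterion-related validity: r = {validity:.2f}")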
Concurrent validation
Concurrent validity, one of the two types of criterion-related validity, is most important when predictor scores will be used to estimate current scores on the criterion.
The predictor and the criterion data are collected at or at about the same time. For instance, a job selection test for typists could be given to presently employed typists to see if it actually does predict good and bad typing performance, as measured by a criterion such as supervisor ratings. If a test is useful for predicting a given current behavior (e.g., the current performance of the typists), we say that it has high concurrent validity.
Predictive validation
Predictive validity, the other type of criterion-related validity, is most important when predictor scores will be used to estimate future scores on the criterion.
Scores on the predictor are collected first, and the criterion data are collected at some future point. For instance, the validity of the GRE in predicting college success was assessed using predictive validation: GRE scores were obtained for a representative sample of graduate school freshmen and, at a later point in time, were correlated with the students’ GPAs and with faculty ratings of the students. (By the way, the GRE had a correlation of .33 with graduate grade point average and .41 with faculty ratings.) If a test is useful for predicting a future behavior, we say that the test has high predictive validity.
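A brief Python sketch of the time-lagged design (all numbers invented for illustration): the predictor is scored and stored first, then correlated with criteria collected later.

    import numpy as np

    gre = np.array([148, 152, 155, 158, 160, 163, 167, 170])  # collected at time 1

    # Collected later (time 2): two different criterion measures.
    grad_gpa = np.array([3.0, 3.2, 3.1, 3.5, 3.4, 3.6, 3.7, 3.9])
    faculty_rating = np.array([2.5, 3.0, 3.5, 3.0, 4.0, 4.5, 4.0, 5.0])

    print(f"GRE with graduate GPA:    r = {np.corrcoef(gre, grad_gpa)[0, 1]:.2f}")
    print(f"GRE with faculty ratings: r = {np.corrcoef(gre, faculty_rating)[0, 1]:.2f}")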
Differential validity
A selection test or other employment procedure has differential validity when it has significantly different validity coefficients for members of different groups. Differential validity is a potential cause of adverse impact.
Discriminant validity
Tests whether concepts or measurements that are not supposed to be related are, in fact, unrelated. Discriminant validity is the same as divergent validity.
Multitrait-multimethod matrix is used to evaluate what kind of validity?
Construct