Validity Flashcards
A judgment or estimate of how well a test measures what it purports to measure in a particular context.
A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view
A. Validity
A judgment based on evidence about the appropriateness of inferences drawn from test scores
A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view
A. Validity
A term used in conjunction with the meaningfulness of a test score
A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view
A. Validity
A logical result or deduction
A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view
B. Inference
True or False: Characterizations of the validity of tests and test scores are frequently phrased in terms such as “acceptable” or “weak.”
True
The process of gathering and evaluating evidence about validity
A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view
C. Validation
True or False: Both test developers and test users may play a role in the validation of a test
True
True or False: It is the test taker’s responsibility to supply validity evidence in the test manual.
False; test developer’s
True or False: It is not appropriate for test users to conduct their own validation studies with their own groups of test takers
False; it may sometimes be appropriate
May yield insights regarding a particular population of test takers as compared to the norming sample described in a test manual
A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view
D. Local validation studies
Are absolutely necessary when the test user plans to alter in some way the format, instructions, language, or content of the test.
A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view
D. Local validation studies
Would also be necessary if a test user sought to use a test with a population of test takers that differed in some significant way from the population on which the test was standardized
A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view
D. Local validation studies
Classic conception of validity
A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view
E. Trinitarian view
Critics condemned this approach as fragmented and incomplete
A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view
E. Trinitarian view
It might be useful to visualize construct validity as being “umbrella validity” because every other variety of validity falls under it.
A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view
E. Trinitarian view
Stated another way, all three types of validity evidence contribute to a unified picture of a test’s validity.
A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view
E. Trinitarian view
A judgment concerning how relevant the test items appear to be
A. Face Validity
B. Content Validity
C. Test blueprint
A. Face Validity
Relates more to what a test appears to measure to the person being tested than to what the test actually measures.
A. Face Validity
B. Content Validity
C. Test blueprint
A. Face Validity
Frequently thought of from the perspective of the test taker, not the test user.
A. Face Validity
B. Content Validity
C. Test blueprint
A. Face Validity
Lack of this could contribute to a lack of confidence in the perceived effectiveness of the test
A. Face Validity
B. Content Validity
C. Test blueprint
A. Face Validity
Based on an evaluation of the subjects, topics, or content covered by the items in the test.
A. Face Validity
B. Content Validity
C. Test blueprint
B. Content Validity
A judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample
A. Face Validity
B. Content Validity
C. Test blueprint
B. Content Validity
A plan regarding the types of information to be covered by the items
A. Face Validity
B. Content Validity
C. Test blueprint
C. Test blueprint
Specifies the number of items tapping each area of coverage and the organization of the items in the test
A. Face Validity
B. Content Validity
C. Test blueprint
C. Test blueprint
True or False: The content validity of a test varies across cultures and time
True
One technique frequently used in blueprinting the content areas to be covered in certain types of employment tests
A. Personality tests
B. Behavioral observation
C. Content Validity Ratio
B. Behavioral observation
Measures agreement among raters regarding how essential an individual test item is for inclusion in a test
A. Personality tests
B. Behavioral observation
C. Content Validity Ratio
C. Content Validity Ratio
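The Content Validity Ratio can be computed with Lawshe's formula, CVR = (n_e − N/2) / (N/2), where n_e is the number of panelists rating an item "essential" and N is the total number of panelists. A minimal sketch with hypothetical panel counts:

```python
# Lawshe's Content Validity Ratio (CVR). Ranges from -1 to +1;
# a positive CVR means more than half the panel rated the item essential.

def content_validity_ratio(n_essential, n_panelists):
    """CVR = (n_e - N/2) / (N/2)."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical panel: 8 of 10 experts rate the item "essential"
print(content_validity_ratio(8, 10))  # 0.6
```

A CVR of 0 means exactly half the panel judged the item essential; negative values indicate fewer than half did.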
This measure of validity is obtained by evaluating the relationship of scores obtained on the test to scores on other tests or measures
A. Criterion Validity
B. Criterion
C. Criterion contamination
D. Criterion-related Validity
A. Criterion Validity
The standard against which a test or a test score is evaluated
A. Criterion Validity
B. Criterion
C. Criterion contamination
D. Criterion-related Validity
B. Criterion
The term applied to a criterion measure that has been based, at least in part, on predictor measures
A. Criterion Validity
B. Criterion
C. Criterion contamination
D. Criterion-related Validity
C. Criterion contamination
A judgment of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest
A. Criterion Validity
B. Criterion
C. Criterion contamination
D. Criterion-related Validity
D. Criterion-related Validity
True or False: An adequate criterion must be relevant to the matter at hand
True
True or False: An adequate criterion should be valid for the purpose for which it is being measured
True
An index of the degree to which a test score is related to some criterion measure obtained at the same time
A. Concurrent validity
B. Predictive validity
C. Expectancy data
D. Validity coefficient
A. Concurrent validity
An index of the degree to which a test score predicts some criterion measure
A. Concurrent validity
B. Predictive validity
C. Expectancy data
D. Validity coefficient
B. Predictive validity
Statistical evidence for concurrent and predictive validity, presented in the form of a table or chart
A. Concurrent validity
B. Predictive validity
C. Expectancy data
D. Validity coefficient
C. Expectancy data
A correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure
A. Concurrent validity
B. Predictive validity
C. Expectancy data
D. Validity coefficient
D. Validity coefficient
The degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use.
A. Incremental Validity
B. Hit Rate
C. Miss Rate
D. Base Rate
A. Incremental Validity
Proportion of people a test accurately identifies as possessing or exhibiting a particular trait, behavior, characteristic, or attribute
A. Incremental Validity
B. Hit Rate
C. Miss Rate
D. Base Rate
B. Hit Rate
Proportion of people the test fails to identify accurately as having, or not having, a particular characteristic or attribute
A. Incremental Validity
B. Hit Rate
C. Miss Rate
D. Base Rate
C. Miss Rate
The extent to which a particular trait, behavior, characteristic, or attribute exists in the population, expressed as a proportion (e.g., the percentage of people hired under the existing system for a particular position)
A. Incremental Validity
B. Hit Rate
C. Miss Rate
D. Base Rate
D. Base Rate
Numerical value that reflects the relationship between the number of people to be hired and the number of people available to be hired
A. Selection Ratio
B. False Positive
C. False Negative
A. Selection Ratio
Type 1 Error
B. False Positive
A miss wherein the test predicted that the examinee did possess the particular characteristic or attribute being measured when the examinee did not
A. Selection Ratio
B. False Positive
C. False Negative
B. False Positive
Type 2 Error
A. Selection Ratio
B. False Positive
C. False Negative
C. False Negative
A miss wherein the test predicted that the examinee did not possess the particular characteristic or attribute being measured when the examinee did
A. Selection Ratio
B. False Positive
C. False Negative
C. False Negative
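The hit rate, miss rate, and base rate can all be read off a simple 2×2 table of test predictions against actual status. A sketch with hypothetical counts (the numbers are invented for illustration):

```python
# Classification outcomes when a test predicts a dichotomous criterion.
# Hypothetical counts: what the test predicted vs. the examinee's actual status.
true_positives  = 40  # test said "has it"; examinee does       (hit)
false_positives = 10  # test said "has it"; examinee does not   (Type 1 error)
false_negatives = 5   # test said "lacks it"; examinee has it   (Type 2 error)
true_negatives  = 45  # test said "lacks it"; examinee lacks it (hit)

total = true_positives + false_positives + false_negatives + true_negatives

hit_rate  = (true_positives + true_negatives) / total    # accurate identifications
miss_rate = (false_positives + false_negatives) / total  # inaccurate identifications
base_rate = (true_positives + false_negatives) / total   # proportion actually having the trait

print(hit_rate, miss_rate, base_rate)  # 0.85 0.15 0.45
```

Note that hit rate and miss rate always sum to 1, and that a test's apparent usefulness depends on how its hit rate compares with the base rate alone.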
This measure of validity is arrived at by executing a comprehensive analysis of how scores on the test relate to other test scores and measures and how scores on the test can be understood within some theoretical framework for understanding the construct that the test was designed to measure.
A. Concurrent Validity
B. Predictive Validity
C. Construct Validity
C. Construct Validity
True or False: If a test is a valid measure of a construct, then high scorers and low scorers should behave as theorized.
True
How uniform a test is in measuring a single concept
A. Evidence of homogeneity
B. Evidence of changes with age
C. Evidence of pretest–posttest changes
D. Evidence from distinct groups
A. Evidence of homogeneity
Some constructs are expected to change over time (e.g., reading rate)
A. Evidence of homogeneity
B. Evidence of changes with age
C. Evidence of pretest–posttest changes
D. Evidence from distinct groups
B. Evidence of changes with age
Test scores change as a result of some experience between a pretest and a posttest (e.g., therapy)
A. Evidence of homogeneity
B. Evidence of changes with age
C. Evidence of pretest–posttest changes
D. Evidence from distinct groups
C. Evidence of pretest–posttest changes
Scores on a test vary in a predictable way as a function of membership in some group
A. Evidence of homogeneity
B. Evidence of changes with age
C. Evidence of pretest–posttest changes
D. Evidence from distinct groups
D. Evidence from distinct groups
Scores on the test undergoing construct validation tend to correlate highly in the predicted direction with scores on older, more established tests designed to measure the same (or a similar) construct.
A. Convergent Evidence
B. Discriminant Evidence
C. Factor Analysis
A. Convergent Evidence
Validity coefficient showing little relationship between test scores and/or other variables with which scores on the test should not theoretically be correlated.
A. Convergent Evidence
B. Discriminant Evidence
C. Factor Analysis
B. Discriminant Evidence
Class of mathematical procedures designed to identify specific variables on which people may differ
A. Convergent Evidence
B. Discriminant Evidence
C. Factor Analysis
C. Factor Analysis
A factor inherent in a test that systematically prevents accurate, impartial measurement.
A. Bias
B. Rating Error
C. Halo Effect
D. Fairness
A. Bias
A judgment resulting from the intentional or unintentional misuse of a rating scale.
A. Bias
B. Rating Error
C. Halo Effect
D. Fairness
B. Rating Error
Raters may be either too lenient, too severe, or reluctant to give ratings at the extremes (central tendency error).
A. Bias
B. Rating Error
C. Halo Effect
D. Fairness
B. Rating Error
A tendency to give a particular person a higher rating than he or she objectively deserves because of a favorable overall impression
A. Bias
B. Rating Error
C. Halo Effect
D. Fairness
C. Halo Effect
The extent to which a test is used in an impartial, just, and equitable way
A. Bias
B. Rating Error
C. Halo Effect
D. Fairness
D. Fairness