Chapter 6 Flashcards
- As applied to a test, is a judgment or estimate of how well a test measures what it purports to measure in a particular context.
- More specifically, it is a judgment based on evidence about the appropriateness of inferences drawn from test scores.
Validity
Is a logical result or deduction.
Inference
Is the process of gathering and evaluating evidence about validity.
Validation
Are absolutely necessary when the test user plans to alter in some way the format, instructions, language, or content of the test.
- Require professional time and know-how, and they may be costly.
- May yield insights regarding a particular population of testtakers as compared to the norming sample described in a test manual.
Local validation studies
One way measurement specialists have traditionally conceptualized validity is according to three categories:
Content validity
Criterion-related validity
Construct validity
This is a measure of validity based on an evaluation of the subjects, topics, or content covered by the items in the test.
Content validity
This is a measure of validity obtained by evaluating the relationship of scores obtained on the test to scores on other tests or measures.
Criterion-related validity
This is a measure of validity that is arrived at by executing a comprehensive analysis of
a. how scores on the test relate to other test scores and measures, and
b. how scores on the test can be understood within some theoretical framework for
understanding the construct that the test was designed to measure.
Construct validity
Refers to a judgment regarding how well a test measures what it purports to measure at the time and place that the variable being measured (typically a behavior, cognition, or emotion) is actually emitted.
Ecological validity
Relates more to what a test appears to measure to the person being tested than to what the test actually measures.
- Is a judgment concerning how relevant the test items appear to be.
Face validity
Describes judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample.
Content validity
For the “structure” of the evaluation—that is, a plan regarding the types of information to be covered by the items, the number of items tapping each area of coverage, the organization of the items in the test, and so forth.
Test blueprint
Is a judgment of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest—the measure of interest being the criterion.
Criterion-related validity
Two types of validity evidence are subsumed under the heading:
Concurrent validity
Predictive validity
Is an index of the degree to which a test score is related to some criterion measure obtained at the same time (concurrently).
Concurrent validity
Is an index of the degree to which a test score predicts some criterion measure.
Predictive validity
The standard against which a test score is evaluated.
Criterion
Characteristics of a criterion:
- An adequate criterion is relevant.
- An adequate criterion measure must also be valid for the purpose for which it is being used.
Is the term applied to a criterion measure that has been based, at least in part, on predictor measures.
Criterion contamination
Is the extent to which a particular trait, behavior, characteristic, or attribute exists in the population (expressed as a proportion).
Base rate
May be defined as the proportion of people a test accurately identifies as possessing or exhibiting a particular trait, behavior, characteristic, or attribute.
Hit rate
May be defined as the proportion of people the test fails to identify as having, or not having, a particular characteristic or attribute.
Miss rate
Is a miss wherein the test predicted that the testtaker did possess the particular characteristic or attribute being measured when in fact the testtaker did not.
False positive
Is a miss wherein the test predicted that the testtaker did not possess the particular characteristic or attribute being measured when the testtaker actually did.
False negative
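The base rate, hit rate, false positives, and false negatives defined in the cards above can be illustrated with a minimal sketch. The data here are entirely hypothetical, invented only to show how the proportions are computed.

```python
# Hypothetical sketch: computing base rate, hit rate, and miss counts
# from a test's predictions versus testtakers' actual status.
def classification_rates(predicted, actual):
    """predicted/actual are lists of booleans: True = has the attribute."""
    n = len(actual)
    base_rate = sum(actual) / n  # proportion who truly have the attribute
    hits = sum(p == a for p, a in zip(predicted, actual))
    false_pos = sum(p and not a for p, a in zip(predicted, actual))
    false_neg = sum((not p) and a for p, a in zip(predicted, actual))
    return {
        "base_rate": base_rate,
        "hit_rate": hits / n,                     # accurate identifications
        "miss_rate": (false_pos + false_neg) / n, # inaccurate identifications
        "false_positives": false_pos,
        "false_negatives": false_neg,
    }

# Six hypothetical testtakers.
predicted = [True, True, False, False, True, False]
actual    = [True, False, False, True, True, False]
rates = classification_rates(predicted, actual)
```

Note that hit rate and miss rate sum to 1, and every miss is either a false positive or a false negative.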
Judgments of criterion-related validity, whether concurrent or predictive, are based on two types of statistical evidence:
Validity coefficient
Expectancy data
Is a correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure.
Validity coefficient
The degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use.
Incremental validity
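Incremental validity can be quantified as the gain in explained criterion variance (R²) when a new predictor is added to those already in use. The sketch below simulates hypothetical data in which a second test genuinely adds information; the variable names and data are assumptions for illustration only.

```python
# Hypothetical sketch of incremental validity: extra criterion variance
# explained by a new predictor beyond an existing one.
import numpy as np

def r_squared(predictors, y):
    """R^2 of an intercept-plus-predictors least-squares fit."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Made-up data: x1 = existing test, x2 = new test, y = criterion.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
y = 0.6 * x1 + 0.5 * x2 + rng.normal(scale=0.5, size=200)

r2_old = r_squared([x1], y)        # criterion variance explained by x1 alone
r2_both = r_squared([x1, x2], y)   # variance explained by x1 and x2 together
incremental = r2_both - r2_old     # incremental validity of x2
```

A new test with near-zero incremental validity may not be worth administering, however valid it is on its own.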
Is a judgment about the appropriateness of inferences drawn from test scores regarding individual standings on a variable called a construct.
Construct validity
Is an informed, scientific idea developed or hypothesized to describe or explain behavior.
- Are unobservable, presupposed (underlying) traits that a test developer may invoke to describe test behavior or criterion performance.
Construct
Evidence of Construct Validity:
- Evidence of homogeneity
- Evidence of changes with age
- Evidence of pretest–posttest changes
- Evidence from distinct groups
- Convergent evidence
- Discriminant evidence
Refers to how uniform a test is in measuring a single concept.
Homogeneity
Some constructs are expected to change over time.
- If a test score purports to be a measure of a construct that could be expected to change over time, then the test score, too, should show the same progressive changes with age to be considered a valid measure of the construct.
Evidence of changes with age
Evidence that test scores change as a result of some experience between a pretest and a posttest can be evidence of construct validity.
Evidence of pretest–posttest changes
One way of providing evidence for the validity of a test is to demonstrate that scores on the test vary in a predictable way as a function of membership in some group.
Method of contrasted groups
Evidence for the construct validity of a particular test may converge from a number of sources, such as other tests or measures designed to assess the same (or a similar) construct. Thus, if scores on the test undergoing construct validation tend to correlate highly in the predicted direction with scores on older, more established, and already validated tests designed to measure the same (or a similar) construct, this would be an example of ___________
Convergent evidence
A validity coefficient showing little (a statistically insignificant) relationship between test scores and/or other variables with which scores on the test being construct-validated should not theoretically be correlated provides __________ of construct validity (also known as ___________)
Discriminant evidence / discriminant validity
In 1959, an experimental technique useful for examining both convergent and discriminant validity evidence was presented in Psychological Bulletin. This rather technical procedure was called the __________.
Multitrait-multimethod matrix
Data indicating that a test measures the same construct as other tests purporting to measure the same construct are also referred to as evidence of ___________.
Convergent validity
Both convergent and discriminant evidence of construct validity can be obtained by the use of __________.
- Is a shorthand term for a class of mathematical procedures designed to identify factors or specific variables that are typically attributes, characteristics, or dimensions on which people may differ.
Factor analysis
Typically entails “estimating, or extracting factors; deciding how many factors to retain; and rotating factors to an interpretable orientation”
Exploratory factor analysis
Researchers test the degree to which a hypothetical model (which includes factors) fits the actual data.
Confirmatory factor analysis
“A sort of metaphor. Each test is thought of as a vehicle carrying a certain amount of one or more abilities”
- conveys information about the extent to which the factor determines the test score or scores.
Factor loading
- May conjure up many meanings having to do with prejudice and preferential treatment.
- For psychometricians, it is a factor inherent in a test that systematically prevents accurate, impartial measurement.
Bias
Is a numerical or verbal judgment (or both) that places a person or an attribute along a continuum identified by a scale of numerical or word descriptors known as a _________.
Rating / Rating scale
Is a judgment resulting from the intentional or unintentional misuse of a rating scale.
Rating error
(Also known as a generosity error) is, as its name implies, an error in rating that arises from the tendency on the part of the rater to be lenient in scoring, marking, and/or grading.
Leniency error
Here the rater, for whatever reason, exhibits a general and systematic reluctance to give ratings at either the positive or the negative extreme.
Central tendency error
A procedure that requires the rater to measure individuals against one another instead of against an absolute scale.
Rankings
Describes the fact that, for some raters, some ratees can do no wrong.
Halo effect
Defined in a psychometric context as the extent to which a test is used in an impartial, just, and equitable way.
Fairness
To show a statistically significant difference between individuals or groups with respect to a measurement.
To discriminate
Psychometric Techniques for Preventing or Remedying Adverse Impact and/or Instituting an Affirmative Action Program:
- Addition of Points
- Differential Scoring of Items
- Elimination of Items Based on Differential Item Functioning
- Differential Cutoffs
- Separate Lists
- Within-Group Norming
- Banding
- Preference Policies
A constant number of points is added to the test score of members of a particular group. The purpose of the point addition is to reduce or eliminate observed differences between groups.
Addition of Points
This technique incorporates group membership information, not in adjusting a raw score on a test but in deriving the score in the first place. The application of the technique may involve the scoring of some test items for members of one group but not scoring the same test items for members of another group. This technique is also known as empirical keying by group.
Differential Scoring of Items
This procedure entails removing from a test any items found to inappropriately favor one group’s test performance over another’s. Ideally, the intent of the elimination of certain test items is not to make the test easier for any group but simply to make the test fairer. Sackett and Wilk (1994) put it this way: “Conceptually, rather than asking ‘Is this item harder for members of Group X than it is for Group Y?’ these approaches ask ‘Is this item harder for members of Group X with true score Z than it is for members of Group Y with true score Z?’”
Elimination of Items Based on Differential Item Functioning
Different cutoffs are set for members of different groups. For example, a passing score for members of one group is 65, whereas a passing score for members of another group is 70. As with the addition of points, the purpose of ____________ is to reduce or eliminate observed differences between groups.
Differential Cutoffs
Different lists of testtaker scores are established by group membership. For each list, test performance of testtakers is ranked in top-down fashion. Users of the test scores for selection purposes may alternate selections from the different lists. Depending on factors such as the allocation rules in effect and the equivalency of the standard deviation within the groups, the ____________ technique may yield effects similar to those of other techniques, such as the addition of points and differential cutoffs. In practice, the _______ is popular in affirmative action programs where the intent is to overselect from previously excluded groups.
Separate Lists
Used as a remedy for adverse impact if members of different groups tend to perform differentially on a particular test, __________ entails the conversion of all raw scores into percentile scores or standard scores based on the test performance of one’s own group.
Within-Group Norming
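Within-group norming can be sketched with a simple percentile conversion. The groups and scores below are hypothetical; the point is that the same raw score maps to different norm-referenced scores depending on the group used as the reference.

```python
# Hypothetical sketch of within-group norming: each raw score is
# converted to a percentile computed only against the testtaker's
# own group, not the full pool of testtakers.
def within_group_percentile(score, group_scores):
    """Percent of the group scoring at or below this score."""
    at_or_below = sum(s <= score for s in group_scores)
    return 100.0 * at_or_below / len(group_scores)

group_a = [50, 55, 60, 65, 70]
group_b = [40, 45, 50, 55, 60]

# The same raw score of 55 yields different percentiles in each group.
p_a = within_group_percentile(55, group_a)  # 40.0
p_b = within_group_percentile(55, group_b)  # 80.0
```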
When race is the primary criterion of group membership and separate norms are established by race, this technique is known as ____________.
Race norming
The effect of _________ of test scores is to make equivalent all scores that fall within a particular range or band. For example, thousands of raw scores on a test may be transformed to a stanine having a value of 1 to 9. All scores that fall within each of the stanine boundaries will be treated by the test user as either equivalent or subject to some additional selection criteria.
Banding
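The stanine transformation mentioned above can be sketched as follows. This assumes the common definition of stanines (mean 5, standard deviation 2, clipped to the range 1–9); the raw scores are made up for illustration.

```python
# Hypothetical sketch of banding: raw scores mapped to stanines, so that
# all scores falling within a band are treated as equivalent.
from statistics import mean, pstdev

def to_stanines(scores):
    m, sd = mean(scores), pstdev(scores)
    stanines = []
    for s in scores:
        z = (s - m) / sd
        band = round(2 * z + 5)                # stanines: mean 5, SD 2
        stanines.append(max(1, min(9, band)))  # clip to the 1..9 range
    return stanines

raw = [35, 42, 50, 50, 58, 65]
bands = to_stanines(raw)
# The two raw scores of 50 land in the same band and are treated
# as equivalent by the test user.
```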
Is a modified banding procedure wherein a band is adjusted (“slid”) to permit the selection of more members of some group than would otherwise be selected.
Sliding band
In the interest of affirmative action, reverse discrimination, or some other policy deemed to be in the interest of society at large, a test user might establish a policy of preference based on group membership. For example, if a municipal fire department sought to increase the representation of female personnel in its ranks, it might institute a test-related policy designed to do just that. A key provision in this policy might be that when a male and a female earn equal scores on the test used for hiring, the female will be hired.
Preference Policies