Module 3: Validity Flashcards
Validity
+ a judgement or estimate of how well a test measures what it is supposed to measure
+ evidence about the appropriateness of inferences drawn from test scores
+ the degree to which the measurement procedure measures the variables it is intended to measure
Inferences
logical result or deduction
Why may validity diminish?
May diminish as the culture or times change
What is true about validity?
✓ Predicts future performance
✓ Measures appropriate domain
✓ Measures appropriate characteristics
Validation
the process of gathering and evaluating evidence about validity
Validation Studies
yield insights regarding a particular population of testtakers as compared to the norming sample described in a test manual
Internal Validity
degree of control among variables in the study (increased through random assignment)
External Validity
generalizability of the research results (increased through random selection)
Conceptual Validity
+ focuses on the individual, with their unique history and behaviors
+ means of evaluating and integrating test data so that the clinician’s conclusions make accurate statements about the examinee
Face Validity
what a test appears to measure to the person being tested, rather than what the test actually measures
Types of Validity
- Content Validity
- Criterion Validity
- Construct Validity (Umbrella Validity)
Content Validity
+ describes a judgement of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample
+ representativeness and relevance of the assessment instrument to the construct being measured
+ when the proportion of the material covered by the test approximates the proportion of material covered in the course
+ more logical than statistical
Test Blueprint (Content Validity)
a plan regarding the types of information to be covered by the items, the number of items tapping each area of coverage, the organization of the items, and so forth
What is content validity concerned with?
concerned with the extent to which the test is representative of a defined body of content, consisting of topics and processes
How does a panel of experts evaluate a test for content validity?
A panel of experts can review the test items and rate them in terms of how closely they match the objective or domain specification
What does content validity examine?
whether items are essential, useful, and necessary
Construct Underrepresentation
failure to capture important components of a construct
Construct-Irrelevant Variance
occurs when scores are influenced by factors irrelevant to the construct
Lawshe
developed the formula for the Content Validity Ratio (CVR)
Formula for the Content Validity Ratio
CVR = (Ne - N/2) / (N/2)
+ Ne is the number of panelists indicating “essential” and N is the total number of panelists
What is recommended when the CVI is low?
If the CVI is low, it is recommended to remove or modify the items that have low CVR values to improve the overall content validity of the test
Zero CVR
occurs when exactly half of the experts rate the item as essential
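A minimal Python sketch of the CVR formula above, using a hypothetical panel of 10 experts and made-up per-item counts of "essential" ratings (taking the mean CVR of the items as a simple CVI is one common convention):

def cvr(n_essential, n_panelists):
    # CVR = (Ne - N/2) / (N/2): Ne = experts rating the item "essential",
    # N = total number of experts on the panel.
    return (n_essential - n_panelists / 2) / (n_panelists / 2)

# Hypothetical counts of "essential" ratings per item from 10 experts.
essential_counts = [10, 8, 5, 3]
cvrs = [cvr(ne, 10) for ne in essential_counts]
print(cvrs)           # [1.0, 0.6, 0.0, -0.4]; 0.0 means exactly half said "essential"

# A simple CVI: the mean CVR across the items.
cvi = sum(cvrs) / len(cvrs)
print(round(cvi, 2))  # 0.3 -> low; remove or modify the low-CVR items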
Criterion Validity
+ more statistical than logical
+ a judgement of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest – the measure of interest being the criterion
Criterion
a standard on which a judgement or decision may be based
Characteristics of a Criterion
relevant, valid, uncontaminated
Criterion Contamination
occurs when the criterion measure includes aspects of performance that are not part of the job or when the measure is affected by “construct-irrelevant” (Messick, 1989) factors that are not part of the criterion construct
Types of Criterion Validity
- Concurrent Validity
- Predictive Validity
Concurrent Validity
test scores are obtained at about the same time as the criterion measures; economically efficient
Predictive Validity
measures of the relationship between test scores and a criterion measure obtained at a future time
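A minimal sketch of estimating a criterion validity coefficient as a Pearson correlation; all scores below are hypothetical (for predictive validity the criterion, e.g. later GPA, would be collected at a future time):

from scipy.stats import pearsonr

test_scores = [12, 15, 9, 20, 17, 11, 14, 18]         # hypothetical test scores
criterion = [2.1, 2.8, 1.7, 3.6, 3.1, 2.0, 2.5, 3.3]  # hypothetical criterion (e.g., GPA)

# The validity coefficient is the correlation between test and criterion.
r, p = pearsonr(test_scores, criterion)
print(f"validity coefficient r = {r:.2f} (p = {p:.3f})")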
Incremental Validity (Type of Predictive Validity)
the degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use; used to improve prediction
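A minimal sketch of checking incremental validity by comparing R² on the criterion with and without the additional predictor; the data are simulated, not from any real test:

import numpy as np

rng = np.random.default_rng(0)
n = 200
old_predictor = rng.normal(size=n)  # predictor already in use
new_predictor = rng.normal(size=n)  # candidate additional predictor
criterion = 0.6 * old_predictor + 0.3 * new_predictor + rng.normal(size=n)

def r_squared(X, y):
    # Ordinary least squares with an intercept; R^2 = 1 - SS_res / SS_tot.
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_old = r_squared(old_predictor[:, None], criterion)
r2_both = r_squared(np.column_stack([old_predictor, new_predictor]), criterion)
print(f"R^2 old: {r2_old:.3f}  with new predictor: {r2_both:.3f}  increment: {r2_both - r2_old:.3f}")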
Construct Validity (Umbrella Validity)
+ covers all types of validity
+ logical and statistical
+ a judgement about the appropriateness of inferences drawn from test scores regarding an individual’s standing on a variable called a construct
+ the test is homogeneous
+ test scores increase or decrease as a function of age, the passage of time, or experimental manipulation
+ pretest-posttest differences
+ scores differ between distinct groups
+ scores correlate with scores on other tests in accordance with what is predicted
Construct
+ an informed, scientific idea developed or hypothesized to describe or explain behavior; unobservable, presupposed traits that may be invoked to describe test behavior or criterion performance
+ some constructs lend themselves more readily than others to predictions of change over time
How can a test developer improve homogeneity of a test?
One way a test developer can improve the homogeneity of a test containing dichotomous items is by eliminating items that do not show significant correlation coefficients with total test scores
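A minimal sketch of that first pass: compute corrected item-total correlations on a simulated 0/1 response matrix and flag items near zero (here the last item is pure noise by construction):

import numpy as np

rng = np.random.default_rng(1)
ability = rng.normal(size=300)
good = (ability[:, None] + rng.normal(size=(300, 4)) > 0).astype(int)  # 4 items driven by ability
bad = rng.integers(0, 2, size=(300, 1))                                # 1 pure-noise item
responses = np.hstack([good, bad])

total = responses.sum(axis=1)
for i in range(responses.shape[1]):
    rest = total - responses[:, i]  # corrected total: exclude the item itself
    r = np.corrcoef(responses[:, i], rest)[0, 1]
    print(f"item {i}: corrected item-total r = {r:.2f}")
# Items with r near zero are candidates for removal.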
What makes a bad item in an academic test?
In an academic test, if high scorers on the entire test for some reason tend to get a particular item wrong while low scorers get it right, then the item is obviously not a good one
Method of Contrasted Groups
demonstrate that scores on the test vary in a predictable way as a function of membership in a group
What happens if a group takes a test measuring a construct that they do not have?
If a test is a valid measure of a particular construct, then people who do not have that construct should obtain different test scores than those who really possess it
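A minimal sketch of the contrasted-groups logic, testing two hypothetical samples (e.g., a group known to possess the construct vs. a control group) for a reliable difference:

from scipy.stats import ttest_ind

with_construct = [22, 25, 19, 27, 24, 26, 21, 23]     # hypothetical scores
without_construct = [12, 15, 10, 14, 11, 16, 13, 12]  # hypothetical scores

t, p = ttest_ind(with_construct, without_construct)
print(f"t = {t:.2f}, p = {p:.4f}")  # a reliable difference supports construct validity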
Convergent Evidence
if scores on the test undergoing construct validation tend to correlate highly with scores on another established, validated test that measures the same construct
Discriminant Evidence
a validity coefficient showing little relationship between test scores and/or other variables with which scores on the test being construct-validated should not be correlated
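A minimal sketch contrasting convergent and discriminant correlations on simulated scores (the tests and constructs are made up):

import numpy as np

rng = np.random.default_rng(2)
construct = rng.normal(size=100)                           # latent construct
new_test = construct + 0.5 * rng.normal(size=100)          # test under validation
established_same = construct + 0.5 * rng.normal(size=100)  # validated test, same construct
unrelated_test = rng.normal(size=100)                      # test of a different construct

r_conv = np.corrcoef(new_test, established_same)[0, 1]
r_disc = np.corrcoef(new_test, unrelated_test)[0, 1]
print(f"convergent r = {r_conv:.2f} (should be high)")
print(f"discriminant r = {r_disc:.2f} (should be near zero)")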
Sensitivity
percentage of true positives
Specificity
percentage of true negatives
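A minimal sketch computing both from hypothetical classification counts:

true_pos, false_neg = 40, 10  # construct actually present in 50 cases
true_neg, false_pos = 45, 5   # construct actually absent in 50 cases

sensitivity = true_pos / (true_pos + false_neg)  # share of positives detected
specificity = true_neg / (true_neg + false_pos)  # share of negatives detected
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")  # 80%, 90%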
Multitrait-multimethod Matrix
+ useful for examining both convergent and discriminant validity evidence
+ the matrix or table that results from correlating variables within and between methods
Multitrait
two or more traits
Multimethod
two or more methods
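A minimal sketch of such a matrix: two simulated traits, each measured by two hypothetical methods, correlated within and between methods:

import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 150
trait_a, trait_b = rng.normal(size=n), rng.normal(size=n)

scores = pd.DataFrame({
    "A_method1": trait_a + 0.4 * rng.normal(size=n),
    "B_method1": trait_b + 0.4 * rng.normal(size=n),
    "A_method2": trait_a + 0.4 * rng.normal(size=n),
    "B_method2": trait_b + 0.4 * rng.normal(size=n),
})

# Same trait, different method (A_method1 vs A_method2) should correlate
# highly (convergent); different traits should not (discriminant).
print(scores.corr().round(2))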
Factor Analysis
+ designed to identify factors or specific variables that are typically attributes, characteristics, or dimensions on which people may differ
+ employed as a data reduction method
Who developed factor analysis?
Charles Spearman
What is factor analysis used for?
+ used to study the interrelationships among a set of variables
+ can be used to obtain both convergent and discriminant validity
What does factor analysis identify?
the factor or factors in common between test scores on subscales within a particular test
Types of Factor Analysis
- Exploratory FA
- Confirmatory FA
- Factor Loading
Exploratory FA
estimating or extracting factors; deciding how many factors must be retained
Confirmatory FA
researchers test the degree to which a hypothetical model fits the actual data
Factor Loading
conveys information about the extent to which the factor determines the test score or scores
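A minimal sketch of an exploratory factor analysis using scikit-learn on simulated subscale scores; the two latent factors and six subscales are made up:

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)
n = 300
verbal, spatial = rng.normal(size=n), rng.normal(size=n)  # latent factors

# Three subscales load on each latent factor, plus noise.
X = np.column_stack([verbal + 0.5 * rng.normal(size=n) for _ in range(3)] +
                    [spatial + 0.5 * rng.normal(size=n) for _ in range(3)])

fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
print(np.round(fa.components_, 2))  # factor loadings, one row per factor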
Cross-Validation
revalidation of a test on a criterion, based on a group different from the original group from which the test was validated
Types of Cross-Validation
- Validity Shrinkage
- Co-Validation
- Co-Norming
Validity Shrinkage (Cross-Validation)
decrease in validity after cross-validation
Co-Validation (Cross-Validation)
validation of more than one test on the same group
Co-Norming (Cross-Validation)
norming more than one test on the same group
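A minimal sketch of validity shrinkage: regression weights fit to a small development sample typically yield a lower validity coefficient when reapplied to a fresh sample (all data simulated):

import numpy as np

rng = np.random.default_rng(5)

def sample(n):
    x = rng.normal(size=(n, 5))             # 5 predictor scores
    y = 0.5 * x[:, 0] + rng.normal(size=n)  # only one predictor truly matters
    return x, y

x_dev, y_dev = sample(50)   # small original (development) sample
x_new, y_new = sample(500)  # independent cross-validation sample

w, *_ = np.linalg.lstsq(x_dev, y_dev, rcond=None)  # weights fit to original group
r_dev = np.corrcoef(x_dev @ w, y_dev)[0, 1]
r_new = np.corrcoef(x_new @ w, y_new)[0, 1]        # usually lower: shrinkage
print(f"original r = {r_dev:.2f}, cross-validated r = {r_new:.2f}")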
Bias
+ factor inherent in a test that systematically prevents accurate, impartial measurement
+ prejudice, preferential treatment
+ can be prevented during test development through a procedure called Estimated True Score Transformation
Rating
a numerical or verbal judgement that places a person or an attribute along a continuum identified by a scale of numerical or word descriptors known as a Rating Scale
Rating Error
intentional or unintentional misuse of the scale
Leniency Error
rater is lenient in scoring (Generosity Error)
Severity Error
rater is strict in scoring
Central Tendency Error
the rater’s ratings tend to cluster in the middle of the rating scale
What is one way to overcome rating errors?
One way to overcome rating errors is to use rankings
Halo Effect
tendency to give a high score due to failure to discriminate among conceptually distinct and potentially independent aspects of a ratee’s behavior
Fairness
the extent to which a test is used in an impartial, just, and equitable way
Would it be good to define the validity of a test if it is not reliable?
Attempting to define the validity of the test will be futile if the test is NOT reliable