Chapter 10 Flashcards
The confidence we have that our measurement tools give us accurate information about a relevant construct, so that results can be applied in a meaningful way
Validity
Questions Addressed by Validity
●Is a test capable of discriminating among individuals with and without certain traits, diagnoses, or conditions?
●Can the test evaluate the magnitude or quality of a variable or the degree of change from one time to another?
●Can we make useful and accurate predictions about a patient’s future function or status based on the outcome of a test?
Reliability
Consistency of measurement
Validity relates to
alignment of the measurement with a targeted construct; i.e., can inferences be made?
Similarities of reliability and validity
•Do not consider as all-or-none
•Not an immutable characteristic of the instrument itself
3 C’s of validity
•Content validity
•Criterion-related validity
•Construct validity
Purpose of content validity
Establishes that the multiple items that make up a questionnaire, inventory, or scale adequately sample the universe of content that defines the construct being measured.
Purpose of Criterion-related Validity
Establishes the correspondence between a target test and a reference or “gold” standard measure of the same construct.
Purpose of concurrent validity
The extent to which the target test correlates with a reference standard taken at relatively the same time.
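As an illustrative sketch only (the scores below are invented, not from the text), concurrent validity is typically quantified as the correlation between the index test and a reference standard administered at about the same time:

```python
# Hypothetical example: correlating an index test with a reference
# ("gold") standard administered at roughly the same time.
from statistics import mean, stdev

index_test = [12, 15, 11, 18, 20, 14, 16, 19]   # target instrument scores
gold_std   = [30, 36, 29, 44, 49, 33, 40, 47]   # reference standard scores

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

r = pearson_r(index_test, gold_std)
print(f"Concurrent validity coefficient r = {r:.2f}")
```

A coefficient near 1.0 would support concurrent validity; a weak correlation would suggest the index test does not capture the same construct as the standard.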
Purpose of predictive validity
The extent to which the target test can predict a future reference standard.
Purpose of construct validity
Establishes the ability of an instrument to measure the dimensions and theoretical foundation of an abstract construct.
Purpose of convergent validity
The extent to which a test correlates with other tests of closely related constructs.
Purpose of divergent validity
The extent to which a test is uncorrelated with tests of distinct or contrasting constructs.
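A minimal sketch of the convergent/divergent contrast, using invented scores and hypothetical measures (balance, gait speed, depression) chosen only for illustration:

```python
# Hypothetical sketch: convergent vs. divergent validity as correlations.
# All scores and measure names are invented for illustration.
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

balance_scale    = [40, 52, 38, 60, 55, 47]         # target test
gait_speed       = [0.8, 1.1, 0.7, 1.3, 1.2, 1.0]   # closely related construct
depression_score = [9, 11, 8, 7, 12, 10]            # distinct construct

print("convergent r:", round(pearson_r(balance_scale, gait_speed), 2))
print("divergent  r:", round(pearson_r(balance_scale, depression_score), 2))
```

Construct validity is supported when the convergent correlation is high and the divergent correlation is near zero.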
Refers to the adequacy with which a test's items sample the complete universe of content.
•The items must adequately represent the full scope of the construct being studied.
•The number of items that address each component should reflect the relative importance of that component.
•The test should not contain irrelevant items.
Content validity
●The implication that an instrument appears to test what it is intended to test
●Not the same as content validity
•Face validity is a judgment by the users of a test after the test is developed
•Content validity evolves out of the process of planning and constructing a test
Face validity
●Comparison of the results of a test to an external criterion
•Index test
•Gold or reference standard as the criterion
●Two types
•Concurrent validity
•Predictive validity
Criterion-Related Validity
●Reflects the ability of an instrument to measure the theoretical dimensions of a construct
•Assessing presence of a latent trait
Construct validity
Methods of construct validity
•Known-groups method
•Convergence and divergence
•Factor analysis
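The known-groups method can be sketched as follows, using invented scores and a hypothetical grouping (fallers vs. non-fallers): an instrument with construct validity should discriminate between groups already known to differ on the construct.

```python
# Hypothetical sketch of the known-groups method.
# Group labels and balance scores are invented for illustration;
# a real study would typically compare groups with a statistical test.
from statistics import mean

fallers     = [22, 25, 19, 28, 24]   # group known to be impaired
non_fallers = [45, 50, 48, 52, 47]   # group known to be unimpaired

diff = mean(non_fallers) - mean(fallers)
print(f"Mean difference between known groups: {diff:.1f} points")
```

A large, statistically reliable separation between the known groups supports the claim that the instrument measures the intended latent trait.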
Standardized assessment designed to compare and rank individuals within a defined population.
norm-referenced test
Interpreted according to a fixed standard that represents an acceptable level of performance.
criterion-referenced test
Change scores are used to:
•Demonstrate effectiveness of an intervention
•Track the course of a disorder over time
•Provide a context for clinical decision making
Minimal clinically important difference (MCID)
The smallest difference that signifies an important difference in a patient’s condition
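Interpreting a change score against an MCID can be sketched like this; the 5-point threshold is invented for illustration (real MCID values are instrument- and population-specific):

```python
# Hypothetical sketch: judging a change score against an MCID.
# The 5-point MCID is an invented value for illustration only.
MCID = 5.0

def clinically_important(baseline, follow_up, mcid=MCID):
    """True if the observed change meets or exceeds the MCID threshold."""
    return abs(follow_up - baseline) >= mcid

print(clinically_important(40, 47))  # change of 7 points
print(clinically_important(40, 42))  # change of 2 points
```

Only the first change would be considered clinically important: the 2-point improvement, even if statistically detectable, falls below the threshold that signifies a meaningful difference to the patient.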
Methodological Studies: Validity
●Fully understand the construct
●Consider the clinical context
●Consider several approaches to validation
●Consider validity issues if adapting existing tools
●Cross-validate outcomes