Ch. 6 - Validity Flashcards
validity
judgment of how well a test measures what it purports to measure in a particular context; judgment based on evidence about the appropriateness of inferences drawn from test scores
inference
logical result or deduction
a valid test has been shown to be valid for
a particular use with a particular population of testtakers at a particular time
no test is ____ valid
universally valid for all times, all uses, and with all populations
a test is valid within ____
“reasonable boundaries” of a contemplated usage
validation
the process of gathering and evaluating evidence about validity
validation studies can be done with ____
a group of testtakers, to provide insights regarding a particular group of testtakers as compared to a norming sample (local validation)
what are the three categories of validity?
content, criterion-related, and construct (umbrella)
content validity
scrutinizing the test’s content
criterion-related validity
relating scores obtained on the test to other test scores or other measures
construct validity
umbrella validity; all others fall under it. a comprehensive analysis.
analysis of how test scores relate to other measures and how scores can be understood within some theoretical framework. (maybe your hypothesis about what’s different about high and low test scorers)
face validity
not one of the three C’s
what a test appears to measure or how relevant the test items look to the testtaker
why does face validity matter?
testtakers may not put forth good effort; parents may complain about their kids taking a non-face-valid test; lawsuits may be filed
content validity
judgment of how adequately a test samples behavior representative of the whole universe of behavior that the test was designed to sample.
e.g., an assertiveness test assesses behavior on the job, in social situations, etc.
e.g., a test samples all chapters of the text
how can we judge content validity?
convene a panel of expert judges; if more than half indicate that an item is essential, that item has some content validity. the more judges who agree, the greater the content validity
what’s a problem with establishing content validity?
we frequently don’t know all of the items in the theoretical domain of possible items
criterion-related validity
a judgment of how adequately a test score can be used to infer an individual’s standing on a criterion being measured (3 types: concurrent, predictive, incremental)
concurrent validity
a judgment of how adequately a test score can be used to infer an individual’s present standing on a criterion (ex: diagnosing someone from a test when you already know they have the thing - perhaps from a diff validated test. the test might be an easier way to reach the diagnosis)
predictive validity
measures of the relationship between the test scores and a criterion measure obtained at a future time (ex: using GRE scores to predict graduate course passing)
criterion
standard against which a test score is measured; can be almost anything (behavior, diagnosis)
a good criterion is
relevant (pertinent to the matter at hand); valid (if X is being used to predict Y, then we need to know X is valid); uncontaminated (not based on a predictor measure: if X is used to predict Y, and Y is in part based on X, then the criterion Y is contaminated)
what are three types of criterion-related validity?
concurrent validity
predictive validity
incremental validity
base rate
extent to which a particular trait, behavior, etc. exists in the population (a proportion)
hit rate
proportion of people that a test accurately identifies as having a specific trait
miss rate
proportion of people a test fails to identify as having a trait; an inaccurate prediction
false positive
test identifies a testtaker as having the trait when they don’t
false negative
test does not identify a testtaker as having the trait when they do
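The four outcomes above can be tallied from hypothetical counts (the numbers below are invented for illustration; here a "hit" is treated as any correct classification, positive or negative, and a "miss" as any incorrect one):

```python
# Illustrative counts for a hypothetical screening test (n = 100):
# 20 people truly have the trait, so the base rate is 0.20.
true_positives = 15   # test says "trait" and the person has it
false_negatives = 5   # test misses the trait (person has it)
false_positives = 10  # test says "trait" but the person doesn't have it
true_negatives = 70   # test correctly rules the trait out

total = true_positives + false_negatives + false_positives + true_negatives

base_rate = (true_positives + false_negatives) / total   # trait prevalence
hit_rate = (true_positives + true_negatives) / total     # accurate predictions
miss_rate = (false_positives + false_negatives) / total  # inaccurate predictions

print(base_rate, hit_rate, miss_rate)  # 0.2 0.85 0.15
```

Note that hit rate is only impressive relative to the base rate: with a base rate of 0.20, simply calling everyone "no trait" already yields 0.80 accuracy, which is why a test's value is judged by how much it improves on that baseline.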
validity coefficient
correlation coefficient; provides a measure of the relationship between test scores and the scores on the criterion measure; no rules to determine a minimum accepted size; affected by restriction of range
restriction of range
shrinkage of a validity coefficient that occurs when the sample's scores span a narrower range than the population's, often through self-selection; e.g., testing firefighting skills only on firefighters rather than on the general population
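A quick simulation (with made-up numbers, not real test data) shows how restricting the range of test scores shrinks the validity coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulate a predictor (test score) and a criterion that correlate
# moderately in the full population (hypothetical values).
test = rng.normal(size=n)
criterion = 0.6 * test + 0.8 * rng.normal(size=n)

r_full = np.corrcoef(test, criterion)[0, 1]

# Restrict the range: keep only the top quarter of test scorers,
# mimicking self-selection (e.g., studying only hired firefighters).
cutoff = np.quantile(test, 0.75)
mask = test >= cutoff
r_restricted = np.corrcoef(test[mask], criterion[mask])[0, 1]

print(round(r_full, 2), round(r_restricted, 2))
```

The restricted-range correlation comes out noticeably smaller than the full-population correlation, even though the underlying relationship between test and criterion is unchanged.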
incremental validity
kind of part of predictive; the degree to which an additional predictor variable explains something about the criterion measure that's not already explained by predictors in use (e.g., hours of sleep, time spent in the library, and time spent studying should all help predict GPA; if library time overlaps with studying, it doesn't have great incremental validity)
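The GPA example can be sketched by comparing how much each added predictor raises R-squared beyond a baseline model (all data below is simulated for illustration; this is not from any real study):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical predictors of GPA: library time overlaps heavily with
# study time, while sleep carries mostly new information.
study = rng.normal(size=n)
library = study + 0.3 * rng.normal(size=n)   # largely redundant with study
sleep = rng.normal(size=n)                   # independent predictor
gpa = 0.5 * study + 0.4 * sleep + rng.normal(size=n)

def r_squared(predictors, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

base = r_squared([study], gpa)
plus_library = r_squared([study, library], gpa)
plus_sleep = r_squared([study, sleep], gpa)

# Library time barely improves prediction beyond study time;
# sleep improves it much more, i.e., higher incremental validity.
print(round(plus_library - base, 3), round(plus_sleep - base, 3))
```

The increment from adding library time is near zero because it overlaps with study time, while the increment from adding sleep is substantial: that gap is what incremental validity quantifies.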
one measure of a test’s value is
the extent to which it improves on the hit rate for a trait that existed before the test was used
construct validity is not mutually exclusive with ____
criterion
construct validity is shown when…
(1) test is homogeneous
(2) test scores change over time
(3) post-test scores vary as predicted (from some intervention)
(4) test scores from people of different groups vary as predicted (AKA method of contrasted groups)
(5) test scores correlate with scores on other tests as predicted (BDI correlates with another depression index)
example of using the method of contrasted groups
psych patients are more depressed than random Wal-Mart shoppers
convergent evidence
if scores on the new test correlate highly in the predicted direction with scores on an older, more established, and already validated test that’s testing the same thing
divergent evidence
if scores on the new test show little relationship with scores on tests measuring constructs that, according to theory, should be unrelated (also called discriminant evidence)
a valid test can be used…
fairly or unfairly
test bias
a factor inherent in a test that systematically prevents accurate, impartial measurement. systematic = not due to chance. can be identified and remedied; ex: a weighted coin toss
what’s a type of test bias?
rating error
rating error
a judgment resulting from the intentional or unintentional misuse of a rating scale
examples of rating error
leniency/generosity error (lenient on grading)
severity errors - always rate bad
central tendency errors - all ratings at the middle
halo effect - tendency of a rater to give a ratee a higher rating than deserved on everything because of an overall favorable impression (e.g., a Lady Gaga speech will never be rated badly, whatever the topic, if the rater is the president of her fan club)
test fairness
the extent to which a test is used in an impartial, just, and equitable way; has to do with values and opposing points of view
test ____ can be seen as a statistical problem, test ____ cannot
test bias can be seen as a statistical problem, test fairness cannot