Chapter 5: Test Worthiness Flashcards
Coefficient of determination
= shared variance
- a statement about the factors underlying two variables that account for their relationship
- how much of the variability in one variable can be explained by variation in the other
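To make this concrete, here is a minimal NumPy sketch (the paired scores are invented for illustration): squaring the Pearson correlation r gives r², the proportion of shared variance.

```python
import numpy as np

# Hypothetical paired scores: hours studied vs. exam score
hours  = np.array([2, 4, 5, 7, 8, 10])
scores = np.array([55, 60, 68, 74, 80, 85])

r = np.corrcoef(hours, scores)[0, 1]  # Pearson correlation
r_squared = r ** 2                    # coefficient of determination

# r^2 is the proportion of variability in exam scores explained
# by variation in hours studied (here r is about .99, so r^2 is about .98)
print(f"r = {r:.2f}, r^2 = {r_squared:.2f}")
```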
validity
degree to which all evidence supports the intended interpretation of test scores for the intended purpose
–in short: does it measure what it's supposed to measure?
content validity
does the content of the test represent the domain it is supposed to cover?
four step process of developing content validity
- survey domain
- content matches domain
- specific items match content
- analyze relative importance of each objective
criterion-related validity
relationship between the test and an outside criterion the test should be related to
–two types: concurrent validity and predictive validity
concurrent validity
does instrument relate to another criterion NOW
predictive validity
does instrument relate to another criterion in future
concepts related to predictive validity
- standard error of the estimate (see sketch below): the expected margin of error when using the known value of one variable to predict scores on a second variable
- false positive: instrument predicts an attribute that doesn't exist
- false negative: instrument forecasts no attribute when the attribute does exist
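A minimal sketch of the standard error of the estimate using the common shortcut SE_est = s_y · √(1 − r²); the predictor/criterion data below are hypothetical.

```python
import numpy as np

# Hypothetical data: aptitude test (predictor) and job rating (criterion)
x = np.array([10, 12, 15, 18, 20, 25])
y = np.array([3.0, 3.4, 3.9, 4.1, 4.6, 5.0])

r   = np.corrcoef(x, y)[0, 1]  # predictive validity coefficient
s_y = y.std(ddof=1)            # standard deviation of the criterion

# Expected spread of actual criterion scores around the scores
# predicted from x; smaller means more accurate prediction
se_est = s_y * np.sqrt(1 - r ** 2)
print(f"r = {r:.2f}, SE_est = {se_est:.2f}")
```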
construct validity
extent to which instrument measures a theoretical or hypothetical trait
–clearly define the trait or concept you are measuring
four methods of establishing construct validity
experimental design, factor analysis, convergence with other instruments, discrimination (divergence) from other measures
experimental design
creating hypotheses and research studies that show the instrument captures the intended construct
–or meta-analysis
factor analysis
demonstrates the statistical relationships among subscales
-how similar or different are the subscales?
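Full factor analysis is beyond a flashcard, but the underlying question can be sketched with a correlation matrix of subscale scores (all data below are invented):

```python
import numpy as np

# Hypothetical subscale totals for six test takers
anxiety     = np.array([14, 18, 11, 20, 16, 9])
worry       = np.array([13, 17, 12, 19, 15, 10])  # should overlap with anxiety
sociability = np.array([8, 10, 16, 14, 14, 10])   # should stand apart

# High off-diagonal correlations suggest two subscales tap the same
# factor; correlations near zero suggest distinct factors
print(np.round(np.corrcoef([anxiety, worry, sociability]), 2))
```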
convergence
comparing test scores to other, well-established tests that measure the same construct
–you want a high correlation
divergence
correlate test scores with other tests that measure something different
-you want a low correlation
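Both checks side by side as a hypothetical sketch (all test names and scores are invented): a high correlation with a same-construct test supports convergence; a near-zero correlation with a different-construct test supports divergence.

```python
import numpy as np

new_depression_test = np.array([50, 62, 55, 70, 66, 58])
established_dep     = np.array([48, 60, 57, 72, 64, 55])  # same construct
unrelated_math      = np.array([75, 80, 65, 75, 60, 65])  # different construct

r_conv = np.corrcoef(new_depression_test, established_dep)[0, 1]  # want high
r_div  = np.corrcoef(new_depression_test, unrelated_math)[0, 1]   # want low
print(f"convergent r = {r_conv:.2f}, divergent r = {r_div:.2f}")
```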
reliability
accuracy/consistency of test scores
3 types of reliability
test-retest, alternate/parallel/equivalent forms, internal consistency (split-half or odd-even, coefficient alpha, Kuder-Richardson)
test retest
give the test twice to the same group of people (problems: learning effects, familiarity with the test)
alternate, parallel, equivalent forms
two forms of the same test
- correlate scores on the first form with scores on the second
–BUT: are two forms ever really equivalent?
internal consistency reliability
how do individual items relate to each other and to the test as a whole
- reliability within a single test rather than across multiple administrations
- 3 ways to establish: split-half/odd-even, Cronbach's alpha, and Kuder-Richardson (alpha is sketched below)
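Cronbach's alpha is simple enough to compute directly; a minimal sketch with invented item responses:

```python
import numpy as np

# Hypothetical responses: rows = test takers, columns = items
items = np.array([
    [4, 3, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])

k = items.shape[1]                         # number of items
item_vars = items.var(axis=0, ddof=1)      # variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores

# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / total variance)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"alpha = {alpha:.2f}")
```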
item response theory
extension of classical test theory (which looks at the amount of error in the total test)
–IRT: the probability that an individual will answer each item correctly, given his or her level on the quality being assessed
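A sketch of that idea using the common two-parameter logistic (2PL) IRT model; the parameter values below are illustrative, not from any real test.

```python
import numpy as np

def p_correct(theta, a=1.0, b=0.0):
    # 2PL model: probability that a person with ability theta answers
    # correctly an item with discrimination a and difficulty b
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Compare an easy item (b = -1) with a hard item (b = +1)
for theta in (-1.0, 0.0, 1.0):
    print(f"theta={theta:+.0f}: easy={p_correct(theta, b=-1):.2f}, "
          f"hard={p_correct(theta, b=1):.2f}")
```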
Griggs v. Duke Power Co.
tests used for hiring and advancement must be shown to predict job performance
ADA (Americans with Disabilities Act)
accommodations must be made for individuals taking tests for employment
Carl Perkins Act
individuals with disabilities have the right to vocational assessment, counseling, and placement
IDEA (Individuals with Disabilities Education Act)
assures the right of students suspected of having a learning disability to be tested at the school's expense
Civil Rights Acts
series of laws concerned with tests used in employment and promotion
Section 504 of the Rehabilitation Act
relative to assessment: any instrument used to measure appropriateness for a program or service must measure the person's ability, not reflect his or her disability
practicality
time, cost, format, readability, ease of administration, scoring, and interpretation
steps to selecting and administering tests
- determine the client's goals
- choose instruments relevant to those goals
- research the instruments
- examine test worthiness
- choose!