Week 6 Reliability & Validity Flashcards
What are the key criteria for evaluating quantitative measurement?
- Reliability
- Validity
Reliability
How consistently a data collection instrument measures the variable
Validity
Degree to which an instrument measures what it is supposed to measure
What are the 3 aspects of reliability?
- Stability
- Internal consistency
- Equivalence
Stability
Extent to which scores are similar across two separate administrations of an instrument; appropriate for fairly enduring characteristics
e.g. personality tests, IQ tests
How is stability assessed?
Test-retest reliability - the same instrument is given twice to the same group; scores may not be identical, but the differences should be small (summarized as a reliability coefficient)
Coefficients range from 0 to 1 (0.7-0.8 acceptable)
Internal consistency
Extent to which all subparts of an instrument measure the same trait; appropriate for multi-item instruments, assessed by administering the instrument on a single occasion
How is internal consistency assessed?
Cronbach’s alpha (coefficient alpha) - indicates how well a group of items together measure the trait of interest
0.7-0.9 acceptable
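Cronbach's alpha compares the sum of the individual item variances to the variance of the total scores. A minimal sketch in Python (the function name and the 3-item data set are illustrative, not from the source):

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one inner list per item, scores in the same respondent order."""
    k = len(items)                                   # number of items
    item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)] # each respondent's total score
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical 3-item scale answered by 5 respondents
items = [
    [3, 4, 2, 5, 4],  # item 1
    [3, 5, 2, 4, 4],  # item 2
    [2, 4, 3, 5, 3],  # item 3
]
print(round(cronbach_alpha(items), 2))  # 0.87 - within the 0.7-0.9 acceptable band
```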
Equivalence
Concerns the degree to which two or more independent observers agree on the scoring of an instrument
How is equivalence assessed?
Inter-rater reliability, using Cohen's kappa (κ) & the intraclass correlation coefficient (ICC)
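Cohen's kappa measures observed agreement between two raters corrected for the agreement expected by chance. A minimal sketch in Python (the function name and the yes/no ratings are illustrative, not from the source):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical yes/no ratings of 10 cases by two independent observers
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
print(round(cohens_kappa(a, b), 2))  # 0.58
```

Here the raters agree on 8 of 10 cases (0.80 observed), but because 0.52 agreement is expected by chance alone, kappa is only 0.58.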
What are the 2 aspects of validity?
- Content validity
- Criterion related validity
Content validity
Appropriateness & adequacy of instrument content
How is content validity assessed?
An expert panel rates each item's relevance on a 4-point scale; the content validity index (CVI) is the proportion of experts rating an item 3 or 4 (CVI of 0.9 or above desirable)
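The item-level CVI is simply the share of experts who rate the item relevant (3 or 4 on the 4-point scale). A minimal sketch in Python (the function name and the ratings are illustrative, not from the source):

```python
def item_cvi(ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4 on a 4-point scale."""
    return sum(r >= 3 for r in ratings) / len(ratings)

# Hypothetical relevance ratings of one item by 5 experts
ratings = [4, 3, 4, 4, 2]
print(item_cvi(ratings))  # 0.8 - below the desirable 0.9, so the item may need revision
```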
Criterion validity
Extent to which an instrument corresponds to a gold standard or another well-established measure of the target variable
Types of criterion validity
- Concurrent validity
- Predictive validity