Ch 5.1 Reliability Flashcards
Concept of Reliability
Consistency
- Within the test (Internal Consistency)
- Across different points in time (Test-Retest)
- Across different raters (Inter-rater) [Consistency across clinicians]
All tests contain error
- Your observed test score is your “true” (T) score + some error (E)
- X = T+E
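A minimal simulation sketch of X = T + E, assuming a hypothetical true score of 100 and normally distributed random error; it shows why random error washes out over many measurements:

```python
import random

random.seed(0)

true_score = 100  # hypothetical examinee's true score (T)
# Each observed score X is the true score plus random error E
observed = [true_score + random.gauss(0, 5) for _ in range(1000)]

mean_observed = sum(observed) / len(observed)
# Random error is unpredictable but averages toward zero, so the
# mean of many observed scores converges on the true score.
print(round(mean_observed, 1))
```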
Reliability
Repeatability or consistency of measurement
Reliability coefficient
- Index of reliability
- Indicates the ratio of true-score variance to total variance on a test
Variance
- Standard deviation squared
- Total variance = True variance (variance from true sources) + Error variance (variance from sources irrelevant to what is being measured)
Smaller error variance
Small error variance = higher reliability coefficient (closer to 1)
Larger error variance
Larger error variance = lower reliability coefficient (closer to 0)
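The true-variance/total-variance ratio can be sketched directly; the numbers below are made up to show the two cards above (small error variance pushes the coefficient toward 1, large error variance toward 0):

```python
def reliability(true_variance, error_variance):
    """Reliability coefficient = true-score variance / total variance."""
    return true_variance / (true_variance + error_variance)

# Small error variance -> coefficient closer to 1
print(reliability(9.0, 1.0))   # 9 / 10

# Large error variance -> coefficient closer to 0
print(reliability(9.0, 81.0))  # 9 / 90
```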
Measurement Error
- All factors associated with the process of measuring some variable, other than the variable being measured
Random error
- Noise
- Unpredictable, inconsistent
Systematic error
- Does not affect score consistency (reliability)
- However, does affect validity
Sources of Error Variance
Test construction
- Item sampling
- Content sampling
Test administration
- Environmental factors
  - ex: noise level, lighting
- Test-taker variables
  - ex: emotional or medical state
- Examiner-related variables
  - ex: I give you clues by the way I answer your questions
Test scoring & interpretation
- Unstructured personality tests
- Manually scored exams
Consistency Within the Test: Internal Consistency Reliability
- Use of a single test administered on one occasion to estimate reliability. Useful in understanding inter-item consistency (homogeneity)
Split-half reliability
- Correlating scores obtained from equivalent halves
- Ways to split items
- Random split
- Odd-even split
- 1st & 2nd half
- Cannot simply correlate the two halves, because shorter tests are generally less reliable than longer tests
- Must adjust the correlation statistically
Split-Half Statistics
- When using split-half reliability, you use the Spearman-Brown formula to calculate the full measure's reliability coefficient
Spearman Brown Formula
- Allows estimates of internal consistency from a correlation of two test halves (predicts reliability after changing test length)
- Calculate the effect of increasing or decreasing test length
- How many items needed to reach certain reliability
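The Spearman-Brown prophecy formula is r_new = n·r / (1 + (n − 1)·r), where n is the factor by which the test length changes. A short sketch covering both uses on the cards above (the .70 values are made-up examples):

```python
def spearman_brown(r, n):
    """Predicted reliability after changing test length by factor n."""
    return (n * r) / (1 + (n - 1) * r)

def length_factor(r, r_target):
    """Solve Spearman-Brown for n: how much longer the test must be
    to reach a target reliability."""
    return r_target * (1 - r) / (r * (1 - r_target))

# Half-test correlation of .70 -> predicted full-test (n = 2) reliability
print(round(spearman_brown(0.70, 2), 3))   # 0.824

# How many times longer must a test with r = .70 be to reach r = .90?
print(round(length_factor(0.70, 0.90), 2))  # 3.86
```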
Kuder Richardson Formula 20 (KR-20)
- Similar concept to Spearman-Brown
- Used when items are dichotomous (e.g., scored right/wrong)
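A sketch of the KR-20 computation, (k/(k−1))·(1 − Σpq / σ²_total), on a small made-up matrix of 0/1 responses (rows = examinees, columns = items); population variance is used throughout:

```python
def kr20(responses):
    """KR-20 for dichotomous (0/1) item data."""
    n, k = len(responses), len(responses[0])
    # p = proportion passing each item; q = 1 - p
    p = [sum(row[j] for row in responses) / n for j in range(k)]
    sum_pq = sum(pj * (1 - pj) for pj in p)
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n  # population variance
    return (k / (k - 1)) * (1 - sum_pq / var_total)

# Made-up data: 5 examinees x 4 right/wrong items
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(kr20(data), 2))  # 0.8
```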
Cronbach’s Coefficient Alpha
- The mean of all possible split half correlations
- Used with continuous variable items
- Preferred statistic for obtaining an estimate of internal consistency reliability
- Range from 0 (no similarity) to 1 (perfectly identical)
- Larger is generally better, but a very high alpha can signal redundant items
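Coefficient alpha, (k/(k−1))·(1 − Σσ²_item / σ²_total), sketched on made-up Likert-style ratings (rows = examinees, columns = items); with dichotomous items this formula reduces to KR-20:

```python
def cronbach_alpha(scores):
    """Coefficient alpha for item scores (population variances)."""
    n, k = len(scores), len(scores[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Sum of the variances of each individual item
    item_vars = sum(var([row[j] for row in scores]) for j in range(k))
    # Variance of the examinees' total scores
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Made-up data: 5 examinees x 3 Likert items (1-5 scale)
ratings = [
    [4, 5, 4],
    [2, 3, 2],
    [3, 4, 3],
    [5, 5, 5],
    [1, 2, 2],
]
print(round(cronbach_alpha(ratings), 2))  # 0.98
```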
Consistency Between Tests: Parallel/Alternate Forms
- Degree of relationship between various forms of a test (equivalence)
Parallel
- Must have the same means & variances of observed scores
- Estimate of the extent to which item sampling and other errors affect test scores
Alternate
- Equivalent with respect to content and difficulty level
- Time consuming and expensive to construct
- Advantage: controls for memory/practice effects