Scale Reliability Flashcards
What is Reliability?
- Degree to which the measurement is free from measurement error
- The extent to which the differences in respondents’ test scores are a function of their true differences, as opposed to measurement error.
How is Reliability determined?
Reliability falls on a continuum; we estimate the degree to which scores from a measure are reliable, rather than classifying a measure as simply reliable or unreliable
What is Measurement Error?
Random noise that creates inconsistency between observed and true scores
- There is no perfectly reliable measure
What does it mean for a test to have "strong reliability"?
Strong signal and/or little noise (less error)
- e.g., severe or constant pain, large drops in BP, severe tremor
How does reliability relate to the proportion of variance?
It is tied to true differences among people
- A heterogeneous sample increases reliability
- The reliability coefficient is a score from 0 to 1
What does a coefficient of 1 for reliability mean?
True score variance is equal to observed score variance
- No measurement error in the observed scores (not attainable in practice)
What does a coefficient of 0 for reliability mean?
There is no true score variance (everyone has the same true score), so none of the observed variance reflects true differences
- Occurs when respondents do not differ
Proportion of Variance
Reliability is the proportion of observed score variance that is attributable to true score variance
- Equivalently, reliability is 1 minus the proportion of observed variance due to error (see the sketch below)
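A minimal sketch in Python (using numpy; the numbers and variable names are illustrative, not from the source) that simulates true scores plus random error and checks that reliability equals true score variance divided by observed score variance:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000                       # respondents
true = rng.normal(50, 10, n)     # true scores (SD = 10)
error = rng.normal(0, 5, n)      # random measurement error (SD = 5)
observed = true + error          # classical test theory: observed = true + error

var_true = true.var()
var_obs = observed.var()

reliability = var_true / var_obs  # proportion of observed variance that is true
print(round(reliability, 2))      # approximately 0.80 here: 100 / (100 + 25)
```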
Correlations in relation to Reliability
- Reliability equals the squared correlation between observed scores and true scores
- Reliability equals 1 minus the squared correlation between observed scores and error scores
Types of Reliability
- Internal Consistency: Different sets of items from the same PRO
- Test-Retest: Over time
- Interrater: By different raters on the same occasion
- Intra-rater: By the same rater on different occasions
Assumption of Test-Retest Reliability
True scores remain stable across the test-retest interval
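Under that assumption, test-retest reliability is commonly estimated as the correlation between scores from the two administrations; a quick sketch with hypothetical scores:

```python
import numpy as np

# hypothetical scores for 8 respondents at two time points
time1 = np.array([12, 18, 25, 31, 40, 22, 28, 35])
time2 = np.array([14, 17, 27, 30, 38, 24, 26, 36])

test_retest_r = np.corrcoef(time1, time2)[0, 1]  # Pearson correlation
print(round(test_retest_r, 2))
```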
Internal Consistency Reliability
- Requires only one test to be completed at one point in time
- Estimates the reliability of a multi-item test
- Most widely used method for estimating reliability
What is the most widely used method of estimating Internal consistency reliability?
Cronbach’s Alpha
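A minimal sketch of the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), applied to a made-up respondents-by-items matrix (the data are purely illustrative):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# hypothetical 5 respondents x 4 items
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [1, 2, 1, 2],
    [4, 3, 4, 4],
])
print(round(cronbach_alpha(scores), 2))  # high here because the made-up items are very consistent
```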
What are two factors that affect internal consistency reliability?
- How much observed differences on one part of the test are consistent with observed differences on other parts of the test
- Test length (longer tests tend to produce more reliable scores than shorter ones; see the Spearman-Brown sketch below)
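The test-length effect is conventionally quantified with the Spearman-Brown prophecy formula; a small sketch (the starting reliability of 0.70 is just an example):

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability when the test is lengthened by `length_factor`,
    assuming the added items are of comparable quality."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

print(round(spearman_brown(0.70, 2), 2))    # doubling a 0.70-reliable test -> ~0.82
print(round(spearman_brown(0.70, 0.5), 2))  # halving it -> ~0.54
```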
How are confidence intervals affected by poor reliability?
They are less precise and wider than those obtained from a more reliable measure
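One way to see this is through the standard error of measurement, SEM = SD * sqrt(1 - reliability): the width of a confidence interval around an observed score grows as reliability drops. A sketch with made-up numbers:

```python
import math

def ci_width(sd: float, reliability: float, z: float = 1.96) -> float:
    """Width of an approximate 95% confidence interval around an observed score."""
    sem = sd * math.sqrt(1 - reliability)  # standard error of measurement
    return 2 * z * sem

print(round(ci_width(sd=10, reliability=0.90), 1))  # ~12.4
print(round(ci_width(sd=10, reliability=0.60), 1))  # ~24.8 (wider under poorer reliability)
```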
What is Variability?
The variability of observed scores is larger than the variability among true scores
- The extra variance is noise induced by measurement error