Reliability Flashcards
Tests that are free from measurement error
Reliable
measurements are consistent, or repeatable
Reliability
who pioneered reliability
Charles Spearman
Measurement instruments are imperfect, thus we use this formula
X = T + E
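A minimal Python sketch of the X = T + E idea (the population values, numpy usage, and variable names are illustrative assumptions, not part of the flashcards): simulated true scores plus random error, with reliability recovered as the ratio of true-score variance to observed-score variance.

```python
import numpy as np

# Illustrative simulation of the classical true-score model X = T + E
rng = np.random.default_rng(0)
true_scores = rng.normal(100, 15, size=10_000)  # T: hypothetical true scores
error = rng.normal(0, 5, size=10_000)           # E: random measurement error
observed = true_scores + error                  # X = T + E

# Reliability is the ratio of true-score variance to observed-score variance
print(round(true_scores.var() / observed.var(), 2))  # close to 15**2 / (15**2 + 5**2) = 0.90
```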
problems created by using a limited number of items to represent a larger, more complicated construct
domain sampling model
it is the ratio of the variance of the observed score on the shorter test to the variance of the long-run true score
reliability
focuses on item difficulty to assess ability
Item Response Theory
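A minimal sketch of the idea, assuming the one-parameter (Rasch) form of IRT; the function name and the example difficulty values are illustrative:

```python
import numpy as np

def rasch_probability(theta, difficulty):
    """One-parameter (Rasch) IRT model: probability of a correct response
    given examinee ability (theta) and item difficulty, both on a logit scale."""
    return 1.0 / (1.0 + np.exp(-(theta - difficulty)))

# An average examinee (theta = 0) on an easy item vs. a hard item
print(rasch_probability(0.0, -1.0))  # about 0.73
print(rasch_probability(0.0, 2.0))   # about 0.12
```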
it is an index of reliability
Reliability coefficient
a proportion that indicates the ratio between the true score variance on a test and the total variance
reliability coefficient
Sources of Error
Test Construction, Test Administration, Test Scoring and Interpretation
another name for the test-retest reliability estimate
Time Sampling Method
obtained by correlating pairs of scores from the same people on 2 different administrations of the same test
Test-Retest Reliability Estimate
The estimate of test-retest reliability is called the
coefficient of stability
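A minimal sketch of computing the coefficient of stability, assuming numpy and hypothetical scores from two administrations of the same test (the numbers are made up for illustration):

```python
import numpy as np

# Hypothetical scores of the same 8 examinees on two administrations of one test
time_1 = np.array([12, 15, 18, 20, 22, 25, 27, 30])
time_2 = np.array([13, 14, 19, 21, 20, 26, 28, 29])

# Coefficient of stability: Pearson r between the two sets of scores
r_test_retest = np.corrcoef(time_1, time_2)[0, 1]
print(round(r_test_retest, 2))
```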
it exists when, for each form of the test, the means and variances of the observed test scores are equal
Parallel Forms and Alternate Forms Reliability
obtained by correlating 2 pairs of scores obtained from equivalent halves of a single test administered once
Split Half Reliability Estimates
Steps in Split Half Reliability
1. Divide the test into equivalent halves.
2. Calculate the Pearson r between scores on the two halves of the test.
3. Adjust the half-test reliability using the Spearman-Brown formula.
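A sketch of the three steps in Python, assuming a simulated 0/1 examinee-by-item matrix (the data, the odd/even split, and all names are illustrative):

```python
import numpy as np

# Simulated 0/1 answers of 50 examinees to 10 items driven by a common ability
rng = np.random.default_rng(1)
ability = rng.normal(0, 1, size=50)
items = (ability[:, None] + rng.normal(0, 1, size=(50, 10)) > 0).astype(int)

# Step 1: divide the test into equivalent halves (odd vs. even items)
half_a = items[:, 0::2].sum(axis=1)
half_b = items[:, 1::2].sum(axis=1)

# Step 2: Pearson r between the two half-test scores
r_half = np.corrcoef(half_a, half_b)[0, 1]

# Step 3: Spearman-Brown adjustment to full-test length
r_full = (2 * r_half) / (1 + r_half)
print(round(r_half, 2), round(r_full, 2))
```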
refers to the degree of correlation among all items on a scale
Inter item consistency
it measures a single trait
Homogenous Test
Used when the items are highly homogeneous; it gives the same result as split-half reliability
KR-20 formula
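A sketch of the KR-20 computation, assuming numpy and a hypothetical 0/1 examinee-by-item matrix; the function name and data are illustrative:

```python
import numpy as np

def kr20(items):
    """Kuder-Richardson 20 for dichotomous (0/1) item scores.
    items: 2-D array, rows = examinees, columns = items."""
    k = items.shape[1]                         # number of items
    p = items.mean(axis=0)                     # proportion passing each item
    q = 1 - p                                  # proportion failing each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of examinees' total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

# Hypothetical responses of 5 examinees to 4 dichotomous items
responses = np.array([[1, 1, 1, 0],
                      [1, 1, 0, 0],
                      [1, 0, 0, 0],
                      [1, 1, 1, 1],
                      [0, 0, 0, 0]])
print(round(kr20(responses), 2))
```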
reliability formula that doesn't require calculation of p and q
KR-21
item difficulty in KR-21
average of 50%
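A corresponding sketch for KR-21, which drops the per-item p and q and uses only the mean and variance of total scores (assumed function name, illustrative only):

```python
import numpy as np

def kr21(items):
    """Kuder-Richardson 21: needs only the mean and variance of total scores
    (no per-item p and q), treating items as roughly equal in difficulty.
    items: 2-D numpy array, rows = examinees, columns = 0/1 items."""
    k = items.shape[1]
    totals = items.sum(axis=1)
    mean, var = totals.mean(), totals.var(ddof=1)
    return (k / (k - 1)) * (1 - (mean * (k - mean)) / (k * var))
```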
the mean of all possible split-half correlations
Coefficient Alpha
Coefficient alpha is corrected by what
Spearman-Brown Formula
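A sketch of coefficient alpha computed directly from an examinee-by-item score matrix, assuming numpy; the function name and the rating data are hypothetical:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an examinee-by-item score matrix
    (items may be dichotomous or Likert-type)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 5-point ratings of 4 examinees on 3 items
ratings = np.array([[4, 5, 4],
                    [2, 3, 2],
                    [5, 5, 4],
                    [1, 2, 1]])
print(round(cronbach_alpha(ratings), 2))
```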
how to increase reliability?
- Increase the number of items or observations.
- Eliminate items that are unclear.
- Standardize the conditions under which the test is taken.
- Moderate the degree of difficulty of the test.
- Minimize the effects of external events.
- Standardize the instructions.
- Maintain consistent scoring procedures.
type of reliability that measures stability
test retest
type of reliability that measures equivalence
Parallel or Alternate Forms
type of reliability that measures agreement
Inter-rater
type of reliability that measures the consistency of each item with the underlying construct
Internal consistency
Statistical Computation for Test-retest
Correlation (Pearson r or Spearman rho)
Statistical Computation for Stability
Correlation (Pearson r or Spearman rho)
Statistical Computation for equivalence
Correlation (Pearson r or Spearman rho)
Statistical Computation for parallel or alternate forms
Correlation (Pearson r or Spearman rho)
Statistical Computation for Agreement
Percentage agreement and Cohen's kappa coefficient
Statistical Computation for inter rater
Percentage agreement and Cohen's kappa coefficient
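A sketch of Cohen's kappa for two raters assigning categorical codes to the same cases, assuming numpy; the rater codes are made up for illustration:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who each assign a categorical code to the same set of cases."""
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(rater_a, rater_b)
    observed = np.mean(rater_a == rater_b)          # raw percentage agreement
    expected = sum(                                  # agreement expected by chance
        np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical codes from two raters on 10 cases
print(round(cohens_kappa([1, 1, 2, 2, 3, 3, 1, 2, 3, 1],
                         [1, 1, 2, 3, 3, 3, 1, 2, 2, 1]), 2))  # about 0.70
```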
Statistical Computation for Internal Consistency
Cronbach's Alpha, Kuder-Richardson, Ordinal/Composite
Statistical Computation for Consistency
Cronbach's Alpha, Kuder-Richardson, Ordinal/Composite
the usual acceptable internal consistency (alpha) index
.70
A newly developed test should not have what internal consistency?
should not have internal consistency of .90 and above
modest reliability
.60 - .69
Considerations in the use and purpose of reliability coefficients
- Homogeneity vs. heterogeneity of test items
- Dynamic vs. static characteristics
- Speed tests vs. power tests