U11 Flashcards
Measurement
involves rules for assigning numbers to objects to designate the quantity of an attribute
Errors of Measurement
Even the best instruments have a certain amount of error
Calculating Error
Obtained score = True score ± Error
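A minimal Python sketch of this relationship, using hypothetical numbers (a true score of 25 and normally distributed error) purely for illustration:

```python
import random

# Classical test theory sketch: obtained score = true score +/- error.
# The true score and error spread below are hypothetical values.
true_score = 25.0        # the person's actual amount of the attribute
error_sd = 2.0           # spread of random measurement error

# Each administration yields the true score plus a random error component.
obtained_scores = [round(true_score + random.gauss(0, error_sd), 1) for _ in range(5)]
print(obtained_scores)   # obtained scores vary around the true score of 25
```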
Most common factors contributing to Error
Situational contaminants, response-set biases, transitory personal factors, administration variations, and item sampling
Reliability
the consistency with which an instrument measures the target attribute. Three related concepts:
Stability, the reliability coefficient, and test-retest reliability
Stability
of a measure is the extent to which the same scores are obtained when the instrument is used with the same people on separate occasions. Assessed through Test-retest reliability procedures
Reliability coefficient
a numeric index of a measure’s reliability. Ranges from 0.00 to 1.00
The closer to 1.00, the more reliable (stable) the measuring instrument
Test-retest reliability
the correlation between scores from two administrations of the same instrument to the same people; the further apart in time the administrations are, the more likely the reliability coefficient is to decline.
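A minimal sketch of a test-retest reliability estimate, assuming hypothetical scores from two administrations and Python 3.10+ for statistics.correlation:

```python
# Test-retest reliability: correlate scores from two administrations of the
# same instrument to the same people. Scores below are hypothetical.
from statistics import correlation  # Python 3.10+

time1 = [12, 15, 20, 8, 17, 14]   # scores at first administration
time2 = [13, 14, 19, 9, 18, 13]   # scores at second administration (retest)

reliability = correlation(time1, time2)   # Pearson r serves as the coefficient
print(round(reliability, 2))              # close to 1.00 -> stable measure
```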
Internal Consistency
relevant for scales that involve summing items; ideally, all items measure the same critical attribute. An instrument has internal consistency reliability to the extent that all of its subparts measure the same characteristic
Split-half technique
items are split into two half-scores (e.g., odd-numbered vs. even-numbered items), and these two half-scores are used to generate a reliability coefficient. If both halves are really measuring the same attribute, the correlation between them will be high
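A minimal sketch of the split-half technique using hypothetical item responses and Python 3.10+; the final Spearman-Brown step is the usual correction for estimating full-length reliability from the half-score correlation:

```python
# Split-half: split each person's items into two halves (odd vs. even items),
# total each half, and correlate the half-scores. Responses are hypothetical.
from statistics import correlation  # Python 3.10+

responses = [                      # one row of item scores per person
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 3],
    [3, 4, 3, 3, 4, 4],
    [5, 5, 4, 5, 5, 5],
    [1, 2, 2, 1, 1, 2],
]

odd_half  = [sum(row[0::2]) for row in responses]   # items 1, 3, 5
even_half = [sum(row[1::2]) for row in responses]   # items 2, 4, 6

r = correlation(odd_half, even_half)
split_half = 2 * r / (1 + r)       # Spearman-Brown correction to full length
print(round(split_half, 2))        # high value -> halves measure the same attribute
```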
Cronbach alpha (Coefficient alpha)
gives an estimate of the split-half correlation for all possible ways of dividing the measure into two halves. The index ranges from 0.00 to 1.00; the closer to 1.00, the more internally consistent the measure.
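A minimal sketch of Cronbach's alpha computed from its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), using the same kind of hypothetical item-response matrix as above:

```python
# Cronbach's alpha from item-level data (hypothetical responses).
from statistics import pvariance

responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 3],
    [3, 4, 3, 3, 4, 4],
    [5, 5, 4, 5, 5, 5],
    [1, 2, 2, 1, 1, 2],
]

k = len(responses[0])                                    # number of items
item_vars = [pvariance(col) for col in zip(*responses)]  # variance of each item
total_var = pvariance([sum(row) for row in responses])   # variance of total scores

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))   # closer to 1.00 -> more internally consistent
```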
Equivalence
used primarily with observational instruments; concerns the consistency or equivalence of the instrument when used by different observers or raters. Can be assessed with interrater (interobserver) reliability, estimated by having 2+ observers make simultaneous, independent observations and comparing their ratings.
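A minimal sketch of an interrater reliability estimate using simple percent agreement between two hypothetical observers (other indexes, such as correlating the raters' scores, are also used):

```python
# Interrater reliability: two observers rate the same events independently;
# agreement = number of matching ratings / total ratings. Ratings are hypothetical.
observer_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no"]
observer_b = ["yes", "no", "yes", "no",  "no", "yes", "no", "yes"]

agreements = sum(a == b for a, b in zip(observer_a, observer_b))
interrater = agreements / len(observer_a)
print(interrater)   # 0.75 -> observers agreed on 6 of 8 observations
```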
Interpretation of Reliability Coefficients
- Hypotheses that are not supported may be unsupported due to unreliability of the measuring tool
- Low reliability prevents an adequate test of the research hypotheses
- Reliability estimates obtained via different procedures for the same instrument are NOT identical
- Sample heterogeneity increases the reliability coefficient (instruments are designed to detect differences; a homogeneous sample yields a lower reliability coefficient)
- Longer instruments (more items) tend to have higher reliability than shorter instruments
Validity
degree to which an instrument measures what it is supposed to be measuring
*An instrument can have high reliability with no evidence of validity
(high reliability does not imply high validity)
*Low reliability IS evidence of low validity
(an unreliable instrument cannot be a valid measure)
Validity coefficient - a value greater than or equal to .70 is desirable (range 0.00 to 1.00)
Face validity
whether the instrument looks as though it is measuring the appropriate construct