Measurement and Survey Research: Ch. 5 & 7 Flashcards
Level of Measurement
The relationship between numerical values on a measure. The different levels of measurement (nominal, ordinal, interval, ratio) determine how you can treat the measure when analyzing it. For instance, it makes sense to compute an average of an interval or ratio variable, but not of a nominal or ordinal one.
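As an illustration with made-up numbers, the level of measurement determines which summary statistics are meaningful:

```python
# Made-up data at three levels of measurement.
jersey_numbers = [7, 10, 23, 99]       # nominal: numbers are only labels
temps_f = [68.0, 72.5, 59.3, 80.1]     # interval: differences are meaningful
weights_kg = [61.5, 82.0, 74.3, 90.8]  # ratio: true zero, so ratios are meaningful

# A mean is sensible for interval and ratio data...
mean_temp = sum(temps_f) / len(temps_f)
# ...and a ratio ("this person weighs 1.33x as much") only for ratio data.
weight_ratio = weights_kg[1] / weights_kg[0]
# Averaging jersey_numbers would be computable but meaningless.
print(mean_temp, round(weight_ratio, 2))
```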
Nominal Level of Measurement
Measuring a variable by assigning a number arbitrarily in order to name it numerically so that it can be distinguished from other objects. The jersey numbers in most sports are measured at a nominal level.
Ordinal Level of Measurement
Measuring a variable using rankings. Class rank is a variable measured at an ordinal level.
Interval Level of Measurement
Measuring a variable on a scale where the distance between numbers is interpretable. For instance, temperature in Fahrenheit or Celsius is measured on an interval level.
Ratio Level of Measurement
Measuring a variable on a scale where the distance between numbers is interpretable and there is an absolute zero value. For example, weight is a ratio measurement.
True Score Theory
A theory that maintains that an observed score is the sum of two components: the true ability (or true level) of the respondent and random error.
Random Error
A component of the value of a measure that varies entirely by chance. Random error adds noise to a measure and obscures the true value.
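A minimal simulation (assuming normally distributed error with mean zero, a common simplification) shows the key consequence of true score theory: averaging repeated measurements cancels random error and recovers the true score.

```python
import random

random.seed(42)  # reproducible sketch

true_score = 100.0
# Observed score = true score + random error (true score theory).
observed = [true_score + random.gauss(0, 5) for _ in range(10_000)]

# Random error has mean zero, so the average converges on the true score.
estimate = sum(observed) / len(observed)
print(round(estimate, 2))  # close to 100
```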
Systematic error
A component of an observed score that consistently affects the responses in the distribution.
Triangulate
Combining multiple independent measures to arrive at a more accurate estimate of a variable.
Inter-rater or inter-observer reliability
The degree of agreement or correlation between the ratings or codings of two independent raters or observers of the same phenomenon.
Test-retest reliability
The correlation between scores on the same test or measure at two successive time points.
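In practice this is simply a Pearson correlation between the two administrations. A sketch with hypothetical scores for five respondents:

```python
def pearson(x, y):
    # Plain Pearson correlation, no external libraries.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [12, 15, 11, 18, 14]  # scores at the first administration
time2 = [13, 16, 10, 19, 15]  # same respondents, second administration
r = pearson(time1, time2)
print(round(r, 3))  # close to 1 -> high test-retest reliability
```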
Parallel-forms reliability
The correlation between two versions of the same test or measure that were constructed in the same way, usually by randomly selecting items from a common test question pool.
Internal consistency reliability
A correlation that assesses the degree to which items on the same multi-item instrument are interrelated. The most common forms of internal consistency reliability are the average inter-item correlation, the average item-total correlation, the split-half correlation, and Cronbach's Alpha.
Cohen’s Kappa
A statistical estimate of inter-rater agreement or reliability that is more robust than percent agreement because it adjusts for the probability that some agreement is due to random chance.
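A sketch with made-up codes from two raters. It shows exactly how kappa discounts chance: here the raw percent agreement is 0.75, but kappa is only 0.5 because two raters splitting "yes"/"no" evenly would agree half the time by chance alone.

```python
from collections import Counter

# Made-up categorical codes from two independent raters.
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
n = len(rater_a)

# Observed proportion of agreement (plain percent agreement).
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Agreement expected by chance, from each rater's marginal proportions.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
categories = set(rater_a) | set(rater_b)
p_expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)

kappa = (p_observed - p_expected) / (1 - p_expected)
print(p_observed, kappa)  # 0.75 0.5
```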
Average inter-item correlation
An estimate of internal consistency reliability that uses the average of the correlations of all pairs of items.
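A sketch with made-up responses (three items, four respondents), using a hand-rolled Pearson correlation over every pair of items:

```python
from itertools import combinations

def pearson(x, y):
    # Plain Pearson correlation, no external libraries.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

items = [            # each row: one item's scores across four respondents
    [4, 5, 3, 2],
    [3, 5, 4, 2],
    [5, 4, 3, 1],
]
pair_rs = [pearson(a, b) for a, b in combinations(items, 2)]
avg_inter_item = sum(pair_rs) / len(pair_rs)
print(round(avg_inter_item, 3))
```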
Average item-total correlation
An estimate of internal consistency reliability where you first create a total score across all items and then compute the correlation of each item with the total. The average item-total correlation is the average of those individual item-total correlations.
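The same idea in code, with made-up data (three items, four respondents); here each item is correlated with a total that includes the item itself, as the definition describes:

```python
def pearson(x, y):
    # Plain Pearson correlation, no external libraries.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

items = [            # each row: one item's scores across four respondents
    [4, 5, 3, 2],
    [3, 5, 4, 2],
    [5, 4, 3, 1],
]
# Total score per respondent across all items.
totals = [sum(col) for col in zip(*items)]
item_total_rs = [pearson(item, totals) for item in items]
avg_item_total = sum(item_total_rs) / len(item_total_rs)
print(round(avg_item_total, 3))
```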
Split-half reliability
An estimate of internal consistency reliability that uses the correlation between the total score of two randomly selected halves of the same multi-item test or measure.
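A sketch with made-up data (five respondents, six items). For simplicity the halves here are the odd- and even-numbered items rather than a random split:

```python
def pearson(x, y):
    # Plain Pearson correlation, no external libraries.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

data = [                 # each row: one respondent's answers to six items
    [4, 5, 3, 4, 5, 4],
    [2, 3, 2, 3, 2, 3],
    [5, 5, 4, 5, 4, 5],
    [3, 2, 3, 2, 3, 2],
    [4, 4, 4, 3, 4, 4],
]
half1 = [sum(row[0::2]) for row in data]  # total on items 1, 3, 5
half2 = [sum(row[1::2]) for row in data]  # total on items 2, 4, 6
r_split = pearson(half1, half2)
print(round(r_split, 3))
```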
Cronbach’s Alpha
One specific method of estimating the internal consistency reliability of a measure. Although not calculated in this manner, Cronbach's Alpha can be thought of as analogous to the average of all possible split-half correlations.
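Cronbach's Alpha does have a simple closed form, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores). A sketch with made-up data (five respondents, four items):

```python
def variance(xs):
    # Sample variance (n - 1 denominator).
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

data = [            # each row: one respondent's answers to four items
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 5, 4],
]
k = len(data[0])
item_vars = [variance([row[i] for row in data]) for i in range(k)]
total_var = variance([sum(row) for row in data])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))
```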
Translation validity
A type of construct validity related to how well you translated the idea of your measure into its operationalization.
Criterion-related validity
The validation of a measure based on its relationship to another independent measure, as predicted by your theory of how the measures should behave.
Face validity
A check that, "on its face," the operationalization seems like a good translation of the construct.
Content validity
A check of the operationalization against the relevant content domain for the construct.
Concurrent validity
An operationalization’s ability to distinguish between groups that it should theoretically be able to distinguish between.
Convergent validity
The degree to which the operationalization is similar to (converges on) other operationalizations to which it should be theoretically similar.