Measurement and survey research: ch. 5 & 7 Flashcards
Level of Measurement
The relationship between numerical values on a measure. There are different levels of measurement (nominal, ordinal, interval, ratio) that determine how you can treat the measure when analyzing it. For instance, it makes sense to compute an average of an interval or ratio variable, but not of a nominal or ordinal one.
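A minimal Python sketch, using made-up values, of why level of measurement matters when analyzing a measure:

```python
# Hypothetical data at different levels of measurement.
jersey_numbers = [23, 7, 10, 99]     # nominal: numbers are only labels
class_ranks    = [1, 2, 3, 4]        # ordinal: order matters, spacing does not
weights_kg     = [60.5, 72.0, 81.3]  # ratio: differences and zero are meaningful

# Averaging ratio (or interval) data produces a meaningful value...
print(sum(weights_kg) / len(weights_kg))      # a sensible average weight

# ...but the "average jersey number" is a number with no interpretation.
print(sum(jersey_numbers) / len(jersey_numbers))
```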
Nominal Level of Measurement
Measuring a variable by assigning a number arbitrarily in order to name it numerically so that it might be distinguished from other objects. The jersey numbers in most sports are measured at a nominal level.
Ordinal Level of Measurement
Measuring a variable using rankings. Class rank is a variable measured at an ordinal level.
Interval Level of Measurement
Measuring a variable on a scale where the distance between numbers is interpretable. For instance, temperature in Fahrenheit or Celsius is measured on an interval level.
Ratio Level of Measurement
Measuring a variable on a scale where the distance between numbers is interpretable and there is an absolute zero value. For example, weight is a ratio measurement.
True Score Theory
A theory that maintains that an observed score is the sum of two components: the true ability (or true level) of the respondent, and random error.
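A small simulation sketch of true score theory, assuming normally distributed random error with a standard deviation of 5 (values chosen purely for illustration):

```python
import random

random.seed(0)

true_score = 50.0  # the respondent's true level (unknown in practice)

# Each observed score = true score + random error (true score theory).
observed = [true_score + random.gauss(0, 5) for _ in range(1000)]

# Random error varies by chance, so it averages out: the mean of many
# observations approaches the true score, though each one is noisy.
print(sum(observed) / len(observed))  # close to 50
```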
Random Error
A component or part of the value of a measure that varies entirely by chance. Random error adds noise to a measure and obscures the true value.
Systematic Error
A component of an observed score that consistently affects the responses in the distribution.
Triangulate
Combining multiple independent measures to get at a more accurate estimate of a variable.
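A Python sketch of the idea behind triangulation, assuming three hypothetical independent measures of the same variable, each with its own random error:

```python
import random

random.seed(1)

true_value = 100.0

# Three independent measures of the same variable, each noisy.
measure_a = true_value + random.gauss(0, 10)
measure_b = true_value + random.gauss(0, 10)
measure_c = true_value + random.gauss(0, 10)

# Combining (here, simply averaging) independent measures tends to
# cancel their independent errors, yielding a more accurate estimate.
estimate = (measure_a + measure_b + measure_c) / 3
print(measure_a, measure_b, measure_c, estimate)
```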
Inter-Rater or Inter-Observer Reliability
The degree of agreement or correlation between the ratings or codings of two independent raters or observers of the same phenomenon.
Test-retest reliability
The correlation between scores on the same test or measure at two successive time points.
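A minimal sketch of test-retest reliability as a simple correlation, using hypothetical scores for five people measured twice:

```python
import numpy as np

# Hypothetical scores for the same five people at two time points.
time1 = np.array([10, 12, 9, 15, 11])
time2 = np.array([11, 13, 9, 14, 12])

# Test-retest reliability is the correlation between the two occasions.
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 3))
```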
Parallel-forms reliability
The correlation between two versions of the same test or measure that were constructed in the same way, usually by randomly selecting items from a common test question pool.
Internal Consistency Reliability
A correlation that assesses the degree to which items on the same multi-item instrument are interrelated. The most common forms of internal consistency reliability are the average inter-item correlation, the average item-total correlation, the split-half correlation, and Cronbach's alpha.
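A short Python sketch (hypothetical 6-respondent, 4-item data) computing Cronbach's alpha directly from its definition, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score):

```python
import numpy as np

# Hypothetical responses: 6 respondents x 4 items on one instrument.
items = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
    [1, 2, 2, 1],
])

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)      # variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of the total score

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 3))
```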
Cohen’s Kappa
A statistical estimate of inter-rater agreement or reliability that is more robust than percent agreement because it adjusts for the probability that some agreement is due to random chance.
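A Python sketch of Cohen's Kappa on hypothetical codings from two raters: it computes observed agreement, chance-expected agreement from each rater's marginal proportions, and then kappa = (p_o - p_e) / (1 - p_e):

```python
from collections import Counter

# Hypothetical categorical codings by two independent raters.
rater1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
rater2 = ["yes", "no", "no", "no", "yes", "yes", "yes", "no"]

n = len(rater1)

# Observed agreement: proportion of cases where the raters agree.
p_o = sum(a == b for a, b in zip(rater1, rater2)) / n

# Expected chance agreement, from each rater's marginal proportions.
c1, c2 = Counter(rater1), Counter(rater2)
p_e = sum((c1[cat] / n) * (c2[cat] / n) for cat in set(rater1) | set(rater2))

# Kappa adjusts observed agreement for agreement expected by chance.
kappa = (p_o - p_e) / (1 - p_e)
print(round(p_o, 3), round(kappa, 3))
```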
Average inter-item correlation
An estimate of internal consistency reliability that uses the average of the correlations of all pairs of items.
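A sketch of the average inter-item correlation on the same kind of hypothetical respondents-by-items matrix: correlate the items, then average all distinct pairs:

```python
import numpy as np

# Hypothetical responses: 6 respondents x 4 items (rows x columns).
items = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
    [1, 2, 2, 1],
])

# Correlation matrix of the items (columns).
corr = np.corrcoef(items, rowvar=False)

# Average the correlations of all distinct item pairs (upper triangle,
# excluding the diagonal of 1.0 self-correlations).
pairs = corr[np.triu_indices_from(corr, k=1)]
print(round(pairs.mean(), 3))
```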
Average item-total correlation
An estimate of internal consistency reliability where you first create a total score across all items and then compute the correlation of each item with the total. The average item-total correlation is the average of those individual item-total correlations.
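A matching sketch of the average item-total correlation, following the definition above (total across all items, then each item correlated with that total), on hypothetical data:

```python
import numpy as np

# Hypothetical responses: 6 respondents x 4 items.
items = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
    [1, 2, 2, 1],
])

# Total score for each respondent across all items.
total = items.sum(axis=1)

# Correlate each item with the total, then average those correlations.
item_total = [np.corrcoef(items[:, j], total)[0, 1]
              for j in range(items.shape[1])]
print(round(float(np.mean(item_total)), 3))
```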