Chapter 11 - Measurement and Data Quality Flashcards
What is a measurement?
involves rules for assigning numbers to objects or people to designate the QUANTITY of an attribute
-numbers assigned according to rules
Advantages of Measurement
- removes guesswork/ambiguity in gathering and communicating info (ex. temp is measured in Fahrenheit)
- minimizes subjectivity and yields more precise information
- less vague than words (ex. 6ft 3in is more descriptive than “tall”)
Nominal Measurement
LOWEST LEVEL
–>involves using numbers simply to categorize attributes
ex. gender, blood type - the numbers don’t have quantitative meaning
Ordinal Measurement
–>ranks people based on relative standing on an attribute
ex. ADL ranking (1=completely dependent, 2=needs assist x 1, etc) - ordinal ranking cannot show that one ranking is twice as great as another
Interval Measurement
–>ranks people on an attribute and specifies the distance between them
Ex. IQ testing (the distance between 100 and 110 is the same as between 120 and 130)
-no meaningful zero
Ratio Measurement
HIGHEST LEVEL
–>has a meaningful zero and thus provides info about the absolute magnitude of the attribute
Ex. weight (100 lbs is twice as much as 50 lbs, and 0 lbs = no weight)
What is the purpose of different levels of measurement?
A variable’s level of measurement determines the mathematical operations that may be performed in a statistical analysis.
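A minimal sketch (not from the chapter, values hypothetical) of how the level of measurement limits which summary statistic is meaningful:

```python
from statistics import mode, median, mean

blood_type = ["A", "O", "O", "B", "A", "O"]    # nominal: categories only -> mode
adl_rank   = [1, 2, 2, 3, 1, 4]                # ordinal: order but not distance -> median
weight_lbs = [100, 150, 125, 200, 175]         # ratio: meaningful zero -> mean (and ratios)

print(mode(blood_type), median(adl_rank), mean(weight_lbs))
```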
Calculation for Errors of Measurement
Obtained Score = True Score +/- Error
Obtained (observed) score vs. true score
Obtained: the measurement we take from the patient
True: the score we would get if the measure were infallible (hypothetical)
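A quick sketch (not in the text) of the equation above: simulated random errors cancel out, so the mean of many obtained scores approaches the hypothetical true score:

```python
import random

random.seed(1)
true_score = 100                                 # hypothetical infallible value
obtained = [true_score + random.gauss(0, 5)      # obtained = true +/- random error
            for _ in range(1000)]
print(round(sum(obtained) / len(obtained), 1))   # mean of obtained scores ~= true score
```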
What contributes to measurement error?
- Situational contaminants - environmental factors (temp, lighting, time of day)
- Response-Set Bias - enduring characteristics of respondents
- Transitory Personal Factors - temporary states (fatigue, stress, hunger)
- Item Sampling - bias by sampling items chosen (score on 100 pt test depends on which 100 questions were asked)
Reliability
CONSISTENCY with which an instrument measures the attribute
-also includes ACCURACY; an instrument is reliable to the extent that its measures capture true scores
ex. scale measuring person at 100 lbs one minute and 5 minutes later measuring them at 150 lbs = UNRELIABLE
Reliability Assessments
Reliability assessments involve computing a reliability coefficient
– Reliability coefficients range from .00 to 1.00.
– Coefficients below .70 are considered unsatisfactory.
– Coefficients of .80 or higher are desirable.
Stability
*Aspect of reliability to assess
the degree to which similar results are obtained on separate occasions
- affected by time related influences (fatigue)
- assessed through TEST-RETEST RELIABILITY
Reliability Coefficient: quantifies an instrument’s reliability to assess objectively how small the differences are
-range from .00 to 1.00 (higher = more stable)
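The chapter doesn’t give the formula, but test-retest reliability is usually computed as a Pearson correlation between the two administrations; a minimal sketch with hypothetical scores:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

time1 = [10, 14, 9, 20, 17]    # first administration (hypothetical)
time2 = [11, 15, 9, 19, 18]    # same people, retested later
print(round(pearson_r(time1, time2), 2))   # near 1.00 = stable
```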
Internal Consistency
*Aspect of reliability to assess
the extent to which an instrument’s items measure the same trait
Coefficient Alpha (Cronbach’s Alpha): how internal consistency is evaluated
-range from .00-1.00 (higher = more internally consistent)
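A sketch of the standard Cronbach’s alpha formula (the chapter only names the coefficient; the item scores below are hypothetical):

```python
from statistics import variance

def cronbach_alpha(responses):
    """responses: one list per respondent, one score per item."""
    k = len(responses[0])                                          # number of items
    item_vars = [variance([r[i] for r in responses]) for i in range(k)]
    total_var = variance([sum(r) for r in responses])              # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# hypothetical: 5 respondents x 3 items rated 1-5
scores = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3], [1, 2, 2]]
print(round(cronbach_alpha(scores), 2))    # closer to 1.00 = more internally consistent
```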
Equivalence
*Aspect of reliability to assess
concerns the degree to which two or more independent observers/coders agree about scoring on an instrument
-high agreement = minimized measurement errors
Interrater (Interobserver) Reliability: two or more observers/coders make independent observations
–>evaluate congruency between ratings (more congruency = greater accuracy/reliability)
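One simple way to quantify congruency between raters (not spelled out in the chapter) is the proportion of observations on which they agree; the codes below are hypothetical:

```python
# codes assigned independently by two observers to the same 10 events
rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
print(agreements / len(rater_a))    # proportion of agreement; higher = more reliable
```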
What is the purpose of reliability coefficients?
Indicators of data accuracy and quality - critical in interpreting research results (especially if hypotheses are not supported)
Affected by:
- Sample variability (more homogeneous = lower coefficient)
- –>scales are designed to measure differences; less differentiation = difficult to discriminate reliably
- –>want a heterogeneous sample and a longer, multi-item scale
- Type of procedure (results may be different if procedures are not identical)
What is validity?
The degree to which an instrument measures what it is supposed to measure
*To be valid, a measuring device MUST be reliable (but a measuring device can be reliable WITHOUT being valid)
-more difficult to document than reliability
Face validity
whether an instrument looks as though it is measuring the appropriate construct
Content Validity
The degree to which an instrument has an appropriate sample of items for the construct being measured - want to capture the FULL content domain
-based on judgment (subjective)
Content Validity Index (CVI)
*assessment of content validity
a calculation that indicates the extent of expert agreement
-a CVI of .90 or higher is the standard for excellence
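A sketch of one common way a CVI is calculated (assuming a 4-point relevance scale where a rating of 3 or 4 counts as relevant; ratings below are hypothetical):

```python
# relevance ratings (1-4) from 5 experts for 4 items
ratings = [
    [4, 4, 3, 4, 4],   # item 1
    [3, 4, 4, 3, 4],   # item 2
    [2, 3, 4, 3, 3],   # item 3
    [4, 4, 4, 4, 2],   # item 4
]

item_cvi = [sum(r >= 3 for r in item) / len(item) for item in ratings]
scale_cvi = sum(item_cvi) / len(item_cvi)       # average of the item-level CVIs
print(item_cvi, round(scale_cvi, 2))
```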
Criterion Related Validity
researchers examine the relationship between scores on an instrument and an external criterion
–>an instrument is valid if its scores correspond strongly with scores on the criterion
-assures decision-makers that their decisions will be fair, appropriate, and valid
Validity Coefficient
Computed by correlating scores on the instrument with scores on the criterion
- magnitude of the coefficient = estimate of the instrument’s validity
–>.70 or greater is desirable
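Assuming the correlation used is Pearson’s r (the usual choice; the chapter only says “a mathematical formula”), a minimal sketch with hypothetical instrument and criterion scores:

```python
from statistics import correlation   # Python 3.10+

instrument = [55, 62, 47, 70, 66, 58]   # scores on the new instrument (hypothetical)
criterion  = [50, 65, 45, 72, 60, 55]   # scores on the external criterion
print(round(correlation(instrument, criterion), 2))   # .70 or greater is desirable
```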
Predictive validity
an instrument’s ability to differentiate between people’s performances on a FUTURE criterion
–>ex. high school grades compared to college GPA
Concurrent validity
an instrument’s ability to distinguish among people who differ presently on a criterion
–>ex. whether a pt could be released from a mental health hospital correlated with nurses’ behaviors at that time