Module 6-Slides Flashcards
Measurement
Process of assigning NUMERALS to variables to represent QUANTITIES of characteristics according to certain rules
Construct
An ABSTRACT variable that is not observable and is defined by the measurement used to assess it
Considered a latent trait because it reflects a property within a person and is not externally observable
~intelligence, health, pain, mobility, and depression
Purpose of Measurement
A way for scientists and clinicians to understand, EVALUATE, and differentiate characteristics of people, objects, and systems
Allows us to communicate in OBJECTIVE TERMS, giving a common sense of “how much” or “how little” w/out ambiguous interpretation
Levels of Measurement
Nominal
Ordinal
Interval
Ratio
Ratio
Distance, age, time, decibels, weight / Numbers represent units with equal intervals, measured from true zero
*The highest level of measurement with an absolute zero point
Interval
Calendar years, Celsius, Fahrenheit / Numbers have equal intervals, but no true zero
*Possesses rank order and has known, equal intervals between consecutive values, but no true zero
Ordinal
Manual muscle test, function, pain assessment scale / Numbers indicate rank order
*A rank-ordered measure where intervals between values are unknown and likely unequal
Nominal
Gender, blood type, diagnosis, ethnicity / Numerals are category labels
*Classifies objects or people into categories with no quantitative order
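The level determines which summary statistics are meaningful. As a rough illustration (not from the slides), a minimal Python sketch with hypothetical variables and values:
```python
# Minimal sketch: each level of measurement permits different summary statistics.
# Variables and values are hypothetical examples.
from statistics import mode, median, mean

blood_type = ["A", "B", "A", "O"]       # nominal  -> mode (counts only)
mmt_grade  = [3, 4, 4, 5]               # ordinal  -> median (rank order, unequal intervals)
temp_f     = [97.9, 98.6, 99.1, 100.4]  # interval -> mean (no true zero, so ratios are meaningless)
weight_kg  = [60.0, 72.5, 81.0, 90.0]   # ratio    -> mean, and ratios are meaningful

print(mode(blood_type))             # most frequent category
print(median(mmt_grade))            # middle rank
print(mean(temp_f))                 # arithmetic mean
print(weight_kg[3] / weight_kg[0])  # "1.5x heavier" only makes sense at ratio level
```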
T/F Measurements cannot be taken at different LEVELS or rated using various SCALES.
False; Measurements CAN be taken at different LEVELS or rated using various SCALES
Example: pain measurement
yes or no: nominal scale
from 0-10: ordinal scale
Why is it important to accurately identify the “level” of measurement?
Because selection of statistical tests is based on certain assumptions about the data, including but not limited to the level of measurement
Parametric tests
Arithmetic manipulations requiring Interval- or Ratio-level data
Nonparametric tests
Do not make the same assumptions; are used with Ordinal or Nominal data
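To illustrate how the level of measurement drives test selection, a minimal sketch with hypothetical data, assuming SciPy is available; the two tests are standard examples, not ones named on the slides:
```python
# Minimal sketch with hypothetical data: the level of measurement drives test choice.
from scipy import stats

# Interval/ratio outcome (e.g., gait speed in m/s) -> parametric t-test
group_a = [1.10, 1.25, 1.05, 1.30, 1.18]
group_b = [0.95, 1.02, 0.88, 1.10, 0.99]
t, p_param = stats.ttest_ind(group_a, group_b)

# Ordinal outcome (e.g., 0-10 pain ratings) -> nonparametric Mann-Whitney U
pain_a = [2, 3, 4, 3, 5]
pain_b = [5, 6, 4, 7, 6]
u, p_nonparam = stats.mannwhitneyu(pain_a, pain_b)

print(f"t-test p = {p_param:.3f}, Mann-Whitney p = {p_nonparam:.3f}")
```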
Reliability
The extent to which “a measured value can be obtained CONSISTENTLY during REPEATED assessment of unchanging behavior”
What are the 2 basic types of measurement error?
Systematic error
Random error
Systematic error
Predictable, occurring as a consistent overestimate or underestimate of a measure
Random error
Has no systematic bias and can occur in any direction or amount
Sources of measurement error
- Measuring INSTRUMENT itself: does not perform in the same way each time
- The person/individual taking the measurements (the rater): does not perform the test properly
- VARIABILITY of the characteristic being measured: the variable being measured is not consistent over time (ex: BP)
Reliability coefficient
Provides values that help estimate the degree of reliability (ranging from 0.0 to 1.0)
4 general approaches to reliability testing
- Test-retest reliability
- Rater reliability
- Alternate forms
- Internal consistency (see the Cronbach's alpha sketch after this list)
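Internal consistency is commonly quantified with Cronbach's alpha. A minimal sketch with hypothetical questionnaire data, assuming the third-party pingouin package (not mentioned on the slides):
```python
# Minimal sketch: Cronbach's alpha for internal consistency (hypothetical item scores).
import pandas as pd
import pingouin as pg

# Rows = respondents, columns = questionnaire items measuring one construct
items = pd.DataFrame({
    "item1": [3, 4, 4, 5, 2],
    "item2": [3, 5, 4, 4, 2],
    "item3": [2, 4, 5, 5, 3],
})
alpha, ci = pg.cronbach_alpha(data=items)
print(f"Cronbach's alpha = {alpha:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```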
Test-Retest Reliability
An assessment of how well an instrument will perform from one trial to another assuming that no real change in performance has occurred
Coefficient:
-ICC (intraclass correlation coefficient) for quantitative data
-Kappa coefficient for categorical data
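As an illustration, a minimal sketch of computing an ICC from hypothetical test-retest data, again assuming pingouin; the data and column names are made up:
```python
# Minimal sketch: ICC for test-retest data (hypothetical scores).
import pandas as pd
import pingouin as pg

# Long format: each subject measured on two occasions ("raters" = sessions here)
data = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4],
    "session": ["t1", "t2"] * 4,
    "score":   [10, 11, 14, 13, 8, 9, 12, 12],
})
icc = pg.intraclass_corr(data=data, targets="subject",
                         raters="session", ratings="score")
print(icc[["Type", "ICC"]])  # report the ICC form that matches the study design
```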
Inter-Rater (two or more raters) Reliability
Concerns variation between two or more raters who are measuring the same property
Coefficient: ICC or Kappa
Intra-Rater (one rater) Reliability
A measure of the stability of data recorded by one tester across two or more trials
Coefficient: ICC or Kappa
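For categorical ratings, a minimal kappa sketch with hypothetical ratings from two raters, assuming scikit-learn is available:
```python
# Minimal sketch: Cohen's kappa for inter-rater agreement on categorical ratings.
from sklearn.metrics import cohen_kappa_score

# Hypothetical: two raters classify the same 8 patients ("normal" vs "impaired")
rater_1 = ["normal", "impaired", "impaired", "normal",
           "normal", "impaired", "normal", "impaired"]
rater_2 = ["normal", "impaired", "normal", "normal",
           "normal", "impaired", "normal", "impaired"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"kappa = {kappa:.2f}")  # 1.0 = perfect agreement beyond chance, 0 = chance-level
```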
Change Scores
Reflect the difference in performance from one session to another, often a PRETEST AND POSTTEST. If measures don’t have strong reliability, change scores may primarily be a reflection of error
Reliability of measurement
A prerequisite for being able to interpret change scores
Minimal detectable change (MDC)
Amount of change in a variable that must be achieved beyond the minimal error in a measurement; a threshold above which we can be confident that a change reflects true change and not just error
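The slides don't give a formula, but a standard distribution-based version uses the standard error of measurement: SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. A minimal sketch with hypothetical numbers:
```python
# Minimal sketch: MDC at the 95% confidence level (hypothetical values).
# Standard distribution-based formulas (not taken from the slides):
#   SEM   = SD * sqrt(1 - ICC)
#   MDC95 = 1.96 * sqrt(2) * SEM
import math

sd  = 5.0    # standard deviation of baseline scores
icc = 0.90   # test-retest reliability coefficient

sem   = sd * math.sqrt(1 - icc)
mdc95 = 1.96 * math.sqrt(2) * sem
print(f"SEM = {sem:.2f}, MDC95 = {mdc95:.2f}")

# A change score must exceed MDC95 to be confidently called true change:
change = 6.0 - 1.5  # hypothetical posttest - pretest
print("true change" if change > mdc95 else "within measurement error")
```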
Minimal detectable difference (MDD)
Amount of change that goes beyond error; also called the smallest real difference, smallest detectable change, or the reliable change index
Measurement validity
Concerns the meaning or interpretation that we give to a measurement
Characterized as the extent to which a test measures what it is intended to measure
Distinctions Between Reliability and Validity
Reliability relates to consistency of a measurement
Validity relates to alignment of the measurement with a targeted construct
Measuring validity is NOT as straightforward as measuring reliability
Similarities Between Reliability and Validity
Neither should be considered all-or-none (1 or 0)
How can validity be fairly evaluated?
Only within the context of an instrument’s intended use
Reliability and Validity scores (target/bullseye diagram)
A. Scores are reliable, not valid (missing the center)
B. Scores show random error, average validity (near the center)
C. Scores are not reliable, not valid (off the center)
D. Scores are both reliable and valid (center)
T/F A reliable measure guarantees that the measure is valid
False; it does NOT guarantee it
Types of Evidence for Validity
Depending on specific conditions, several types of evidence can be used to support a tool’s use, often summarized as the 3 Cs
The 3 Cs
- Content validity
- Criterion-related validity
- Construct validity
Content validity
Establishes that the multiple items that make up a questionnaire, inventory, or scale adequately sample the full domain (the UNIVERSE of content) that defines the variable or construct being measured
Criterion-related validity
Establishes the correspondence between a Target test (to be validated) and a REFERENCE OR “GOLD” STANDARD (as the criterion) to determine that the Target test is measuring the variable of interest
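As an illustration, criterion-related validity is often summarized as a correlation between the target test and the gold standard; a minimal sketch with hypothetical paired scores, assuming SciPy:
```python
# Minimal sketch: criterion-related validity as correlation between a target test
# and a gold standard (hypothetical paired measurements).
from scipy import stats

target_test   = [12, 15, 9, 20, 17, 11]  # new, cheaper instrument
gold_standard = [10, 16, 8, 21, 18, 12]  # established reference measure

r, p = stats.pearsonr(target_test, gold_standard)
print(f"r = {r:.2f}, p = {p:.3f}")  # strong correlation supports criterion validity
```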
Construct validity
Establishes the ability of an instrument to measure the dimensions and theoretical foundation of an abstract construct
~Abstract constructs do not directly manifest as physical events; thus, inferences are made through observable behaviors, measurable performance, or patient self-report
Minimal clinically important difference (MCID)
Smallest difference that signifies an important difference in a patient’s condition
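One common way to estimate an MCID (not prescribed by the slides) is anchor-based: choose the change-score cutoff that best separates patients who report an important improvement from those who don't. A minimal sketch with hypothetical data, assuming scikit-learn:
```python
# Minimal sketch: one common anchor-based approach to estimating an MCID
# (hypothetical data; the slides do not prescribe a specific method).
import numpy as np
from sklearn.metrics import roc_curve

change_scores = np.array([1, 2, 8, 3, 9, 7, 2, 10, 6, 1])
# Anchor: did the patient report being "importantly improved"? (1 = yes)
improved      = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1, 0])

fpr, tpr, thresholds = roc_curve(improved, change_scores)
best = np.argmax(tpr - fpr)  # Youden index picks the best cutoff
print(f"estimated MCID = {thresholds[best]:.1f}")
```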
Methodological Research
Involves the development and testing of both reliability and validity of measuring instruments to determine their application and interpretation in a variety of clinical situations
Ways to maximize Reliability
Standardize measurement protocols
Train raters
Calibrate and improve the instrument
Take multiple measurements (see the sketch after this list)
Choose a sample with a range of scores
Pilot testing
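Why multiple measurements help: averaging dampens random error. A minimal sketch using the Spearman-Brown prophecy formula (a standard result, not from the slides):
```python
# Minimal sketch: why averaging multiple measurements boosts reliability.
# Spearman-Brown prophecy formula: r_k = k * r / (1 + (k - 1) * r)
def spearman_brown(r: float, k: int) -> float:
    """Predicted reliability of the mean of k repeated measurements."""
    return k * r / (1 + (k - 1) * r)

r_single = 0.70  # hypothetical reliability of a single trial
for k in (1, 2, 3):
    print(f"mean of {k} trial(s): r = {spearman_brown(r_single, k):.2f}")
# 0.70 -> 0.82 -> 0.88: averaging dampens random error
```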
Ways to maximize Validity
Fully understand the construct
Consider the clinical context
Consider several approaches to validation
Consider validity issues if adapting existing tools
Cross-validate outcomes