Outcome measures Flashcards
T/F most measurements are directly observable
False
we can measure only a correlate of the actual property
Biomarker
objective, quantifiable characteristic of a biological process
surrogate endpoint
a substitute for a clinically meaningful endpoint
Match Terms:
Reliability, Validity
Accuracy, Consistency
Reliability = Consistency
Validity = Accuracy
Correlation
the degree of association between 2 sets of data
draw scatter plot to visualize
correlation coefficient (r) to quantify it
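A minimal Python sketch of the correlation coefficient (Pearson's r) for two sets of data; the sample values are made up for illustration:

```python
import math

def pearson_r(x, y):
    # r = covariance of x and y divided by the product of their spreads
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# two sets of measurements that tend to rise together -> r near +1
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 6]
print(round(pearson_r(x, y), 3))  # -> 0.853
```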
Test-Retest reliability
consistency of repeated measurements that are separated in time
stability of a measurement
Intrarater reliability
consistency of repeated measurements by same PT at different times
Interrater reliability
Consistency of repeated measurements by multiple PTs.
Internal consistency
Consistency over multiple items or parts in a test, where each part is supposed to measure one concept
Face validity
does the content of the test appear to be suitable to its aims?
content validity
is the test fully representative of what it aims to measure?
construct validity
does the test measure the concept that it is intended to measure?
Dichotomous scale
categorical
two point scale
yes/no, T/F
5 point Likert scale
ordinal
5 point scale
always/often/sometimes/rarely/never
Visual Analog Scale
- continuous
- line with verbal anchors at either end
- put mark at point corresponding to rating
Performance Based outcome measurement
when patient is required to perform a set of functional tasks
FIM
Self reported outcome measurement
when patient is required to complete a questionnaire
Oswestry Low Back Pain Disability Index (ODI)
Error Parameters
quantify errors in measurements
not as widely reported as reliability or validity
important to determine amount of error associated with patient’s outcome measure
Standard error of measurement
amount of variability that can be attributed to measurement error
Minimal detectable change
minimal change a patient must show on scale to ensure that the observed change is real
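The two error parameters above can be sketched with their standard formulas, SEM = SD × √(1 − reliability) and MDC95 = 1.96 × √2 × SEM; the SD and reliability values below are illustrative, not from a real study:

```python
import math

def sem(sd, reliability):
    # standard error of measurement: SD * sqrt(1 - reliability)
    return sd * math.sqrt(1 - reliability)

def mdc95(sd, reliability):
    # minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM
    return 1.96 * math.sqrt(2) * sem(sd, reliability)

# e.g. a scale with SD = 10 points and test-retest reliability (ICC) = 0.91
print(round(sem(10, 0.91), 2))    # -> 3.0
print(round(mdc95(10, 0.91), 2))  # -> 8.32
```

A patient's observed change must exceed the MDC (here about 8.3 points) to be considered real rather than measurement error.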
Clinical Utility Considerations
appropriateness of test for application
precision of the test to accurately measure change
interpretability of the test to individual’s situation
acceptability of test to individual
time and cost of administering the test
Cohen’s kappa
a measure of agreement between 2 raters on a categorical variable
Pr(o)
probability of relative observed agreement
Pr(e)
probability of expected agreement just by chance among 2 examiners
Interpretation of Cohen’s Kappa
decimal between 0 and +1
+1 = perfect agreement
0 = no agreement
0–0.4 = poor
0.4–0.75 = fair to good
0.75–1 = excellent
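The kappa calculation above, κ = (Pr(o) − Pr(e)) / (1 − Pr(e)), can be sketched from a 2×2 agreement table between two examiners on a dichotomous rating; the counts are made up:

```python
def cohens_kappa(table):
    # table[i][j] = number of subjects rated category i by examiner A
    # and category j by examiner B
    total = sum(sum(row) for row in table)
    # Pr(o): observed agreement = proportion on the diagonal
    pr_o = sum(table[i][i] for i in range(len(table))) / total
    # Pr(e): chance agreement = sum over categories of the product
    # of each examiner's marginal proportions
    pr_e = sum(
        (sum(table[i]) / total) * (sum(row[i] for row in table) / total)
        for i in range(len(table))
    )
    return (pr_o - pr_e) / (1 - pr_e)

# rows = examiner A (yes, no), columns = examiner B (yes, no)
table = [[20, 5],
         [10, 15]]
print(round(cohens_kappa(table), 3))  # -> 0.4 (poor/fair boundary)
```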
Which reliability doesn’t use kappa or correlation coefficient to check?
internal consistency
it uses Cronbach’s alpha
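A minimal sketch of Cronbach's alpha, which compares the sum of per-item variances to the variance of respondents' total scores; the item scores below are invented for illustration:

```python
def variance(values):
    # sample variance (n - 1 denominator)
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / (len(values) - 1)

def cronbach_alpha(items):
    # items: one inner list of scores per test item,
    # aligned across the same respondents
    k = len(items)
    item_var = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# 3 items answered by 4 respondents; items move together,
# so internal consistency is high
items = [[3, 4, 3, 5],
         [2, 4, 3, 4],
         [3, 5, 4, 5]]
print(round(cronbach_alpha(items), 3))  # -> 0.956
```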
Confirmatory factor analysis
investigates construct validity
how well the measured variables represent the hypothesized constructs
Cluster Analysis
investigates construct validity
statistical procedure used to classify a group of subjects on the basis of a set of measured variables into a number of different groups