Interpreting Measurements Flashcards

1
Q

Validity

A

Did the test measure what it says it measures?

2
Q

Reliability

A

How confident can I be in the measurement?

A value that quantifies consistency of a tool

Necessary, but not sufficient, for a measure to be valid

3
Q

Measurement scales

A

Categorical scales:
Nominal (names)
Ordinal (has order but not consistent intervals)

Continuous scales
Interval (has order and equal intervals)
Ratio (has order, interval, true 0)

4
Q

Nominal and Ordinal Scales

A

Nominal scales:
Categorical
No order

Ordinal scales:
Categorical
Are ordered

5
Q

Continuous Scales

A

Interval scales:
Are continuous
Have equal intervals
Allow for mathematical operations

Ratio scales:
Are continuous
Have equal intervals
Zero means absence of the measured quantity (a true zero)

6
Q

Reporting Options by Scale: Nominal and Ordinal

A

Reporting for a group (“descriptive statistics”)

Nominal: Frequency, tallies, counts, percentage, and mode
# in each group, % of total in each group, most frequent, pie chart

Ordinal: (All of the above) + central tendencies (mode, median), range (including interquartile range), and box-and-whiskers plot

7
Q

Reporting Options by Scale: Interval and Ratio

A

Reporting for a group (“descriptive statistics”)

Interval or ratio: Range (including interquartile range), Central tendencies (mode, median, mean), Variability (Standard Deviation), Box and whiskers plot
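As a quick sketch, the group-level statistics listed above can be computed with Python's standard `statistics` module. The ROM values below are hypothetical interval/ratio data for illustration:

```python
import statistics

# Hypothetical ratio-scale data: knee flexion ROM in degrees for 8 patients
rom = [110, 115, 118, 120, 120, 125, 130, 135]

# Central tendencies
print("mean:", statistics.mean(rom))      # sum / n
print("median:", statistics.median(rom))  # middle value(s)
print("mode:", statistics.mode(rom))      # most frequent value

# Variability
print("SD:", round(statistics.stdev(rom), 2))  # sample standard deviation
print("range:", max(rom) - min(rom))

# Interquartile range from the quartile cut points
q1, q2, q3 = statistics.quantiles(rom, n=4)
print("IQR:", q3 - q1)
```

Note that mean and standard deviation only make sense for interval/ratio data; for ordinal data you would stop at the median, mode, and (interquartile) range, as the previous card describes.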

8
Q

Reliability measurements

A

Test–retest reliability: Consistency of the same measure repeated over time

Inter-rater reliability: Consistency between two different raters

Intra-rater reliability: Consistency within the same rater

ALL measurements contain error!

Avoid saying a test is "reliable" or "unreliable"; report the reliability coefficient instead

9
Q

Intraclass correlation coefficient (ICC)

A

Used for continuous data

(Variability between subjects – Variability within subjects) / (Variability between subjects)

1.0 is perfect reliability
> 0.9 is considered excellent (often expected for clinical measures)
0.75–0.9 is considered good
0.5–0.75 is considered moderate

Doesn’t give the error expected around a specific measurement
Doesn’t give a way to interpret how much change needs to occur to be beyond error of the tool
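The card's formula can be sketched directly in code. The variance values below are hypothetical estimates from a test–retest study, not real data:

```python
def icc_from_variances(var_between: float, var_within: float) -> float:
    """ICC per the card's formula:
    (variability between subjects - variability within subjects)
    / (variability between subjects)."""
    return (var_between - var_within) / var_between

# Hypothetical variance components: subjects differ a lot (between),
# repeated measurements on the same subject differ little (within)
icc = icc_from_variances(var_between=25.0, var_within=2.5)
print(round(icc, 2))  # 0.9 -> "excellent" by the thresholds above
```

Intuitively, reliability is high when differences between subjects dwarf the noise within repeated measurements of the same subject; if within-subject variability equals between-subject variability, the ICC drops to 0.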

10
Q

SEM to CiM

A

Standard error of measure (SEM): estimate of the average variability expected around a measurement

Use SEM to determine the 95% CI around a measurement (confidence in measure, or CiM)

SEM and CiM quantify the potential error (variability) around a measurement taken; SEM is also used to calculate the MDC
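A common formula for SEM, not stated on the card and so treated here as an assumption, is SEM = SD × √(1 − ICC); the 95% CiM is then the measurement ± 1.96 × SEM. A minimal sketch with hypothetical numbers:

```python
import math

def sem(sd: float, icc: float) -> float:
    # Commonly used formula: SEM = SD * sqrt(1 - ICC)
    # (an assumption here; the card only defines SEM conceptually)
    return sd * math.sqrt(1 - icc)

def cim_95(measurement: float, sem_value: float) -> tuple[float, float]:
    # 95% confidence in measure (CiM): measurement +/- 1.96 * SEM
    half_width = 1.96 * sem_value
    return (measurement - half_width, measurement + half_width)

# Hypothetical sample: SD = 5 degrees, ICC = 0.91
s = sem(sd=5.0, icc=0.91)
lo, hi = cim_95(100.0, s)
print(round(s, 2), round(lo, 2), round(hi, 2))  # 1.5 97.06 102.94
```

So a single reading of 100 degrees would, with 95% confidence, reflect a "true" value somewhere between about 97 and 103 degrees.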

11
Q

MDC

A

Minimum amount of change required to exceed measurement error

MDC takes into consideration the error associated with two measurements (based on ICC and SD for a sample)

Determines what change would be beyond the error associated with taking two measurements.
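A widely used form, consistent with the card's point that MDC accounts for the error of two measurements, is MDC95 = 1.96 × √2 × SEM (the √2 reflects the two measurement occasions). The SEM value below is hypothetical:

```python
import math

def mdc_95(sem_value: float) -> float:
    # MDC95 = 1.96 * sqrt(2) * SEM: the sqrt(2) accounts for error
    # coming from BOTH the baseline and follow-up measurements
    return 1.96 * math.sqrt(2) * sem_value

# Hypothetical SEM of 1.5 degrees
print(round(mdc_95(1.5), 2))  # 4.16
```

Under this assumption, an observed change smaller than about 4.2 degrees could be nothing more than measurement error, while a larger change likely reflects real change in the patient.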

12
Q

MCID

A

Minimal clinically important difference

Smallest change that would be important to the patient

Relationship between measurement and function

Established for a patient group/population
