Outcome Measures Flashcards

1
Q

Why do PTs need to use clinical measures?

A

Track pt progress over time
Justify additional PT visits
Screen for possible medical or psychosocial problems
Discriminate among impairments
Predict future events
Show the value of what we do as PTs
Measure change in research

2
Q

How do we select outcome measures?

A

Meets the needs of your patient
Must be valid
Must be reliable
Must be responsive to change

Fits the three psychometric properties: reliability, validity, and clinical meaningfulness

3
Q

Patient-reported outcome measures PROS

A

Efficient; describes the patient's perspective of the problem

4
Q

Patient-reported outcome measures CONS

A

Rely on the pt's self-assessment ability and accurate recall of past events

5
Q

Patient-reported outcome measures

A

questionnaires that the patient answers on their own about the impairment

6
Q

Performance-based outcome measures PROS

A

With high reliability, these can be a very accurate way to document change

7
Q

Performance-based outcome measures CONS

A

Require equipment and time to administer

8
Q

Performance-based outcome measures

A

An actual activity that the pt completes, such as the 6-Minute Walk Test

9
Q

Content Validity

A

The outcome measure includes all the characteristics that it purports to measure
Established by an expert panel

10
Q

Criterion Validity

A

establishes the validity of an outcome measure by comparing it to a more established gold standard measure

can be divided into concurrent and predictive

11
Q

Construct Validity

A

The ability of a measure to assess an abstract characteristic or concept

Based on a theoretical model; often established by comparing various populations when creating a new outcome measure

12
Q

Sources of measurement error

A

Instrument
Patient
Clinician
Environment

13
Q

Measurement error

A

There is error in every measurement that we take
We try to eliminate as much error as possible
Error affects reliability
Instrument error is usually the most common source

14
Q

Relative Reliability

A

The degree to which individual measurements maintain their relative position (consistency) over repeated measurements

Types: inter-rater, intra-rater, test-retest

15
Q

Strength of correlation

A

Measured by the intraclass correlation coefficient (ICC) or kappa (κ)

Values range from 0 to 1; a higher value indicates greater reliability.

Values below 0.50 indicate poor reliability. Aim for 0.90 or above for use in the clinic.

16
Q

Standardization

A

Ensuring that administration and scoring of a test are done the same way every time: consistent within the clinic, consistent with your own prior administrations, and consistent with the measure's published instructions

17
Q

Internal Consistency

A

The extent to which multiple items within an outcome measure reflect the same construct

Assessed using Cronbach's alpha, which should fall between 0.70 and 0.90

Score below 0.70 = suggests the items are not measuring the same construct

Score above 0.90 = suggests redundancy (repetition) between items
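For reference, Cronbach's alpha is commonly computed as α = (k / (k − 1)) × (1 − Σ item variances / variance of the total score), where k is the number of items.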

18
Q

Absolute reliability

A

The degree to which scores vary across repeated measurements

Measured using the standard error of measurement (SEM)

19
Q

Standard Error of Measurement

A

An estimate of the extent to which observed scores vary around the true score.

SEM = SD × √(1 − r), where SD is the standard deviation of the scores and r is the reliability coefficient
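As an illustration with made-up numbers: if a measure has SD = 5 points and test-retest reliability r = 0.90, then SEM = 5 × √(0.10) ≈ 1.6 points.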

20
Q

Minimal detectable change

A

The minimum amount of change required on an outcome measure to exceed anticipated measurement error and variability. Your patient has to change by MORE than this value for the change to be meaningful/not due to error or chance.

Must be determined for different populations
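A commonly used formulation is MDC95 = 1.96 × √2 × SEM. Continuing the made-up example above (SEM ≈ 1.6 points), MDC95 ≈ 4.4 points, so a patient would need to change by more than about 4.4 points before the change can be attributed to more than measurement error. A minimal sketch of both calculations, assuming hypothetical SD and reliability values:

```python
import math

def sem(sd: float, reliability: float) -> float:
    # Standard error of measurement: SEM = SD * sqrt(1 - r)
    return sd * math.sqrt(1 - reliability)

def mdc95(sem_value: float) -> float:
    # Minimal detectable change at 95% confidence: MDC95 = 1.96 * sqrt(2) * SEM
    return 1.96 * math.sqrt(2) * sem_value

# Hypothetical values: SD = 5 points, test-retest reliability r = 0.90
s = sem(5.0, 0.90)
print(round(s, 2), round(mdc95(s), 2))  # ~1.58 and ~4.38 points
```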

21
Q

Measures of responsiveness

A

Change over time = responsiveness

MDC
MCID
Floor/ceiling effect

22
Q

Minimal Clinically Important Difference (MCID)

A

The smallest change on an outcome measure that patients perceive as important or beneficial.

Specific to a particular patient population; should consider diagnosis, age, and severity of condition

Should be larger than the MDC to be valuable
MCID is helpful when interpreting study results

23
Q

Floor and Ceiling Effects

A

A lack of sufficient range in a measure to accurately characterize a group of patients (scores cluster at the lowest or highest possible values)

Floor effects can happen early on in treatment

24
Q

How do you appraise outcome measure studies?

A

determine the applicability
determine the quality
interpret the results
summarize the clinical bottom line

25
Q

How do you apply outcome measures to clinical decisions?

A

to write goals
to track pt progress
to make decisions about plan of care
to critically appraise an intervention study that you want to apply to clinical practice
to screen for problems

26
Q

Concurrent Validity

A

established when researchers demonstrate that an outcome measure has a high correlation with a criterion measure taken at the same point in time

27
Q

Predictive validity

A

established when researchers demonstrate that an outcome measure has a high correlation with a criterion measure taken in the future

28
Q

Validity

A

outcome measure’s ability to measure the characteristic or feature that it is intended to measure

29
Q

Reliability

A

outcome measure’s consistency in score production

30
Q

Test-retest reliability

A

extent to which an outcome measure produces the same result when repeatedly applied to a patient who has not experienced change in the characteristic being measured

31
Q

Intra-rater reliability

A

The consistency with which an outcome measure produces the same score when used by the same PT on the same pt

Establishes the variability of a single rater when the pt has not changed

Specifically addresses the skill of the rater conducting the test

32
Q

How do you apply outcome measures to clinical decisions?

A

writing goals
track patient progress
make clinical decisions about POC
critically appraise an intervention study
screen for a problem

33
Q

Learning health system

A

system in which routinely collected information is used for continuous improvement and innovation

health systems become learning systems when they can continuously and routinely study/improve themselves

34
Q

What is the goal of a learning health system?

A

To identify and promote the best care practices at the lowest cost: improved patient experience, better health outcomes, and improved staff experience

Choosing the correct outcome measures allows collection of helpful and effective patient data

35
Q

How can PTs play a role in learning health system processes?

A

-Select the right outcome measures
-Connect practice to research
-Apply research concepts in practice (a new role for the PT)
-Use common data models (organizing data in a set way)
-Improve clinical documentation practices
-Strengthen the connection between those who design EMRs and those who use them

36
Q

health informatics

A

the practice of acquiring, studying and managing health data and applying medical concepts in conjunction with health information technology systems to help clinicians provide better healthcare