Research Objectives 2 & QUEST Flashcards
Can theoretically have any value along a continuum within a defined range
Ex: wt in lbs
Continuous variables
Can only be described in whole integer units
Ex: HR in BPM
Discrete variables
Can only take on two values
Ex: yes or no on a survey
Dichotomous variable
What is the challenge of measuring constructs?
Constructs are abstract variables, so measurement is subjective: the construct is measured according to expectations of how a person who possesses a specified trait would behave, look, or feel in certain situations.
categories/classifications (Ex: blood type, gender, dx)
Nominal
numbers in rank order, inconsistent/unknown intervals. Based on greater than, less than (Ex: MMT, function, pain)
Ordinal
numbers have rank order and equal intervals but no true zero; can be added or subtracted, but cannot be used to interpret actual quantities (Ex: Fahrenheit vs Celsius, shoe size)
Interval
numbers represent units w/ equal intervals measured from a true zero (Ex: height, weight, age)
Ratio
What is the relevance of identifying measurement scales for statistical analysis?
The scale determines which mathematical operations are permissible
and which interpretations of the results are meaningful
Statistical procedures requiring mathematical manipulation of the data and therefore interval or ratio scales; based on parameters such as the mean and standard deviation
Parametric tests
Statistical procedures that do not make the same distributional assumptions and are designed to be used w/ ordinal and nominal data
Nonparametric tests
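A minimal sketch (with invented scores) contrasting the two approaches on the same two-group comparison, using SciPy:

```python
# Hypothetical data: change scores for two groups. The t-test assumes
# interval/ratio data from approximately normal distributions; the
# Mann-Whitney U test is rank-based and suitable for ordinal data.
from scipy import stats

group_a = [12.1, 13.4, 11.8, 14.0, 12.9, 13.3]
group_b = [10.2, 11.0, 10.8, 11.5, 10.9, 11.2]

t_stat, t_p = stats.ttest_ind(group_a, group_b)            # parametric
u_stat, u_p = stats.mannwhitneyu(group_a, group_b,
                                 alternative="two-sided")  # nonparametric

print(f"t-test:       t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.4f}")
```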
Two important forms of reliability in clinical measurement
Relative and absolute reliability
Reflects true variance as a proportion of total variance in a set of scores.
Intraclass correlation coefficients (ICC) and kappa coefficients are commonly used
Relative reliability
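As an illustration of one such index, Cohen's kappa for two raters assigning nominal categories, computed here with scikit-learn (the ratings are invented):

```python
# Hypothetical ratings: two raters classify ten subjects into nominal
# categories; kappa expresses agreement corrected for chance agreement.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["A", "A", "B", "B", "A", "C", "B", "A", "C", "B"]
rater_2 = ["A", "A", "B", "A", "A", "C", "B", "A", "C", "C"]

print(f"Cohen's kappa = {cohen_kappa_score(rater_1, rater_2):.2f}")
# 1.0 = perfect agreement; 0 = agreement no better than chance
```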
Indicates how much of a measured value, expressed in the original units, is likely due to error
Most commonly expressed as the standard error of measurement (SEM)
Absolute reliability
any observed score involves a true score (a fixed value) and an unknown error component (which may be small or large)
Classical measurement theory
true score ± error component equals…
Observed score
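In symbols (a standard formulation of classical test theory, consistent with the "true variance as a proportion of total variance" definition above):

```latex
X = T + E, \qquad \text{reliability} = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}
```

where X is the observed score, T the fixed true score, and E the error component.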
matter of chance, possibly arising from factors such as examiner or subject inattention, imprecise instruments, or environmental changes affecting instrument performance
Random errors
predictable errors of measurement that occur in one direction, consistently overestimating or underestimating the true score.
Because they are consistent, they are not a threat to reliability; they threaten only the validity of a measure.
Systematic errors
Typical sources of measurement error
The person taking the measurements — the raters
The measuring instrument
Variability/consistency in the characteristic being measured
What is the effect of regression toward the mean in repeated measurement?
can interfere when researchers interpret change scores or try to extrapolate results observed in a small sample to a larger population of interest.
A statistical phenomenon that occurs when extreme scores are used in the calculation of measured change: extreme scores on an initial test are expected to move closer to (regress toward) the group average (mean) on a second test.
Regression towards the mean (RTM)
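A small simulation (invented parameters) makes the effect concrete: true scores are held fixed, each test adds independent random error, and the group selected for extreme first-test scores averages closer to the population mean on retest:

```python
# Regression toward the mean: subjects selected for extreme scores on
# test 1 average closer to the population mean (50) on test 2, even
# though their true scores never changed.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true = rng.normal(50, 10, n)         # stable true scores
test1 = true + rng.normal(0, 5, n)   # observed score = true + random error
test2 = true + rng.normal(0, 5, n)   # independent error on the retest

extreme = test1 > np.percentile(test1, 95)   # top 5% on the first test
print(f"Test 1 mean of extreme group: {test1[extreme].mean():.1f}")  # far above 50
print(f"Test 2 mean of same group:    {test2[extreme].mean():.1f}")  # regressed toward 50
```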
Discuss how concepts of agreement and correlation relate to reliability
Correlation reflects whether two sets of scores vary together; agreement reflects whether raters assign the same values. Scores can be highly correlated yet systematically different, so a measure demonstrating both strong correlation and strong agreement is more reliable
determines the ability of an instrument to measure subject performance consistently
Test-retest
the time interval between tests must be considered: long enough to avoid fatigue, learning, or memory effects, but short enough that true change has not occurred
Test-retest intervals
occurs when practice or learning during the initial trial alters performance on subsequent trials
Carryover
when the test itself is responsible for observed changes in a measured variable
Testing effects
Training and standardization may be necessary for rater(s); the instrument and the response variable are assumed to be stable, so that any differences between scores on repeated tests can be attributed solely to rater error
Rater reliability
stability of data recorded by one tester across two or more trials
Intrarater
two or more raters who measure the same subject
Interrater
also called equivalent or parallel forms; assesses the differences between scores on two comparable versions of a test to determine whether they agree; used as an alternative to test-retest reliability to minimize the threat posed when subjects recall their responses
Alternate forms of reliability
Relevant to a tool’s application
Reliability exists in a context
Exists to some extent in any instrument
Reliability is not all-or-none
How is reliability related to the concept of minimal detectable change?
The greater the reliability, the smaller the MDC
MDC is based on
Standard error of measurement
the amount of change in a score that goes beyond measurement error
Minimal detectable change (MDC)
The most commonly used reliability index, it provides a range of scores within which the true score for a given test is likely to lie.
SEM
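A worked example tying the SEM to the MDC (hypothetical numbers; the standard formulas SEM = SD·√(1−r) and MDC95 = 1.96·√2·SEM are assumed):

```python
# Hypothetical values: sample SD, a test-retest reliability coefficient
# (e.g., an ICC), and one patient's observed score.
import math

sd = 8.0            # standard deviation of scores in the sample (assumed)
reliability = 0.90  # test-retest reliability coefficient (assumed)
observed = 42.0     # a single observed score (assumed)

sem = sd * math.sqrt(1 - reliability)  # standard error of measurement
low, high = observed - 1.96 * sem, observed + 1.96 * sem
mdc95 = 1.96 * math.sqrt(2) * sem      # minimal detectable change (95%)

print(f"SEM = {sem:.2f}")
print(f"95% range for the true score: {low:.1f} to {high:.1f}")
print(f"MDC95 = {mdc95:.2f}")  # change must exceed this to go beyond error
```

Raising the reliability coefficient toward 1 shrinks the SEM and hence the MDC, which is the relationship stated above.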
relates to the confidence we have that our measurement tools are giving us accurate information about a relevant construct so that we can apply results in a meaningful way
Used to measure progress toward goals and outcomes
Validity
A valid measure needs to be capable of…
discriminating among individuals w/ and w/o certain traits, dx, or conditions
evaluating magnitude/quality of variable
making accurate predictions of a pt's future status
Implies that an instrument appears to test what it is intended to test
Judgment by the users of a test after the test is developed
Face validity
establishing that the multiple items making up a scale adequately sample the universe of content that defines the construct being measured.
The items must adequately represent the full scope of the construct being studied.
The number of items that address each component should reflect the relative importance of that component.
The test should not contain irrelevant items.
Content validity
Comparison of the results of a test to an external criterion
An index test is compared against a gold (or reference) standard that serves as the criterion
Criterion-related validity
Two types of criterion-related validity
Concurrent and predictive validity
test correlates w/ a reference standard administered at the same time
Concurrent validity
test correlates w/ a reference standard measured at a future point in time, indicating the test's ability to predict future status
Predictive validity
Reflects the ability of an instrument to measure an abstract construct and the degree to which the instrument captures the theoretical dimensions of that construct.
Construct validity
Methods of construct validity
Known-groups method, Convergence and divergence, Factor analysis
extent to which a test correlates w/ other tests of closely related constructs
Convergent validity
extent to which a test is uncorrelated w/ tests of distinct or contrasting constructs
Discriminant validity
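A toy illustration (hypothetical scores): a new test should correlate strongly with a measure of a closely related construct and weakly with a measure of a contrasting one:

```python
# Hypothetical scores: a new balance test, an established balance measure
# (related construct), and a depression score (contrasting construct).
import numpy as np

new_test    = np.array([44, 50, 39, 55, 47, 41, 52, 48])
related     = np.array([42, 51, 40, 54, 45, 43, 53, 47])
contrasting = np.array([ 9, 12,  7, 11,  8, 13,  9, 10])

print(f"Convergent r   = {np.corrcoef(new_test, related)[0, 1]:.2f}")      # expect high
print(f"Discriminant r = {np.corrcoef(new_test, contrasting)[0, 1]:.2f}")  # expect low
```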
Change scores are used to:
Demonstrate effectiveness of an intervention
Track the course of a disorder over time
Provide a context for clinical decision making
What is the concern affecting validity of measuring change?
whether an instrument can reflect changes at the extremes of a scale (floor and ceiling effects)
Floor effect w/ change scores
the inability to detect a difference in scores when a participant's score is already at the low end of the scale
The smallest difference that signifies an important difference in a patient’s condition
Minimal clinically important difference (MCID)
standardized assessment designed to compare and rank individuals within a defined population
Norm-referenced test
interpreted according to a fixed standard that represents an acceptable level of performance
Criterion-referenced test
The roles of surveys in clinical research?
elicits quantitative or qualitative responses & can be used for descriptive purposes or to generate data for testing hypotheses
The two basic structures of survey instruments
Questionnaires and Interviews
standardized survey, usually self-administered, that asks individuals to respond to a series of questions.
Questionnaires
the researcher asks respondents specific questions and records the answers.
structured, semi-structured, unstructured
Interviews
Process of designing a survey
Question
Review literature
Questions and hypotheses
Content development
Using existing instruments
Expert review of draft questions
Pilot testing
Revisions
Two types of survey questions
Open ended and closed ended
ask respondents to answer in their own words.
useful in identifying feelings, opinions, and biases
Open-ended question
ask respondents to select an answer from among several fixed choices.
typical formats: multiple choice (two choices, 3-5 options, or check all that apply), checklists, measurement scales, visual analog scales
Closed-ended questions
identifies seven core quality indicators applicable to services provided by all occupational therapists, regardless of geographic location, practice setting, or population served.
QUEST
7 QUEST core indicators
Availability of competent occupational therapists
Long term supply of resources
Ability to access service
Optimal use of resources
Success in obtaining occupational therapy goals
Satisfaction throughout service delivery
Incidents resulting in harm
To be used for a particular occupational therapy service, core indicators must be defined to be SMART:
Specific
Measurable
Agreed upon
Relevant
Timely
A two-step process is used for each core indicator:
1. Determine quality expectations for the service in relation to the areas measured by the core indicator. Consider the perspectives of others, such as people receiving services, referral sources, and funding agencies, when identifying expectations. Sample questions to consider are listed for each core indicator.
2. Consider the quality measurement question and sample SMART indicators provided for the core indicator. Define the core indicator to measure performance of the service in relation to the quality expectations using SMART criteria. Outline the calculation used to determine the indicator result, define the terms used in the indicator, and identify how data will be collected. More than one SMART indicator may be defined for each core indicator.
What does QUEST stand for?
Quality Evaluation Strategy Tool
Provider of QUEST
WFOT (World Federation of Occupational Therapists)