Research Objectives 2 & QUEST Flashcards

1
Q

Can theoretically have any value along a continuum within a defined range

Ex: wt in lbs

A

Continuous variables

2
Q

Can only be described in whole integer units

Ex: HR in BPM

A

Discrete variables

3
Q

Can only take on two values

Ex: yes or no on a survey

A

Dichotomous variable

4
Q

What is the challenge of measuring constructs?

A

Constructs are subjective and abstract; they are measured according to the expectations of how a person who possesses a specified trait would behave, look, or feel in certain situations.

5
Q

category/classifications (Ex: blood type, gender, dx)

A

Nominal

6
Q

numbers in rank order, inconsistent/unknown intervals. Based on greater than, less than (Ex: MMT, function, pain)

A

Ordinal

7
Q

numbers have rank order and equal intervals, but no true zero. Values can be added or subtracted but cannot be used to interpret actual quantities or ratios (Ex: temperature in Fahrenheit or Celsius, shoe size)

A

Interval

8
Q

numbers represent units w/ equal intervals measured from true zero (Ex: Height, wt, age)

A

Ratio

9
Q

What is the relevance of identifying measurement scales for statistical analysis?

A

The scale determines which mathematical operations are permissible and which interpretations of the results are meaningful

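The link between scale and permissible statistics can be sketched in code (Python, stdlib only; the scale-to-statistic mapping follows the cards above, and the pain-score data are invented for illustration):

```python
# Sketch: which summary statistics are meaningful at each measurement scale.
import statistics

PERMISSIBLE = {
    "nominal":  {"mode"},
    "ordinal":  {"mode", "median"},
    "interval": {"mode", "median", "mean"},
    "ratio":    {"mode", "median", "mean", "ratio comparisons"},
}

def meaningful_summary(values, scale):
    """Return only the summaries that are interpretable for this scale."""
    ops = {
        "mode": lambda v: statistics.mode(v),
        "median": lambda v: statistics.median(v),
        "mean": lambda v: statistics.mean(v),
    }
    return {name: fn(values) for name, fn in ops.items()
            if name in PERMISSIBLE[scale]}

pain_scores = [2, 3, 3, 7, 9]  # ordinal: 0-10 pain rating (made-up data)
print(meaningful_summary(pain_scores, "ordinal"))
# mode and median only; a mean of ordinal ranks is not interpretable
```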
10
Q

Statistical procedures that apply mathematical manipulations to the data and therefore require interval or ratio scales (e.g., those based on the mean, median, or mode)

A

Parametric tests

11
Q

Statistical procedures that do not make the same assumptions and are designed to be used w/ ordinal and nominal data

A

Nonparametric tests

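A minimal sketch of the nonparametric idea: a rank-order comparison that never averages the raw scores, so it is safe for ordinal data (the Mann-Whitney U statistic, computed by brute force; the MMT-style grades are invented):

```python
# Sketch: a rank-based (nonparametric) group comparison using only order,
# never distances between scores.

def mann_whitney_u(group_a, group_b):
    """U statistic for group_a: count the (a, b) pairs where a outranks b,
    counting ties as half. Only rank order matters."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical ordinal MMT grades (0-5) for two groups:
treated = [4, 5, 5, 3]
control = [2, 3, 4, 3]
print(mann_whitney_u(treated, control))  # 13.5 of a possible 16 pairs
```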
12
Q

Two important forms of reliability in clinical measurement

A

Relative and absolute reliability

13
Q

Reflects true variance as a proportion of total variance in a set of scores.

Intraclass correlation coefficients (ICC) and kappa coefficients are commonly used

A

Relative reliability

14
Q

Indicates how much of a measured value, expressed in the original units, is likely due to error

Most commonly uses standard error of measurement

A

Absolute reliability

15
Q

any observed score comprises a true score (a fixed value) and an unknown error component (which may be small or large)

A

Classical measurement theory

16
Q

true score ± error component equals…..

A

Observed score

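Classical measurement theory's observed = true ± error can be simulated in a few lines (Python, stdlib only; the true score and error spread are assumed values for illustration):

```python
# Sketch: observed score = fixed true score + chance error component.
import random
from statistics import mean

random.seed(0)
TRUE_SCORE = 120.0   # e.g., knee flexion ROM in degrees (assumed)
ERROR_SD = 3.0       # spread of random measurement error (assumed)

def observe():
    """One observed score: the true score plus a random error."""
    return TRUE_SCORE + random.gauss(0, ERROR_SD)

trials = [observe() for _ in range(1000)]
# Random errors cancel out on average, so the mean of many repeated
# trials lands close to the true score even though single trials miss it.
print(round(mean(trials), 1))
```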
17
Q

matter of chance, possibly arising from factors such as examiner or subject inattention, instrument imprecision, or unanticipated environmental fluctuation.

Can also occur through the measuring instrument itself: imprecise instruments or environmental changes affecting instrument performance contribute to random error

A

Random errors

18
Q

predictable errors of measurement. They occur in one direction, constantly overestimating or underestimating the true score.

Because they are consistent, they are not a threat to reliability; they threaten only the validity of a measure.

A

Systematic errors

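The systematic-error card can be illustrated with a tiny simulation (Python, stdlib only; the +5 degree goniometer offset and the score range are assumptions):

```python
# Sketch: a constant one-direction error stays perfectly consistent,
# so repeated measures agree (reliability intact), yet every score
# overstates the true value (validity threatened).
import random
from statistics import mean, stdev

random.seed(1)
true_scores = [random.uniform(90, 130) for _ in range(50)]

# Systematic error: a hypothetical goniometer reading 5 degrees high.
biased = [t + 5.0 for t in true_scores]
diffs = [b - t for b, t in zip(biased, true_scores)]

# Constant overestimate of ~5.0 with essentially zero spread.
print(round(mean(diffs), 1), round(stdev(diffs), 6))
```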
19
Q

typical sources of measurement error

A

The person taking the measurements — the raters
The measuring instrument
Variability/consistency in the characteristic being measured

20
Q

What is the effect of regression toward the mean in repeated measurement?

A

can interfere when researchers try to extrapolate results observed in a small sample to a larger population of interest; extreme initial scores tend to move toward the mean on remeasurement, which can masquerade as real change

21
Q

statistical phenomenon that occurs when extreme scores are used in the calculation of measured change. Extreme scores on an initial test are expected to move closer (or regress) toward the group average (mean) on a second test.

A

Regression towards the mean (RTM)

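RTM falls straight out of the true-score-plus-error model and can be demonstrated with a short simulation (Python, stdlib only; the population mean, spreads, and cutoff are assumed values):

```python
# Sketch: scores = stable true component + chance error, so the most
# extreme first-test scores drift back toward the group mean on retest.
import random
from statistics import mean

random.seed(2)
N = 2000
true = [random.gauss(50, 10) for _ in range(N)]
test1 = [t + random.gauss(0, 10) for t in true]
test2 = [t + random.gauss(0, 10) for t in true]

# Take the 100 people who scored highest on test 1...
extreme = sorted(range(N), key=lambda i: test1[i], reverse=True)[:100]
m1 = mean(test1[i] for i in extreme)
m2 = mean(test2[i] for i in extreme)
# ...their test-2 mean falls back toward the group mean of 50,
# with no intervention at all.
print(round(m1, 1), round(m2, 1))
```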
22
Q

Discuss how concepts of agreement and correlation relate to reliability

A

Agreement reflects the degree to which raters assign the same values; correlation reflects the degree to which scores vary together. Raters can be highly correlated yet disagree systematically, so strong evidence of reliability requires both.

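The agreement-vs-correlation distinction is easy to show in code (Python, stdlib only; rater B is a hypothetical rater who scores every subject 5 points above rater A):

```python
# Sketch: two raters can correlate perfectly yet never agree.
from statistics import mean

rater_a = [10.0, 14.0, 18.0, 22.0, 26.0]
rater_b = [a + 5.0 for a in rater_a]   # constant offset: systematic disagreement

def pearson_r(x, y):
    """Pearson correlation computed from deviations about each mean."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

exact_agreements = sum(a == b for a, b in zip(rater_a, rater_b))
print(round(pearson_r(rater_a, rater_b), 3), exact_agreements)
# perfect correlation (r = 1.0) but zero exact agreements
```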
23
Q

determines the ability of an instrument to measure subject performance consistently

A

Test-retest

24
Q

time intervals between tests must be considered

A

Test-retest intervals

25
occurs when practice or learning during the initial trial alters performance on subsequent trials
Carryover
26
when the test itself is responsible for observed changes in a measured variable
Testing effects
27
training and standardization may be necessary for rater(s); the instrument and the response variable are assumed to be stable so that any differences between scores on repeated tests can be attributed solely to rater error
Rater reliability
28
stability of data recorded by one tester across two or more trials
Intrarater
29
two or more raters who measure same subject
Interrater
30
also called equivalent or parallel forms; assesses the differences between scores to determine whether they agree. Used as an alternative to test-retest reliability when the intention is to derive comparable versions of a test, minimizing the threat posed when subjects recall their responses
Alternate forms reliability
31
Relevant to a tool’s application
Reliability exists in a context
32
Exists to some extent in any instrument
Reliability is not all-or-none
33
how is reliability related to the concept of minimal detectable difference?
The greater the reliability, the smaller the MDC
34
MDC is based on
Standard error of measurement
35
the amount of change that goes beyond error
Minimal detectable change (MDC)
36
The most commonly used index of absolute reliability; it provides a range of scores within which the true score for a given test is likely to lie.
SEM
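The SEM/MDC cards above follow standard formulas (SEM = SD × √(1 − reliability); MDC95 = 1.96 × √2 × SEM), which can be sketched directly (Python, stdlib only; the SD and ICC values are assumed for illustration):

```python
# Sketch: greater reliability -> smaller SEM -> smaller MDC.
import math

def sem(sd, icc):
    """Standard error of measurement from the sample SD and reliability."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd, icc):
    """Smallest change exceeding measurement error at 95% confidence."""
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

for icc in (0.70, 0.90, 0.99):
    print(icc, round(mdc95(sd=10.0, icc=icc), 1))
# As ICC rises toward 1.0, the MDC shrinks toward 0.
```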
37
relates to the confidence we have that our measurement tools are giving us accurate information about a relevant construct so that we can apply results in a meaningful way. Used to measure progress toward goals and outcomes
Validity
38
A valid measure needs to be capable of…
discriminating among individuals w/ and w/o certain traits, dx, or conditions; evaluating the magnitude or quality of a variable; making accurate predictions of a pt's future
39
Implies that an instrument appears to test what it is intended to test; a judgment made by the users of a test after the test is developed
Face validity
40
establishes that the multiple items making up a scale adequately sample the universe of content that defines the construct being measured. The items must adequately represent the full scope of the construct being studied, the number of items addressing each component should reflect the relative importance of that component, and the test should not contain irrelevant items.
Content validity
41
Comparison of the results of a test (the index test) to an external criterion (a gold or reference standard)
Criterion-related validity
42
Two types of criterion-related validity
Concurrent and predictive validity
43
test correlates w/ reference standard at same time
Concurrent validity
44
Reflects the ability of an instrument to measure the theoretical dimensions of a construct; establishes the correspondence b/w a target test and a reference or gold standard measure of the same construct.
Construct validity
45
Methods of construct validity
Known-groups method, Convergence and divergence, Factor analysis
46
extent to which a test correlates w/ other tests of closely related structure
Convergent validity
47
extent to which a test is uncorrelated w/ tests of distinct or contrasting constructs
Discriminant validity
48
Change scores are used to:
Demonstrate effectiveness of an intervention
Track the course of a disorder over time
Provide a context for clinical decision making
49
What is the concern affecting validity of measuring change?
the ability of an instrument to reflect changes at extremes of a scale
50
Floor effect w/ change scores
not being able to see a difference in scores on an instrument when the participant's score is already low
51
The smallest difference that signifies an important difference in a patient’s condition
Minimal clinically important difference (MCID)
52
standardized assessment designed to compare and rank individuals within a defined population
Norm-referenced test
53
interpreted according to a fixed standard that represents an acceptable level of performance
Criterion-referenced test
54
The roles of surveys in clinical research?
elicits quantitative or qualitative responses & can be used for descriptive purposes or to generate data for testing hypotheses
55
The two basic structures of survey instruments
Questionnaires and Interviews
56
standardized survey, usually self-administered, that asks individuals to respond to a series of questions.
Questionnaires
57
the researcher asks respondents specific questions and records the answers. structured, semi-structured, unstructured
Interviews
58
Process of designing a survey
Question
Review literature
Questions and hypotheses
Content development
Using existing instruments
Expert review of draft questions
Pilot testing
Revisions
59
Two types of survey questions
Open ended and closed ended
60
ask respondents to answer in their own words; useful in identifying feelings, opinions, and biases
Open-ended question
61
ask respondents to select an answer from among several fixed choices. Typical formats: multiple choice, 2 choices, check all that apply, 3-5 options, checklists, measurement scales, visual analog scales
Closed-ended questions
62
identifies seven core quality indicators applicable to services provided by all occupational therapists, regardless of geographic location, practice settings and populations served.
QUEST
63
7 QUEST core indicators
Availability of competent occupational therapists
Long-term supply of resources
Ability to access service
Optimal use of resources
Success in obtaining occupational therapy goals
Satisfaction throughout service delivery
Incidents resulting in harm
64
To be used for a particular occupational therapy service, core indicators must be defined to be SMART:
Specific
Measurable
Agreed upon
Relevant
Timely
65
The two-step process is used for each core indicator:
1. Determine quality expectations for the service in relation to the areas measured by the core indicator. Consider the perspective of others, such as people receiving services, referral sources, and funding agencies, when identifying expectations. Sample questions to consider are listed for each core indicator; also consider the quality measurement question and sample SMART indicators provided for the core indicator.
2. Define the core indicator to measure performance of the service in relation to the quality expectations using SMART criteria. Outline the calculation used to determine the indicator result, define the terms used in the indicator, and identify how data will be collected for it. More than one SMART indicator may be defined for each core indicator.
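The "outline the calculation" step can be sketched for one hypothetical SMART indicator: the percentage of clients who attained their occupational therapy goals. The indicator definition, field names, and records below are invented for illustration; QUEST prescribes the process, not this particular formula.

```python
# Sketch: computing a hypothetical SMART indicator result from service records.
def goal_attainment_rate(records):
    """Indicator result: attained goals / clients with defined goals, as %."""
    eligible = [r for r in records if r["goals_set"]]
    if not eligible:
        return None  # no denominator: indicator undefined for this period
    attained = sum(1 for r in eligible if r["goals_attained"])
    return 100.0 * attained / len(eligible)

records = [
    {"goals_set": True, "goals_attained": True},
    {"goals_set": True, "goals_attained": False},
    {"goals_set": True, "goals_attained": True},
    {"goals_set": False, "goals_attained": False},  # excluded: no goals defined
]
print(goal_attainment_rate(records))  # 2 of 3 eligible clients
```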
66
What does QUEST mean?
Quality Evaluation Strategy Tool
67
Provider of QUEST
WFOT (World Federation of Occupational Therapists)