Research Objectives 2 & QUEST Flashcards

1
Q

Can theoretically have any value along a continuum within a defined range

Ex: wt in lbs

A

Continuous variables

2
Q

Can only be described in whole integer units

Ex: HR in BPM

A

Discrete variables

3
Q

Can only take on two values

Ex: yes or no on a survey

A

Dichotomous variable

4
Q

What is the challenge of measuring constructs?

A

It is subjective: constructs are abstract variables, measured according to expectations of how a person who possesses a specified trait would behave, look, or feel in certain situations.

5
Q

category/classifications (Ex: blood type, gender, dx)

A

Nominal

6
Q

numbers in rank order, inconsistent/unknown intervals. Based on greater than, less than (Ex: MMT, function, pain)

A

Ordinal

7
Q

numbers have rank order and equal intervals, but no true zero. Can be added or subtracted, but cannot be used to interpret actual quantities (Ex: Fahrenheit vs. Celsius temperature, shoe size)

A

Interval
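The "no true zero" property above can be shown with a short sketch (the temperature values are purely illustrative):

```python
# Interval scales support addition/subtraction but not ratio statements,
# because the zero point is arbitrary. 40 degrees F is not "twice as warm"
# as 20 degrees F: convert both to Celsius and the ratio changes.
def f_to_c(f: float) -> float:
    """Convert Fahrenheit to Celsius (both interval scales)."""
    return (f - 32) * 5 / 9

print(40 / 20)                  # 2.0 on the Fahrenheit numbers
print(f_to_c(40) / f_to_c(20))  # a different (even negative) ratio in Celsius
```

On a ratio scale such as height or weight, the ratio would survive any change of units (inches to centimeters), which is why only ratio data support statements like "twice as much."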

8
Q

numbers represent units w/ equal intervals measured from true zero (Ex: Height, wt, age)

A

Ratio

9
Q

What is the relevance of identifying measurement scales for statistical analysis?

A

mathematical operations
Meaningful interpretations

10
Q

Statistical procedures that apply mathematical manipulations to the data and therefore require interval or ratio data (e.g., mean, median, mode)

A

Parametric tests

11
Q

Statistical procedures that do not make the same assumptions and are designed to be used w/ ordinal and nominal data

A

Non parametric tests

12
Q

Two important forms of reliability in clinical measurement

A

Relative and absolute reliability

13
Q

Reflects true variance as a proportion of total variance in a set of scores.

Intraclass correlation coefficients (ICC) and kappa coefficients are commonly used

A

Relative reliability
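Since the card above names the kappa coefficient as a common relative-reliability index, here is a minimal sketch of Cohen's kappa for two raters; the ratings are hypothetical:

```python
# Cohen's kappa: agreement between two raters corrected for chance agreement,
# kappa = (p_observed - p_chance) / (1 - p_chance). Ratings are made up.
from collections import Counter

rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]

n = len(rater_a)
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: product of the raters' marginal proportions, per category.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
p_chance = sum(counts_a[c] / n * counts_b[c] / n for c in counts_a)

kappa = (p_observed - p_chance) / (1 - p_chance)
print(round(kappa, 2))  # -> 0.58
```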

14
Q

Indicates how much of a measured value, expressed in the original units, is likely due to error

Most commonly uses standard error of measurement

A

Absolute reliability

15
Q

any observed score involves a true score (fixed value) and an unknown error component (small or large)

A

Classical measurement theory

16
Q

true score ± error component equals…..

A

Observed score
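A minimal simulation of the identity above (observed = true + error); the true score and error spread are hypothetical values:

```python
# Classical measurement theory sketch: each observed score is the fixed true
# score plus a random error. Averaging repeated measurements lets the random
# errors cancel, pulling the mean observed score toward the true score.
import random

random.seed(0)

TRUE_SCORE = 50.0  # hypothetical fixed true value (e.g., degrees of ROM)
ERROR_SD = 2.0     # hypothetical random-error spread

observed = [TRUE_SCORE + random.gauss(0, ERROR_SD) for _ in range(10_000)]
mean_observed = sum(observed) / len(observed)

print(round(mean_observed, 1))  # close to 50.0
```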

17
Q

A matter of chance, possibly arising from factors such as examiner or subject inattention, instrument imprecision, or unanticipated environmental fluctuation.

Can also occur through the measuring instrument itself: imprecise instruments or environmental changes affecting instrument performance contribute to random error

A

Random errors

18
Q

Predictable errors of measurement that occur in one direction, constantly overestimating or underestimating the true score.

Because they are consistent, they are not a threat to reliability; they threaten only the validity of a measure.

A

Systematic errors

19
Q

typical sources of measurement error

A

The person taking the measurements — the raters
The measuring instrument
Variability/consistency in the characteristic being measured

20
Q

What is the effect of regression toward the mean in repeated measurement?

A

It can interfere when researchers try to extrapolate results observed in a small sample to a larger population of interest.

21
Q

A statistical phenomenon that occurs when extreme scores are used in the calculation of measured change: extreme scores on an initial test are expected to move closer to (regress toward) the group average (mean) on a second test.

A

Regression towards the mean (RTM)
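The phenomenon can be simulated in a few lines (all distributions hypothetical): subjects selected for extreme test-1 scores average closer to the group mean on test 2.

```python
# Regression toward the mean, simulated: part of an extreme score is random
# error, and that error does not repeat on retest.
import random

random.seed(1)

true = [random.gauss(100, 10) for _ in range(50_000)]  # stable trait
test1 = [t + random.gauss(0, 10) for t in true]        # score = true + error
test2 = [t + random.gauss(0, 10) for t in true]        # fresh error on retest

# Take the subjects who looked most extreme on test 1.
extreme = [i for i, s in enumerate(test1) if s > 120]

mean1 = sum(test1[i] for i in extreme) / len(extreme)
mean2 = sum(test2[i] for i in extreme) / len(extreme)

print(round(mean1, 1), round(mean2, 1))  # test-2 mean regresses toward 100
```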

22
Q

Discuss how concepts of agreement and correlation relate to reliability

A

Agreement reflects whether raters assign the same scores; correlation reflects whether scores vary together. Strong interrater agreement and correlation both support the reliability of a measure.

23
Q

determines the ability of an instrument to measure subject performance consistently

A

Test-retest

24
Q

time intervals between tests must be considered

A

Test-retest intervals

25
Q

Occurs when practice or learning during the initial trial alters performance on subsequent trials

A

Carryover

26
Q

when the test itself is responsible for observed changes in a measured variable

A

Testing effects

27
Q

training and standardization may be necessary for rater(s); the instrument and the response variable are assumed to be stable so that any differences between scores on repeated tests can be attributed solely to rater error

A

Rater reliability

28
Q

stability of data recorded by one tester across two or more trials

A

Intrarater

29
Q

two or more raters who measure same subject

A

Interrater

30
Q

also called equivalent or parallel forms; assesses the differences between scores to determine whether they agree; used as an alternative to test-retest reliability when the intention is to derive comparable versions of a test to minimize the threat posed when subjects recall their responses

A

Alternate forms reliability

31
Q

Relevant to a tool’s application

A

Reliability exists in a context

32
Q

Exists to some extent in any instrument

A

Reliability is not all-or-none

33
Q

how is reliability related to the concept of minimal detectable difference?

A

GREATER RELIABILITY = SMALLER THE MDC

34
Q

MDC is based on

A

Standard error of measurement

35
Q

the amount of change that goes beyond error

A

Minimal detectable change (MDC)

36
Q

The most commonly used reliability index, it provides a range of scores within which the true score for a given test is likely to lie.

A

SEM
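A sketch tying cards 33–36 together, using the commonly cited formulas SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM (the SD and ICC values below are hypothetical):

```python
# Greater reliability (higher ICC) means a smaller SEM, and therefore a
# smaller minimal detectable change (MDC). All numbers are illustrative.
import math

def sem(sd: float, icc: float) -> float:
    """Standard error of measurement from sample SD and reliability (ICC)."""
    return sd * math.sqrt(1 - icc)

def mdc95(sem_value: float) -> float:
    """Minimal detectable change at 95% confidence for a test-retest design."""
    return 1.96 * math.sqrt(2) * sem_value

sd = 8.0  # hypothetical between-subject SD, in the test's original units
for icc in (0.75, 0.90, 0.95):
    s = sem(sd, icc)
    print(icc, round(s, 2), round(mdc95(s), 2))
# Higher ICC -> smaller SEM -> smaller MDC
```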

37
Q

relates to the confidence we have that our measurement tools are giving us accurate information about a relevant construct so that we can apply results in a meaningful way

Used to measure progress toward goals and outcomes

A

Validity

38
Q

Validity needs to be capable of…

A

discriminating among individuals w/ and w/o certain traits, dx, or conditions
evaluating magnitude/quality of variable
creating accurate predictions of pt future

39
Q

Implies that an instrument appears to test what it is intended to test
Judgment by the users of a test after the test is developed

A

Face validity

40
Q

Establishes that the multiple items making up the scale adequately sample the universe of content that defines the construct being measured.
The items must adequately represent the full scope of the construct being studied.
The number of items that address each component should reflect the relative importance of that component.
The test should not contain irrelevant items.

A

Content validity

41
Q

Comparison of the results of a test to an external criterion
Index test AND Gold or reference standard as the criterion

A

Criterion-related validity

42
Q

Two types of criterion-related validity

A

Concurrent and predictive validity

43
Q

test correlates w/ reference standard at same time

A

Concurrent validity

44
Q

Reflects the ability of an instrument to measure the theoretical dimensions of a construct; establishes the correspondence b/w a target test and a reference or gold standard measure of the same construct.

A

Construct validity

45
Q

Methods of construct validity

A

Known-groups method, Convergence and divergence, Factor analysis

46
Q

extent to which a test correlates w/ other tests of closely related structure

A

Convergent validity

47
Q

extent to which a test is uncorrelated w/ tests of distinct or contrasting constructs

A

Discriminant validity

48
Q

Change scores are used to:

A

Demonstrate effectiveness of an intervention
Track the course of a disorder over time
Provide a context for clinical decision making

49
Q

What is the concern affecting validity of measuring change?

A

the ability of an instrument to reflect changes at extremes of a scale

50
Q

What is the floor effect w/ change scores?

A

Not being able to detect a decrease in scores on an instrument when the participant's score is already low

51
Q

The smallest difference that signifies an important difference in a patient’s condition

A

Minimal clinically important difference (MCID)

52
Q

standardized assessment designed to compare and rank individuals within a defined population

A

Norm-referenced test

53
Q

interpreted according to a fixed standard that represents an acceptable level of performance

A

Criterion-referenced test

54
Q

The roles of surveys in clinical research?

A

Elicits quantitative or qualitative responses & can be used for descriptive purposes or to generate data to test hypotheses

55
Q

The two basic structures of survey instruments

A

Questionnaires and Interviews

56
Q

standardized survey, usually self-administered, that asks individuals to respond to a series of questions.

A

Questionnaires

57
Q

the researcher asks respondents specific questions and records the answers.

structured, semi-structured, unstructured

A

Interviews

58
Q

Process of designing a survey

A

Question
Review literature
Questions and hypotheses
Content development
Using existing instruments
Expert review of draft questions
Pilot testing
Revisions

59
Q

Two types of survey questions

A

Open ended and closed ended

60
Q

ask respondents to answer in their own words.
useful in identifying feelings, opinions, and biases

A

Open-ended question

61
Q

ask respondents to select and answer from among several fixed choices.

typical formats: multiple choice, 2 choices, check all that apply, 3-5 options, checklists, measurement scales, visual analog scales

A

Closed-ended questions

62
Q

Identifies seven core quality indicators applicable to services provided by all occupational therapists, regardless of geographic location, practice setting, or population served.

A

QUEST

63
Q

7 QUEST core indicators

A

Availability of competent occupational therapists
Long term supply of resources
Ability to access service
Optimal use of resources
Success in obtaining occupational therapy goals
Satisfaction throughout service delivery
Incidents resulting in harm

64
Q

To be used for a particular occupational therapy service, core indicators must be defined to be SMART:

A

Specific
Measurable
Agreed upon
Relevant
Timely

65
Q

A two-step process is used for each core indicator:

A

Determine quality expectations for the service in relation to the areas measured by the core indicator. Consider the perspective of others such as people receiving services, referral sources and funding agencies when identifying expectations. Sample questions to consider are listed for each core indicator.

Consider the quality measurement question and sample SMART indicators provided for the core indicator. Define the core indicator to measure performance of the service in relation to the quality expectations using SMART criteria. Outline the calculation used to determine the indicator result, define terms used in indicator and identify how data will be collected for the indicator. More than one SMART indicator may be defined for each core indicator.

66
Q

What does QUEST mean?

A

Quality Evaluation Strategy Tool

67
Q

Provider of QUEST

A

WFOT (World Federation of Occupational Therapists)