Critically Appraising Evidence: Intervention Study Flashcards

1
Q

what elements do we need to consider when critically appraising evidence?

A

purpose

study design/methods

results

appraising clinical relevance

2
Q

what is included in the study design/methods?

A

prospective/retrospective

study population

application of intervention

outcome measures

bias

3
Q

what is included in appraising clinical relevance?

A

external validity

internal validity

applicability

4
Q

what is the definition of the purpose of a research article?

A

what the authors set out to achieve

5
Q

the purpose of an article is important for determining the ____ to your pt

A

applicability

6
Q

t/f: the purpose of the article may not actually be achieved

A

true

7
Q

what is the PICO question?

A

Population
Intervention
Comparison
Outcome

it outlines the parameters for the study or search

more specific is better

8
Q

what is attrition bias?

A

a systematic difference between study groups in the number of participants lost from the study and the way they are lost

9
Q

what is confounding bias?

A

a distorted measure of the association between exposure and outcome

10
Q

are most research studies prospective or retrospective studies?

A

prospective

11
Q

what is a prospective study?

A

a study that is designed before patients receive treatment

“live” data collection

12
Q

what are the cons of prospective studies?

A

participants may leave the study

participants may not follow the protocol

cost

time

13
Q

what is the advantage of prospective studies?

A

there is not as much bias

14
Q

what is a retrospective study?

A

a study that is designed after patients receive treatment

e.g., a chart review

15
Q

what are the cons of retrospective studies?

A

there are no set parameters or quality control, and they are more prone to bias

16
Q

t/f: single vs multiple study sites is about how many places are conducting the study, NOT about how many places the participants come from

A

true

17
Q

t/f: more diversity in a study is generally better

A

true

18
Q

what is the advantage of multiple study sites?

A

they capture different lifestyles and populations

19
Q

what is the disadvantage of multiple study sites?

A

inter-rater reliability can be inconsistent across sites

20
Q

what is the difference bw a concurrent control trial and historical control?

A

in a concurrent control trial, an investigator assigns subjects to groups (control and treatment) based on enrollment criteria

a historical control uses prior data to serve as the control group

21
Q

what are the pros of using a historical control?

A

you cut the recruitment amount roughly in half

saves money

saves time

22
Q

what are the cons of a historical control?

A

the 2 different time points make the populations very different

23
Q

what is consecutive sampling?

A

researchers set an entry point and screen everyone who comes through the entry point

24
Q

what is selective sampling?

A

participants come in response to solicitation

25
Q

which type of sampling may advertise, ask for a referral, or go to places in the community and invite people to participate?

A

selective sampling

26
Q

which type of sampling is common and practical?

A

selective sampling

27
Q

what do inclusion and exclusion criteria determine?

A

who is allowed in the study

28
Q

what questions should be considered about inclusion/exclusion criteria?

A

do the criteria make clinical sense?

is a clinically relevant population being recruited?

is there bias in the population being recruited?

would your patient have qualified for the study? if not, are the differences between your patient and the criteria relevant to potential outcomes?

29
Q

t/f: a study must have a baseline in order to measure change effects

A

true

30
Q

what are 3 important questions in the application of intervention?

A

1) was the treatment consistent (fidelity)?
2) was it realistic? can it be done in real-world practice?
3) were groups treated equally except for the IV?

31
Q

what are important questions to ask about outcome measures?

A

are they reliable?

are they valid?

do they span the ICF?

do they measure something important?

do they measure something that will change w/rx?

32
Q

what is a bias in research?

A

a tendency or preference toward a particular result that impairs objectivity

33
Q

what are the selection biases?

A

referral, volunteer biases

34
Q

t/f: referral bias is related to selective sampling

A

true

35
Q

what is volunteer bias?

A

the difference between individuals who volunteer and those who do not

leads to some groups being underrepresented or not represented at all

36
Q

what are the types of measurement bias?

A

instrument, expectation, and attention biases

37
Q

what is instrument bias?

A

errors in the instrument used to collect data

38
Q

what is expectation bias?

A

bias that can occur when there is no blinding, so the assessor's expectations can influence the results

39
Q

what is attention bias?

A

when participants know they are being studied, they are more likely to give a favorable response

40
Q

what are the types of intervention bias?

A

proficiency, compliance (attrition) biases

41
Q

what is proficiency bias?

A

different skill levels of PTs or differences across sites mean the interventions are not applied equally

42
Q

what is compliance bias?

A

losing people in a study

43
Q

what is confirmation bias?

A

researchers may miss observing a certain phenomenon because of a focus on testing the hypothesis

44
Q

what are the types of biases?

A

selection bias

measurement bias

intervention bias

confirmation bias

confounding bias

45
Q

t/f: missing data from attrition is unavoidable in clinical research w/follow-up visits

A

true

46
Q

how does attrition introduce bias?

A

the demographics of the participants in the study change

people who leave are likely different from those who stay, so only compliant patients end up being studied

it creates missing data

47
Q

what is intention-to-treat analysis?

A

analyzing data as though the participants remained in their assigned groups, even after leaving the study

one approach to account for the missing data created by attrition

48
Q

what are the statistical approaches to intention-to-treat?

A

last observation carried forward (see the sketch below)

best and worst case approaches (both often used in combination)

regression models (especially multiple regression models)
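The sketch below is a minimal illustration of last observation carried forward, assuming a pandas DataFrame of repeated outcome scores; the column names and values are invented for illustration and are not from any particular study.

```python
# Minimal sketch of last observation carried forward (LOCF). Toy data only.
import numpy as np
import pandas as pd

# Each row is one participant; columns are repeated outcome assessments.
scores = pd.DataFrame(
    {
        "baseline": [42.0, 38.0, 45.0],
        "week_6":   [47.0, 40.0, np.nan],    # participant p3 missed week 6
        "week_12":  [50.0, np.nan, np.nan],  # p2 and p3 dropped out by week 12
    },
    index=["p1", "p2", "p3"],
)

# LOCF: fill each missing follow-up with that participant's last observed value,
# so everyone can still be analyzed in their originally assigned group.
locf = scores.ffill(axis=1)
print(locf)
```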

49
Q

what is confounding bias?

A

when a third, uncontrolled variable influences the DV and can falsely show an association

50
Q

t/f: confounding bias strengthens internal validity

A

false, it hurts internal validity

51
Q

t/f: confounding error makes it difficult to establish a clear cause and effect link bw IV and DV

A

true

52
Q

how can we reduce confounding bias?

A

by setting very clear inclusion/exclusion criteria

53
Q

what is involved in understanding the results of an intervention study?

A

statistics

identifying potential problems in inferential stats

summarizing the clinical bottom line

reading the tables and figures

54
Q

what are the 3 categories of statistics?

A

descriptive stats

inferential stats

clinically relevant stats

55
Q

which category of statistics evaluates the importance of changes in outcomes for PT care?

A

clinically relevant statistics

56
Q

what things do we need to know about interpreting results from descriptive statistics?

A

how to classify different types of data

which results are from descriptive stats

the difference between normal and skewed distributions (and why it matters)

how to interpret reported means, medians, modes, SDs, proportions, and ranges

how different types of data are presented in descriptive statistics

57
Q

why should we pay attention to descriptive stats?

A

because they help determine where the majority of the data fall (demographics and outcomes)

because they help us understand the information before and after the intervention

58
Q

what are the commonly reported stats for nominal data?

A

proportion

59
Q

what are the commonly reported stats for ordinal data?

A

proportion, range

60
Q

what are the commonly reported stats for continuous, normally distributed data?

A

mean, SD, range

61
Q

what are the commonly reported stats for continuous, not normally distributed data?

A

median, IQR
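As a rough illustration of the last few cards, here is a minimal sketch (with simulated, made-up data) of which descriptive statistics are typically reported for each data type.

```python
# Minimal sketch: which descriptive statistics go with which data type.
# All data below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)

# continuous, roughly normally distributed -> mean, SD, range
normal_scores = rng.normal(loc=60, scale=10, size=200)
print("mean:", round(normal_scores.mean(), 1),
      "SD:", round(normal_scores.std(ddof=1), 1),
      "range:", (round(normal_scores.min(), 1), round(normal_scores.max(), 1)))

# continuous, not normally distributed (right-skewed) -> median, IQR
skewed_costs = rng.exponential(scale=1000, size=200)
q1, q3 = np.percentile(skewed_costs, [25, 75])
print("median:", round(np.median(skewed_costs), 1), "IQR:", round(q3 - q1, 1))

# nominal (e.g., sex) -> proportion
sex = rng.choice(["F", "M"], size=200)
print("proportion female:", np.mean(sex == "F"))
```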

62
Q

what do we need to know to decide if groups are statistically significantly different?

A

p values

63
Q

t/f: descriptive stats are useful but insufficient to make conclusions about the differences bw groups

A

true

64
Q

when interpreting and appraising results of inferential stats, what questions need to be asked?

A

what is being compared?

what type of data is being compared? (parametric/nonparametric, categorical/continuous)

was the right stat test used? (see the sketch below)
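As an example of the kind of check these questions point to, here is a minimal sketch (with simulated data) of an independent-samples t test, a usual choice for comparing two independent groups of roughly normal, continuous data.

```python
# Minimal sketch: comparing two independent groups of continuous, roughly normal
# data with an independent-samples t test. Data are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treatment_change = rng.normal(loc=12.0, scale=4.0, size=30)  # change scores, treatment group
control_change   = rng.normal(loc=9.0,  scale=4.0, size=30)  # change scores, control group

t_stat, p_value = stats.ttest_ind(treatment_change, control_change)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# If p < alpha (commonly 0.05), the group difference is statistically significant.
```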

65
Q

what is the importance of randomization?

A

it helps ensure that the groups are similar at baseline

66
Q

group differences at baseline may be due to what?

A

potential error/bias

67
Q

what things may lead to group differences at baseline?

A

unsuccessful randomization

poor inter-rater, intra-rater, or test-retest reliability

poor reliability of the instruments/tests

68
Q

what happens if alpha is larger than 0.05 (standard)?

A

there is less probability of type 2 error

there is greater tolerance of type 1 error

it is easier to get a false positive (FP)

69
Q

what happens if alpha is smaller than 0.05 (standard)?

A

there is a reduced chance of FP

it is harder to detect significance

it is less likely to incorrectly reject the null

70
Q

when would the alpha be smaller?

A

with post hoc Bonferroni corrections
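A minimal sketch of what a post hoc Bonferroni correction does: the per-comparison alpha is the overall alpha divided by the number of comparisons. The p values below are invented for illustration.

```python
# Minimal sketch of a Bonferroni correction; p values are invented.
overall_alpha = 0.05
p_values = [0.030, 0.012, 0.200]                 # e.g., 3 post hoc pairwise comparisons
corrected_alpha = overall_alpha / len(p_values)  # 0.05 / 3 = ~0.0167

for i, p in enumerate(p_values, start=1):
    print(f"comparison {i}: p = {p:.3f}, "
          f"significant at corrected alpha ({corrected_alpha:.4f})? {p < corrected_alpha}")
```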

71
Q

what is the effect size?

A

an estimate of the magnitude of the difference between groups (the effect of the different interventions)

72
Q

the effect size indicates the strength of the decision on what?

A

H0

73
Q

the bigger the effect size, the ___ our decision on the H0.

A

stronger

74
Q

t/f: the effect size depends on the test used

A

true

75
Q

what is the value used to measure the effect size for t test?

A

Cohen's d
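A minimal sketch of Cohen's d for two independent groups: the difference in means divided by the pooled standard deviation. The data are simulated for illustration.

```python
# Minimal sketch of Cohen's d (mean difference / pooled SD). Simulated data.
import numpy as np

rng = np.random.default_rng(2)
group_a = rng.normal(loc=12.0, scale=4.0, size=30)
group_b = rng.normal(loc=9.0,  scale=4.0, size=30)

def cohens_d(a, b):
    n_a, n_b = len(a), len(b)
    pooled_sd = np.sqrt(((n_a - 1) * np.var(a, ddof=1) +
                         (n_b - 1) * np.var(b, ddof=1)) / (n_a + n_b - 2))
    return (np.mean(a) - np.mean(b)) / pooled_sd

print(f"Cohen's d = {cohens_d(group_a, group_b):.2f}")
# Common rough benchmarks: ~0.2 small, ~0.5 medium, ~0.8 large.
```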

76
Q

what is the value used to measure the effect size for ANOVAs?

A

partial eta squared

77
Q

what are different strengths of effects sizes?

A

small, medium, and large effect

78
Q

how does variability affect effect size?

A

the greater the variability, the smaller the effect size

79
Q

if a curve is flatter, what does this mean about the variability? the effect size? the sample size?

A

the variability is greater

the effect size is smaller

the sample size is smaller

80
Q

when the effect size is smaller, is it more difficult or easier to distinguish differences between the null and alternative?

A

more difficult

81
Q

what is statistical power?

A

1-beta

the probability of rejecting the null hypothesis when H0 is false (a true positive)

82
Q

when there is greater power is there lower type 1 or 2 error?

A

lower type 2 error

83
Q

when beta increases, power ___, when beta decreases, power _____.

A

decreases, increases

84
Q

t/f: greater statistical power=stronger conclusion

A

true

85
Q

generally, studies should have power of greater than what?

A

0.8 (80% chance of detecting a real difference)

86
Q

larger sample size = ___ effect size = ____ within-group variability

A

larger, less

87
Q

smaller sample size = ___ effect size = ___ within-group variability

A

smaller, more

88
Q

when should power analysis be done? why?

A

before the study, in order to calculate how many participants are needed
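A minimal sketch of an a priori power analysis using statsmodels, assuming an expected effect size of 0.5 (Cohen's d), alpha = 0.05, and a target power of 0.80; the effect size here is just an example value.

```python
# Minimal sketch of an a priori power analysis for a two-sample t test.
# The assumed effect size (Cohen's d = 0.5) is an example value only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # expected Cohen's d
                                   alpha=0.05,        # type 1 error tolerance
                                   power=0.80)        # 1 - beta
print(f"participants needed per group: about {n_per_group:.0f}")
```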

89
Q

if there is insufficient power, there is a larger risk for what type of error?

A

type 2 errors

90
Q

t/f: if there is insufficient power, the validity of findings can be questionable

A

true

91
Q

why is a study with insufficient power (too small N) a problem?

A

because the risk of type 1 or type 2 error will be too high

because the study might find a difference between groups when a difference doesn't really exist

because the study might find no difference between groups when a difference actually exists

92
Q

what are the types of clinical meaningfulness?

A

minimal detectable change (MDC)

minimally clinically important differences (MCID or MID)

93
Q

what question does the MDC and MCID answer?

A

are the results significant and meaningful?

94
Q

what does the MDC indicate?

A

the amount of change required to exceed measurement variability
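One common way the MDC is calculated (not stated on this card, so treat it as a supplementary sketch) is MDC95 = 1.96 x SEM x sqrt(2), with SEM = SD x sqrt(1 - reliability). The SD and ICC values below are invented for illustration.

```python
# Minimal sketch of a common MDC calculation. SD and ICC values are invented.
import math

baseline_sd = 8.0    # SD of the outcome measure in a stable sample
icc = 0.90           # test-retest reliability (ICC)

sem = baseline_sd * math.sqrt(1 - icc)   # standard error of measurement
mdc_95 = 1.96 * sem * math.sqrt(2)       # 95% minimal detectable change
print(f"SEM = {sem:.2f}, MDC95 = {mdc_95:.2f}")
# A change smaller than the MDC95 may reflect measurement variability rather than real change.
```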

95
Q

what does the MCID indicate?

A

the amount of change required to produce clinically meaningful change

96
Q

is the MDC or MCID derived using a stable sample at 2 time points?

A

MDC

97
Q

is the MDC or MCID best estimated in a change sample over time?

A

MCID

98
Q

t/f: statistical significance could be defined at any point greater than “no change” depending on the sample size and SD

A

true
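A minimal simulated demonstration of this card's point: the same small true difference that is non-significant in a small sample typically becomes statistically significant once the sample is large enough, which is why clinical meaningfulness (MDC/MCID) has to be judged separately.

```python
# Minimal simulated demonstration: statistical significance depends on sample size and SD.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
true_diff, sd = 0.8, 5.0   # same small true difference in both scenarios

small_a = rng.normal(10.0, sd, size=15)
small_b = rng.normal(10.0 + true_diff, sd, size=15)
large_a = rng.normal(10.0, sd, size=5000)
large_b = rng.normal(10.0 + true_diff, sd, size=5000)

print("small n=15,   p =", round(stats.ttest_ind(small_a, small_b).pvalue, 3))
print("large n=5000, p =", round(stats.ttest_ind(large_a, large_b).pvalue, 5))
# With large n, even this tiny difference will usually reach p < 0.05.
```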

99
Q

what things do we need to consider when appraising clinical relevance?

A

external validity

internal validity

100
Q

what is external validity?

A

the generalizability of a study to a patient in clinical practice

101
Q

what are things we need to consider with external validity?

A

is the study population applicable to your client?

is the intervention applicable to your clinical setting?

are the outcome measures applicable to your clinical question?

can the results be applied to your client in your clinical setting?

102
Q

what is internal validity?

A

being sure that the results of a study are due to the manipulations within the experiment

103
Q

what things need to be considered about internal validity?

A

was the study designed and carried out w/sufficient QUALITY?

was the study conducted w/sufficient rigor that it can be used for clinical decision making?

does the way the participants were recruited avoid/minimize systematic bias?

does the study design avoid/minimize systematic bias?

does the application of the interventions (IV) avoid/minimize systematic bias?

do the outcome measures avoid/minimize systematic bias? do they have established validity and reliability?

104
Q

what are the study design considerations?

A

study design (randomized control trial, case study, etc)

control vs comparison used

are the participants in each group similar at the start of the study?

is there blinding?

is the attrition <20%? (should be)

are the reasons for dropouts explained?

are follow-up assessments conducted at sufficient intervals (e.g., 3 or 6 months) post intervention to capture long-term effects?

are the funding sources stated, and could they create bias?

105
Q

t/f: sponsors for a study are a bad thing

A

false; sponsors are not innately bad, but we need to make sure we consider their possible effects

106
Q

what are 5 things we need to look for when a study reports its stats?

A

1) are the statistical methods appropriate for the distribution of the data and the study design?

2) are the investigators controlling for confounding variables that could impact the outcome other than the intervention?

3) is an intention-to-treat analysis performed?

4) do the investigators address whether statistically significant results were clinically meaningful (ie MCID)?

5) are confidence intervals reported?

107
Q

what questions are important in summarizing the clinical bottom line?

A

what were the characteristics and size of study samples?

were the groups similar at baseline?

were outcome measures reliable and valid?

were appropriate descriptive and inferential statistical analyses applied to the results?

was there a treatment effect? if so, was it clinically relevant?