Critically Appraising Evidence-Intervention Study Flashcards
what elements do we need to consider when critically appraising evidence?
purpose
study design/methods
results
appraising clinical relevance
what is included in the study design/methods?
prospective/retrospective
study population
application of intervention
outcome measures
bias
what is included in appraising clinical relevance?
external validity
internal validity
applicability
what is the definition of the purpose of a research article?
what the authors set out to achieve
the purpose of an article is important for determining the ____ to your pt
applicability
t/f: the purpose of the article may not actually be achieved
true
what is the PICO question?
Population
Intervention
Comparison
Outcome
it outlines the parameters for the study or search
the more specific, the better
e.g., in adults with chronic low back pain (P), does aquatic therapy (I), compared with land-based exercise (C), reduce pain ratings (O)?
what is attrition bias?
systematic difference bw study groups in the number and way participants are lost from the study
what is confounding bias?
distorted measure of association bw exposure and outcome
are most research studies prospective or retrospective studies?
prospective
what is a prospective study?
a study that is designed b4 pts receive treatment
“live” data collection
what are the cons of prospective studies?
ppl may leave the study
ppl may not follow the protocol
money
time
what is the advantage of prospective studies?
there is not as much bias
what is a retrospective study?
a study that is designed after the pts receive rx
chart review
what are the cons of retrospective studies?
there are no set parameters or quality control, and they are more inclined to have bias
t/f: single vs multiple study sites is about how many places are conducting the study, NOT about how many places the participants come from
true
t/f: more diversity in a study is generally better
true
what is the advantage of multiple study sites?
there are dif lifestyles and populations
what is the disadvantage of multiple study sites?
interrater reliability may be inconsistent across sites
what is the difference bw a concurrent control trial and historical control?
in a concurrent control trial, the investigator assigns subjects to rx (control and treatment) based on enrollment criteria
a historical control uses prior data to serve as the control group
what are the pros of using a historical control?
you cut the recruitment amount in 1/2
saves money
saves time
what are the cons of a historical control?
the 2 different time points make the populations very different
what is consecutive sampling?
researchers set an entry point and screen everyone who comes through the entry point
what is selective sampling?
participants come in response to solicitation
which type of sampling may advertise, ask for a referral, or go to places in the community and invite ppl to participate?
selective sampling
which type of sampling is common and practical?
selective sampling
what do inclusion and exclusion criteria have to do with?
who is allowed in the study
what questions should be considered about inclusion/exclusion criteria?
do the criteria make clinical sense?
is a clinically relevant population being recruited?
is there bias in the population being recruited?
would your patient have qualified for the study? if not, are the differences bw your pt and the criteria relevant to potential outcomes?
t/f: a study must have a baseline to go off of to see change effects
true
what are 3 important questions in the application of intervention?
1) was rx consistent (fidelity)?
2) was it realistic? can it be done realistically?
3) were groups treated equally except for the IV?
what are important questions to ask about outcome measures?
are they reliable
are they valid?
do they span the ICF?
do they measure something important?
do they measure something that will change w/rx?
what is a bias in research?
a tendency or preference toward a particular result that impairs objectivity
what are the selection biases?
referral, volunteer biases
t/f: referral bias is related to selective sampling
true
what is volunteer bias?
the difference bw individuals who volunteer vs those who do not
leads to some groups being underrepresented or not represented
what are the types of measurement bias?
instrument, expectation, and attention biases
what is instrument bias?
errors in the instrument used to collect data
what is expectation bias?
when no blinding occurs
what is attention bias?
when participants know their involvement, they are more likely to give a favorable response
what are the types of intervention bias?
proficiency, compliance (attrition) biases
what is proficiency bias?
dif skills of PTs or dif sites mean interventions are not applied equally
what is compliance bias?
losing people in a study
what is confirmation bias?
researchers may miss observing a certain phenomenon bc of a focus on the hypothesis testing
what are the types of biases?
selection bias
measurement bias
intervention bias
confirmation bias
confounding bias
t/f: missing data from attrition is unavoidable in clinical research w/follow-up visits
true
how does attrition introduce bias?
demographics of participants in the study change
ppl who leave are likely dif from those who stay, and only compliant pts are studied
creates missing data
what is intention-to-treat analysis?
analyzing data as though participants remained in their assigned groups, even after they leave the study
one approach to make up for missing data created by attrition
what are the statistical approaches to intention-to-treat?
last observation carried forward
best and worst case approaches (both often used in combo)
regression models (esp multiple regression models)
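as an illustration (not from the deck), a minimal Python sketch of last observation carried forward; the data and function name are hypothetical:

```python
# Minimal LOCF sketch: missed follow-ups (None) after dropout are
# filled in with the participant's last observed value.
def locf(visits):
    filled, last = [], None
    for v in visits:
        if v is not None:
            last = v
        filled.append(last)
    return filled

# Participant drops out after visit 2; visits 3-4 carry the visit-2 score.
print(locf([42, 45, None, None]))  # [42, 45, 45, 45]
```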
what is confounding bias?
when a 3rd uncontrolled variable influences the DV and can falsely show an association
t/f: confounding bias strengthens internal validity
false, it hurts internal validity
t/f: confounding error makes it difficult to establish a clear cause and effect link bw IV and DV
true
how can we reduce confounding bias?
by setting very clear inclusion/exclusion criteria
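beyond eligibility criteria, a common statistical option (an addition here, not stated in the deck) is adjusting for a measured confounder in a regression model; a sketch with statsmodels and hypothetical data where age confounds the group-outcome relationship:

```python
# Sketch: estimating the group effect with the confounder (age) held
# constant via multiple regression (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "outcome": [10, 12, 9, 15, 14, 16],   # hypothetical scores
    "group":   [0, 0, 0, 1, 1, 1],        # 0 = control, 1 = treatment
    "age":     [70, 65, 72, 40, 45, 38],  # confounder: differs by group
})
model = smf.ols("outcome ~ group + age", data=df).fit()
print(model.params["group"])  # group effect adjusted for age
```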
what is involved in understanding the results of an intervention study?
statistics
identifying potential problems in inferential stats
summarizing the clinical bottom line
reading the tables and figures
what are the 3 categories of statistics?
descriptive stats
inferential stats
clinically relevant stats
which category of statistics evaluates the importance of changes in outcomes for PT care?
clinically relevant statistics
what things do we need to know about interpreting results from descriptive statistics?
how to classify different types of data
which results are from descriptive stats
difference bw normal and skewed distribution (and why it matters)
how to interpret reported means, median, modes, SD, proportions, and ranges
how different types of data are presented in descriptive statistics
why should we pay attention to descriptive stats?
bc it helps determine where a majority of data falls (demographics and outcomes)
bc it helps us understand info b4 and after intervention
what are the commonly reported stats for nominal data?
proportion
what are the commonly reported stats for ordinal data?
proportion, range
what are the commonly reported stats for continuous, normally distributed data?
mean, SD, range
what are the commonly reported stats for continuous, not normally distributed data?
median, IQR
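a quick sketch (hypothetical values, not from the deck) of why the distribution drives which stats get reported:

```python
# Mean/SD suit roughly normal data; median/IQR suit skewed data.
import numpy as np

normal_scores = np.array([23, 25, 27, 24, 26, 25, 28, 22])
skewed_days   = np.array([3, 4, 4, 5, 6, 7, 30, 45])  # long right tail

print(normal_scores.mean(), normal_scores.std(ddof=1))  # mean, sample SD
q1, q3 = np.percentile(skewed_days, [25, 75])
print(np.median(skewed_days), q3 - q1)                  # median, IQR
```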
what do we need to know to decide if groups are statistically significantly different?
p values
t/f: descriptive stats are useful but insufficient to make conclusions about the differences bw groups
true
when interpreting and appraising results of inferential stats, what questions need to be asked?
what is being compared?
what type of data is being compared? (para/nonpara, categorical/continuous)
was the right stat test used?
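a sketch of matching the test to the data (hypothetical values; the Shapiro-Wilk normality check is one common approach, assumed here for illustration):

```python
# Check normality, then choose a parametric t-test or a
# nonparametric Mann-Whitney U test.
from scipy import stats

group_a = [12.1, 13.4, 11.8, 14.0, 12.9, 13.1]
group_b = [10.2, 11.0, 10.8, 11.5, 10.1, 11.2]

# Shapiro-Wilk: p > 0.05 suggests data are consistent with normality.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))

if normal:
    result = stats.ttest_ind(group_a, group_b)     # parametric
else:
    result = stats.mannwhitneyu(group_a, group_b)  # nonparametric

print(result.pvalue)  # compare against alpha (e.g., 0.05)
```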
what is the importance of randomization?
it helps ensure that groups are similar at baseline
group differences at baseline may be due to what?
potential error/bias
what things may lead to group differences at baseline?
unsuccessful randomization
poor inter/intra-rater or test-retest reliability
poor reliability of instruments/tests
what happens if alpha is larger than 0.05 (standard)?
there is less probability of type 2 error
there is greater tolerance of type 1 error
it is easier to have FP
what happens if alpha is smaller than 0.05 (standard)?
there is a reduced chance of FP
it is harder to detect significance
it is less likely to incorrectly reject the null
when would the alpha be smaller?
with post hoc Bonferroni corrections
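the arithmetic behind a Bonferroni correction (illustrative numbers, not from the deck):

```python
# Bonferroni correction: divide alpha by the number of comparisons.
alpha = 0.05
n_comparisons = 3             # e.g., three pairwise post hoc comparisons
print(alpha / n_comparisons)  # ~0.0167 required per comparison
```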
what is the effect size?
an estimate of the magnitude of the dif bw groups (effect of the different interventions)
the effect size indicates the strength of the decision on what?
H0
the bigger the effect size, the ___ our decision on the H0.
stronger
t/f: the effect size depends on the test used
true
what is the value used to measure the effect size for t test?
Cohen's d
what is the value used to measure the effect size for ANOVAs?
partial eta squared
what are the different strengths of effect sizes?
small, medium, and large effect
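a worked sketch of Cohen's d from the pooled SD (hypothetical means/SDs; the ~0.2/0.5/0.8 cutoffs are Cohen's conventional benchmarks for d):

```python
# Cohen's d = (mean1 - mean2) / pooled SD.
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

print(cohens_d(55, 10, 30, 48, 10, 30))  # 0.7: medium-to-large effect
# Doubling the SDs halves d: greater variability -> smaller effect size.
print(cohens_d(55, 20, 30, 48, 20, 30))  # 0.35
```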
how does variability affect effect size?
the greater the variability the smaller the effect size
if a curve is flatter, what does this mean about the variability? the effect size? the sample size?
the variability is greater
the effect size is smaller
the sample size is smaller
when the effect size is smaller, is it more difficult or easier to distinguish differences bw null and alternative?
more difficult
what is statistical power?
1-beta
the probability of rejecting the null hypothesis when H0 is false (a true positive)
when there is greater power is there lower type 1 or 2 error?
lower type 2 error
when beta increases, power ___, when beta decreases, power _____.
decreases, increases
t/f: greater statistical power=stronger conclusion
true
generally, studies should have power of greater than what?
0.8 (80% chance of detecting a real difference)
larger sample size=___ effect size=____ w/in group variability (conditions that make a real difference easier to detect)
larger, less
smaller sample size=___effect size=___w/in group variability
smaller, more
when should power analysis be done? why?
b4 the study in order to calculate how many samples you need
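a sketch of an a priori power analysis, assuming a medium effect (d = 0.5 is an assumed target here); statsmodels' TTestIndPower is one tool that can solve for sample size:

```python
# Solve for the per-group n needed to detect d = 0.5 at 80% power.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                          alpha=0.05,
                                          power=0.80)
print(n_per_group)  # about 64 participants per group
```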
if there is insufficient power, there is a larger risk for what type of error?
type 2 errors
t/f: if there is insufficient power, the validity of findings can be questionable
true
why is a study with insufficient power (too small N) a problem?
bc the type 1 or 2 error will be too high
bc the study might find a difference bw groups when a difference doesn’t really exist
bc the study might find no difference bw groups when a difference actually exists
what are the types of clinical meaningfulness?
minimal detectable change (MDC)
minimal clinically important difference (MCID or MID)
what question does the MDC and MCID answer?
are the results significant and meaningful?
what does the MDC indicate?
the amount of change required to exceed measurement variability
what does the MCID indicate?
the amount of change required to produce clinically meaningful change
is the MDC or MCID derived using a stable sample at 2 time points?
MDC
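a sketch of the widely used MDC95 computation from test-retest data (the SD and ICC values are hypothetical): SEM = SD x sqrt(1 - ICC), MDC95 = 1.96 x SEM x sqrt(2):

```python
# MDC95: the change a score must exceed to outrun measurement error.
import math

sd_baseline = 8.0  # SD of scores in a stable sample
icc = 0.90         # test-retest reliability
sem = sd_baseline * math.sqrt(1 - icc)
mdc95 = 1.96 * sem * math.sqrt(2)
print(mdc95)  # about 7.0 points on this hypothetical scale
```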
is the MDC or MCID best estimated in a change sample over time?
MCID
t/f: statistical significance could be defined at any point greater than “no change” depending on the sample size and SD
true
what things do we need to consider when appraising clinical relevance?
external validity
internal validity
what is external validity?
the generalizability of a study to a pt in clinical practice
what are things we need to consider with external validity?
is the study population applicable to your client?
is the intervention applicable to your clinical setting?
are the outcome measures applicable to your clinical question?
can the results be applied to your client in your clinical setting?
what is internal validity?
being sure that the results of a study are due to the manipulations within the experiment
what things need to be considered about internal validity?
was the study designed and carried out w/sufficient QUALITY?
was the study conducted w/sufficient rigor that it can be used for clinical decision making?
does the way the participants were recruited avoid/minimize systematic bias?
does the study design avoid/minimize systematic bias?
does the application of the interventions (IV) avoid/minimize systematic bias?
do the outcome measures avoid/minimize systematic bias? do they have established validity and reliability?
what are the study design considerations?
study design (randomized control trial, case study, etc)
control vs comparison used
are the participants in each group similar at the start of the study?
is there blinding?
is the attrition <20%? (should be)
are the reasons for dropouts explained?
are follow-up assessments conducted at sufficient intervals (3 or 6 months) post intervention for LT effect?
are the funding sources stated, and could they create bias?
t/f: sponsors for a study are a bad thing
false, they are not innately bad, but we need to consider their possible effects
what are 5 things we need to look for when a study reports its stats?
1) are the statistical methods appropriate for the distribution of the data and the study design?
2) are the investigators controlling for confounding variables that could impact the outcome other than the intervention?
3) was an intention-to-treat analysis performed?
4) do the investigators address whether statistically significant results were clinically meaningful (ie MCID)?
5) are confidence intervals reported?
what questions are important in summarizing the clinical bottom line?
what were the characteristics and size of study samples?
were the groups similar at baseline?
were outcome measures reliable and valid?
were appropriate descriptive and inferential statistical analyses applied to the results?
was there a treatment effect? if so, was it clinically relevant?