EBP EXAM 2:2 Flashcards
Internal Validity
Did the INDEPENDENT VARIABLE cause the EFFECT?
Or was the effect caused by confounding or extraneous variables? Did the study measure what it intended to measure?
External Validity
Generalizability of the findings to others with the same demographics, ailment, etc.
When appraising QUANTITATIVE research, what will the APPRAISER consider?
Challenges in study design, complexity, practical and ethical considerations, study quality, internal validity, study relevance, etc.
What is the FIRST question to consider when appraising QUANTITATIVE research?
Was the design appropriate?
What is a systematic review?
The collection and rigorous analysis and appraisal of literature in a given area
What is the difference between a systematic review and a meta-analysis?
A meta-analysis is like a systematic review, but it goes a step further and statistically pools the numerical results from the included studies.
Describe an RCT
- Participants are randomly assigned to an intervention or control group
- They are pretested on the dependent variable (D.V.)
- Treatment is provided to the intervention group, while no treatment or an alternative treatment is given to the control group
- Results of the groups are compared
Describe a COHORT study
A naturally occurring group is divided into two cohorts: one has the independent variable (I.V.) and one does not. Both are followed over time to see whether they develop the outcome (dependent variable).
Cohort vs. case-control studies
Case-control studies identify people with the dependent variable (the outcome) first, then look back to see whether the I.V. (the exposure) is related.
List common biases in systematic reviews and methods researchers can use to address them.
- Publication bias: studies are more likely to be published if they show positive, statistically significant results
- Not all relevant studies are included
- Only one reviewer or appraiser: having more than one person examine the strength of the evidence helps in appraising a systematic review
List common biases in meta-analyses and methods researchers can use to address them.
- Lack of universal definitions
- Lack of consistent measures for outcomes
List common biases in RCTs and methods researchers can use to address them.
• Researchers and/or participants know or guess the treatment group
o Ways to avoid: conceal randomization; masking/blinding
• Dropouts/attrition
o Ways to avoid: intention-to-treat analysis
• Co-intervention
o Ways to avoid: have good exclusion criteria
• Contamination (control group inadvertently gets the treatment)
o Ways to avoid: implement controls in the study
• Different therapists (might do things differently, have better skills, or not give the same kind of therapy)
o Ways to avoid: training; use only one therapist
• Site of intervention (e.g., home vs. SNF)
o Ways to avoid: use consistent sites
Name some additional considerations when appraising the quality of RCTs.
- Inappropriate generalization (the study is too tightly controlled; not every client is the same, which limits the ability to generalize)
- Lack of longitudinal follow-up (findings were good, but we do not know how long effects last after the intervention is over)
- Feasibility (e.g., it is not feasible to provide therapy 4 hrs/day, 5x/week)
- Not appropriate for all questions
List common biases in cohort studies and methods researchers can use to address them.
· Selection bias (naturally occurring groups)
· Inconsistent or inaccurate data collection
· Attrition (going to have more dropouts over time if study is long term)
· Insufficient length of time (was the study long enough to detect a difference?)
List common biases in case-controlled studies and methods researchers can use to address them.
· Recall/memory bias (participants may not accurately remember past exposures)
In addition to design considerations, what other aspects of studies need to be considered when appraising internal validity?
· Was the statistical approach appropriate to answer the question?
· What were the results? Are the findings statistically significant (i.e., is the p-value below the chosen threshold)?
· Were the conclusions supported by the study findings?
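The statistical-significance question above can be made concrete with a small sketch. This is a minimal, stdlib-only illustration: the group names and outcome scores are made up, and the p-value uses a normal approximation to the t distribution rather than an exact test, so it is not a substitute for real statistical software.

```python
import math
import statistics

def welch_t_p(a, b):
    """Welch's two-sample t statistic with a normal-approximation,
    two-sided p-value (illustrative only; reasonable for larger n)."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / na + vb / nb)
    t = (statistics.mean(a) - statistics.mean(b)) / se
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))) is the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p

# Hypothetical outcome scores for intervention vs. control groups
intervention = [24, 27, 25, 30, 28, 26, 29, 31, 27, 28]
control = [22, 21, 25, 23, 24, 20, 26, 22, 23, 24]

t, p = welch_t_p(intervention, control)
print(f"t = {t:.2f}, p ~ {p:.4f}")  # p below .05 suggests statistical significance
```

A small p-value only says the difference is unlikely to be chance; whether the difference matters clinically is a separate appraisal question.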
What questions should guide the appraisal of a study’s external validity?
· Was the sampling plan appropriate?
· Was the sample size adequate?
· Were nonresponse and dropout rates acceptable?
What questions should guide the appraisal of a study’s impact/clinical utility?
· Are the findings clinically significant?
· How large were the treatment effects or effect size?
· Does the evidence pertain to my clinical situation?
· Can the therapeutic intervention be implemented in my clinical setting?
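The effect-size question above can also be sketched. Below is a minimal Cohen's d calculation (one common standardized effect size based on the pooled standard deviation); the group scores are hypothetical and reused purely for illustration.

```python
import math
import statistics

def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled_var)

# Hypothetical outcome scores for intervention vs. control groups
intervention = [24, 27, 25, 30, 28, 26, 29, 31, 27, 28]
control = [22, 21, 25, 23, 24, 20, 26, 22, 23, 24]

d = cohens_d(intervention, control)
print(f"Cohen's d = {d:.2f}")  # common rough guide: 0.2 small, 0.5 medium, 0.8 large
```

Unlike a p-value, the effect size stays meaningful regardless of sample size, which is why appraisal of clinical utility asks about it separately.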
What are some take-home messages from this presentation to keep in mind as you appraise your articles?
· Rigorous studies, free of bias, are the best evidence
· Few studies will meet all of the standards
· Many studies will almost, but not quite, answer the PICO question
· Research studies will usually not replicate one another exactly
· Consider whether the evidence is strong, moderate, or weak