Lecture 4 Flashcards
IMRAD
Introduction: Why was the study done? Methods: How was it conducted? Results: What were the outcomes? Analysis: How was the data treated? Discussion: What does the data mean?
When to use IMRAD
When reviewing a paper.
The methods section is key to making sure you have a quality study. What should be included in the methods section?
Originality. Who is being studied? Was it well designed? Did the study minimize systematic bias? Did the study have enough power? Are there statistical biases?
Some key words to consider when assessing the originality of the methods section when reviewing a paper
Robustness: may increase with a larger sample size or longer duration.
Rigor: improved when addressing criticisms of previous studies.
Generalizability: improved if you study additional populations.
How to increase robustness of a methods section
Use a larger sample size or a longer study duration.
How to improve rigor of a methods section
Address criticisms of previous studies.
How to improve generalizability of a methods section
Study additional populations.
How to confirm validation of a methods section
Validation will confirm repeatability of previous study findings.
Why is it important to consider who is being studied in the methods section?
Consider how the subjects were recruited, the inclusion and exclusion criteria, and the population from which they were drawn. Are they representative?
Recruitment methods of subjects influence ____
Inference: drawing conclusions based on data found.
Who, where, and time frame are all relevant to outcomes.
How to determine if the study was well designed based off methods section
Consider the event being studied
Consider the outcome measures- did they use surrogate endpoints or objective measures (should be objective in most EBM studies)
Assess validity of the outcome measures- did they measure what they said they were measuring?
What is the difference between statistical significance and clinical significance?
Statistical significance is objective; it is determined by a statistical test.
Clinical significance is somewhat subjective; knowledge of the clinical measure being reported is necessary to evaluate its significance.
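Below is a minimal sketch (not from the lecture; the blood pressure numbers and sample sizes are made up) of how a very large sample can make a clinically trivial difference statistically significant:

```python
# Minimal sketch (illustration only): with a large enough sample, a tiny
# difference can be statistically significant yet clinically meaningless.
# The means, SD, and group size below are made-up assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50_000                                              # very large groups
control = rng.normal(loc=120.0, scale=15.0, size=n)     # systolic BP, mmHg
treated = rng.normal(loc=119.5, scale=15.0, size=n)     # only 0.5 mmHg lower

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"mean difference = {treated.mean() - control.mean():.2f} mmHg")
print(f"p-value = {p_value:.4g}")                        # likely well below 0.05

# A 0.5 mmHg drop is unlikely to matter clinically even though p < 0.05:
# the test decides statistical significance; judging clinical significance
# requires knowledge of the measure itself.
```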
Systematic bias
Anything which erroneously influences the conclusions about groups and distorts comparisons.
Analogy: a bullseye with all hits clustered off center.
All studies may contain systematic bias, but each study design requires different steps to avoid or minimize.
How can you avoid systematic bias?
Do an RCT: the gold standard, which should theoretically avoid systematic bias.
In practice an RCT minimizes systematic bias but does not eliminate it completely. Some biases that can still occur in an RCT include:
Selection bias, performance bias, exclusion bias, and detection bias.
Selection bias (may occur in RCT)
Incomplete randomization to groups.
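A minimal sketch (illustration only; the subject labels and 1:1 allocation are assumptions) of the complete randomization that protects against selection bias:

```python
# Minimal sketch: complete randomization assigns subjects to groups by chance,
# so baseline traits should balance out on average and self-selection by
# subjects or investigators is avoided.
import random

subjects = [f"subject_{i:03d}" for i in range(1, 41)]   # 40 hypothetical subjects

random.seed(42)                      # fixed seed so the allocation is reproducible
shuffled = subjects[:]
random.shuffle(shuffled)

# First half to treatment, second half to placebo (1:1 allocation).
allocation = {s: ("treatment" if i < len(shuffled) // 2 else "placebo")
              for i, s in enumerate(shuffled)}

print(sum(g == "treatment" for g in allocation.values()), "assigned to treatment")
print(sum(g == "placebo" for g in allocation.values()), "assigned to placebo")
```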
Performance bias (may occur in RCT)
Treatment varies between groups apart from the intervention being studied (what was actually done).
Exclusion bias (may occur in RCT)
Differences in withdrawals from each group. Ex: people in the placebo group may withdraw because they don’t think they are getting anything out of the study, while people in the test group may withdraw due to drug side effects.
Detection bias (may occur in RCT)
Differences in outcome assessments.
Why non-randomized controlled trials lead to systematic bias
If people assign themselves to groups, those who choose the intervention may share personality traits or common interests. This tends to overestimate the effect of the intervention.
What is challenging about a cohort study?
Selecting the control group (people who were not exposed) can be challenging.
The goal is to find two groups that are identical in age, gender, socioeconomic status, and coexisting illness (or its absence in the control group).
How can retrospectively designed tests induce systematic bias?
Recall bias. Lack of control over data that was collected.
The definition of a condition may change over time; the resulting misclassification is a bias that may skew findings (blood pressure example).
Two of the main ways to prevent statistical bias
Adequate sample size and power.
Power
The likelihood of detecting a statistically significant difference if the study hypothesis is true. This is also referred to as the test's sensitivity.
Relationship between effect and power
Inverse: a smaller effect requires higher power (greater sensitivity) to detect.
How is effect size chosen?
By the investigator
What to consider when choosing a sample size and power?
Clinical significance and statistical significance
Sample sizes needed for a larger effect size
A larger effect size means a larger difference between the control and test groups, so fewer people are needed to detect it.
Sample sizes needed for smaller effect size
More people are needed to detect a smaller effect size; the study needs more sensitivity.
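A minimal sketch (not from the lecture) of how the required sample size grows as the effect size shrinks, using the standard two-sample normal-approximation formula; the effect sizes, alpha, and power values are illustrative assumptions:

```python
# Minimal sketch: approximate sample size per group for a two-sample comparison,
#   n per group ~ 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2
# where d is the standardized effect size.
import math
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per group needed to detect a standardized effect size d."""
    z_alpha = norm.ppf(1 - alpha / 2)    # two-sided test
    z_beta = norm.ppf(power)             # power = 1 - beta
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

for d in (0.8, 0.5, 0.2):                # large, medium, small effects
    print(f"d = {d}: ~{n_per_group(d)} subjects per group at 80% power")
# Larger effect -> fewer subjects needed; smaller effect -> many more subjects.
```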
Type I (alpha) error in statistical bias
False positive. Rare.
Type II (beta) error in statistical bias
False negative.
Example: beta (β) of 10%
Power = 1 - β = 1 - 0.10 = 0.90 (90%)
Study power should be a minimum of
80%-90%
Formula to determine power based on Beta (Type II error, statistical bias)
Power = 1 - β
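A minimal sketch (illustration only; group sizes, means, and SDs are made-up assumptions) that estimates the Type I error rate and power by simulating many two-group trials, so that beta = 1 - power can be read off directly:

```python
# Minimal sketch: estimate alpha (false-positive rate) and power (1 - beta)
# by repeatedly simulating a two-group trial and counting p < alpha results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials, n_per_group, alpha = 2000, 64, 0.05

def rejection_rate(true_difference: float) -> float:
    """Fraction of simulated trials that reach p < alpha."""
    rejections = 0
    for _ in range(n_trials):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_difference, 1.0, n_per_group)
        if stats.ttest_ind(treated, control).pvalue < alpha:
            rejections += 1
    return rejections / n_trials

print(f"Type I error rate (no true effect): ~{rejection_rate(0.0):.2f}")  # ~0.05
print(f"Power for a 0.5 SD effect:          ~{rejection_rate(0.5):.2f}")  # ~0.80
# Type II error rate (beta) = 1 - power, i.e. roughly 0.20 here.
```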
Rule of thumb. Adequate follow up %
Better than 70% return for follow up.
Reasons why subjects may not return to follow up (need 70% or better return)
Failure to follow criteria, adverse effects, loss of motivation, clinical reasons, and death.
How can blinding reduce performance (treatment) bias?
If the investigator is not blinded, they may bias treatment toward those perceived as needing it most.
If participants are not blinded, they may alter their behavior.
How can blinding reduce exclusion (withdrawal) bias?
If investigators are not blinded, they may exclude subjects whose results were not ideal from the analyses.
Subjects who withdraw may influence results.
Intent to treat analysis
Includes all subjects up until the point they either finish or withdraw from the study.
If withdrawals are not included, the study is biased in the direction of the intervention.
Efficacy (per protocol) analysis
Opposite of intent to treat. Examines only the effect of the treatment being studied in subjects who completed the protocol. Creates two cohorts even if an RCT design is used.
Only subjects who followed the protocol are included in the analysis.
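A minimal sketch (all counts are made up) contrasting an intent-to-treat analysis with a per-protocol analysis on the same toy trial, to show how dropping withdrawals can inflate the apparent treatment effect:

```python
# Minimal sketch: intent-to-treat (ITT) analyzes everyone as randomized,
# including withdrawals; per-protocol (efficacy) analysis keeps only subjects
# who completed the protocol, which can bias results toward the intervention.
# Each record is (assigned group, completed protocol, improved).
subjects = (
    [("treatment", True,  True)]  * 35 +   # completers who improved
    [("treatment", True,  False)] * 10 +   # completers who did not improve
    [("treatment", False, False)] * 15 +   # withdrawals (e.g. side effects)
    [("placebo",   True,  True)]  * 20 +
    [("placebo",   True,  False)] * 30 +
    [("placebo",   False, False)] * 10     # withdrawals (e.g. lost motivation)
)

def improvement_rate(group: str, per_protocol: bool) -> float:
    """Proportion improved, optionally restricted to protocol completers."""
    rows = [s for s in subjects
            if s[0] == group and (s[1] or not per_protocol)]
    return sum(s[2] for s in rows) / len(rows)

for label, pp in (("Intent to treat", False), ("Per protocol", True)):
    diff = improvement_rate("treatment", pp) - improvement_rate("placebo", pp)
    print(f"{label}: treatment - placebo = {diff:.2%}")
# The per-protocol estimate comes out larger because withdrawals are dropped.
```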