Research Methods Flashcards
What is Confounding?
Confounding occurs when we miss (or misread) the real relationship between one variable and another while looking for a causal relationship, usually because a confounder is operating in the background
- A confounder can be another risk factor for the disease
- A confounder can also be a preventive factor for the disease
- A confounder can also be a surrogate or a marker for some other cause of disease
What is the confounder in the idea that the MMR vaccine causes Autism?
- the age at which autism symptoms appear and are diagnosed aligns with the age at which the MMR vaccine is given, so age confounds the apparent association
What is a randomized controlled trial?
It is an analytic, experimental study
A study in which participants are allocated randomly between an intervention group and a control group
The most common design is the two-arm parallel trial
Random allocation protects against confounding
Why are trials conducted?
Safety
- Ascertain the safe dose of a new drug.
- Demonstrate safety and tolerability of a new compound
- Monitor adverse events profile of a new drug (against an existing drug or placebo)
Efficacy/ Effectiveness
- Demonstrate efficacy of new drug – does it work?
- Show that treatment T is superior or equivalent to treatment X
- Demonstrate effectiveness, and cost-effectiveness, of A vs. B
How do randomized controlled trials protect against confounding?
Taking baseline data, providing an intervention, and then measuring change since baseline = temporal precedence of exposure to outcome
- If we randomly assign people to have intervention/comparator…
- Allocation decided by chance
- Confounders should be equally distributed between groups
- Any effect of confounders on the outcome is likely equal within each group
- Any observed between-group difference in observed outcome likely due to intervention
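The balancing effect of random allocation can be sketched with a small simulation (the confounder, sample size, and numbers here are invented for illustration, not from any real trial): age is treated as a hypothetical confounder, and chance alone splits it almost evenly between the two arms.

```python
import random
import statistics

def randomize(participants):
    """Shuffle participants and split them into two equal arms at random."""
    shuffled = participants[:]
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

random.seed(42)
# Hypothetical confounder: participant age (a risk factor for many outcomes).
ages = [random.gauss(50, 12) for _ in range(10_000)]
arm_a, arm_b = randomize(ages)

# With random allocation, mean age should be nearly identical in each arm,
# so any between-arm difference in outcome is unlikely to be due to age.
print(round(statistics.mean(arm_a), 1), round(statistics.mean(arm_b), 1))
```

The same logic applies to confounders nobody has measured: randomization balances them in expectation, which is why it protects against confounding where observational designs cannot.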
What is Clinical equipoise?
- genuine uncertainty in the expert medical community over whether one treatment will be more beneficial
- ethical basis for medical research that involves assigning patients to different treatments
- It is not ethically acceptable to randomize patients to a treatment/condition known to be inferior
- a lack of research evidence suggesting a difference in efficacy does not stop participants from having a preference
- strong preferences can make it harder to recruit
What makes a good randomized controlled trial?
- Internal validity - is the exposure causing the desired outcome in this study
- External validity - to what extent can these findings be generalized to other people, situations and times
What is Bias?
Generally - ‘a partiality that prevents objective consideration of an issue or situation’
In statistics - ‘a tendency of an estimate to deviate in one direction from a true value’
Bias is any departure of results from the truth
How do biases occur in randomized controlled trials?
- Bias in RCTs occurs when systematic error is introduced into sampling or testing by selecting or encouraging one outcome or answer over others
- Bias is independent of both sample size and statistical significance
- This is unlike random error, which results from sampling variability and decreases as sample size increases
What types of bias are there?
- Selection bias
- Performance bias
- Attrition bias
- Observer/detection/information bias
What is Selection bias?
- Representativeness of sample to wider population: not adequately capturing the relevant population
- Systematic differences between baseline characteristics of the groups being compared
- Imbalance in demographic/clinical characteristics
What is Performance bias?
Systematic differences between groups in care provided, or in exposure to factors other than the interventions of interest
e.g. different incentives, or follow-up appointments in different locations
What is Attrition bias?
Systematic differences between groups in withdrawals from a study
e.g. more control-group participants withdraw from the study because they become unwell
e.g. more intervention-group participants drop out due to side effects
What is Observer/Detection/Information bias?
Outcome measure does not adequately capture outcome of interest
Systematic differences between groups in how outcomes are determined, or in how information is collected for the groups
e.g blood tests vs questionnaires
What are solutions to bias in RCTs?
- Selection bias - inclusion/exclusion criteria and sampling strategy/size; randomization and allocation procedures
- Performance, attrition, and observer/detection/information bias - blinding/masking
What are your options if posed this question?
Some of the trial participants who get randomised to the treatment arm only have (e.g.) one dose of the vaccine instead of two. What should I do with their data?
- Intention-to-treat analyses
- analyse outcomes for everyone randomized, irrespective of whether they have/adhere to intervention/s allocated
- more conservative test, better reflection of real-life in which not everyone adheres to treatment
- Per-protocol analyses
- analyse outcomes for only those who received the dose(s) of the intervention as specified by the protocol
- better test of intervention’s actual effectiveness when received as optimally specified
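The difference between the two analysis sets can be sketched with toy data (the records, dose rule, and recovery rates below are invented for illustration):

```python
# Hypothetical trial records: arm assigned at randomization, doses actually
# received, and a binary outcome (1 = recovered).
records = [
    {"arm": "treatment", "doses": 2, "recovered": 1},
    {"arm": "treatment", "doses": 2, "recovered": 1},
    {"arm": "treatment", "doses": 1, "recovered": 0},  # under-dosed participant
    {"arm": "control",   "doses": 0, "recovered": 0},
    {"arm": "control",   "doses": 0, "recovered": 1},
    {"arm": "control",   "doses": 0, "recovered": 0},
]

def recovery_rate(rows):
    return sum(r["recovered"] for r in rows) / len(rows)

# Intention-to-treat: everyone randomized to the arm, regardless of adherence.
itt_treat = [r for r in records if r["arm"] == "treatment"]

# Per-protocol: only those who received both doses as the protocol specified.
pp_treat = [r for r in records if r["arm"] == "treatment" and r["doses"] == 2]

print(recovery_rate(itt_treat))  # ITT: 2/3 recovered
print(recovery_rate(pp_treat))   # per-protocol: 2/2 recovered
```

Note how the per-protocol estimate looks more favourable here: dropping non-adherers discards the randomization, which is why ITT is the more conservative default.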
What Errors can we make in a study?
- Type I error means that we observed a difference when there wasn't really one e.g. our intervention was significantly better in our study, but this effect does not actually exist
- Type II error means that we didn't observe a difference when there actually was one e.g. our interventions looked equivalent but actually the new intervention is better
- as we reduce the chance of making a Type I error we increase our chance of making a Type II error
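This trade-off can be demonstrated with a quick simulation. The sketch below assumes a one-sample z-test with known standard deviation, and the sample size (25) and effect size (0.5) are arbitrary illustrative choices: under the null hypothesis the rejection rate tracks α, and tightening α from 0.05 to 0.01 raises the Type II error rate.

```python
import random
from statistics import NormalDist, mean

nd = NormalDist()

def rejects(sample, alpha):
    """One-sample z-test of H0: mean = 0, with known sd = 1 (textbook sketch)."""
    z = mean(sample) * len(sample) ** 0.5   # z = x̄ / (sd/√n) with sd = 1
    crit = nd.inv_cdf(1 - alpha / 2)        # two-sided critical value
    return abs(z) > crit

def rejection_rate(true_mean, alpha, n=25, trials=2000, seed=1):
    rng = random.Random(seed)
    hits = sum(
        rejects([rng.gauss(true_mean, 1) for _ in range(n)], alpha)
        for _ in range(trials)
    )
    return hits / trials

# Under H0 (true mean 0): rejection rate ≈ alpha, the Type I error rate.
print(rejection_rate(0.0, 0.05))
# Under H1 (true mean 0.5): 1 - rejection rate is the Type II error rate.
print(1 - rejection_rate(0.5, 0.05))
print(1 - rejection_rate(0.5, 0.01))  # stricter alpha → more Type II error
```

Shrinking α moves the critical value outward, so real effects are missed more often: exactly the trade-off described above.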
What is the Significance level?
The rate at which we are comfortable making a Type I error - the Type I error rate (α)
the convention is 5% or 0.05
meaning we are comfortable making a Type I error 5% of the time
What is the ‘Power’ of a study?
this is the probability that a test will not miss an effect when an effect truly exists
power is conventionally set at 1 − β, where β (the Type II error rate) = 0.2, giving 0.8 or 80%
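Together, α, power, and the expected effect size determine how many participants a trial needs. A sketch using the standard normal-approximation sample-size formula for comparing two group means (the 0.5 standardized effect size is just an example value):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Per-group sample size for a two-sided, two-sample z-test
    (normal-approximation formula; effect_size is the standardized
    difference between group means, i.e. difference / sd)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)   # ≈ 1.96 for alpha = 0.05
    z_beta = nd.inv_cdf(power)            # ≈ 0.84 for power = 0.80
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Detecting a medium standardized effect (0.5) at 80% power, 5% significance:
print(n_per_group(0.5))   # 63 per arm
```

Halving the detectable effect size quadruples the required sample size, since n scales with 1/effect².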
What is the P value?
When you compare groups using a statistical test
the p value is the probability that the difference observed could have occurred by chance if the groups compared were really alike
E.g. If p=0.12, the probability of observing this result (or one more extreme), given that the null hypothesis is true, is 12%
- this can be affected by our sample size
How can the p-value be affected by the sample size?
- Small, highly variable sample → high p-value
- with small, noisy samples the two groups are more likely to look different purely by chance even if the null hypothesis is true
- Many observations with low variability → low p-value for the same observed difference, and any observed difference is less likely to be a chance finding
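A sketch of this effect, assuming a two-sided, two-sample z-test with known standard deviation and an invented treatment effect of 0.3 standard deviations: the same underlying effect yields smaller p-values as the sample size grows.

```python
import random
from statistics import NormalDist, mean

def p_value(a, b):
    """Two-sided, two-sample z-test p-value, assuming known sd = 1 (a sketch)."""
    se = (2 / len(a)) ** 0.5                 # standard error of the difference
    z = abs(mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(z))

rng = random.Random(0)
true_diff = 0.3  # hypothetical modest treatment effect, in sd units

for n in (10, 100, 1000):
    a = [rng.gauss(true_diff, 1) for _ in range(n)]  # intervention arm
    b = [rng.gauss(0.0, 1) for _ in range(n)]        # control arm
    print(n, round(p_value(a, b), 4))                # p shrinks as n grows
```

This is also why a "non-significant" result in a small trial is weak evidence of no effect: the study may simply have been underpowered.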