lectures 11-23 Flashcards
what is randomisation and what does it achieve
randomly allocating participants to groups. Eliminates confounding because known and unknown confounders should be balanced between the groups
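A minimal sketch of simple randomisation in Python (the participant IDs and the function name `randomise` are made up for illustration, not from the lectures):

```python
import random

def randomise(participants, seed=None):
    """Randomly allocate each participant to 'treatment' or 'control'."""
    rng = random.Random(seed)
    return {p: rng.choice(["treatment", "control"]) for p in participants}

# Hypothetical participant IDs, purely for illustration
allocation = randomise(["P01", "P02", "P03", "P04"], seed=42)
print(allocation)
```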
cluster randomisation
randomise whole groups (clusters) of participants instead of individuals, used when individual randomisation is difficult or impractical. E.g. GP practices, hospital wards
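A sketch of the same idea at cluster level, with made-up cluster names; everyone in a cluster receives the same allocation:

```python
import random

def cluster_randomise(clusters, seed=None):
    """Allocate whole clusters (e.g. GP practices) to an arm;
    every participant in a cluster gets the same treatment."""
    rng = random.Random(seed)
    return {c: rng.choice(["treatment", "control"]) for c in clusters}

# Hypothetical cluster names, purely for illustration
allocation = cluster_randomise(["Practice A", "Practice B", "Ward 1"], seed=1)
```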
stratified (block) randomisation
Used when you want to be certain that important confounders are balanced: randomise individuals separately within each stratum, e.g. age group, sex, or hospital
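A sketch of stratified randomisation using permuted blocks, assuming hypothetical strata and participant IDs (the block size and all names are illustrative):

```python
import random

def stratified_randomise(strata, block_size=4, seed=None):
    """Allocate in permuted blocks within each stratum so the arms
    stay balanced on that confounder."""
    rng = random.Random(seed)
    allocation = {}
    for participants in strata.values():
        arms = []
        while len(arms) < len(participants):
            # Each block contains equal numbers of each arm, in random order
            block = ["treatment", "control"] * (block_size // 2)
            rng.shuffle(block)
            arms.extend(block)
        for person, arm in zip(participants, arms):
            allocation[person] = arm
    return allocation

# Hypothetical strata, purely for illustration
strata = {"under 65": ["P01", "P02", "P03", "P04"], "65 and over": ["P05", "P06"]}
print(stratified_randomise(strata, seed=7))
```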
cross over studies
Each person gets both treatments - confounding is effectively eliminated. Can only be done for long-term conditions and treatments that do not cure the disease
protecting randomisation
- concealment of allocation - Make sure that people can’t cheat and pick the treatment that they prefer
- intention-to-treat analysis - Analyse participants as randomised, this reflects real world
- blinding - of researchers and participants
- complete follow-up
- use large numbers - to help balance confounding
per-protocol analysis
Analyse as treated (not necessarily as randomised)
Lose the benefit of randomisation
potential sources of bias
- lack of blinding - participants or researchers may act differently if they know the groups
- loss to follow up - can't analyse their results
- non-adherence - e.g. stop taking treatment
4 strengths of RCTs
- The best study design to test an intervention
- Well conducted studies should eliminate confounding and bias
- You can calculate incidence, relative risks, and risk differences (see the worked example after this list)
- The strongest design for testing cause-and-effect associations
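A worked example of these measures, using made-up counts (20/100 events in the treated arm, 40/100 in the control arm):

```python
# Hypothetical trial counts, purely for illustration
events_treated, n_treated = 20, 100
events_control, n_control = 40, 100

incidence_treated = events_treated / n_treated   # 0.20
incidence_control = events_control / n_control   # 0.40

relative_risk = incidence_treated / incidence_control    # 0.50
risk_difference = incidence_treated - incidence_control  # -0.20
```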
clinical equipoise
must have genuine uncertainty about benefit or harm of intervention in RCTs
practical issues of RCTs
- can be expensive - need many participants, long time
- often funded by pharmaceutical companies - unlikely to fund studies for cheap treatments
- Participants in RCTs are often not representative - They need to meet all the inclusion criteria
- RCTs are not efficient for rare outcomes
internal validity
whether there is a real association in the group you studied, or whether the result is due to chance, bias, or confounding
external validity
can the findings be generalised to a broader population
effect of increasing sample size in random sample
- Makes the sample more likely to represent the population
- Reduces sampling variability (the standard error of estimates)
- Increases precision of parameter estimate - Confidence intervals get narrower
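A quick illustration of the last point, assuming a standard deviation of 10 (an arbitrary value): quadrupling the sample size halves the standard error and the CI width.

```python
import math

sd = 10.0  # assumed standard deviation, purely for illustration
for n in [25, 100, 400]:
    se = sd / math.sqrt(n)        # standard error of the mean
    ci_width = 2 * 1.96 * se      # width of a 95% confidence interval
    print(f"n={n}: SE={se:.2f}, 95% CI width={ci_width:.2f}")
```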
2 interpretations of confidence intervals
- If we repeated the study many times with new samples, 95% of the confidence intervals calculated in this way would contain the true parameter value
- 95% CI = We are 95% confident that the true population value lies between the limits of the confidence interval
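A minimal sketch of how a 95% CI for a mean is computed, using a made-up sample and the large-sample 1.96 multiplier (a small sample like this would strictly need a t multiplier):

```python
import math
import statistics

# Hypothetical sample of measurements, purely for illustration
sample = [4.1, 5.0, 4.7, 5.3, 4.9, 5.1, 4.6, 5.2]

mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(len(sample))

# Large-sample 95% CI: estimate +/- 1.96 standard errors
lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"95% CI: ({lower:.2f}, {upper:.2f})")
```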
what does it mean if 0 in included in a CI for difference between two means
the result is not statistically significant - the observed difference could be due to chance
null hypothesis
There is no association between exposure and outcome
There is no difference between groups
Parameter equals null value
if this is true, any differences must be due to chance
alternative hypothesis
There is an association between exposure and outcome
Parameter does not equal null value
p value
The probability of getting an estimate at least as extreme as the one you observed if there really is no association
i.e. probability of it occurring by chance
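One way to make this concrete is a permutation test, sketched below: relabel the data at random many times and count how often the difference is at least as extreme as the one observed (the function name and defaults are illustrative):

```python
import random
import statistics

def permutation_p_value(group_a, group_b, n_perm=10_000, seed=0):
    """Two-sided p-value: the proportion of random relabellings of the
    data whose difference in means is at least as extreme as observed."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(statistics.mean(a) - statistics.mean(b)) >= observed:
            extreme += 1
    return extreme / n_perm
```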
statistically significant p value
P values < 0.05
Less than 5% probability of a result this extreme due to chance
We can reject the null hypothesis and accept the alternative hypothesis
not statistically significant
if P value > 0.05
More than 5% probability of this result occurring by chance
We cannot reject the null hypothesis
type I error
false positive
occurs when we find a “statistically significant” result when there is no real difference
we reject H0 even though it is correct and the difference is due to chance
P(type I error) = alpha (the significance level, usually 0.05)
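A simulation sketch of this: when both groups come from the same distribution (H0 is true), a z-test at alpha = 0.05 rejects in roughly 5% of trials. All names and defaults are illustrative:

```python
import math
import random
import statistics

def type_i_error_rate(n_trials=2000, n=30, seed=0):
    """Both groups are drawn from the SAME distribution (H0 is true),
    so every rejection at alpha = 0.05 is a false positive; the
    long-run rejection rate should be close to 5%."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_trials):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
        z = (statistics.mean(a) - statistics.mean(b)) / se
        if abs(z) > 1.96:  # reject H0 at the 5% level
            rejections += 1
    return rejections / n_trials
```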
type II errors
false negative
occurs when we don’t find a “statistically significant” result when there is a real difference
we fail to reject H0 even though it is false and the difference is real
More likely if we use smaller samples
Also more likely if we demand a smaller P value before rejecting H0 (e.g. ≤ 0.01)
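A companion sketch for type II errors: here there is a real difference between the groups, so failures to reject H0 are false negatives, and smaller samples produce more of them. The effect size and defaults are made up:

```python
import math
import random
import statistics

def power(n, true_diff=0.5, n_trials=2000, seed=0):
    """With a REAL difference between groups, the proportion of trials
    that correctly reject H0; 1 - power is the type II error rate."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_trials):
        a = [rng.gauss(true_diff, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
        z = (statistics.mean(a) - statistics.mean(b)) / se
        if abs(z) > 1.96:
            rejections += 1
    return rejections / n_trials

# Smaller samples give lower power, i.e. more type II errors:
# power(20) comes out well below power(100)
```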
clinical importance
if it will have a substantial effect/make a big enough difference in practice to be worth funding
selection bias in case-control studies
Controls not representative of the population which gave rise to cases
If inclusion/exclusion criteria differ between cases and controls
selection bias in cohort studies
Loss to follow up
If the comparison group is selected separately from the exposed group, bias can arise - e.g. the healthy worker effect
how can measurement error occur
Participants provide inaccurate responses
Data is collected incorrectly/inaccurately