Session 6: Reviews of Evidence Flashcards
Describe the hierarchy of scientific evidence from strongest to weakest (7)
- Meta-analyses and systematic reviews
- RCTs
- Cohort studies
- Case-Control Studies
- Cross Sectional Studies
- Animal trials and in vitro studies
- Case reports, opinion papers and letters
What should healthcare services and interventions be based on?
best available evidence
What should the best available evidence be based on?
rigorously conducted research
i.e. primary research studies such as RCTs
Narrative reviews vs systematic reviews
Assumptions? Methodology? Reproducible? Biased or unbiased? Subjective or objective?
Narrative:
- Assumptions = implicit
- Methodology = opaque
- Reproducible = no
- Biased or unbiased = biased
- Subjective or objective = subjective
Systematic:
- Assumptions = explicit
- Methodology = transparent
- Reproducible = yes
- Biased or unbiased = unbiased
- Subjective or objective = objective
Why are systematic reviews extremely credible sources of evidence?
What are the 3 key aspects?
they have a clearly focused question
they include explicit statements about types of:
- study
- participants
- interventions
- outcome measures
they include a systematic literature search and selection of materials
appraisal and synthesis of the selected studies are also included
KEY ASPECTS = explicit, transparent and reproducible
Describe what a systematic review is vs what a meta-analysis is
systematic review is ‘an overview of primary studies that uses explicit and reproducible methods’
meta-analysis is ‘a quantitative synthesis of the results of two or more primary studies that addressed the same hypothesis in the same way’
What are the 4 main purposes of performing a meta-analysis?
- to facilitate the synthesis of a large number of study results
- to systematically collate study results
- to reduce problems of interpretation due to variations in sampling
- to quantify effect sizes and their uncertainty as a pooled estimate
Meta-analysis should have a formal protocol specifying what 4 things?
- compilation of complete set of studies
- identification of common variable or category definition
- standardised data extraction
- analysis allowing for sources of variation
Calculate the odds ratio and 95% CI - what do these indicate?
- aspirin: survived 566, died 49
- placebo: survived 557, died 67
aspirin: odds of surviving = 566:49 = 11.55:1
placebo: odds of surviving = 557:67 = 8.31:1
odds ratio (OR) = 11.55 / 8.31 = 1.39
error factor (EF) = 1.48
95% CI = OR ÷ EF to OR × EF = 1.39 ÷ 1.48 to 1.39 × 1.48 = 0.94 to 2.05
OR indicates greater odds of surviving after MI with aspirin vs placebo
95% CI includes the null value (OR = 1.00), so the result is not statistically significant and could be due to chance
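A minimal Python sketch of the arithmetic above, using the counts from the card. Treating the error factor as exp(1.96 × SE of the log OR), with SE from the usual 1/a + 1/b + 1/c + 1/d formula, is an assumption about how the card's figure of 1.48 was obtained (computing it directly gives ≈1.47).

```python
import math

# 2x2 table from the card (survival after MI)
aspirin_survive, aspirin_die = 566, 49
placebo_survive, placebo_die = 557, 67

odds_aspirin = aspirin_survive / aspirin_die      # ~11.55
odds_placebo = placebo_survive / placebo_die      # ~8.31
odds_ratio = odds_aspirin / odds_placebo          # ~1.39

# Error factor = exp(1.96 * SE(log OR)); SE(log OR) = sqrt(1/a + 1/b + 1/c + 1/d)
se_log_or = math.sqrt(1/aspirin_survive + 1/aspirin_die +
                      1/placebo_survive + 1/placebo_die)
error_factor = math.exp(1.96 * se_log_or)         # ~1.47 (card rounds to 1.48)

ci_lower = odds_ratio / error_factor              # ~0.94
ci_upper = odds_ratio * error_factor              # ~2.05

print(f"OR = {odds_ratio:.2f}, 95% CI = {ci_lower:.2f} to {ci_upper:.2f}")
```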
how would you calculate a pooled estimate odds ratio?
combine the ORs and their 95% CIs to give a pooled estimate OR using a statistical computer programme
studies are weighted according to their size and the uncertainty of their ORs
narrower CI = greater weight given to that study's results (see the sketch below)
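A minimal sketch (not the statistical programme the card refers to) of fixed-effect inverse-variance pooling, assuming each study reports an OR with a 95% CI; the three study results below are invented for illustration.

```python
import math

# Hypothetical per-study results: (odds ratio, lower 95% CI, upper 95% CI)
studies = [(1.39, 0.94, 2.05), (0.85, 0.60, 1.20), (1.10, 0.95, 1.27)]

weights, log_ors = [], []
for or_, lo, hi in studies:
    # Recover SE(log OR) from the reported CI; weight by inverse variance,
    # so a narrower CI means a smaller SE and a larger weight
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    weights.append(1 / se**2)
    log_ors.append(math.log(or_))

pooled_log_or = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
pooled_or = math.exp(pooled_log_or)
ci = (math.exp(pooled_log_or - 1.96 * pooled_se),
      math.exp(pooled_log_or + 1.96 * pooled_se))
print(f"Pooled OR = {pooled_or:.2f}, 95% CI = {ci[0]:.2f} to {ci[1]:.2f}")
```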
How do you interpret a Forest Plot?
squares =
lines =
diamond =
dotted line =
width of diamond =
solid line =
squares = individual odds ratios
(size of square is in proportion to the weight given to the study)
lines = 95% CI
diamond = pooled estimate
dotted line = drawn through the centre of the diamond, indicating the pooled odds ratio
width of diamond = pooled 95% CI
solid line = the null hypothesis value (OR = 1)
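A rough matplotlib sketch of the elements listed above, using invented study results and an invented pooled estimate; it is only meant to show which plot element corresponds to which definition.

```python
import matplotlib.pyplot as plt

# Hypothetical studies: (name, OR, lower CI, upper CI, relative weight)
studies = [("Study A", 1.39, 0.94, 2.05, 0.2),
           ("Study B", 0.85, 0.60, 1.20, 0.3),
           ("Study C", 1.10, 0.95, 1.27, 0.5)]
pooled_or, pooled_lo, pooled_hi = 1.05, 0.93, 1.18   # invented pooled estimate

fig, ax = plt.subplots()
for i, (name, or_, lo, hi, w) in enumerate(studies):
    ax.plot([lo, hi], [i, i], color="black")                    # line = 95% CI
    ax.plot(or_, i, "s", markersize=6 + 10 * w, color="black")  # square sized by weight
# Diamond = pooled OR (centre) and pooled 95% CI (width)
ax.plot([pooled_lo, pooled_or, pooled_hi, pooled_or, pooled_lo],
        [len(studies), len(studies) + 0.2, len(studies), len(studies) - 0.2, len(studies)],
        color="black")
ax.axvline(1.0, color="black")          # solid line = null hypothesis (OR = 1)
ax.axvline(pooled_or, linestyle=":")    # dotted line = pooled odds ratio
ax.set_xscale("log")
ax.set_yticks(range(len(studies) + 1))
ax.set_yticklabels([s[0] for s in studies] + ["Pooled"])
ax.set_xlabel("Odds ratio (log scale)")
plt.show()
```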
What are the common problems with meta-analyses? 3
Heterogeneity between studies
Variable quality of studies
Publication bias in selection of studies
There are two approaches to calculating the pooled estimate odds ratio and its 95% CI. What are they and what do they assume?
(this is to do with heterogeneity between studies and modelling for variation)
Fixed effect model: assumes that the studies are estimating exactly the same true effect size
Random effects model: assumes that the studies are estimating similar, but not the same, true effect size
Fixed effect model vs random effects model comparison of:
- Point estimate (e.g. OR) =
- 95% CI =
- Weighting of the studies =
- Point estimate (e.g. OR): often similar (but not always!) in both the fixed and random effects models
- 95% CI: often wider in the random effects model than in the fixed effect model
- Weighting of the studies: more equal between the studies in the random effects model than in the fixed effect model, i.e. greater relative weighting towards smaller studies
NB: there is a lot of debate over which model is superior!
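A sketch comparing the two models under common assumptions: inverse-variance weighting for the fixed effect model and the DerSimonian-Laird estimate of between-study variance for the random effects model (one of several possible approaches, not named on the card). The log ORs and standard errors are invented; the output illustrates the wider CI and more equal weights under the random effects model.

```python
import math

# Hypothetical studies as (log OR, SE of log OR)
studies = [(math.log(1.39), 0.20), (math.log(0.70), 0.15), (math.log(1.20), 0.10)]

# Fixed effect: weight = 1 / within-study variance
w_fixed = [1 / se**2 for _, se in studies]
mu_fixed = sum(w * y for w, (y, _) in zip(w_fixed, studies)) / sum(w_fixed)

# Random effects (DerSimonian-Laird): estimate between-study variance tau^2,
# then weight = 1 / (within-study variance + tau^2)
q = sum(w * (y - mu_fixed)**2 for w, (y, _) in zip(w_fixed, studies))
c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)
w_random = [1 / (se**2 + tau2) for _, se in studies]
mu_random = sum(w * y for w, (y, _) in zip(w_random, studies)) / sum(w_random)

for label, w, mu in [("fixed", w_fixed, mu_fixed), ("random", w_random, mu_random)]:
    se_pooled = math.sqrt(1 / sum(w))
    rel_weights = [round(x / sum(w), 2) for x in w]
    print(f"{label}: OR = {math.exp(mu):.2f}, "
          f"95% CI = {math.exp(mu - 1.96*se_pooled):.2f} to {math.exp(mu + 1.96*se_pooled):.2f}, "
          f"relative weights = {rel_weights}")
```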
The random effects model can only account for variation, not explain it. What could you do to help explain heterogeneity, which may provide further insight into the effect of a treatment or exposure? What does this focus on?
sub-group analysis!
it focuses on:
- study characteristics, e.g. year of publication, length of follow-up, % female participants
- participant profile, where data are analysed by types of participants (e.g. subgroups of males, females, adults, children), as in the sketch below
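A small sketch of the idea, assuming invented studies tagged with a participant-profile subgroup; pooling (fixed effect) within each subgroup separately shows whether that characteristic explains part of the heterogeneity.

```python
import math

# Hypothetical studies: (subgroup label, log OR, SE of log OR)
studies = [("adults", math.log(1.4), 0.20), ("adults", math.log(1.3), 0.15),
           ("children", math.log(0.9), 0.25), ("children", math.log(0.8), 0.30)]

# Pool within each subgroup; a clear difference between subgroup estimates
# suggests this characteristic explains some of the heterogeneity
for group in sorted({g for g, _, _ in studies}):
    subset = [(y, se) for g, y, se in studies if g == group]
    w = [1 / se**2 for _, se in subset]
    mu = sum(wi * y for wi, (y, _) in zip(w, subset)) / sum(w)
    print(f"{group}: pooled OR = {math.exp(mu):.2f}")
```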
Variable quality of studies is an issue in meta-analyses. What can variable quality be due to?
List 4 types of study from less to more susceptible to bias and confounding.
variable quality can be due to:
- poor study design
- poor protocol design
- poor protocol implementation
less susceptible to more susceptible to bias and confounding:
RCT > non-RCT > cohort > case-control
What are the two approaches used to try and lessen variable quality of studies?
- define a basic quality standard and only include studies satisfying these criteria, e.g. Cochrane reviews used to include only RCTs
- score each study for its quality and then:
* incorporate the quality score into the weighting allocated to each study during the modelling, so that higher quality studies have a greater influence on the pooled estimate
* use sub-group analyses to explore differences e.g. high quality studies vs low quality studies
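One possible (hypothetical) way to fold a quality score into the weighting, as in the first bullet of the second approach above; the scores and results are invented, and real meta-analyses use a variety of quality-weighting schemes.

```python
import math

# Hypothetical studies: (log OR, SE of log OR, quality score on a 0-1 scale)
studies = [(math.log(1.3), 0.15, 0.9), (math.log(0.8), 0.20, 0.4),
           (math.log(1.1), 0.25, 0.7)]

# Multiply the inverse-variance weight by the quality score, so that
# higher-quality studies pull the pooled estimate more strongly
weights = [(1 / se**2) * q for _, se, q in studies]
pooled = math.exp(sum(w * y for w, (y, _, _) in zip(weights, studies)) / sum(weights))
print(f"Quality-weighted pooled OR = {pooled:.2f}")
```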
When assessing the quality of RCTs, what are the main components of the scales used to assess them?
- allocation methods e.g. randomisation?
- blinding and outcome assessment
- patient attrition (e.g. <10%) and intention-to-treat analysis
- appropriate statistical analysis
What are the reasons and consequences of publication bias in selection of studies?
Studies with statistically significant or ‘favourable’ results are more likely to be published than those studies with non-statistically significant or ‘unfavourable’ results
(this particularly applies to smaller studies)
Any systematic review or meta-analysis can be flawed by such bias
- publication bias leads to a biased selection of studies towards demonstrating an effect
What are the methods of identification of publication bias? 3 things!
- check meta-analysis protocol for method of identification of studies (should include searching and identification of unpublished studies)
- plot results of identified studies against a measure of their size (e.g. inverse of standard error) i.e. a Funnel Plot
- use a statistical test for publication bias (these tend to be weak statistical tests)
How do you interpret a Funnel Plot?
when would publication bias be likely to exist?
= a plot of some measurement of study size (e.g. standard error of estimate) against a measure of effect (e.g. OR)
If there is no publication bias, the plot will form a balanced, symmetrical funnel
Smaller studies can be expected to vary further from the ‘central’ effect size
Publication bias is likely to exist if there are few small studies with results indicating a small or negative measure of effect
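A simulation sketch of a funnel plot with invented studies and a crude, assumed model of publication bias (small studies with unfavourable results are dropped), to show the asymmetry described above.

```python
import random
import matplotlib.pyplot as plt

random.seed(0)

# Simulate hypothetical studies around a true log OR of 0 (OR = 1):
# smaller studies have larger standard errors, so they scatter more widely
true_log_or = 0.0
ses = [0.05 + 0.45 * random.random() for _ in range(40)]
log_ors = [random.gauss(true_log_or, se) for se in ses]

# Crude illustration of publication bias: drop small studies with
# unfavourable (negative) results, hollowing out one corner of the funnel
published = [(y, se) for y, se in zip(log_ors, ses) if not (se > 0.3 and y < 0)]

fig, ax = plt.subplots()
ax.scatter([y for y, _ in published], [se for _, se in published])
ax.axvline(true_log_or, linestyle=":")
ax.invert_yaxis()                      # largest studies (smallest SE) at the top
ax.set_xlabel("log odds ratio")
ax.set_ylabel("standard error of log OR")
ax.set_title("Funnel plot (asymmetry suggests publication bias)")
plt.show()
```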
What are the sources of evidence for systematic reviews?
NHS Centre for Reviews and Dissemination
NIHR Health Technology Assessment programme