Review of Evidence Flashcards
What is the hierarchy of scientific evidence?
- Meta-analysis and systematic reviews
- RCTs
- Cohort studies
- Case-control
- Cross-sectional
- Animal trials and in vitro studies
- Case reports, opinions and letters
What does a systematic review consist of?
Clearly focused question
Explicit statement about:
- Types of study
- Types of participants
- Types of interventions
- Types of outcome measures
Systematic literature search
Selection of material
Appraisal
Synthesis
A systematic review is an extremely credible source of evidence because it is EXPLICIT, TRANSPARENT, REPRODUCIBLE
What is the difference between systematic review and meta-analysis?
A systematic review is an overview of primary studies that used explicit and reproducible methods
A meta-analysis is a quantitative synthesis of the results of two or more primary studies that addressed the same hypothesis in the same way
What is the purpose of a meta-analysis?
To facilitate the synthesis of a large number of study results
To systematically collate study results
To reduce problems of interpretation due to variations in sampling
To quantify effect sizes and their uncertainty as a pooled estimate.
What are the criteria for a meta-analysis?
Meta-analysis should have a formal protocol which specifies:
Compilation of complete set of studies
Identification of common variable or category definition
Standardised data extraction
Analysis allowing for sources of variation
What is a pooled estimate odds ratio?
Odds ratios and their 95% CIs are calculated for all studies in meta-analysis
These are then combined to give a pooled estimate odds ratio using a statistical computer program
Studies are weighted according to their size and the uncertainty of their odds ratio (narrower CI = greater weight).
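As a worked illustration of that pooling step, here is a minimal Python sketch. The study odds ratios and 95% CIs are made up for illustration only; the standard errors are recovered from the CI widths on the log-odds scale and used as inverse-variance (fixed-effect) weights.

```python
import numpy as np

# Hypothetical per-study odds ratios and 95% CIs (illustration only)
odds_ratios = np.array([0.80, 0.65, 1.10, 0.72])
ci_lower    = np.array([0.55, 0.40, 0.70, 0.50])
ci_upper    = np.array([1.16, 1.05, 1.73, 1.04])

# Work on the log-odds scale: the SE is recovered from the CI width,
# since log(upper) - log(lower) = 2 * 1.96 * SE
log_or = np.log(odds_ratios)
se     = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)

# Inverse-variance weights: narrower CI -> larger weight
weights = 1 / se**2

pooled_log_or = np.sum(weights * log_or) / np.sum(weights)
pooled_se     = np.sqrt(1 / np.sum(weights))

pooled_or = np.exp(pooled_log_or)
pooled_ci = np.exp([pooled_log_or - 1.96 * pooled_se,
                    pooled_log_or + 1.96 * pooled_se])

print(f"Pooled OR = {pooled_or:.2f} (95% CI {pooled_ci[0]:.2f} to {pooled_ci[1]:.2f})")
```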
How do you interpret a forest plot?
Individual odds ratios with 95% CIs are displayed for each study
Size of square is in proportion to the weight given to the study
The diamond is the pooled estimate with the centre indicating the pooled odds ratio (dotted line) and the width representing the pooled 95% CI.
The solid line is the null hypothesis odds ratio (OR = 1).
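The plotting sketch below (Python with matplotlib, again using hypothetical study data) shows how those elements map onto a basic forest plot: one CI line and weight-sized square per study, a shaded diamond for the pooled estimate, a dotted line at the pooled OR and a solid line at the null value of 1.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical study results (illustration only)
studies = ["Study A", "Study B", "Study C", "Study D"]
or_     = np.array([0.80, 0.65, 1.10, 0.72])
lo      = np.array([0.55, 0.40, 0.70, 0.50])
hi      = np.array([1.16, 1.05, 1.73, 1.04])
weights = 1 / ((np.log(hi) - np.log(lo)) / (2 * 1.96))**2  # inverse-variance weights

# Pooled (fixed-effect) estimate for the diamond
pooled_log = np.sum(weights * np.log(or_)) / np.sum(weights)
pooled_se  = np.sqrt(1 / np.sum(weights))
p_or, p_lo, p_hi = np.exp([pooled_log,
                           pooled_log - 1.96 * pooled_se,
                           pooled_log + 1.96 * pooled_se])

fig, ax = plt.subplots()
y = np.arange(len(studies), 0, -1)

# One horizontal CI line per study, with a square sized by its weight
ax.hlines(y, lo, hi, color="black")
ax.scatter(or_, y, marker="s", s=200 * weights / weights.max(), color="black")

# Diamond: centre = pooled OR, width = pooled 95% CI
ax.fill([p_lo, p_or, p_hi, p_or], [0, 0.15, 0, -0.15], color="grey")

ax.axvline(1, color="black")              # solid line: null hypothesis (OR = 1)
ax.axvline(p_or, color="black", ls=":")   # dotted line: pooled OR
ax.set_xscale("log")
ax.set_yticks(list(y) + [0])
ax.set_yticklabels(studies + ["Pooled"])
ax.set_xlabel("Odds ratio (log scale)")
plt.show()
```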
What are the problems with meta-analysis?
Heterogeneity between studies:
- Modelling for variation (fixed vs random effects model)
- Analysing the variation (sub group analysis)
Variable quality of studies
Publication bias in selection of the studies (only studies with significant results tend to be published).
What are the two different approaches to calculating the pooled estimate odds ratio and its 95% CI?
Fixed effect model: assumes that the studies are estimating exactly the same true effect size
Random effects model: assumes that the studies are estimating similar, but not the same, true effect sizes.
How does the odds ratio and CI differ between the fixed effect and random effects modelling?
Point estimate (OR) often similar (but not always).
95% CI is often wider for random effects model than in fixed effects model.
The weighting of studies is more equal in the random effects model than in the fixed effects model, meaning smaller studies receive relatively more weight.
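One common way to fit the random effects model is the DerSimonian-Laird estimator, which adds an estimated between-study variance (tau²) to each study's own variance before weighting. The sketch below uses hypothetical log odds ratios and standard errors and prints both pooled estimates with their normalised weights, so the flatter random-effects weighting is visible.

```python
import numpy as np

# Hypothetical per-study log odds ratios and standard errors (illustration only)
log_or = np.log([0.80, 0.65, 1.10, 0.72])
se     = np.array([0.19, 0.25, 0.23, 0.19])

# Fixed-effect: weights are simply the inverse variances
w_fixed  = 1 / se**2
or_fixed = np.exp(np.sum(w_fixed * log_or) / np.sum(w_fixed))

# Random-effects (DerSimonian-Laird): estimate tau^2, the between-study
# variance, from Cochran's Q, then add it to each study's variance
mu_f = np.sum(w_fixed * log_or) / np.sum(w_fixed)
Q    = np.sum(w_fixed * (log_or - mu_f)**2)
k    = len(log_or)
tau2 = max(0.0, (Q - (k - 1)) /
           (np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)))

w_random  = 1 / (se**2 + tau2)
or_random = np.exp(np.sum(w_random * log_or) / np.sum(w_random))

# Weights are more equal under the random-effects model, so small studies
# gain relative influence and the pooled CI is usually wider
print("Fixed-effect OR:  ", round(or_fixed, 2),
      "weights:", np.round(w_fixed / w_fixed.sum(), 2))
print("Random-effects OR:", round(or_random, 2),
      "weights:", np.round(w_random / w_random.sum(), 2))
```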
How do you analyse variation between studies?
Using the random effect or fixed effects model
Random effects modelling can only account for variation but not explain it.
Sub-group analysis can help explain the heterogeneity which may provide further insight into the effect of a treatment or exposure.
- Study characteristics (year of publication, length of follow up, %male / female)
- Participant profile (males / females / adults / children)
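As a simple illustration of sub-group analysis, the sketch below pools hypothetical studies separately within each level of a participant characteristic (here an invented adults/children split); a clear difference between the sub-group estimates is one possible explanation for heterogeneity.

```python
import numpy as np

def pooled_or(log_or, se):
    """Fixed-effect pooled odds ratio from log ORs and their standard errors."""
    w  = 1 / np.asarray(se)**2
    mu = np.sum(w * np.asarray(log_or)) / np.sum(w)
    return np.exp(mu)

# Hypothetical studies tagged with a participant characteristic (illustration only)
studies = [
    {"log_or": np.log(0.70), "se": 0.20, "group": "adults"},
    {"log_or": np.log(0.75), "se": 0.22, "group": "adults"},
    {"log_or": np.log(1.05), "se": 0.25, "group": "children"},
    {"log_or": np.log(1.10), "se": 0.30, "group": "children"},
]

# Pool separately within each sub-group
for group in ("adults", "children"):
    subset = [s for s in studies if s["group"] == group]
    or_g = pooled_or([s["log_or"] for s in subset], [s["se"] for s in subset])
    print(f"{group}: pooled OR = {or_g:.2f} (n studies = {len(subset)})")
```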
What could cause variable quality of the studies?
Poor study design
Poor protocol design
Poor protocol implementation
What studies are more prone to bias and confounding?
Case control (most prone)
Cohort studies
Non-RCTs
RCTs (least)
How do you approach the variable quality of the studies?
Two different approaches:
Define a basic quality standard and only include studies satisfying these criteria e.g. Cochrane reviews used to include only RCTs.
Score each study for its quality and then:
- Incorporate the quality score into the weighting allocated to each study during modelling, so that the higher quality studies have a greater influence on the pooled estimate.
- Use sub-group analysis to explore differences e.g. high quality vs low quality studies.
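The sketch below shows one simple (and purely illustrative) way the first option could work: multiplying each study's inverse-variance weight by a quality score between 0 and 1, so lower-quality studies influence the pooled estimate less. The scores and study data are hypothetical, and real meta-analyses use more formal weighting schemes.

```python
import numpy as np

# Hypothetical log ORs, standard errors, and quality scores in [0, 1] (illustration only)
log_or  = np.log([0.80, 0.65, 1.10, 0.72])
se      = np.array([0.19, 0.25, 0.23, 0.19])
quality = np.array([0.9, 0.5, 0.4, 0.8])

# One simple illustrative scheme: scale the inverse-variance weight by quality
w_plain   = 1 / se**2
w_quality = quality / se**2

def pool(weights):
    mu = np.sum(weights * log_or) / np.sum(weights)
    return np.exp(mu)

print("Pooled OR, ignoring quality:  ", round(pool(w_plain), 2))
print("Pooled OR, quality-weighted:  ", round(pool(w_quality), 2))
```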
How do you assess the quality of the studies?
RCTs: Many scales are available…
Main components:
- Allocation methods (randomisation?)
- Blinding and outcome assessment
- Patient attrition (<10% and intention to treat)
Who assesses the quality?
- > 1 assessor
- Handling disagreements
Should assessors be blinded to results?
- Sometimes difficult