S7 L2 Introduction to systematic reviews and meta-analysis Flashcards

1
Q

Why is evidence-based healthcare important?

A
  • Healthcare services and interventions should be based on best available evidence
  • Best available evidence should be based on rigorously conducted research
  • Primary research studies, e.g. RCTs
  • Literature reviews of studies
    → Narrative reviews: implicit assumptions, opaque methodology, not reproducible → biased, subjective
    → Systematic reviews: explicit assumptions, transparent methodology, reproducible → unbiased, objective
  • Decision analysis
    → Harm and benefits
    → Cost-effectiveness
2
Q

What are systematic reviews?

A
Clearly focused question 
Explicit statements about:
- Types of study 
- Types of participants 
- Types of interventions
- Types of outcome measure 
Systematic literature search 
Selection of the materials
Appraisal 
Synthesis (possibly including a meta-analysis)
3
Q

What are the key aspects of a systematic review?

A

Credible source of evidence

Explicit, transparent and reproducible

4
Q

What is the difference between systematic review and meta-analysis?

A

Systematic review → an overview of primary studies that used explicit and reproducible methods
Meta-analysis → a quantitative synthesis of the results of two or more primary studies that addressed the same hypothesis in the same way

5
Q

What is the purpose of meta-analysis?

A
  • To facilitate the synthesis of a large number of study results
  • To systematically collate study results
  • To reduce problems of interpretation due to variation in sampling → a bigger effective sample size
  • To quantify effect sizes and their uncertainty as a pooled estimate → one CI, p value and RR/OR, which helps determine whether a given drug is actually useful when sources conflict
6
Q

How is the quality of a meta-analysis determined?

A

Formal protocol specifying:

  • Compilation of a complete set of studies
  • Identification of common variable or category definitions → compare like for like
  • Standardised data extraction
  • Analysis allowing for sources of variation
7
Q

How do you calculate a pooled odds ratio?

A
  • Odds ratios and their 95% CIs are calculated for all studies in the meta-analysis
  • These are then combined to give a pooled odds ratio using a statistical computer program (a minimal sketch follows below)
  • Studies are weighted according to their size and the uncertainty of their odds ratio (narrower CI → greater weight given to the result)
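A minimal Python sketch of the inverse-variance weighting described above, using made-up 2x2 counts rather than real trial data (in practice a package such as RevMan, Stata or R's metafor would be used):

```python
# A minimal inverse-variance (fixed-effect) pooled odds ratio.
# The 2x2 counts are made-up illustrative numbers, not real trial data.
import math

# (events_treatment, n_treatment, events_control, n_control) per study
studies = [(12, 100, 20, 100), (30, 250, 45, 250), (8, 60, 15, 60)]

log_ors, ses = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c                             # non-events in each arm
    log_ors.append(math.log((a * d) / (b * c)))       # log odds ratio
    ses.append(math.sqrt(1/a + 1/b + 1/c + 1/d))      # SE of the log OR

weights = [1 / se**2 for se in ses]                   # narrower CI -> larger weight
pooled_log_or = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled OR = {math.exp(pooled_log_or):.2f}, "
      f"95% CI {math.exp(pooled_log_or - 1.96 * pooled_se):.2f} "
      f"to {math.exp(pooled_log_or + 1.96 * pooled_se):.2f}")
```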
8
Q

What does a forest plot allow?

A

A visual representation of the odds ratio of each study and of the meta-analysis (sketched below)

  • Individual odds ratios are represented as squares, with their 95% CI lines displayed for each study
  • The size of each square is proportional to the weight given to the study
  • The diamond is the pooled estimate: its centre (dotted line) = the pooled odds ratio, and its width represents the pooled 95% CI
  • The solid line is the null-hypothesis OR (OR = 1)
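As a rough illustration of these elements, a minimal matplotlib sketch with made-up odds ratios, CIs and weights; note that a real forest plot draws the diamond with width equal to the pooled CI, which is simplified to a single marker here:

```python
# Minimal sketch of a forest plot on a log scale (hypothetical data).
import matplotlib.pyplot as plt

labels  = ["Study A", "Study B", "Study C", "Pooled"]
ors     = [0.55, 0.80, 0.62, 0.68]
ci_low  = [0.30, 0.55, 0.35, 0.52]
ci_high = [1.00, 1.15, 1.10, 0.89]
weights = [20, 55, 25]                      # % weight per study, sets square size

fig, ax = plt.subplots()
ys = list(range(len(labels), 0, -1))        # plot studies top-to-bottom
for y, orv, lo, hi, lab in zip(ys, ors, ci_low, ci_high, labels):
    ax.plot([lo, hi], [y, y], color="black")                       # 95% CI line
    if lab == "Pooled":
        ax.plot(orv, y, marker="D", markersize=12, color="black")  # pooled diamond
        ax.axvline(orv, linestyle=":", color="grey")               # dotted pooled OR
    else:
        size = 6 + weights[labels.index(lab)] / 5                  # bigger weight -> bigger square
        ax.plot(orv, y, marker="s", markersize=size, color="black")

ax.axvline(1.0, color="black")              # solid line: null hypothesis, OR = 1
ax.set_xscale("log")
ax.set_yticks(ys)
ax.set_yticklabels(labels)
ax.set_xlabel("Odds ratio (log scale)")
plt.show()
```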
9
Q

What are the problems with meta-analysis?

A
  • Heterogeneity between studies
    → Modelling for variation → fixed effect model vs random effect model
    → Analysing the variation → sub-group analysis
  • Variable quality of the studies
  • Publication bias in selection of studies
10
Q

How can heterogeneity between studies be modelled for?

A
  • Fixed effect model → assumes that the studies are estimating exactly the same true effect size (differences between studies are due only to random variation) → little/no heterogeneity
  • Random effects model → assumes that the studies are estimating similar, but not identical, true effect sizes → better when there is greater heterogeneity between studies (provided it is still relatively small); a sketch follows below
    If there is a lot of heterogeneity, a meta-analysis is not appropriate
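A hedged sketch of how the two models differ numerically, using the common DerSimonian-Laird estimate of the between-study variance (tau^2) together with Cochran's Q and I^2 as heterogeneity measures; the per-study log odds ratios and standard errors are hypothetical:

```python
# Sketch of DerSimonian-Laird random-effects pooling, with Cochran's Q and
# I^2 as heterogeneity measures (hypothetical log ORs and SEs).
import math

log_ors = [-0.60, -0.22, -0.48]
ses     = [0.33, 0.21, 0.41]

w_fixed  = [1 / se**2 for se in ses]
mu_fixed = sum(w * y for w, y in zip(w_fixed, log_ors)) / sum(w_fixed)

# Cochran's Q, then the between-study variance tau^2 (DerSimonian-Laird)
q  = sum(w * (y - mu_fixed)**2 for w, y in zip(w_fixed, log_ors))
df = len(log_ors) - 1
c  = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - df) / c)
i2   = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Random-effects weights add tau^2, so the weighting is more equal across
# studies and the pooled CI is usually wider than in the fixed-effect model
w_rand  = [1 / (se**2 + tau2) for se in ses]
mu_rand = sum(w * y for w, y in zip(w_rand, log_ors)) / sum(w_rand)
se_rand = math.sqrt(1 / sum(w_rand))

print(f"Q = {q:.2f}, I^2 = {i2:.0f}%, tau^2 = {tau2:.3f}")
print(f"Random-effects OR = {math.exp(mu_rand):.2f} "
      f"(95% CI {math.exp(mu_rand - 1.96 * se_rand):.2f} "
      f"to {math.exp(mu_rand + 1.96 * se_rand):.2f})")
```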
11
Q

What does the fixed effect model look like?

A
True effect → the single common effect that all studies are assumed to estimate (vertical solid line)
Study results → dots either side of the line
Random error → the difference between each study result and the true effect (horizontal line)
12
Q

What does a random effects model look like?

A

True mean effect → the mean effect across all studies (vertical solid line)

Each study result has its own true, trial-specific effect (vertical dotted line)

13
Q

What are the differences between the fixed effect and random effects results?

A
  • Point estimate → often similar (though not always) in the random and fixed effect models
  • 95% CI → often wider in the random effects model than in the fixed effect model
  • Weighting of the studies → more equal between studies in the random effects model than in the fixed effect model (relatively greater weight given to smaller studies)
14
Q

How can variation be analysed?

A

The random effects model accounts for heterogeneity but does not explain it
Sub-group analysis can help to explain heterogeneity, which may provide further insight into the effect of a treatment or exposure (a sketch follows below)
→ by study characteristics (e.g. year of publication, length of follow-up, % female population)
→ by participant profile, where the data are analysed by type of participant (e.g. subgroups of males, females, adults, children)
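A minimal sketch of a sub-group analysis: pool within each subgroup separately (fixed effect, inverse variance, for simplicity) and compare the subgroup estimates; the study data and subgroup labels are hypothetical:

```python
# Sketch of a sub-group analysis on hypothetical (log OR, SE, subgroup) tuples.
import math
from collections import defaultdict

studies = [(-0.60, 0.33, "adults"),   (-0.22, 0.21, "adults"),
           (-0.48, 0.41, "children"), (-0.70, 0.35, "children")]

groups = defaultdict(list)
for log_or, se, grp in studies:
    groups[grp].append((log_or, se))

for grp, rows in groups.items():
    w  = [1 / se**2 for _, se in rows]                 # inverse-variance weights
    mu = sum(wi * y for wi, (y, _) in zip(w, rows)) / sum(w)
    se_mu = math.sqrt(1 / sum(w))
    print(f"{grp}: OR {math.exp(mu):.2f} "
          f"(95% CI {math.exp(mu - 1.96 * se_mu):.2f} "
          f"to {math.exp(mu + 1.96 * se_mu):.2f})")
```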

15
Q

What are the issues with study quality in meta-analysis?

A

Variable quality of the studies
- Due to:
→ poor study design
→ poor protocol design (the way the study was to be conducted could introduce bias)
→ poor protocol implementation (the protocol was not followed)
- Some study types are more prone to bias and confounding than others:
→ randomised controlled trials
→ non-randomised controlled trials
→ cohort studies
→ case-control studies

16
Q

How can the variability in quality of studies be accounted for?

A

Two approaches are used:
1 - define a basic quality standard and only include studies satisfying these criteria
2 - score each study for its quality and then either:
→ incorporate the quality score into the weighting allocated to each study during the modelling, so that higher-quality studies have a greater influence on the pooled estimate (a sketch follows below)
→ use sub-group analyses to explore differences, e.g. high-quality studies vs low-quality studies
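One simple, illustrative way to implement the second approach is to multiply each study's inverse-variance weight by a quality score between 0 and 1; the scores and study data below are hypothetical, and this is only one of several possible weighting schemes:

```python
# Quality-weighted pooling: scale each inverse-variance weight by a quality
# score in [0, 1] so higher-quality studies influence the estimate more.
# All numbers are hypothetical.
import math

studies = [  # (log OR, SE, quality score 0-1)
    (-0.60, 0.33, 0.9),
    (-0.22, 0.21, 1.0),
    (-0.48, 0.41, 0.5),
]

weights = [quality / se**2 for _, se, quality in studies]
mu = sum(w * y for w, (y, _, _) in zip(weights, studies)) / sum(weights)
print(f"Quality-weighted pooled OR = {math.exp(mu):.2f}")
```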

17
Q

How do you assess the quality of a study?

A
For any RCT, many scales are available
Main components are:
- Allocation methods, e.g. randomisation
- Blinding and outcome assessment
- Patient attrition, e.g. intention to treat (ITT)
- Appropriate statistical analysis
18
Q

Who assesses quality?

A

More than one assessor (> 1)

An agreed method for handling disagreements

19
Q

What is publication bias?

A
  • Studies with statistically significant or ‘favourable’ results are more likely to be published than studies with non-statistically significant or ‘unfavourable’ results; this applies particularly to smaller studies
  • Publication bias leads to selection bias towards studies which show an effect
20
Q

What are the methods of identification of publication bias?

A
  • Check the meta-analysis protocol for the method of identification of studies → it should include searching for and identification of unpublished studies
  • Plot the results of the identified studies against a measure of their size (e.g. the inverse of the standard error), i.e. a funnel plot
  • Use a statistical test for publication bias (a sketch follows below) → these tend to be weak statistical tests
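One commonly used test is Egger's regression test for funnel-plot asymmetry; a minimal sketch assuming SciPy is available and using hypothetical study data:

```python
# Egger's regression test: regress the standard normal deviate (log OR / SE)
# on precision (1 / SE); an intercept far from zero suggests asymmetry.
# Study data are hypothetical; SciPy is assumed to be installed.
from scipy import stats

log_ors = [-0.60, -0.22, -0.48, -0.90, -0.05]
ses     = [0.33, 0.21, 0.41, 0.55, 0.18]

snd       = [y / se for y, se in zip(log_ors, ses)]   # standard normal deviates
precision = [1 / se for se in ses]

res = stats.linregress(precision, snd)
# Test H0: intercept = 0, using a t distribution with n - 2 degrees of freedom
t = res.intercept / res.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=len(ses) - 2)
print(f"Egger intercept = {res.intercept:.2f}, p = {p:.3f}")
```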
21
Q

What is a funnel plot?

A
  • A plot of some measure of study size (e.g. the standard error of the estimate) against a measure of effect (sketched below)
  • If there is no publication bias, the plot will form a balanced, symmetrical funnel
  • Smaller studies can be expected to vary further from the central effect size
  • Publication bias is likely to exist if there are few smaller studies with results indicating a small or negative measure of effect
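A minimal matplotlib sketch of a funnel plot with hypothetical data, plotting the log odds ratio against its standard error (y-axis inverted so the largest, most precise studies sit at the top):

```python
# Minimal funnel plot: effect (log OR) on the x-axis, standard error on the
# y-axis. All numbers are hypothetical.
import matplotlib.pyplot as plt

log_ors = [-0.60, -0.22, -0.48, -0.90, -0.05, -0.35]
ses     = [0.33, 0.21, 0.41, 0.55, 0.18, 0.28]
pooled  = -0.35                                # hypothetical pooled log OR

fig, ax = plt.subplots()
ax.scatter(log_ors, ses)
ax.axvline(pooled, linestyle="--")             # vertical line at the pooled effect
ax.invert_yaxis()                              # small SE (large studies) at the top
ax.set_xlabel("log odds ratio")
ax.set_ylabel("Standard error")
ax.set_title("Funnel plot (asymmetry suggests possible publication bias)")
plt.show()
```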