Meta-Analysis and Systematic Review Flashcards
Meta-Analysis:
- A type of systematic review that uses statistical techniques to quantitatively combine and summarize results of previous research.
- A review of the literature is a meta-analytic review only if it includes quantitative estimation of the magnitude of the effects and their uncertainty (confidence limits).
- Meta-Analysis refers to the analysis of analyses. Statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings.
- It is a rigorous alternative to the casual, narrative discussion of research studies that typifies our attempts to make sense of the rapidly expanding research literature.
- A meta-analysis is a quantitative approach for systematically combining results of previous research to arrive at conclusions about the body of research.
Quantitative:
Numbers
Systematic:
Methodical
Combining:
Putting together
Previous research:
What has already been done
Conclusion:
New knowledge
Forest Plot:
A graphical display of results from individual studies (and the pooled estimate) on a common scale
Rationale for systematic review and meta-analysis (MA):
- Information is reduced into manageable pieces for critical examination, evaluation, and synthesis.
- Various decision makers need to integrate critical pieces of available information.
- MA is an efficient scientific technique usually quicker and less costly than a new study.
- Consistency of relationships across studies can be evaluated.
- MA can help explain data inconsistencies and conflicts in data.
- MA increases the statistical power.
- MA allows increased precision in estimates of effect.
- MA is an improved reflection of reality compared to the traditional views.
1) Formulating the research question:
Good MA should begin with clearly formulated, specific research questions (hypotheses) that are important and testable.
2) Obtaining representative studies for review:
- Clear inclusion (populations, interventions, outcomes) and exclusion criteria.
- Multiple search strategies: hand-searching journals, examining the reference lists of articles, computer searches of databases, searching for unpublished studies, Dissertation Abstracts International.
3) Coding studies for important information:
- Goal is to code all study features that might influence outcomes.
- Quality of studies is assessed.
- The coding scheme and the reliability of the coding process are usually reported by the authors.
- APA publication policy is to list all studies evaluated in a meta-analysis in the published report.
4) Analyzing the data systematically:
- Abstracting effect sizes -> using one effect size per study, weighting effects prior to analysis (by the inverse of their variance), grouping studies for analysis, homogeneity testing (Q-statistic)
The inverse variance weight:
- IDEA: Effect sizes from larger studies should "count for more" than effect sizes from smaller studies.
- The original idea was to weight each effect size (ES) by its sample size.
- Hedges suggested an alternative -> weighting ESs by their inverse variance minimizes the variance of their sum (and mean), and so minimizes the standard error (SE) of the estimate.
- A smaller SE leads to narrower CIs and more powerful significance tests (see the sketch below).
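A minimal sketch of inverse-variance weighting in Python; the effect sizes and variances are made-up illustration values, not from real studies:

    import numpy as np

    # Hypothetical effect sizes and their within-study variances
    es = np.array([0.30, 0.10, 0.45, 0.25])
    var = np.array([0.04, 0.01, 0.09, 0.02])

    w = 1.0 / var                          # inverse-variance weights
    pooled = np.sum(w * es) / np.sum(w)    # weighted mean effect size
    se = np.sqrt(1.0 / np.sum(w))          # standard error of the pooled estimate
    ci_low, ci_high = pooled - 1.96 * se, pooled + 1.96 * se
    print(f"Pooled ES = {pooled:.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")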
Forest Plot:
- Graphical display of results from individual studies on a common scale.
- Each study is represented by a black square and a horizontal line. The area of the black square reflects the weight of the study in the meta-analysis.
- A logarithmic scale should be used for plotting the Relative Risk.
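A minimal matplotlib sketch of a forest plot, using hypothetical effect estimates and standard errors; the square area is scaled by each study's inverse-variance weight:

    import numpy as np
    import matplotlib.pyplot as plt

    labels = ["Study A", "Study B", "Study C", "Study D"]   # hypothetical studies
    es = np.array([0.30, 0.10, 0.45, 0.25])                 # effect estimates
    se = np.array([0.20, 0.10, 0.30, 0.14])                 # standard errors

    w = 1.0 / se**2                       # inverse-variance weights
    y = np.arange(len(es))[::-1]          # one row per study, top to bottom

    plt.errorbar(es, y, xerr=1.96 * se, fmt="none", ecolor="black")     # 95% CI lines
    plt.scatter(es, y, s=300 * w / w.max(), marker="s", color="black")  # square area ~ weight
    plt.axvline(0, linestyle="--", color="grey")                        # line of no effect
    plt.yticks(y, labels)
    plt.xlabel("Effect size")
    plt.show()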
Publication Bias:
- Statistically Significant results are more likely to be published
- Well established bias in the published literature
- Affects all forms of reviewing, not just MA.
What to do about publication bias?
- Search for and include unpublished studies
- Assess distribution of effect size for publication bias
- Graphically examine a “funnel plot”
- “Adjust” the distribution using the trim-and-fill method
Funnel Plot:
- Scatter plot of effect estimates against sample size
- Used to detect publication bias
- If no bias, expect symmetric, inverted funnel
- If bias, expect asymmetric or skewed shape
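A minimal sketch of a funnel plot, here with effects simulated without publication bias so the points scatter symmetrically around the true effect and narrow as sample size grows (all numbers are made up):

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    true_effect = 0.3
    n = rng.integers(20, 500, size=60)     # hypothetical sample sizes
    se = 1.0 / np.sqrt(n)                  # precision improves with sample size
    es = rng.normal(true_effect, se)       # simulated study effect estimates

    plt.scatter(es, n, s=12, color="black")
    plt.axvline(true_effect, linestyle="--", color="grey")  # true / pooled effect
    plt.xlabel("Effect estimate")
    plt.ylabel("Sample size")
    plt.title("Funnel plot (no publication bias)")
    plt.show()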
Statistical approaches to quantitative meta-analysis:
1) Weighted-sum - depending on homogeneity testing -> Fixed effect model, random effect model, cumulative MA
2) Meta-Regression model -> Meta-regression technique, weighted linear regression.
Fixed effect model:
- All the observed differences between the studies are due to chance.
- Observed study effect = Fixed effect + (random) error
- Basic assumption that there is one true value of the effect.
Random effect model:
- Assumes a different underlying effect for each study.
- Leads to relatively more weight being given to smaller studies and to wider confidence intervals than the fixed effects model.
- Use of this model has been advocated if there is heterogeneity between study results.
The logic of random effects model:
- The fixed effects model assumes that all of the variability between effect sizes is due to sampling error.
- In other words: instability in an effect size is due simply to subject-level "noise".
- The random effects model assumes that the variability between effect sizes is due to sampling error plus variability in the population of effects.
- In other words: instability in an effect size is due to subject-level "noise" plus true, unmeasured differences across studies (see the sketch below).
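A minimal sketch of random-effects pooling using the DerSimonian-Laird estimate of the between-study variance tau^2 (illustrative numbers only):

    import numpy as np

    es = np.array([0.30, 0.10, 0.45, 0.25])   # hypothetical effect sizes
    var = np.array([0.04, 0.01, 0.09, 0.02])  # within-study variances

    w = 1.0 / var
    fixed = np.sum(w * es) / np.sum(w)        # fixed-effect pooled estimate
    q = np.sum(w * (es - fixed) ** 2)         # Cochran's Q
    df = len(es) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # DerSimonian-Laird tau^2

    w_re = 1.0 / (var + tau2)                 # random-effects weights
    pooled_re = np.sum(w_re * es) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    print(f"tau^2 = {tau2:.3f}, random-effects ES = {pooled_re:.3f} (SE {se_re:.3f})")

Because tau^2 is added to every study's variance, small studies lose relatively less weight than under the fixed-effect model, which is why the random-effects model gives them relatively more weight and produces wider confidence intervals.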
Statistical heterogeneity:
- When results vary too much across studies, pooling them is likely to be misleading, since the studies might actually be measuring different effects.
- Heterogeneity across studies means that the estimates from individual studies have different magnitudes or even different directions
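A minimal sketch of a heterogeneity test: Cochran's Q with its chi-square p-value, plus the I^2 statistic (the percentage of variability in effect sizes beyond what chance alone would produce); the effect sizes and variances are assumed for illustration:

    import numpy as np
    from scipy import stats

    es = np.array([0.30, 0.10, 0.45, 0.25])
    var = np.array([0.04, 0.01, 0.09, 0.02])

    w = 1.0 / var
    pooled = np.sum(w * es) / np.sum(w)
    q = np.sum(w * (es - pooled) ** 2)     # Cochran's Q
    df = len(es) - 1
    p_value = stats.chi2.sf(q, df)         # Q ~ chi-square(df) under homogeneity
    i2 = max(0.0, (q - df) / q) * 100      # I^2 as a percentage
    print(f"Q = {q:.2f}, p = {p_value:.3f}, I^2 = {i2:.1f}%")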
Source of heterogeneity:
- Results of studies of similar interventions usually differ to some degree.
- Differences may be due to: inadequate sample size, different study designs, different treatment protocols, different patient follow-up, different reporting, different patient response.
Meta-Regression:
- Can be either a linear or a logistic regression model.
- Predictors in the regression are at the study level and might include factors such as the treatment protocol, characteristics of the study population (e.g. average age), or other variables describing the study setting.
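A minimal sketch of a meta-regression fit as a weighted linear regression (statsmodels WLS) with inverse-variance weights; the effect sizes, variances, and the mean-age moderator are all assumed values, and a full meta-regression would normally use a mixed-effects model:

    import numpy as np
    import statsmodels.api as sm

    es = np.array([0.30, 0.10, 0.45, 0.25, 0.20])        # hypothetical effect sizes
    var = np.array([0.04, 0.01, 0.09, 0.02, 0.03])       # their variances
    mean_age = np.array([34.0, 52.0, 41.0, 60.0, 47.0])  # study-level moderator (assumed)

    X = sm.add_constant(mean_age)                    # intercept + moderator
    fit = sm.WLS(es, X, weights=1.0 / var).fit()     # inverse-variance weighted regression
    print(fit.params)                                # intercept and slope for mean age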
Sensitivity analysis:
- How sensitive the results of the MA are to the inclusion of studies of differing size, quality and other specific methodological differences.
- Sensitivity analysis can involve -> repeating the analysis on subsets of the original data, determining how any one study (or group of similar studies) might influence the overall summary statistic (see the leave-one-out sketch below).
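A minimal sketch of a leave-one-out sensitivity analysis, recomputing the fixed-effect pooled estimate with each (hypothetical) study removed in turn:

    import numpy as np

    es = np.array([0.30, 0.10, 0.45, 0.25])
    var = np.array([0.04, 0.01, 0.09, 0.02])

    def pooled(es, var):
        w = 1.0 / var
        return np.sum(w * es) / np.sum(w)

    print(f"All studies: {pooled(es, var):.3f}")
    for i in range(len(es)):
        keep = np.arange(len(es)) != i       # drop study i
        print(f"Without study {i + 1}: {pooled(es[keep], var[keep]):.3f}")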
Critics of Meta-analysis:
- Biases in sampling of studies -> publication bias, not enough data published in papers.
- Not applicable for testing multivariate effects.
- "Apples and oranges" criticism (combining studies that are too dissimilar to pool meaningfully).
What to look for in a quality MA:
- Explicit inclusion and exclusion criteria
- Inclusion and exclusion criteria well justified
- Not restricted to published studies
- Search strategy well explicated
- Search includes multiple sources (databases, hand-searches, contact with authors, etc)
- Used a detailed coding protocol
- Assessed coder reliability (e.g. double-coding)
- Maintained statistical independence in the analysis of effect sizes
- Used proper MA methods (e.g. inverse variance weighting)
- Tested for heterogeneity in effect sizes
- Reported both fixed- and random-effects results
- Used proper methods of testing moderator effects (analog to the ANOVA, meta-regression)
- Assessed for publication bias
- If methodologically "flawed" studies were included, performed a sensitivity analysis on the results