w6 Flashcards
What is a meta-analysis and why do we use it?
It's a statistical procedure for combining the results of multiple studies on the same topic.
* Gives a reliable overview of the literature and provides insight into which variables can explain differences in effect sizes between studies (moderators)
* Important for clinical practice, because clinicians often have no time to read widely (and often no access to the literature)
* Very popular and highly influential
- Narrative review vs. meta-analysis
Narrative review:
- Focuses on p-values in the original studies, while a meta-analysis focuses on effect sizes
- Unable to deal with inconsistent results (reviewer bias), while in a meta-analysis all results are included → inconsistent results reduce the overall effect (non-significance says nothing about effect size)
- Does not take the reliability of studies into account, while in a meta-analysis studies are weighted by reliability
- Often covers only published articles (publication bias), while a meta-analysis uses an extensive literature search, often also including unpublished studies (but not always), and tests for publication bias
- How does a meta-analysis work?
- Meta-regression (moderators)
A statistical technique that allows researchers to assess how various factors (moderators) influence the size and direction of the observed effects across studies.
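To make this concrete, below is a minimal Python sketch of a meta-regression run as inverse-variance weighted least squares; the effect sizes, within-study variances, and the moderator year are made-up illustration data. A full random-effects meta-regression would additionally estimate the between-study variance; this simplified version weights by within-study variance only.

```python
# Minimal meta-regression sketch (illustrative data, not from any real study).
# Effect sizes are regressed on a moderator using inverse-variance weights.
import numpy as np
import statsmodels.api as sm

effect = np.array([0.30, 0.45, 0.10, 0.60, 0.25])      # observed effect sizes
var_within = np.array([0.02, 0.05, 0.01, 0.08, 0.03])  # within-study variances (SE^2)
year = np.array([2001, 2005, 2010, 2015, 2020])        # hypothetical moderator

X = sm.add_constant(year)                       # intercept + moderator
wls = sm.WLS(effect, X, weights=1 / var_within).fit()
print(wls.params)    # slope: how the moderator relates to effect size
print(wls.pvalues)
```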
- Limitations of meta-analyses
Risk of Bias: Many trials included in Cipriani’s meta-analysis had high or unclear risks of bias related to randomization, allocation concealment, and blinding. These biases likely inflated the reported efficacy of antidepressants, especially for subjective outcomes like depression rating scales (Literature 6_Munkholm e…).
Placebo Run-In Design: The use of a placebo run-in design (where participants receive a placebo before randomization and placebo responders are excluded) can distort results by artificially inflating the apparent efficacy of antidepressants. Trials that employed this design showed larger effect sizes compared to those that did not (Literature 6_Munkholm e…).
Publication Bias: There is substantial publication bias in antidepressant trials, with published studies showing higher efficacy than unpublished ones. Cipriani et al. did not adequately account for this bias, leading to an overestimation of antidepressants’ benefits (Literature 6_Munkholm e…).
Short Trial Duration: Most trials had short follow-up periods (less than 12 weeks), which does not reflect the long-term use of antidepressants in clinical practice, where patients often take them for years. This short duration limits the relevance of findings for long-term efficacy and side effects (Literature 6_Munkholm e…).
Selective Outcome Reporting: Many trials selectively reported outcomes, which could further distort efficacy estimates. Trials that did not report certain outcomes should have been rated as higher risk of bias, but Cipriani et al. did not consistently apply this standard (Literature 6_Munkholm e…).
Clinically Irrelevant Efficacy Outcomes: The meta-analysis focused on response rates, a threshold-based outcome (≥50% improvement) that may not be clinically meaningful. Such dichotomous measures can exaggerate the perceived benefit of antidepressants compared to continuous measures like mean differences in depression scores (Literature 6_Munkholm e…).
Lack of Patient-Relevant Outcomes: The analysis primarily relied on symptom scales rather than outcomes that might be more relevant to patients, such as quality of life or functional improvement. These outcomes were either not reported or under-represented in the data used by Cipriani et al. (Literature 6_Munkholm e…).
Conducting a meta-analysis
- Formulate research question
- Determine eligibility criteria (in- and exclusion criteria)
- Search for studies
- Data extraction and coding
a) Characteristics of the studies
b) Data needed to pool studies
c) Study quality
- Data-analysis
a) Calculating mean effect (= pooling)
b) Examine differences between studies
c) Publication bias
- Write manuscript
Advantages of meta-analysis
- p-values irrelevant → effect sizes
- all results are included → inconsistent results reduce overall effect (non-significance says nothing about effect size)
- studies weighted by reliability
- extensive literature search, often also unpublished studies (but not always); tests for publication bias
EFFECT SIZES
Effect size is the main statistic in meta-analysis, representing the magnitude of the treatment effect or the strength of the relationship between variables across studies.
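As a worked illustration with made-up numbers: a common effect size for group differences is Cohen's d (the standardized mean difference). The sketch below also computes its approximate standard error, which is what the weighting discussed next is based on.

```python
# Cohen's d from two group summaries (illustrative numbers).
import math

m1, m2 = 25.0, 20.0      # group means (e.g., treatment vs. control)
s1, s2 = 8.0, 7.0        # group standard deviations
n1, n2 = 50, 50          # group sizes

# Pooled standard deviation
s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
d = (m1 - m2) / s_pooled

# Approximate standard error of d (larger N -> smaller SE -> more weight)
se_d = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
print(f"d = {d:.2f}, SE = {se_d:.2f}")
```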
How does a meta-analysis take reliability into account?
It weights studies: the more reliable the study, the more weight it gets (smaller SE and/or larger N); see the sketch after this list.
* 95% confidence interval (CI), based on SE (1.96·SE on either side)
* SE is highly dependent on sample size
* larger N (more precise estimate of the true effect) → smaller SE → smaller CI → more weight
* direct relation between CI and p-value: significant if the "no-effect value" is not in the CI
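A minimal sketch of that CI logic, with made-up numbers (the no-effect value is 0 for a difference, 1 for a ratio):

```python
# 95% CI from an effect estimate and its standard error (illustrative numbers).
effect, se = 0.40, 0.15
lo, hi = effect - 1.96 * se, effect + 1.96 * se
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")
print("significant at alpha = .05:", not (lo <= 0 <= hi))  # 0 = no-effect value for a difference
```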
Fixed-effect model
- Studies from 1 single, homogeneous population, so 1 true effect size
- Observed effect sizes are estimators of the same true effect size
- Differences in observed effect sizes only due to sampling error
- 1 error term: sampling error (within-study error); no between-study error term
Why would we not use a fixed-effect analysis?
- Because not all populations have the same true effect size; for example, for height, different populations have different true effect sizes
Random-effects model
- Studies from “universe” of populations
- Subpopulations have different true effect sizes
- Distribution of true effect sizes
- Studies are random samples from that distribution
- 2 error terms: sampling error (error within a study) and variation in true effect sizes (spread between studies)
How to calculate the weighted mean/summary effect?
For both fixed-effect (FE) and random-effects (RE) models (a code sketch follows this list):
- first determine the weight of each study k
- Weight determined differently for FE and RE
- Fixed-effect: weight of study k = 1 / variance_within-study
- variance_within-study = SE² (squared standard error; a numerical representation of the sampling error)
- Random-effects:
- weight of study k = 1 / (variance_within-study + variance_between-studies)
- variance_within-study: reliability of the estimate of the true effect size in the subpopulation by study k (varies per study)
- variance_between-studies: indication of the variation in true effect sizes between studies (one value for all studies)
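A minimal sketch of both pooling rules on made-up data. The between-study variance is estimated here with the common DerSimonian-Laird moment estimator, which is one of several options:

```python
# Inverse-variance pooling: fixed-effect vs. random-effects (illustrative data).
import numpy as np

effect = np.array([0.30, 0.45, 0.10, 0.60, 0.25])  # observed effect sizes
var_w = np.array([0.02, 0.05, 0.01, 0.08, 0.03])   # within-study variances (SE^2)

# Fixed-effect: weight = 1 / within-study variance
w_fe = 1 / var_w
mean_fe = np.sum(w_fe * effect) / np.sum(w_fe)

# DerSimonian-Laird estimate of the between-study variance tau^2
q = np.sum(w_fe * (effect - mean_fe) ** 2)         # Q statistic
df = len(effect) - 1
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (q - df) / c)

# Random-effects: weight = 1 / (within-study + between-study variance)
w_re = 1 / (var_w + tau2)
mean_re = np.sum(w_re * effect) / np.sum(w_re)

print(f"FE summary effect: {mean_fe:.3f}")
print(f"RE summary effect: {mean_re:.3f} (tau^2 = {tau2:.3f})")
```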
Can we explain these differences in effect sizes?
Yes, with meta-regression (see above).
Why is there less influence of single extreme studies on random-effects than on fixed-effect analysis?
In a RE meta-analysis:
* each study provides information on its own subpopulation
* this also holds for small studies
* small studies get relatively more weight, and large studies relatively less weight
In a random-effects meta-analysis, there is less influence of single studies because the model assumes that the true effect size varies between studies. This means that the weight assigned to each study depends not only on its sample size (as in the fixed-effect model) but also on the between-study variance (heterogeneity). As a result, smaller studies are given relatively more weight, and larger studies are given relatively less weight compared to a fixed-effect model, which heavily favors large studies.
In contrast, the fixed-effect model assumes all studies estimate the same effect size, so it gives much more weight to larger studies with smaller variances, allowing them to exert more influence over the overall effect size estimate.
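A tiny numeric illustration of this weight shift, using assumed variances: one large study's share of the total weight shrinks once a between-study variance enters the denominator.

```python
# Relative weight of one large vs. four small studies (illustrative variances).
import numpy as np

var_w = np.array([0.005, 0.08, 0.08, 0.08, 0.08])  # study 0 is large (tiny variance)
tau2 = 0.04                                        # assumed between-study variance

w_fe = 1 / var_w
w_re = 1 / (var_w + tau2)

print("FE share of large study:", round(w_fe[0] / w_fe.sum(), 2))  # ~0.80
print("RE share of large study:", round(w_re[0] / w_re.sum(), 2))  # ~0.40
```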