Week 6 Flashcards

1
Q

What is a meta-analysis and why do we use it?

A

It is a statistical procedure for combining the results of multiple studies on the same topic
* Provides a reliable overview of the literature and insight into which variables can explain differences in effect sizes between studies (moderators)
* Important for clinical practice, because clinicians often lack time to read the full literature (and often lack access to it)
* Very popular and highly influential

2
Q
  • Narrative review vs. meta-analysis
A

Narrative review:
- Focuses on p-values in original studies; meta-analysis focuses on effect sizes
- Cannot deal with inconsistent results (reviewer bias), while in meta-analysis all results are included → inconsistent results reduce the overall effect (non-significance says nothing about effect size)
- Does not take the reliability of studies into account; in meta-analysis, studies are weighted by reliability
- Often covers only published articles (publication bias), while meta-analysis uses an extensive literature search, often including unpublished studies (but not always), and tests for publication bias

3
Q
  • How does a meta-analysis work?
A
Effect sizes from the individual studies are extracted and pooled into a weighted mean (the summary effect), with more reliable studies receiving more weight; differences between studies can then be examined (e.g., with meta-regression) and publication bias can be tested.
3
Q
  • Meta-regression (moderators)
A

A statistical technique that allows researchers to assess how various factors (moderators) influence the size and direction of the observed effects across studies.

4
Q
  • Limitations of meta-analyses
A

Risk of Bias: Many trials included in Cipriani’s meta-analysis had high or unclear risks of bias related to randomization, allocation concealment, and blinding. These biases likely inflated the reported efficacy of antidepressants, especially for subjective outcomes like depression rating scales (Literature 6_Munkholm e…).

Placebo Run-In Design: The use of a placebo run-in design (where participants receive a placebo before randomization and placebo responders are excluded) can distort results by artificially inflating the apparent efficacy of antidepressants. Trials that employed this design showed larger effect sizes compared to those that did not (Literature 6_Munkholm e…).

Publication Bias: There is substantial publication bias in antidepressant trials, with published studies showing higher efficacy than unpublished ones. Cipriani et al. did not adequately account for this bias, leading to an overestimation of antidepressants’ benefits (Literature 6_Munkholm e…).

Short Trial Duration: Most trials had short follow-up periods (less than 12 weeks), which does not reflect the long-term use of antidepressants in clinical practice, where patients often take them for years. This short duration limits the relevance of findings for long-term efficacy and side effects (Literature 6_Munkholm e…).

Selective Outcome Reporting: Many trials selectively reported outcomes, which could further distort efficacy estimates. Trials that did not report certain outcomes should have been rated as higher risk of bias, but Cipriani et al. did not consistently apply this standard (Literature 6_Munkholm e…).

Clinically Irrelevant Efficacy Outcomes: The meta-analysis focused on response rates, a threshold-based outcome (≥50% improvement) that may not be clinically meaningful. Such dichotomous measures can exaggerate the perceived benefit of antidepressants compared to continuous measures like mean differences in depression scores (Literature 6_Munkholm e…).

Lack of Patient-Relevant Outcomes: The analysis primarily relied on symptom scales rather than outcomes that might be more relevant to patients, such as quality of life or functional improvement. These outcomes were either not reported or under-represented in the data used by Cipriani et al. (Literature 6_Munkholm e…).

5
Q

conducting a meta-analysis

A
  1. Formulate research question
  2. Determine eligibility criteria (in- and exclusion criteria)
  3. Search for studies
  4. Data extraction and coding
    a) Characteristics of the studies
    b) Data needed to pool studies
    c) Study quality
  5. Data-analysis
    a) Calculating mean effect (= pooling)
    b) Examine differences between studies
    c) Publication bias
  6. Write manuscript
6
Q

advantages of meta-analysis

A
  • p-values irrelevant → effect sizes
  • all results are included → inconsistent results reduce overall effect (non-significance says nothing about effect size)
  • studies weighted by reliability
  • extensive literature search, often also unpublished studies (but not always); tests for publication bias
7
Q

EFFECT SIZES

A

Effect size is the main statistic in meta-analysis, representing the magnitude of the relationship or difference between variables across studies.
- magnitude of the treatment effect or of the relationship between variables
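One common effect size is the standardized mean difference (Cohen's d); a minimal Python sketch with hypothetical group statistics:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Cohen's d) between a treatment
    and a control group, using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical groups: means 24 vs 20, both SD = 8, n = 50 per group
print(round(cohens_d(24, 20, 8, 8, 50, 50), 2))  # 0.5
```

Because d is expressed in standard-deviation units, it can be compared and pooled across studies that used different measurement scales.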

8
Q

how does meta-analysis take reliability into account?

A

It weights studies: the more reliable a study (smaller SE and/or larger N), the more weight it gets.
* 95% confidence interval (CI) is based on the SE (1.96 × SE on either side)
* SE is highly dependent on sample size
* larger N (more precise estimate of the true effect) → smaller SE → smaller CI → more weight
* direct relation between CI and p-value: significant if the "no-effect value" is not in the CI
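A minimal sketch of the CI logic above (the study values are hypothetical):

```python
def ci95(effect, se):
    """95% confidence interval: effect +/- 1.96 * SE."""
    return (effect - 1.96 * se, effect + 1.96 * se)

# Hypothetical study: d = 0.40 with SE = 0.15
lo, hi = ci95(0.40, 0.15)
print(round(lo, 3), round(hi, 3))  # 0.106 0.694
# 0 (the "no-effect" value) lies outside the CI, so p < .05;
# the inverse-variance weight for pooling would be 1 / SE**2 ≈ 44.4
```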

9
Q

fixed effect model

A
  • Studies from 1 single, homogeneous population, so 1 true effect size
  • Observed effect sizes are estimators of the same true effect size
  • Differences in observed effect sizes are only due to sampling error
  • 1 error term: sampling error (within-study error)
10
Q

why would we not use fixed effects analysis

A
  • because not all populations have the same true effect size; for example, for height, different populations have different true effect sizes
11
Q

random effects model

A
  • Studies from a “universe” of populations
  • Subpopulations have different true effect sizes
  • Distribution of true effect sizes
  • Studies are random samples from that distribution
  2 error terms:
  • sampling error (error within a study)
  • variation in true effect sizes (spread between studies)
12
Q

how to calculate weighted mean/summary effect

A

For both fixed-effect (FE) and random-effects (RE) models:
- first determine the weight of each study k
- the weight is determined differently for FE and RE

Fixed-effect:
- weight of study k = 1 / variance_within-study
- variance_within-study = SE² = numerical representation of the sampling error

Random-effects:
- weight of study k = 1 / (variance_within-study + variance_between-studies)
- variance_within-study: reliability of the estimate of the true effect size in the subpopulation by study k (varies per study)
- variance_between-studies: indication of the variation of true effect sizes between studies; one value for all studies
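The FE and RE weighting above can be sketched in Python. The between-study variance here uses the DerSimonian–Laird estimator, one common choice; the three studies are hypothetical:

```python
def pooled_effect(effects, variances, model="FE"):
    """Inverse-variance weighted mean effect size.
    FE: weight_k = 1 / v_k
    RE: weight_k = 1 / (v_k + tau2), where tau2 is the between-study
        variance (DerSimonian-Laird estimate)."""
    w = [1 / v for v in variances]
    if model == "RE":
        sw = sum(w)
        mean_fe = sum(wi * y for wi, y in zip(w, effects)) / sw
        q = sum(wi * (y - mean_fe) ** 2 for wi, y in zip(w, effects))
        c = sw - sum(wi * wi for wi in w) / sw
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)
        w = [1 / (v + tau2) for v in variances]
    return sum(wi * y for wi, y in zip(w, effects)) / sum(w)

# Three hypothetical studies: one large/precise, two smaller
effects = [0.2, 0.5, 0.8]
variances = [0.01, 0.04, 0.09]
print(round(pooled_effect(effects, variances, "FE"), 3))  # 0.304
print(round(pooled_effect(effects, variances, "RE"), 3))  # 0.417
```

The RE estimate sits closer to the unweighted mean (0.5) because adding the between-study variance pulls the weights toward equality, reducing the dominance of the most precise study.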

13
Q

Can we explain these differences in effect sizes?

A

Yes, with meta-regression.

14
Q

why is there less influence of single extreme studies in random-effects than in fixed-effect analysis?

A

In a RE meta-analysis:
* each study provides information on its own subpopulation
* this includes small studies
* small studies get relatively more weight, large studies relatively less weight

In a random-effects meta-analysis, there is less influence of single studies because the model assumes that the true effect size varies between studies. This means that the weight assigned to each study depends not only on its sample size (as in the fixed-effect model) but also on the between-study variance (heterogeneity). As a result, smaller studies are given relatively more weight, and larger studies are given relatively less weight compared to a fixed-effect model, which heavily favors large studies.

In contrast, the fixed-effect model assumes all studies estimate the same effect size, so it gives much more weight to larger studies with smaller variances, allowing them to exert more influence over the overall effect size estimate.

15
Q

what is the difference between meta-analysis and meta-regression?

A

Meta-analysis is a model with an intercept only:
* y(k) = b0 + error(k), where k is a study
Meta-regression has an intercept plus the effect(s) of moderator(s):
* y(k) = b0 + b1*x1(k) + b2*x2(k) + b3*x3(k) + … + error(k)

Meta-analysis: This is a statistical method that combines effect sizes from multiple studies to calculate an overall summary effect. It focuses on estimating a single pooled effect size and understanding the consistency of effects across studies.

Meta-regression: This is an extension of meta-analysis. It not only combines effect sizes but also explores how study-level characteristics (moderators) influence the variability in effect sizes across studies. Meta-regression investigates whether certain variables (e.g., sample size, treatment duration) explain differences in effect sizes between studies.

In short, meta-analysis focuses on summarizing overall effects, while meta-regression examines the sources of variability among studies.
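The moderator model can be sketched as an inverse-variance weighted least-squares fit (a fixed-effect sketch with one hypothetical moderator; a full meta-regression would also model residual between-study variance):

```python
def meta_regression(effects, variances, moderator):
    """Weighted least-squares fit of y(k) = b0 + b1 * x(k),
    with inverse-variance weights (fixed-effect sketch)."""
    w = [1 / v for v in variances]
    sw = sum(w)
    xbar = sum(wi * x for wi, x in zip(w, moderator)) / sw
    ybar = sum(wi * y for wi, y in zip(w, effects)) / sw
    b1 = (sum(wi * (x - xbar) * (y - ybar)
              for wi, x, y in zip(w, moderator, effects))
          / sum(wi * (x - xbar) ** 2 for wi, x in zip(w, moderator)))
    return ybar - b1 * xbar, b1  # (intercept b0, slope b1)

# Hypothetical: effect size grows with treatment duration (weeks)
b0, b1 = meta_regression([0.2, 0.5, 0.8], [0.04, 0.04, 0.04], [4, 8, 12])
print(round(b0, 3), round(b1, 3))  # -0.1 0.075
```

With equal study variances, as here, the fit reduces to ordinary least squares; unequal variances would pull the line toward the more precise studies.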

16
Q

summary effect

A

The overall estimate derived from the individual studies; it represents the weighted mean of the effect sizes.
FEM - all studies share the same true effect size, so the summary effect is the estimate of the common true effect size across all studies
REM - true effect sizes vary between studies, so the summary effect is the mean of the distribution of true effect sizes across studies

17
Q

what's the difference between effect size and treatment effect?

A

The treatment effect refers to the impact of an intervention (e.g., the difference between a treatment and control group), while the effect size is a standardized measure that quantifies the magnitude of this difference, allowing comparison across studies.

In short, treatment effect is the actual observed impact, and effect size is the way we express that impact in a comparable form.

18
Q

what is essential for improving precision in a study

A

Precision in meta-analysis is determined by sample size and study design.
- Larger sample sizes lead to lower variance and narrower confidence intervals, enhancing the precision of the effect size estimate. Study design also plays a role; for example, matched-pair designs typically yield more precise estimates than independent-group designs due to reduced variance in paired comparisons (Intro to Meta-Analysis).
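A minimal sketch of how CI width shrinks with sample size (the SD is hypothetical):

```python
import math

def ci_width(sd, n):
    """Width of the 95% CI around a sample mean: 2 * 1.96 * sd / sqrt(n)."""
    return 2 * 1.96 * sd / math.sqrt(n)

# Quadrupling the sample size halves the CI width (hypothetical SD = 10)
print(round(ci_width(10, 25), 2), round(ci_width(10, 100), 2))  # 7.84 3.92
```

Because precision scales with the square root of N, large gains in precision require disproportionately large samples.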