Lecture 13 (Limits of Behavior Change) Flashcards
The effectiveness of nudging: A meta-analysis of choice architecture
interventions across behavioral domains – Mertens et al. 2022
a) Outline the project and design
b) Explain the results from figure 28, and discuss
Question a)
Drawing on more than 200 studies reporting over 440 effect sizes (n = 2,148,439), the authors present a comprehensive analysis of the effectiveness of choice architecture interventions across techniques, behavioral domains, and contextual study characteristics.
Question b)
Most of the 200 studies did not find an effect size bigger than 1%. There are outliers at both ends, ranging from about -0.7% to around 4%.
This calls the use of nudges into question: does it make sense to use them at all? We have to factor in the cost. Some nudges can be implemented easily, in which case a 1% effect is good; others can be very costly due to the amount of information we would need to gather, in which case a 1% effect is bad. Furthermore, one can be critical of the studies Mertens et al. (2022) chose, but the meta-analysis still paints a picture of very limited positive effects, and even negative effects, from using nudges.
RCTs to Scale – DellaVigna and Linos 2022
Context:
Nudge interventions have quickly expanded from academic studies to larger implementation in so-called Nudge Units in governments. This provides an opportunity to compare interventions in research studies, versus at scale.
a) Outline the research question and design
b) Describe the results using figure 28. Discuss the results.
Question a)
Research question:
“How do the effects of nudges in academic research settings compare to their effects when implemented at scale in government Nudge Units?”
Design:
The researchers analyzed the average treatment effect of nudges in both samples.
They measured how nudges impacted take-up rates of government programs and services (e.g., savings plan enrollment, vaccination rates, tax compliance).
They consider five potential channels for the gap in effect sizes: statistical power, selective
publication, academic involvement, differences in trial features and in nudge features.
Question b)
They find that, on average, nudge interventions have a meaningful and statistically significant impact on the outcome of 1.4 percentage points (pp). This estimated effect is significantly smaller than in academic journal articles (8.7 pp). Using a meta-analysis model, they decompose this difference and show that the largest source of the discrepancy is selective publication in the Academic Journals sample (figure 29), exacerbated by low statistical power in that sample.
The largest factor driving this gap is publication bias in academia, where studies with small or non-significant effects are less likely to be published.
When accounting for study design, sample size, and nudge characteristics, the gap between academic and government nudges largely disappears.
This highlights the importance of large-scale real-world testing and caution in interpreting academic nudge effects as universally applicable.
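The selective-publication mechanism above can be illustrated with a small simulation (an illustrative sketch, not the authors' actual meta-analysis model; the true effect and standard error below are assumed values chosen for the example): when only statistically significant results reach journals, the average published effect overstates the true effect, and the inflation is worse when studies are underpowered.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 1.4   # assumed true effect in percentage points (pp)
SE = 2.0            # assumed standard error per study; a large SE means low power
N_STUDIES = 10_000

# Each study's estimate is the true effect plus sampling noise.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# Selective publication: only results significant at the 5% level
# (|estimate| > 1.96 * SE) make it into journals.
published = [e for e in estimates if abs(e) > 1.96 * SE]

print(f"Mean of all estimates:       {statistics.mean(estimates):.2f} pp")
print(f"Mean of published estimates: {statistics.mean(published):.2f} pp")
```

With these assumed numbers, the mean across all simulated studies sits near the true 1.4 pp, while the mean of the "published" subset is several times larger, mirroring the gap between at-scale and journal estimates that the paper attributes largely to selective publication and low power.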
Nudging: progress to date and further directions –
Beshears and Kosowsky 2020
What are the 5 important issues discussed in the paper that researchers and
practitioners should tackle to design better nudges?
1. Which types of nudges have better outcomes on behavior?
2. Use field-based and laboratory-based (or online lab) approaches as complementary methods to investigate why and in which situations nudges change outcomes.
3. Researchers should place greater emphasis on studying the extent to which nudges lead to cumulative long-run effects on outcomes. "At most 21% of the 174 articles attempt to assess the long-run effect of a nudge."
4. Researchers should put more effort into measuring the effects of nudges on non-targeted outcomes, as such unintended consequences can partially or even completely offset the intended effects of nudges on targeted outcomes.
5. Nudges often represent only one part of a multi-pronged approach to changing behavior, so researchers should increase focus on the interaction effects among nudges and traditional interventions such as financial incentives.