CURVE series (WIT/CCM) Flashcards
What does CURVE stand for?
Causes and Uncertainty: Rethinking Value in Expectation
Who is Timothy Jaurek?
A pseudonym for Bob’s co-author on the contractualism post; no other details were known to JB at the time of writing.
Contractualism says that morality is about…
…what we can justify to those affected by our actions
What does contractualism say we should maximize instead of expected value?
The relevant strength-adjusted moral claims of existing individuals that are addressed per dollar spent
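A toy formalization of this maximand (my own, not from the post), just to make the "per dollar" comparison concrete; the option names and numbers are made up:
```python
# Hedged sketch: pick the option that addresses the most strength-adjusted
# claims per dollar. Options and figures are hypothetical illustrations.
options = {
    # option: (strength-adjusted claims addressed, cost in dollars)
    "bednets": (1_000.0, 5_000.0),
    "corporate_campaign": (3_000.0, 10_000.0),
}
best = max(options, key=lambda o: options[o][0] / options[o][1])
print(best)  # corporate_campaign: 0.3 claims/$ vs 0.2 claims/$ for bednets
```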
Who is the main proponent of contractualism?
T.M. Scanlon (What We Owe to Each Other, 1998)
The expected value (EV) of an action is…
an average of the possible outcomes of that action, weighted by the probability of those outcomes occurring if the action is performed.
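A minimal sketch of the arithmetic, with illustrative numbers of my own (not from the series):
```python
# Expected value as a probability-weighted average of possible outcomes.
outcomes = [0, 100, 1000]        # possible payoffs of the action
probabilities = [0.5, 0.4, 0.1]  # probability of each outcome if the action is performed

expected_value = sum(p * v for p, v in zip(probabilities, outcomes))
print(expected_value)  # 0.5*0 + 0.4*100 + 0.1*1000 = 140.0
```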
On which considerations does helping chickens dominate helping humans or shrimps?
EV maximization, worst-case risk aversion, and difference-making risk aversion.
WLU stands for
Weighted-Linear Utility Theory
REU stands for
risk-weighted expected utility (theory)
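A hedged sketch of how a Buchak-style REU calculation works, assuming that is the formulation meant here; the risk function r and the numbers are illustrative:
```python
# Risk-weighted expected utility: start from the worst outcome, then add each
# improvement weighted by r(probability of doing at least that well).
def reu(outcomes, probabilities, r=lambda p: p**2):
    """outcomes: utilities of each possible result; r: risk function (convex = risk-averse)."""
    pairs = sorted(zip(outcomes, probabilities))  # order outcomes from worst to best
    utils = [u for u, _ in pairs]
    probs = [p for _, p in pairs]
    total = utils[0]                              # guaranteed at least the worst outcome
    for i in range(1, len(utils)):
        p_at_least_i = sum(probs[i:])             # probability of getting at least utils[i]
        total += r(p_at_least_i) * (utils[i] - utils[i - 1])
    return total

print(reu([0, 100], [0.5, 0.5]))  # 0 + r(0.5)*(100 - 0) = 25.0 with r(p) = p**2
```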
DMREU stands for
difference-making risk-weighted expected utility
EDM stands for
expected difference made
When would x-risk mitigation dominate a cage-free campaign?
If you expect the counterfactual impact of the x-risk mitigation intervention to persist for more than 10,000 years.
How many (sentience-adjusted, human-equivalent) DALYs could be averted per $1,000 spent on each of 1) AMF, 2) shrimp stunning, and 3) an ammonia intervention for shrimps?
1) 19
2) 30
3) 1500
(Source: https://docs.google.com/document/d/1CZ5S-Eayxr64z5YADYR9M3P2WTp4u2Pgb4N-ynYbbMU/edit#heading=h.kuhe2te0uxb3, p. 64)
What can Arvo’s tool do?
Calculate and graph the expected value of risk mitigation with any arbitrary value trajectory and risk profile.
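A hedged sketch of the kind of calculation such a tool might perform (not Arvo's actual code): the expected value of the future given a per-period value trajectory and a per-period extinction-risk profile, with and without a one-off risk reduction. All parameter values below are assumptions for illustration.
```python
# Expected value of the future under a value trajectory and risk profile.
def expected_future_value(values, risks):
    """values[t]: value realized in period t if we survive to it;
    risks[t]: probability of extinction during period t."""
    survival, total = 1.0, 0.0
    for v, r in zip(values, risks):
        survival *= (1.0 - r)   # probability we are still around in period t
        total += survival * v
    return total

periods = 100
values = [1.0] * periods              # flat value trajectory (assumption)
baseline = [0.01] * periods           # 1% extinction risk per period (assumption)
mitigated = [0.005] + baseline[1:]    # risk halved in the first period only

gain = expected_future_value(values, mitigated) - expected_future_value(values, baseline)
print(round(gain, 3))  # EV gained from the one-off risk reduction
```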
According to Dave, what do you have to believe to prioritize AI x-risk reduction work?
(1) significant chance some misaligned AI will be deployed and seek power, which poses an existential risk
(2) the aligned AI we develop will be sufficiently capable to solve non-AI x-risks
(3) future animal lives will not be too bad
(4) total utility is what counts
(Source: https://forum.effectivealtruism.org/s/WdL3LE5LHvTwWmyqj/p/i5cuLZH3SQJigiHMs)