“We should be more uncertain about cause prioritization based on philosophical arguments” by Rethink Priorities, Marcus_A_Davis

Summary

In this article, I argue that most of the interesting cross-cause prioritization decisions and conclusions rest on philosophical evidence that isn’t robust enough to justify high degrees of certainty that any given intervention (or class of interventions) is “best” above all others. I hold this to be true generally because such cross-cause prioritization judgments rely on relatively weak philosophical evidence. In particular, the case for high confidence in conclusions about which interventions are all-things-considered best seems to rest on particular approaches to handling normative uncertainty. The evidence for these approaches is weak, and different approaches can produce radically different recommendations, which suggests that cross-cause prioritization rankings and conclusions are fundamentally fragile and that high confidence in any single approach is unwarranted.

I think EA circles have previously underestimated how heavily cross-cause prioritization conclusions rely on philosophical evidence that isn’t robust [...]

---

Outline:

(00:14) Summary

(06:03) Cause Prioritization Is Uncertain and Some Key Philosophical Evidence for Particular Conclusions is Structurally Weak

(06:11) The decision-relevant parts of cross-cause prioritization heavily rely on philosophical conclusions

(09:26) Philosophical evidence about the interesting cause prioritization questions is generally weak

(17:35) Aggregation methods disagree

(21:27) Evidence for aggregation methods is weaker than empirical evidence of which EAs are skeptical

(24:07) Objections and Replies

(24:11) Aren't we here to do the most good? / Aren't we here to do consequentialism? / Doesn't our competitive edge come from being more consequentialist than others in the nonprofit sector?

(25:28) Can't I just use my intuitions or my priors about the right answers to these questions? I agree philosophical evidence is weak so we should just do what our intuitions say

(27:27) We can use common sense / or a non-philosophical approach and conclude which cause area(s) to support. For example, it's common sense that humanity going extinct would be really bad; so, we should work on that

(30:22) I'm an anti-realist about philosophical questions so I think that whatever I value is right, by my lights, so why should I care about any uncertainty across theories? Can't I just endorse whatever views seem best to me?

(31:52) If the evidence in philosophy is as weak as you say, this suggests there are no right answers at all and/or that potentially anything goes in philanthropy. If you can't confidently rule things out, wouldn't this imply that you can't distinguish a scam charity from a highly effective group like Against Malaria Foundation?

(34:08) I have high confidence in MEC (or some other aggregation method) and/or some more narrow set of normative theories so cause prioritization is more predictable than you are suggesting despite some uncertainty in what theories I give some credence to

(41:44) Conclusion (or well, what do I recommend?)

(44:05) Acknowledgements

The original text contained 20 footnotes which were omitted from this narration.

---

First published:
July 3rd, 2025

Source:
https://forum.effectivealtruism.org/posts/nwckstt2mJinCwjtB/we-should-be-more-uncertain-about-cause-prioritization-based

---

Narrated by TYPE III AUDIO.
