“Where I Am Donating in 2024” by MichaelDickens

Summary

It's been a while since I last put serious thought into where to donate. Well, I'm putting thought into it this year, and I'm changing my mind on some things.

I now put more priority on existential risk (especially AI risk), and less on animal welfare and global priorities research. I believe I previously gave too little consideration to x-risk for emotional reasons, and I've managed to reason myself out of those emotions.

Within x-risk:

  • AI is the most important source of risk.
  • There is a disturbingly high probability that alignment research won't solve alignment by the time superintelligent AI arrives. Policy work seems more promising.
  • Specifically, I am most optimistic about policy advocacy for government regulation to pause/slow down AI development.

In the rest of this post, I will explain:

  1. Why I prioritize x-risk over animal-focused [...]

---

Outline:

(00:04) Summary

(01:30) I don't like donating to x-risk

(03:56) Cause prioritization

(04:00) S-risk research and animal-focused longtermism

(05:52) X-risk vs. global priorities research

(07:01) Prioritization within x-risk

(08:08) AI safety technical research vs. policy

(11:36) Quantitative model on research vs. policy

(14:20) Man versus man conflicts within AI policy

(15:13) Parallel safety/capabilities vs. slowing AI

(22:56) Freedom vs. regulation

(24:24) Slow nuanced regulation vs. fast coarse regulation

(27:02) Working with vs. against AI companies

(32:49) Political diplomacy vs. advocacy

(33:38) Conflicts that aren't man vs. man but nonetheless require an answer

(33:55) Pause vs. Responsible Scaling Policy (RSP)

(35:28) Policy research vs. policy advocacy

(36:42) Advocacy directed at policy-makers vs. the general public

(37:32) Organizations

(39:36) Important disclaimers

(40:56) AI Policy Institute

(42:03) AI Safety and Governance Fund

(43:29) AI Standards Lab

(43:59) Campaign for AI Safety

(44:30) Centre for Enabling EA Learning and Research (CEEALAR)

(45:13) Center for AI Policy

(47:27) Center for AI Safety

(49:06) Center for Human-Compatible AI

(49:32) Center for Long-Term Resilience

(55:52) Center for Security and Emerging Technology (CSET)

(57:33) Centre for Long-Term Policy

(58:12) Centre for the Governance of AI

(59:07) CivAI

(01:00:05) Control AI

(01:02:08) Existential Risk Observatory

(01:03:33) Future of Life Institute (FLI)

(01:03:50) Future Society

(01:06:27) Horizon Institute for Public Service

(01:09:36) Institute for AI Policy and Strategy

(01:11:00) Lightcone Infrastructure

(01:12:30) Machine Intelligence Research Institute (MIRI)

(01:15:22) Manifund

(01:16:28) Model Evaluation and Threat Research (METR)

(01:17:45) Palisade Research

(01:19:10) PauseAI Global

(01:21:59) PauseAI US

(01:23:09) Sentinel rapid emergency response team

(01:24:52) Simon Institute for Longterm Governance

(01:25:44) Stop AI

(01:27:42) Where I'm donating

(01:28:57) Prioritization within my top five

(01:32:17) Where I'm donating (this is the section in which I actually say where I'm donating)

The original text contained 58 footnotes which were omitted from this narration.

---

First published:
November 19th, 2024

Source:
https://forum.effectivealtruism.org/posts/jAfhxWSzsw4pLypRt/where-i-am-donating-in-2024

---

Narrated by TYPE III AUDIO.

---

