#112 – Carl Shulman on the common-sense case for existential risk work and its practical implications

Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation.

But the policy of US government agencies is already to spend up to $4 million to save the life of a citizen, making the death of all Americans a $1,300,000,000,000,000 disaster.

According to Carl Shulman, research associate at Oxford University's Future of Humanity Institute, that means you don’t need any fancy philosophical arguments about the value or size of the future to justify working to reduce existential risk — it passes a mundane cost-benefit analysis whether or not you place any value on the long-term future.

Links to learn more, summary and full transcript.

The key reason to make existential risk reduction a top priority is factual, not philosophical: the risk of a disaster that kills billions of people alive today is alarmingly high, and it can be reduced at a reasonable cost. A back-of-the-envelope version of the argument runs:
• The US government is willing to pay up to $4 million (depending on the agency) to save the life of an American.
• So saving all US citizens at any given point in time would be worth $1,300 trillion.
• If you believe that the risk of human extinction over the next century is something like one in six (as Toby Ord suggests is a reasonable figure in his book The Precipice), then it would be worth the US government spending up to $2.2 trillion to reduce that risk by just 1%, in terms of American lives saved alone.
• Carl thinks it would cost a lot less than that to achieve a 1% risk reduction if the money were spent intelligently. So it easily passes a government cost-benefit test, with a very large benefit-to-cost ratio, likely over 1000:1 today. (The sketch after this list makes the arithmetic explicit.)
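
To make the numbers concrete, here is a minimal sketch of the calculation in Python. The $4 million value of a statistical life and the 1-in-6 risk figure come from the list above; the US population figure (roughly 330 million) is an assumption added for illustration, not a number from the episode.

```python
# Back-of-the-envelope cost-benefit calculation sketched in the list above.

vsl_usd = 4e6                    # value of a statistical life used by some US agencies
us_population = 330e6            # assumed population figure (not from the episode notes)

value_of_all_us_lives = vsl_usd * us_population   # ~ $1.3 quadrillion ($1,300 trillion)

extinction_risk = 1 / 6          # Toby Ord's century-level estimate in The Precipice
relative_risk_reduction = 0.01   # a 1% (relative) reduction in that risk

expected_value = value_of_all_us_lives * extinction_risk * relative_risk_reduction
print(f"Worth spending up to ${expected_value / 1e12:.1f} trillion")   # ~ $2.2 trillion
```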

This argument helped NASA get funding to scan the sky for any asteroids that might be on a collision course with Earth, and it has been directly promoted by prominent scholars like Richard Posner, Larry Summers, and Cass Sunstein.

If the case is clear enough, why hasn't it already motivated a lot more spending or regulations to limit existential risks — enough to drive down what any additional efforts would achieve?

Carl thinks that one key barrier is that infrequent disasters are rarely politically salient. Research indicates that extra money is spent on flood defences in the years immediately following a massive flood — but as memories fade, that spending quickly dries up. Of course, the annual probability of a disaster was the same the whole time; all that changed was what voters had on their minds.

Carl expects that all the reasons we didn’t adequately prepare for or respond to COVID-19 — with excess mortality over 15 million and costs well over $10 trillion — bite even harder when it comes to threats we've never faced before, such as engineered pandemics, risks from advanced artificial intelligence, and so on.

This episode is in part our way of trying to improve this situation. In this wide-ranging conversation, Carl and Rob also cover:
• A few reasons Carl isn't excited by 'strong longtermism'
• How x-risk reduction compares to GiveWell recommendations
• Solutions for asteroids, comets, supervolcanoes, nuclear war, pandemics, and climate change
• The history of bioweapons
• Whether gain-of-function research is justifiable
• Successes and failures around COVID-19
• The history of existential risk
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:34)
  • A few reasons Carl isn't excited by strong longtermism (00:03:47)
  • Longtermism isn’t necessary for wanting to reduce big x-risks (00:08:21)
  • Why we don’t adequately prepare for disasters (00:11:16)
  • International programs to stop asteroids and comets (00:18:55)
  • Costs and political incentives around COVID (00:23:52)
  • How x-risk reduction compares to GiveWell recommendations (00:34:34)
  • Solutions for asteroids, comets, and supervolcanoes (00:50:22)
  • Solutions for climate change (00:54:15)
  • Solutions for nuclear weapons (01:02:18)
  • The history of bioweapons (01:22:41)
  • Gain-of-function research (01:34:22)
  • Solutions for bioweapons and natural pandemics (01:45:31)
  • Successes and failures around COVID-19 (01:58:26)
  • Who to trust going forward (02:09:09)
  • The history of existential risk (02:15:07)
  • The most compelling risks (02:24:59)
  • False alarms about big risks in the past (02:34:22)
  • Suspicious convergence around x-risk reduction (02:49:31)
  • How hard it would be to convince governments (02:57:59)
  • Defensive epistemology (03:04:34)
  • Hinge of history debate (03:16:01)
  • Technological progress can’t keep up for long (03:21:51)
  • Strongest argument against this being a really pivotal time (03:37:29)
  • How Carl unwinds (03:45:30)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
