#112 – Carl Shulman on the common-sense case for existential risk work and its practical implications

Preventing the apocalypse may sound like an idiosyncratic activity, and it is sometimes justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation.

But US government agencies already have a policy of spending up to $4 million to save the life of a single citizen, which makes the death of all Americans a $1,300 trillion disaster.

According to Carl Shulman, research associate at Oxford University's Future of Humanity Institute, that means you don’t need any fancy philosophical arguments about the value or size of the future to justify working to reduce existential risk — it passes a mundane cost-benefit analysis whether or not you place any value on the long-term future.

Links to learn more, summary and full transcript.

The key reason to make it a top priority is factual, not philosophical. That is, the risk of a disaster that kills billions of people alive today is alarmingly high, and it can be reduced at a reasonable cost. A back-of-the-envelope version of the argument runs:
• The US government is willing to pay up to $4 million (depending on the agency) to save the life of an American.
• So saving all US citizens at any given point in time would be worth $1,300 trillion.
• If you believe that the risk of human extinction over the next century is something like one in six (as Toby Ord suggests is a reasonable figure in his book The Precipice), then it would be worth the US government spending up to $2.2 trillion to reduce that risk by just 1%, in terms of American lives saved alone.
• Carl thinks it would cost a lot less than that to achieve a 1% risk reduction if the money were spent intelligently. So it easily passes a government cost-benefit test, with a very big benefit-to-cost ratio, likely over 1000:1 today. (The arithmetic is sketched just below.)
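
As a sanity check, here is that back-of-the-envelope calculation spelled out in a few lines of Python. The population figure of 330 million and the reading of "1%" as a relative cut in the one-in-six risk are assumptions made for illustration, not figures from the episode:

```python
# Back-of-the-envelope cost-benefit check (illustrative assumptions only).
VALUE_PER_LIFE = 4e6        # upper-end US agency value of a statistical life, in $
US_POPULATION = 330e6       # assumed rough US population
EXTINCTION_RISK = 1 / 6     # Toby Ord's century-scale estimate in The Precipice
RELATIVE_REDUCTION = 0.01   # a 1% (relative) cut in that risk

value_of_all_lives = VALUE_PER_LIFE * US_POPULATION   # ~$1,300 trillion
expected_loss = EXTINCTION_RISK * value_of_all_lives  # ~$220 trillion
worth_spending = RELATIVE_REDUCTION * expected_loss   # ~$2.2 trillion

print(f"All US lives:   ${value_of_all_lives / 1e12:,.0f} trillion")
print(f"Expected loss:  ${expected_loss / 1e12:,.0f} trillion")
print(f"Worth spending: ${worth_spending / 1e12:,.1f} trillion")
```

On these numbers, even a 1% reduction in extinction risk would justify roughly $2.2 trillion of spending in American lives saved alone, which is where the 1000:1 benefit-to-cost claim gets its headroom.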

This argument helped NASA get funding to scan the sky for any asteroids that might be on a collision course with Earth, and it has been directly promoted by prominent figures like Richard Posner, Larry Summers, and Cass Sunstein.

If the case is clear enough, why hasn't it already motivated a lot more spending or regulations to limit existential risks — enough to drive down what any additional efforts would achieve?

Carl thinks that one key barrier is that infrequent disasters are rarely politically salient. Research indicates that extra money is spent on flood defences in the years immediately following a massive flood, but as memories fade, that spending quickly dries up. Of course, the annual probability of a disaster was the same the whole time; all that changed was what voters had on their minds.

Carl expects that all the reasons we didn’t adequately prepare for or respond to COVID-19 — with excess mortality over 15 million and costs well over $10 trillion — bite even harder when it comes to threats we've never faced before, such as engineered pandemics, risks from advanced artificial intelligence, and so on.

Today’s episode is in part our way of trying to improve this situation. In today’s wide-ranging conversation, Carl and Rob also cover:
• A few reasons Carl isn't excited by 'strong longtermism'
• How x-risk reduction compares to GiveWell recommendations
• Solutions for asteroids, comets, supervolcanoes, nuclear war, pandemics, and climate change
• The history of bioweapons
• Whether gain-of-function research is justifiable
• Successes and failures around COVID-19
• The history of existential risk
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:34)
  • A few reasons Carl isn't excited by strong longtermism (00:03:47)
  • Longtermism isn’t necessary for wanting to reduce big x-risks (00:08:21)
  • Why we don’t adequately prepare for disasters (00:11:16)
  • International programs to stop asteroids and comets (00:18:55)
  • Costs and political incentives around COVID (00:23:52)
  • How x-risk reduction compares to GiveWell recommendations (00:34:34)
  • Solutions for asteroids, comets, and supervolcanoes (00:50:22)
  • Solutions for climate change (00:54:15)
  • Solutions for nuclear weapons (01:02:18)
  • The history of bioweapons (01:22:41)
  • Gain-of-function research (01:34:22)
  • Solutions for bioweapons and natural pandemics (01:45:31)
  • Successes and failures around COVID-19 (01:58:26)
  • Who to trust going forward (02:09:09)
  • The history of existential risk (02:15:07)
  • The most compelling risks (02:24:59)
  • False alarms about big risks in the past (02:34:22)
  • Suspicious convergence around x-risk reduction (02:49:31)
  • How hard it would be to convince governments (02:57:59)
  • Defensive epistemology (03:04:34)
  • Hinge of history debate (03:16:01)
  • Technological progress can’t keep up for long (03:21:51)
  • Strongest argument against this being a really pivotal time (03:37:29)
  • How Carl unwinds (03:45:30)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
