Rebuilding after apocalypse: What 13 experts say about bouncing back

What happens when civilisation faces its greatest tests?

This compilation brings together insights from researchers, defence experts, philosophers, and policymakers on humanity’s ability to survive and recover from catastrophic events. From nuclear winter and electromagnetic pulses to pandemics and climate disasters, we explore both the threats that could bring down modern civilisation and the practical solutions that could help us bounce back.

Learn more and see the full transcript: https://80k.info/cr25

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:01:16)
  • Zach Weinersmith on how settling space won’t help with threats to civilisation anytime soon (unless AI gets crazy good) (00:03:12)
  • Luisa Rodriguez on what the world might look like after a global catastrophe (00:11:42)
  • Dave Denkenberger on the catastrophes that could cause global starvation (00:22:29)
  • Lewis Dartnell on how we could rediscover essential information if the worst happened (00:34:36)
  • Andy Weber on how people in US defence circles think about nuclear winter (00:39:24)
  • Toby Ord on risks to our atmosphere and whether climate change could really threaten civilisation (00:42:34)
  • Mark Lynas on how likely it is that climate change leads to civilisational collapse (00:54:27)
  • Lewis Dartnell on how we could recover without much coal or oil (01:02:17)
  • Kevin Esvelt on people who want to bring down civilisation — and how AI could help them succeed (01:08:41)
  • Toby Ord on whether rogue AI really could wipe us all out (01:19:50)
  • Joan Rohlfing on why we need to worry about more than just nuclear winter (01:25:06)
  • Annie Jacobsen on the effects of firestorms, rings of annihilation, and electromagnetic pulses from nuclear blasts (01:31:25)
  • Dave Denkenberger on disruptions to electricity and communications (01:44:43)
  • Luisa Rodriguez on how we might lose critical knowledge (01:53:01)
  • Kevin Esvelt on the pandemic scenarios that could bring down civilisation (01:57:32)
  • Andy Weber on tech to help with pandemics (02:15:45)
  • Christian Ruhl on why we need the equivalents of seatbelts and airbags to prevent nuclear war from threatening civilisation (02:24:54)
  • Mark Lynas on whether wide-scale famine would lead to civilisational collapse (02:37:58)
  • Dave Denkenberger on low-cost, low-tech solutions to make sure everyone is fed no matter what (02:49:02)
  • Athena Aktipis on whether society would go all Mad Max in the apocalypse (02:59:57)
  • Luisa Rodriguez on why she’s optimistic survivors wouldn’t turn on one another (03:08:02)
  • David Denkenberger on how resilient foods research overlaps with space technologies (03:16:08)
  • Zach Weinersmith on what we’d practically need to do to save a pocket of humanity in space (03:18:57)
  • Lewis Dartnell on changes we could make today to make us more resilient to potential catastrophes (03:40:45)
  • Christian Ruhl on thoughtful philanthropy to reduce the impact of catastrophes (03:46:40)
  • Toby Ord on whether civilisation could rebuild from a small surviving population (03:55:21)
  • Luisa Rodriguez on how fast populations might rebound (04:00:07)
  • David Denkenberger on the odds civilisation recovers even without much preparation (04:02:13)
  • Athena Aktipis on the best ways to prepare for a catastrophe, and keeping it fun (04:04:15)
  • Will MacAskill on the virtues of the potato (04:19:43)
  • Luisa’s outro (04:25:37)

Tell us what you thought! https://forms.gle/T2PHNQjwGj2dyCqV9

Content editing: Katy Moore and Milo McGuire
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore
