Rebuilding after apocalypse: What 13 experts say about bouncing back

What happens when civilisation faces its greatest tests?

This compilation brings together insights from researchers, defence experts, philosophers, and policymakers on humanity’s ability to survive and recover from catastrophic events. From nuclear winter and electromagnetic pulses to pandemics and climate disasters, we explore both the threats that could bring down modern civilisation and the practical solutions that could help us bounce back.

Learn more and see the full transcript: https://80k.info/cr25

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:01:16)
  • Zach Weinersmith on how settling space won’t help with threats to civilisation anytime soon (unless AI gets crazy good) (00:03:12)
  • Luisa Rodriguez on what the world might look like after a global catastrophe (00:11:42)
  • Dave Denkenberger on the catastrophes that could cause global starvation (00:22:29)
  • Lewis Dartnell on how we could rediscover essential information if the worst happened (00:34:36)
  • Andy Weber on how people in US defence circles think about nuclear winter (00:39:24)
  • Toby Ord on risks to our atmosphere and whether climate change could really threaten civilisation (00:42:34)
  • Mark Lynas on how likely it is that climate change leads to civilisational collapse (00:54:27)
  • Lewis Dartnell on how we could recover without much coal or oil (01:02:17)
  • Kevin Esvelt on people who want to bring down civilisation — and how AI could help them succeed (01:08:41)
  • Toby Ord on whether rogue AI really could wipe us all out (01:19:50)
  • Joan Rohlfing on why we need to worry about more than just nuclear winter (01:25:06)
  • Annie Jacobsen on the effects of firestorms, rings of annihilation, and electromagnetic pulses from nuclear blasts (01:31:25)
  • Dave Denkenberger on disruptions to electricity and communications (01:44:43)
  • Luisa Rodriguez on how we might lose critical knowledge (01:53:01)
  • Kevin Esvelt on the pandemic scenarios that could bring down civilisation (01:57:32)
  • Andy Weber on tech to help with pandemics (02:15:45)
  • Christian Ruhl on why we need the equivalents of seatbelts and airbags to prevent nuclear war from threatening civilisation (02:24:54)
  • Mark Lynas on whether wide-scale famine would lead to civilisational collapse (02:37:58)
  • Dave Denkenberger on low-cost, low-tech solutions to make sure everyone is fed no matter what (02:49:02)
  • Athena Aktipis on whether society would go all Mad Max in the apocalypse (02:59:57)
  • Luisa Rodriguez on why she’s optimistic survivors wouldn’t turn on one another (03:08:02)
  • Dave Denkenberger on how resilient foods research overlaps with space technologies (03:16:08)
  • Zach Weinersmith on what we’d practically need to do to save a pocket of humanity in space (03:18:57)
  • Lewis Dartnell on changes we could make today to make us more resilient to potential catastrophes (03:40:45)
  • Christian Ruhl on thoughtful philanthropy to reduce the impact of catastrophes (03:46:40)
  • Toby Ord on whether civilisation could rebuild from a small surviving population (03:55:21)
  • Luisa Rodriguez on how fast populations might rebound (04:00:07)
  • Dave Denkenberger on the odds civilisation recovers even without much preparation (04:02:13)
  • Athena Aktipis on the best ways to prepare for a catastrophe, and keeping it fun (04:04:15)
  • Will MacAskill on the virtues of the potato (04:19:43)
  • Luisa’s outro (04:25:37)

Tell us what you thought! https://forms.gle/T2PHNQjwGj2dyCqV9

Content editing: Katy Moore and Milo McGuire
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore

Episodes (326)

#163 – Toby Ord on the perils of maximising the good that you do

Effective altruism is associated with the slogan "do the most good." On one level, this has to be unobjectionable: What could be bad about helping people more and more? But in today's interview, Toby O...

8 Sep 2023 · 3h 7min

The 80,000 Hours Career Guide (2023)

An audio version of the 2023 80,000 Hours career guide, also available on our website, on Amazon, and on Audible. If you know someone who might find our career guide helpful, you can get a free copy se...

4 Sep 2023 · 4h 41min

#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI

Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world's largest supercomputers to train a large language model on 10–100x the compute used to...

1 Sep 2023 · 59min

#161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite

"Do you remember seeing these photographs of generally women sitting in front of these huge panels and connecting calls, plugging different calls between different numbers? The automated version of th...

23 Aug 2023 · 3h 30min

#160 – Hannah Ritchie on why it makes sense to be optimistic about the environment

"There's no money to invest in education elsewhere, so they almost get trapped in the cycle where they don't get a lot from crop production, but everyone in the family has to work there to just stay a...

14 Aug 2023 · 2h 36min

#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less

In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a ...

7 Aug 2023 · 2h 51min

We now offer shorter 'interview highlights' episodes

Over on our other feed, 80k After Hours, you can now find 20–30 minute highlights episodes of our 80,000 Hours Podcast interviews. These aren’t necessarily the most important parts of the interview, a...

5 Aug 2023 · 6min

#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk

Back in 2007, Holden Karnofsky cofounded GiveWell, where he sought out the charities that most cost-effectively helped save lives. He then cofounded Open Philanthropy, where he oversaw a team making b...

31 Jul 2023 · 3h 13min
