Rebuilding after apocalypse: What 13 experts say about bouncing back

What happens when civilisation faces its greatest tests?

This compilation brings together insights from researchers, defence experts, philosophers, and policymakers on humanity’s ability to survive and recover from catastrophic events. From nuclear winter and electromagnetic pulses to pandemics and climate disasters, we explore both the threats that could bring down modern civilisation and the practical solutions that could help us bounce back.

Learn more and see the full transcript: https://80k.info/cr25

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:01:16)
  • Zach Weinersmith on how settling space won’t help with threats to civilisation anytime soon (unless AI gets crazy good) (00:03:12)
  • Luisa Rodriguez on what the world might look like after a global catastrophe (00:11:42)
  • Dave Denkenberger on the catastrophes that could cause global starvation (00:22:29)
  • Lewis Dartnell on how we could rediscover essential information if the worst happened (00:34:36)
  • Andy Weber on how people in US defence circles think about nuclear winter (00:39:24)
  • Toby Ord on risks to our atmosphere and whether climate change could really threaten civilisation (00:42:34)
  • Mark Lynas on how likely it is that climate change leads to civilisational collapse (00:54:27)
  • Lewis Dartnell on how we could recover without much coal or oil (01:02:17)
  • Kevin Esvelt on people who want to bring down civilisation — and how AI could help them succeed (01:08:41)
  • Toby Ord on whether rogue AI really could wipe us all out (01:19:50)
  • Joan Rohlfing on why we need to worry about more than just nuclear winter (01:25:06)
  • Annie Jacobsen on the effects of firestorms, rings of annihilation, and electromagnetic pulses from nuclear blasts (01:31:25)
  • Dave Denkenberger on disruptions to electricity and communications (01:44:43)
  • Luisa Rodriguez on how we might lose critical knowledge (01:53:01)
  • Kevin Esvelt on the pandemic scenarios that could bring down civilisation (01:57:32)
  • Andy Weber on tech to help with pandemics (02:15:45)
  • Christian Ruhl on why we need the equivalents of seatbelts and airbags to prevent nuclear war from threatening civilisation (02:24:54)
  • Mark Lynas on whether wide-scale famine would lead to civilisational collapse (02:37:58)
  • Dave Denkenberger on low-cost, low-tech solutions to make sure everyone is fed no matter what (02:49:02)
  • Athena Aktipis on whether society would go all Mad Max in the apocalypse (02:59:57)
  • Luisa Rodriguez on why she’s optimistic survivors wouldn’t turn on one another (03:08:02)
  • David Denkenberger on how resilient foods research overlaps with space technologies (03:16:08)
  • Zach Weinersmith on what we’d practically need to do to save a pocket of humanity in space (03:18:57)
  • Lewis Dartnell on changes we could make today to make us more resilient to potential catastrophes (03:40:45)
  • Christian Ruhl on thoughtful philanthropy to reduce the impact of catastrophes (03:46:40)
  • Toby Ord on whether civilisation could rebuild from a small surviving population (03:55:21)
  • Luisa Rodriguez on how fast populations might rebound (04:00:07)
  • David Denkenberger on the odds civilisation recovers even without much preparation (04:02:13)
  • Athena Aktipis on the best ways to prepare for a catastrophe, and keeping it fun (04:04:15)
  • Will MacAskill on the virtues of the potato (04:19:43)
  • Luisa’s outro (04:25:37)

Tell us what you thought! https://forms.gle/T2PHNQjwGj2dyCqV9

Content editing: Katy Moore and Milo McGuire
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore

Episodes (332)

#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI

Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world's largest supercomputers to train a large language model on 10–100x the compute used to...

1 Sep 2023 · 59min

#161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite

"Do you remember seeing these photographs of generally women sitting in front of these huge panels and connecting calls, plugging different calls between different numbers? The automated version of th...

23 Aug 2023 · 3h 30min

#160 – Hannah Ritchie on why it makes sense to be optimistic about the environment

"There's no money to invest in education elsewhere, so they almost get trapped in the cycle where they don't get a lot from crop production, but everyone in the family has to work there to just stay a...

14 Aug 2023 · 2h 36min

#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less

In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a ...

7 Aug 2023 · 2h 51min

We now offer shorter 'interview highlights' episodes

Over on our other feed, 80k After Hours, you can now find 20-30 minute highlights episodes of our 80,000 Hours Podcast interviews. These aren’t necessarily the most important parts of the interview, a...

5 Aug 2023 · 6min

#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk

Back in 2007, Holden Karnofsky cofounded GiveWell, where he sought out the charities that most cost-effectively helped save lives. He then cofounded Open Philanthropy, where he oversaw a team making b...

31 Jul 2023 · 3h 13min

#157 – Ezra Klein on existential risk from AI and what DC could do about it

In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI t...

24 Jul 2023 · 1h 18min

#156 – Markus Anderljung on how to regulate cutting-edge AI models

"At the front of the pack we have these frontier AI developers, and we want them to identify particularly dangerous models ahead of time. Once those mines have been discovered, and the frontier develo...

10 Jul 2023 · 2h 6min
