#72 - Toby Ord on the precipice and humanity's potential futures

This week Oxford academic and 80,000 Hours trustee Dr Toby Ord released his new book The Precipice: Existential Risk and the Future of Humanity. It's about how our long-term future could be better than almost anyone believes, but also how humanity's recklessness is putting that future at grave risk — in Toby's reckoning, a 1 in 6 chance of being extinguished this century.

I loved the book and learned a great deal from it (buy it here, US and audiobook release March 24). While preparing for this interview I copied out 87 facts that were surprising, shocking or important. Here's a sample of 16:

1. The probability of a supervolcano causing a civilisation-threatening catastrophe in the next century is estimated to be 100x that of asteroids and comets combined.

2. The Biological Weapons Convention — a global agreement to protect humanity — has just four employees, and a smaller budget than an average McDonald’s.

3. In 2008 a 'gamma ray burst' reached Earth from another galaxy, 10 billion light years away. It was still bright enough to be visible to the naked eye. We aren't sure what generates gamma ray bursts but one cause may be two neutron stars colliding.

4. Before detonating the first nuclear weapon, scientists in the Manhattan Project feared that the high temperatures in the core, unprecedented for Earth, might be able to ignite the hydrogen in water. This would set off a self-sustaining reaction that would burn off the Earth’s oceans, killing all life above ground. They thought this was unlikely, but many atomic scientists feared their calculations could be missing something. As far as we know, the US President was never informed of this possibility, but similar risks were one reason Hitler stopped…

N.B. I've had to cut off this list as we only get 4,000 characters in these show notes, so:

Click here to read the whole list, see a full transcript, and find related links.

And if you like the list, you can get a free copy of the introduction and first chapter by joining our mailing list.

While I've been studying these topics for years and known Toby for the last eight, a remarkable amount of what's in The Precipice was new to me.

Of course the book isn't a series of isolated amusing facts, but rather a systematic review of the many ways humanity's future could go better or worse, how we might know about them, and what might be done to improve the odds.

And that's how we approach this conversation, first talking about each of the main threats, then how we can learn about things that have never happened before, then finishing with what a great future for humanity might look like and how it might be achieved.

Toby is a famously good explainer of complex issues — a bit of a modern Carl Sagan character — so as expected this was a great interview, and one which Arden Koehler and I barely even had to work for.

Some topics Arden and I ask about include:

• What Toby changed his mind about while writing the book
• Are people exaggerating when they say that climate change could actually end civilisation?
• What can we learn from historical pandemics?
• Toby’s estimate of the risk that unaligned AI causes human extinction in the next century
• Is this century the most important time in human history, or is that a narcissistic delusion?
• Competing visions for humanity's ideal future
• And more.

Get this episode by subscribing: type '80,000 Hours' into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.
