#213 – Will MacAskill on AI causing a “century in a decade” – and how we're completely unprepared

The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.

That’s the future Will MacAskill — philosopher, founding figure of effective altruism, and now researcher at the Forethought Centre for AI Strategy — argues we need to prepare for in his new paper “Preparing for the intelligence explosion.” Not in the distant future, but probably in three to seven years.

Links to learn more, highlights, video, and full transcript.

The reason: AI systems are rapidly approaching human-level capability in scientific research and intellectual tasks. Once AI exceeds human abilities in AI research itself, we’ll enter a recursive self-improvement cycle — creating wildly more capable systems. Soon after, by improving algorithms and manufacturing chips, we’ll deploy millions, then billions, then trillions of superhuman AI scientists working 24/7 without human limitations. These systems will collaborate across disciplines, build on each discovery instantly, and conduct experiments at unprecedented scale and speed — compressing a century of scientific progress into mere years.

Will compares the resulting situation to a mediaeval king suddenly needing to upgrade from bows and arrows to nuclear weapons to deal with an ideological threat from a country he's never heard of, while simultaneously grappling with the news that he's descended from monkeys and his god doesn't exist.

What makes this acceleration perilous is that while technology can speed up almost arbitrarily, human institutions and decision-making are far slower to adapt.

In this conversation with host Rob Wiblin, recorded on February 7, 2025, Will maps out the challenges we’d face in this potential “intelligence explosion” future, and what we might do to prepare. They discuss:

  • Why leading AI safety researchers now think there’s dramatically less time before AI is transformative than they’d previously thought
  • The three different types of intelligence explosions that occur in order
  • Will’s list of resulting grand challenges — including destructive technologies, space governance, concentration of power, and digital rights
  • How to prevent ourselves from accidentally “locking in” mediocre futures for all eternity
  • Ways AI could radically improve human coordination and decision-making
  • Why we should aim for truly flourishing futures, not just avoiding extinction

Chapters:

  • Cold open (00:00:00)
  • Who’s Will MacAskill? (00:00:46)
  • Why Will now just works on AGI (00:01:02)
  • Will was wrong(ish) on AI timelines and hinge of history (00:04:10)
  • A century of history crammed into a decade (00:09:00)
  • Science goes super fast; our institutions don't keep up (00:15:42)
  • Is it good or bad for intellectual progress to 10x? (00:21:03)
  • An intelligence explosion is not just plausible but likely (00:22:54)
  • Intellectual advances outside technology are similarly important (00:28:57)
  • Counterarguments to intelligence explosion (00:31:31)
  • The three types of intelligence explosion (software, technological, industrial) (00:37:29)
  • The industrial intelligence explosion is the most certain and enduring (00:40:23)
  • Is a 100x or 1,000x speedup more likely than 10x? (00:51:51)
  • The grand superintelligence challenges (00:55:37)
  • Grand challenge #1: Many new destructive technologies (00:59:17)
  • Grand challenge #2: Seizure of power by a small group (01:06:45)
  • Is global lock-in really plausible? (01:08:37)
  • Grand challenge #3: Space governance (01:18:53)
  • Is space truly defence-dominant? (01:28:43)
  • Grand challenge #4: Morally integrating with digital beings (01:32:20)
  • Will we ever know if digital minds are happy? (01:41:01)
  • “My worry isn't that we won't know; it's that we won't care” (01:46:31)
  • Can we get AGI to solve all these issues as early as possible? (01:49:40)
  • Politicians have to learn to use AI advisors (02:02:03)
  • Ensuring AI makes us smarter decision-makers (02:06:10)
  • How listeners can speed up AI epistemic tools (02:09:38)
  • AI could become great at forecasting (02:13:09)
  • How not to lock in a bad future (02:14:37)
  • AI takeover might happen anyway — should we rush to load in our values? (02:25:29)
  • ML researchers are feverishly working to destroy their own power (02:34:37)
  • We should aim for more than mere survival (02:37:54)
  • By default the future is rubbish (02:49:04)
  • No easy utopia (02:56:55)
  • What levers matter most to utopia (03:06:32)
  • Bottom lines from the modelling (03:20:09)
  • People distrust utopianism; should they distrust this? (03:24:09)
  • What conditions make eventual eutopia likely? (03:28:49)
  • The new Forethought Centre for AI Strategy (03:37:21)
  • How does Will resist hopelessness? (03:50:13)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore
