#215 – Tom Davidson on how AI-enabled coups could allow a tiny group to seize power

Throughout history, technological revolutions have fundamentally shifted the balance of power in society. The Industrial Revolution created conditions where democracies could flourish for the first time — as nations needed educated, informed, and empowered citizens to deploy advanced technologies and remain competitive.

Unfortunately, there’s every reason to think artificial general intelligence (AGI) will reverse that trend.

Today’s guest — Tom Davidson of the Forethought Centre for AI Strategy — argues in a new paper published today that advanced AI could enable small groups to grab power by removing the need for widespread human participation.

Links to learn more, video, highlights, and full transcript: https://80k.info/td

Also: come work with us on the 80,000 Hours podcast team! https://80k.info/work

There are a few routes by which small groups might seize power:

  • Military coups: These are rare in established democracies because citizens and soldiers resist them — but future AI-controlled militaries may lack such constraints.
  • Self-built hard power: History suggests that perhaps only 10,000 obedient military drones would be enough to seize power.
  • Autocratisation: Leaders with access to millions of loyal AI workers, while denying that access to others, could remove democratic checks and balances.

Tom explains several reasons why AI systems might follow a tyrant’s orders:

  • They might be programmed to obey the top of the chain of command, with no checks on that power.
  • Systems could contain “secret loyalties” inserted during development.
  • Superior cyber capabilities could allow small groups to control AI-operated military infrastructure.

Host Rob Wiblin and Tom discuss all this plus potential countermeasures.

Chapters:

  • Cold open (00:00:00)
  • A major update on the show (00:00:55)
  • How AI enables tiny groups to seize power (00:06:24)
  • The 3 different threats (00:07:42)
  • Is this common sense or far-fetched? (00:08:51)
  • “No person rules alone.” Except now they might. (00:11:48)
  • Underpinning all 3 threats: Secret AI loyalties (00:17:46)
  • Key risk factors (00:25:38)
  • Preventing secret loyalties in a nutshell (00:27:12)
  • Are human power grabs more plausible than 'rogue AI'? (00:29:32)
  • If you took over the US, could you take over the whole world? (00:38:11)
  • Will this make it impossible to escape autocracy? (00:42:20)
  • Threat 1: AI-enabled military coups (00:46:19)
  • Will we sleepwalk into an AI military coup? (00:56:23)
  • Could AIs be more coup-resistant than humans? (01:02:28)
  • Threat 2: Autocratisation (01:05:22)
  • Will AGI be super-persuasive? (01:15:32)
  • Threat 3: Self-built hard power (01:17:56)
  • Can you stage a coup with 10,000 drones? (01:25:42)
  • That sounds a lot like sci-fi... is it credible? (01:27:49)
  • Will we foresee and prevent all this? (01:32:08)
  • Are people psychologically willing to do coups? (01:33:34)
  • Will a balance of power between AIs prevent this? (01:37:39)
  • Will whistleblowers or internal mistrust prevent coups? (01:39:55)
  • Would other countries step in? (01:46:03)
  • Will rogue AI preempt a human power grab? (01:48:30)
  • The best reasons not to worry (01:51:05)
  • How likely is this in the US? (01:53:23)
  • Is a small group seizing power really so bad? (02:00:47)
  • Countermeasure 1: Block internal misuse (02:04:19)
  • Countermeasure 2: Cybersecurity (02:14:02)
  • Countermeasure 3: Model spec transparency (02:16:11)
  • Countermeasure 4: Sharing AI access broadly (02:25:23)
  • Is it more dangerous to concentrate or share AGI? (02:30:13)
  • Is it important to have more than one powerful AI country? (02:32:56)
  • In defence of open sourcing AI models (02:35:59)
  • 2 ways to stop secret AI loyalties (02:43:34)
  • Preventing AI-enabled military coups in particular (02:56:20)
  • How listeners can help (03:01:59)
  • How to help if you work at an AI company (03:05:49)
  • The power ML researchers still have, for now (03:09:53)
  • How to help if you're an elected leader (03:13:14)
  • Rob’s outro (03:19:05)

This episode was originally recorded on January 20, 2025.

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore
