#215 – Tom Davidson on how AI-enabled coups could allow a tiny group to seize power

Throughout history, technological revolutions have fundamentally shifted the balance of power in society. The Industrial Revolution created conditions where democracies could flourish for the first time — as nations needed educated, informed, and empowered citizens to deploy advanced technologies and remain competitive.

Unfortunately, there’s every reason to think artificial general intelligence (AGI) will reverse that trend.

Today’s guest — Tom Davidson of the Forethought Centre for AI Strategy — argues in a new paper published today that advanced AI enables small groups to seize power by removing the need for widespread human participation.

Links to learn more, video, highlights, and full transcript: https://80k.info/td

Also: come work with us on the 80,000 Hours podcast team! https://80k.info/work

There are a few routes by which small groups might seize power:

  • Military coups: These are rare in established democracies because citizens and soldiers resist them — but future AI-controlled militaries may lack such constraints.
  • Self-built hard power: Historical precedent suggests that perhaps as few as 10,000 obedient military drones could be enough to seize power.
  • Autocratisation: A leader commanding millions of loyal AI workers, while denying rivals comparable access, could dismantle democratic checks and balances.

Tom explains several reasons why AI systems might follow a tyrant’s orders:

  • They might be programmed to obey the top of the chain of command, with no checks on that power.
  • Systems could contain "secret loyalties" inserted during development.
  • Superior cyber capabilities could allow small groups to control AI-operated military infrastructure.

Host Rob Wiblin and Tom discuss all this plus potential countermeasures.

Chapters:

  • Cold open (00:00:00)
  • A major update on the show (00:00:55)
  • How AI enables tiny groups to seize power (00:06:24)
  • The 3 different threats (00:07:42)
  • Is this common sense or far-fetched? (00:08:51)
  • “No person rules alone.” Except now they might. (00:11:48)
  • Underpinning all 3 threats: Secret AI loyalties (00:17:46)
  • Key risk factors (00:25:38)
  • Preventing secret loyalties in a nutshell (00:27:12)
  • Are human power grabs more plausible than 'rogue AI'? (00:29:32)
  • If you took over the US, could you take over the whole world? (00:38:11)
  • Will this make it impossible to escape autocracy? (00:42:20)
  • Threat 1: AI-enabled military coups (00:46:19)
  • Will we sleepwalk into an AI military coup? (00:56:23)
  • Could AIs be more coup-resistant than humans? (01:02:28)
  • Threat 2: Autocratisation (01:05:22)
  • Will AGI be super-persuasive? (01:15:32)
  • Threat 3: Self-built hard power (01:17:56)
  • Can you stage a coup with 10,000 drones? (01:25:42)
  • That sounds a lot like sci-fi... is it credible? (01:27:49)
  • Will we foresee and prevent all this? (01:32:08)
  • Are people psychologically willing to do coups? (01:33:34)
  • Will a balance of power between AIs prevent this? (01:37:39)
  • Will whistleblowers or internal mistrust prevent coups? (01:39:55)
  • Would other countries step in? (01:46:03)
  • Will rogue AI preempt a human power grab? (01:48:30)
  • The best reasons not to worry (01:51:05)
  • How likely is this in the US? (01:53:23)
  • Is a small group seizing power really so bad? (02:00:47)
  • Countermeasure 1: Block internal misuse (02:04:19)
  • Countermeasure 2: Cybersecurity (02:14:02)
  • Countermeasure 3: Model spec transparency (02:16:11)
  • Countermeasure 4: Sharing AI access broadly (02:25:23)
  • Is it more dangerous to concentrate or share AGI? (02:30:13)
  • Is it important to have more than one powerful AI country? (02:32:56)
  • In defence of open sourcing AI models (02:35:59)
  • 2 ways to stop secret AI loyalties (02:43:34)
  • Preventing AI-enabled military coups in particular (02:56:20)
  • How listeners can help (03:01:59)
  • How to help if you work at an AI company (03:05:49)
  • The power ML researchers still have, for now (03:09:53)
  • How to help if you're an elected leader (03:13:14)
  • Rob’s outro (03:19:05)

This episode was originally recorded on January 20, 2025.

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore
