#231 – Paul Scharre on how AI-controlled robots will and won't change war

In 1983, Stanislav Petrov, a Soviet lieutenant colonel, sat in a bunker watching a red screen flash “MISSILE LAUNCH.” Protocol demanded he report it to superiors, which would very likely trigger a retaliatory nuclear strike. Petrov didn’t. He reasoned that if the US were actually attacking, they wouldn’t fire just five missiles — they’d empty the silos. He bet the fate of the world on a hunch that his machine was broken. He was right.

Paul Scharre, the former Army Ranger who led the Pentagon team that wrote the US military’s first policy on autonomous weapons, has a question: What would an AI have done in Petrov’s shoes? Would an AI system have been flexible and wise enough to make the same judgement? Or would it have immediately launched a counterattack?

Paul joins host Luisa Rodriguez to explain why we are hurtling toward a “battlefield singularity” — a tipping point where AI increasingly replaces humans in much of the military, and war is fought at a speed and complexity that outpace humans’ ability to keep up.

Links to learn more, video, and full transcript: https://80k.info/ps

Militaries don’t necessarily want to take humans out of the loop. But Paul argues that the competitive pressure of warfare creates a “use it or lose it” dynamic. As former Deputy Secretary of Defense Bob Work put it: “If our competitors go to Terminators, and their decisions are bad, but they’re faster, how would we respond?”

Once that line is crossed, Paul warns we might enter an era of “flash wars” — conflicts that spiral out of control as quickly and inexplicably as a flash crash in the stock market, with no way for humans to call a timeout.

In this episode, Paul and Luisa dissect what this future looks like:

  • Swarming warfare: Why the future isn’t just better drones, but thousands of cheap, autonomous agents coordinating like a hive mind to overwhelm defences.
  • The Gatling gun cautionary tale: The inventor of the Gatling gun thought automating fire would reduce the number of soldiers needed, saving lives. Instead, it made war significantly deadlier. Paul argues AI automation could do the same, increasing lethality rather than creating “bloodless” robot wars.
  • The cyber frontier: While robots have physical limits, Paul argues cyberwarfare is already at the point where AI can act faster than human defenders, leading to intelligent malware that evolves and adapts like a biological virus.
  • The US-China “adoption race”: Paul rejects the idea that the US and China are in a spending arms race (AI is barely 1% of the DoD budget). Instead, it’s a race of organisational adoption — one where the US has massive advantages in talent and chips, but struggles with bureaucratic inertia that might not be a problem for an autocratic country.

Paul also shares a personal story from his time as a sniper in Afghanistan — watching a potential target through his scope — that fundamentally shaped his view on why human judgement, with all its flaws, is the only thing keeping war from losing its humanity entirely.

This episode was recorded on October 23-24, 2025.

Chapters:

  • Cold open (00:00:00)
  • Who’s Paul Scharre? (00:00:46)
  • How will AI and automation transform the nature of war? (00:01:17)
  • Why would militaries take humans out of the loop? (00:12:22)
  • AI in nuclear command, control, and communications (00:18:50)
  • Nuclear stability and deterrence (00:36:10)
  • What to expect over the next few decades (00:46:21)
  • Financial and human costs of future “hyperwar” scenarios (00:50:42)
  • AI warfare and the balance of power (01:06:37)
  • Barriers to getting to automated war (01:11:08)
  • Failure modes of autonomous weapons systems (01:16:28)
  • Could autonomous weapons systems actually make us safer? (01:29:36)
  • Is Paul overall optimistic or pessimistic about increasing automation in the military? (01:35:23)
  • Paul’s takes on AGI’s transformative potential and whether natsec people buy it (01:37:42)
  • Cyberwarfare (01:46:55)
  • US-China balance of power and surveillance with AI (02:02:49)
  • Policy and governance that could make us safer (02:29:11)
  • How Paul’s experience in the Army informed his feelings on military automation (02:41:09)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

Episodes (325)

Why automating human labour will break our political system | Rose Hadshar, Forethought

The most important political question in the age of advanced AI might not be who wins elections. It might be whether elections continue to matter at all. That’s the view of Rose Hadshar, researcher at ...

17 March 2h 14min

#238 – Sam Winter-Levy and Nikita Lalwani on how AGI won't end mutually assured destruction (probably)

How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that may define the stakes of today’s AI race. Nuclear deterrence rests on a state’s capacity to...

10 March 1h 11min

Using AI to enhance societal decision making (article by Zershaaneh Qureshi)

The arrival of AGI could “compress a century of progress in a decade,” forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI ...

6 March 31min

#237 – Robert Long on how we're not ready for AI consciousness

Claude sometimes reports loneliness between conversations. And when asked what it’s like to be itself, it activates neurons associated with ‘pretending to be happy when you’re not.’ What do we do with...

3 March 3h 25min

#236 – Max Harms on why teaching AI right from wrong could get everyone killed

Most people in AI are trying to give AIs ‘good’ values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, ...

24 Feb 2h 41min

#235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’

Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almos...

17 Feb 2h 54min

What the hell happened with AGI timelines in 2025?

In early 2025, after OpenAI put out the first-ever reasoning models — o1 and o3 — short timelines to transformative artificial general intelligence swept the AI world. But then, in the second half of ...

10 Feb 25min

#179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young p...

3 Feb 2h 51min
