#230 – Dean Ball on how AI is a huge deal — but we shouldn’t regulate it yet

Former White House staffer Dean Ball thinks it's very likely some form of 'superintelligence' arrives in under 20 years. He thinks AI being used for bioweapon research is "a real threat model, obviously." He worries about dangerous "power imbalances" should AI companies reach "$50 trillion market caps." And he believes the agricultural revolution probably worsened human health and wellbeing.

Given that, you might expect him to be pushing for AI regulation. Instead, he’s become one of the field’s most prominent and thoughtful regulation sceptics and was recently the lead writer on Trump’s AI Action Plan, before moving to the Foundation for American Innovation.

Links to learn more, video, and full transcript: https://80k.info/db

Dean argues that the wrong regulations, deployed too early, could freeze society into a brittle, suboptimal political and economic order. As he puts it, “my big concern is that we’ll lock ourselves in to some suboptimal dynamic and actually, in a Shakespearean fashion, bring about the world that we do not want.”

Dean’s fundamental worry is uncertainty: “We just don’t know enough yet about the shape of this technology, the ergonomics of it, the economics of it… You can’t govern the technology until you have a better sense of that.”

Premature regulation could lock us into addressing the wrong problem (focusing on rogue AI when the real issue is power concentration), using the wrong tools (compute thresholds when we should regulate companies instead), through the wrong institutions (captured AI-specific bodies), all while making it harder to build the actual solutions we'll need (like open source alternatives or new forms of governance).

But Dean is also a pragmatist: he opposed California’s AI regulatory bill SB 1047 in 2024, but — impressed by new capabilities enabled by “reasoning models” — he supported its successor SB 53 in 2025.

And as Dean sees it, many of the interventions that would help with catastrophic risks also happen to improve mundane AI safety, make products more reliable, and address present-day harms like AI-assisted suicide among teenagers. So rather than betting on a particular vision of the future, we should cross the river by feeling the stones and pursue “robust” interventions we’re unlikely to regret.


This episode was recorded on September 24, 2025.

Chapters:

  • Cold open (00:00:00)
  • Who’s Dean Ball? (00:01:22)
  • How likely are we to get superintelligence soon, and how bad could it be? (00:01:54)
  • The military may not adopt AI that fast (00:10:54)
  • Dean’s “two wolves” of AI scepticism and optimism (00:17:48)
  • Will AI self-improvement be a game changer? (00:28:20)
  • The case for regulating at the last possible moment (00:33:05)
  • AI could destroy our fragile democratic equilibria. Why not freak out? (00:52:30)
  • The case AI will soon be way overregulated (01:02:51)
  • How to handle the threats without collateral damage (01:14:56)
  • Easy wins against AI misuse (01:26:54)
  • Maybe open source can be handled gracefully (01:41:13)
  • Would a company be sued for trillions if their AI caused a pandemic? (01:47:58)
  • Dean dislikes compute thresholds. Here's what he'd do instead. (01:57:16)
  • Could AI advances lead to violent conflict between the US and China? (02:02:52)
  • Will we see a MAGA-Yudkowskyite alliance? Doomers and the Right (02:12:29)
  • The tactical case for focusing on present-day harms (02:26:51)
  • Is there any way to get the US government to use AI sensibly? (02:45:05)
  • Having a kid in a time of AI turmoil (02:52:38)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore
