Former White House staffer Dean Ball thinks it's very likely some form of 'superintelligence' arrives in under 20 years. He thinks AI being used for bioweapon research is "a real threat model, obviously." He worries about dangerous "power imbalances" should AI companies reach "$50 trillion market caps." And he believes the agricultural revolution probably worsened human health and wellbeing.
Given that, you might expect him to be pushing for AI regulation. Instead, he’s become one of the field’s most prominent and thoughtful regulation sceptics and was recently the lead writer on Trump’s AI Action Plan, before moving to the Foundation for American Innovation.
Links to learn more, video, and full transcript: https://80k.info/db
Dean argues that the wrong regulations, deployed too early, could freeze society into a brittle, suboptimal political and economic order. As he puts it, “my big concern is that we’ll lock ourselves in to some suboptimal dynamic and actually, in a Shakespearean fashion, bring about the world that we do not want.”
Dean’s fundamental worry is uncertainty: “We just don’t know enough yet about the shape of this technology, the ergonomics of it, the economics of it… You can’t govern the technology until you have a better sense of that.”
Premature regulation could lock us into addressing the wrong problem (focusing on rogue AI when the real issue is power concentration), using the wrong tools (compute thresholds when we should regulate companies instead), through the wrong institutions (captured AI-specific bodies), all while making it harder to build the actual solutions we'll need (like open source alternatives or new forms of governance).
But Dean is also a pragmatist: he opposed California’s AI regulatory bill SB 1047 in 2024, but — impressed by new capabilities enabled by “reasoning models” — he supported its successor SB 53 in 2025.
And as Dean sees it, many of the interventions that would help with catastrophic risks also happen to improve mundane AI safety, make products more reliable, and address present-day harms like AI-assisted suicide among teenagers. So rather than betting on a particular vision of the future, we should cross the river by feeling the stones and pursue “robust” interventions we’re unlikely to regret.
This episode was recorded on September 24, 2025.
Chapters:
Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore