#199 – Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy

"I do think that there is a really significant sentiment among parts of the opposition that it’s not really just that this bill itself is that bad or extreme — when you really drill into it, it feels like one of those things where you read it and it’s like, 'This is the thing that everyone is screaming about?' I think it’s a pretty modest bill in a lot of ways, but I think part of what they are thinking is that this is the first step to shutting down AI development. Or that if California does this, then lots of other states are going to do it, and we need to really slam the door shut on model-level regulation or else they’re just going to keep going.

"I think that is like a lot of what the sentiment here is: it’s less about, in some ways, the details of this specific bill, and more about the sense that they want this to stop here, and they’re worried that if they give an inch that there will continue to be other things in the future. And I don’t think that is going to be tolerable to the public in the long run. I think it’s a bad choice, but I think that is the calculus that they are making." —Nathan Calvin

In today’s episode, host Luisa Rodriguez speaks to Nathan Calvin — senior policy counsel at the Center for AI Safety Action Fund — about the new AI safety bill in California, SB 1047, which he’s helped shape as it’s moved through the state legislature.

Links to learn more, highlights, and full transcript.

They cover:

  • What’s actually in SB 1047, and which AI models it would apply to.
  • The most common objections to the bill — including how it could affect competition, startups, open source models, and US national security — and which of these objections Nathan thinks hold water.
  • What Nathan sees as the biggest misunderstandings about the bill that get in the way of good public discourse about it.
  • Why some AI companies are opposed to SB 1047, despite claiming that they want the industry to be regulated.
  • How the bill is different from Biden’s executive order on AI and voluntary commitments made by AI companies.
  • Why California is taking state-level action rather than waiting for federal regulation.
  • How state-level regulations can be hugely impactful at national and global scales, and how listeners could get involved in state-level work to make a real difference on lots of pressing problems.
  • And plenty more.

Chapters:

  • Cold open (00:00:00)
  • Luisa's intro (00:00:57)
  • The interview begins (00:02:30)
  • What risks from AI does SB 1047 try to address? (00:03:10)
  • Supporters and critics of the bill (00:11:03)
  • Misunderstandings about the bill (00:24:07)
  • Competition, open source, and liability concerns (00:30:56)
  • Model size thresholds (00:46:24)
  • How is SB 1047 different from the executive order? (00:55:36)
  • Objections Nathan is sympathetic to (00:58:31)
  • Current status of the bill (01:02:57)
  • How can listeners get involved in work like this? (01:05:00)
  • Luisa's outro (01:11:52)

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Episodes (317)

Why 'Aligned AI' Would Still Kill Democracy | David Duvenaud, ex-Anthropic team lead

Democracy might be a brief historical blip. That’s the unsettling thesis of a recent paper, which argues AI that can do all the work a human can do inevitably leads to the “gradual disempowerment” of ...

27 Jan · 2h 31min

#145 Classic episode – Christopher Brown on why slavery abolition wasn't inevitable

In many ways, humanity seems to have become more humane and inclusive over time. While there’s still a lot of progress to be made, campaigns to give people of different genders, races, sexualities, et...

20 Jan · 2h 56min

#233 – James Smith on how to prevent a mirror life catastrophe

When James Smith first heard about mirror bacteria, he was sceptical. But within two weeks, he’d dropped everything to work on it full time, considering it the worst biothreat that he’d seen described...

13 Jan · 2h 9min

#144 Classic episode – Athena Aktipis on why cancer is a fundamental universal phenomenon

What’s the opposite of cancer? If you answered “cure,” “antidote,” or “antivenom” — you’ve obviously been reading the antonym section at www.merriam-webster.com/thesaurus/cancer. But today’s guest Athe...

9 Jan · 3h 30min

#142 Classic episode – John McWhorter on why the optimal number of languages might be one, and other provocative claims about language

John McWhorter is a linguistics professor at Columbia University specialising in research on creole languages. He's also a content-producing machine, never afraid to give his frank opinion on anything...

6 Jan · 1h 35min

2025 Highlight-o-thon: Oops! All Bests

It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode we recorded this year, including: Kyle Fish explaining how Anthropic’s AI Claude descends i...

29 Dec 2025 · 1h 40min

#232 – Andreas Mogensen on what we owe 'philosophical Vulcans' and unconscious beings

Most debates about the moral status of AI systems circle the same question: is there something that it feels like to be them? But what if that’s the wrong question to ask? Andreas Mogensen — a senior ...

19 Dec 2025 · 2h 37min

#231 – Paul Scharre on how AI-controlled robots will and won't change war

In 1983, Stanislav Petrov, a Soviet lieutenant colonel, sat in a bunker watching a red screen flash “MISSILE LAUNCH.” Protocol demanded he report it to superiors, which would very likely trigger a ret...

17 Dec 2025 · 2h 45min
