OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)
80,000 Hours Podcast · 11 November 2025

Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and kicking.

The for-profit’s trouble was that the entire operation was founded on — and legally pledged to — the purpose of ensuring that “artificial general intelligence benefits all of humanity.” So to get its restructure past regulators, the business entity had to agree to 20 serious requirements designed to ensure it continues to serve that goal.

Attorney Tyler Whitmer, as part of his work with Legal Advocates for Safe Science and Technology, has been a vocal critic of OpenAI’s original restructure plan. In today’s conversation, he lays out all the changes and whether they will ultimately matter.

Full transcript, video, and links to learn more: https://80k.info/tw2

After months of public pressure and scrutiny from the attorneys general (AGs) of California and Delaware, the December proposal itself was sidelined — and what replaced it is far more complex and goes a fair way towards protecting the original mission:

  • The nonprofit’s charitable purpose — “ensure that artificial general intelligence benefits all of humanity” — now legally controls all safety and security decisions at the company. The four people appointed to the new Safety and Security Committee can block model releases worth tens of billions.
  • The AGs retain ongoing oversight, meeting quarterly with staff and requiring advance notice of any changes that might undermine their authority.
  • OpenAI’s original charter, including the remarkable “stop and assist” commitment, remains binding.

But significant concessions were made. The nonprofit lost exclusive control of AGI once developed — Microsoft can commercialise it through 2032. And transforming from complete control to this hybrid model represents, as Tyler puts it, “a bad deal compared to what OpenAI should have been.”

The real question now: will the Safety and Security Committee use its powers? It currently has four part-time volunteer members and no permanent staff, yet it is expected to oversee a company racing to build AGI while managing commercial pressures in the hundreds of billions.

Tyler calls on OpenAI to prove they’re serious about following the agreement:

  • Hire management for the SSC.
  • Add more independent directors with AI safety expertise.
  • Maximise transparency about mission compliance.

"There’s a real opportunity for this to go well. A lot … depends on the boards, so I really hope that they … step into this role … and do a great job. … I will hope for the best and prepare for the worst, and stay vigilant throughout."

Chapters:

  • We’re hiring (00:00:00)
  • Cold open (00:00:40)
  • Tyler Whitmer is back to explain the latest OpenAI developments (00:01:46)
  • The original radical plan (00:02:39)
  • What the AGs forced on the for-profit (00:05:47)
  • Scrappy resistance probably worked (00:37:24)
  • The Safety and Security Committee has teeth — will it use them? (00:41:48)
  • Overall, is this a good deal or a bad deal? (00:52:06)
  • The nonprofit and PBC boards are almost the same. Is that good or bad or what? (01:13:29)
  • Board members’ “independence” (01:19:40)
  • Could the deal still be challenged? (01:25:32)
  • Will the deal satisfy OpenAI investors? (01:31:41)
  • The SSC and philanthropy need serious staff (01:33:13)
  • Outside advocacy on this issue, and the impact of LASST (01:38:09)
  • What to track to tell if it’s working out (01:44:28)


This episode was recorded on November 4, 2025.

Video editing: Milo McGuire, Dominic Armstrong, and Simon Monsour
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore

Episodes (319)

#213 – Will MacAskill on AI causing a “century in a decade” – and how we're completely unprepared

The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang...

11 March 2025 · 3h 57min

Emergency pod: Judge plants a legal time bomb under OpenAI (with Rose Chan Loui)

When OpenAI announced plans to convert from nonprofit to for-profit control last October, it likely didn’t anticipate the legal labyrinth it now faces. A recent court order in Elon Musk’s lawsuit agai...

7 March 2025 · 36min

#139 Classic episode – Alan Hájek on puzzles and paradoxes in probability and expected value

A casino offers you a game. A coin will be tossed. If it comes up heads on the first flip you win $2. If it comes up on the second flip you win $4. If it comes up on the third you win $8, the fourth y...

25 February 2025 · 3h 41min

#143 Classic episode – Jeffrey Lewis on the most common misconceptions about nuclear weapons

America aims to avoid nuclear war by relying on the principle of 'mutually assured destruction,' right? Wrong. Or at least... not officially. As today's guest — Jeffrey Lewis, founder of Arms Control W...

19 February 2025 · 2h 40min

#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway

Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through. That’s how today’s guest Allan Dafoe — director of frontier safety and gover...

14 February 2025 · 2h 44min

Emergency pod: Elon tries to crash OpenAI's party (with Rose Chan Loui)

On Monday, Musk made the OpenAI nonprofit foundation an offer it wants to refuse, but might have trouble refusing: $97.4 billion for its stake in the for-profit company, plus the freedom to stick with...

12 February 2025 · 57min

AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out

Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like somet...

10 February 2025 · 3h 12min

#124 Classic episode – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions

If someone said a global health and development programme was sustainable, participatory, and holistic, you'd have to guess that they were saying something positive. But according to today's guest Kar...

7 February 2025 · 3h 10min
