OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)
80,000 Hours Podcast · 11 November 2025

Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and kicking.

The for-profit’s trouble was that the entire operation was founded on — and legally pledged to — the purpose of ensuring that “artificial general intelligence benefits all of humanity.” So to get its restructure past regulators, the business entity has had to agree to 20 serious requirements designed to ensure it continues to serve that goal.

Attorney Tyler Whitmer, as part of his work with Legal Advocates for Safe Science and Technology, has been a vocal critic of OpenAI’s original restructure plan. In today’s conversation, he lays out all the changes and whether they will ultimately matter.

Full transcript, video, and links to learn more: https://80k.info/tw2

After months of public pressure and scrutiny from the attorneys general (AGs) of California and Delaware, the December proposal itself was sidelined — and what replaced it is far more complex and goes a fair way towards protecting the original mission:

  • The nonprofit’s charitable purpose — “ensure that artificial general intelligence benefits all of humanity” — now legally controls all safety and security decisions at the company. The four people appointed to the new Safety and Security Committee can block model releases worth tens of billions.
  • The AGs retain ongoing oversight, meeting quarterly with staff and requiring advance notice of any changes that might undermine their authority.
  • OpenAI’s original charter, including the remarkable “stop and assist” commitment, remains binding.

But significant concessions were made. The nonprofit lost exclusive control of AGI once developed — Microsoft can commercialise it through 2032. And transforming from complete control to this hybrid model represents, as Tyler puts it, “a bad deal compared to what OpenAI should have been.”

The real question now: will the Safety and Security Committee use its powers? It currently has four part-time volunteer members and no permanent staff, yet those members are expected to oversee a company racing to build AGI while managing commercial pressures in the hundreds of billions of dollars.

Tyler calls on OpenAI to prove they’re serious about following the agreement:

  • Hire management for the SSC.
  • Add more independent directors with AI safety expertise.
  • Maximise transparency about mission compliance.

"There’s a real opportunity for this to go well. A lot … depends on the boards, so I really hope that they … step into this role … and do a great job. … I will hope for the best and prepare for the worst, and stay vigilant throughout."

Chapters:

  • We’re hiring (00:00:00)
  • Cold open (00:00:40)
  • Tyler Whitmer is back to explain the latest OpenAI developments (00:01:46)
  • The original radical plan (00:02:39)
  • What the AGs forced on the for-profit (00:05:47)
  • Scrappy resistance probably worked (00:37:24)
  • The Safety and Security Committee has teeth — will it use them? (00:41:48)
  • Overall, is this a good deal or a bad deal? (00:52:06)
  • The nonprofit and PBC boards are almost the same. Is that good or bad or what? (01:13:29)
  • Board members’ “independence” (01:19:40)
  • Could the deal still be challenged? (01:25:32)
  • Will the deal satisfy OpenAI investors? (01:31:41)
  • The SSC and philanthropy need serious staff (01:33:13)
  • Outside advocacy on this issue, and the impact of LASST (01:38:09)
  • What to track to tell if it's working out (01:44:28)


This episode was recorded on November 4, 2025.

Video editing: Milo McGuire, Dominic Armstrong, and Simon Monsour
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore

Episodes (320)

#94 – Ezra Klein on aligning journalism, politics, and what matters most

How many words in U.S. newspapers have been spilled on tax policy in the past five years? And how many words on CRISPR? Or meat alternatives? Or how AI may soon automate the majority of jobs? When p...

20 March 2021 · 1h 45min

#93 – Andy Weber on rendering bioweapons obsolete & ending the new nuclear arms race

COVID-19 has provided a vivid reminder of the power of biological threats. But the threat doesn't come from natural sources alone. Weaponized contagious diseases — which were abandoned by the United S...

12 March 2021 · 1h 54min

#92 – Brian Christian on the alignment problem

Brian Christian is a bestselling author with a particular knack for accurately communicating difficult or technical ideas from both mathematics and computer science. Listeners loved our episode abo...

5 March 2021 · 2h 55min

#91 – Lewis Bollard on big wins against factory farming and how they happened

I suspect today's guest, Lewis Bollard, might be the single best person in the world to interview to get an overview of all the methods that might be effective for putting an end to factory farming an...

15 February 2021 · 2h 33min

Rob Wiblin on how he ended up the way he is

This is a crosspost of an episode of the Eureka Podcast. The interviewer is Misha Saul, a childhood friend of Rob's, who he has known for over 20 years. While it's not an episode of our own show, we...

3 February 2021 · 1h 57min

#90 – Ajeya Cotra on worldview diversification and how big the future could be

You wake up in a mysterious box, and hear the booming voice of God: “I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it. If it ca...

21 January 2021 · 2h 59min

Rob Wiblin on self-improvement and research ethics

This is a crosspost of an episode of the Clearer Thinking Podcast: 022: Self-Improvement and Research Ethics with Rob Wiblin. Rob chats with Spencer Greenberg, who has been an audience favourite in...

13 January 2021 · 2h 30min

#73 - Phil Trammell on patient philanthropy and waiting to do good [re-release]

Rebroadcast: this episode was originally released in March 2020. To do good, most of us look to use our time and money to affect the world around us today. But perhaps that's all wrong. If you too...

7 January 2021 · 2h 41min
