OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)
80,000 Hours Podcast · 11 November 2025

Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and kicking.

The for-profit’s trouble was that the entire operation was founded on, and legally pledged to, the purpose of ensuring that “artificial general intelligence benefits all of humanity.” So to get its restructure past regulators, the business entity had to agree to 20 serious requirements designed to ensure it continues to serve that goal.

Attorney Tyler Whitmer, as part of his work with Legal Advocates for Safe Science and Technology, has been a vocal critic of OpenAI’s original restructure plan. In today’s conversation, he lays out all the changes and whether they will ultimately matter.

Full transcript, video, and links to learn more: https://80k.info/tw2

After months of public pressure and scrutiny from the attorneys general (AGs) of California and Delaware, the December proposal itself was sidelined — and what replaced it is far more complex and goes a fair way towards protecting the original mission:

  • The nonprofit’s charitable purpose — “ensure that artificial general intelligence benefits all of humanity” — now legally controls all safety and security decisions at the company. The four people appointed to the new Safety and Security Committee can block model releases worth tens of billions.
  • The AGs retain ongoing oversight, meeting quarterly with staff and requiring advance notice of any changes that might undermine their authority.
  • OpenAI’s original charter, including the remarkable “stop and assist” commitment, remains binding.

But significant concessions were made. The nonprofit lost exclusive control of AGI once developed — Microsoft can commercialise it through 2032. And transforming from complete control to this hybrid model represents, as Tyler puts it, “a bad deal compared to what OpenAI should have been.”

The real question now: will the Safety and Security Committee use its powers? It currently has four part-time volunteer members and no permanent staff, yet it is expected to oversee a company racing to build AGI while managing commercial pressures in the hundreds of billions of dollars.

Tyler calls on OpenAI to prove they’re serious about following the agreement:

  • Hire management for the SSC.
  • Add more independent directors with AI safety expertise.
  • Maximise transparency about mission compliance.

"There’s a real opportunity for this to go well. A lot … depends on the boards, so I really hope that they … step into this role … and do a great job. … I will hope for the best and prepare for the worst, and stay vigilant throughout."

Chapters:

  • We’re hiring (00:00:00)
  • Cold open (00:00:40)
  • Tyler Whitmer is back to explain the latest OpenAI developments (00:01:46)
  • The original radical plan (00:02:39)
  • What the AGs forced on the for-profit (00:05:47)
  • Scrappy resistance probably worked (00:37:24)
  • The Safety and Security Committee has teeth — will it use them? (00:41:48)
  • Overall, is this a good deal or a bad deal? (00:52:06)
  • The nonprofit and PBC boards are almost the same. Is that good or bad or what? (01:13:29)
  • Board members’ “independence” (01:19:40)
  • Could the deal still be challenged? (01:25:32)
  • Will the deal satisfy OpenAI investors? (01:31:41)
  • The SSC and philanthropy need serious staff (01:33:13)
  • Outside advocacy on this issue, and the impact of LASST (01:38:09)
  • What to track to tell if it’s working out (01:44:28)


This episode was recorded on November 4, 2025.

Video editing: Milo McGuire, Dominic Armstrong, and Simon Monsour
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore

Episodes (320)

#75 – Michelle Hutchinson on what people most often ask 80,000 Hours

Since it was founded, 80,000 Hours has done one-on-one calls to supplement our online content and offer more personalised advice. We try to help people get clear on their most plausible paths, the key...

28 April 2020 · 2h 13min

#74 – Dr Greg Lewis on COVID-19 & catastrophic biological risks

Our lives currently revolve around the global emergency of COVID-19; you’re probably reading this while confined to your house, as the death toll from the worst pandemic since 1918 continues to rise. ...

17 April 2020 · 2h 37min

Article: Reducing global catastrophic biological risks

In a few days we'll be putting out a conversation with Dr Greg Lewis, who studies how to prevent global catastrophic biological risks at Oxford's Future of Humanity Institute. Greg also wrote a new ...

15 April 2020 · 1h 4min

Emergency episode: Rob & Howie on the menace of COVID-19, and what both governments & individuals might do to help

From home isolation Rob and Howie just recorded an episode on: 1. How many could die in the crisis, and the risk to your health personally. 2. What individuals might be able to do help tackle the coro...

19 March 2020 · 1h 52min

#73 – Phil Trammell on patient philanthropy and waiting to do good

To do good, most of us look to use our time and money to affect the world around us today. But perhaps that's all wrong. If you took $1,000 you were going to donate and instead put it in the stock mar...

17 March 2020 · 2h 35min

#72 - Toby Ord on the precipice and humanity's potential futures

This week Oxford academic and 80,000 Hours trustee Dr Toby Ord released his new book The Precipice: Existential Risk and the Future of Humanity. It's about how our long-term future could be better tha...

7 March 2020 · 3h 14min

#71 - Benjamin Todd on the key ideas of 80,000 Hours

The 80,000 Hours Podcast is about “the world’s most pressing problems and how you can use your career to solve them”, and in this episode we tackle that question in the most direct way possible. Las...

2 March 2020 · 2h 57min

Arden & Rob on demandingness, work-life balance & injustice (80k team chat #1)

Arden & Rob on demandingness, work-life balance & injustice (80k team chat #1)

Today's bonus episode of the podcast is a quick conversation between me and my fellow 80,000 Hours researcher Arden Koehler about a few topics, including the demandingness of morality, work-life balan...

25 February 2020 · 44min
