OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)
80,000 Hours Podcast · 11 November 2025

Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and kicking.

The for-profit’s trouble was that the entire operation was founded on, and legally pledged to, a single purpose: ensuring that “artificial general intelligence benefits all of humanity.” So to get its restructure past regulators, the business entity has had to agree to 20 serious requirements designed to ensure it continues to serve that goal.

Attorney Tyler Whitmer, as part of his work with Legal Advocates for Safe Science and Technology, has been a vocal critic of OpenAI’s original restructure plan. In today’s conversation, he lays out all the changes and assesses whether they will ultimately matter.

Full transcript, video, and links to learn more: https://80k.info/tw2

After months of public pressure and scrutiny from the attorneys general (AGs) of California and Delaware, the December proposal itself was sidelined — and what replaced it is far more complex and goes a fair way towards protecting the original mission:

  • The nonprofit’s charitable purpose — “ensure that artificial general intelligence benefits all of humanity” — now legally controls all safety and security decisions at the company. The four people appointed to the new Safety and Security Committee can block model releases worth tens of billions of dollars.
  • The AGs retain ongoing oversight, meeting quarterly with staff and requiring advance notice of any changes that might undermine their authority.
  • OpenAI’s original charter, including the remarkable “stop and assist” commitment, remains binding.

But significant concessions were made. The nonprofit lost exclusive control of AGI once it’s developed: Microsoft can commercialise it through 2032. And transforming from complete control to this hybrid model represents, as Tyler puts it, “a bad deal compared to what OpenAI should have been.”

The real question now: will the Safety and Security Committee use its powers? It currently has four part-time volunteer members and no permanent staff, yet it’s expected to oversee a company racing to build AGI while managing commercial pressures in the hundreds of billions of dollars.

Tyler calls on OpenAI to prove they’re serious about following the agreement:

  • Hire management for the SSC.
  • Add more independent directors with AI safety expertise.
  • Maximise transparency about mission compliance.

"There’s a real opportunity for this to go well. A lot … depends on the boards, so I really hope that they … step into this role … and do a great job. … I will hope for the best and prepare for the worst, and stay vigilant throughout."

Chapters:

  • We’re hiring (00:00:00)
  • Cold open (00:00:40)
  • Tyler Whitmer is back to explain the latest OpenAI developments (00:01:46)
  • The original radical plan (00:02:39)
  • What the AGs forced on the for-profit (00:05:47)
  • Scrappy resistance probably worked (00:37:24)
  • The Safety and Security Committee has teeth — will it use them? (00:41:48)
  • Overall, is this a good deal or a bad deal? (00:52:06)
  • The nonprofit and PBC boards are almost the same. Is that good or bad or what? (01:13:29)
  • Board members’ “independence” (01:19:40)
  • Could the deal still be challenged? (01:25:32)
  • Will the deal satisfy OpenAI investors? (01:31:41)
  • The SSC and philanthropy need serious staff (01:33:13)
  • Outside advocacy on this issue, and the impact of LASST (01:38:09)
  • What to track to tell if it’s working out (01:44:28)


This episode was recorded on November 4, 2025.

Video editing: Milo McGuire, Dominic Armstrong, and Simon Monsour
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore
