Don’t believe OpenAI’s “nonprofit” spin (emergency pod with Tyler Whitmer)

OpenAI’s recent announcement that its nonprofit would “retain control” of its for-profit business sounds reassuring. But this seemingly major concession, celebrated by so many, is in itself largely meaningless.

Litigator Tyler Whitmer is a coauthor of a newly published letter that describes this attempted sleight of hand and explains how regulators can stop it.

As Tyler explains, the plan both before and after this announcement has been to convert OpenAI into a Delaware public benefit corporation (PBC) — and this alone will dramatically weaken the nonprofit’s ability to direct the business in pursuit of its charitable purpose: ensuring AGI is safe and “benefits all of humanity.”

Right now, the nonprofit directly controls the business. But were OpenAI to become a PBC, the nonprofit, rather than having its “hand on the lever,” would merely contribute to the decision of who does.

Why does this matter? Today, if OpenAI’s commercial arm were about to release an unhinged AI model that might make money but be bad for humanity, the nonprofit could directly intervene to stop it. In the proposed new structure, it likely couldn’t do much at all.

But it’s even worse than that: even if the nonprofit could select the PBC’s directors, those directors would have fundamentally different legal obligations from those of the nonprofit. A PBC director must balance public benefit with the interests of profit-driven shareholders — by default, they cannot legally prioritise public interest over profits, even if they and the controlling shareholder that appointed them want to do so.

As Tyler points out, there isn’t a single reported case of a shareholder successfully suing to enforce a PBC’s public benefit mission in the 10+ years since the Delaware PBC statute was enacted.

This extra step from the nonprofit to the PBC would also mean that the attorneys general of California and Delaware — who today are empowered to ensure the nonprofit pursues its mission — would find themselves powerless to act. These are probably not side effects but a Trojan horse that for-profit investors are trying to slip past regulators.

Fortunately this can all be addressed — but it requires either the nonprofit board or the attorneys general of California and Delaware to promptly put their foot down and insist on watertight legal agreements that preserve OpenAI’s current governance safeguards and enforcement mechanisms.

As Tyler explains, the arrangements that currently bind the OpenAI business would have to be written into the new PBC’s certificate of incorporation — something that won’t happen by default and that powerful investors have every incentive to resist.

Full transcript and links to learn more: https://80k.info/tw

Chapters:

  • Cold open (00:00:00)
  • Who’s Tyler Whitmer? (00:01:35)
  • The new plan may be no improvement (00:02:04)
  • The public hasn't even been allowed to know what they are owed (00:06:55)
  • Issues beyond control (00:11:02)
  • The new directors wouldn’t have to pursue the current purpose (00:12:06)
  • The nonprofit might not even retain voting control (00:16:58)
  • The attorneys general could lose their enforcement oversight (00:22:11)
  • By default things go badly (00:29:09)
  • How to keep the mission in the restructure (00:32:25)
  • What will become of OpenAI’s Charter? (00:37:11)
  • Ways to make things better, and not just avoid them getting worse (00:42:38)
  • How the AGs can avoid being disempowered (00:48:35)
  • Retaining the power to fire the CEO (00:54:49)
  • Will the current board get a financial stake in OpenAI? (00:57:40)
  • Could the AGs insist the current nonprofit agreement be made public? (00:59:15)
  • How OpenAI is valued should be transparent and scrutinised (01:01:00)
  • Investors aren't bad people, but they can't be trusted either (01:06:05)

This episode was originally recorded on May 13, 2025.

Video editing: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore
