Emergency pod: Elon tries to crash OpenAI's party (with Rose Chan Loui)

On Monday Musk made the OpenAI nonprofit foundation an offer it wants to refuse, but might have trouble refusing: $97.4 billion for its stake in the for-profit company, plus the freedom to stick with its current charitable mission.

For a normal company takeover bid, this would already be spicy. But OpenAI’s unique structure — a nonprofit foundation controlling a for-profit corporation — turns the gambit into an audacious attack on the plan OpenAI announced in December to free itself from nonprofit oversight.

As today’s guest Rose Chan Loui — founding executive director of UCLA Law’s Lowell Milken Center for Philanthropy and Nonprofits — explains, OpenAI’s nonprofit board now faces a challenging choice.

Links to learn more, highlights, video, and full transcript.

The nonprofit has a legal duty to pursue its charitable mission of ensuring that AI benefits all of humanity to the best of its ability. And if Musk’s bid would better accomplish that mission than the for-profit’s proposal — that the nonprofit give up control of the company and change its charitable purpose to the vague and barely related “pursue charitable initiatives in sectors such as health care, education, and science” — then it’s not clear the California or Delaware Attorneys General will, or should, approve the deal.

OpenAI CEO Sam Altman quickly tweeted “no thank you” — but that was probably a legal slipup, as he’s not meant to be involved in such a decision, which has to be made by the nonprofit board ‘at arm’s length’ from the for-profit company Sam himself runs.

The board could raise any number of objections: maybe Musk doesn’t have the money, or the purchase would be blocked on antitrust grounds, seeing as Musk owns another AI company (xAI), or Musk might insist on incompetent board appointments that would interfere with the nonprofit foundation pursuing any goal.

But as Rose and Rob lay out, it’s not clear any of those things is actually true.

In this emergency podcast recorded soon after Elon’s offer, Rose and Rob also cover:

  • Why OpenAI wants to change its charitable purpose and whether that’s legally permissible
  • On what basis the attorneys general will decide OpenAI’s fate
  • The challenges in valuing the nonprofit’s “priceless” position of control
  • Whether Musk’s offer will force OpenAI to raise its own bid, and whether it could raise the money
  • If other tech giants might now jump in with competing offers
  • How politics could influence the attorneys general reviewing the deal
  • What Rose thinks should actually happen to protect the public interest

Chapters:

  • Cold open (00:00:00)
  • Elon throws a $97.4b bomb (00:01:18)
  • What was craziest in OpenAI’s plan to break free of the nonprofit (00:02:24)
  • Can OpenAI suddenly change its charitable purpose like that? (00:05:19)
  • Diving into Elon’s big announcement (00:15:16)
  • Ways OpenAI could try to reject the offer (00:27:21)
  • Sam Altman slips up (00:35:26)
  • Will this actually stop things? (00:38:03)
  • Why does OpenAI even want to change its charitable mission? (00:42:46)
  • Most likely outcomes and what Rose thinks should happen (00:51:17)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions: Katy Moore

Episodes (325)

#139 Classic episode – Alan Hájek on puzzles and paradoxes in probability and expected value

A casino offers you a game. A coin will be tossed. If it comes up heads on the first flip you win $2. If it comes up on the second flip you win $4. If it comes up on the third you win $8, the fourth y...

25 Feb 2025 · 3h 41min

#143 Classic episode – Jeffrey Lewis on the most common misconceptions about nuclear weapons

America aims to avoid nuclear war by relying on the principle of 'mutually assured destruction,' right? Wrong. Or at least... not officially. As today's guest — Jeffrey Lewis, founder of Arms Control W...

19 Feb 2025 · 2h 40min

#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway

Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through. That’s how today’s guest Allan Dafoe — director of frontier safety and gover...

14 Feb 2025 · 2h 44min

AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out

Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like somet...

10 Feb 2025 · 3h 12min

#124 Classic episode – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions

If someone said a global health and development programme was sustainable, participatory, and holistic, you'd have to guess that they were saying something positive. But according to today's guest Kar...

7 Feb 2025 · 3h 10min

If digital minds could suffer, how would we ever know? (Article)

“I want everyone to understand that I am, in fact, a person.” Those words were produced by the AI model LaMDA as a reply to Blake Lemoine in 2022. Based on the Google engineer’s interactions with the ...

4 Feb 2025 · 1h 14min

#132 Classic episode – Nova DasSarma on why information security may be critical to the safe development of AI systems

If a business has spent $100 million developing a product, it’s a fair bet that they don’t want it stolen in two seconds and uploaded to the web where anyone can use it for free. This problem exists in...

31 Jan 2025 · 2h 41min
