Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry’s leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

The writers of the open letter argue that researchers have a “right to warn” the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

RECOMMENDED MEDIA

The Right to Warn Open Letter

My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI’s policy of non-disparagement.

RECOMMENDED YUA EPISODES

  1. A First Step Toward AI Regulation with Tom Wheeler
  2. Spotlight on AI: What Would It Take For This to Go Well?
  3. Big Food, Big Tech and Big AI with Michael Moss
  4. Can We Govern AI? With Marietje Schaake

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_


Hosted by Simplecast, an AdsWizz company.

Episodes (158)

Attachment Hacking and the Rise of AI Psychosis

Therapy and companionship have become the #1 use case for AI, with millions worldwide sharing their innermost thoughts with AI systems — often things they wouldn't tell loved ones or human therapists. ...

21 Jan 50min

What Would It Take to Actually Trust Each Other? The Game Theory Dilemma

So much of our world today can be summed up in the cold logic of “if I don’t, they will.” This is the foundation of game theory, which holds that cooperation and virtue are irrational; that all that m...

8 Jan 45min

America and China Are Racing to Different AI Futures

Is the US really in an AI race with China—or are we racing toward completely different finish lines? In this episode, Tristan Harris sits down with China experts Selina Xu and Matt Sheehan to separate ...

18 Dec 2025 57min

AI and the Future of Work: What You Need to Know

No matter where you sit within the economy, whether you're a CEO or an entry level worker, everyone's feeling uneasy about AI and the future of work. Uncertainty about career paths, job security, and ...

4 Dec 2025 45min

Feed Drop: "Into the Machine" with Tobias Rose-Stockwell

This week, we’re bringing you Tristan’s conversation with Tobias Rose-Stockwell on his podcast “Into the Machine.” Tobias is a designer, writer, and technologist and the author of the book “The Outra...

13 Nov 2025 1h 4min

What if we had fixed social media?

We really enjoyed hearing all of your questions for our annual Ask Us Anything episode. There was one question that kept coming up: what might a different world look like? The broken incentives behind...

6 Nov 2025 16min

Ask Us Anything 2025

It's been another big year in AI. The AI race has accelerated to breakneck speed, with frontier labs pouring hundreds of billions into increasingly powerful models—each one smarter, faster, and more u...

23 Oct 2025 40min

The Crisis That United Humanity—and Why It Matters for AI

In 1985, scientists in Antarctica discovered a hole in the ozone layer that posed a catastrophic threat to life on earth if we didn’t do something about it. Then, something amazing happened: humanity ...

11 Sep 2025 51min
