#157 – Ezra Klein on existential risk from AI and what DC could do about it

In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.

In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licensing scheme is created.

Today's guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years.

Links to learn more, summary and full transcript.

Like many people, he has taken a big interest in AI this year, writing articles such as “This changes everything.” In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to. So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable.

Out of the ideas on the table right now, Ezra favours a focus on direct government funding — both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are — and embraces the fact that that will slow down the release of models while businesses figure out how their products actually work.

By contrast, he's pessimistic that countries around the world can be coordinated to prevent or delay the deployment of dangerous AI models — at least not unless some spectacular AI-related disaster creates such a consensus. And he fears that attempts to require licences to train the most powerful ML models will struggle unless they find a way to exempt — and thereby appease — people working on relatively safe consumer technologies rather than cutting-edge research.

From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra's view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs.

In today's brisk conversation, Ezra and host Rob Wiblin cover the above as well as:

  • Whether it's desirable to slow down AI research
  • The value of engaging with current policy debates even if they don't seem directly important
  • Which AI business models seem more or less dangerous
  • Tensions between people focused on existing vs emergent risks from AI
  • Two major challenges of being a new parent

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell

Technical editing: Milo McGuire

Transcriptions: Katy Moore

