#157 – Ezra Klein on existential risk from AI and what DC could do about it
80,000 Hours Podcast, 24 July 2023

In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.

In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licensing scheme is created.

Today's guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years.

Links to learn more, summary and full transcript.

Like many people, he has also taken a big interest in AI this year, writing articles such as “This Changes Everything.” In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to. So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable.

Out of the ideas on the table right now, Ezra favours a focus on direct government funding — both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are — and embraces the fact that this will slow down the release of models while businesses figure out how their products actually work.

By contrast, he's pessimistic that it's possible to coordinate countries around the world to agree to prevent or delay the deployment of dangerous AI models — at least not unless there's some spectacular AI-related disaster to create such a consensus. And he fears attempts to require licences to train the most powerful ML models will struggle unless they can find a way to exclude and thereby appease people working on relatively safe consumer technologies rather than cutting-edge research.

From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra's view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs.

In today's brisk conversation, Ezra and host Rob Wiblin cover the above as well as:

  • Whether it's desirable to slow down AI research
  • The value of engaging with current policy debates even if they don't seem directly important
  • Which AI business models seem more or less dangerous
  • Tensions between people focused on existing vs emergent risks from AI
  • Two major challenges of being a new parent

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore
