#157 – Ezra Klein on existential risk from AI and what DC could do about it

In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.

In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licensing scheme is created.

Today's guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years.

Links to learn more, summary and full transcript.

Like many people, he has taken a big interest in AI this year, writing articles such as “This changes everything.” In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to. So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable.

Out of the ideas on the table right now, Ezra favours a focus on direct government funding — both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are — and embraces the fact that that will slow down the release of models while businesses figure out how their products actually work.

By contrast, he's pessimistic that it's possible to coordinate countries around the world to agree to prevent or delay the deployment of dangerous AI models — at least not unless there's some spectacular AI-related disaster to create such a consensus. And he fears attempts to require licences to train the most powerful ML models will struggle unless they can find a way to exclude and thereby appease people working on relatively safe consumer technologies rather than cutting-edge research.

From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra's view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs.

In today's brisk conversation, Ezra and host Rob Wiblin cover the above as well as:

  • Whether it's desirable to slow down AI research
  • The value of engaging with current policy debates even if they don't seem directly important
  • Which AI business models seem more or less dangerous
  • Tensions between people focused on existing vs emergent risks from AI
  • Two major challenges of being a new parent

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell

Technical editing: Milo McGuire

Transcriptions: Katy Moore

Episodes (321)

#128 – Chris Blattman on the five reasons wars happen

In nature, animals roar and bare their teeth to intimidate adversaries — but one side usually backs down, and real fights are rare. The wisdom of evolution is that the risk of violence is just too gre...

28 Apr 2022 · 2h 46min

#127 – Sam Bankman-Fried on taking a high-risk approach to crypto and doing good

On this episode of the show, host Rob Wiblin interviews Sam Bankman-Fried. This interview was recorded in February 2022, and released in April 2022. But on November 11 2022, Sam Bankman-Fried's co...

14 Apr 2022 · 3h 20min

#126 – Bryan Caplan on whether lazy parenting is OK, what really helps workers, and betting on beliefs

Everybody knows that good parenting has a big impact on how kids turn out. Except that maybe they don't, because it doesn't. Incredible though it might seem, according to today's guest — economist Brya...

5 Apr 2022 · 2h 15min

#125 – Joan Rohlfing on how to avoid catastrophic nuclear blunders

Since the Soviet Union split into different countries in 1991, the pervasive fear of catastrophe that people lived with for decades has gradually faded from memory, and nuclear warhead stockpiles have...

29 Mar 2022 · 2h 13min

#124 – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions

If someone said a global health and development programme was sustainable, participatory, and holistic, you'd have to guess that they were saying something positive. But according to today's guest Kar...

21 Mar 2022 · 3h 9min

#123 – Samuel Charap on why Putin invaded Ukraine, the risk of escalation, and how to prevent disaster

Russia's invasion of Ukraine is devastating the lives of Ukrainians, and so long as it continues there's a risk that the conflict could escalate to include other countries or the use of nuclear weapon...

14 Mar 2022 · 59min

#122 – Michelle Hutchinson & Habiba Islam on balancing competing priorities and other themes from our 1-on-1 careers advising

One of 80,000 Hours' main services is our free one-on-one careers advising, which we provide to around 1,000 people a year. Today we speak to two of our advisors, who have each spoken to hundreds of p...

9 Mar 2022 · 1h 36min

Introducing 80k After Hours

Today we're launching a new podcast called 80k After Hours. Like this show, it'll mostly explore the best ways to do good — and some episodes will be even more laser-focused on careers than mos...

1 Mar 2022 · 13min
