#157 – Ezra Klein on existential risk from AI and what DC could do about it

In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere and put an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.

In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still want labs to slow down their research while an auditing and licensing scheme is created.

Today's guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years.

Links to learn more, summary and full transcript.

Like many people, he has also taken a big interest in AI this year, writing articles such as “This changes everything.” In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to. So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable.

Out of the ideas on the table right now, Ezra favours a focus on direct government funding — both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are — and embraces the fact that this will slow down the release of models while businesses figure out how their products actually work.

By contrast, he's pessimistic that countries around the world can be coordinated to prevent or delay the deployment of dangerous AI models — at least not without some spectacular AI-related disaster to create such a consensus. And he fears attempts to require licences to train the most powerful ML models will struggle unless they can find a way to exclude, and thereby appease, people working on relatively safe consumer technologies rather than cutting-edge research.

From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra's view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs.

In today's brisk conversation, Ezra and host Rob Wiblin cover the above as well as:

  • Whether it's desirable to slow down AI research
  • The value of engaging with current policy debates even if they don't seem directly important
  • Which AI business models seem more or less dangerous
  • Tensions between people focused on existing vs emergent risks from AI
  • Two major challenges of being a new parent

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell

Technical editing: Milo McGuire

Transcriptions: Katy Moore

Episodes (321)

#54 – OpenAI on publication norms, malicious uses of AI, and general-purpose learning algorithms

OpenAI’s Dactyl is an AI system that can manipulate objects with a human-like robot hand. OpenAI Five is an AI system that can defeat humans at the video game Dota 2. The strange thing is they were bo...

19 Mar 2019 · 2h 53min

#53 - Kelsey Piper on the room for important advocacy within journalism

“Politics. Business. Opinion. Science. Sports. Animal welfare. Existential risk.” Is this a plausible future lineup for major news outlets? Funded by the Rockefeller Foundation and given very little ...

27 Feb 2019 · 2h 34min

Julia Galef and Rob Wiblin on an updated view of the best ways to help humanity

This is a cross-post of an interview Rob did with Julia Galef on her podcast Rationally Speaking. Rob and Julia discuss how the career advice 80,000 Hours gives has changed over the years, and the big...

17 Feb 2019 · 56min

#52 - Glen Weyl on uprooting capitalism and democracy for a just society

Pro-market economists love to wax rhapsodic about the capacity of markets to pull together the valuable local information spread across all of society about what people want and how to make it. But wh...

8 Feb 2019 · 2h 44min

#51 - Martin Gurri on the revolt of the public & crisis of authority in the information age

Politics in rich countries seems to be going nuts. What's the explanation? Rising inequality? The decline of manufacturing jobs? Excessive immigration? Martin Gurri spent decades as a CIA analyst and...

29 Jan 2019 · 2h 31min

#50 - David Denkenberger on how to feed all 8b people through an asteroid/nuclear winter

If an asteroid impact or nuclear winter blocked the sun for years, our inability to grow food would result in billions dying of starvation, right? According to Dr David Denkenberger, co-author of Feed...

27 Dec 2018 · 2h 57min

#49 - Rachel Glennerster on a year's worth of education for 30c & other development 'best buys'

If I told you it's possible to deliver an extra year of ideal primary-level education for under $1, would you believe me? Hopefully not - the claim is absurd on its face. But it may be true nonetheles...

20 Dec 2018 · 1h 35min

#48 - Brian Christian on better living through the wisdom of computer science

Please let us know if we've helped you: fill out our annual impact survey. Ever felt that you were so busy you spent all your time paralysed trying to figure out where to start, and couldn't get much ...

22 Nov 2018 · 3h 15min
