#157 – Ezra Klein on existential risk from AI and what DC could do about it

In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's a 'near zero' chance that doing so would ignite the atmosphere and put an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.

In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licensing scheme is created.

Today's guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years.

Links to learn more, summary and full transcript.

Like many people, he has also taken a big interest in AI this year, writing articles such as “This Changes Everything.” In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to. So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable.

Out of the ideas on the table right now, Ezra favours a focus on direct government funding — both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are — and embraces the fact that that will slow down the release of models while businesses figure out how their products actually work.

By contrast, he's pessimistic that it's possible to coordinate countries around the world to agree to prevent or delay the deployment of dangerous AI models — at least not unless there's some spectacular AI-related disaster to create such a consensus. And he fears attempts to require licences to train the most powerful ML models will struggle unless they can find a way to exclude and thereby appease people working on relatively safe consumer technologies rather than cutting-edge research.

From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra's view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs.

In today's brisk conversation, Ezra and host Rob Wiblin cover the above as well as:

  • Whether it's desirable to slow down AI research
  • The value of engaging with current policy debates even if they don't seem directly important
  • Which AI business models seem more or less dangerous
  • Tensions between people focused on existing vs emergent risks from AI
  • Two major challenges of being a new parent

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell

Technical editing: Milo McGuire

Transcriptions: Katy Moore

