AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out

Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI?

With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023.

Check out the full transcript on the 80,000 Hours website.

You can decide whether the views we and our guests expressed back then have held up over these last two busy years. You’ll hear:

  • Ajeya Cotra on overrated AGI worries
  • Holden Karnofsky on the dangers of aligned AI, why unaligned AI might not kill us, and the power that comes from just making models bigger
  • Ian Morris on why the future must be radically different from the present
  • Nick Joseph on whether his company’s internal safety policies are enough
  • Richard Ngo on what everyone gets wrong about how ML models work
  • Tom Davidson on why he believes crazy-sounding explosive growth stories… and Michael Webb on why he doesn’t
  • Carl Shulman on why you’ll prefer robot nannies over human ones
  • Zvi Mowshowitz on why he’s against working at AI companies except in some safety roles
  • Hugo Mercier on why even superhuman AGI won’t be that persuasive
  • Robert Long on the case for and against digital sentience
  • Anil Seth on why he thinks consciousness is probably biological
  • Lewis Bollard on whether AI advances will help or hurt nonhuman animals
  • Rohin Shah on whether humanity’s work ends at the point it creates AGI

And of course, Rob and Luisa also regularly chime in on what they agree and disagree with.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:00:58)
  • Rob & Luisa: Bowerbirds compiling the AI story (00:03:28)
  • Ajeya Cotra on the misalignment stories she doesn’t buy (00:09:16)
  • Rob & Luisa: Agentic AI and designing machine people (00:24:06)
  • Holden Karnofsky on the dangers of even aligned AI, and how we probably won’t all die from misaligned AI (00:39:20)
  • Ian Morris on why we won’t end up living like The Jetsons (00:47:03)
  • Rob & Luisa: It’s not hard for nonexperts to understand we’re playing with fire here (00:52:21)
  • Nick Joseph on whether AI companies’ internal safety policies will be enough (00:55:43)
  • Richard Ngo on the most important misconception in how ML models work (01:03:10)
  • Rob & Luisa: Issues Rob is less worried about now (01:07:22)
  • Tom Davidson on why he buys the explosive economic growth story, despite it sounding totally crazy (01:14:08)
  • Michael Webb on why he’s sceptical about explosive economic growth (01:20:50)
  • Carl Shulman on why people will prefer robot nannies over humans (01:28:25)
  • Rob & Luisa: Should we expect AI-related job loss? (01:36:19)
  • Zvi Mowshowitz on why he thinks it’s a bad idea to work on improving capabilities at cutting-edge AI companies (01:40:06)
  • Holden Karnofsky on the power that comes from just making models bigger (01:45:21)
  • Rob & Luisa: Are risks of AI-related misinformation overblown? (01:49:49)
  • Hugo Mercier on how AI won’t cause misinformation pandemonium (01:58:29)
  • Rob & Luisa: How hard will it actually be to create intelligence? (02:09:08)
  • Robert Long on whether digital sentience is possible (02:15:09)
  • Anil Seth on why he believes in the biological basis of consciousness (02:27:21)
  • Lewis Bollard on whether AI will be good or bad for animal welfare (02:40:52)
  • Rob & Luisa: The most interesting new argument Rob’s heard this year (02:50:37)
  • Rohin Shah on whether AGI will be the last thing humanity ever does (02:57:35)
  • Rob's outro (03:11:02)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions and additional content editing: Katy Moore
