AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out

Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI?

With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023.

Check out the full transcript on the 80,000 Hours website.

You can decide whether the views we (and our guests) expressed back then have held up over these last two busy years. You’ll hear:

  • Ajeya Cotra on overrated AGI worries
  • Holden Karnofsky on the dangers of aligned AI, why unaligned AI might not kill us, and the power that comes from just making models bigger
  • Ian Morris on why the future must be radically different from the present
  • Nick Joseph on whether his company’s internal safety policies are enough
  • Richard Ngo on what everyone gets wrong about how ML models work
  • Tom Davidson on why he believes crazy-sounding explosive growth stories… and Michael Webb on why he doesn’t
  • Carl Shulman on why you’ll prefer robot nannies over human ones
  • Zvi Mowshowitz on why he’s against working at AI companies except in some safety roles
  • Hugo Mercier on why even superhuman AGI won’t be that persuasive
  • Rob Long on the case for and against digital sentience
  • Anil Seth on why he thinks consciousness is probably biological
  • Lewis Bollard on whether AI advances will help or hurt nonhuman animals
  • Rohin Shah on whether humanity’s work ends at the point it creates AGI

And of course, Rob and Luisa also regularly chime in on what they agree and disagree with.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:00:58)
  • Rob & Luisa: Bowerbirds compiling the AI story (00:03:28)
  • Ajeya Cotra on the misalignment stories she doesn’t buy (00:09:16)
  • Rob & Luisa: Agentic AI and designing machine people (00:24:06)
  • Holden Karnofsky on the dangers of even aligned AI, and how we probably won’t all die from misaligned AI (00:39:20)
  • Ian Morris on why we won’t end up living like The Jetsons (00:47:03)
  • Rob & Luisa: It’s not hard for nonexperts to understand we’re playing with fire here (00:52:21)
  • Nick Joseph on whether AI companies’ internal safety policies will be enough (00:55:43)
  • Richard Ngo on the most important misconception in how ML models work (01:03:10)
  • Rob & Luisa: Issues Rob is less worried about now (01:07:22)
  • Tom Davidson on why he buys the explosive economic growth story, despite it sounding totally crazy (01:14:08)
  • Michael Webb on why he’s sceptical about explosive economic growth (01:20:50)
  • Carl Shulman on why people will prefer robot nannies over humans (01:28:25)
  • Rob & Luisa: Should we expect AI-related job loss? (01:36:19)
  • Zvi Mowshowitz on why he thinks it’s a bad idea to work on improving capabilities at cutting-edge AI companies (01:40:06)
  • Holden Karnofsky on the power that comes from just making models bigger (01:45:21)
  • Rob & Luisa: Are risks of AI-related misinformation overblown? (01:49:49)
  • Hugo Mercier on how AI won’t cause misinformation pandemonium (01:58:29)
  • Rob & Luisa: How hard will it actually be to create intelligence? (02:09:08)
  • Robert Long on whether digital sentience is possible (02:15:09)
  • Anil Seth on why he believes in the biological basis of consciousness (02:27:21)
  • Lewis Bollard on whether AI will be good or bad for animal welfare (02:40:52)
  • Rob & Luisa: The most interesting new argument Rob’s heard this year (02:50:37)
  • Rohin Shah on whether AGI will be the last thing humanity ever does (02:57:35)
  • Rob's outro (03:11:02)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions and additional content editing: Katy Moore
