AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out

Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI?

With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023.

Check out the full transcript on the 80,000 Hours website.

You can decide whether the views we expressed (and those from guests) then have held up these last two busy years. You’ll hear:

  • Ajeya Cotra on overrated AGI worries
  • Holden Karnofsky on the dangers of aligned AI, why unaligned AI might not kill us, and the power that comes from just making models bigger
  • Ian Morris on why the future must be radically different from the present
  • Nick Joseph on whether his company's internal safety policies are enough
  • Richard Ngo on what everyone gets wrong about how ML models work
  • Tom Davidson on why he believes crazy-sounding explosive growth stories… and Michael Webb on why he doesn’t
  • Carl Shulman on why you’ll prefer robot nannies over human ones
  • Zvi Mowshowitz on why he’s against working at AI companies except in some safety roles
  • Hugo Mercier on why even superhuman AGI won’t be that persuasive
  • Rob Long on the case for and against digital sentience
  • Anil Seth on why he thinks consciousness is probably biological
  • Lewis Bollard on whether AI advances will help or hurt nonhuman animals
  • Rohin Shah on whether humanity’s work ends at the point it creates AGI

And of course, Rob and Luisa also regularly chime in on what they agree and disagree with.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:00:58)
  • Rob & Luisa: Bowerbirds compiling the AI story (00:03:28)
  • Ajeya Cotra on the misalignment stories she doesn’t buy (00:09:16)
  • Rob & Luisa: Agentic AI and designing machine people (00:24:06)
  • Holden Karnofsky on the dangers of even aligned AI, and how we probably won’t all die from misaligned AI (00:39:20)
  • Ian Morris on why we won’t end up living like The Jetsons (00:47:03)
  • Rob & Luisa: It’s not hard for nonexperts to understand we’re playing with fire here (00:52:21)
  • Nick Joseph on whether AI companies’ internal safety policies will be enough (00:55:43)
  • Richard Ngo on the most important misconception in how ML models work (01:03:10)
  • Rob & Luisa: Issues Rob is less worried about now (01:07:22)
  • Tom Davidson on why he buys the explosive economic growth story, despite it sounding totally crazy (01:14:08)
  • Michael Webb on why he’s sceptical about explosive economic growth (01:20:50)
  • Carl Shulman on why people will prefer robot nannies over humans (01:28:25)
  • Rob & Luisa: Should we expect AI-related job loss? (01:36:19)
  • Zvi Mowshowitz on why he thinks it’s a bad idea to work on improving capabilities at cutting-edge AI companies (01:40:06)
  • Holden Karnofsky on the power that comes from just making models bigger (01:45:21)
  • Rob & Luisa: Are risks of AI-related misinformation overblown? (01:49:49)
  • Hugo Mercier on how AI won’t cause misinformation pandemonium (01:58:29)
  • Rob & Luisa: How hard will it actually be to create intelligence? (02:09:08)
  • Robert Long on whether digital sentience is possible (02:15:09)
  • Anil Seth on why he believes in the biological basis of consciousness (02:27:21)
  • Lewis Bollard on whether AI will be good or bad for animal welfare (02:40:52)
  • Rob & Luisa: The most interesting new argument Rob’s heard this year (02:50:37)
  • Rohin Shah on whether AGI will be the last thing humanity ever does (02:57:35)
  • Rob's outro (03:11:02)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions and additional content editing: Katy Moore

Episodes (318)

Emergency pod: Did OpenAI give up, or is this just a new trap? (with Rose Chan Loui)

When attorneys general intervene in corporate affairs, it usually means something has gone seriously wrong. In OpenAI’s case, it appears to have forced a dramatic reversal of the company’s plans to si...

8 May 2025 · 1h 2min

#216 – Ian Dunt on why governments in Britain and elsewhere can't get anything done – and how to fix it

When you have a system where ministers almost never understand their portfolios, civil servants change jobs every few months, and MPs don't grasp parliamentary procedure even after decades in office —...

2 May 2025 · 3h 14min

Serendipity, weird bets, & cold emails that actually work: Career advice from 16 former guests

How do you navigate a career path when the future of work is uncertain? How important is mentorship versus immediate impact? Is it better to focus on your strengths or on the world’s most pressing pro...

24 April 2025 · 2h 18min

#215 – Tom Davidson on how AI-enabled coups could allow a tiny group to seize power

Throughout history, technological revolutions have fundamentally shifted the balance of power in society. The Industrial Revolution created conditions where democracies could flourish for the first ti...

16 April 2025 · 3h 22min

Guilt, imposter syndrome & doing good: 16 past guests share their mental health journeys

"We are aiming for a place where we can decouple the scorecard from our worthiness. It’s of course the case that in trying to optimise the good, we will always be falling short. The question is how mu...

11 April 2025 · 1h 47min

#214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway

Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-leve...

4 April 2025 · 2h 16min

15 expert takes on infosec in the age of AI

"There’s almost no story of the future going well that doesn’t have a part that’s like '…and no evil person steals the AI weights and goes and does evil stuff.' So it has highlighted the importance of...

28 March 2025 · 2h 35min

#213 – Will MacAskill on AI causing a “century in a decade” – and how we're completely unprepared

The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang...

11 March 2025 · 3h 57min