#220 – Ryan Greenblatt on the 4 most likely ways for AI to take over, and the case for and against AGI in <8 years

Ryan Greenblatt — lead author on the explosive paper “Alignment faking in large language models” and chief scientist at Redwood Research — thinks there’s a 25% chance that within four years, AI will be able to do everything needed to run an AI company, from writing code to designing experiments to making strategic and business decisions.

As Ryan lays out, AI models are “marching through the human regime”: systems that could handle five-minute tasks two years ago now tackle 90-minute projects. Double that a few more times and we may be automating full jobs rather than just parts of them.
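The arithmetic behind that extrapolation is simple to sketch. A minimal illustration, with assumed numbers: the 7-month doubling time below is a hypothetical figure for illustration, not one stated in the episode, and a 40-hour week stands in for "a full job".

```python
import math

# Toy extrapolation of the "marching through the human regime" trend.
# Assumed, illustrative inputs: task horizons double every 7 months
# (hypothetical; not a figure from the episode), starting from
# 90-minute tasks today.

def doublings_needed(current_minutes: float, target_minutes: float) -> float:
    """Number of doublings taking the task horizon from current to target."""
    return math.log2(target_minutes / current_minutes)

WORK_WEEK_MIN = 40 * 60  # a full-time week, as a proxy for "a full job"

n = doublings_needed(90, WORK_WEEK_MIN)
print(f"{n:.1f} doublings from 90-minute tasks to week-long tasks")
print(f"~{n * 7 / 12:.1f} years at an assumed 7-month doubling time")
```

Under these assumptions, "a few more" doublings — about five — is all it takes, which is why the starting point and doubling time matter so much to the forecast.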

Will setting AI to improve itself lead to an explosive positive feedback loop? Maybe, but maybe not.

The explosive scenario: Once you’ve automated your AI company, you could have the equivalent of 20,000 top researchers, each working 50 times faster than humans with total focus. “You have your AIs, they do a bunch of algorithmic research, they train a new AI, that new AI is smarter and better and more efficient… that new AI does even faster algorithmic research.” In this world, we could see years of AI progress compressed into months or even weeks.

With AIs now doing all of the work of programming their successors and blowing past the human level, Ryan thinks it would be fairly straightforward for them to take over and disempower humanity, if they thought doing so would better achieve their goals. In the interview he lays out the four most likely approaches for them to take.

The linear progress scenario: You automate your company but progress barely accelerates. Why? Multiple reasons, but the most likely is “it could just be that AI R&D research bottlenecks extremely hard on compute.” You’ve got brilliant AI researchers, but they’re all waiting for experiments to run on the same limited set of chips, so can only make modest progress.

Ryan’s median guess splits the difference: perhaps a 20x acceleration that lasts for a few months or years. Transformative, but less extreme than some at the AI companies imagine.

And his 25th percentile case? Progress “just barely faster” than before. All that automation, and all you’ve been able to do is keep pace.

Unfortunately the data we can observe today is so limited that it leaves us with vast error bars. “We’re extrapolating from a regime that we don’t even understand to a wildly different regime,” Ryan believes, “so no one knows.”

But that huge uncertainty means the explosive growth scenario is a plausible one — and the companies building these systems are spending tens of billions to try to make it happen.

In this extensive interview, Ryan elaborates on the above and the policy and technical response necessary to insure us against the possibility that they succeed — a scenario society has barely begun to prepare for.

Summary, video, and full transcript: https://80k.info/rg25

Recorded February 21, 2025.

Chapters:

  • Cold open (00:00:00)
  • Who’s Ryan Greenblatt? (00:01:10)
  • How close are we to automating AI R&D? (00:01:27)
  • Really, though: how capable are today's models? (00:05:08)
  • Why AI companies get automated earlier than others (00:12:35)
  • Most likely ways for AGI to take over (00:17:37)
  • Would AGI go rogue early or bide its time? (00:29:19)
  • The “pause at human level” approach (00:34:02)
  • AI control over AI alignment (00:45:38)
  • Do we have to hope to catch AIs red-handed? (00:51:23)
  • How would a slow AGI takeoff look? (00:55:33)
  • Why might an intelligence explosion not happen for 8+ years? (01:03:32)
  • Key challenges in forecasting AI progress (01:15:07)
  • The bear case on AGI (01:23:01)
  • The change to “compute at inference” (01:28:46)
  • How much has pretraining petered out? (01:34:22)
  • Could we get an intelligence explosion within a year? (01:46:36)
  • Reasons AIs might struggle to replace humans (01:50:33)
  • Things could go insanely fast when we automate AI R&D. Or not. (01:57:25)
  • How fast would the intelligence explosion slow down? (02:11:48)
  • Bottom line for mortals (02:24:33)
  • Six orders of magnitude of progress... what does that even look like? (02:30:34)
  • Neglected and important technical work people should be doing (02:40:32)
  • What's the most promising work in governance? (02:44:32)
  • Ryan's current research priorities (02:47:48)

Tell us what you thought! https://forms.gle/hCjfcXGeLKxm5pLaA

Video editing: Luke Monsour, Simon Monsour, and Dominic Armstrong
Audio engineering: Ben Cordell, Milo McGuire, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore
