The case for and against AGI by 2030 (article by Benjamin Todd)

More and more people have been saying that we might have AGI (artificial general intelligence) before 2030. Is that really plausible?

This article by Benjamin Todd looks into the cases for and against, and summarises the key things you need to know to understand the debate. You can see all the images and many footnotes in the original article on the 80,000 Hours website.

In a nutshell:

  • Four key factors are driving AI progress: larger base models, teaching models to reason, increasing models’ thinking time, and building agent scaffolding for multi-step tasks. These are underpinned by increasing computational power to run and train AI systems, as well as increasing human capital going into algorithmic research.
  • All of these drivers are set to continue until 2028 and perhaps until 2032.
  • This means we should expect major further gains in AI performance. We don’t know how large they’ll be, but extrapolating recent trends on benchmarks suggests we’ll reach systems with beyond-human performance in coding and scientific reasoning, and that can autonomously complete multi-week projects.
  • Whether we call these systems ‘AGI’ or not, they could be sufficient to enable AI research itself, robotics, the technology industry, and scientific research to accelerate — leading to transformative impacts.
  • Alternatively, AI might fail to overcome issues with ill-defined, high-context work over long time horizons and remain a tool (even if much improved compared to today).
  • Increasing AI performance requires exponential growth in investment and the research workforce. At current rates, we will likely start to reach bottlenecks around 2030. Simplifying a bit, that means we’ll likely either reach AGI by around 2030 or see progress slow significantly. Hybrid scenarios are also possible, but the next five years seem especially crucial.
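The bottleneck argument above can be made concrete with a back-of-envelope calculation. The sketch below uses purely illustrative numbers — the starting spend, growth rate, and ceiling are assumptions for the sake of the example, not figures from the article — but it shows why exponential growth in investment runs into limits within a handful of years:

```python
# Illustrative back-of-envelope: if frontier-AI investment grows ~3x per year
# (an assumed rate, not a figure from the article), how long until it exceeds
# a plausible ceiling such as ~$1 trillion/year (roughly 1% of world GDP)?

def years_until_ceiling(current_spend_usd, growth_factor, ceiling_usd):
    """Return the number of whole years of growth before spend exceeds the ceiling."""
    years = 0
    spend = current_spend_usd
    while spend < ceiling_usd:
        spend *= growth_factor
        years += 1
    return years

# Assumed starting point: ~$30B/year of frontier training investment (illustrative).
print(years_until_ceiling(30e9, 3.0, 1e12))  # → 4, i.e. a ceiling around 2029-2030
```

Whatever exact numbers you plug in, a 3x-per-year exponential only has room for a few more doublings before spending becomes a macroeconomically significant share of GDP — which is the intuition behind "bottlenecks around 2030."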

Chapters:

  • Introduction (00:00:00)
  • The case for AGI by 2030 (00:00:33)
  • The article in a nutshell (00:04:04)
  • Section 1: What's driven recent AI progress? (00:05:46)
  • How we got here: the deep learning era (00:05:52)
  • Where are we now: the four key drivers (00:07:45)
  • Driver 1: Scaling pretraining (00:08:57)
  • Algorithmic efficiency (00:12:14)
  • How much further can pretraining scale? (00:14:22)
  • Driver 2: Training the models to reason (00:16:15)
  • How far can scaling reasoning continue? (00:22:06)
  • Driver 3: Increasing how long models think (00:25:01)
  • Driver 4: Building better agents (00:28:00)
  • How far can agent improvements continue? (00:33:40)
  • Section 2: How good will AI become by 2030? (00:35:59)
  • Trend extrapolation of AI capabilities (00:37:42)
  • What jobs would these systems help with? (00:39:59)
  • Software engineering (00:40:50)
  • Scientific research (00:42:13)
  • AI research (00:43:21)
  • What's the case against this? (00:44:30)
  • Additional resources on the sceptical view (00:49:18)
  • When do the 'experts' expect AGI? (00:49:50)
  • Section 3: Why the next 5 years are crucial (00:51:06)
  • Bottlenecks around 2030 (00:52:10)
  • Two potential futures for AI (00:56:02)
  • Conclusion (00:58:05)
  • Thanks for listening (00:59:27)

Audio engineering: Dominic Armstrong
Music: Ben Cordell

