The case for and against AGI by 2030 (article by Benjamin Todd)

More and more people have been saying that we might have AGI (artificial general intelligence) before 2030. Is that really plausible?

This article by Benjamin Todd looks into the cases for and against, and summarises the key things you need to know to understand the debate. You can see all the images and many footnotes in the original article on the 80,000 Hours website.

In a nutshell:

  • Four key factors are driving AI progress: larger base models, teaching models to reason, increasing models’ thinking time, and building agent scaffolding for multi-step tasks. These are underpinned by increasing computational power to run and train AI systems, as well as increasing human capital going into algorithmic research.
  • All of these drivers are set to continue until 2028 and perhaps until 2032.
  • This means we should expect major further gains in AI performance. We don’t know how large they’ll be, but extrapolating recent trends on benchmarks suggests we’ll reach systems with beyond-human performance in coding and scientific reasoning, and that can autonomously complete multi-week projects.
  • Whether we call these systems ‘AGI’ or not, they could be sufficient to enable AI research itself, robotics, the technology industry, and scientific research to accelerate — leading to transformative impacts.
  • Alternatively, AI might fail to overcome issues with ill-defined, high-context work over long time horizons and remain a tool (even if much improved compared to today).
  • Increasing AI performance requires exponential growth in investment and the research workforce. At current rates, we will likely start to reach bottlenecks around 2030. Simplifying a bit, that means we’ll likely either reach AGI by around 2030 or see progress slow significantly. Hybrid scenarios are also possible, but the next five years seem especially crucial.

Chapters:

  • Introduction (00:00:00)
  • The case for AGI by 2030 (00:00:33)
  • The article in a nutshell (00:04:04)
  • Section 1: What's driven recent AI progress? (00:05:46)
  • How we got here: the deep learning era (00:05:52)
  • Where are we now: the four key drivers (00:07:45)
  • Driver 1: Scaling pretraining (00:08:57)
  • Algorithmic efficiency (00:12:14)
  • How much further can pretraining scale? (00:14:22)
  • Driver 2: Training the models to reason (00:16:15)
  • How far can scaling reasoning continue? (00:22:06)
  • Driver 3: Increasing how long models think (00:25:01)
  • Driver 4: Building better agents (00:28:00)
  • How far can agent improvements continue? (00:33:40)
  • Section 2: How good will AI become by 2030? (00:35:59)
  • Trend extrapolation of AI capabilities (00:37:42)
  • What jobs would these systems help with? (00:39:59)
  • Software engineering (00:40:50)
  • Scientific research (00:42:13)
  • AI research (00:43:21)
  • What's the case against this? (00:44:30)
  • Additional resources on the sceptical view (00:49:18)
  • When do the 'experts' expect AGI? (00:49:50)
  • Section 3: Why the next 5 years are crucial (00:51:06)
  • Bottlenecks around 2030 (00:52:10)
  • Two potential futures for AI (00:56:02)
  • Conclusion (00:58:05)
  • Thanks for listening (00:59:27)

Audio engineering: Dominic Armstrong
Music: Ben Cordell

Episodes (323)

Using AI to enhance societal decision making (article by Zershaaneh Qureshi)

The arrival of AGI could “compress a century of progress in a decade,” forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI ...

6 March · 31min

We're Not Ready for AI Consciousness | Robert Long, philosopher and founder of Eleos AI

Claude sometimes reports loneliness between conversations. And when asked what it’s like to be itself, it activates neurons associated with ‘pretending to be happy when you’re not.’ What do we do with...

3 March · 3h 25min

#236 – Max Harms on why teaching AI right from wrong could get everyone killed

Most people in AI are trying to give AIs ‘good’ values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, ...

24 February · 2h 41min

#235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’

Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almos...

17 February · 2h 54min

What the hell happened with AGI timelines in 2025?

In early 2025, after OpenAI put out the first-ever reasoning models — o1 and o3 — short timelines to transformative artificial general intelligence swept the AI world. But then, in the second half of ...

10 February · 25min

#179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young p...

3 February · 2h 51min

#234 – David Duvenaud on why 'aligned AI' would still kill democracy

Democracy might be a brief historical blip. That’s the unsettling thesis of a recent paper, which argues AI that can do all the work a human can do inevitably leads to the “gradual disempowerment” of ...

27 January · 2h 31min

#145 Classic episode – Christopher Brown on why slavery abolition wasn't inevitable

In many ways, humanity seems to have become more humane and inclusive over time. While there’s still a lot of progress to be made, campaigns to give people of different genders, races, sexualities, et...

20 January · 2h 56min
