The case for and against AGI by 2030 (article by Benjamin Todd)

More and more people have been saying that we might have AGI (artificial general intelligence) before 2030. Is that really plausible?

This article by Benjamin Todd looks into the cases for and against, and summarises the key things you need to know to understand the debate. You can see all the images and many footnotes in the original article on the 80,000 Hours website.

In a nutshell:

  • Four key factors are driving AI progress: larger base models, teaching models to reason, increasing models’ thinking time, and building agent scaffolding for multi-step tasks. These are underpinned by increasing computational power to run and train AI systems, as well as increasing human capital going into algorithmic research.
  • All of these drivers are set to continue until 2028 and perhaps until 2032.
  • This means we should expect major further gains in AI performance. We don’t know how large they’ll be, but extrapolating recent trends on benchmarks suggests we’ll reach systems with beyond-human performance in coding and scientific reasoning, and that can autonomously complete multi-week projects.
  • Whether we call these systems 'AGI' or not, they could be sufficient to accelerate AI research itself, robotics, the technology industry, and scientific research, leading to transformative impacts.
  • Alternatively, AI might fail to overcome issues with ill-defined, high-context work over long time horizons and remain a tool (even if much improved compared to today).
  • Increasing AI performance requires exponential growth in investment and the research workforce. At current rates, we will likely start to reach bottlenecks around 2030. Simplifying a bit, that means we’ll likely either reach AGI by around 2030 or see progress slow significantly. Hybrid scenarios are also possible, but the next five years seem especially crucial. (A toy version of both extrapolations appears after this list.)
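
To make the extrapolation logic concrete, here is a minimal Python sketch of the two calculations. Every number in it (the starting task horizon, the seven-month doubling time, the 4x/year compute growth, and the resource ceiling) is an illustrative assumption for this summary, not a figure from the article:

```python
from math import log2

# Toy versions of the two extrapolations above. Every number here is
# an illustrative assumption, not a figure from the article.

# 1) Benchmark trend: assume the length of tasks AI agents can complete
#    doubles every `doubling_months` months.
start_year = 2025
start_horizon_hours = 1.0   # assumed task horizon at start_year
doubling_months = 7         # assumed doubling time
target_hours = 160.0        # a multi-week project (~4 working weeks)

months = log2(target_hours / start_horizon_hours) * doubling_months
print(f"Multi-week tasks reached around {start_year + months / 12:.0f}")

# 2) Bottleneck: assume frontier training compute grows ~4x per year
#    until it hits a fixed ceiling set by chips, energy, and capital.
growth_per_year = 4.0
ceiling_multiple = 5_000    # assumed headroom over today's frontier runs

year, compute = 2024, 1.0
while compute * growth_per_year <= ceiling_multiple:
    year += 1
    compute *= growth_per_year
print(f"At {growth_per_year:.0f}x/year, growth stalls around {year}")
```

Varying these assumed rates and ceilings shifts both dates by a few years in either direction, which is why the drivers are described as continuing until 2028 and perhaps 2032 rather than to a single cut-off year.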

Chapters:

  • Introduction (00:00:00)
  • The case for AGI by 2030 (00:00:33)
  • The article in a nutshell (00:04:04)
  • Section 1: What's driven recent AI progress? (00:05:46)
  • How we got here: the deep learning era (00:05:52)
  • Where are we now: the four key drivers (00:07:45)
  • Driver 1: Scaling pretraining (00:08:57)
  • Algorithmic efficiency (00:12:14)
  • How much further can pretraining scale? (00:14:22)
  • Driver 2: Training the models to reason (00:16:15)
  • How far can scaling reasoning continue? (00:22:06)
  • Driver 3: Increasing how long models think (00:25:01)
  • Driver 4: Building better agents (00:28:00)
  • How far can agent improvements continue? (00:33:40)
  • Section 2: How good will AI become by 2030? (00:35:59)
  • Trend extrapolation of AI capabilities (00:37:42)
  • What jobs would these systems help with? (00:39:59)
  • Software engineering (00:40:50)
  • Scientific research (00:42:13)
  • AI research (00:43:21)
  • What's the case against this? (00:44:30)
  • Additional resources on the sceptical view (00:49:18)
  • When do the 'experts' expect AGI? (00:49:50)
  • Section 3: Why the next 5 years are crucial (00:51:06)
  • Bottlenecks around 2030 (00:52:10)
  • Two potential futures for AI (00:56:02)
  • Conclusion (00:58:05)
  • Thanks for listening (00:59:27)

Audio engineering: Dominic Armstrong
Music: Ben Cordell
