#217 – Beth Barnes on the most important graph in AI right now — and the 7-month rule that governs its progress

AI models today have a 50% chance of successfully completing a task that would take an expert human one hour. Seven months ago, that number was roughly 30 minutes — and seven months before that, 15 minutes. (See graph.)

These are substantial, multi-step tasks requiring sustained focus: building web applications, conducting machine learning research, or solving complex programming challenges.

Today’s guest, Beth Barnes, is CEO of METR (Model Evaluation & Threat Research) — the leading organisation measuring these capabilities.

Links to learn more, video, highlights, and full transcript: https://80k.info/bb

Beth's team has been timing how long it takes skilled humans to complete projects of varying length, then seeing how AI models perform on the same work. The resulting paper “Measuring AI ability to complete long tasks” made waves by revealing that the planning horizon of AI models was doubling roughly every seven months. It's regarded by many as the most useful AI forecasting work in years.
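
The "7-month rule" in the headline is just exponential growth in that task horizon. Here is a minimal, purely illustrative sketch of the arithmetic (assuming a 60-minute horizon today and a constant doubling time; METR's actual estimates come from fitting many tasks and models and carry wide error bars):

    # Illustrative extrapolation of the "7-month doubling" rule.
    # Assumes a 60-minute horizon today and a constant doubling time;
    # the real METR estimates are approximate, with wide error bars.
    DOUBLING_TIME_MONTHS = 7
    HORIZON_NOW_MINUTES = 60

    def horizon_minutes(months_from_now: float) -> float:
        """Task length (at 50% success) after the given number of months."""
        return HORIZON_NOW_MINUTES * 2 ** (months_from_now / DOUBLING_TIME_MONTHS)

    for months in (-14, -7, 0, 7, 14):
        print(f"{months:+4d} months: ~{horizon_minutes(months):.0f} minutes")

Run backwards, this reproduces the 15- and 30-minute figures above; run forwards, the same arithmetic would imply roughly four-hour tasks in 14 months, though the trend could of course bend either way.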

Beth has found models can already do “meaningful work” improving themselves, and she wouldn’t be surprised if AI models were able to autonomously self-improve in as little as two years from now — in fact, “It seems hard to rule out even shorter [timelines]. Is there 1% chance of this happening in six, nine months? Yeah, that seems pretty plausible.”

Beth adds:

The sense I really want to dispel is, “But the experts must be on top of this. The experts would be telling us if it really was time to freak out.” The experts are not on top of this. Inasmuch as there are experts, they are saying that this is a concerning risk. … And to the extent that I am an expert, I am an expert telling you you should freak out.


What did you think of this episode? https://forms.gle/sFuDkoznxBcHPVmX6


Chapters:

  • Cold open (00:00:00)
  • Who is Beth Barnes? (00:01:19)
  • Can we see AI scheming in the chain of thought? (00:01:52)
  • The chain of thought is essential for safety checking (00:08:58)
  • Alignment faking in large language models (00:12:24)
  • We have to test models' honesty even before they're used inside AI companies (00:16:48)
  • We have to test models when unruly and unconstrained (00:25:57)
  • Every 7 months models can do tasks twice as long (00:30:40)
  • METR's research finds AIs are solid at AI research already (00:49:33)
  • AI may turn out to be strong at novel and creative research (00:55:53)
  • When can we expect an algorithmic 'intelligence explosion'? (00:59:11)
  • Recursively self-improving AI might even be here in two years — which is alarming (01:05:02)
  • Could evaluations backfire by increasing AI hype and racing? (01:11:36)
  • Governments first ignore new risks, but can overreact once they arrive (01:26:38)
  • Do we need external auditors doing AI safety tests, not just the companies themselves? (01:35:10)
  • A case against safety-focused people working at frontier AI companies (01:48:44)
  • The new, more dire situation has forced changes to METR's strategy (02:02:29)
  • AI companies are being locally reasonable, but globally reckless (02:10:31)
  • Overrated: Interpretability research (02:15:11)
  • Underrated: Developing more narrow AIs (02:17:01)
  • Underrated: Helping humans judge confusing model outputs (02:23:36)
  • Overrated: Major AI companies' contributions to safety research (02:25:52)
  • Could we have a science of translating AI models' nonhuman language or neuralese? (02:29:24)
  • Could we ban using AI to enhance AI, or is that just naive? (02:31:47)
  • Open-weighting models is often good, and Beth has changed her attitude to it (02:37:52)
  • What we can learn about AGI from the nuclear arms race (02:42:25)
  • Infosec is so bad that no models are truly closed-weight (02:57:24)
  • AI is more like bioweapons because it undermines the leading power (03:02:02)
  • What METR can do best that others can't (03:12:09)
  • What METR isn't doing that other people have to step up and do (03:27:07)
  • What research METR plans to do next (03:32:09)

This episode was originally recorded on February 17, 2025.

Video editing: Luke Monsour and Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore

