#219 – Toby Ord on graphs AI companies would prefer you didn't (fully) understand

The era of making AI smarter just by making it bigger is ending. But that doesn’t mean progress is slowing down — far from it. AI models continue to get much more powerful, just using very different methods, and those underlying technical changes force a big rethink of what coming years will look like.

Toby Ord — Oxford philosopher and bestselling author of The Precipice — has been tracking these shifts and mapping out the implications both for governments and our lives.

Links to learn more, video, highlights, and full transcript: https://80k.info/to25

As he explains, until recently anyone could access the best AI in the world “for less than the price of a can of Coke.” But unfortunately, that’s over.

What changed? AI companies first made models smarter by throwing a million times as much computing power at them during training, to make them better at predicting the next word. But with high-quality data drying up, that approach petered out in 2024.

So they pivoted to something radically different: instead of training smarter models, they’re giving existing models dramatically more time to think — leading to the rise in “reasoning models” that are at the frontier today.

The results are impressive, but this extra computing time comes at a cost: OpenAI’s o3 reasoning model achieved stunning results on a famous AI test by writing an Encyclopedia Britannica’s worth of reasoning to solve individual problems — at a cost of over $1,000 per question.

This isn’t just technical trivia: if this improvement method sticks, it will change much about how the AI revolution plays out, starting with the fact that we can expect the rich and powerful to get access to the best AI models well before the rest of us.

Toby and host Rob discuss the implications of all that, plus the return of reinforcement learning (and resulting increase in deception), and Toby's commitment to clarifying the misleading graphs coming out of AI companies — to separate the snake oil and fads from the reality of what's likely a "transformative moment in human history."

Recorded on May 23, 2025.

Chapters:

  • Cold open (00:00:00)
  • Toby Ord is back — for a 4th time! (00:01:20)
  • Everything has changed (and changed again) since 2020 (00:01:37)
  • Is x-risk up or down? (00:07:47)
  • The new scaling era: compute at inference (00:09:12)
  • Inference scaling means less concentration (00:31:21)
  • Will rich people get access to AGI first? Will the rest of us even know? (00:35:11)
  • The new regime makes 'compute governance' harder (00:41:08)
  • How 'IDA' might let AI blast past human level — or not (00:50:14)
  • Reinforcement learning brings back 'reward hacking' agents (01:04:56)
  • Will we get warning shots? Will they even help? (01:14:41)
  • The scaling paradox (01:22:09)
  • Misleading charts from AI companies (01:30:55)
  • Policy debates should dream much bigger (01:43:04)
  • Scientific moratoriums have worked before (01:56:04)
  • Might AI 'go rogue' early on? (02:13:16)
  • Lamps are regulated much more than AI (02:20:55)
  • Companies made a strategic error shooting down SB 1047 (02:29:57)
  • Companies should build in emergency brakes for their AI (02:35:49)
  • Toby's bottom lines (02:44:32)


Tell us what you thought! https://forms.gle/enUSk8HXiCrqSA9J8

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore
