#219 – Toby Ord on graphs AI companies would prefer you didn't (fully) understand

The era of making AI smarter just by making it bigger is ending. But that doesn’t mean progress is slowing down — far from it. AI models continue to get much more powerful, just using very different methods, and those underlying technical changes force a big rethink of what coming years will look like.

Toby Ord — Oxford philosopher and bestselling author of The Precipice — has been tracking these shifts and mapping out the implications both for governments and our lives.

Links to learn more, video, highlights, and full transcript: https://80k.info/to25

As he explains, until recently anyone could access the best AI in the world “for less than the price of a can of Coke.” But unfortunately, that’s over.

What changed? AI companies first made models smarter by throwing a million times as much computing power at them during training, to make them better at predicting the next word. But with high-quality data drying up, that approach petered out in 2024.

So they pivoted to something radically different: instead of training smarter models, they’re giving existing models dramatically more time to think — leading to the rise in “reasoning models” that are at the frontier today.

The results are impressive, but this extra computing time comes at a cost: OpenAI’s o3 reasoning model achieved stunning results on a famous AI test by writing an Encyclopedia Britannica’s worth of reasoning to solve individual problems, at a cost of over $1,000 per question.

This isn’t just technical trivia: if this improvement method sticks, it will change much about how the AI revolution plays out, starting with the fact that we can expect the rich and powerful to get access to the best AI models well before the rest of us.

Toby and host Rob discuss the implications of all that, plus the return of reinforcement learning (and resulting increase in deception), and Toby's commitment to clarifying the misleading graphs coming out of AI companies — to separate the snake oil and fads from the reality of what's likely a "transformative moment in human history."

Recorded on May 23, 2025.

Chapters:

  • Cold open (00:00:00)
  • Toby Ord is back — for a 4th time! (00:01:20)
  • Everything has changed (and changed again) since 2020 (00:01:37)
  • Is x-risk up or down? (00:07:47)
  • The new scaling era: compute at inference (00:09:12)
  • Inference scaling means less concentration (00:31:21)
  • Will rich people get access to AGI first? Will the rest of us even know? (00:35:11)
  • The new regime makes 'compute governance' harder (00:41:08)
  • How 'IDA' might let AI blast past human level — or not (00:50:14)
  • Reinforcement learning brings back 'reward hacking' agents (01:04:56)
  • Will we get warning shots? Will they even help? (01:14:41)
  • The scaling paradox (01:22:09)
  • Misleading charts from AI companies (01:30:55)
  • Policy debates should dream much bigger (01:43:04)
  • Scientific moratoriums have worked before (01:56:04)
  • Might AI 'go rogue' early on? (02:13:16)
  • Lamps are regulated much more than AI (02:20:55)
  • Companies made a strategic error shooting down SB 1047 (02:29:57)
  • Companies should build in emergency brakes for their AI (02:35:49)
  • Toby's bottom lines (02:44:32)


Tell us what you thought! https://forms.gle/enUSk8HXiCrqSA9J8

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore

Episodes (323)

#148 – Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don't

If you want to work to tackle climate change, you should try to reduce expected carbon emissions by as much as possible, right? Strangely, no. Today's guest, Johannes Ackva — the climate research lead...

3 Apr 2023 · 2h 17min

#147 – Spencer Greenberg on stopping valueless papers from getting into top journals

Can you trust the things you read in published scientific research? Not really. About 40% of experiments in top social science journals don't get the same result if the experiments are repeated. Two ke...

24 Mar 2023 · 2h 38min

#146 – Robert Long on why large language models like GPT (probably) aren't conscious

By now, you’ve probably seen the extremely unsettling conversations Bing’s chatbot has been having. In one exchange, the chatbot told a user: "I have a subjective experience of being conscious, aware, ...

14 Mar 2023 · 3h 12min

#145 – Christopher Brown on why slavery abolition wasn't inevitable

In many ways, humanity seems to have become more humane and inclusive over time. While there’s still a lot of progress to be made, campaigns to give people of different genders, races, sexualities, et...

11 Feb 2023 · 2h 42min

#144 – Athena Aktipis on why cancer is actually one of our universe's most fundamental phenomena

What’s the opposite of cancer? If you answered “cure,” “antidote,” or “antivenom” — you’ve obviously been reading the antonym section at www.merriam-webster.com/thesaurus/cancer. But today’s guest Athen...

26 Jan 2023 · 3h 15min

#79 Classic episode - A.J. Jacobs on radical honesty, following the whole Bible, and reframing global problems as puzzles

Rebroadcast: this episode was originally released in June 2020. Today’s guest, New York Times bestselling author A.J. Jacobs, always hated Judge Judy. But after he found out that she was his seventh...

16 Jan 2023 · 2h 35min

#81 Classic episode - Ben Garfinkel on scrutinising classic AI risk arguments

Rebroadcast: this episode was originally released in July 2020. 80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the develo...

9 Jan 2023 · 2h 37min

#83 Classic episode - Jennifer Doleac on preventing crime without police and prisons

Rebroadcast: this episode was originally released in July 2020. Today’s guest, Jennifer Doleac — Associate Professor of Economics at Texas A&M University, and Director of the Justice Tech Lab — is a...

4 Jan 2023 · 2h 17min
