#219 – Toby Ord on graphs AI companies would prefer you didn't (fully) understand

The era of making AI smarter just by making it bigger is ending. But that doesn’t mean progress is slowing down — far from it. AI models continue to get much more powerful, just using very different methods, and those underlying technical changes force a big rethink of what coming years will look like.

Toby Ord — Oxford philosopher and bestselling author of The Precipice — has been tracking these shifts and mapping out the implications both for governments and our lives.

Links to learn more, video, highlights, and full transcript: https://80k.info/to25

As he explains, until recently anyone could access the best AI in the world “for less than the price of a can of Coke.” But unfortunately, that’s over.

What changed? AI companies first made models smarter by throwing a million times as much computing power at them during training, to make them better at predicting the next word. But with high-quality data drying up, that approach petered out in 2024.

So they pivoted to something radically different: instead of training smarter models, they’re giving existing models dramatically more time to think — leading to the rise of the “reasoning models” that are at the frontier today.

The results are impressive, but this extra computing time comes at a cost: OpenAI’s o3 reasoning model achieved stunning results on a famous AI test by writing an Encyclopedia Britannica’s worth of reasoning to solve individual problems — at a cost of over $1,000 per question.

This isn’t just technical trivia: if this improvement method sticks, it will change much about how the AI revolution plays out, starting with the fact that we can expect the rich and powerful to get access to the best AI models well before the rest of us.

Toby and host Rob discuss the implications of all that, plus the return of reinforcement learning (and resulting increase in deception), and Toby's commitment to clarifying the misleading graphs coming out of AI companies — to separate the snake oil and fads from the reality of what's likely a "transformative moment in human history."

Recorded on May 23, 2025.

Chapters:

  • Cold open (00:00:00)
  • Toby Ord is back — for a 4th time! (00:01:20)
  • Everything has changed (and changed again) since 2020 (00:01:37)
  • Is x-risk up or down? (00:07:47)
  • The new scaling era: compute at inference (00:09:12)
  • Inference scaling means less concentration (00:31:21)
  • Will rich people get access to AGI first? Will the rest of us even know? (00:35:11)
  • The new regime makes 'compute governance' harder (00:41:08)
  • How 'IDA' might let AI blast past human level — or not (00:50:14)
  • Reinforcement learning brings back 'reward hacking' agents (01:04:56)
  • Will we get warning shots? Will they even help? (01:14:41)
  • The scaling paradox (01:22:09)
  • Misleading charts from AI companies (01:30:55)
  • Policy debates should dream much bigger (01:43:04)
  • Scientific moratoriums have worked before (01:56:04)
  • Might AI 'go rogue' early on? (02:13:16)
  • Lamps are regulated much more than AI (02:20:55)
  • Companies made a strategic error shooting down SB 1047 (02:29:57)
  • Companies should build in emergency brakes for their AI (02:35:49)
  • Toby's bottom lines (02:44:32)


Tell us what you thought! https://forms.gle/enUSk8HXiCrqSA9J8

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore
