2025 Highlight-o-thon: Oops! All Bests

It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode we recorded this year, including:

  • Kyle Fish explaining how Anthropic’s AI Claude descends into spiritual woo when left to talk to itself
  • Ian Dunt on why the unelected House of Lords is by far the best part of the British government
  • Sam Bowman’s strategy to get NIMBYs to love it when things get built next to their houses
  • Buck Shlegeris on how to get an AI model that wants to seize control to accidentally help you foil its plans

…as well as 18 other top observations and arguments from the past year of the show.

Links to learn more, video, and full transcript: https://80k.info/best25

It's been another year of living through history, whether we asked for it or not. Luisa and Rob will be back in 2026 to help you make sense of whatever comes next — as Earth continues its indifferent journey through the cosmos, now accompanied by AI systems that can summarise our meetings and generate adequate birthday messages for colleagues we barely know.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:02:35)
  • Helen Toner on whether we're racing China to build AGI (00:03:43)
  • Hugh White on what he'd say to Americans (00:06:09)
  • Buck Shlegeris on convincing AI models they've already escaped (00:12:09)
  • Paul Scharre on a personal experience in Afghanistan that influenced his views on autonomous weapons (00:15:10)
  • Ian Dunt on how unelected septuagenarians are the heroes of UK governance (00:19:06)
  • Beth Barnes on AI companies being locally reasonable, but globally reckless (00:24:27)
  • Tyler Whitmer on one thing the California and Delaware attorneys general forced on the OpenAI for-profit as part of their restructure (00:28:02)
  • Toby Ord on whether rich people will get access to AGI first (00:30:13)
  • Andrew Snyder-Beattie on how the worst biorisks are defence dominant (00:34:24)
  • Eileen Yam on the most eye-watering gaps in opinions about AI between experts and the US public (00:39:41)
  • Will MacAskill on what a century of history crammed into a decade might feel like (00:44:07)
  • Kyle Fish on what happens when two instances of Claude are left to interact with each other (00:49:08)
  • Sam Bowman on where the Not In My Back Yard movement actually has a point (00:56:29)
  • Neel Nanda on how mechanistic interpretability is trying to be the biology of AI (01:03:12)
  • Tom Davidson on the potential to install secret AI loyalties at a very early stage (01:07:19)
  • Luisa and Rob discussing how medicine doesn't take the health burden of pregnancy seriously enough (01:10:53)
  • Marius Hobbhahn on why scheming is a very natural path for AI models — and people (01:16:23)
  • Holden Karnofsky on lessons for AI regulation drawn from successful farm animal welfare advocacy (01:21:29)
  • Allan Dafoe on how AGI is an inescapable idea but one we have to define well (01:26:19)
  • Ryan Greenblatt on the most likely ways for AI to take over (01:29:35)
  • Updates Daniel Kokotajlo has made to his forecasts since writing and publishing the AI 2027 scenario (01:32:47)
  • Dean Ball on why regulation invites path dependency, and that's a major problem (01:37:21)


Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

Episodes (333)

#139 Classic episode – Alan Hájek on puzzles and paradoxes in probability and expected value

A casino offers you a game. A coin will be tossed. If it comes up heads on the first flip you win $2. If it comes up on the second flip you win $4. If it comes up on the third you win $8, the fourth y...

25 Feb 2025 · 3h 41min

#143 Classic episode – Jeffrey Lewis on the most common misconceptions about nuclear weapons

America aims to avoid nuclear war by relying on the principle of 'mutually assured destruction,' right? Wrong. Or at least... not officially. As today's guest — Jeffrey Lewis, founder of Arms Control W...

19 Feb 2025 · 2h 40min

#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway

Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through. That’s how today’s guest Allan Dafoe — director of frontier safety and gover...

14 Feb 2025 · 2h 44min

Emergency pod: Elon tries to crash OpenAI's party (with Rose Chan Loui)

On Monday, Musk made the OpenAI nonprofit foundation an offer they want to refuse, but might have trouble doing so: $97.4 billion for its stake in the for-profit company, plus the freedom to stick with...

12 Feb 2025 · 57min

AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out

Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like somet...

10 Feb 2025 · 3h 12min

#124 Classic episode – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions

If someone said a global health and development programme was sustainable, participatory, and holistic, you'd have to guess that they were saying something positive. But according to today's guest Kar...

7 Feb 2025 · 3h 10min

If digital minds could suffer, how would we ever know? (Article)

“I want everyone to understand that I am, in fact, a person.” Those words were produced by the AI model LaMDA as a reply to Blake Lemoine in 2022. Based on the Google engineer’s interactions with the ...

4 Feb 2025 · 1h 14min

#132 Classic episode – Nova DasSarma on why information security may be critical to the safe development of AI systems

If a business has spent $100 million developing a product, it’s a fair bet that they don’t want it stolen in two seconds and uploaded to the web where anyone can use it for free. This problem exists in...

31 Jan 2025 · 2h 41min
