2025 Highlight-o-thon: Oops! All Bests

It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode we recorded this year, including:

  • Kyle Fish explaining how Anthropic’s AI Claude descends into spiritual woo when left to talk to itself
  • Ian Dunt on why the unelected House of Lords is by far the best part of the British government
  • Sam Bowman’s strategy to get NIMBYs to love it when things get built next to their houses
  • Buck Shlegeris on how to get an AI model that wants to seize control to accidentally help you foil its plans

…as well as 18 other top observations and arguments from the past year of the show.

Links to learn more, video, and full transcript: https://80k.info/best25

It's been another year of living through history, whether we asked for it or not. Luisa and Rob will be back in 2026 to help you make sense of whatever comes next — as Earth continues its indifferent journey through the cosmos, now accompanied by AI systems that can summarise our meetings and generate adequate birthday messages for colleagues we barely know.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:02:35)
  • Helen Toner on whether we're racing China to build AGI (00:03:43)
  • Hugh White on what he'd say to Americans (00:06:09)
  • Buck Shlegeris on convincing AI models they've already escaped (00:12:09)
  • Paul Scharre on a personal experience in Afghanistan that influenced his views on autonomous weapons (00:15:10)
  • Ian Dunt on how unelected septuagenarians are the heroes of UK governance (00:19:06)
  • Beth Barnes on AI companies being locally reasonable, but globally reckless (00:24:27)
  • Tyler Whitmer on one thing the California and Delaware attorneys general forced on the OpenAI for-profit as part of their restructure (00:28:02)
  • Toby Ord on whether rich people will get access to AGI first (00:30:13)
  • Andrew Snyder-Beattie on how the worst biorisks are defence dominant (00:34:24)
  • Eileen Yam on the most eye-watering gaps in opinions about AI between experts and the US public (00:39:41)
  • Will MacAskill on what a century of history crammed into a decade might feel like (00:44:07)
  • Kyle Fish on what happens when two instances of Claude are left to interact with each other (00:49:08)
  • Sam Bowman on where the Not In My Back Yard movement actually has a point (00:56:29)
  • Neel Nanda on how mechanistic interpretability is trying to be the biology of AI (01:03:12)
  • Tom Davidson on the potential to install secret AI loyalties at a very early stage (01:07:19)
  • Luisa and Rob discussing how medicine doesn't take the health burden of pregnancy seriously enough (01:10:53)
  • Marius Hobbhahn on why scheming is a very natural path for AI models — and people (01:16:23)
  • Holden Karnofsky on lessons for AI regulation drawn from successful farm animal welfare advocacy (01:21:29)
  • Allan Dafoe on how AGI is an inescapable idea but one we have to define well (01:26:19)
  • Ryan Greenblatt on the most likely ways for AI to take over (01:29:35)
  • Updates Daniel Kokotajlo has made to his forecasts since writing and publishing the AI 2027 scenario (01:32:47)
  • Dean Ball on why regulation invites path dependency, and that's a major problem (01:37:21)


Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

Episodes (319)

#227 – Helen Toner on the geopolitics of AGI in China and the Middle East

With the US racing to develop AGI and superintelligence ahead of China, you might expect the two countries to be negotiating how they’ll deploy AI, including in the military, without coming to blows. ...

5 November 2025 · 2h 20min

#226 – Holden Karnofsky on unexploited opportunities to make AI safer — and all his AGI takes

For years, working on AI safety usually meant theorising about the ‘alignment problem’ or trying to convince other people to give a damn. If you could find any way to help, the work was frustrating an...

30 October 2025 · 4h 30min

#225 – Daniel Kokotajlo on what a hyperspeed robot economy might look like

When Daniel Kokotajlo talks to security experts at major AI labs, they tell him something chilling: “Of course we’re probably penetrated by the CCP already, and if they really wanted something, they c...

27 October 2025 · 2h 12min

#224 – There's a cheap and low-tech way to save humanity from any engineered disease | Andrew Snyder-Beattie

Conventional wisdom is that safeguarding humanity from the worst biological risks — microbes optimised to kill as many as possible — is difficult bordering on impossible, making bioweapons humanity’s ...

2 October 2025 · 2h 31min

Inside the Biden admin’s AI policy approach | Jake Sullivan, Biden’s NSA | via The Cognitive Revolution

Jake Sullivan was the US National Security Advisor from 2021 to 2025. He joined our friends on The Cognitive Revolution podcast in August to discuss AI as a critical national security issue. We thought i...

26 September 2025 · 1h 5min

#223 – Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)

At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret?...

15 September 2025 · 1h 46min

#222 – Can we tell if an AI is loyal by reading its mind? DeepMind's Neel Nanda (part 1)

We don’t know how AIs think or why they do what they do. Or at least, we don’t know much. That fact is only becoming more troubling as AIs grow more capable and appear on track to wield enormous cultu...

8 September 2025 · 3h 1min

#221 – Kyle Fish on the most bizarre findings from 5 AI welfare experiments

What happens when you lock two AI systems in a room together and tell them they can discuss anything they want? According to experiments run by Kyle Fish — Anthropic’s first AI welfare researcher — som...

28 August 2025 · 2h 28min
