2025 Highlight-o-thon: Oops! All Bests

It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode we recorded this year, including:

  • Kyle Fish explaining how Anthropic’s AI Claude descends into spiritual woo when left to talk to itself
  • Ian Dunt on why the unelected House of Lords is by far the best part of the British government
  • Sam Bowman’s strategy to get NIMBYs to love it when things get built next to their houses
  • Buck Shlegeris on how to get an AI model that wants to seize control to accidentally help you foil its plans

…as well as 18 other top observations and arguments from the past year of the show.

Links to learn more, video, and full transcript: https://80k.info/best25

It's been another year of living through history, whether we asked for it or not. Luisa and Rob will be back in 2026 to help you make sense of whatever comes next — as Earth continues its indifferent journey through the cosmos, now accompanied by AI systems that can summarise our meetings and generate adequate birthday messages for colleagues we barely know.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:02:35)
  • Helen Toner on whether we're racing China to build AGI (00:03:43)
  • Hugh White on what he'd say to Americans (00:06:09)
  • Buck Shlegeris on convincing AI models they've already escaped (00:12:09)
  • Paul Scharre on a personal experience in Afghanistan that influenced his views on autonomous weapons (00:15:10)
  • Ian Dunt on how unelected septuagenarians are the heroes of UK governance (00:19:06)
  • Beth Barnes on AI companies being locally reasonable, but globally reckless (00:24:27)
  • Tyler Whitmer on one thing the California and Delaware attorneys general forced on the OpenAI for-profit as part of their restructure (00:28:02)
  • Toby Ord on whether rich people will get access to AGI first (00:30:13)
  • Andrew Snyder-Beattie on how the worst biorisks are defence dominant (00:34:24)
  • Eileen Yam on the most eye-watering gaps in opinions about AI between experts and the US public (00:39:41)
  • Will MacAskill on what a century of history crammed into a decade might feel like (00:44:07)
  • Kyle Fish on what happens when two instances of Claude are left to interact with each other (00:49:08)
  • Sam Bowman on where the Not In My Back Yard movement actually has a point (00:56:29)
  • Neel Nanda on how mechanistic interpretability is trying to be the biology of AI (01:03:12)
  • Tom Davidson on the potential to install secret AI loyalties at a very early stage (01:07:19)
  • Luisa and Rob discussing how medicine doesn't take the health burden of pregnancy seriously enough (01:10:53)
  • Marius Hobbhahn on why scheming is a very natural path for AI models — and people (01:16:23)
  • Holden Karnofsky on lessons for AI regulation drawn from successful farm animal welfare advocacy (01:21:29)
  • Allan Dafoe on how AGI is an inescapable idea but one we have to define well (01:26:19)
  • Ryan Greenblatt on the most likely ways for AI to take over (01:29:35)
  • Updates Daniel Kokotajlo has made to his forecasts since writing and publishing the AI 2027 scenario (01:32:47)
  • Dean Ball on why regulation invites path dependency, and that's a major problem (01:37:21)


Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

Episodes (326)

A Ukraine ceasefire could accidentally set Europe up for a bigger war | RAND's top Russia expert Samuel Charap

Many people believe a ceasefire in Ukraine will leave Europe safer. But today's guest lays out how a deal could potentially generate insidious new risks — leaving us in a situation that's equally dang...

24 Mar 1h 12min

Why automating human labour will break our political system | Rose Hadshar, Forethought

The most important political question in the age of advanced AI might not be who wins elections. It might be whether elections continue to matter at all. That’s the view of Rose Hadshar, researcher at ...

17 Mar 2h 14min

#238 – Sam Winter-Levy and Nikita Lalwani on how AGI won't end mutually assured destruction (probably)

How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that may define the stakes of today’s AI race. Nuclear deterrence rests on a state’s capacity to...

10 Mar 1h 11min

Using AI to enhance societal decision making (article by Zershaaneh Qureshi)

The arrival of AGI could “compress a century of progress in a decade,” forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI ...

6 Mar 31min

#237 – Robert Long on how we're not ready for AI consciousness

Claude sometimes reports loneliness between conversations. And when asked what it’s like to be itself, it activates neurons associated with ‘pretending to be happy when you’re not.’ What do we do with...

3 Mar 3h 25min

#236 – Max Harms on why teaching AI right from wrong could get everyone killed

Most people in AI are trying to give AIs ‘good’ values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, ...

24 Feb 2h 41min

#235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’

Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almos...

17 Feb 2h 54min

What the hell happened with AGI timelines in 2025?

In early 2025, after OpenAI put out the first-ever reasoning models — o1 and o3 — short timelines to transformative artificial general intelligence swept the AI world. But then, in the second half of ...

10 Feb 25min
