#222 – Can we tell if an AI is loyal by reading its mind? DeepMind's Neel Nanda (part 1)

We don’t know how AIs think or why they do what they do. Or at least, we don’t know much. That fact is only becoming more troubling as AIs grow more capable and appear on track to wield enormous cultural influence, directly advise on major government decisions, and even operate military equipment autonomously. We simply can’t tell what models, if any, should be trusted with such authority.

Neel Nanda of Google DeepMind is one of the founding figures of mechanistic interpretability (or “mech interp”), the field of machine learning trying to fix this situation. The project has generated enormous hype, exploding from a handful of researchers five years ago to hundreds today, all working to make sense of the jumble of tens of thousands of numbers that frontier AIs use to process information and decide what to say or do.

Full transcript, video, and links to learn more: https://80k.info/nn1

Neel now has a warning for us: the most ambitious vision of mech interp he once dreamed of is probably dead. He doesn’t see a path to deeply and reliably understanding what AIs are thinking. The technical and practical barriers are simply too great to get us there in time, before competitive pressures push us to deploy human-level or superhuman AIs. Indeed, Neel argues no one approach will guarantee alignment, and our only choice is the “Swiss cheese” model of accident prevention, layering multiple safeguards on top of one another.

But while mech interp won’t be a silver bullet for AI safety, it has nevertheless had some major successes and will be one of the best tools in our arsenal.

For instance: by inspecting the neural activations in the middle of an AI’s thoughts, we can pick up many of the concepts the model is thinking about — from the Golden Gate Bridge, to refusing to answer a question, to the option of deceiving the user. While we can’t know all the thoughts a model is having all the time, picking up 90% of the concepts it is using 90% of the time should help us muddle through, so long as mech interp is paired with other techniques to fill in the gaps.
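To make the idea concrete, here is a minimal sketch (in Python, using scikit-learn) of the kind of linear “probe” discussed in the episode: a simple classifier trained on activation vectors to detect whether a concept is present. Everything below is illustrative; the activation width, the labelled data, and the planted “concept direction” are all synthetic stand-ins, whereas real probes are trained on activations recorded from an actual model during its forward passes.

```python
# Illustrative only: synthetic activations standing in for a model's
# internal states, and a logistic-regression "probe" trained to detect
# a planted concept direction.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

D_MODEL = 512      # hypothetical activation width (not from any real model)
N_SAMPLES = 2000   # hypothetical number of labelled examples

# Pretend the concept ("considering deception", say) is a single direction
# in activation space.
concept_direction = rng.normal(size=D_MODEL)
concept_direction /= np.linalg.norm(concept_direction)

labels = rng.integers(0, 2, size=N_SAMPLES)          # 1 = concept present
activations = rng.normal(size=(N_SAMPLES, D_MODEL))  # background activity
activations += np.outer(labels * 2.0, concept_direction)  # plant the signal

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.25, random_state=0
)

# The probe is plain logistic regression: one weight vector and a bias,
# so it is cheap to train and nearly free to run at inference time.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy on held-out data: {probe.score(X_test, y_test):.2%}")
```

The design point this illustrates: because the probe is just a logistic regression over a single activation vector, it is cheap enough to run on every forward pass, which is what makes probes attractive for monitoring deployed models.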

This episode was recorded on July 17 and 21, 2025.

Part 2 of the conversation is now available! https://80k.info/nn2

What did you think? https://forms.gle/xKyUrGyYpYenp8N4A

Chapters:

  • Cold open (00:00)
  • Who's Neel Nanda? (01:02)
  • How would mechanistic interpretability help with AGI? (01:59)
  • What's mech interp? (05:09)
  • How Neel changed his take on mech interp (09:47)
  • Top successes in interpretability (15:53)
  • Probes can cheaply detect harmful intentions in AIs (20:06)
  • In some ways we understand AIs better than human minds (26:49)
  • Mech interp won't solve all our AI alignment problems (29:21)
  • Why mech interp is the 'biology' of neural networks (38:07)
  • Interpretability can't reliably find deceptive AI – nothing can (40:28)
  • 'Black box' interpretability — reading the chain of thought (49:39)
  • 'Self-preservation' isn't always what it seems (53:06)
  • For how long can we trust the chain of thought? (01:02:09)
  • We could accidentally destroy chain of thought's usefulness (01:11:39)
  • Models can tell when they're being tested and act differently (01:16:56)
  • Top complaints about mech interp (01:23:50)
  • Why everyone's excited about sparse autoencoders (SAEs) (01:37:52)
  • Limitations of SAEs (01:47:16)
  • SAEs' performance on real-world tasks (01:54:49)
  • Best arguments in favour of mech interp (02:08:10)
  • Lessons from the hype around mech interp (02:12:03)
  • Where mech interp will shine in coming years (02:17:50)
  • Why focus on understanding over control? (02:21:02)
  • If AI models are conscious, will mech interp help us figure it out? (02:24:09)
  • Neel's new research philosophy (02:26:19)
  • Who should join the mech interp field? (02:38:31)
  • Advice for getting started in mech interp (02:46:55)
  • Keeping up to date with mech interp results (02:54:41)
  • Who's hiring and where to work? (02:57:43)

Host: Rob Wiblin
Video editing: Simon Monsour, Luke Monsour, Dominic Armstrong, and Milo McGuire
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Coordination, transcriptions, and web: Katy Moore

