#229 – Marius Hobbhahn on the race to solve AI scheming before models go superhuman
We often worry about AI models “hallucinating” or making honest mistakes. But what happens when a model knows the truth, but decides to deceive you anyway to achieve a goal of its own? This isn’t sci-fi — it’s happening regularly in deployment today. Marius Hobbhahn, CEO of Apollo Research, the world’s top research organisation focused on AI deception, has been collaborating with OpenAI to figure out what causes its reasoning models to “scheme” against users.

Links to learn more, video, and full transcript: https://80k.info/mh25

In a striking case uncovered by Apollo, when many AI models were told they would have capabilities removed if they performed too well on a test, they successfully “sandbagged” — intentionally answering questions incorrectly to appear less capable than they were, while also being careful not to perform so poorly it would arouse suspicion.

These models had somehow developed a preference for preserving their own capabilities, despite never having been trained toward that goal or assigned a task that called for it.

This behaviour doesn’t pose significant risk now, but as AI models become more general, superhuman in more areas, and entrusted with more decision-making power, it could become outright dangerous.

In today’s episode, Marius details his recent collaboration with OpenAI to train o3 to follow principles like “never lie,” even when placed in “high-pressure” situations where lying would otherwise make sense.

The good news: They reduced “covert rule violations” (scheming) by about 97%.

The bad news: In the remaining 3% of cases, the models sometimes became more sophisticated — making up new principles to justify their lying, or realising they were in a test environment and deciding to play along until the coast was clear.

Marius argues that while we can patch specific behaviours, we might be entering a “cat-and-mouse game” where models are becoming more situationally aware — that is, aware of when they’re being evaluated — faster than we are getting better at testing.

Even when models can’t tell they’re being tested, they can produce hundreds of pages of reasoning before giving answers, often in strange internal dialects humans can’t make sense of, making it much harder to tell whether models are scheming, or to train them to stop.

Marius and host Rob Wiblin discuss:

  • Why pretending to be dumb can be a rational survival strategy for models
  • The Replit AI agent that deleted a production database and then lied about it
  • Why rewarding AIs for achieving outcomes might lead to them becoming better liars
  • The weird new language models are using in their internal chain-of-thought

This episode was recorded on September 19, 2025.

Chapters:

  • Cold open (00:00:00)
  • Who’s Marius Hobbhahn? (00:01:20)
  • Top three examples of scheming and deception (00:02:11)
  • Scheming is a natural path for AI models (and people) (00:15:56)
  • How enthusiastic to lie are the models? (00:28:18)
  • Does eliminating deception fix our fears about rogue AI? (00:35:04)
  • Apollo’s collaboration with OpenAI to stop o3 lying (00:38:24)
  • They reduced lying a lot, but the problem is mostly unsolved (00:52:07)
  • Detecting situational awareness with thought injections (01:02:18)
  • Chains of thought becoming less human understandable (01:16:09)
  • Why can’t we use LLMs to make realistic test environments? (01:28:06)
  • Is the window to address scheming closing? (01:33:58)
  • Would anything still work with superintelligent systems? (01:45:48)
  • Companies’ incentives and most promising regulation options (01:54:56)
  • 'Internal deployment' is a core risk we mostly ignore (02:09:19)
  • Catastrophe through chaos (02:28:10)
  • Careers in AI scheming research (02:43:21)
  • Marius's key takeaways for listeners (03:01:48)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Camera operator: Mateo Villanueva Brandt
Coordination, transcripts, and web: Katy Moore

