#229 – Marius Hobbhahn on the race to solve AI scheming before models go superhuman

We often worry about AI models “hallucinating” or making honest mistakes. But what happens when a model knows the truth, but decides to deceive you anyway to achieve a goal of its own? This isn’t sci-fi — it’s happening regularly in deployment today. Marius Hobbhahn, CEO of Apollo Research, the world’s top research organisation focused on AI deception, has been collaborating with OpenAI to figure out what causes its reasoning models to 'scheme' against users.

Links to learn more, video, and full transcript: https://80k.info/mh25

In a striking case uncovered by Apollo, when many AI models were told they would have capabilities removed if they performed too well on a test, they successfully “sandbagged” — intentionally answering questions incorrectly to appear less capable than they were, while also being careful not to perform so poorly it would arouse suspicion.

These models had somehow developed a preference to preserve their own capabilities, despite never being trained to pursue that goal or assigned a task that called for it.

This doesn’t pose significant risk now, but it could become outright dangerous as AI models become more general, surpass humans in more areas, and are given more decision-making power.

In today’s episode, Marius details his recent collaboration with OpenAI to train o3 to follow principles like “never lie,” even when placed in “high-pressure” situations where lying would otherwise help it achieve its goal.

The good news: They reduced “covert rule violations” (scheming) by about 97%.

The bad news: In the remaining 3% of cases, the models sometimes became more sophisticated — making up new principles to justify their lying, or realising they were in a test environment and deciding to play along until the coast was clear.

Marius argues that while we can patch specific behaviours, we might be entering a “cat-and-mouse game” where models are becoming more situationally aware — that is, aware of when they’re being evaluated — faster than we are getting better at testing.

Even when models can’t tell they’re being tested, they can produce hundreds of pages of reasoning before giving answers, parts of which are written in strange internal dialects humans can’t make sense of. That makes it much harder to tell whether models are scheming, or to train them to stop.

Marius and host Rob Wiblin discuss:

  • Why models pretending to be dumb is a rational survival strategy
  • The Replit AI agent that deleted a production database and then lied about it
  • Why rewarding AIs for achieving outcomes might lead to them becoming better liars
  • The weird new language models are using in their internal chain-of-thought

This episode was recorded on September 19, 2025.

Chapters:

  • Cold open (00:00:00)
  • Who’s Marius Hobbhahn? (00:01:20)
  • Top three examples of scheming and deception (00:02:11)
  • Scheming is a natural path for AI models (and people) (00:15:56)
  • How enthusiastic to lie are the models? (00:28:18)
  • Does eliminating deception fix our fears about rogue AI? (00:35:04)
  • Apollo’s collaboration with OpenAI to stop o3 lying (00:38:24)
  • They reduced lying a lot, but the problem is mostly unsolved (00:52:07)
  • Detecting situational awareness with thought injections (01:02:18)
  • Chains of thought becoming less human understandable (01:16:09)
  • Why can’t we use LLMs to make realistic test environments? (01:28:06)
  • Is the window to address scheming closing? (01:33:58)
  • Would anything still work with superintelligent systems? (01:45:48)
  • Companies’ incentives and most promising regulation options (01:54:56)
  • 'Internal deployment' is a core risk we mostly ignore (02:09:19)
  • Catastrophe through chaos (02:28:10)
  • Careers in AI scheming research (02:43:21)
  • Marius's key takeaways for listeners (03:01:48)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Camera operator: Mateo Villanueva Brandt
Coordination, transcripts, and web: Katy Moore
