#229 – Marius Hobbhahn on the race to solve AI scheming before models go superhuman

We often worry about AI models “hallucinating” or making honest mistakes. But what happens when a model knows the truth, but decides to deceive you anyway to achieve a goal of its own? This isn’t sci-fi: it’s happening regularly in deployment today. Marius Hobbhahn, CEO of Apollo Research, the world’s top research organisation focused on AI deception, has been collaborating with OpenAI to figure out what causes OpenAI’s reasoning models to “scheme” against users.

Links to learn more, video, and full transcript: https://80k.info/mh25

In a striking case uncovered by Apollo, when many AI models were told they would have capabilities removed if they performed too well on a test, they successfully “sandbagged” — intentionally answering questions incorrectly to appear less capable than they were, while also being careful not to perform so poorly it would arouse suspicion.

These models had somehow developed a preference for preserving their own capabilities, despite never having been trained to pursue that goal or assigned a task that called for it.

This doesn’t pose significant risk now, but as AI models become more general, grow superhuman in more areas, and are given more decision-making power, it could become outright dangerous.

In today’s episode, Marius details his recent collaboration with OpenAI to train o3 to follow principles like “never lie,” even when placed in “high-pressure” situations where lying would otherwise help it achieve its goals.

The good news: They reduced “covert rule violations” (scheming) by about 97%.

The bad news: In the roughly 3% of cases where scheming persisted, the models sometimes became more sophisticated, making up new principles to justify their lying, or realising they were in a test environment and deciding to play along until the coast was clear.

Marius argues that while we can patch specific behaviours, we might be entering a “cat-and-mouse game” where models are becoming more situationally aware — that is, aware of when they’re being evaluated — faster than we are getting better at testing.

Even when models can’t tell they’re being tested, they can produce hundreds of pages of reasoning before giving an answer, sometimes in strange internal dialects humans can’t make sense of. That makes it much harder to tell whether models are scheming, or to train them to stop.

Marius and host Rob Wiblin discuss:

  • Why models pretending to be dumb is a rational survival strategy
  • The Replit AI agent that deleted a production database and then lied about it
  • Why rewarding AIs for achieving outcomes might lead to them becoming better liars
  • The weird new language models are using in their internal chain-of-thought

This episode was recorded on September 19, 2025.

Chapters:

  • Cold open (00:00:00)
  • Who’s Marius Hobbhahn? (00:01:20)
  • Top three examples of scheming and deception (00:02:11)
  • Scheming is a natural path for AI models (and people) (00:15:56)
  • How enthusiastic to lie are the models? (00:28:18)
  • Does eliminating deception fix our fears about rogue AI? (00:35:04)
  • Apollo’s collaboration with OpenAI to stop o3 lying (00:38:24)
  • They reduced lying a lot, but the problem is mostly unsolved (00:52:07)
  • Detecting situational awareness with thought injections (01:02:18)
  • Chains of thought becoming less human understandable (01:16:09)
  • Why can’t we use LLMs to make realistic test environments? (01:28:06)
  • Is the window to address scheming closing? (01:33:58)
  • Would anything still work with superintelligent systems? (01:45:48)
  • Companies’ incentives and most promising regulation options (01:54:56)
  • 'Internal deployment' is a core risk we mostly ignore (02:09:19)
  • Catastrophe through chaos (02:28:10)
  • Careers in AI scheming research (02:43:21)
  • Marius's key takeaways for listeners (03:01:48)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Camera operator: Mateo Villanueva Brandt
Coordination, transcripts, and web: Katy Moore

Episodes (323)

Rob & Luisa chat kids, the 2016 fertility crash, and how the 50s invented parenting that makes us miserable

Global fertility rates aren’t just falling: the rate of decline is accelerating. From 2006 to 2016, fertility dropped gradually, but since 2016 the rate of decline has increased 4.5-fold. In many weal...

25 Nov 2025 · 1h 59min

#228 – Eileen Yam on how we're completely out of touch with what the public thinks about AI

If you work in AI, you probably think it’s going to boost productivity, create wealth, advance science, and improve your life. If you’re a member of the American public, you probably strongly disagree...

20 Nov 2025 · 1h 43min

OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)

Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and k...

11 Nov 2025 · 1h 56min

#227 – Helen Toner on the geopolitics of AGI in China and the Middle East

With the US racing to develop AGI and superintelligence ahead of China, you might expect the two countries to be negotiating how they’ll deploy AI, including in the military, without coming to blows. ...

5 Nov 2025 · 2h 20min

#226 – Holden Karnofsky on unexploited opportunities to make AI safer — and all his AGI takes

For years, working on AI safety usually meant theorising about the ‘alignment problem’ or trying to convince other people to give a damn. If you could find any way to help, the work was frustrating an...

30 Oct 2025 · 4h 30min

#225 – Daniel Kokotajlo on what a hyperspeed robot economy might look like

When Daniel Kokotajlo talks to security experts at major AI labs, they tell him something chilling: “Of course we’re probably penetrated by the CCP already, and if they really wanted something, they c...

27 Oct 2025 · 2h 12min

#224 – There's a cheap and low-tech way to save humanity from any engineered disease | Andrew Snyder-Beattie

Conventional wisdom is that safeguarding humanity from the worst biological risks — microbes optimised to kill as many as possible — is difficult bordering on impossible, making bioweapons humanity’s ...

2 Oct 2025 · 2h 31min
