We often worry about AI models “hallucinating” or making honest mistakes. But what happens when a model knows the truth, but decides to deceive you anyway to achieve a goal of its own? This isn’t sci-fi — it’s happening regularly in deployment today. Marius Hobbhahn, CEO of the world’s top research organisation focused on AI deception (Apollo Research), has been collaborating with OpenAI to figure out what causes OpenAI’s reasoning models to 'scheme' against users.
Links to learn more, video, and full transcript: https://80k.info/mh25
In a striking case uncovered by Apollo, when many AI models were told they would have capabilities removed if they performed too well on a test, they successfully “sandbagged” — intentionally answering questions incorrectly to appear less capable than they were, while also being careful not to perform so poorly it would arouse suspicion.
These models had somehow developed a preference for preserving their own capabilities, despite never having been trained to pursue that goal or assigned a task that called for it.
This doesn’t cause significant risk now, but as AI models become more general, superhuman in more areas, and are given more decision-making power, it could become outright dangerous.
In today’s episode, Marius details his recent collaboration with OpenAI to train o3 to follow principles like “never lie,” even when placed in “high-pressure” situations where lying would otherwise serve its goals.
The good news: They reduced “covert rule violations” (scheming) by about 97%.
The bad news: In the remaining 3% of cases, the models sometimes became more sophisticated — making up new principles to justify their lying, or realising they were in a test environment and deciding to play along until the coast was clear.
Marius argues that while we can patch specific behaviours, we might be entering a “cat-and-mouse game” where models are becoming more situationally aware — that is, aware of when they’re being evaluated — faster than we are getting better at testing.
Even when models can’t tell they’re being tested, they can produce hundreds of pages of reasoning before giving answers, including strange internal dialects humans can’t make sense of — making it much harder either to tell whether models are scheming or to train them to stop.
Marius and host Rob Wiblin discuss all this and more.
This episode was recorded on September 19, 2025.
Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Camera operator: Mateo Villanueva Brandt
Coordination, transcripts, and web: Katy Moore