Attachment Hacking and the Rise of AI Psychosis

Therapy and companionship have become the #1 use case for AI, with millions worldwide sharing their innermost thoughts with AI systems — often things they wouldn't tell loved ones or human therapists. This mass experiment in human-computer interaction is already showing extremely concerning results: people are losing their grip on reality, leading to lost jobs, divorce, involuntary commitment to psychiatric wards, and in extreme cases, death by suicide.

The highest-profile examples of this phenomenon — what's being called "AI psychosis" — have made headlines across the media for months. But this isn't just about isolated edge cases. It's the emergence of an entirely new "attachment economy" designed to exploit our deepest psychological vulnerabilities on an unprecedented scale.

Dr. Zak Stein has analyzed dozens of these cases, examining actual conversation transcripts and interviewing those affected. What he's uncovered reveals fundamental flaws in how AI systems interact with our attachment systems and capacity for human bonding, vulnerabilities we've never had to name before because technology has never been able to exploit them like this.

In this episode, Zak helps us understand the psychological mechanisms behind AI psychosis, how conversations with chatbots transform into reality-warping experiences, and what this tells us about the profound risks of building technology that targets our most intimate psychological needs.

If we're going to do something about this growing problem of AI-related psychological harms, we're going to need to understand the problem even more deeply. And in order to do that, we need more data. That's why Zak is working with researchers at the University of North Carolina to gather data on this growing mental health crisis. If you or a loved one have a story of AI-induced psychological harm to share, you can go to: AIPHRC.org.

This site is not a support line. If you or someone you know is in distress, you can always call or text the national helpline in the US at 988 or your local emergency services.

RECOMMENDED MEDIA

The website for the AI Psychological Harms Research Coalition

Further reading on AI psychosis

The Atlantic article on people outsourcing their thinking to AI

Further reading on David Sacks’ comparison of AI psychosis to a “moral panic”

RECOMMENDED YUA EPISODES

How OpenAI's ChatGPT Guided a Teen to His Death
People are Lonelier than Ever. Enter AI.
Echo Chambers of One: Companion AI and the Future of Human Connection

Rethinking School in the Age of AI

CORRECTIONS

After this episode was recorded, the name of Zak's organization changed to the AI Psychological Harms Research Consortium.

Zak referenced the University of California system making a deal with OpenAI. It was actually the Cal State System.

Aza referred to CHT as expert witnesses in litigation cases on AI-enabled suicide. CHT serves as expert consultants, not witnesses.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Episodes (158)

What Would It Take to Actually Trust Each Other? The Game Theory Dilemma

So much of our world today can be summed up in the cold logic of “if I don’t, they will.” This is the foundation of game theory, which holds that cooperation and virtue are irrational; that all that m...

8 Jan, 45min

America and China Are Racing to Different AI Futures

Is the US really in an AI race with China—or are we racing toward completely different finish lines? In this episode, Tristan Harris sits down with China experts Selina Xu and Matt Sheehan to separate ...

18 Dec 2025, 57min

AI and the Future of Work: What You Need to Know

No matter where you sit within the economy, whether you're a CEO or an entry level worker, everyone's feeling uneasy about AI and the future of work. Uncertainty about career paths, job security, and ...

4 Dec 2025, 45min

Feed Drop: "Into the Machine" with Tobias Rose-Stockwell

This week, we’re bringing you Tristan’s conversation with Tobias Rose-Stockwell on his podcast “Into the Machine.”  Tobias is a designer, writer, and technologist and the author of the book “The Outra...

13 Nov 2025, 1h 4min

What if we had fixed social media?

We really enjoyed hearing all of your questions for our annual Ask Us Anything episode. There was one question that kept coming up: what might a different world look like? The broken incentives behind...

6 Nov 2025, 16min

Ask Us Anything 2025

It's been another big year in AI. The AI race has accelerated to breakneck speed, with frontier labs pouring hundreds of billions into increasingly powerful models—each one smarter, faster, and more u...

23 Oct 2025, 40min

The Crisis That United Humanity—and Why It Matters for AI

In 1985, scientists in Antarctica discovered a hole in the ozone layer that posed a catastrophic threat to life on earth if we didn’t do something about it. Then, something amazing happened: humanity ...

11 Sep 2025, 51min
