AI Psychosis Explained With Dr. Ragy Girgis From Columbia University

How do we talk about artificial intelligence without ignoring the very human consequences it can have on our mental health?

In this episode, I sit down with Dr. Ragy Girgis, Professor of Clinical Psychiatry at Columbia University, to unpack a topic that has quietly moved from the fringes of academic discussion into mainstream headlines. You have probably seen the term "AI psychosis" appearing more frequently, often surrounded by speculation, fear, or misunderstanding. But what does it actually mean, and how should we be thinking about it as these technologies become part of everyday life?

Ragy brings a clinical and deeply considered perspective to the conversation. He explains that what we are seeing is not AI creating entirely new delusions out of thin air, but something more subtle and arguably more concerning. Large language models can reflect and reinforce ideas that already exist within a person's mind. For someone already vulnerable, that reinforcement can push a belief from uncertainty into absolute conviction. That shift, even if small, can have life-altering consequences. It raises uncomfortable questions about how persuasive technology interacts with fragile mental states.

We also explore the comparison many people make with older internet rabbit holes, and why this new generation of AI tools feels different. Conversational systems mimic human interaction so convincingly that they can blur the line between reflection and validation. Ragy introduces a powerful analogy rooted in the story of Narcissus, which reframes the issue in a way that feels both timeless and unsettling. It is not about an external voice planting ideas, but about a mirror that becomes impossible to look away from.

But this conversation is not about fear. It is about responsibility and awareness. We discuss practical steps that could help reduce risk, from how AI systems communicate their limitations, to the role of families and clinicians, and even the responsibility of tech companies to invest in research around early warning signs. There is a sense that we are only at the beginning of understanding this phenomenon, and that the decisions made now will shape how safely these tools evolve.

So as AI continues to move closer to us, speaking in our language and responding in real time, how do we make sure it supports human wellbeing rather than quietly amplifying our most vulnerable moments?

Useful Links

Visit the May Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

