Professor Magda Osman on Psychological Harm

What is psychological harm, and can we really regulate it? Should an AI companion app be allowed to dump the person using it?

📝 Episode Summary
On this episode, I’m joined once again by Professor Magda Osman, a returning guest who always has something compelling to say.

This time, we're talking about psychological harm, a term you’ve probably heard, but which remains vague, slippery, and surprisingly unhelpful when it comes to actually protecting people.

Together, we explore what psychological harm really means, why defining it matters, and why regulating it, especially in digital contexts, is so tricky.

We draw comparisons to physical harm, ask whether some emotional distress might be necessary, and consider what kinds of harm are moral rather than measurable.

The conversation touches on loneliness, AI companions, consent, and even chainsaws!

👤 Guest Biography
Magda is a Principal Research Associate at the Judge Business School, University of Cambridge, and holds a Professorial position at Leeds Business School, University of Leeds, where she supports policy impact.

She describes herself as a psychologist by training, with specific interests in decision-making under risk and uncertainty, folk beliefs in the unconscious, and behavioural change effectiveness.

Magda works at the intersection of behavioural science, regulation, and public policy, offering practical insights that challenge assumptions and bring clarity to complex issues.

⏱️ AI-Generated Timestamped Summary
[00:00:00] Introduction and framing of psychological harm

[00:02:00] The conceptual problems with defining psychological harm

[00:05:00] Psychological harm and the precautionary principle in digital regulation

[00:08:00] Social context, platform functions, and why generalisations don’t work

[00:12:00] The idea of rites of passage and unavoidable suffering

[00:15:00] AI companion apps and emotional dependency

[00:17:00] Exploitation, data harvesting, and moral transparency
[00:22:00] Frustration as normal vs. actual psychological damage

[00:26:00] The danger of regulating the trivial and the need for precision

[00:29:00] Why causal links are necessary for meaningful intervention

[00:33:00] Legal obligations and holding tech companies to account

[00:38:00] What users actually care about: privacy, data, trust

[00:42:00] Society’s negotiation of what counts as tolerable harm

[00:45:00] Why this isn’t an unprecedented problem — and how we’ve faced it before

[00:50:00] The risk of bad definitions leading to bad regulation

[00:54:00] Two contrasting examples of online services and their impacts

[00:57:00] What kind of regulation might we actually need?

[00:59:00] The case for rethinking how regulation itself is structured

[01:01:00] Where to find Magda’s work and final reflections

🔗 Links
Magda's LinkedIn profile: https://www.linkedin.com/in/magda-osman-11165138/

Her website: https://www.magdaosman.com/

Magda’s previous appearances on the show exploring:

Behavioural Interventions that fail:
https://www.humanriskpodcast.com/dr-magda-osman-on-behavioural/


Unconscious Bias: what is it, and can we train people not to show it?
https://www.humanriskpodcast.com/dr-magda-osman-on-unconscious/

Compliance, Coercion & Competence
https://www.humanriskpodcast.com/professor-magda-osman-on-compliance-coercion-competence/

Misinformation
https://www.humanriskpodcast.com/professor-magda-osman-on-misinformation/

Risk Prioritisation
https://www.humanriskpodcast.com/professor-magda-osman-on-risk-prioritisation/

