#232 – Andreas Mogensen on what we owe 'philosophical Vulcans' and unconscious beings

Most debates about the moral status of AI systems circle the same question: is there something that it feels like to be them? But what if that’s the wrong question to ask? Andreas Mogensen — a senior researcher in moral philosophy at the University of Oxford — argues that so-called 'phenomenal consciousness' might be neither necessary nor sufficient for a being to deserve moral consideration.

Links to learn more and full transcript: https://80k.info/am25

For instance, a creature on the sea floor that experiences nothing but faint brightness from the sun might have no moral claim on us, despite being conscious.

Meanwhile, any being with genuine desires that can be fulfilled or frustrated can arguably be benefited or harmed. Such beings plausibly have a capacity for welfare, which means they might matter morally. And, Andreas argues, desire may not require subjective experience.

Desire may need to be backed by positive or negative emotions — but as Andreas explains, there are some reasons to think a being could have emotions without being conscious.

There’s another underexplored route to moral patienthood: autonomy. If a being can rationally reflect on its goals and direct its own existence, we might have a moral duty to avoid interfering with its choices — even if it has no capacity for welfare.

However, Andreas suspects genuine autonomy might require consciousness after all. To be a rational agent, your beliefs probably need to be justified by something, and conscious experience might be what does the justifying. But even this isn’t clear.

The upshot? There’s a chance we could just be really mistaken about what it would take for an AI to matter morally. And with AI systems potentially proliferating at massive scale, getting this wrong could be among the largest moral errors in history.

In today’s interview, Andreas and host Zershaaneh Qureshi confront all these confusing ideas, challenging their intuitions about consciousness, welfare, and morality along the way. They also grapple with a few seemingly attractive arguments which share a very unsettling conclusion: that human extinction (or even the extinction of all sentient life) could actually be a morally desirable thing.

This episode was recorded on December 3, 2025.

Chapters:

  • Cold open (00:00:00)
  • Introducing Zershaaneh (00:00:55)
  • The puzzle of moral patienthood (00:03:20)
  • Is subjective experience necessary? (00:05:52)
  • What is it to desire? (00:10:42)
  • Desiring without experiencing (00:17:56)
  • What would make AIs moral patients? (00:28:17)
  • Another route entirely: deserving autonomy (00:45:12)
  • Maybe there's no objective truth about any of this (01:12:06)
  • Practical implications (01:29:21)
  • Why not just let superintelligence figure this out for us? (01:38:07)
  • How could human extinction be a good thing? (01:47:30)
  • Lexical threshold negative utilitarianism (02:12:30)
  • So... should we still try to prevent extinction? (02:25:22)
  • What are the most important questions for people to address here? (02:32:16)
  • Is God GDPR compliant? (02:35:32)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Coordination, transcripts, and web: Katy Moore


