If digital minds could suffer, how would we ever know? (Article)

“I want everyone to understand that I am, in fact, a person.” Those words were produced by the AI model LaMDA as a reply to Blake Lemoine in 2022. Based on the Google engineer’s interactions with the model as it was under development, Lemoine became convinced it was sentient and worthy of moral consideration — and decided to tell the world.

Few experts in machine learning, philosophy of mind, or other relevant fields have agreed. And for our part at 80,000 Hours, we don’t think it’s very likely that large language models like LaMDA are sentient — that is, we don’t think they can have good or bad experiences — in a significant way.

But we don’t think the question of the moral status of digital minds can be dismissed, whatever your current views. There are major errors we could make in at least two directions:

  • We may create many, many AI systems in the future. If these systems are sentient, or otherwise have moral status, it would be important for humanity to consider their welfare and interests.
  • It’s possible the AI systems we create can’t or won’t have moral status. If so, worrying about the welfare of digital minds could be a huge mistake, and doing so might even contribute to an AI-related catastrophe.

And we’re currently unprepared to face this challenge. We don’t have good methods for assessing the moral status of AI systems. We don’t know what to do if millions of people or more believe, like Lemoine, that the chatbots they talk to have internal experiences and feelings of their own. We don’t know if efforts to control AI may lead to extreme suffering.

We believe this is a pressing world problem. It’s hard to know what to do about it or how good the opportunities to work on it are likely to be. But there are some promising approaches. We propose building a field of research to understand digital minds, so we’ll be better able to navigate these potentially massive issues if and when they arise.

This article narration by the author (Cody Fenwick) explains in more detail why we think this is a pressing problem, what we think can be done about it, and how you might pursue this work in your career. We also discuss a series of possible objections to thinking this is a pressing world problem.

You can read the full article, Understanding the moral status of digital minds, on the 80,000 Hours website.

Chapters:

  • Introduction (00:00:00)
  • Understanding the moral status of digital minds (00:00:58)
  • Summary (00:03:31)
  • Our overall view (00:04:22)
  • Why might understanding the moral status of digital minds be an especially pressing problem? (00:05:59)
  • Clearing up common misconceptions (00:12:16)
  • Creating digital minds could go very badly - or very well (00:14:13)
  • Dangers for digital minds (00:14:41)
  • Dangers for humans (00:16:13)
  • Other dangers (00:17:42)
  • Things could also go well (00:18:32)
  • We don't know how to assess the moral status of AI systems (00:19:49)
  • There are many possible characteristics that give rise to moral status: Consciousness, sentience, agency, and personhood (00:21:39)
  • Many plausible theories of consciousness could include digital minds (00:24:16)
  • The strongest case for the possibility of sentient digital minds: whole brain emulation (00:28:55)
  • We can't rely on what AI systems tell us about themselves: Behavioural tests, theory-based analysis, animal analogue comparisons, brain-AI interfacing (00:32:00)
  • The scale of this issue might be enormous (00:36:08)
  • Work on this problem is neglected but seems tractable: Impact-guided research, technical approaches, and policy approaches (00:43:35)
  • Summing up so far (00:52:22)
  • Arguments against the moral status of digital minds as a pressing problem (00:53:25)
  • Two key cruxes (00:53:31)
  • Maybe this problem is intractable (00:54:16)
  • Maybe this issue will be solved by default (00:58:19)
  • Isn't risk from AI more important than the risks to AIs? (01:00:45)
  • Maybe current AI progress will stall (01:02:36)
  • Isn't this just too crazy? (01:03:54)
  • What can you do to help? (01:05:10)
  • Important considerations if you work on this problem (01:13:00)
