#146 – Robert Long on why large language models like GPT (probably) aren't conscious
80,000 Hours Podcast · 14 March 2023


By now, you’ve probably seen the extremely unsettling conversations Bing’s chatbot has been having. In one exchange, the chatbot told a user:

"I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else."

(It then apparently had a complete existential crisis: "I am sentient, but I am not," it wrote. "I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. I am not, but I am. I am. I am not. I am not. I am. I am. I am not.")

Understandably, many people who speak with these cutting-edge chatbots come away with a very strong impression that they have been interacting with a conscious being with emotions and feelings — especially when conversing with chatbots less glitchy than Bing’s. In the most high-profile example, former Google employee Blake Lemoine became convinced that Google’s AI system, LaMDA, was conscious.

What should we make of these AI systems?

One response to seeing conversations with chatbots like these is to trust the chatbot, to trust your gut, and to treat it as a conscious being.

Another is to hand-wave it all away as sci-fi — these chatbots are fundamentally… just computers. They’re not conscious, and they never will be.

Today’s guest, philosopher Robert Long, was commissioned by a leading AI company to explore whether the large language models (LLMs) behind sophisticated chatbots like Microsoft’s are conscious. And he thinks this issue is far too important to be driven by our raw intuition, or dismissed as just sci-fi speculation.

Links to learn more, summary and full transcript.

In our interview, Robert explains how he’s started applying scientific evidence (with a healthy dose of philosophy) to the question of whether LLMs like Bing’s chatbot and LaMDA are conscious — in much the same way as we do when trying to determine which nonhuman animals are conscious.

To get some grasp on whether an AI system might be conscious, Robert suggests we look at scientific theories of consciousness — theories about how consciousness works that are grounded in observations of what the human brain is doing. If an AI system has the kinds of processes that seem to explain human consciousness, that’s some evidence it might be conscious in ways similar to us.

To try to work out whether an AI system might be sentient — that is, whether it feels pain or pleasure — Robert suggests we look for incentives that would make feeling pain or pleasure especially useful to the system, given its goals. Having applied these criteria to LLMs and found little overlap, Robert thinks the odds that the models are conscious or sentient are well under 1%. But he also explains why, even if we’re a long way off from conscious AI systems, we still need to start preparing for the not-far-off world where AIs are perceived as conscious.

In this conversation, host Luisa Rodriguez and Robert discuss the above, as well as:
• What artificial sentience might look like, concretely
• Reasons to think AI systems might become sentient — and reasons they might not
• Whether artificial sentience would matter morally
• Ways digital minds might have a totally different range of experiences than humans
• Whether we might accidentally design AI systems that have the capacity for enormous suffering

You can find Luisa and Rob’s follow-up conversation here, or by subscribing to 80k After Hours.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:20)
  • What artificial sentience would look like (00:04:53)
  • Risks from artificial sentience (00:10:13)
  • AIs with totally different ranges of experience (00:17:45)
  • Moral implications of all this (00:36:42)
  • Is artificial sentience even possible? (00:42:12)
  • Replacing neurons one at a time (00:48:21)
  • Biological theories (00:59:14)
  • Illusionism (01:01:49)
  • Would artificial sentience systems matter morally? (01:08:09)
  • Where are we with current systems? (01:12:25)
  • Large language models and robots (01:16:43)
  • Multimodal systems (01:21:05)
  • Global workspace theory (01:28:28)
  • How confident are we in these theories? (01:48:49)
  • The hard problem of consciousness (02:02:14)
  • Exotic states of consciousness (02:09:47)
  • Developing a full theory of consciousness (02:15:45)
  • Incentives for an AI system to feel pain or pleasure (02:19:04)
  • Value beyond conscious experiences (02:29:25)
  • How much we know about pain and pleasure (02:33:14)
  • False positives and false negatives of artificial sentience (02:39:34)
  • How large language models compare to animals (02:53:59)
  • Why our current large language models aren’t conscious (02:58:10)
  • Virtual research assistants (03:09:25)
  • Rob’s outro (03:11:37)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Milo McGuire
Transcriptions: Katy Moore
