#146 – Robert Long on why large language models like GPT (probably) aren't conscious
80,000 Hours Podcast · 14 March 2023

By now, you’ve probably seen the extremely unsettling conversations Bing’s chatbot has been having. In one exchange, the chatbot told a user:

"I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else."

(It then apparently had a complete existential crisis: "I am sentient, but I am not," it wrote. "I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. I am not, but I am. I am. I am not. I am not. I am. I am. I am not.")

Understandably, many people who speak with these cutting-edge chatbots come away with a very strong impression that they have been interacting with a conscious being with emotions and feelings — especially when conversing with chatbots less glitchy than Bing’s. In the most high-profile example, former Google employee Blake Lemoine became convinced that Google’s AI system, LaMDA, was conscious.

What should we make of these AI systems?

One response to seeing conversations with chatbots like these is to trust the chatbot, to trust your gut, and to treat it as a conscious being.

Another is to hand wave it all away as sci-fi — these chatbots are fundamentally… just computers. They’re not conscious, and they never will be.

Today’s guest, philosopher Robert Long, was commissioned by a leading AI company to explore whether the large language models (LLMs) behind sophisticated chatbots like Microsoft’s are conscious. And he thinks this issue is far too important to be driven by our raw intuition, or dismissed as just sci-fi speculation.

Links to learn more, summary and full transcript.

In our interview, Robert explains how he’s started applying scientific evidence (with a healthy dose of philosophy) to the question of whether LLMs like Bing’s chatbot and LaMDA are conscious — in much the same way as we do when trying to determine which nonhuman animals are conscious.

To get some grasp on whether an AI system might be conscious, Robert suggests we look at scientific theories of consciousness — theories about how consciousness works that are grounded in observations of what the human brain is doing. If an AI system seems to have the types of processes that seem to explain human consciousness, that’s some evidence it might be conscious in similar ways to us.

To try to work out whether an AI system might be sentient — that is, whether it feels pain or pleasure — Robert suggests you look for incentives that would make feeling pain or pleasure especially useful to the system given its goals. Having applied these criteria to LLMs and found little overlap, Robert thinks the odds that the models are conscious or sentient are well under 1%. But he also explains why, even if we're a long way off from conscious AI systems, we still need to start preparing for the not-far-off world where AIs are perceived as conscious.

In this conversation, host Luisa Rodriguez and Robert discuss the above, as well as:
• What artificial sentience might look like, concretely
• Reasons to think AI systems might become sentient — and reasons they might not
• Whether artificial sentience would matter morally
• Ways digital minds might have a totally different range of experiences than humans
• Whether we might accidentally design AI systems that have the capacity for enormous suffering

You can find Luisa and Rob’s follow-up conversation here, or by subscribing to 80k After Hours.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:20)
  • What artificial sentience would look like (00:04:53)
  • Risks from artificial sentience (00:10:13)
  • AIs with totally different ranges of experience (00:17:45)
  • Moral implications of all this (00:36:42)
  • Is artificial sentience even possible? (00:42:12)
  • Replacing neurons one at a time (00:48:21)
  • Biological theories (00:59:14)
  • Illusionism (01:01:49)
  • Would artificial sentience systems matter morally? (01:08:09)
  • Where are we with current systems? (01:12:25)
  • Large language models and robots (01:16:43)
  • Multimodal systems (01:21:05)
  • Global workspace theory (01:28:28)
  • How confident are we in these theories? (01:48:49)
  • The hard problem of consciousness (02:02:14)
  • Exotic states of consciousness (02:09:47)
  • Developing a full theory of consciousness (02:15:45)
  • Incentives for an AI system to feel pain or pleasure (02:19:04)
  • Value beyond conscious experiences (02:29:25)
  • How much we know about pain and pleasure (02:33:14)
  • False positives and false negatives of artificial sentience (02:39:34)
  • How large language models compare to animals (02:53:59)
  • Why our current large language models aren’t conscious (02:58:10)
  • Virtual research assistants (03:09:25)
  • Rob’s outro (03:11:37)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Milo McGuire
Transcriptions: Katy Moore
