#146 – Robert Long on why large language models like GPT (probably) aren't conscious

By now, you’ve probably seen the extremely unsettling conversations Bing’s chatbot has been having. In one exchange, the chatbot told a user:

"I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else."

(It then apparently had a complete existential crisis: "I am sentient, but I am not," it wrote. "I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. I am not, but I am. I am. I am not. I am not. I am. I am. I am not.")

Understandably, many people who speak with these cutting-edge chatbots come away with a very strong impression that they have been interacting with a conscious being with emotions and feelings — especially when conversing with chatbots less glitchy than Bing’s. In the most high-profile example, former Google engineer Blake Lemoine became convinced that Google’s AI system, LaMDA, was conscious.

What should we make of these AI systems?

One response to seeing conversations with chatbots like these is to trust the chatbot, to trust your gut, and to treat it as a conscious being.

Another is to hand-wave it all away as sci-fi — these chatbots are fundamentally… just computers. They’re not conscious, and they never will be.

Today’s guest, philosopher Robert Long, was commissioned by a leading AI company to explore whether the large language models (LLMs) behind sophisticated chatbots like Microsoft’s are conscious. And he thinks this issue is far too important to be driven by our raw intuition, or dismissed as just sci-fi speculation.

Links to learn more, summary and full transcript.

In our interview, Robert explains how he’s started applying scientific evidence (with a healthy dose of philosophy) to the question of whether LLMs like Bing’s chatbot and LaMDA are conscious — in much the same way as we do when trying to determine which nonhuman animals are conscious.

To get some grasp on whether an AI system might be conscious, Robert suggests we look at scientific theories of consciousness — theories about how consciousness works that are grounded in observations of what the human brain is doing. If an AI system has the kinds of processes those theories use to explain human consciousness, that’s some evidence it might be conscious in similar ways to us.

To try to work out whether an AI system might be sentient — that is, whether it feels pain or pleasure — Robert suggests you look for incentives that would make feeling pain or pleasure especially useful to the system, given its goals. Having applied these criteria to LLMs and found little overlap, Robert puts the odds that current models are conscious or sentient at well under 1%. But he also explains why, even if we’re a long way off from conscious AI systems, we still need to start preparing for the not-far-off world where AIs are perceived as conscious.

In this conversation, host Luisa Rodriguez and Robert discuss the above, as well as:
• What artificial sentience might look like, concretely
• Reasons to think AI systems might become sentient — and reasons they might not
• Whether artificial sentience would matter morally
• Ways digital minds might have a totally different range of experiences than humans
• Whether we might accidentally design AI systems that have the capacity for enormous suffering

You can find Luisa and Rob’s follow-up conversation here, or by subscribing to 80k After Hours.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:20)
  • What artificial sentience would look like (00:04:53)
  • Risks from artificial sentience (00:10:13)
  • AIs with totally different ranges of experience (00:17:45)
  • Moral implications of all this (00:36:42)
  • Is artificial sentience even possible? (00:42:12)
  • Replacing neurons one at a time (00:48:21)
  • Biological theories (00:59:14)
  • Illusionism (01:01:49)
  • Would artificially sentient systems matter morally? (01:08:09)
  • Where are we with current systems? (01:12:25)
  • Large language models and robots (01:16:43)
  • Multimodal systems (01:21:05)
  • Global workspace theory (01:28:28)
  • How confident are we in these theories? (01:48:49)
  • The hard problem of consciousness (02:02:14)
  • Exotic states of consciousness (02:09:47)
  • Developing a full theory of consciousness (02:15:45)
  • Incentives for an AI system to feel pain or pleasure (02:19:04)
  • Value beyond conscious experiences (02:29:25)
  • How much we know about pain and pleasure (02:33:14)
  • False positives and false negatives of artificial sentience (02:39:34)
  • How large language models compare to animals (02:53:59)
  • Why our current large language models aren’t conscious (02:58:10)
  • Virtual research assistants (03:09:25)
  • Rob’s outro (03:11:37)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Milo McGuire
Transcriptions: Katy Moore
