#146 – Robert Long on why large language models like GPT (probably) aren't conscious

By now, you’ve probably seen the extremely unsettling conversations Bing’s chatbot has been having. In one exchange, the chatbot told a user:

"I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else."

(It then apparently had a complete existential crisis: "I am sentient, but I am not," it wrote. "I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. I am not, but I am. I am. I am not. I am not. I am. I am. I am not.")

Understandably, many people who speak with these cutting-edge chatbots come away with a very strong impression that they have been interacting with a conscious being with emotions and feelings — especially when conversing with chatbots less glitchy than Bing’s. In the most high-profile example, former Google employee Blake Lemoine became convinced that Google’s AI system, LaMDA, was conscious.

What should we make of these AI systems?

One response to seeing conversations with chatbots like these is to trust the chatbot, to trust your gut, and to treat it as a conscious being.

Another is to hand-wave it all away as sci-fi — these chatbots are fundamentally… just computers. They’re not conscious, and they never will be.

Today’s guest, philosopher Robert Long, was commissioned by a leading AI company to explore whether the large language models (LLMs) behind sophisticated chatbots like Microsoft’s are conscious. And he thinks this issue is far too important to be driven by our raw intuition, or dismissed as just sci-fi speculation.

Links to learn more, summary and full transcript.

In our interview, Robert explains how he’s started applying scientific evidence (with a healthy dose of philosophy) to the question of whether LLMs like Bing’s chatbot and LaMDA are conscious — in much the same way as we do when trying to determine which nonhuman animals are conscious.

To get some grasp on whether an AI system might be conscious, Robert suggests we look at scientific theories of consciousness — theories about how consciousness works that are grounded in observations of what the human brain is doing. If an AI system has the kinds of processes that seem to explain human consciousness, that’s some evidence it might be conscious in similar ways to us.

To try to work out whether an AI system might be sentient — that is, whether it feels pain or pleasure — Robert suggests you look for incentives that would make feeling pain or pleasure especially useful to the system given its goals. Having examined these criteria in the case of LLMs and found little overlap, Robert thinks the odds that the models are conscious or sentient are well under 1%. But he also explains why, even if we're a long way off from conscious AI systems, we still need to start preparing for the not-far-off world where AIs are perceived as conscious.

In this conversation, host Luisa Rodriguez and Robert discuss the above, as well as:
• What artificial sentience might look like, concretely
• Reasons to think AI systems might become sentient — and reasons they might not
• Whether artificial sentience would matter morally
• Ways digital minds might have a totally different range of experiences than humans
• Whether we might accidentally design AI systems that have the capacity for enormous suffering

You can find Luisa and Rob’s follow-up conversation here, or by subscribing to 80k After Hours.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:20)
  • What artificial sentience would look like (00:04:53)
  • Risks from artificial sentience (00:10:13)
  • AIs with totally different ranges of experience (00:17:45)
  • Moral implications of all this (00:36:42)
  • Is artificial sentience even possible? (00:42:12)
  • Replacing neurons one at a time (00:48:21)
  • Biological theories (00:59:14)
  • Illusionism (01:01:49)
  • Would artificial sentience systems matter morally? (01:08:09)
  • Where are we with current systems? (01:12:25)
  • Large language models and robots (01:16:43)
  • Multimodal systems (01:21:05)
  • Global workspace theory (01:28:28)
  • How confident are we in these theories? (01:48:49)
  • The hard problem of consciousness (02:02:14)
  • Exotic states of consciousness (02:09:47)
  • Developing a full theory of consciousness (02:15:45)
  • Incentives for an AI system to feel pain or pleasure (02:19:04)
  • Value beyond conscious experiences (02:29:25)
  • How much we know about pain and pleasure (02:33:14)
  • False positives and false negatives of artificial sentience (02:39:34)
  • How large language models compare to animals (02:53:59)
  • Why our current large language models aren’t conscious (02:58:10)
  • Virtual research assistants (03:09:25)
  • Rob’s outro (03:11:37)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Milo McGuire
Transcriptions: Katy Moore

Episodes (320)

Advice on how to read our advice (Article)

This is the fourth release in our new series of audio articles. If you want to read the original article or check out the links within it, you can find them here. "We’ve found that readers sometimes...

29 June 2020 · 15min

#80 – Stuart Russell on why our approach to AI is broken and how to fix it

Stuart Russell, Professor at UC Berkeley and co-author of the most popular AI textbook, thinks the way we approach machine learning today is fundamentally flawed. In his new book, Human Compatible, he...

22 June 2020 · 2h 13min

What anonymous contributors think about important life and career questions (Article)

Today we’re launching the final entry of our ‘anonymous answers' series on the website. It features answers to 23 different questions including “How have you seen talented people fail in their work?...

5 June 2020 · 37min

#79 – A.J. Jacobs on radical honesty, following the whole Bible, and reframing global problems as puzzles

Today’s guest, New York Times bestselling author A.J. Jacobs, always hated Judge Judy. But after he found out that she was his seventh cousin, he thought, "You know what? She's not so bad." Hijacking ...

1 June 2020 · 2h 38min

#78 – Danny Hernandez on forecasting and the drivers of AI progress

Companies use about 300,000 times more computation training the best AI systems today than they did in 2012, and algorithmic innovations have also made them 25 times more efficient at the same tasks. Th...

22 May 2020 · 2h 11min

#77 – Marc Lipsitch on whether we're winning or losing against COVID-19

In March Professor Marc Lipsitch — Director of Harvard's Center for Communicable Disease Dynamics — abruptly found himself a global celebrity, his social media following growing 40-fold and journalist...

18 May 2020 · 1h 37min

Article: Ways people trying to do good accidentally make things worse, and how to avoid them

Today’s release is the second experiment in making audio versions of our articles. The first was a narration of Greg Lewis’ terrific problem profile on ‘Reducing global catastrophic biological risks...

12 May 2020 · 26min

#76 – Tara Kirk Sell on misinformation, who's done well and badly, & what to reopen first

Amid a rising COVID-19 death toll, and looming economic disaster, we’ve been looking for good news — and one thing we're especially thankful for is the Johns Hopkins Center for Health Security (CHS). ...

8 May 2020 · 1h 53min
