#190 – Eric Schwitzgebel on whether the US is conscious

“One of the most amazing things about planet Earth is that there are complex bags of mostly water — you and me — and we can look up at the stars, and look into our brains, and try to grapple with the most complex, difficult questions that there are. And even if we can’t make great progress on them and don’t come to completely satisfying solutions, just the fact of trying to grapple with these things is kind of the universe looking at itself and trying to understand itself. So we’re kind of this bright spot of reflectiveness in the cosmos, and I think we should celebrate that fact for its own intrinsic value and interestingness.” —Eric Schwitzgebel

In today’s episode, host Luisa Rodriguez speaks to Eric Schwitzgebel — professor of philosophy at UC Riverside — about some of the most bizarre and unintuitive claims from his recent book, The Weirdness of the World.

Links to learn more, highlights, and full transcript.

They cover:

  • Why our intuitions seem so unreliable for answering fundamental questions about reality.
  • What the materialist view of consciousness is, and how it might imply some very weird things — like that the United States could be a conscious entity.
  • Thought experiments that challenge our intuitions — like supersquids that think and act through detachable tentacles, and intelligent species whose brains are made up of a million bugs.
  • Eric’s claim that consciousness and cosmology are universally bizarre and dubious.
  • How to think about borderline states of consciousness, and whether consciousness is more like a spectrum or more like a light flicking on.
  • The nontrivial possibility that we could be dreaming right now, and the ethical implications if that’s true.
  • Why it’s worth it to grapple with the universe’s most complex questions, even if we can’t find completely satisfying solutions.
  • And much more.

Chapters:

  • Cold open |00:00:00|
  • Luisa’s intro |00:01:10|
  • Bizarre and dubious philosophical theories |00:03:13|
  • The materialist view of consciousness |00:13:55|
  • What would it mean for the US to be conscious? |00:19:46|
  • Supersquids and antheads thought experiments |00:22:37|
  • Alternatives to the materialist perspective |00:35:19|
  • Are our intuitions useless for thinking about these things? |00:42:55|
  • Key ingredients for consciousness |00:46:46|
  • Reasons to think the US isn’t conscious |01:01:15|
  • Overlapping consciousnesses |01:09:32|
  • Borderline cases of consciousness |01:13:22|
  • Are we dreaming right now? |01:40:29|
  • Will we ever have answers to these dubious and bizarre questions? |01:56:16|


Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Episodes (324)

#224 – There's a cheap and low-tech way to save humanity from any engineered disease | Andrew Snyder-Beattie

Conventional wisdom is that safeguarding humanity from the worst biological risks — microbes optimised to kill as many as possible — is difficult bordering on impossible, making bioweapons humanity’s ...

2 Oct 2025 · 2h 31min

Inside the Biden admin’s AI policy approach | Jake Sullivan, Biden’s NSA | via The Cognitive Revolution

Jake Sullivan was the US National Security Advisor from 2021–2025. He joined our friends on The Cognitive Revolution podcast in August to discuss AI as a critical national security issue. We thought i...

26 Sep 2025 · 1h 5min

#223 – Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)

At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret?...

15 Sep 2025 · 1h 46min

#222 – Can we tell if an AI is loyal by reading its mind? DeepMind's Neel Nanda (part 1)

We don’t know how AIs think or why they do what they do. Or at least, we don’t know much. That fact is only becoming more troubling as AIs grow more capable and appear on track to wield enormous cultu...

8 Sep 2025 · 3h 1min

#221 – Kyle Fish on the most bizarre findings from 5 AI welfare experiments

What happens when you lock two AI systems in a room together and tell them they can discuss anything they want? According to experiments run by Kyle Fish — Anthropic’s first AI welfare researcher — som...

28 Aug 2025 · 2h 28min

How not to lose your job to AI (article by Benjamin Todd)

About half of people are worried they’ll lose their job to AI. They’re right to be concerned: AI can now complete real-world coding tasks on GitHub, generate photorealistic video, drive a taxi more sa...

31 Jul 2025 · 51min

Rebuilding after apocalypse: What 13 experts say about bouncing back

What happens when civilisation faces its greatest tests? This compilation brings together insights from researchers, defence experts, philosophers, and policymakers on humanity’s ability to survive and...

15 Jul 2025 · 4h 26min

#220 – Ryan Greenblatt on the 4 most likely ways for AI to take over, and the case for and against AGI in <8 years

Ryan Greenblatt — lead author on the explosive paper “Alignment faking in large language models” and chief scientist at Redwood Research — thinks there’s a 25% chance that within four years, AI will b...

8 Jul 2025 · 2h 50min
