Guilt, imposter syndrome & doing good: 16 past guests share their mental health journeys

"We are aiming for a place where we can decouple the scorecard from our worthiness. It’s of course the case that in trying to optimise the good, we will always be falling short. The question is how much, and in what ways are we not there yet? And if we then extrapolate that to how much and in what ways am I not enough, that’s where we run into trouble." —Hannah Boettcher

What happens when your desire to do good starts to undermine your own wellbeing?

Over the years, we’ve heard from therapists, charity directors, researchers, psychologists, and career advisors — all wrestling with how to do good without falling apart. Today’s episode brings together insights from 16 past guests on the emotional and psychological costs of pursuing a high-impact career to improve the world — and how to best navigate the all-too-common guilt, burnout, perfectionism, and imposter syndrome along the way.

Check out the full transcript and links to learn more: https://80k.info/mh

If you’re dealing with your own mental health concerns, here are some resources that might help:

Chapters:

  • Cold open (00:00:00)
  • Luisa's intro (00:01:32)
  • 80,000 Hours’ former CEO Howie on what his anxiety and self-doubt feels like (00:03:47)
  • Evolutionary psychiatrist Randy Nesse on what emotions are for (00:07:35)
  • Therapist Hannah Boettcher on how striving for impact can affect our self-worth (00:13:45)
  • Luisa Rodriguez on grieving the gap between who you are and who you wish you were (00:16:57)
  • Charity director Cameron Meyer Shorb on managing work-related guilt and shame (00:24:01)
  • Therapist Tim LeBon on aiming for excellence rather than perfection (00:29:18)
  • Author Cal Newport on making time to be alone with our thoughts (00:36:03)
  • 80,000 Hours career advisors Michelle Hutchinson and Habiba Islam on prioritising mental health over career impact (00:40:28)
  • Charity founder Sarah Eustis-Guthrie on the ups and downs of founding an organisation (00:45:52)
  • Our World in Data researcher Hannah Ritchie on feeling like an imposter as a generalist (00:51:28)
  • Moral philosopher Will MacAskill on being proactive about mental health and preventing burnout (01:00:46)
  • Grantmaker Ajeya Cotra on the psychological toll of big open-ended research questions (01:11:00)
  • Researcher and grantmaker Christian Ruhl on how having a stutter affects him personally and professionally (01:19:30)
  • Mercy For Animals’ CEO Leah Garcés on insisting on self-care when doing difficult work (01:32:39)
  • 80,000 Hours’ former CEO Howie on balancing a job and mental illness (01:37:12)
  • Therapist Hannah Boettcher on how self-compassion isn’t self-indulgence (01:40:39)
  • Journalist Kelsey Piper on communicating about mental health in ways that resonate (01:43:32)
  • Luisa's outro (01:46:10)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore

Episodes (295)

#196 – Jonathan Birch on the edge cases of sentience and why they matter

"In the 1980s, it was still apparently common to perform surgery on newborn babies without anaesthetic on both sides of the Atlantic. This led to appalling cases, and to public outcry, and to campaigns to change clinical practice. And as soon as [some courageous scientists] looked for evidence, it showed that this practice was completely indefensible and then the clinical practice was changed. People don’t need convincing anymore that we should take newborn human babies seriously as sentience candidates. But the tale is a useful cautionary tale, because it shows you how deep that overconfidence can run and how problematic it can be. It just underlines this point that overconfidence about sentience is everywhere and is dangerous." —Jonathan BirchIn today’s episode, host Luisa Rodriguez speaks to Dr Jonathan Birch — philosophy professor at the London School of Economics — about his new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. (Check out the free PDF version!)Links to learn more, highlights, and full transcript.They cover:Candidates for sentience, such as humans with consciousness disorders, foetuses, neural organoids, invertebrates, and AIsHumanity’s history of acting as if we’re sure that such beings are incapable of having subjective experiences — and why Jonathan thinks that that certainty is completely unjustified.Chilling tales about overconfident policies that probably caused significant suffering for decades.How policymakers can act ethically given real uncertainty.Whether simulating the brain of the roundworm C. elegans or Drosophila (aka fruit flies) would create minds equally sentient to the biological versions.How new technologies like brain organoids could replace animal testing, and how big the risk is that they could be sentient too.Why Jonathan is so excited about citizens’ assemblies.Jonathan’s conversation with the Dalai Lama about whether insects are sentient.And plenty more.Chapters:Cold open (00:00:00)Luisa’s intro (00:01:20)The interview begins (00:03:04)Why does sentience matter? (00:03:31)Inescapable uncertainty about other minds (00:05:43)The “zone of reasonable disagreement” in sentience research (00:10:31)Disorders of consciousness: comas and minimally conscious states (00:17:06)Foetuses and the cautionary tale of newborn pain (00:43:23)Neural organoids (00:55:49)AI sentience and whole brain emulation (01:06:17)Policymaking at the edge of sentience (01:28:09)Citizens’ assemblies (01:31:13)The UK’s Sentience Act (01:39:45)Ways Jonathan has changed his mind (01:47:26)Careers (01:54:54)Discussing animal sentience with the Dalai Lama (01:59:08)Luisa’s outro (02:01:04)Producer and editor: Keiran HarrisAudio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

15 Aug 2024 · 2h 1min

#195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them

"Computational systems have literally millions of physical and conceptual components, and around 98% of them are embedded into your infrastructure without you ever having heard of them. And an inordinate amount of them can lead to a catastrophic failure of your security assumptions. And because of this, the Iranian secret nuclear programme failed to prevent a breach, most US agencies failed to prevent multiple breaches, most US national security agencies failed to prevent breaches. So ensuring your system is truly secure against highly resourced and dedicated attackers is really, really hard." —Sella NevoIn today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them.Links to learn more, highlights, and full transcript.They cover:Real-world examples of sophisticated security breaches, and what we can learn from them.Why AI model weights might be such a high-value target for adversaries like hackers, rogue states, and other bad actors.The many ways that model weights could be stolen, from using human insiders to sophisticated supply chain hacks.The current best practices in cybersecurity, and why they may not be enough to keep bad actors away.New security measures that Sella hopes can mitigate with the growing risks.Sella’s work using machine learning for flood forecasting, which has significantly reduced injuries and costs from floods across Africa and Asia.And plenty more.Also, RAND is currently hiring for roles in technical and policy information security — check them out if you're interested in this field! Chapters:Cold open (00:00:00)Luisa’s intro (00:00:56)The interview begins (00:02:30)The importance of securing the model weights of frontier AI models (00:03:01)The most sophisticated and surprising security breaches (00:10:22)AI models being leaked (00:25:52)Researching for the RAND report (00:30:11)Who tries to steal model weights? (00:32:21)Malicious code and exploiting zero-days (00:42:06)Human insiders (00:53:20)Side-channel attacks (01:04:11)Getting access to air-gapped networks (01:10:52)Model extraction (01:19:47)Reducing and hardening authorised access (01:38:52)Confidential computing (01:48:05)Red-teaming and security testing (01:53:42)Careers in information security (01:59:54)Sella’s work on flood forecasting systems (02:01:57)Luisa’s outro (02:04:51)Producer and editor: Keiran HarrisAudio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

1 Aug 2024 · 2h 8min

#194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government

"If you’re a power that is an island and that goes by sea, then you’re more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trade. If you are on the Mongolian steppes, then your entire mindset is kill or be killed, conquer or be conquered … the breeding ground for basically everything that all of us consider to be dystopian governance. If you want more utopian governance and less dystopian governance, then find ways to basically change the landscape, to try to make the world look more like mountains and rivers and less like the Mongolian steppes." —Vitalik ButerinCan ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable.Links to learn more, highlights, video, and full transcript.Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive.Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously.But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions.The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive’ technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination.Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply by August 2, 2024. You don’t need a business idea yet — just the hustle to start a technology company.In addition to all of that, host Rob Wiblin and Vitalik discuss:AI regulation disagreements being less about AI in particular, and more whether you’re typically more scared of anarchy or totalitarianism.Vitalik’s updated p(doom).Whether the social impact of blockchain and crypto has been a disappointment.Whether humans can merge with AI, and if that’s even desirable.The most valuable defensive technologies to accelerate.How to trustlessly identify what everyone will agree is misinformationWhether AGI is offence-dominant or defence-dominant.Vitalik’s updated take on effective altruism.Plenty more.Chapters:Cold open (00:00:00)Rob’s intro (00:00:56)The interview begins (00:04:47)Three different views on technology (00:05:46)Vitalik’s updated probability of doom (00:09:25)Technology is amazing, and AI is fundamentally different from other tech (00:15:55)Fear of totalitarianism and finding middle ground (00:22:44)Should AI be more centralised or more decentralised? (00:42:20)Humans merging with AIs to remain relevant (01:06:59)Vitalik’s “d/acc” alternative (01:18:48)Biodefence (01:24:01)Pushback on Vitalik’s vision (01:37:09)How much do people actually disagree? (01:42:14)Cybersecurity (01:47:28)Information defence (02:01:44)Is AI more offence-dominant or defence-dominant? 
(02:21:00)How Vitalik communicates among different camps (02:25:44)Blockchain applications with social impact (02:34:37)Rob’s outro (03:01:00)Producer and editor: Keiran HarrisAudio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic ArmstrongTranscriptions: Katy Moore

26 July 2024 · 3h 4min

#193 – Sihao Huang on navigating the geopolitics of US–China AI competition

"You don’t necessarily need world-leading compute to create highly risky AI systems. The biggest biological design tools right now, like AlphaFold’s, are orders of magnitude smaller in terms of compute requirements than the frontier large language models. And China has the compute to train these systems. And if you’re, for instance, building a cyber agent or something that conducts cyberattacks, perhaps you also don’t need the general reasoning or mathematical ability of a large language model. You train on a much smaller subset of data. You fine-tune it on a smaller subset of data. And those systems — one, if China intentionally misuses them, and two, if they get proliferated because China just releases them as open source, or China does not have as comprehensive AI regulations — this could cause a lot of harm in the world." —Sihao HuangIn today’s episode, host Luisa Rodriguez speaks to Sihao Huang about his work on AI governance and tech policy in China, what’s happening on the ground in China in AI development and regulation, and the importance of US–China cooperation on AI governance.Links to learn more, highlights, video, and full transcript.They cover:Whether the US and China are in an AI race, and the global implications if they are.The state of the art of AI in China.China’s response to American export controls, and whether China is on track to indigenise its semiconductor supply chain.How China’s current AI regulations try to maintain a delicate balance between fostering innovation and keeping strict information control over the Chinese people.Whether China’s extensive AI regulations signal real commitment to safety or just censorship — and how AI is already used in China for surveillance and authoritarian control.How advancements in AI could reshape global power dynamics, and Sihao’s vision of international cooperation to manage this responsibly.And plenty more.Chapters:Cold open (00:00:00)Luisa's intro (00:01:02)The interview begins (00:02:06)Is China in an AI race with the West? (00:03:20)How advanced is Chinese AI? (00:15:21)Bottlenecks in Chinese AI development (00:22:30)China and AI risks (00:27:41)Information control and censorship (00:31:32)AI safety research in China (00:36:31)Could China be a source of catastrophic AI risk? (00:41:58)AI enabling human rights abuses and undermining democracy (00:50:10)China’s semiconductor industry (00:59:47)China’s domestic AI governance landscape (01:29:22)China’s international AI governance strategy (01:49:56)Coordination (01:53:56)Track two dialogues (02:03:04)Misunderstandings Western actors have about Chinese approaches (02:07:34)Complexity thinking (02:14:40)Sihao’s pet bacteria hobby (02:20:34)Luisa's outro (02:22:47)Producer and editor: Keiran HarrisAudio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

18 July 2024 · 2h 23min

#192 – Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US

"Ring one: total annihilation; no cellular life remains. Ring two, another three-mile diameter out: everything is ablaze. Ring three, another three or five miles out on every side: third-degree burns among almost everyone. You are talking about people who may have gone down into the secret tunnels beneath Washington, DC, escaped from the Capitol and such: people are now broiling to death; people are dying from carbon monoxide poisoning; people who followed instructions and went into their basement are dying of suffocation. Everywhere there is death, everywhere there is fire."That iconic mushroom stem and cap that represents a nuclear blast — when a nuclear weapon has been exploded on a city — that stem and cap is made up of people. What is left over of people and of human civilisation." —Annie JacobsenIn today’s episode, host Luisa Rodriguez speaks to Pulitzer Prize finalist and New York Times bestselling author Annie Jacobsen about her latest book, Nuclear War: A Scenario.Links to learn more, highlights, and full transcript.They cover:The most harrowing findings from Annie’s hundreds of hours of interviews with nuclear experts.What happens during the window that the US president would have to decide about nuclear retaliation after hearing news of a possible nuclear attack.The horrific humanitarian impacts on millions of innocent civilians from nuclear strikes.The overlooked dangers of a nuclear-triggered electromagnetic pulse (EMP) attack crippling critical infrastructure within seconds.How we’re on the razor’s edge between the logic of nuclear deterrence and catastrophe, and urgently need reforms to move away from hair-trigger alert nuclear postures.And plenty more.Chapters:Cold open (00:00:00)Luisa’s intro (00:01:03)The interview begins (00:02:28)The first 24 minutes (00:02:59)The Black Book and presidential advisors (00:13:35)False alarms (00:40:43)Russian misperception of US counterattack (00:44:50)A narcissistic madman with a nuclear arsenal (01:00:13)Is escalation inevitable? (01:02:53)Firestorms and rings of annihilation (01:12:56)Nuclear electromagnetic pulses (01:27:34)Continuity of government (01:36:35)Rays of hope (01:41:07)Where we’re headed (01:43:52)Avoiding politics (01:50:34)Luisa’s outro (01:52:29)Producer and editor: Keiran HarrisAudio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

12 July 2024 · 1h 54min

#191 (Part 2) – Carl Shulman on government and society after AGI

This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order!

If we develop artificial general intelligence that's reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone's pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together?

It's common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today's conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere.

Links to learn more, highlights, and full transcript.

As Carl explains, today the most important questions we face as a society remain in the "realm of subjective judgement" — without any "robust, well-founded scientific consensus on how to answer them." But if AI 'evals' and interpretability advance to the point that it's possible to demonstrate which AI models have truly superhuman judgement and give consistently trustworthy advice, society could converge on firm or 'best-guess' answers to far more cases.

If the answers are publicly visible and confirmable by all, the pressure on officials to act on that advice could be great. That's because when it's hard to assess if a line has been crossed or not, we usually give people much more discretion. For instance, a journalist inventing an interview that never happened will get fired because it's an unambiguous violation of honesty norms — but so long as there's no universally agreed-upon standard for selective reporting, that same journalist will have substantial discretion to report information that favours their preferred view more often than that which contradicts it.

Similarly, today we have no generally agreed-upon way to tell when a decision-maker has behaved irresponsibly. But if experience clearly shows that following AI advice is the wise move, not seeking or ignoring such advice could become more like crossing a red line — less like making an understandable mistake and more like fabricating your balance sheet.

To illustrate the possible impact, Carl imagines how the COVID pandemic could have played out in the presence of AI advisors that everyone agrees are exceedingly insightful and reliable. But in practice, a significantly superhuman AI might suggest novel approaches better than any we can suggest.

In the past we've usually found it easier to predict how hard technologies like planes or factories will change than to imagine the social shifts that those technologies will create — and the same is likely happening for AI.

Carl Shulman and host Rob Wiblin discuss the above, as well as:

  • The risk of society using AI to lock in its values
  • The difficulty of preventing coups once AI is key to the military and police
  • What international treaties we need to make this go well
  • How to make AI superhuman at forecasting the future
  • Whether AI will be able to help us with intractable philosophical questions
  • Whether we need dedicated projects to make wise AI advisors, or if it will happen automatically as models scale
  • Why Carl doesn't support AI companies voluntarily pausing AI research, but sees a stronger case for binding international controls once we're closer to 'crunch time'
  • Opportunities for listeners to contribute to making the future go well

Chapters:

  • Cold open (00:00:00)
  • Rob’s intro (00:01:16)
  • The interview begins (00:03:24)
  • COVID-19 concrete example (00:11:18)
  • Sceptical arguments against the effect of AI advisors (00:24:16)
  • Value lock-in (00:33:59)
  • How democracies avoid coups (00:48:08)
  • Where AI could most easily help (01:00:25)
  • AI forecasting (01:04:30)
  • Application to the most challenging topics (01:24:03)
  • How to make it happen (01:37:50)
  • International negotiations and coordination and auditing (01:43:54)
  • Opportunities for listeners (02:00:09)
  • Why Carl doesn't support enforced pauses on AI research (02:03:58)
  • How Carl is feeling about the future (02:15:47)
  • Rob’s outro (02:17:37)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

5 July 2024 · 2h 20min

#191 (Part 1) – Carl Shulman on the economy and national security after AGI

This is the first part of our marathon interview with Carl Shulman. The second episode is on government and society after AGI. You can listen to them in either order!

The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?

Many people sort of consider that hypothetical, but maybe nobody has followed through and considered all the implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they're creating.

Links to learn more, highlights, and full transcript.

Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost $100s, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour.

It's a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field.

It's a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business.

It's a world where the technical challenges around control of robots are rapidly overcome, turning robots into strong, fast, precise, and tireless workers able to accomplish any physical work the economy requires, and prompting a rush to build billions of them and cash in.

As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine 'people' to help them with every aspect of their lives.

And with growth rates this high, it doesn't take long to run up against Earth's physical limits — in this case, the toughest to engineer your way out of is the Earth's ability to release waste heat. If this machine economy and its insatiable demand for power generates more heat than the Earth radiates into space, then it will rapidly heat up and become uninhabitable for humans and other animals.

This creates pressure to move economic activity off-planet. So you could develop effective populations of billions of scientific researchers operating on computer chips orbiting in space, sending the results of their work, such as drug designs, back to Earth for use.

These are just some of the wild implications that could follow naturally from truly embracing the hypothetical: what if we develop AGI that could accomplish everything that the most productive humans can, using the same energy supply?

In today's episode, Carl explains the above, and then host Rob Wiblin pushes back on whether that's realistic or just a cool story, asking:

  • If we're heading towards the above, how come economic growth is slow now and not really increasing?
  • Why have computers and computer chips had so little effect on economic productivity so far?
  • Are self-replicating biological systems a good comparison for self-replicating machine systems?
  • Isn't this just too crazy and weird to be plausible?
  • What bottlenecks would be encountered in supplying energy and natural resources to this growing economy?
  • Might there not be severely declining returns to bigger brains and more training?
  • Wouldn't humanity get scared and pull the brakes if such a transformation kicked off?
  • If this is right, how come economists don't agree?

Finally, Carl addresses the moral status of machine minds themselves. Would they be conscious or otherwise have a claim to moral consideration or rights? And how might humans and machines coexist with neither side dominating or exploiting the other?

Chapters:

  • Cold open (00:00:00)
  • Rob’s intro (00:01:00)
  • Transitioning to a world where AI systems do almost all the work (00:05:21)
  • Economics after an AI explosion (00:14:25)
  • Objection: Shouldn’t we be seeing economic growth rates increasing today? (00:59:12)
  • Objection: Speed of doubling time (01:07:33)
  • Objection: Declining returns to increases in intelligence? (01:11:59)
  • Objection: Physical transformation of the environment (01:17:39)
  • Objection: Should we expect an increased demand for safety and security? (01:29:14)
  • Objection: “This sounds completely whack” (01:36:10)
  • Income and wealth distribution (01:48:02)
  • Economists and the intelligence explosion (02:13:31)
  • Baumol effect arguments (02:19:12)
  • Denying that robots can exist (02:27:18)
  • Classic economic growth models (02:36:12)
  • Robot nannies (02:48:27)
  • Slow integration of decision-making and authority power (02:57:39)
  • Economists’ mistaken heuristics (03:01:07)
  • Moral status of AIs (03:11:45)
  • Rob’s outro (04:11:47)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
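As a rough sanity check on the "20 watts for a fraction of a cent per hour" framing, here is a back-of-the-envelope calculation (a sketch only; the ~US$0.15/kWh retail electricity price is our assumed figure, not one cited in the episode):

$$20\,\mathrm{W} \times 1\,\mathrm{h} = 20\,\mathrm{Wh} = 0.02\,\mathrm{kWh}, \qquad 0.02\,\mathrm{kWh} \times \$0.15/\mathrm{kWh} = \$0.003 \approx 0.3\ \text{cents.}$$

Even at several times that electricity price, an hour of brain-equivalent 20-watt compute stays well under one cent.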

27 June 2024 · 4h 14min

#190 – Eric Schwitzgebel on whether the US is conscious

"One of the most amazing things about planet Earth is that there are complex bags of mostly water — you and me – and we can look up at the stars, and look into our brains, and try to grapple with the most complex, difficult questions that there are. And even if we can’t make great progress on them and don’t come to completely satisfying solutions, just the fact of trying to grapple with these things is kind of the universe looking at itself and trying to understand itself. So we’re kind of this bright spot of reflectiveness in the cosmos, and I think we should celebrate that fact for its own intrinsic value and interestingness." —Eric SchwitzgebelIn today’s episode, host Luisa Rodriguez speaks to Eric Schwitzgebel — professor of philosophy at UC Riverside — about some of the most bizarre and unintuitive claims from his recent book, The Weirdness of the World.Links to learn more, highlights, and full transcript.They cover:Why our intuitions seem so unreliable for answering fundamental questions about reality.What the materialist view of consciousness is, and how it might imply some very weird things — like that the United States could be a conscious entity.Thought experiments that challenge our intuitions — like supersquids that think and act through detachable tentacles, and intelligent species whose brains are made up of a million bugs.Eric’s claim that consciousness and cosmology are universally bizarre and dubious.How to think about borderline states of consciousness, and whether consciousness is more like a spectrum or more like a light flicking on.The nontrivial possibility that we could be dreaming right now, and the ethical implications if that’s true.Why it’s worth it to grapple with the universe’s most complex questions, even if we can’t find completely satisfying solutions.And much more.Chapters:Cold open |00:00:00|Luisa’s intro |00:01:10|Bizarre and dubious philosophical theories |00:03:13|The materialist view of consciousness |00:13:55|What would it mean for the US to be conscious? |00:19:46|Supersquids and antheads thought experiments |00:22:37|Alternatives to the materialist perspective |00:35:19|Are our intuitions useless for thinking about these things? |00:42:55|Key ingredients for consciousness |00:46:46|Reasons to think the US isn’t conscious |01:01:15|Overlapping consciousnesses [01:09:32]Borderline cases of consciousness |01:13:22|Are we dreaming right now? |01:40:29|Will we ever have answers to these dubious and bizarre questions? |01:56:16|Producer and editor: Keiran HarrisAudio engineering lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

7 June 2024 · 2h
