#152 – Joe Carlsmith on navigating serious philosophical confusion

What is the nature of the universe? How do we make decisions correctly? What differentiates right actions from wrong ones?

Such fundamental questions have been the subject of philosophical and theological debates for millennia. But, as we all know, and surveys of expert opinion make clear, we are very far from agreement. So... with these most basic questions unresolved, what’s a species to do?

In today's episode, philosopher Joe Carlsmith — Senior Research Analyst at Open Philanthropy — makes the case that many current debates in philosophy ought to leave us confused and humbled. These are themes he discusses in his PhD thesis, A stranger priority? Topics at the outer reaches of effective altruism.

Links to learn more, summary and full transcript.

To help transmit the disorientation he thinks is appropriate, Joe presents three disconcerting theories — drawn from his own work and that of his peers — that challenge humanity's self-assured understanding of the world.

The first idea is that we might be living in a computer simulation. In the classic formulation, if most civilisations go on to run many computer simulations of their past history, then most beings who perceive themselves as living in such a history are in fact living in simulations. Joe prefers a somewhat different way of making the point, but, having looked into it, he hasn't identified any decisive rebuttal to this 'simulation argument.'

If true, it could revolutionise our comprehension of the universe and the way we ought to live...

The other two ideas are cut for length — click here to read the full post.

These are just three particular instances of a much broader set of ideas that some have dubbed the "train to crazy town." Basically, if you commit to always take philosophy and arguments seriously, and try to act on them, it can lead to what seem like some pretty crazy and impractical places. So what should we do with this buffet of plausible-sounding but bewildering arguments?

Joe and Rob discuss to what extent this should prompt us to pay less attention to philosophy, and how we as individuals can cope psychologically with feeling out of our depth just trying to make the most basic sense of the world.

In today's challenging conversation, Joe and Rob discuss all of the above, as well as:

  • What Joe doesn't like about the drowning child thought experiment
  • An alternative thought experiment about helping a stranger that might better highlight our intrinsic desire to help others
  • What Joe doesn't like about the expression “the train to crazy town”
  • Whether Elon Musk should place a higher probability on living in a simulation than most other people
  • Whether the deterministic twin prisoner’s dilemma, if fully appreciated, gives us an extra reason to keep promises
  • To what extent learning to doubt our own judgement about difficult questions — so-called “epistemic learned helplessness” — is a good thing
  • How strong the case is that advanced AI will engage in generalised power-seeking behaviour

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:09:21)
  • Downsides of the drowning child thought experiment (00:12:24)
  • Making demanding moral values more resonant (00:24:56)
  • The crazy train (00:36:48)
  • Whether we’re living in a simulation (00:48:50)
  • Reasons to doubt we’re living in a simulation, and practical implications if we are (00:57:02)
  • Rob's explainer about anthropics (01:12:27)
  • Back to the interview (01:19:53)
  • Decision theory and affecting the past (01:23:33)
  • Rob's explainer about decision theory (01:29:19)
  • Back to the interview (01:39:55)
  • Newcomb's problem (01:46:14)
  • Practical implications of acausal decision theory (01:50:04)
  • The hitchhiker in the desert (01:55:57)
  • Acceptance within philosophy (02:01:22)
  • Infinite ethics (02:04:35)
  • Rob's explainer about the expanding spheres approach (02:17:05)
  • Back to the interview (02:20:27)
  • Infinite ethics and the utilitarian dream (02:27:42)
  • Rob's explainer about epicycles (02:29:30)
  • Back to the interview (02:31:26)
  • What to do with all of these weird philosophical ideas (02:35:28)
  • Welfare longtermism and wisdom longtermism (02:53:23)
  • Epistemic learned helplessness (03:03:10)
  • Power-seeking AI (03:12:41)
  • Rob’s outro (03:25:45)

Producer: Keiran Harris

Audio mastering: Milo McGuire and Ben Cordell

Transcriptions: Katy Moore

