#152 – Joe Carlsmith on navigating serious philosophical confusion

What is the nature of the universe? How do we make decisions correctly? What differentiates right actions from wrong ones?

Such fundamental questions have been the subject of philosophical and theological debates for millennia. But, as we all know, and surveys of expert opinion make clear, we are very far from agreement. So... with these most basic questions unresolved, what’s a species to do?

In today's episode, philosopher Joe Carlsmith — Senior Research Analyst at Open Philanthropy — makes the case that many current debates in philosophy ought to leave us confused and humbled. These are themes he discusses in his PhD thesis, A stranger priority? Topics at the outer reaches of effective altruism.

Links to learn more, summary and full transcript.

To convey the disorientation he thinks is appropriate, Joe presents three disconcerting theories, developed by him and his peers, that challenge humanity's self-assured understanding of the world.

The first idea is that we might be living in a computer simulation: in the classic formulation, if most civilisations go on to run many computer simulations of their past history, then most beings who perceive themselves as living through such a history are themselves simulated. Joe prefers a somewhat different way of making the point, but, having looked into it, he hasn't identified any decisive rebuttal to this 'simulation argument.'

If true, it could revolutionise our comprehension of the universe and the way we ought to live...

The other two ideas are cut for length; click through to the full post to read them.

These are just three instances of a much broader set of ideas that some have dubbed the "train to crazy town." Basically, if you commit to always taking philosophy and arguments seriously, and to acting on them, you can end up in some pretty crazy and impractical places. So what should we do with this buffet of plausible-sounding but bewildering arguments?

Joe and Rob discuss to what extent this should prompt us to pay less attention to philosophy, and how we as individuals can cope psychologically with feeling out of our depth just trying to make the most basic sense of the world.

In today's challenging conversation, Joe and Rob discuss all of the above, as well as:

  • What Joe doesn't like about the drowning child thought experiment
  • An alternative thought experiment about helping a stranger that might better highlight our intrinsic desire to help others
  • What Joe doesn't like about the expression “the train to crazy town”
  • Whether Elon Musk should place a higher probability on living in a simulation than most other people
  • Whether the deterministic twin prisoner’s dilemma, if fully appreciated, gives us an extra reason to keep promises
  • To what extent learning to doubt our own judgement about difficult questions (so-called “epistemic learned helplessness”) is a good thing
  • How strong the case is that advanced AI will engage in generalised power-seeking behaviour

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:09:21)
  • Downsides of the drowning child thought experiment (00:12:24)
  • Making demanding moral values more resonant (00:24:56)
  • The crazy train (00:36:48)
  • Whether we’re living in a simulation (00:48:50)
  • Reasons to doubt we’re living in a simulation, and practical implications if we are (00:57:02)
  • Rob's explainer about anthropics (01:12:27)
  • Back to the interview (01:19:53)
  • Decision theory and affecting the past (01:23:33)
  • Rob's explainer about decision theory (01:29:19)
  • Back to the interview (01:39:55)
  • Newcomb's problem (01:46:14)
  • Practical implications of acausal decision theory (01:50:04)
  • The hitchhiker in the desert (01:55:57)
  • Acceptance within philosophy (02:01:22)
  • Infinite ethics (02:04:35)
  • Rob's explainer about the expanding spheres approach (02:17:05)
  • Back to the interview (02:20:27)
  • Infinite ethics and the utilitarian dream (02:27:42)
  • Rob's explainer about epicycles (02:29:30)
  • Back to the interview (02:31:26)
  • What to do with all of these weird philosophical ideas (02:35:28)
  • Welfare longtermism and wisdom longtermism (02:53:23)
  • Epistemic learned helplessness (03:03:10)
  • Power-seeking AI (03:12:41)
  • Rob’s outro (03:25:45)

Producer: Keiran Harris

Audio mastering: Milo McGuire and Ben Cordell

Transcriptions: Katy Moore

Episodes (332)

#230 – Dean Ball on how AI is a huge deal — but we shouldn’t regulate it yet

Former White House staffer Dean Ball thinks it's very likely some form of 'superintelligence' arrives in under 20 years. He thinks AI being used for bioweapon research is "a real threat model, obvious...

10 December 2025, 2h 54min

#229 – Marius Hobbhahn on the race to solve AI scheming before models go superhuman

We often worry about AI models “hallucinating” or making honest mistakes. But what happens when a model knows the truth, but decides to deceive you anyway to achieve a goal of its own? This isn’t sci-...

3 December 2025, 3h 3min

Rob & Luisa chat kids, the 2016 fertility crash, and how the 50s invented parenting that makes us miserable

Global fertility rates aren’t just falling: the rate of decline is accelerating. From 2006 to 2016, fertility dropped gradually, but since 2016 the rate of decline has increased 4.5-fold. In many weal...

25 November 2025, 1h 59min

#228 – Eileen Yam on how we're completely out of touch with what the public thinks about AI

If you work in AI, you probably think it’s going to boost productivity, create wealth, advance science, and improve your life. If you’re a member of the American public, you probably strongly disagree...

20 November 2025, 1h 43min

OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)

Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and k...

11 November 2025, 1h 56min

#227 – Helen Toner on the geopolitics of AGI in China and the Middle East

With the US racing to develop AGI and superintelligence ahead of China, you might expect the two countries to be negotiating how they’ll deploy AI, including in the military, without coming to blows. ...

5 November 2025, 2h 20min

#226 – Holden Karnofsky on unexploited opportunities to make AI safer — and all his AGI takes

For years, working on AI safety usually meant theorising about the ‘alignment problem’ or trying to convince other people to give a damn. If you could find any way to help, the work was frustrating an...

30 October 2025, 4h 30min

#225 – Daniel Kokotajlo on what a hyperspeed robot economy might look like

When Daniel Kokotajlo talks to security experts at major AI labs, they tell him something chilling: “Of course we’re probably penetrated by the CCP already, and if they really wanted something, they c...

27 October 2025, 2h 12min
