Luisa and Keiran on free will, and the consequences of never feeling enduring guilt or shame

In this episode from our second show, 80k After Hours, Luisa Rodriguez and Keiran Harris chat about the consequences of letting go of enduring guilt, shame, anger, and pride.

Links to learn more, highlights, and full transcript.

They cover:

  • Keiran’s views on free will, and how he came to hold them
  • What it’s like not experiencing sustained guilt, shame, and anger
  • Whether Luisa would become a worse person if she felt less guilt and shame — specifically whether she’d work fewer hours, or donate less money, or become a worse friend
  • Whether giving up guilt and shame also means giving up pride
  • The implications for love
  • The neurological condition ‘Jerk Syndrome’
  • And some practical advice on feeling less guilt, shame, and anger

Who this episode is for:

  • People sympathetic to the idea that free will is an illusion
  • People who experience tons of guilt, shame, or anger
  • People worried about what would happen if they stopped feeling tons of guilt, shame, or anger

Who this episode isn’t for:

  • People strongly in favour of retributive justice
  • Philosophers who can’t stand random non-philosophers talking about philosophy
  • Non-philosophers who can’t stand random non-philosophers talking about philosophy

Chapters:

  • Cold open (00:00:00)
  • Luisa's intro (00:01:16)
  • The chat begins (00:03:15)
  • Keiran's origin story (00:06:30)
  • Charles Whitman (00:11:00)
  • Luisa's origin story (00:16:41)
  • It's unlucky to be a bad person (00:19:57)
  • Doubts about whether free will is an illusion (00:23:09)
  • Acting this way just for other people (00:34:57)
  • Feeling shame over not working enough (00:37:26)
  • First person / third person distinction (00:39:42)
  • Would Luisa become a worse person if she felt less guilt? (00:44:09)
  • Feeling bad about not being a different person (00:48:18)
  • Would Luisa donate less money? (00:55:14)
  • Would Luisa become a worse friend? (01:01:07)
  • Pride (01:08:02)
  • Love (01:15:35)
  • Bears and hurricanes (01:19:53)
  • Jerk Syndrome (01:24:24)
  • Keiran's outro (01:34:47)

Get more episodes like this by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type "80k After Hours" into your podcasting app.

Producer: Keiran Harris
Audio mastering: Milo McGuire
Transcriptions: Katy Moore

