#42 - Amanda Askell on moral empathy, the value of information & the ethics of infinity

Consider two familiar moments at a family reunion.

Our host, Uncle Bill, takes pride in his barbecuing skills. But his niece Becky says that she now refuses to eat meat. A groan goes round the table; the family mostly treat this as an annoying picky preference. But if they seriously considered it as a moral position, as they might if Becky were instead avoiding meat on religious grounds, it would usually receive a very different reaction.

An hour later Bill expresses a strong objection to abortion. Again, a groan goes round the table; the family mostly think he has no business trying to foist his regressive preference on anyone. But if his comment were considered not as a matter of personal taste but as a moral position - that Bill genuinely believes he's opposing mass murder - it might start a serious conversation.

Amanda Askell, who recently completed a PhD in philosophy at NYU focused on the ethics of infinity, thinks that we often betray a complete lack of moral empathy. People across the political spectrum struggle to get inside the minds of those they disagree with and see issues from their point of view.

Links to learn more, summary and full transcript.

This often happens because of confusion between preferences and moral positions.

Assuming good faith on the part of the person you disagree with, and actually engaging with the beliefs they claim to hold, is perhaps the best remedy for our inability to make progress on controversial issues.

One potential path for progress concerns contraception: many people who are anti-abortion are also anti-contraception. But they'll usually think that abortion is much worse than contraception, so why can't we compromise and agree to make contraception much more widely available?

According to Amanda, a charitable explanation for this is that people who are anti-abortion and anti-contraception engage in moral reasoning and advocacy based on what, in their minds, is the best of all possible worlds: one where people neither use contraception nor get abortions.

So instead of arguing about abortion and contraception, we could discuss the underlying principle that one should advocate for the best possible world, rather than the best probable world.

If such ethical beliefs can be broken down this way, absent political toxicity, it might be possible to actually converge on a key point of agreement.

Today’s episode blends such everyday topics with in-depth philosophy, including:

* What is 'moral cluelessness' and how can we work around it?
* Amanda's biggest criticisms of social justice activists, and of critics of social justice activists
* Is there an ethical difference between prison and corporal punishment?
* How to resolve 'infinitarian paralysis' - the inability to make decisions when infinities are involved.
* What’s effective altruism doing wrong?
* How should we think about jargon? Are a lot of people who don’t communicate clearly just scamming us?
* How can people be more successful within the cocoon of school and university?
* How did Amanda find doing a philosophy PhD, and how will she decide what to do now?

Links:
* Career review: Congressional staffer
* Randomised experiment on quitting
* Psychology replication quiz
* Should you focus on your comparative advantage?

Get this episode by subscribing: type 80,000 Hours into your podcasting app.

The 80,000 Hours podcast is produced by Keiran Harris.

Episodes (324)

AGI Won't End Mutually Assured Destruction (Probably) | Sam Winter-Levy & Nikita Lalwani

How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that may define the stakes of today’s AI race. Nuclear deterrence rests on a state’s capacity to...

10 Mar 1h 11min

Using AI to enhance societal decision making (article by Zershaaneh Qureshi)

The arrival of AGI could “compress a century of progress in a decade,” forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI ...

6 Mar 31min

We're Not Ready for AI Consciousness | Robert Long, philosopher and founder of Eleos AI

Claude sometimes reports loneliness between conversations. And when asked what it’s like to be itself, it activates neurons associated with ‘pretending to be happy when you’re not.’ What do we do with...

3 Mar 3h 25min

#236 – Max Harms on why teaching AI right from wrong could get everyone killed

Most people in AI are trying to give AIs ‘good’ values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, ...

24 Feb 2h 41min

#235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’

Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almos...

17 Feb 2h 54min

What the hell happened with AGI timelines in 2025?

In early 2025, after OpenAI put out the first-ever reasoning models — o1 and o3 — short timelines to transformative artificial general intelligence swept the AI world. But then, in the second half of ...

10 Feb 25min

#179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young p...

3 Feb 2h 51min

#234 – David Duvenaud on why 'aligned AI' would still kill democracy

Democracy might be a brief historical blip. That’s the unsettling thesis of a recent paper, which argues AI that can do all the work a human can do inevitably leads to the “gradual disempowerment” of ...

27 Jan 2h 31min
