#86 – Hilary Greaves on Pascal's mugging, strong longtermism, and whether existing can be good for us

Had World War 1 never happened, you might never have existed.

It’s very unlikely that the exact chain of events that led to your conception would have happened otherwise — so perhaps you wouldn't have been born.

Would that mean that it's better for you that World War 1 happened (regardless of whether it was better for the world overall)?

On the one hand, if you're living a pretty good life, you might think the answer is yes – you get to live rather than not.

On the other hand, it sounds strange to say that it's better for you to be alive, because if you'd never existed, there'd be no you to be worse off. But if you wouldn't be worse off if you hadn't existed, can you be better off because you do?

In this episode, philosophy professor Hilary Greaves – Director of Oxford University’s Global Priorities Institute – helps untangle this puzzle for us and walks Rob and me through the space of possible answers. She argues that philosophers have been too quick to accept what she calls existence non-comparativism – i.e. the view that it can't be better for someone to exist than not to exist.

Links to learn more, summary and full transcript.

Where we come down on this issue matters. If people are not made better off by existing and having good lives, you might conclude that bringing more people into existence isn't better for them, and thus, perhaps, that it's not better at all.

This would imply that bringing about a world in which more people live happy lives might not actually be a good thing (if the people wouldn't otherwise have existed) — which would affect how we try to make the world a better place.

Those wanting to have children in order to give them the pleasure of a good life would in some sense be mistaken. And if humanity stopped bothering to have kids and just gradually died out, we would have no particular reason to be concerned.

Furthermore, it might mean we should deprioritise issues that primarily affect future generations, like climate change or the risk of humanity accidentally wiping itself out.

This is our second episode with Professor Greaves. The first one was a big hit, so we thought we'd come back and dive into even more complex ethical issues.

We discuss:

• The case for different types of ‘strong longtermism’ — the idea that we ought morally to try to make the very long-run future go as well as possible
• What it means for us to be 'clueless' about the consequences of our actions
• Moral uncertainty -- what we should do when we don't know which moral theory is correct
• Whether we should take a bet on a really small probability of a really great outcome (see the toy expected-value sketch after this list)
• The field of global priorities research at the Global Priorities Institute and beyond
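
As flagged above, here is a toy expected-value sketch of that small-probability bet. The numbers are invented for illustration (they're not from the episode), but they show how naive expected-value maximisation can recommend handing your wallet to a 'mugger' who promises an astronomically large reward: a tiny probability multiplied by a large enough payoff still yields an enormous expected value.

```python
# Toy illustration of Pascal's mugging, with invented numbers.
# "Naive" expected value of a simple gamble: probability times payoff.

def expected_value(probability: float, payoff: float) -> float:
    return probability * payoff

# Option 1: keep your $10 wallet for certain.
keep_wallet = expected_value(1.0, 10)

# Option 2: the "mugger" offers a one-in-a-trillion chance of 10**20 units of value.
take_the_bet = expected_value(1e-12, 1e20)

print(keep_wallet)   # 10.0
print(take_the_bet)  # roughly 1e8, so naive expected value favours handing over the wallet
```

The question the episode explores is whether, and why, it can be reasonable to refuse bets like option 2 even though the naive expected value favours them.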

Chapters:

  • The interview begins (00:02:53)
  • The case for strong longtermism (00:05:49)
  • Compatible moral views (00:20:03)
  • Defining cluelessness (00:39:26)
  • Why cluelessness isn’t an objection to longtermism (00:51:05)
  • Theories of what to do under moral uncertainty (01:07:42)
  • Pascal’s mugging (01:16:37)
  • Comparing existence and non-existence (01:30:58)
  • Philosophers who reject existence comparativism (01:48:56)
  • Lives framework (02:01:52)
  • Global priorities research (02:09:25)


Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Episodes (319)

#227 – Helen Toner on the geopolitics of AGI in China and the Middle East

With the US racing to develop AGI and superintelligence ahead of China, you might expect the two countries to be negotiating how they’ll deploy AI, including in the military, without coming to blows. ...

5 Nov 2025 · 2h 20min

#226 – Holden Karnofsky on unexploited opportunities to make AI safer — and all his AGI takes

For years, working on AI safety usually meant theorising about the ‘alignment problem’ or trying to convince other people to give a damn. If you could find any way to help, the work was frustrating an...

30 Oct 2025 · 4h 30min

#225 – Daniel Kokotajlo on what a hyperspeed robot economy might look like

When Daniel Kokotajlo talks to security experts at major AI labs, they tell him something chilling: “Of course we’re probably penetrated by the CCP already, and if they really wanted something, they c...

27 Oct 2025 · 2h 12min

#224 – There's a cheap and low-tech way to save humanity from any engineered disease | Andrew Snyder-Beattie

Conventional wisdom is that safeguarding humanity from the worst biological risks — microbes optimised to kill as many as possible — is difficult bordering on impossible, making bioweapons humanity’s ...

2 Oct 2025 · 2h 31min

Inside the Biden admin’s AI policy approach | Jake Sullivan, Biden’s NSA | via The Cognitive Revolution

Jake Sullivan was the US National Security Advisor from 2021-2025. He joined our friends on The Cognitive Revolution podcast in August to discuss AI as a critical national security issue. We thought i...

26 Sep 2025 · 1h 5min

#223 – Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)

At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret?...

15 Sep 2025 · 1h 46min

#222 – Can we tell if an AI is loyal by reading its mind? DeepMind's Neel Nanda (part 1)

We don’t know how AIs think or why they do what they do. Or at least, we don’t know much. That fact is only becoming more troubling as AIs grow more capable and appear on track to wield enormous cultu...

8 Sep 2025 · 3h 1min

#221 – Kyle Fish on the most bizarre findings from 5 AI welfare experiments

What happens when you lock two AI systems in a room together and tell them they can discuss anything they want? According to experiments run by Kyle Fish — Anthropic’s first AI welfare researcher — som...

28 Aug 2025 · 2h 28min
