#136 – Will MacAskill on what we owe the future

  1. People who exist in the future deserve some degree of moral consideration.
  2. The future could be very big, very long, and/or very good.
  3. We can reasonably hope to influence whether people in the future exist, and how good or bad their lives are.
  4. So trying to make the world better for future generations is a key priority of our time.

This is the simple four-step argument for 'longtermism' put forward in What We Owe The Future, the latest book from today's guest — University of Oxford philosopher and cofounder of the effective altruism community, Will MacAskill.

Links to learn more, summary and full transcript.

From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end the use of fossil fuels not just for people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well.

Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile.

But Will is upfront that longtermism is also counterintuitive. To start with, he's willing to contemplate timescales far beyond what's typically discussed.

A natural objection to thinking millions of years ahead is that it's hard enough to take actions that have positive effects that persist for hundreds of years, let alone “indefinitely.” It doesn't matter how important something might be if you can't predictably change it.

This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working.

But over seven years he gradually changed his mind, and in What We Owe The Future, Will argues that in fact there are clear ways we might act now that could benefit not just a few but *all* future generations.

The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren't coming back.

But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently.

In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise.

If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don't eliminate a bad practice now, it may be with us forever. In today's in-depth conversation, we discuss the possibility of a harmful moral 'lock-in' as well as:

• How Will was eventually won over to longtermism
• The three best lines of argument against longtermism
• How to avoid moral fanaticism
• Which technologies or events are most likely to have permanent effects
• What 'longtermists' do today in practice
• How to predict the long-term effect of our actions
• Whether the future is likely to be good or bad
• Concrete ideas to make the future better
• What Will donates his money to personally
• Potatoes and megafauna
• And plenty more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:36)
  • What longtermism actually is (00:02:31)
  • The case for longtermism (00:04:30)
  • What longtermists are actually doing (00:15:54)
  • Will’s personal journey (00:22:15)
  • Strongest arguments against longtermism (00:42:28)
  • Preventing extinction vs. improving the quality of the future (00:59:29)
  • Is humanity likely to converge on doing the same thing regardless? (01:06:58)
  • Lock-in scenario vs. long reflection (01:27:11)
  • Is the future good in expectation? (01:32:29)
  • Can we actually predictably influence the future positively? (01:47:27)
  • Tiny probabilities of enormous value (01:53:40)
  • Stagnation (02:19:04)
  • Concrete suggestions (02:34:27)
  • Where Will donates (02:39:40)
  • Potatoes and megafauna (02:41:48)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
