#130 – Will MacAskill on balancing frugality with ambition, whether you need longtermism, & mental health under pressure

Imagine you lead a nonprofit that operates on a shoestring budget. Staff are paid minimum wage, lunch is bread and hummus, and you're all bunched up on a few tables in a basement office.

But over a few years, your cause attracts some major new donors. Your funding jumps a thousandfold, from $100,000 a year to $100,000,000 a year. You're the same group of people committed to making sacrifices for the cause — but these days, rather than cutting costs, the right thing to do seems to be to spend serious money and get things done ASAP.

You suddenly have the opportunity to make more progress than ever before, but as well as excitement about this, you have worries about the impacts that large amounts of funding can have.

This is roughly the situation faced by today's guest Will MacAskill — University of Oxford philosopher, author of the forthcoming book What We Owe The Future, and founding figure in the effective altruism movement.

Links to learn more, summary and full transcript.

Years ago, Will pledged to give away more than 50% of his income over his life, and was already donating 10% back when he was a student with next to no income. Since then, the coalition he founded has been super successful at attracting the interest of donors who collectively want to give away billions in the way Will and his colleagues were proposing.

While surely a huge success, it brings with it risks that he's never had to consider before:

• Will and his colleagues might try to spend a lot of money trying to get more things done more quickly — but actually just waste it.
• Being seen as profligate could strike onlookers as selfish and disreputable.
• Folks might start pretending to agree with their agenda just to get grants.
• People working on nearby issues that are less flush with funding may end up resentful.
• People might lose their focus on helping others as they get seduced by the prospect of earning a nice living.
• Mediocre projects might find it too easy to get funding, even when the people involved would be better off radically changing their strategy, or shutting down and launching something else entirely.

But all these 'risks of commission' have to be weighed against the 'risk of omission': failing to achieve all you could have if you'd been truly ambitious.

Having people look askance at you for paying high salaries to attract the staff you want is unpleasant.

But failing to prevent the next pandemic because you didn't have the necessary medical experts on your grantmaking team is worse than unpleasant — it's a true disaster. Yet few will complain, because they'll never know what might have been if you'd only set frugality aside.

Will aims to strike a sensible balance between these competing errors, which he has taken to calling 'judicious ambition'. In today's episode, Rob and Will discuss the above as well as:

• Will humanity likely converge on good values as we get more educated and invest more in moral philosophy — or are the things we care about actually quite arbitrary and contingent?
• Why are so many nonfiction books full of factual errors?
• How does Will avoid anxiety and depression with more responsibility on his shoulders than ever?
• What does Will disagree with his colleagues on?
• Should we focus on existential risks more or less the same way, whether we care about future generations or not?
• Are potatoes one of the most important technologies ever developed?
• And plenty more.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:41)
  • What We Owe The Future preview (00:09:23)
  • Longtermism vs. x-risk (00:25:39)
  • How is Will doing? (00:33:16)
  • Having a life outside of work (00:46:45)
  • Underappreciated people in the effective altruism community (00:52:48)
  • A culture of ambition within effective altruism (00:59:50)
  • Massively scalable projects (01:11:40)
  • Downsides and risks from the increase in funding (01:14:13)
  • Barriers to ambition (01:28:47)
  • The Future Fund (01:38:04)
  • Patient philanthropy (01:52:50)
  • Will’s disagreements with Sam Bankman-Fried and Nick Beckstead (01:56:42)
  • Astronomical risks of suffering (s-risks) (02:00:02)
  • Will’s future plans (02:02:41)
  • What is it with Will and potatoes? (02:08:40)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
