1. People who exist in the future deserve some degree of moral consideration.
2. The future could be very big, very long, and/or very good.
3. We can reasonably hope to influence whether people in the future exist, and how good or bad their lives are.
4. So trying to make the world better for future generations is a key priority of our time.

This is the simple four-step argument for 'longtermism' put forward in *What We Owe The Future*, the latest book from today's guest — University of Oxford philosopher and cofounder of the effective altruism community, Will MacAskill.

Links to learn more, summary and full transcript.

From one point of view this idea is common sense. We work on breakthroughs to treat cancer or end the use of fossil fuels not just for people alive today, but because we hope such scientific advances will help our children, grandchildren, and great-grandchildren as well. Some who take this longtermist idea seriously work to develop broad-spectrum vaccines they hope will safeguard humanity against the sorts of extremely deadly pandemics that could permanently throw civilisation off track — the sort of project few could argue is not worthwhile.

But Will is upfront that longtermism is also counterintuitive. To start with, he's willing to contemplate timescales far beyond what's typically discussed. A natural objection to thinking millions of years ahead is that it's hard enough to take actions that have positive effects that persist for hundreds of years, let alone "indefinitely." It doesn't matter how important something might be if you can't predictably change it.

This is one reason, among others, that Will was initially sceptical of longtermism and took years to come around. He preferred to focus on ending poverty and preventable diseases in ways he could directly see were working.
But over seven years he gradually changed his mind, and in *What We Owe The Future*, Will argues that in fact there are clear ways we might act now that could benefit not just a few but *all* future generations.

The idea that preventing human extinction would have long-lasting impacts is pretty intuitive. If we entirely disappear, we aren't coming back. But the idea that we can shape human values — not just for our age, but for all ages — is a surprising one that Will has come to more recently.

In the book, he argues that what people value is far more fragile and historically contingent than it might first seem. For instance, today it feels like the abolition of slavery was an inevitable part of the arc of history. But Will lays out that the best research on the topic suggests otherwise.

If moral progress really is so contingent, and bad ideas can persist almost without end, it raises the stakes for moral debate today. If we don't eliminate a bad practice now, it may be with us forever.

In today's in-depth conversation, we discuss the possibility of a harmful moral 'lock-in', as well as:

• How Will was eventually won over to longtermism
• The three best lines of argument against longtermism
• How to avoid moral fanaticism
• Which technologies or events are most likely to have permanent effects
• What 'longtermists' do today in practice
• How to predict the long-term effect of our actions
• Whether the future is likely to be good or bad
• Concrete ideas to make the future better
• What Will donates his money to personally
• Potatoes and megafauna
• And plenty more

Chapters:

• Rob's intro (00:00:00)
• The interview begins (00:01:36)
• What longtermism actually is (00:02:31)
• The case for longtermism (00:04:30)
• What longtermists are actually doing (00:15:54)
• Will's personal journey (00:22:15)
• Strongest arguments against longtermism (00:42:28)
• Preventing extinction vs. improving the quality of the future (00:59:29)
• Is humanity likely to converge on doing the same thing regardless? (01:06:58)
• Lock-in scenario vs. long reflection (01:27:11)
• Is the future good in expectation? (01:32:29)
• Can we actually predictably influence the future positively? (01:47:27)
• Tiny probabilities of enormous value (01:53:40)
• Stagnation (02:19:04)
• Concrete suggestions (02:34:27)
• Where Will donates (02:39:40)
• Potatoes and megafauna (02:41:48)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore