#163 – Toby Ord on the perils of maximising the good that you do

Effective altruism is associated with the slogan "do the most good." On one level, this has to be unobjectionable: What could be bad about helping people more and more?

But in today's interview, Toby Ord — moral philosopher at the University of Oxford and one of the founding figures of effective altruism — lays out three reasons to be cautious about the idea of maximising the good that you do. He suggests that rather than “doing the most good that we can,” perhaps we should be happy with a more modest and manageable goal: “doing most of the good that we can.”

Links to learn more, summary and full transcript.

Toby was inspired to revisit these ideas by the possibility that Sam Bankman-Fried, who stands accused of committing severe fraud as CEO of the cryptocurrency exchange FTX, was motivated to break the law by a desire to give away as much money as possible to worthy causes.

Toby's top reason not to fully maximise is the following: if the goal you're aiming at is subtly wrong or incomplete, then going all the way towards maximising it will usually cause you to start doing some very harmful things.

This result can be shown mathematically, but can also be made intuitive, and may explain why we feel instinctively wary of going “all-in” on any idea, or goal, or way of living — even something as benign as helping other people as much as possible.
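The mathematical point can be sketched with a toy model (the option names and numbers below are illustrative assumptions, not from the episode): when the score you maximise is only a noisy proxy for what you actually value, the option with the highest proxy score tends to be the one where the error is largest, not where the true value is highest.

```python
# Illustrative toy model of maximising a subtly wrong objective.
# Each option has a true value and a measurement error; the agent can
# only see and optimise the proxy score (true value + error).
options = {
    "steady training": {"true": 9.0, "error": 0.0},
    "extreme dieting": {"true": 5.0, "error": 3.0},
    "doping":          {"true": 2.0, "error": 8.5},  # proxy wildly overstates this
}

def proxy(name):
    o = options[name]
    return o["true"] + o["error"]

# Full maximisation chases the proxy — and lands on the worst true value.
maximiser_pick = max(options, key=proxy)

# A more modest satisficer accepts any option whose proxy clears a
# threshold, then breaks ties toward the lower-risk (lower-error) choice.
good_enough = [name for name in options if proxy(name) >= 8.0]
satisficer_pick = min(good_enough, key=lambda name: options[name]["error"])
```

Under these made-up numbers, the proxy-maximiser picks "doping" (proxy 10.5, true value 2.0), while the satisficer settles for "steady training" (true value 9.0) — the gap between the two is exactly the cost of going all-in on a subtly wrong goal.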

Toby gives the example of someone pursuing a career as a professional swimmer. Initially, as our swimmer takes their training and performance more seriously, they adjust their diet, hire a better trainer, and pay more attention to their technique. While swimming is the main focus of their life, they feel fit and healthy and also enjoy other aspects of their life as well — family, friends, and personal projects.

But if they decide to increase their commitment further and really go all-in on their swimming career, holding nothing back, then this picture can radically change. Their effort was already substantial, so how can they shave those final few seconds off their racing time? The only options left are ones so costly that they were loath to consider them before.

To eke out those final gains — and go from 80% effort to 100% — our swimmer must sacrifice other hobbies, deprioritise their relationships, neglect their career, ignore food preferences, accept a higher risk of injury, and maybe even consider using steroids.

Now, if maximising one's speed at swimming really were the only goal they ought to be pursuing, there'd be no problem with this. But if it's the wrong goal, or only one of many things they should be aiming for, then the outcome is disastrous. In going from 80% to 100% effort, their swimming speed increased only a tiny amount, while everything else they were accomplishing dropped off a cliff.
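The swimmer's trade-off can be sketched numerically (the curve shapes and weights are illustrative assumptions, not from the episode): swimming speed has diminishing returns in effort, while the cost to the rest of life rises steeply near full commitment, so total wellbeing peaks well short of 100% effort.

```python
def swim_speed(effort):
    # Diminishing returns: the last few percent of effort buy very little speed.
    return 3.5 * effort ** 0.5

def rest_of_life(effort):
    # Costs stay small at moderate effort, then fall off a cliff near 100%.
    return 1.0 - effort ** 4

def total_wellbeing(effort):
    return swim_speed(effort) + rest_of_life(effort)

# Grid-search effort levels from 0% to 100%.
efforts = [i / 100 for i in range(101)]
best = max(efforts, key=total_wellbeing)
```

With these assumed curves, total wellbeing peaks at roughly 80% effort: pushing from there to 100% buys a small gain in swimming speed at a much larger cost to everything else.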

The bottom line is simple: a dash of moderation makes you much more robust to uncertainty and error.

As Toby notes, this is similar to the observation that a sufficiently capable superintelligent AI, given any one goal, would ruin the world if it maximised it to the exclusion of everything else. And it follows a similar pattern to performance falling off a cliff when a statistical model is 'overfit' to its data.

In the full interview, Toby also explains the “moral trade” argument against pursuing narrow goals at the expense of everything else, and how consequentialism changes if you judge not just outcomes or acts, but everything according to its impacts on the world.

Toby and Rob also discuss:

  • The rise and fall of FTX and some of its impacts
  • What Toby hoped effective altruism would and wouldn't become when he helped to get it off the ground
  • What utilitarianism has going for it, and what's wrong with it in Toby's view
  • How to mathematically model the importance of personal integrity
  • Which AI labs Toby thinks have been acting more responsibly than others
  • How having a young child affects Toby’s feelings about AI risk
  • Whether infinities present a fundamental problem for any theory of ethics that aspires to be fully impartial
  • How Toby ended up being the source of the highest quality images of the Earth from space

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour
Transcriptions: Katy Moore
