#163 – Toby Ord on the perils of maximising the good that you do

Effective altruism is associated with the slogan "do the most good." On one level, this has to be unobjectionable: What could be bad about helping people more and more?

But in today's interview, Toby Ord — moral philosopher at the University of Oxford and one of the founding figures of effective altruism — lays out three reasons to be cautious about the idea of maximising the good that you do. He suggests that rather than “doing the most good that we can,” perhaps we should be happy with a more modest and manageable goal: “doing most of the good that we can.”

Links to learn more, summary and full transcript.

Toby was inspired to revisit these ideas by the possibility that Sam Bankman-Fried, who stands accused of committing severe fraud as CEO of the cryptocurrency exchange FTX, was motivated to break the law by a desire to give away as much money as possible to worthy causes.

Toby's top reason not to fully maximise is the following: if the goal you're aiming at is subtly wrong or incomplete, then going all the way towards maximising it will usually cause you to start doing some very harmful things.

This result can be shown mathematically, but can also be made intuitive, and may explain why we feel instinctively wary of going “all-in” on any idea, or goal, or way of living — even something as benign as helping other people as much as possible.

Toby gives the example of someone pursuing a career as a professional swimmer. Initially, as our swimmer takes their training and performance more seriously, they adjust their diet, hire a better trainer, and pay more attention to their technique. While swimming is the main focus of their life, they feel fit and healthy and also enjoy other aspects of their life as well — family, friends, and personal projects.

But if they decide to increase their commitment further and really go all-in on their swimming career, holding nothing back, then this picture can radically change. Their effort was already substantial, so how can they shave those final few seconds off their racing time? The only remaining options are the ones that were so costly they were loath to consider them before.

To eke out those final gains — and go from 80% effort to 100% — our swimmer must sacrifice other hobbies, deprioritise their relationships, neglect their career, ignore food preferences, accept a higher risk of injury, and maybe even consider using steroids.

Now, if maximising one's speed at swimming really were the only goal they ought to be pursuing, there'd be no problem with this. But if it's the wrong goal, or only one of many things they should be aiming for, then the outcome is disastrous. In going from 80% to 100% effort, their swimming speed increased by only a tiny amount, while everything else they were accomplishing dropped off a cliff.
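The swimmer example can be made concrete with a toy model (not from the episode — the functional forms are assumptions chosen for illustration). Suppose effort spent on swimming has diminishing returns, and so does effort spent on everything else. Then going all-in on swimming buys almost no extra speed while giving up a lot of everything else:

```python
import math

def swimming_speed(effort):
    # Diminishing returns: each extra unit of effort helps less (assumed sqrt curve).
    return math.sqrt(effort)

def rest_of_life(effort):
    # Whatever effort isn't spent on swimming goes to family, friends, health.
    return math.sqrt(1 - effort)

def true_value(effort):
    # Hypothetical "true" goal: speed matters, but so does everything else.
    return swimming_speed(effort) + rest_of_life(effort)

for effort in (0.5, 0.8, 1.0):
    print(f"effort={effort:.1f}  speed={swimming_speed(effort):.3f}  "
          f"true value={true_value(effort):.3f}")
```

In this sketch, pushing from 80% to 100% effort raises speed by only about 0.11, while the true value of the swimmer's situation falls by about 0.34 — the narrow objective barely improves as the broader one collapses.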

The bottom line is simple: a dash of moderation makes you much more robust to uncertainty and error.

As Toby notes, this is similar to the observation that a sufficiently capable superintelligent AI, given any one goal, would ruin the world if it maximised it to the exclusion of everything else. And it follows a similar pattern to performance falling off a cliff when a statistical model is 'overfit' to its data.

In the full interview, Toby also explains the “moral trade” argument against pursuing narrow goals at the expense of everything else, and how consequentialism changes if you judge not just outcomes or acts, but everything according to its impacts on the world.

Toby and Rob also discuss:

  • The rise and fall of FTX and some of its impacts
  • What Toby hoped effective altruism would and wouldn't become when he helped to get it off the ground
  • What utilitarianism has going for it, and what's wrong with it in Toby's view
  • How to mathematically model the importance of personal integrity
  • Which AI labs Toby thinks have been acting more responsibly than others
  • How having a young child affects Toby’s feelings about AI risk
  • Whether infinities present a fundamental problem for any theory of ethics that aspires to be fully impartial
  • How Toby ended up being the source of the highest quality images of the Earth from space

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour
Transcriptions: Katy Moore

Episodes (326)

#99 – Leah Garcés on turning adversaries into allies to change the chicken industry

For a chance to prevent enormous amounts of suffering, would you be brave enough to drive five hours to a remote location to meet a man who seems likely to be your enemy, knowing that it might be an a...

13 May 2021, 2h 26min

#98 – Christian Tarsney on future bias and a possible solution to moral fanaticism

Imagine that you’re in the hospital for surgery. This kind of procedure is always safe, and always successful — but it can take anywhere from one to ten hours. You can’t be knocked out for the operati...

5 May 2021, 2h 38min

#97 – Mike Berkowitz on keeping the US a liberal democratic country

Donald Trump’s attempt to overturn the results of the 2020 election split the Republican party. There were those who went along with it — 147 members of Congress raised objections to the official cert...

20 Apr 2021, 2h 36min

The ten episodes of this show you should listen to first

Today we're launching a new podcast feed that might be useful to you and people you know. It's called 'Effective Altruism: An Introduction', and it's a carefully chosen selection of ten episodes of ...

15 Apr 2021, 3min

#96 – Nina Schick on disinformation and the rise of synthetic media

You might have heard fears like this in the last few years: What if Donald Trump was woken up in the middle of the night and shown a fake video — indistinguishable from a real one — in which Kim Jong ...

6 Apr 2021, 2h

#95 – Kelly Wanser on whether to deliberately intervene in the climate

How long do you think it’ll be before we’re able to bend the weather to our will? A massive rainmaking program in China, efforts to seed new oases in the Arabian peninsula, or chemically induce snow f...

26 Mar 2021, 1h 24min

#94 – Ezra Klein on aligning journalism, politics, and what matters most

How many words in U.S. newspapers have been spilled on tax policy in the past five years? And how many words on CRISPR? Or meat alternatives? Or how AI may soon automate the majority of jobs? When p...

20 Mar 2021, 1h 45min

#93 – Andy Weber on rendering bioweapons obsolete & ending the new nuclear arms race

COVID-19 has provided a vivid reminder of the power of biological threats. But the threat doesn't come from natural sources alone. Weaponized contagious diseases — which were abandoned by the United S...

12 Mar 2021, 1h 54min
