#144 – Athena Aktipis on why cancer is actually one of our universe's most fundamental phenomena

What’s the opposite of cancer?

If you answered “cure,” “antidote,” or “antivenom” — you’ve obviously been reading the antonym section at www.merriam-webster.com/thesaurus/cancer.

But today’s guest Athena Aktipis says that the opposite of cancer is us: a functional multicellular body whose cells cooperate effectively to keep that body working.

If, like us, you found her answer far more satisfying than the dictionary, maybe you could consider closing your dozens of merriam-webster.com tabs and listening to this podcast instead.

Links to learn more, summary and full transcript.

As Athena explains in her book The Cheating Cell, what we see with cancer is a breakdown in each of the foundations of cooperation that allowed multicellularity to arise:

  • Cells will proliferate when they shouldn't.
  • Cells won't die when they should.
  • Cells won't engage in the kind of division of labour that they should.
  • Cells won’t do the jobs that they're supposed to do.
  • Cells will monopolise resources.
  • And cells will trash the environment.

When we think about animals in the wild, or even bacteria living inside our bodies, we understand that they're facing evolutionary pressures to figure out how they can replicate more; how they can get more resources; and how they can avoid predators — like lions, or antibiotics.

We don’t normally think of individual cells as acting as if they have their own interests like this. But cancer cells are actually facing similar kinds of evolutionary pressures within our bodies, with one major difference: they replicate much, much faster.

Incredibly, the amount of evolution by natural selection that can occur just over the course of cancer progression easily exceeds all of the evolutionary change we have undergone as humans since *Homo sapiens* came about.

Here’s a quote from Athena:

“So you have to shift your thinking to be like: the body is a world with all these different ecosystems in it, and the cells are existing on a time scale where, if we're going to map it onto anything like what we experience, a day is at least 10 years for them, right? So it's a very, very different way of thinking.”

You can find compelling examples of cooperation and conflict all over the universe, so Rob and Athena don’t stop with cancer. They also discuss:

  • Cheating within cells themselves
  • Cooperation in human societies as they exist today — and perhaps in the future, between civilisations spread across different planets or stars
  • Whether it’s too out-there to think of humans as engaging in cancerous behaviour
  • Why elephants get deadly cancers less often than humans, despite having way more cells
  • When a cell should commit suicide
  • The strategy of deliberately not treating cancer aggressively
  • Superhuman cooperation

And at the end of the episode, they cover Athena’s new book Everything is Fine! How to Thrive in the Apocalypse, including:

  • Staying happy while thinking about the apocalypse
  • Practical steps to prepare for the apocalypse
  • And whether a zombie apocalypse is already happening among Tasmanian devils

And if you’d rather see Rob and Athena’s facial expressions as they laugh and laugh while discussing cancer and the apocalypse — you can watch the video of the full interview.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Milo McGuire
Transcriptions: Katy Moore
