2024 Highlightapalooza! (The best of The 80,000 Hours Podcast this year)

"A shameless recycling of existing content to drive additional audience engagement on the cheap… or the single best, most valuable, and most insight-dense episode we put out in the entire year, depending on how you want to look at it." — Rob Wiblin

It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode, including:

  • How to use the microphone on someone’s mobile phone to figure out what password they’re typing into their laptop
  • Why mercilessly driving the New World screwworm to extinction could be the most compassionate thing humanity has ever done
  • Why evolutionary psychology doesn’t support a cynical view of human nature but actually explains why so many of us are intensely sensitive to the harms we cause to others
  • How superforecasters and domain experts seem to disagree so much about AI risk, but when you zoom in it’s mostly a disagreement about timing
  • Why the sceptics are wrong and you will want to use robot nannies to take care of your kids — and also why despite having big worries about the development of AGI, Carl Shulman is strongly against efforts to pause AI research today
  • How much of the gender pay gap is due to direct pay discrimination vs other factors
  • How cleaner wrasse fish blow the mirror test out of the water
  • Why effective altruism may be too big a tent to work well
  • How we could best motivate pharma companies to test existing drugs to see if they help cure other diseases — something they currently have no reason to bother with

…as well as 27 other top observations and arguments from the past year of the show.

Check out the full transcript and episode links on the 80,000 Hours website.

Remember that all of these clips come from the 20-minute highlight reels we make for every episode, which are released on our sister feed, 80k After Hours. So if you’re struggling to keep up with our regularly scheduled entertainment, you can still get the best parts of our conversations there.

It has been a hell of a year, and we can only imagine next year is going to be even weirder — but Luisa and Rob will be here to keep you company as Earth hurtles through the galaxy to a fate as yet unknown.

Enjoy, and we look forward to speaking with you in 2025!

Chapters:

  • Rob's intro (00:00:00)
  • Randy Nesse on the origins of morality and the problem of simplistic selfish-gene thinking (00:02:11)
  • Hugo Mercier on the evolutionary argument against humans being gullible (00:07:17)
  • Meghan Barrett on the likelihood of insect sentience (00:11:26)
  • Sébastien Moro on the mirror test triumph of cleaner wrasses (00:14:47)
  • Sella Nevo on side-channel attacks (00:19:32)
  • Zvi Mowshowitz on AI sleeper agents (00:22:59)
  • Zach Weinersmith on why space settlement (probably) won't make us rich (00:29:11)
  • Rachel Glennerster on pull mechanisms to incentivise repurposing of generic drugs (00:35:23)
  • Emily Oster on the impact of kids on women's careers (00:40:29)
  • Carl Shulman on robot nannies (00:45:19)
  • Nathan Labenz on kids and artificial friends (00:50:12)
  • Nathan Calvin on why it's not too early for AI policies (00:54:13)
  • Rose Chan Loui on how control of OpenAI is independently incredibly valuable and requires compensation (00:58:08)
  • Nick Joseph on why he’s a big fan of the responsible scaling policy approach (01:03:11)
  • Sihao Huang on how the US and UK might coordinate with China (01:06:09)
  • Nathan Labenz on better transparency about predicted capabilities (01:10:18)
  • Ezra Karger on what explains forecasters’ disagreements about AI risks (01:15:22)
  • Carl Shulman on why he doesn't support enforced pauses on AI research (01:18:58)
  • Matt Clancy on the omnipresent frictions that might prevent explosive economic growth (01:25:24)
  • Vitalik Buterin on defensive acceleration (01:29:43)
  • Annie Jacobsen on the war games that suggest escalation is inevitable (01:34:59)
  • Nate Silver on whether effective altruism is too big to succeed (01:38:42)
  • Kevin Esvelt on why killing every screwworm would be the best thing humanity ever did (01:42:27)
  • Lewis Bollard on how factory farming is philosophically indefensible (01:46:28)
  • Bob Fischer on how to think about moral weights if you're not a hedonist (01:49:27)
  • Elizabeth Cox on the empirical evidence of the impact of storytelling (01:57:43)
  • Anil Seth on how our brain interprets reality (02:01:03)
  • Eric Schwitzgebel on whether consciousness can be nested (02:04:53)
  • Jonathan Birch on our overconfidence around disorders of consciousness (02:10:23)
  • Peter Godfrey-Smith on uploads of ourselves (02:14:34)
  • Laura Deming on surprising things that make mice live longer (02:21:17)
  • Venki Ramakrishnan on freezing cells, organs, and bodies (02:24:46)
  • Ken Goldberg on why low fault tolerance makes some skills extra hard to automate in robots (02:29:12)
  • Sarah Eustis-Guthrie on the ups and downs of founding an organisation (02:34:04)
  • Dean Spears on the cost effectiveness of kangaroo mother care (02:38:26)
  • Cameron Meyer Shorb on vaccines for wild animals (02:42:53)
  • Spencer Greenberg on personal principles (02:46:08)

Producing and editing: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore
