#164 – Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives

"Imagine a fast-spreading respiratory HIV. It sweeps around the world. Almost nobody has symptoms. Nobody notices until years later, when the first people who are infected begin to succumb. They might die, something else debilitating might happen to them, but by that point, just about everyone on the planet would have been infected already.

And then it would be a race. Can we come up with some way of defusing the thing? Can we come up with the equivalent of HIV antiretrovirals before it's too late?" — Kevin Esvelt

In today’s episode, host Luisa Rodriguez interviews Kevin Esvelt — a biologist at the MIT Media Lab and the inventor of CRISPR-based gene drive — about the threat posed by engineered bioweapons.

Links to learn more, summary and full transcript.

They cover:

  • Why it makes sense to focus on deliberately released pandemics
  • Case studies of people who actually wanted to kill billions of humans
  • How many people have the technical ability to produce dangerous viruses
  • The different threats of stealth and wildfire pandemics that could crash civilisation
  • The potential for AI models to increase access to dangerous pathogens
  • Why scientists try to identify new pandemic-capable pathogens, and the case against that research
  • Technological solutions, including UV lights and advanced PPE
  • Using CRISPR-based gene drive to fight diseases and reduce animal suffering
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Episodes (318)

#226 – Holden Karnofsky on unexploited opportunities to make AI safer — and all his AGI takes

For years, working on AI safety usually meant theorising about the ‘alignment problem’ or trying to convince other people to give a damn. If you could find any way to help, the work was frustrating an...

30 Oct 2025 · 4h 30min

#225 – Daniel Kokotajlo on what a hyperspeed robot economy might look like

When Daniel Kokotajlo talks to security experts at major AI labs, they tell him something chilling: “Of course we’re probably penetrated by the CCP already, and if they really wanted something, they c...

27 Oct 2025 · 2h 12min

#224 – There's a cheap and low-tech way to save humanity from any engineered disease | Andrew Snyder-Beattie

Conventional wisdom is that safeguarding humanity from the worst biological risks — microbes optimised to kill as many as possible — is difficult bordering on impossible, making bioweapons humanity’s ...

2 Oct 2025 · 2h 31min

Inside the Biden admin’s AI policy approach | Jake Sullivan, Biden’s NSA | via The Cognitive Revolution

Jake Sullivan was the US National Security Advisor from 2021 to 2025. He joined our friends on The Cognitive Revolution podcast in August to discuss AI as a critical national security issue. We thought i...

26 Sep 2025 · 1h 5min

#223 – Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)

At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret?...

15 Sep 2025 · 1h 46min

#222 – Can we tell if an AI is loyal by reading its mind? DeepMind's Neel Nanda (part 1)

We don’t know how AIs think or why they do what they do. Or at least, we don’t know much. That fact is only becoming more troubling as AIs grow more capable and appear on track to wield enormous cultu...

8 Sep 2025 · 3h 1min

#221 – Kyle Fish on the most bizarre findings from 5 AI welfare experiments

What happens when you lock two AI systems in a room together and tell them they can discuss anything they want? According to experiments run by Kyle Fish — Anthropic’s first AI welfare researcher — som...

28 Aug 2025 · 2h 28min

How not to lose your job to AI (article by Benjamin Todd)

About half of people are worried they’ll lose their job to AI. They’re right to be concerned: AI can now complete real-world coding tasks on GitHub, generate photorealistic video, drive a taxi more sa...

31 Jul 2025 · 51min
