#118 – Jaime Yassif on safeguarding bioscience to prevent catastrophic lab accidents and bioweapons development

If a rich country were really committed to pursuing an active biological weapons program, there’s not much we could do to stop them. With enough money and persistence, they’d be able to buy equipment and hire people to carry out the work.

But what we can do is intervene before they make that decision.

Today’s guest, Jaime Yassif — Senior Fellow for global biological policy and programs at the Nuclear Threat Initiative (NTI) — thinks that stopping states from wanting to pursue dangerous bioscience in the first place is one of our key lines of defence against global catastrophic biological risks (GCBRs).

Links to learn more, summary and full transcript.

It helps to understand why countries might consider developing biological weapons. Jaime says there are three main possible reasons:

1. Fear of what their adversary might be up to
2. Belief that they could gain a tactical or strategic advantage, with limited risk of getting caught
3. Belief that even if they are caught, they are unlikely to be held accountable

In response, Jaime has developed a three-part recipe to create systems robust enough to meaningfully change the cost-benefit calculation.

The first is to substantially increase transparency. If countries aren’t confident about what their neighbours or adversaries are actually up to, misperceptions could lead to arms races that neither side desires. But if you know with confidence that no one around you is pursuing a biological weapons program, you won’t feel motivated to pursue one yourself.

The second is to strengthen the capabilities of the United Nations’ system to investigate the origins of high-consequence biological events — whether naturally emerging, accidental or deliberate — and to make sure that the responsibility to figure out the source of bio-events of unknown origin doesn’t fall between the cracks of different existing mechanisms. The ability to quickly discover the source of emerging pandemics is important both for responding to them in real time and for deterring future bioweapons development or use.

And the third is meaningful accountability. States need to know that the consequences for getting caught in a deliberate attack are severe enough to make it a net negative in expectation to go down this road in the first place.

But having a good plan and actually implementing it are two very different things, and today’s episode focuses heavily on the practical steps we should be taking to influence both governments and international organisations, like the WHO and UN — and to help them maximise their effectiveness in guarding against catastrophic biological risks.

Jaime and Rob explore NTI’s current proposed plan for reducing global catastrophic biological risks, and discuss:

• The importance of reducing emerging biological risks associated with rapid technology advances
• How we can make it a lot harder for anyone to deliberately or accidentally produce or release a really dangerous pathogen
• The importance of having multiple theories of risk reduction
• Why Jaime’s more focused on prevention than response
• The history of the Biological Weapons Convention
• Jaime’s disagreements with the effective altruism community
• And much more

And if you might be interested in dedicating your career to reducing GCBRs, stick around to the end of the episode to get Jaime’s advice — including how people outside of the US can best contribute, and how to compare career opportunities across academia, think tanks, nonprofits, national governments, and international orgs.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:32)
  • Categories of global catastrophic biological risks (00:05:24)
  • Disagreements with the effective altruism community (00:07:39)
  • Stopping the first person from getting infected (00:11:51)
  • Shaping intent (00:15:51)
  • Verification and the Biological Weapons Convention (00:25:31)
  • Attribution (00:37:15)
  • How to actually implement a new idea (00:50:54)
  • COVID-19: natural pandemic or lab leak? (00:53:31)
  • How much can we rely on traditional law enforcement to detect terrorists? (00:58:20)
  • Constraining capabilities (01:01:24)
  • The funding landscape (01:06:56)
  • Oversight committees (01:14:20)
  • Just winning the argument (01:20:17)
  • NTI’s vision (01:27:39)
  • Suppliers of goods and services (01:33:24)
  • Publishers (01:39:41)
  • Biggest weaknesses of NTI platform (01:42:29)
  • Careers (01:48:31)
  • How people outside of the US can best contribute (01:54:10)
  • Academia vs think tanks vs nonprofits vs government (01:59:21)
  • International cooperation (02:05:40)
  • Best things about living in the US, UK, China, and Israel (02:11:16)


Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore
