#118 – Jaime Yassif on safeguarding bioscience to prevent catastrophic lab accidents and bioweapons development

02:15:40 · 2021-12-13

About the episode

If a rich country were really committed to pursuing an active biological weapons program, there’s not much we could do to stop them. With enough money and persistence, they’d be able to buy equipment and hire people to carry out the work. But what we can do is intervene before they make that decision.

Today’s guest, Jaime Yassif — Senior Fellow for Global Biological Policy and Programs at the Nuclear Threat Initiative (NTI) — thinks that stopping states from wanting to pursue dangerous bioscience in the first place is one of our key lines of defence against global catastrophic biological risks (GCBRs).

Links to learn more, summary and full transcript.

It helps to understand why countries might consider developing biological weapons. Jaime says there are three main possible reasons:

1. Fear of what their adversary might be up to
2. Belief that they could gain a tactical or strategic advantage, with limited risk of getting caught
3. Belief that even if they are caught, they are unlikely to be held accountable

In response, Jaime has developed a three-part recipe to create systems robust enough to meaningfully change the cost-benefit calculation.

The first is to substantially increase transparency. If countries aren’t confident about what their neighbours or adversaries are actually up to, misperceptions could lead to arms races that neither side desires. But if you know with confidence that no one around you is pursuing a biological weapons programme, you won’t feel motivated to pursue one yourself.

The second is to strengthen the capabilities of the United Nations’ system to investigate the origins of high-consequence biological events — whether naturally emerging, accidental, or deliberate — and to make sure that the responsibility to figure out the source of bio-events of unknown origin doesn’t fall between the cracks of different existing mechanisms. The ability to quickly discover the source of emerging pandemics is important both for responding to them in real time and for deterring future bioweapons development or use.

And the third is meaningful accountability. States need to know that the consequences of getting caught in a deliberate attack are severe enough to make going down this road a net negative in expectation.

But having a good plan and actually implementing it are two very different things, and today’s episode focuses heavily on the practical steps we should be taking to influence both governments and international organisations, like the WHO and UN — and to help them maximise their effectiveness in guarding against catastrophic biological risks.

Jaime and Rob explore NTI’s current proposed plan for reducing global catastrophic biological risks, and discuss:

• The importance of reducing emerging biological risks associated with rapid technology advances
• How we can make it a lot harder for anyone to deliberately or accidentally produce or release a really dangerous pathogen
• The importance of having multiple theories of risk reduction
• Why Jaime’s more focused on prevention than response
• The history of the Biological Weapons Convention
• Jaime’s disagreements with the effective altruism community
• And much more

And if you might be interested in dedicating your career to reducing GCBRs, stick around to the end of the episode to get Jaime’s advice — including on how people outside of the US can best contribute, and how to compare career opportunities in academia vs think tanks, and nonprofits vs national governments vs international orgs.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

Latest episodes

80,000 Hours Podcast

#202 – Venki Ramakrishnan on the cutting edge of anti-ageing science
2024-09-19 · 2h 20min

#201 – Ken Goldberg on why your robot butler isn’t here yet
2024-09-13 · 2h 1min

#200 – Ezra Karger on what superforecasters and experts think about existential risks
2024-09-04 · 2h 49min

#199 – Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy
2024-08-29 · 1h 12min

#198 – Meghan Barrett on challenging our assumptions about insects
2024-08-26 · 3h 48min

#197 – Nick Joseph on whether Anthropic's AI safety policy is up to the task
2024-08-22 · 2h 29min

#196 – Jonathan Birch on the edge cases of sentience and why they matter
2024-08-15 · 2h 1min

#195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them
2024-08-01 · 2h 8min

#194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government
2024-07-26 · 3h 4min

#193 – Sihao Huang on the risk that US–China AI competition leads to war
2024-07-18 · 2h 23min