#27 - Dr Tom Inglesby on careers and policies that reduce global catastrophic biological risks
How about this for a movie idea: a main character has to prevent a new contagious strain of Ebola from spreading around the world. She’s the best of the best. So good, in fact, that her work on early detection systems contains the strain at its source. Ten minutes into the movie, we see the results of her work – nothing happens. Life goes on as usual. She continues to be amazingly competent, and nothing continues to go wrong. Fade to black. Roll credits.

If your job is to prevent catastrophes, success is when nobody has to pay attention to you. But without regular disasters to remind authorities why they hired you in the first place, they can’t tell if you’re actually achieving anything. And when budgets come under pressure you may find that success condemns you to the chopping block.

Dr Tom Inglesby, Director of the Center for Health Security at the Johns Hopkins Bloomberg School of Public Health, worries this may be about to happen to the scientists working on the ‘Global Health Security Agenda’.

In 2014 Ebola showed the world why we have to detect and contain new diseases before they spread, and that when it comes to contagious diseases the nations of the world sink or swim together. Fifty countries decided to work together to make sure all their health systems were up to the challenge. Back then, Congress provided 5 years’ funding to help some of the world’s poorest countries build the basic health security infrastructure necessary to control pathogens before they could reach the US.

Links to learn more, job opportunities, and full transcript.

But with Ebola fading from public memory and no recent tragedies to terrify us, Congress may not renew that funding and the project could fall apart. (Learn more about how you can help: http://www.nti.org/analysis/articles/protect-us-investments-global-health-security/ )

But there are positive signs as well - the center Inglesby leads recently received a $16 million grant from the Open Philanthropy Project to further their work preventing global catastrophes. It also runs the [Emerging Leaders in Biosecurity Fellowship](http://www.centerforhealthsecurity.org/our-work/emergingbioleaders/) to train the next generation of biosecurity experts for the US government. And Inglesby regularly testifies to Congress on the threats we all face and how to address them.

In this in-depth interview we try to provide concrete guidance for listeners who want to pursue a career in health security. Some of the topics we cover include:

* Should more people in medicine work on security?
* What are the top jobs for people who want to improve health security, and how can they work towards getting them?
* What people can do to protect funding for the Global Health Security Agenda.
* Should we be more concerned about natural or human-caused pandemics? Which is more neglected?
* Should we be allocating more attention and resources to global catastrophic risk scenarios?
* Why are senior figures reluctant to prioritize one project or area at the expense of another?
* What does Tom think about the idea that in the medium term, human-caused pandemics will pose a far greater risk than natural pandemics, and so we should focus on specific counter-measures?
* Are the main risks and solutions understood, and it’s just a matter of implementation? Or is the principal task to identify and understand them?
* How is the current US government performing in these areas?
* Which agencies are empowered to think about low probability high magnitude events?
* And more...

Get this episode by subscribing: search for '80,000 Hours' in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.