#74 – Dr Greg Lewis on COVID-19 & catastrophic biological risks

Our lives currently revolve around the global emergency of COVID-19; you’re probably reading this while confined to your house, as the death toll from the worst pandemic since 1918 continues to rise.

The question of how to tackle COVID-19 has been foremost in the minds of many, including here at 80,000 Hours.

Today's guest, Dr Gregory Lewis, acting head of the Biosecurity Research Group at Oxford University's Future of Humanity Institute, puts the crisis in context, explaining how COVID-19 compares to other diseases, pandemics of the past, and possible worse crises in the future.

COVID-19 is a vivid reminder that we are unprepared to contain or respond to new pathogens.

How would we cope with a virus that was even more contagious and even more deadly? Greg's work focuses on these risks -- of outbreaks that threaten our entire future through an unrecoverable collapse of civilisation, or even the extinction of humanity.

Links to learn more, summary and full transcript.

If such a catastrophe were to occur, Greg believes it’s more likely to be caused by accidental or deliberate misuse of biotechnology than by a pathogen developed by nature.

There are a few direct causes for concern: humans now have the ability to produce some of the most dangerous diseases in history in the lab; technological progress may enable the creation of pathogens which are nastier than anything we see in nature; and most biotechnology has yet to even be conceived, so we can’t assume all the dangers will be familiar.

This is grim stuff, but it needn’t be paralysing. In the years following COVID-19, humanity may be inspired to better prepare for the existential risks of the next century: improving our science, updating our policy options, and enhancing our social cohesion.

COVID-19 is a tragedy of stunning proportions, and its immediate threat is undoubtedly worthy of significant resources.

But we will get through it; if a future biological catastrophe poses an existential risk, we may not get a second chance. It is therefore vital to learn every lesson we can from this pandemic, and provide our descendants with the security we wish for ourselves.

Today’s episode is the hosting debut of our Strategy Advisor, Howie Lempel.

80,000 Hours has focused on COVID-19 for the last few weeks and published over ten pieces about it, and a substantial benefit of this interview was to help inform our own views. As such, at times this episode may feel like eavesdropping on a private conversation, and it is likely to be of most interest to people primarily focused on making the long-term future go as well as possible.

In this episode, Howie and Greg cover:

• Reflections on the first few months of the pandemic
• Common confusions around COVID-19
• How COVID-19 compares to other diseases
• What types of interventions have been available to policymakers
• Arguments for and against working on global catastrophic biological risks (GCBRs)
• How to know if you’re a good fit to work on GCBRs
• The response of the effective altruism community, as well as 80,000 Hours in particular, to COVID-19
• And much more.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:03:15)
  • What is COVID-19? (00:16:05)
  • If you end up infected, how severe is it likely to be? (00:19:21)
  • How does COVID-19 compare to other diseases? (00:25:42)
  • Common confusions around COVID-19 (00:32:02)
  • What types of interventions were available to policymakers? (00:46:20)
  • Non-pharmaceutical interventions (01:04:18)
  • What can you do personally? (01:18:25)
  • Reflections on the first few months of the pandemic (01:23:46)
  • Global catastrophic biological risks (GCBRs) (01:26:17)
  • Counterarguments to working on GCBRs (01:45:56)
  • How do GCBRs compare to other problems? (01:49:05)
  • Careers (01:59:50)
  • The response of the effective altruism community to COVID-19 (02:11:42)
  • The response of 80,000 Hours to COVID-19 (02:28:12)


Get this episode by subscribing: type '80,000 Hours' into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.
