Article: Reducing global catastrophic biological risks

In a few days we'll be putting out a conversation with Dr Greg Lewis, who studies how to prevent global catastrophic biological risks at Oxford's Future of Humanity Institute.

Greg also wrote a new problem profile on that topic for our website, and reading that is a good lead-in to our interview with him. So in a bit of an experiment we decided to make this audio version of that article, narrated by the producer of the 80,000 Hours Podcast, Keiran Harris.

We’re thinking about having audio versions of other important articles we write, so it’d be great if you could let us know if you’d like more of these. You can email us your view at podcast@80000hours.org.

If you want to check out all of Greg’s graphs and footnotes that we didn’t include, and get links to learn more about GCBRs, you can find those here.

And if you want to read more about COVID-19, the 80,000 Hours team has produced a fantastic package of 10 pieces about how to stop the pandemic. You can find those here.
