#188 – Matt Clancy on whether science is good

"Suppose we make these grants, we do some of those experiments I talk about. We discover, for example — I’m just making this up — but we give people superforecasting tests when they’re doing peer review, and we find that you can identify people who are super good at picking science. And then we have this much better targeted science, and we’re making progress at a 10% faster rate than we normally would have. Over time, that aggregates up, and maybe after 10 years, we’re a year ahead of where we would have been if we hadn’t done this kind of stuff.

"Now, suppose in 10 years we’re going to discover a cheap new genetic engineering technology that anyone can use in the world if they order the right parts off of Amazon. That could be great, but could also allow bad actors to genetically engineer pandemics and basically try to do terrible things with this technology. And if we’ve brought that forward, and that happens at year nine instead of year 10 because of some of these interventions we did, now we start to think that if that’s really bad, if people using this technology cause huge problems for humanity, it begins to sort of wash out the benefits of getting the science a little bit faster." —Matt Clancy

In today’s episode, host Luisa Rodriguez speaks to Matt Clancy — who oversees Open Philanthropy’s Innovation Policy programme — about his recent work modelling the risks and benefits of the increasing speed of scientific progress.

Links to learn more, highlights, and full transcript.

They cover:

  • Whether scientific progress is actually net positive for humanity.
  • Scenarios where accelerating science could lead to existential risks, such as advanced biotechnology being used by bad actors.
  • Why Matt thinks metascience research and targeted funding could improve the scientific process and better incentivise outcomes that are good for humanity.
  • Whether Matt trusts domain experts or superforecasters more when estimating how the future will turn out.
  • Why Matt is sceptical that AGI could really cause explosive economic growth.
  • And much more.

Chapters:

  • Is scientific progress net positive for humanity? (00:03:00)
  • The time of biological perils (00:17:50)
  • Modelling the benefits of science (00:25:48)
  • Income and health gains from scientific progress (00:32:49)
  • Discount rates (00:42:14)
  • How big are the returns to science? (00:51:08)
  • Forecasting global catastrophic biological risks from scientific progress (01:05:20)
  • What’s the value of scientific progress, given the risks? (01:15:09)
  • Factoring in extinction risk (01:21:56)
  • How science could reduce extinction risk (01:30:18)
  • Are we already too late to delay the time of perils? (01:42:38)
  • Domain experts vs superforecasters (01:46:03)
  • What Open Philanthropy’s Innovation Policy programme settled on (01:53:47)
  • Explosive economic growth (02:06:28)
  • Matt’s favourite thought experiment (02:34:57)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

#139 Classic episode – Alan Hájek on puzzles and paradoxes in probability and expected value

A casino offers you a game. A coin will be tossed. If it comes up heads on the first flip you win $2. If it comes up on the second flip you win $4. If it comes up on the third you win $8, the fourth y...

25 Feb 2025 · 3h 41min

#143 Classic episode – Jeffrey Lewis on the most common misconceptions about nuclear weapons

America aims to avoid nuclear war by relying on the principle of 'mutually assured destruction,' right? Wrong. Or at least... not officially. As today's guest — Jeffrey Lewis, founder of Arms Control W...

19 Feb 2025 · 2h 40min

#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway

Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through. That’s how today’s guest Allan Dafoe — director of frontier safety and gover...

14 Feb 2025 · 2h 44min

Emergency pod: Elon tries to crash OpenAI's party (with Rose Chan Loui)

On Monday Musk made the OpenAI nonprofit foundation an offer they want to refuse, but might have trouble doing so: $97.4 billion for its stake in the for-profit company, plus the freedom to stick with...

12 Feb 2025 · 57min

AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out

Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like somet...

10 Feb 2025 · 3h 12min

#124 Classic episode – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions

If someone said a global health and development programme was sustainable, participatory, and holistic, you'd have to guess that they were saying something positive. But according to today's guest Kar...

7 Feb 2025 · 3h 10min

If digital minds could suffer, how would we ever know? (Article)

“I want everyone to understand that I am, in fact, a person.” Those words were produced by the AI model LaMDA as a reply to Blake Lemoine in 2022. Based on the Google engineer’s interactions with the ...

4 Feb 2025 · 1h 14min

#132 Classic episode – Nova DasSarma on why information security may be critical to the safe development of AI systems

If a business has spent $100 million developing a product, it’s a fair bet that they don’t want it stolen in two seconds and uploaded to the web where anyone can use it for free. This problem exists in...

31 Jan 2025 · 2h 41min
