#188 – Matt Clancy on whether science is good

"Suppose we make these grants, we do some of those experiments I talk about. We discover, for example — I’m just making this up — but we give people superforecasting tests when they’re doing peer review, and we find that you can identify people who are super good at picking science. And then we have this much better targeted science, and we’re making progress at a 10% faster rate than we normally would have. Over time, that aggregates up, and maybe after 10 years, we’re a year ahead of where we would have been if we hadn’t done this kind of stuff.

"Now, suppose in 10 years we’re going to discover a cheap new genetic engineering technology that anyone can use in the world if they order the right parts off of Amazon. That could be great, but could also allow bad actors to genetically engineer pandemics and basically try to do terrible things with this technology. And if we’ve brought that forward, and that happens at year nine instead of year 10 because of some of these interventions we did, now we start to think that if that’s really bad, if these people using this technology causes huge problems for humanity, it begins to sort of wash out the benefits of getting the science a little bit faster." —Matt Clancy

In today’s episode, host Luisa Rodriguez speaks to Matt Clancy — who oversees Open Philanthropy’s Innovation Policy programme — about his recent work modelling the risks and benefits of the increasing speed of scientific progress.

Links to learn more, highlights, and full transcript.

They cover:

  • Whether scientific progress is actually net positive for humanity.
  • Scenarios where accelerating science could lead to existential risks, such as advanced biotechnology being used by bad actors.
  • Why Matt thinks metascience research and targeted funding could improve the scientific process and better incentivise outcomes that are good for humanity.
  • Whether Matt trusts domain experts or superforecasters more when estimating how the future will turn out.
  • Why Matt is sceptical that AGI could really cause explosive economic growth.
  • And much more.

Chapters:

  • Is scientific progress net positive for humanity? (00:03:00)
  • The time of biological perils (00:17:50)
  • Modelling the benefits of science (00:25:48)
  • Income and health gains from scientific progress (00:32:49)
  • Discount rates (00:42:14)
  • How big are the returns to science? (00:51:08)
  • Forecasting global catastrophic biological risks from scientific progress (01:05:20)
  • What’s the value of scientific progress, given the risks? (01:15:09)
  • Factoring in extinction risk (01:21:56)
  • How science could reduce extinction risk (01:30:18)
  • Are we already too late to delay the time of perils? (01:42:38)
  • Domain experts vs superforecasters (01:46:03)
  • What Open Philanthropy’s Innovation Policy programme settled on (01:53:47)
  • Explosive economic growth (02:06:28)
  • Matt’s favourite thought experiment (02:34:57)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
