#200 – Ezra Karger on what superforecasters and experts think about existential risks

"It’s very hard to find examples where people say, 'I’m starting from this point. I’m starting from this belief.' So we wanted to make that very legible to people. We wanted to say, 'Experts think this; accurate forecasters think this.' They might both be wrong, but we can at least start from here and figure out where we’re coming into a discussion and say, 'I am much less concerned than the people in this report; or I am much more concerned, and I think people in this report were missing major things.' But if you don’t have a reference set of probabilities, I think it becomes much harder to talk about disagreement in policy debates in a space that’s so complicated like this." —Ezra Karger

In today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s recent Existential Risk Persuasion Tournament, which was designed to come up with estimates of a range of catastrophic risks.

Links to learn more, highlights, and full transcript.

They cover:

  • How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change.
  • What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results.
  • The challenges of predicting low-probability, high-impact events.
  • Why superforecasters’ estimates of catastrophic risks seem so much lower than experts’, and which group Ezra puts the most weight on.
  • The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are.
  • Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies.
  • Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next.
  • Whether large language models could help or outperform human forecasters.
  • How people can improve their own calibration and start making better forecasts (see the sketch after this list).
  • Why Ezra thinks high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making.
  • And plenty more.
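
On the calibration point above: one common way to check your own calibration is to score past probability forecasts against what actually happened, for instance with the Brier score. Here is a minimal sketch (ours, not from the episode); the forecasts and outcomes are hypothetical numbers for illustration.

```python
# Minimal sketch of one standard calibration check: the Brier score.
# The forecasts and outcomes below are hypothetical, for illustration only.

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts (each in [0, 1])
    and binary outcomes (0 or 1). Lower is better: 0.0 is perfect,
    and always answering 0.5 scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A hypothetical track record: four probability forecasts and what happened.
forecasts = [0.9, 0.2, 0.7, 0.05]
outcomes = [1, 0, 1, 0]

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # -> 0.036
```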

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:01:07)
  • The interview begins (00:02:54)
  • The Existential Risk Persuasion Tournament (00:05:13)
  • Why is this project important? (00:12:34)
  • How was the tournament set up? (00:17:54)
  • Results from the tournament (00:22:38)
  • Risk from artificial intelligence (00:30:59)
  • How to think about these numbers (00:46:50)
  • Should we trust experts or superforecasters more? (00:49:16)
  • The effect of debate and persuasion (01:02:10)
  • Forecasts from the general public (01:08:33)
  • How can we improve people’s forecasts? (01:18:59)
  • Incentives and recruitment (01:26:30)
  • Criticisms of the tournament (01:33:51)
  • AI adversarial collaboration (01:46:20)
  • Hypotheses about stark differences in views of AI risk (01:51:41)
  • Cruxes and different worldviews (02:17:15)
  • Ezra’s experience as a superforecaster (02:28:57)
  • Forecasting as a research field (02:31:00)
  • Can large language models help or outperform human forecasters? (02:35:01)
  • Is forecasting valuable in the real world? (02:39:11)
  • Ezra’s book recommendations (02:45:29)
  • Luisa’s outro (02:47:54)


Producer: Keiran Harris
Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Episodes (317)

#51 - Martin Gurri on the revolt of the public & crisis of authority in the information age

Politics in rich countries seems to be going nuts. What's the explanation? Rising inequality? The decline of manufacturing jobs? Excessive immigration? Martin Gurri spent decades as a CIA analyst and...

29 January 2019, 2h 31min

#50 - David Denkenberger on how to feed all 8b people through an asteroid/nuclear winter

If an asteroid impact or nuclear winter blocked the sun for years, our inability to grow food would result in billions dying of starvation, right? According to Dr David Denkenberger, co-author of Feed...

27 December 2018, 2h 57min

#49 - Rachel Glennerster on a year's worth of education for 30c & other development 'best buys'

If I told you it's possible to deliver an extra year of ideal primary-level education for under $1, would you believe me? Hopefully not - the claim is absurd on its face. But it may be true nonetheles...

20 December 2018, 1h 35min

#48 - Brian Christian on better living through the wisdom of computer science

Please let us know if we've helped you: fill out our annual impact survey. Ever felt that you were so busy you spent all your time paralysed trying to figure out where to start, and couldn't get much ...

22 November 2018, 3h 15min

#47 - Catherine Olsson & Daniel Ziegler on the fast path into high-impact ML engineering roles

After dropping out of a machine learning PhD at Stanford, Daniel Ziegler needed to decide what to do next. He’d always enjoyed building stuff and wanted to shape the development of AI, so he thought a...

2 November 2018, 2h 4min

#46 - Hilary Greaves on moral cluelessness & tackling crucial questions in academia

The barista gives you your coffee and change, and you walk away from the busy line. But you suddenly realise she gave you $1 less than she should have. Do you brush your way past the people now waitin...

23 October 2018, 2h 49min

#45 - Tyler Cowen's case for maximising econ growth, stabilising civilization & thinking long-term

I've probably spent more time reading Tyler Cowen - Professor of Economics at George Mason University - than any other author. Indeed it's his incredibly popular blog Marginal Revolution that prompted...

17 October 2018, 2h 30min

#44 - Paul Christiano on how we'll hand the future off to AI, & solving the alignment problem

Paul Christiano is one of the smartest people I know. After our first session produced such great material, we decided to do a second recording, resulting in our longest interview so far. While challe...

2 October 2018, 3h 51min
