#200 – Ezra Karger on what superforecasters and experts think about existential risks

"It’s very hard to find examples where people say, 'I’m starting from this point. I’m starting from this belief.' So we wanted to make that very legible to people. We wanted to say, 'Experts think this; accurate forecasters think this.' They might both be wrong, but we can at least start from here and figure out where we’re coming into a discussion and say, 'I am much less concerned than the people in this report; or I am much more concerned, and I think people in this report were missing major things.' But if you don’t have a reference set of probabilities, I think it becomes much harder to talk about disagreement in policy debates in a space that’s so complicated like this." —Ezra Karger

In today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s recent Existential Risk Persuasion Tournament, which aimed to produce estimates of a range of catastrophic risks.

Links to learn more, highlights, and full transcript.

They cover:

  • How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change.
  • What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results.
  • The challenges of predicting low-probability, high-impact events.
  • Why superforecasters’ estimates of catastrophic risks seem so much lower than experts’, and which group Ezra puts the most weight on.
  • The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are.
  • Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies.
  • Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next.
  • Whether large language models could help or outperform human forecasters.
  • How people can improve their calibration and start making better forecasts personally.
  • Why Ezra thinks high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making.
  • And plenty more.

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:01:07)
  • The interview begins (00:02:54)
  • The Existential Risk Persuasion Tournament (00:05:13)
  • Why is this project important? (00:12:34)
  • How was the tournament set up? (00:17:54)
  • Results from the tournament (00:22:38)
  • Risk from artificial intelligence (00:30:59)
  • How to think about these numbers (00:46:50)
  • Should we trust experts or superforecasters more? (00:49:16)
  • The effect of debate and persuasion (01:02:10)
  • Forecasts from the general public (01:08:33)
  • How can we improve people’s forecasts? (01:18:59)
  • Incentives and recruitment (01:26:30)
  • Criticisms of the tournament (01:33:51)
  • AI adversarial collaboration (01:46:20)
  • Hypotheses about stark differences in views of AI risk (01:51:41)
  • Cruxes and different worldviews (02:17:15)
  • Ezra’s experience as a superforecaster (02:28:57)
  • Forecasting as a research field (02:31:00)
  • Can large language models help or outperform human forecasters? (02:35:01)
  • Is forecasting valuable in the real world? (02:39:11)
  • Ezra’s book recommendations (02:45:29)
  • Luisa's outro (02:47:54)


Producer: Keiran Harris
Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore

Episodes (317)

#43 - Daniel Ellsberg on the institutional insanity that maintains nuclear doomsday machines

In Stanley Kubrick’s iconic film Dr. Strangelove, the American president is informed that the Soviet Union has created a secret deterrence system which will automatically wipe out humanity upon detect...

25 Sep 2018 · 2h 44min

#42 - Amanda Askell on moral empathy, the value of information & the ethics of infinity

Consider two familiar moments at a family reunion. Our host, Uncle Bill, takes pride in his barbecuing skills. But his niece Becky says that she now refuses to eat meat. A groan goes round the table; ...

11 Sep 2018 · 2h 46min

#41 - David Roodman on incarceration, geomagnetic storms, & becoming a world-class researcher

With 698 inmates per 100,000 citizens, the U.S. is by far the leader among large wealthy nations in incarceration. But what effect does imprisonment actually have on crime? According to David Roodman...

28 Aug 2018 · 2h 18min

#40 - Katja Grace on forecasting future technology & how much we should trust expert predictions

Experts believe that artificial intelligence will be better than humans at driving trucks by 2027, working in retail by 2031, writing bestselling books by 2049, and working as surgeons by 2053. But ho...

21 Aug 2018 · 2h 11min

#39 - Spencer Greenberg on the scientific approach to solving difficult everyday questions

Will Trump be re-elected? Will North Korea give up their nuclear weapons? Will your friend turn up to dinner? Spencer Greenberg, founder of ClearerThinking.org, has a process for working out such real...

7 Aug 2018 · 2h 17min

#38 - Yew-Kwang Ng on anticipating effective altruism decades ago & how to make a much happier world

Will people who think carefully about how to maximize welfare eventually converge on the same views? The effective altruism community has spent a lot of time over the past 10 years debating how best t...

26 Jul 2018 · 1h 59min

#37 - GiveWell picks top charities by estimating the unknowable. James Snowden on how they do it.

What’s the value of preventing the death of a 5-year-old child, compared to a 20-year-old, or an 80-year-old? The global health community has generally regarded the value as proportional to the numbe...

16 Jul 2018 · 1h 44min

#36 - Tanya Singh on ending the operations management bottleneck in effective altruism

Almost nobody is able to do groundbreaking physics research themselves, and by the time his brilliance was appreciated, Einstein was hardly limited by funding. But what if you could find a way to unlo...

11 Jul 2018 · 2h 4min
