#200 – Ezra Karger on what superforecasters and experts think about existential risks
Episode description
"It’s very hard to find examples where people say, 'I’m starting from this point. I’m starting from this belief.' So we wanted to make that very legible to people. We wanted to say, 'Experts think this; accurate forecasters think this.' They might both be wrong, but we can at least start from here and figure out where we’re coming into a discussion and say, 'I am much less concerned than the people in this report; or I am much more concerned, and I think people in this report were missing major things.' But if you don’t have a reference set of probabilities, I think it becomes much harder to talk about disagreement in policy debates in a space that’s so complicated like this." —Ezra KargerIn today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s recent Existential Risk Persuasion Tournament to come up with estimates of a range of catastrophic risks.Links to learn more, highlights, and full transcript.They cover:How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change.What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results.The challenges of predicting low-probability, high-impact events.Why superforecasters’ estimates of catastrophic risks seem so much lower than experts’, and which group Ezra puts the most weight on.The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are.Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies.Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next.Whether large language models could help or outperform human forecasters.How people can improve their calibration and start making better forecasts personally.Why Ezra thinks 
high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making.And plenty more.Chapters:Cold open (00:00:00)Luisa’s intro (00:01:07)The interview begins (00:02:54)The Existential Risk Persuasion Tournament (00:05:13)Why is this project important? (00:12:34)How was the tournament set up? (00:17:54)Results from the tournament (00:22:38)Risk from artificial intelligence (00:30:59)How to think about these numbers (00:46:50)Should we trust experts or superforecasters more? (00:49:16)The effect of debate and persuasion (01:02:10)Forecasts from the general public (01:08:33)How can we improve people’s forecasts? (01:18:59)Incentives and recruitment (01:26:30)Criticisms of the tournament (01:33:51)AI adversarial collaboration (01:46:20)Hypotheses about stark differences in views of AI risk (01:51:41)Cruxes and different worldviews (02:17:15)Ezra’s experience as a superforecaster (02:28:57)Forecasting as a research field (02:31:00)Can large language models help or outperform human forecasters? (02:35:01)Is forecasting valuable in the real world? (02:39:11)Ezra’s book recommendations (02:45:29)Luisa's outro (02:47:54)Producer: Keiran HarrisAudio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon MonsourContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore