#200 – Ezra Karger on what superforecasters and experts think about existential risks

"It’s very hard to find examples where people say, 'I’m starting from this point. I’m starting from this belief.' So we wanted to make that very legible to people. We wanted to say, 'Experts think this; accurate forecasters think this.' They might both be wrong, but we can at least start from here and figure out where we’re coming into a discussion and say, 'I am much less concerned than the people in this report; or I am much more concerned, and I think people in this report were missing major things.' But if you don’t have a reference set of probabilities, I think it becomes much harder to talk about disagreement in policy debates in a space that’s so complicated like this." —Ezra Karger

In today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s recent Existential Risk Persuasion Tournament, which aimed to produce estimates for a range of catastrophic risks.

Links to learn more, highlights, and full transcript.

They cover:

  • How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change.
  • What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results.
  • The challenges of predicting low-probability, high-impact events.
  • Why superforecasters’ estimates of catastrophic risks seem so much lower than experts’, and which group Ezra puts the most weight on.
  • The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are.
  • Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies.
  • Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next.
  • Whether large language models could help or outperform human forecasters.
  • How people can improve their calibration and start making better forecasts personally.
  • Why Ezra thinks high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making.
  • And plenty more.

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:01:07)
  • The interview begins (00:02:54)
  • The Existential Risk Persuasion Tournament (00:05:13)
  • Why is this project important? (00:12:34)
  • How was the tournament set up? (00:17:54)
  • Results from the tournament (00:22:38)
  • Risk from artificial intelligence (00:30:59)
  • How to think about these numbers (00:46:50)
  • Should we trust experts or superforecasters more? (00:49:16)
  • The effect of debate and persuasion (01:02:10)
  • Forecasts from the general public (01:08:33)
  • How can we improve people’s forecasts? (01:18:59)
  • Incentives and recruitment (01:26:30)
  • Criticisms of the tournament (01:33:51)
  • AI adversarial collaboration (01:46:20)
  • Hypotheses about stark differences in views of AI risk (01:51:41)
  • Cruxes and different worldviews (02:17:15)
  • Ezra’s experience as a superforecaster (02:28:57)
  • Forecasting as a research field (02:31:00)
  • Can large language models help or outperform human forecasters? (02:35:01)
  • Is forecasting valuable in the real world? (02:39:11)
  • Ezra’s book recommendations (02:45:29)
  • Luisa's outro (02:47:54)


Producer: Keiran Harris
Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
