#200 – Ezra Karger on what superforecasters and experts think about existential risks

"It’s very hard to find examples where people say, 'I’m starting from this point. I’m starting from this belief.' So we wanted to make that very legible to people. We wanted to say, 'Experts think this; accurate forecasters think this.' They might both be wrong, but we can at least start from here and figure out where we’re coming into a discussion and say, 'I am much less concerned than the people in this report; or I am much more concerned, and I think people in this report were missing major things.' But if you don’t have a reference set of probabilities, I think it becomes much harder to talk about disagreement in policy debates in a space that’s so complicated like this." —Ezra Karger

In today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s recent Existential Risk Persuasion Tournament, which aimed to produce estimates of a range of catastrophic risks.

Links to learn more, highlights, and full transcript.

They cover:

  • How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change.
  • What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results.
  • The challenges of predicting low-probability, high-impact events.
  • Why superforecasters’ estimates of catastrophic risks seem so much lower than experts’, and which group Ezra puts the most weight on.
  • The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are.
  • Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies.
  • Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next.
  • Whether large language models could help or outperform human forecasters.
  • How people can improve their calibration and start making better forecasts personally.
  • Why Ezra thinks high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making.
  • And plenty more.

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:01:07)
  • The interview begins (00:02:54)
  • The Existential Risk Persuasion Tournament (00:05:13)
  • Why is this project important? (00:12:34)
  • How was the tournament set up? (00:17:54)
  • Results from the tournament (00:22:38)
  • Risk from artificial intelligence (00:30:59)
  • How to think about these numbers (00:46:50)
  • Should we trust experts or superforecasters more? (00:49:16)
  • The effect of debate and persuasion (01:02:10)
  • Forecasts from the general public (01:08:33)
  • How can we improve people’s forecasts? (01:18:59)
  • Incentives and recruitment (01:26:30)
  • Criticisms of the tournament (01:33:51)
  • AI adversarial collaboration (01:46:20)
  • Hypotheses about stark differences in views of AI risk (01:51:41)
  • Cruxes and different worldviews (02:17:15)
  • Ezra’s experience as a superforecaster (02:28:57)
  • Forecasting as a research field (02:31:00)
  • Can large language models help or outperform human forecasters? (02:35:01)
  • Is forecasting valuable in the real world? (02:39:11)
  • Ezra’s book recommendations (02:45:29)
  • Luisa's outro (02:47:54)


Producer: Keiran Harris
Audio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon Monsour
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
