#15 - Phil Tetlock on how chimps beat Berkeley undergrads and when it’s wise to defer to the wise

Prof Philip Tetlock is a social science legend. Over forty years he has researched whose predictions we can trust, whose we can't, and why, and he has developed methods that allow all of us to become better at predicting the future.

After the Iraq WMDs fiasco, the US intelligence services hired him to figure out how to ensure they’d never screw up that badly again. The result of that work – Superforecasting – was a media sensation in 2015.

Full transcript, brief summary, coaching application, and links to learn more.

It described Tetlock’s Good Judgement Project, which found forecasting methods so accurate they beat everyone else in open competition, including thousands of people in the intelligence services with access to classified information.

Today he's working to develop the best forecasting process ever by combining top human and machine intelligence in the Hybrid Forecasting Competition, which you can sign up for and participate in.

We start by describing his key findings, and then push to the edge of what is known about how to foresee the unforeseeable:

* Should people who want to be right just adopt the views of experts rather than apply their own judgement?
* Why are Berkeley undergrads worse forecasters than dart-throwing chimps?
* Should I keep my political views secret, so it will be easier to change them later?
* How can listeners contribute to his latest cutting-edge research?
* What do we know about our accuracy at predicting low-probability high-impact disasters?
* Does his research provide an intellectual basis for populist political movements?
* Was the Iraq War caused by bad politics, or bad intelligence methods?
* What can we learn about forecasting from the 2016 election?
* Can experience help people avoid overconfidence and underconfidence?
* When does an AI easily beat human judgement?
* Could more accurate forecasting methods make the world more dangerous?
* How much does demographic diversity line up with cognitive diversity?
* What are the odds we’ll go to war with China?
* Should we let prediction tournaments run most of the government?

Listen to it. Get free, one-on-one career advice. Want to work on important social science research like Tetlock's? We've helped hundreds of people compare their options and get introductions. Find out if our coaching can help you.

Episodes (324)

AI Won't End Mutually Assured Destruction (Probably) | Sam Winter-Levy & Nikita Lalwani

How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that may define the stakes of today’s AI race. Nuclear deterrence rests on a state’s capacity to...

10 March 1h 11min

Using AI to enhance societal decision making (article by Zershaaneh Qureshi)

The arrival of AGI could “compress a century of progress in a decade,” forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI ...

6 March 31min

We're Not Ready for AI Consciousness | Robert Long, philosopher and founder of Eleos AI

Claude sometimes reports loneliness between conversations. And when asked what it’s like to be itself, it activates neurons associated with ‘pretending to be happy when you’re not.’ What do we do with...

3 March 3h 25min

#236 – Max Harms on why teaching AI right from wrong could get everyone killed

Most people in AI are trying to give AIs ‘good’ values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, ...

24 Feb 2h 41min

#235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’

Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almos...

17 Feb 2h 54min

What the hell happened with AGI timelines in 2025?

In early 2025, after OpenAI put out the first-ever reasoning models — o1 and o3 — short timelines to transformative artificial general intelligence swept the AI world. But then, in the second half of ...

10 Feb 25min

#179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young p...

3 Feb 2h 51min

#234 – David Duvenaud on why 'aligned AI' would still kill democracy

Democracy might be a brief historical blip. That’s the unsettling thesis of a recent paper, which argues AI that can do all the work a human can do inevitably leads to the “gradual disempowerment” of ...

27 Jan 2h 31min
