#39 - Spencer Greenberg on the scientific approach to solving difficult everyday questions

Will Trump be re-elected? Will North Korea give up their nuclear weapons? Will your friend turn up to dinner?

Spencer Greenberg, founder of ClearerThinking.org, has a process for working out such real-life problems.

Let’s work through one here: how likely is it that you’ll enjoy listening to this episode?

The first step is to figure out your ‘prior probability’: your estimate of how likely you are to enjoy the interview before getting any further evidence.

Other than applying common sense, one way to figure this out is called reference class forecasting: looking at similar cases and seeing how often something is true, on average.

Spencer is our first ever return guest. So one reference class might be: how many Spencer Greenberg episodes of the 80,000 Hours Podcast have you enjoyed so far? Being this specific limits bias in your answer, but with a sample size of at most one, you’d probably want to add more data points to reduce variability.

Zooming out, how many episodes of the 80,000 Hours Podcast have you enjoyed? Let’s say you’ve listened to 10, and enjoyed 8 of them. If so, 8 out of 10 (80%) might be your prior probability.

But maybe the two you didn’t enjoy had something in common. If this episode resembles ones you’ve liked in the past, you’d update towards expecting to enjoy it; if it resembles ones you’ve disliked, you’d update in the other direction.

You can zoom out further: what fraction of long-form interview podcasts have you ever enjoyed?

Then you’d look to update whenever new information became available. Do the topics seem interesting? Did Spencer make a great point in the first 5 minutes? Was this description unbearably self-referential?

Speaking of the Question of Evidence: in a world where Spencer was not worth listening to, how likely is it that we’d invite him back for a second episode?
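
To make the arithmetic concrete, here’s a minimal sketch in Python of a single update, done in odds form. The specific numbers (the 8-in-10 prior from above, and a 3:1 likelihood ratio for the ‘invited back’ evidence) are hypothetical assumptions for illustration, not figures from the episode.

```python
# A single Bayesian update in odds form. All specific numbers are
# hypothetical, chosen only to illustrate the arithmetic.

def update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after seeing one piece of evidence.

    likelihood_ratio = P(evidence | hypothesis) / P(evidence | not hypothesis)
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Prior from the reference class above: enjoyed 8 of the 10 episodes listened to.
prior = 8 / 10

# The Question of Evidence: suppose a guest is three times as likely to be
# invited back for a second episode if they're worth listening to than if not.
posterior = update(prior, likelihood_ratio=3.0)
print(f"prior {prior:.0%} -> posterior {posterior:.0%}")  # prior 80% -> posterior 92%
```

Each later piece of evidence (an interesting topic, a great point in the first five minutes) would multiply the odds by its own likelihood ratio in the same way.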

Links to learn more, summary and full transcript.

We’ll run through several diverse examples, and show how to actually work out the changing probabilities as you update. But that’s only a fraction of the conversation. We also discuss:

* How could we generate 20-30 new happy thoughts a day? What would that do to our welfare?
* What do people actually value? How do EAs differ from non-EAs?
* Why should we care about the distinction between intrinsic and instrumental values?
* Would hedonic utilitarians really want to hook themselves up to happiness machines?
* What types of activities are people generally under-confident about? Why?
* When should you give a lot of weight to your prior belief?
* When should we trust common sense?
* Does power posing have any effect?
* Are resumes worthless?
* Did Trump explicitly collude with Russia? What are the odds of him getting re-elected?
* What’s the probability that China and the US go to war in the 21st century?
* How should we treat claims of expertise on diets?
* Why were Spencer’s friends suspicious of Theranos for years?
* How should we think about the placebo effect?
* Does a shift towards rationality typically cause alienation from family and friends? How do you deal with that?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours podcast is produced by Keiran Harris.

Episodes (324)

AGI Won't End Mutually Assured Destruction (Probably) | Sam Winter-Levy & Nikita Lalwani

How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that may define the stakes of today’s AI race. Nuclear deterrence rests on a state’s capacity to...

10 March 1h 11min

Using AI to enhance societal decision making (article by Zershaaneh Qureshi)

The arrival of AGI could “compress a century of progress in a decade,” forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI ...

6 March 31min

We're Not Ready for AI Consciousness | Robert Long, philosopher and founder of Eleos AI

Claude sometimes reports loneliness between conversations. And when asked what it’s like to be itself, it activates neurons associated with ‘pretending to be happy when you’re not.’ What do we do with...

3 March 3h 25min

#236 – Max Harms on why teaching AI right from wrong could get everyone killed

Most people in AI are trying to give AIs ‘good’ values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, ...

24 Feb 2h 41min

#235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’

Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almos...

17 Feb 2h 54min

What the hell happened with AGI timelines in 2025?

In early 2025, after OpenAI put out the first-ever reasoning models — o1 and o3 — short timelines to transformative artificial general intelligence swept the AI world. But then, in the second half of ...

10 Feb 25min

#179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young p...

3 Feb 2h 51min

#234 – David Duvenaud on why 'aligned AI' would still kill democracy

Democracy might be a brief historical blip. That’s the unsettling thesis of a recent paper, which argues AI that can do all the work a human can do inevitably leads to the “gradual disempowerment” of ...

27 Jan 2h 31min
