#128 – Chris Blattman on the five reasons wars happen

In nature, animals roar and bare their teeth to intimidate adversaries — but one side usually backs down, and real fights are rare. The wisdom of evolution is that the risk of violence is just too great.

Which might make one wonder: if war is so destructive, why does it happen? The question may sound naïve, but in fact it represents a deep puzzle. If a war will cost trillions and kill tens of thousands, it should be easy for either side to make a peace offer that both they and their opponents prefer to actually fighting it out.

The conundrum of how humans can engage in incredibly costly and protracted conflicts has occupied academics across the social sciences for years. In today's episode, we speak with economist Chris Blattman about his new book, Why We Fight: The Roots of War and the Paths to Peace, which summarises what social scientists think they've learned.

Links to learn more, summary and full transcript.

Chris's first point is that while organised violence may feel like it's all around us, it's actually very rare in humans, just as it is with other animals. Across the world, hundreds of groups dislike one another — but knowing the cost of war, they prefer to simply loathe one another in peace.

In order to understand what’s wrong with a sick patient, a doctor needs to know what a healthy person looks like. And to understand war, social scientists need to study all the wars that could have happened but didn't — so they can see what a healthy society looks like and what's missing in the places where war does take hold.

Chris argues that social scientists have generated five cogent models of when war can be 'rational' for both sides of a conflict:

1. Unchecked interests — such as national leaders who bear few of the costs of launching a war.
2. Intangible incentives — such as an intrinsic desire for revenge.
3. Uncertainty — such as both sides underestimating each other's resolve to fight.
4. Commitment problems — such as the inability to credibly promise not to use your growing military might to attack others in future.
5. Misperceptions — such as our inability to see the world through other people's eyes.

In today's interview, we walk through how each of the five explanations works and what specific wars or actions each might explain.

In the process, Chris outlines how many of the most popular explanations for interstate war are wildly overused (e.g. leaders who are unhinged or male) or misguided from the outset (e.g. resource scarcity).

The interview also covers:

• What Chris and Rob got wrong about the war in Ukraine
• What causes might not fit into these five categories
• The role people's choices play in escalating or de-escalating a conflict
• How great power wars or nuclear wars are different, and what can be done to prevent them
• How much representative government helps to prevent war
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:43)
  • What people get wrong about violence (00:04:40)
  • Medellín gangs (00:11:48)
  • Overrated causes of violence (00:23:53)
  • Cause of war #1: Unchecked interests (00:36:40)
  • Cause of war #2: Intangible incentives (00:41:40)
  • Cause of war #3: Uncertainty (00:53:04)
  • Cause of war #4: Commitment problems (01:02:24)
  • Cause of war #5: Misperceptions (01:12:18)
  • Weaknesses of the model (01:26:08)
  • Dancing on the edge of a cliff (01:29:06)
  • Confusion around escalation (01:35:26)
  • Applying the model to the war between Russia and Ukraine (01:42:34)
  • Great power wars (02:01:46)
  • Preventing nuclear war (02:18:57)
  • Why undirected approaches won't work (02:22:51)
  • Democratic peace theory (02:31:10)
  • Exchanging hostages (02:37:21)
  • What you can actually do to help (02:41:25)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
