“Long-term risks from ideological fanaticism” by David_Althaus, Jamie_Harris, vanessa16, Clare_Diane, Will Aldred

Cross-posted to LessWrong.

Summary
  • History's most destructive ideologies—like Nazism, totalitarian communism, and religious fundamentalism—exhibited remarkably similar characteristics:
    • epistemic and moral certainty
    • extreme tribalism dividing humanity into a sacred “us” and an evil “them”
    • a willingness to use whatever means necessary, including brutal violence.
  • Such ideological fanaticism was a major driver of eight of the ten greatest atrocities since 1800, including the Taiping Rebellion, World War II, and the regimes of Stalin, Mao, and Hitler.
  • We focus on ideological fanaticism over related concepts like totalitarianism partly because it better captures terminal preferences, which plausibly matter most as we approach superintelligent AI and technological maturity.
  • Ideological fanaticism is considerably less influential than in the past, controlling only a small fraction of world GDP. Yet at least hundreds of millions still hold fanatical views, many regimes exhibit concerning ideological tendencies, and the past two decades have seen widespread democratic backsliding.
  • The long-term influence of ideological fanaticism is uncertain. Fanaticism faces many disadvantages including a weak starting position, poor epistemics, and difficulty assembling broad coalitions. But it benefits from greater willingness to use extreme measures, fervent mass followings, and a historical tendency to survive and even thrive amid technological and societal upheaval. Beyond complete victory or defeat, multipolarity may [...]

---

Outline:

(00:16) Summary

(05:19) What do we mean by ideological fanaticism?

(08:40) I. Dogmatic certainty: epistemic and moral lock-in

(10:02) II. Manichean tribalism: total devotion to us, total hatred for them

(12:42) III. Unconstrained violence: any means necessary

(14:33) Fanaticism as a multidimensional continuum

(16:09) Ideological fanaticism drove most of recent history's worst atrocities

(19:24) Death tolls don't capture all harm

(20:55) Intentional versus natural or accidental harm

(22:44) Why emphasize ideological fanaticism over political systems like totalitarianism?

(25:07) Fanatical and totalitarian regimes have caused far more harm than all other regime types

(26:29) Authoritarianism as a risk factor

(27:19) Values change political systems: Ideological fanatics seek totalitarianism, not democracy

(29:50) Terminal values may matter independently of political systems, especially with AGI

(31:02) Fanaticism's connection to malevolence (dark personality traits)

(34:22) The current influence of ideological fanaticism

(34:42) Historical perspective: it was much worse, but we are sliding back

(37:19) Estimating the global scale of ideological fanaticism

(43:57) State actors

(48:12) How much influence will ideological fanaticism have in the long-term future?

(48:57) Reasons for optimism: Why ideological fanaticism will likely lose

(49:45) A worse starting point and historical track record

(50:33) Fanatics' intolerance results in coalitional disadvantages

(51:53) The epistemic penalty of irrational dogmatism

(54:21) The marketplace of ideas and human preferences

(55:57) Reasons for pessimism: Why ideological fanatics may gain power

(56:04) The fragility of democratic leadership in AI

(56:37) Fanatical actors may grab power via coups or revolutions

(59:36) Fanatics have fewer moral constraints

(01:01:13) Fanatics prioritize destructive capabilities

(01:02:13) Some ideologies with fanatical elements have been remarkably resilient and successful

(01:03:01) Novel fanatical ideologies could emerge, or existing ones could mutate

(01:05:08) Fanatics may have longer time horizons, greater scope-sensitivity, and prioritize growth more

(01:07:15) A possible middle ground: Persistent multipolar worlds

(01:08:33) Why multipolar futures seem plausible

(01:10:00) Why multipolar worlds might persist indefinitely

(01:15:42) Ideological fanaticism increases existential and suffering risks

(01:17:09) Ideological fanaticism increases the risk of war and conflict

(01:17:44) Reasons for war and ideological fanaticism

(01:26:27) Fanatical ideologies are non-democratic, which increases the risk of war

(01:27:00) These risks are both time-sensitive and timeless

(01:27:44) Fanatical retributivism may lead to astronomical suffering

(01:29:50) Empirical evidence: how many people endorse eternal extreme punishment?

(01:33:53) Religious fanatical retributivism

(01:40:45) Secular fanatical retributivism

(01:41:43) Ideological fanaticism could undermine long-reflection-style frameworks and AI alignment

(01:42:33) Ideological fanaticism threatens collective moral deliberation

(01:47:35) AI alignment may not solve the fanaticism problem either

(01:53:33) Prevalence of reality-denying, anti-pluralistic, and punitive worldviews

(01:55:44) Ideological fanaticism could worsen many other risks

(01:55:49) Differential intellectual regress

(01:56:51) Ideological fanaticism may give rise to extreme optimization and insatiable moral desires

(01:59:21) Apocalyptic terrorism

(02:00:05) S-risk-conducive propensities and reverse cooperative intelligence

(02:01:28) More speculative dynamics: purity spirals and self-inflicted suffering

(02:03:00) Unknown unknowns and navigating exotic scenarios

(02:03:43) Interventions

(02:05:31) Societal or political interventions

(02:05:51) Safeguarding democracy

(02:06:40) Reducing political polarization

(02:10:26) Promoting anti-fanatical values: classical liberalism and Enlightenment principles

(02:13:55) Growing the influence of liberal democracies

(02:15:54) Encouraging reform in illiberal countries

(02:16:51) Promoting international cooperation

(02:22:36) Artificial intelligence-related interventions

(02:22:41) Reducing the chance that transformative AI falls into the hands of fanatics

(02:27:58) Making transformative AIs themselves less likely to be fanatical

(02:36:14) Using AI to improve epistemics and deliberation

(02:38:13) Fanaticism-resistant post-AGI governance

(02:39:51) Addressing deeper causes of ideological fanaticism

(02:41:26) Supplementary materials

(02:41:39) Acknowledgments

(02:42:22) References

---

First published:
February 12th, 2026

Source:
https://forum.effectivealtruism.org/posts/EDBQPT65XJsgszwmL/long-term-risks-from-ideological-fanaticism

---

Narrated by TYPE III AUDIO.

---

