#43 - Daniel Ellsberg on the institutional insanity that maintains nuclear doomsday machines

In Stanley Kubrick’s iconic film Dr. Strangelove, the American president is informed that the Soviet Union has created a secret deterrence system which will automatically wipe out humanity upon detection of a single nuclear explosion in Russia. With US bombs heading towards the USSR and unable to be recalled, Dr. Strangelove points out that “the whole point of this Doomsday Machine is lost if you keep it a secret – why didn’t you tell the world, eh?” The Soviet ambassador replies that it was to be announced at the Party Congress the following Monday: “The Premier loves surprises”.

Daniel Ellsberg – who leaked the Pentagon Papers that helped end the Vietnam War and the Nixon presidency – claims in his new book The Doomsday Machine: Confessions of a Nuclear War Planner that Dr. Strangelove might as well be a documentary. After seeing the film in Washington DC in 1964, he and a colleague wondered how so many details of their nuclear planning had leaked.

Links to learn more, summary and full transcript.

The USSR did in fact develop a doomsday machine, Dead Hand, which probably remains active today.

If the system can’t contact military leaders, it checks for signs of a nuclear strike, and if it detects them, automatically launches all remaining Soviet weapons at targets across the northern hemisphere.

As in the film, the Soviet Union long kept Dead Hand completely secret, eliminating any strategic benefit, and rendering it a pointless menace to humanity.

You might think the United States would have a more sensible nuclear launch policy. You’d be wrong.

As Ellsberg explains, based on first-hand experience as a nuclear war planner in the 1950s, the notion that only the president can authorize the use of US nuclear weapons is a carefully cultivated myth.

The authority to launch nuclear weapons is delegated alarmingly far down the chain of command – significantly raising the chance that a lone wolf or communication breakdown could trigger a nuclear catastrophe.

The whole justification for this is to defend against a ‘decapitating attack’, where a first strike on Washington disables the ability of the US hierarchy to retaliate. In a moment of crisis, the Russians might view this as their best hope of survival.

Ostensibly, this delegation removes Russia’s temptation to attempt a decapitating attack – the US can retaliate even if its leadership is destroyed. This strategy only works, though, if you tell the enemy you’ve done it.

Instead, since the 1950s this delegation has been one of the United States’ most closely guarded secrets, eliminating its strategic benefit and rendering it another pointless menace to humanity.

Strategically, the setup is stupid. Ethically, it is monstrous.

So – how was such a system built? Why does it remain in place to this day? And how might we shrink our nuclear arsenals to the point where they no longer risk the destruction of civilization?

Daniel explores these questions eloquently and urgently in his book. Today we cover:

* Why full disarmament today would be a mistake, and the optimal number of nuclear weapons to hold
* How well secrets are kept within the government
* The risk posed by the first atomic bomb test
* The effect of Trump on nuclear security
* Whether we have a reliable estimate of the magnitude of a ‘nuclear winter’
* Why Gorbachev allowed Russia’s covert biological warfare program to continue

Get this episode by subscribing: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

