#85 - Mark Lynas on climate change, societal collapse & nuclear energy

A golf-ball-sized lump of uranium can deliver more than enough power to cover all of your lifetime energy use. To get the same energy from coal, you’d need 3,200 tonnes of black rock — a mass equivalent to 800 adult elephants, which would produce more than 11,000 tonnes of CO2. That’s about 11,000 tonnes more than the uranium.
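You can sanity-check the rough magnitudes here yourself. The figures below are ballpark assumptions (golf-ball volume, fission energy of ~80 TJ/kg, coal at ~24 MJ/kg, high-carbon coal emitting ~3.4 tonnes of CO2 per tonne burned, a 4-tonne elephant), not numbers from the episode — but they land in the same ballpark as the claims above:

```python
# Back-of-the-envelope check of the uranium-vs-coal comparison.
# All constants are rough illustrative assumptions.

GOLF_BALL_CM3 = 40        # a golf ball is ~40 cm^3
URANIUM_DENSITY = 19.1    # g/cm^3
FISSION_ENERGY = 80e12    # J per kg of uranium fully fissioned (~80 TJ/kg)
COAL_ENERGY = 24e6        # J per kg of coal (~24 MJ/kg)
COAL_CO2 = 3.4            # tonnes of CO2 per tonne of high-carbon coal
ELEPHANT_T = 4.0          # tonnes, one adult elephant

uranium_kg = GOLF_BALL_CM3 * URANIUM_DENSITY / 1000
energy_j = uranium_kg * FISSION_ENERGY

coal_tonnes = energy_j / COAL_ENERGY / 1000  # kg -> tonnes
co2_tonnes = coal_tonnes * COAL_CO2
elephants = coal_tonnes / ELEPHANT_T

print(f"{uranium_kg:.2f} kg of uranium ~ {coal_tonnes:,.0f} t of coal")
print(f"~ {elephants:,.0f} elephants, ~ {co2_tonnes:,.0f} t of CO2")
```

With these assumptions you get on the order of 2,500 tonnes of coal, several hundred elephants, and several thousand tonnes of CO2 — the same order of magnitude as the figures quoted above.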

Many people aren’t comfortable with the danger posed by nuclear power. But given the climatic stakes, it’s worth asking: Just how much more dangerous is it compared to fossil fuels?

According to today’s guest, Mark Lynas — author of Six Degrees: Our Future on a Hotter Planet (winner of the prestigious Royal Society Prizes for Science Books) and Nuclear 2.0 — it’s actually much, much safer.

Links to learn more, summary and full transcript.

Climatologists James Hansen and Pushker Kharecha calculated that the use of nuclear power between 1971 and 2009 avoided the premature deaths of 1.84 million people by avoiding air pollution from burning coal.

What about radiation or nuclear disasters? According to Our World In Data, in generating a given amount of electricity, nuclear, wind, and solar all cause about the same number of deaths — and it's a tiny number.

So what’s going on? Why isn’t everyone demanding a massive scale-up of nuclear energy to save lives and stop climate change? Mark and many other activists believe that unchecked climate change will result in the collapse of human civilization, so the stakes could not be higher.

Mark says that many environmentalists — including him — simply grew up with anti-nuclear attitudes all around them (possibly stemming from a conflation of nuclear weapons and nuclear energy) and haven't thought to question them.

But he thinks that once you believe in the climate emergency, you have to rethink your opposition to nuclear energy.

At 80,000 Hours we haven’t analysed the merits and flaws of the case for nuclear energy — especially compared to wind and solar paired with gas, hydro, or battery power to handle intermittency — but Mark is convinced.

He says it comes down to physics: Nuclear power is just so much denser.

We need to find an energy source that provides carbon-free power to ~10 billion people, and we need to do it while humanity is doubling or tripling (or more) its energy demand.

How do you do that without destroying the world's ecology? Mark thinks that nuclear is the only way.
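To get a rough sense of the scale involved, here is an illustrative calculation (the figures — roughly 18 TW of current world primary power, a tripling of demand, 1 GW reactors running at a 90% capacity factor — are our assumptions, not Mark's):

```python
# Illustrative scale of fully decarbonising a tripled energy demand
# with nuclear alone. All figures are rough assumptions.

CURRENT_PRIMARY_POWER_TW = 18  # world primary energy use today, ~18 TW
DEMAND_MULTIPLIER = 3          # "doubling or tripling (or more)"
REACTOR_GW = 1.0               # one large reactor, ~1 GW
CAPACITY_FACTOR = 0.9          # nuclear plants run ~90% of the time

target_tw = CURRENT_PRIMARY_POWER_TW * DEMAND_MULTIPLIER
reactors_needed = target_tw * 1000 / (REACTOR_GW * CAPACITY_FACTOR)

print(f"Target: ~{target_tw} TW -> ~{reactors_needed:,.0f} 1-GW reactors")
```

Under these assumptions, meeting all of a tripled demand with nuclear alone would take tens of thousands of large reactors — which is why the debate is really about what share nuclear should carry alongside other carbon-free sources.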

Read a more in-depth version of the case for nuclear energy in the full blog post.

For Mark, the only argument against nuclear power is a political one: that people won't want or accept it.

He says that he knows people in all kinds of mainstream environmental groups — such as Greenpeace — who agree that nuclear must be a vital part of any plan to solve climate change. But, because they think they'll be ostracized if they speak up, they keep their mouths shut.

Mark thinks this willingness to indulge beliefs that contradict scientific evidence stands in the way of actually fully addressing climate change, and so he’s helping to build a movement of folks who are out and proud about their support for nuclear energy.

This is only one topic of many in today’s interview. Arden, Rob, and Mark also discuss:

• At what degrees of warming societal collapse becomes likely
• Whether climate change could lead to human extinction
• What environmentalists are getting wrong about climate change
• And much more.

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.
