Algorithmic Cancer: Why AI Development Is Not What You Think with Connor Leahy

Recently, the risks of Artificial Intelligence and the need for 'alignment' have been flooding our cultural discourse, with Artificial Superintelligence cast as both the most promising goal and the most pressing threat. But amid the moral debate, surprisingly little attention has been paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work?

In this episode, Nate is joined by Artificial Intelligence developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks associated with its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon he calls 'algorithmic cancer': AI-generated content that crowds out genuine human creations, propelled by algorithms that can't tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power among tech companies.

What kinds of policy and regulatory approaches could help slow down AI's acceleration in order to create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology's impacts on mental health, meaning, and societal well-being?

(Conversation recorded on May 21st, 2025)

About Connor Leahy:

Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI.

Previously, he co-founded EleutherAI, which was one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH.

Show Notes and More

Watch this video episode on YouTube

Want a broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie.

---

Support The Institute for the Study of Energy and Our Future

Join our Substack newsletter

Join our Discord channel and connect with other listeners

Episodes (359)

Bioregional Futures: Reconnecting to Place for Planetary Health with Daniel Christian Wahl

(Conversation recorded on July 24th, 2024) In the past century of abundant energy surplus, humanity's globalized, large-scale approach to problem-solving has yielded remarkable benefits and innovati...

4 Sep 2024, 1h 45min

The Physics of Connection: Understanding Relationships and Ecology with Fritjof Capra

(Conversation recorded on May 8th, 2024) Without a systems lens, the full reality of the human predicament will never be understood. It is only when we adopt this kind of holistic, wide-boundary thi...

28 Aug 2024, 1h 3min

The Art of Movement Building: Personal Liberation for Public Change with Mamphela Ramphele

(Conversation recorded on July 17th, 2024) Addressing the risks we face on a global scale is a challenge that can feel both enormous in execution and personally daunting. When it comes to finding th...

21 Aug 2024, 1h 23min

Ask Me Anything - Your Questions About TGS Answered | Frankly 70

(Recorded August 11, 2024) The content of The Great Simplification (on YouTube and in real life) can be complex, nuanced and multi-faceted. In today's Frankly, Nate offers reflections on a selection o...

16 Aug 2024, 39min

The Population Problem: Human Impact, Extinctions, and the Biodiversity Crisis with Corey Bradshaw

(Conversation recorded on July 25th, 2024) Human overpopulation is often depicted in the media in one of two ways: as either a catastrophic disaster or an overly exaggerated concern. ...

14 Aug 2024, 2h

Goldilocks Technology - A Preliminary Checklist | Frankly 69

(Recorded August 5, 2024) As a problem-solving species, we have made technology an embedded part of the human experience – we assess, innovate, invent and adapt. But as we move out of the anomalous era we have j...

9 Aug 2024, 19min

Biomimicry: Applying Nature's Wisdom to Human Problems with Janine Benyus

(Conversation recorded on June 25th, 2024) Although artificial intelligence tends to dominate conversations about solving our most daunting global challenges, we may actually find some of the most p...

7 Aug 2024, 1h 36min

Overshoot and Its 7 Fundamental Drivers | Frankly 68

(Recorded July 23, 2024) In this week's Frankly (coincidentally released the day after Earth Overshoot Day), Nate breaks down seven factors contributing to humanity's increasing overshoot...

2 Aug 2024, 15min
