Algorithmic Cancer: Why AI Development Is Not What You Think with Connor Leahy

Recently, the risks of Artificial Intelligence and the need for 'alignment' have been flooding our cultural discourse – with Artificial Super Intelligence framed as both the most promising goal and the most pressing threat. But amid the moral debate, surprisingly little attention has been paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work?

In this episode, Nate is joined by Artificial Intelligence developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks associated with its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon he calls 'algorithmic cancer' – AI-generated content that crowds out true human creations, propelled by algorithms that can't tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power in tech companies.

What kinds of policy and regulatory approaches could help slow down AI's acceleration in order to create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology's impacts on mental health, meaning, and societal well-being?

(Conversation recorded on May 21st, 2025)

About Connor Leahy:

Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI.

Previously, he co-founded EleutherAI, which was one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH.

Show Notes and More

Watch this video episode on YouTube

Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie.

---

Support The Institute for the Study of Energy and Our Future

Join our Substack newsletter

Join our Discord channel and connect with other listeners

Episodes (357)

Energy Blindness | Frankly #3

Nate explains how our culture is "energy blind" and the implications. The YouTube video of this podcast, featuring charts and graphs, is available now: https://www.youtube.com/watch?v=mVjhb8Nu1Sk For ...

21 June 2022, 24min

Tim Watkins: "From Living Like Gods to Living Your Own Story"

On this episode, we meet with author, social scientist, policy researcher, and mental health advocate Tim Watkins. Watkins gives us a bird's eye view of how energy, the economy, the environment, and m...

15 June 2022, 1h 21min

Aza Raskin: "AI, The Shape of Language, and Earth's Species"

On this episode, we meet with cofounder of the Earth Species Project, cofounder of the Center for Humane Technology, and cohost of the podcast Your Undivided Attention, Aza Raskin. Raskin gives us a ...

8 June 2022, 1h 49min

Vicki Robin: "Money and Life's Energy"

Show Summary: On this episode, we meet with social innovator, writer, and speaker, Vicki Robin. Robin unpacks how the machine of community begins. How does being vulnerable, sharing, and being obliga...

1 June 2022, 1h 2min

Daniel Schmachtenberger: "Bend not Break #2: Maximum Power and Hyper Agents"

On this episode we meet with founding member of The Consilience Project, Daniel Schmachtenberger. In the second of a four-part series, Nate and Daniel explore the relationship between energy, informat...

25 May 2022, 1h 51min

Dr. Simon Michaux: "Minerals and Materials Blindness"

On this episode, we meet with Associate Professor of Geometallurgy at the Geological Survey of Finland, Dr. Simon Michaux. Why do humans ignore important mineral and material limits that will affect h...

18 May 2022, 1h 19min

Thomas Murphy: "Physics and Planetary Ambitions"

On this episode, we meet with Professor of Physics at UCSD and the Associate Director of CASS, the Center for Astrophysics and Space Sciences, Tom Murphy. Murphy shows us how continued growth and ener...

11 May 2022, 1h 9min

Chuck Watson: "Nuclear War - All the Questions You Were Afraid to Ask"

Show Summary: On this episode, we meet again with risk expert Chuck Watson. How can we avoid a nuclear conflict? Watson gives a primer on how to reduce the risk of nuclear conflict and the measures w...

4 May 2022, 1h 38min
