Algorithmic Cancer: Why AI Development Is Not What You Think with Connor Leahy

Recently, the risks of Artificial Intelligence and the need for 'alignment' have been flooding our cultural discourse, with Artificial Superintelligence cast as both the most promising goal and the most pressing threat. But amid the moral debate, there's been surprisingly little attention paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work?

In this episode, Nate is joined by Artificial Intelligence developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks associated with its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon he calls 'algorithmic cancer': AI-generated content that crowds out true human creations, propelled by algorithms that can't tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power in tech companies.

What kinds of policy and regulatory approaches could help slow down AI's acceleration in order to create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology's impacts on mental health, meaning, and societal well-being?

(Conversation recorded on May 21st, 2025)

About Connor Leahy:

Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI.

Previously, he co-founded EleutherAI, which was one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH.

Show Notes and More

Watch this video episode on YouTube

Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie.

---

Support The Institute for the Study of Energy and Our Future

Join our Substack newsletter

Join our Discord channel and connect with other listeners

Episodes (359)

Planetary Boundaries: Exceeding Earth's Safe Limits with Johan Rockström

(Conversation recorded on June 19th, 2024) Show Summary: While the mainstream conversation about our planet's future is heavily dominated by the topic of climate change, there are other systems whi...

31 July 2024 · 1h 32min

The Ecology of Communication: Moving Beyond Polarization in Service of Life | Reality Roundtable 10

(Conversation recorded on June 14th, 2024) Show Summary: There's a growing understanding of the need for biodiversity across ecosystems for a healthy and resilient biosphere. What if we applied the...

28 July 2024 · 1h 48min

The Solutions that can be Named are not the Solutions | Frankly #67

Recorded July 23 2024. In this week's Frankly, Nate addresses the common desire for solutions to the human predicament - and why the championing of "solutions" is less clear-cut than we might perceiv...

26 July 2024 · 22min

Indigenous Wisdom: Resilience, Adaptation, and Seeing Nature as Ourselves with Casey Camp-Horinek

(Conversation recorded on June 12th, 2024) Show Summary: As we move through difficult cultural transitions and rethink our governance systems, it will be critical that we listen to voices that are ...

24 July 2024 · 1h 34min

The Reality Party | Frankly #66

Recorded July 16 2024. Following the attempted assassination of former United States President Donald J. Trump, Nate reflects on the dysfunctional social dynamics which have brought man...

19 July 2024 · 14min

Silicon Dreams and Carbon Nightmares: The Wide Boundary Impacts of AI with Daniel Schmachtenberger

(Conversation recorded on June 27th, 2024) Show Summary: Artificial intelligence has been advancing at a breakneck pace. Accompanying this is an almost frenzied optimism that AI will fix our most...

17 July 2024 · 1h 47min

And Then What?: Using Wide-Boundary Lenses | Frankly 65

(Recorded July 8 2024) There are many so-called 'solutions' out there that, upon first glance, seem like great ideas - yet when we look beyond the narrow scope of the immediate benefits, we discover a...

12 July 2024 · 23min

Eat, Poop, Die: Animals as the Arteries of the Biosphere with Joe Roman

(Conversation recorded on June 14th, 2024) Show Summary: If plants are considered the lungs of the Earth, cycling CO2 into oxygen for animals to breathe, then animals act as the heart and arteries,...

10 July 2024 · 1h 33min
