Algorithmic Cancer: Why AI Development Is Not What You Think with Connor Leahy


Recently, the risks of Artificial Intelligence and the need for 'alignment' have been flooding our cultural discourse – with Artificial Super Intelligence framed as both the most promising goal and the most pressing threat. But amid the moral debate, there's been surprisingly little attention paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work?

In this episode, Nate is joined by Artificial Intelligence developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks associated with its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon of what he calls 'algorithmic cancer' – AI-generated content that crowds out true human creations, propelled by algorithms that can't tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power in the hands of tech companies.

What kinds of policy and regulatory approaches could help slow down AI's acceleration in order to create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology's impacts on mental health, meaning, and societal well-being?

(Conversation recorded on May 21st, 2025)

About Connor Leahy:

Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI.

Previously, he co-founded EleutherAI, which was one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH.

Show Notes and More

Watch this video episode on YouTube

Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie.

---

Support The Institute for the Study of Energy and Our Future

Join our Substack newsletter

Join our Discord channel and connect with other listeners

Episodes (359)

Sandra Faber: "The Universe and Our Place in It"


On this episode, astrophysicist Sandra Faber joins Nate for a wide-view cosmological conversation on the development of the known universe and the moral implications for humanity's role within it. We a...

28 Feb 2024 · 1h 26min

John Robb: "Networked Tribalism, AI, and Asteroids"


On this episode, Nate is joined by author and technology analyst John Robb to discuss how geopolitics, information warfare, and technology are shaping how we understand the world and interact with eac...

21 Feb 2024 · 1h 51min

Reflections From India | Frankly #54


Recorded February 13, 2024. Returning from his first visit to India for a six-week limbic reset, Nate shares insights on both his personal experiences in the country and how its history,...

16 Feb 2024 · 12min

Ashley Hodgson: "The New Enlightenment and Behavioral Economics"


On this episode, Nate is joined by Ashley Hodgson, a professor of behavioral economics, who offers a perspective on the superorganism and what she calls 'The New Enlightenment'. By taking a wide...

14 Feb 2024 · 1h 5min

Steve Keen: "On the Origins of Energy Blindness"


On this episode, economist Steve Keen offers a deep forensic history of why modern economic theory has neglected the role of energy in productivity - and why this "Energy Blindness" is now a major bli...

7 Feb 2024 · 1h 32min

Mario Giampietro: "Models with Meaning - Changing Social Practices"


On this episode, Nate is joined by biophysical analyst Mario Giampietro to unpack his decades of research on a wide-lens view of the challenges facing the human system. With current metrics that only ...

31 Jan 2024 · 1h 23min

Alexa Firmenich: "Biodiversity, Beauty, and Being"


On this episode, Nate is joined by Alexa Firmenich, whose work spans biodiversity advocacy, ESG investing, wilderness excursion facilitation, and podcasting/creative writing. Together, they philosophi...

24 Jan 2024 · 1h 44min

The Haves & The Have-Nots | Frankly #53


Recorded December 18, 2023. In this Frankly, Nate follows up the recent Reality Roundtable on poverty with a wider perspective on the different types of "wealth" in our society that go b...

19 Jan 2024 · 14min
