#45 - Tyler Cowen's case for maximising econ growth, stabilising civilization & thinking long-term

I've probably spent more time reading Tyler Cowen, Professor of Economics at George Mason University, than any other author. Indeed, it was his incredibly popular blog Marginal Revolution that prompted me to study economics in the first place. Having spent thousands of hours absorbing Tyler's work, it was a pleasure to question him about his latest book and personal manifesto: Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals.

Tyler makes the case that, despite what you may have heard, we *can* make rational judgments about what is best for society as a whole. He argues:

1. Our top moral priority should be preserving and improving humanity's long-term future.
2. The way to do that is to maximise the rate of sustainable economic growth.
3. We should respect human rights and follow general principles while doing so.

We discuss why Tyler believes all these things, and I push back where I disagree. In particular: is higher economic growth actually an effective way to safeguard humanity's future, or should our focus really be elsewhere?

In the process we touch on many of moral philosophy's most pressing questions: Should we discount the future? How should we aggregate welfare across people? Should we follow rules or evaluate every situation individually? How should we deal with the massive uncertainty about the effects of our actions? And should we trust common sense morality or follow structured theories?

Links to learn more, summary and full transcript.

After covering the book, the conversation ranges far and wide. Will we leave the galaxy, and is it a tragedy if we don't? Is a multi-polar world less stable? Will humanity ever help wild animals? Why do we both agree that Kant and Rawls are overrated?

Today's interview is released on both the 80,000 Hours Podcast and Tyler's own show: Conversations with Tyler.

Tyler may have had more influence on me than any other writer, but this conversation is richer for our remaining disagreements. If the above isn't enough to tempt you to listen, we also look at:

* Why couldn’t future technology make human life a hundred or a thousand times better than it is for people today?
* Why focus on increasing the rate of economic growth rather than making sure that it doesn’t go to zero?
* Why shouldn’t we dedicate substantial time to the successful introduction of genetic engineering?
* Why should we completely abstain from alcohol and make it a social norm?
* Why is Tyler so pessimistic about space? Is it likely that humans will go extinct before we manage to escape the galaxy?
* Is improving coordination and international cooperation a major priority?
* Why does Tyler think institutions are keeping up with technology?
* Given that our actions seem to have very large and morally significant effects in the long run, are our moral obligations very onerous?
* Can art be intrinsically valuable?
* What does Tyler think Derek Parfit was most wrong about, and what was he most right about that's unappreciated today?

Get this episode by subscribing: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

Episodes (320)

#197 – Nick Joseph on whether Anthropic's AI safety policy is up to the task

The three biggest AI companies — Anthropic, OpenAI, and DeepMind — have now all released policies designed to make their AI models less likely to go rogue or cause catastrophic damage as they approach...

22 Aug 2024 · 2h 29min

#196 – Jonathan Birch on the edge cases of sentience and why they matter

"In the 1980s, it was still apparently common to perform surgery on newborn babies without anaesthetic on both sides of the Atlantic. This led to appalling cases, and to public outcry, and to campaign...

15 Aug 2024 · 2h 1min

#195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them

"Computational systems have literally millions of physical and conceptual components, and around 98% of them are embedded into your infrastructure without you ever having heard of them. And an inordin...

1 Aug 2024 · 2h 8min

#194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government

"If you’re a power that is an island and that goes by sea, then you’re more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trad...

26 Jul 2024 · 3h 4min

#193 – Sihao Huang on navigating the geopolitics of US–China AI competition

"You don’t necessarily need world-leading compute to create highly risky AI systems. The biggest biological design tools right now, like AlphaFold’s, are orders of magnitude smaller in terms of comput...

18 Jul 2024 · 2h 23min

#192 – Annie Jacobsen on what would happen if North Korea launched a nuclear weapon at the US

"Ring one: total annihilation; no cellular life remains. Ring two, another three-mile diameter out: everything is ablaze. Ring three, another three or five miles out on every side: third-degree burns ...

12 Jul 2024 · 1h 54min

#191 (Part 2) – Carl Shulman on government and society after AGI

This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order! If we develop artificia...

5 Jul 2024 · 2h 20min

#191 (Part 1) – Carl Shulman on the economy and national security after AGI

This is the first part of our marathon interview with Carl Shulman. The second episode is on government and society after AGI. You can listen to them in either order! The human brain does what it does ...

27 Jun 2024 · 4h 14min
