#45 - Tyler Cowen's case for maximising econ growth, stabilising civilization & thinking long-term

I've probably spent more time reading Tyler Cowen - Professor of Economics at George Mason University - than any other author. Indeed it's his incredibly popular blog Marginal Revolution that prompted me to study economics in the first place. Having spent thousands of hours absorbing Tyler's work, it was a pleasure to be able to question him about his latest book and personal manifesto: Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals.

Tyler makes the case that, despite what you may have heard, we *can* make rational judgments about what is best for society as a whole. He argues:

1. Our top moral priority should be preserving and improving humanity's long-term future
2. The way to do that is to maximise the rate of sustainable economic growth
3. We should respect human rights and follow general principles while doing so

We discuss why Tyler believes all these things, and I push back where I disagree. In particular: is higher economic growth actually an effective way to safeguard humanity's future, or should our focus really be elsewhere?

In the process we touch on many of moral philosophy's most pressing questions: Should we discount the future? How should we aggregate welfare across people? Should we follow rules or evaluate every situation individually? How should we deal with the massive uncertainty about the effects of our actions? And should we trust common sense morality or follow structured theories?

Links to learn more, summary and full transcript.

After covering the book, the conversation ranges far and wide. Will we leave the galaxy, and is it a tragedy if we don't? Is a multi-polar world less stable? Will humanity ever help wild animals? Why do we both agree that Kant and Rawls are overrated?

Today's interview is released on both the 80,000 Hours Podcast and Tyler's own show: Conversations with Tyler.

Tyler may have had more influence on me than any other writer but this conversation is richer for our remaining disagreements. If the above isn't enough to tempt you to listen, we also look at:

* Why couldn’t future technology make human life a hundred or a thousand times better than it is for people today?
* Why focus on increasing the rate of economic growth rather than making sure that it doesn’t go to zero?
* Why shouldn’t we dedicate substantial time to the successful introduction of genetic engineering?
* Why should we completely abstain from alcohol and make it a social norm?
* Why is Tyler so pessimistic about space? Is it likely that humans will go extinct before we manage to escape the galaxy?
* Is improving coordination and international cooperation a major priority?
* Why does Tyler think institutions are keeping up with technology?
* Given that our actions seem to have very large and morally significant effects in the long run, are our moral obligations very onerous?
* Can art be intrinsically valuable?
* What does Tyler think Derek Parfit was most wrong about, and what he was most right about that’s unappreciated today?

Get this episode by subscribing: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

Episodes (326)

#157 – Ezra Klein on existential risk from AI and what DC could do about it

In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI t...

24 July 2023 · 1h 18min

#156 – Markus Anderljung on how to regulate cutting-edge AI models

"At the front of the pack we have these frontier AI developers, and we want them to identify particularly dangerous models ahead of time. Once those mines have been discovered, and the frontier develo...

10 July 2023 · 2h 6min

Bonus: The Worst Ideas in the History of the World

Today’s bonus release is a pilot for a new podcast called ‘The Worst Ideas in the History of the World’, created by Keiran Harris — producer of the 80,000 Hours Podcast. If you have strong opinions abo...

30 June 2023 · 35min

#155 – Lennart Heim on the compute governance era and what has to come after

As AI advances ever more quickly, concerns about potential misuse of highly capable models are growing. From hostile foreign governments and terrorists to reckless entrepreneurs, the threat of AI fall...

22 June 2023 · 3h 12min

#154 - Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters

Can there be a more exciting and strange place to work today than a leading AI lab? Your CEO has said they're worried your research could cause human extinction. The government is setting up meetings ...

9 June 2023 · 3h 9min

#153 – Elie Hassenfeld on 2 big picture critiques of GiveWell's approach, and 6 lessons from their recent work

GiveWell is one of the world's best-known charity evaluators, with the goal of "searching for the charities that save or improve lives the most per dollar." It mostly recommends projects that help the...

2 June 2023 · 2h 56min

#152 – Joe Carlsmith on navigating serious philosophical confusion

What is the nature of the universe? How do we make decisions correctly? What differentiates right actions from wrong ones? Such fundamental questions have been the subject of philosophical and theologi...

19 May 2023 · 3h 26min

#151 – Ajeya Cotra on accidentally teaching AI models to deceive us

Imagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, g...

12 May 2023 · 2h 49min
