#16 - Michelle Hutchinson on global priorities research & shaping the ideas of intellectuals

In the 1940s and 50s, neoliberalism was a fringe movement within economics. But by the 1980s it had become a dominant school of thought in public policy, and had achieved major policy changes across the English-speaking world. How did this happen?

In part because its leaders invested heavily in training academics to study and develop their ideas. Whether you think neoliberalism was good or bad, its history demonstrates the impact that building a strong intellectual base within universities can have.

Michelle Hutchinson is working to get a different set of ideas a hearing in academia by setting up the Global Priorities Institute (GPI) at Oxford University. The Institute, which is currently hiring for three roles, aims to bring together outstanding philosophers and economists to research how best to improve the world. The hope is that it will spark widespread academic engagement with effective altruist thinking, which will hone the ideas and help them gradually percolate into society more broadly.

Link to the full blog post about this episode including transcript and links to learn more

Its research agenda includes questions like:

* How do we compare the good done by focussing on really different types of causes?
* How does saving lives actually affect the world relative to other things we could do?
* What are the biggest wins governments should be focussed on getting?

Before moving to GPI, Michelle was the Executive Director of Giving What We Can and a founding figure of the effective altruism movement. She has a PhD in Applied Ethics from Oxford on prioritisation and global health.

We discuss:

* What is global priorities research and why does it matter?
* How is effective altruism seen in academia? Is it important to convince academics of the value of your work, or is it OK to ignore them?
* Operating inside a university is quite expensive, so is it even worth doing? Who can pay for this kind of thing?
* How hard is it to do something innovative inside a university? How serious are the administrative and other barriers?
* Is it harder to fundraise for a new institute, or hire the right people?
* Have other social movements benefitted from having a prominent academic arm?
* How can people prepare themselves to get research roles at a place like GPI?
* Many people want to have roles doing this kind of research. How many are actually cut out for it? What should those who aren’t do instead?
* What are the odds of the Institute’s work having an effect on the real world?

Get free, one-on-one career advice

We’ve helped hundreds of people compare their options, get introductions, and find high-impact jobs. If you want to work on global priorities research or other important questions in academia, find out if our coaching can help you.
