#21 - Holden Karnofsky on times philanthropy transformed the world & Open Phil’s plan to do the same
The Green Revolution averted mass famine during the 20th century. The contraceptive pill gave women unprecedented freedom in planning their own lives. Both are widely recognised as scientific breakthroughs that transformed the world. But few know that those breakthroughs only happened when they did because of a philanthropist willing to take a risky bet on a new idea.

Today’s guest, Holden Karnofsky, has been looking for philanthropy’s biggest success stories because he’s Executive Director of the Open Philanthropy Project, which gives away over $100 million per year — and he’s hungry for big wins.

Full transcript, related links, job opportunities and summary of the interview.

In the 1940s, poverty reduction overseas was not a big priority for most funders. But the Rockefeller Foundation decided to fund agricultural scientists to breed much better crops for the developing world — massively increasing its food production.

In the 1950s, society was a long way from demanding effective birth control. Activist Margaret Sanger had the idea for the pill, and endocrinologist Gregory Pincus the research team – but they couldn’t proceed without a $40,000 research check from biologist and women’s rights activist Katherine McCormick.

In both cases, it was philanthropists rather than governments that led the way.

The reason, according to Holden, is that while governments have enormous resources, they’re constrained to funding reasonably sure bets. Philanthropists can transform the world by filling the gaps government leaves — but to seize that opportunity they have to hire outstanding researchers, think long term, and be willing to fail most of the time.

Holden knows more about this type of giving than almost anyone. As founder of GiveWell and then the Open Philanthropy Project, he has been working feverishly since 2007 to find outstanding giving opportunities. This practical experience has made him one of the most influential figures in the development of the school of thought that has come to be known as effective altruism.

We’ve recorded this episode now because [the Open Philanthropy Project is hiring](https://www.openphilanthropy.org/get-involved/jobs) for a large number of positions, which we think would allow the right person to have a very large positive influence on the world. They’re looking for a large number of entry-level researchers to train up, three specialist researchers into potential risks from advanced artificial intelligence, as well as a Director of Operations, Operations Associate, and General Counsel.

But the conversation goes well beyond specifics about these jobs. We also discuss:

* How they picked the problems they focus on, and how those will change over time
* What Holden would do differently if he were starting Open Phil again today
* What we can learn from the history of philanthropy
* What makes a good Program Officer
* The importance of not letting hype get ahead of the science in an emerging field
* The importance of honest feedback for philanthropists, and the difficulty of getting it
* How they decide what’s above the bar to fund, and when it’s better to hold onto the money
* How philanthropic funding can most influence politics
* What Holden would say to a new billionaire who wanted to give away most of their wealth
* Why Open Phil is building a research field around the safe development of artificial intelligence
* Why they invested in OpenAI
* Academia’s faulty approach to answering practical questions
* Which potential utopias people most want, according to opinion polls

Keiran Harris helped produce today’s episode.