#124 Classic episode – Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions

If someone said a global health and development programme was sustainable, participatory, and holistic, you'd have to guess that they were saying something positive. But according to today's guest Karen Levy — deworming pioneer and veteran of Innovations for Poverty Action, Evidence Action, and Y Combinator — each of those three concepts has become so fashionable that they're at risk of being seriously overrated and applied where they don't belong.

Rebroadcast: this episode was originally released in March 2022.

Links to learn more, highlights, and full transcript.

Such concepts might even cause harm — trying to make a project embody all three is as likely to ruin it as help it flourish.

First, what do people mean by 'sustainability'? Usually they mean something like the programme will eventually be able to continue without needing further financial support from the donor. But how is that possible? Governments, nonprofits, and aid agencies aim to provide health services, education, infrastructure, financial services, and so on — and all of these require ongoing funding to pay for materials and staff to keep them running.

Given that someone needs to keep paying, Karen tells us that in practice, 'sustainability' is usually a euphemism for the programme at some point being passed on to someone else to fund — usually the national government. And while that can be fine, the national government of Kenya spends only $400 per person to provide all government services combined — just 2% of what the US spends on each resident. Incredibly tight budgets like that are typical of low-income countries.

'Participatory' also sounds nice, and inasmuch as it means leaders are accountable to the people they're trying to help, it probably is. But Karen tells us that in the field, ‘participatory’ usually means that recipients are expected to be involved in planning and delivering services themselves.

While that might be suitable in some situations, it's hardly something people in rich countries always want for themselves. Ideally we want government healthcare and education to be high quality without us having to attend meetings to keep it on track — and people in poor countries have as many or more pressures on their time. While accountability is desirable, an expectation of participation can be as much a burden as a blessing.

Finally, making a programme 'holistic' could be smart, but as Karen lays out, it also has some major downsides. For one, it means you're doing lots of things at once, which makes it hard to tell which parts of the project are making the biggest difference relative to their cost. For another, when you have a lot of goals at once, it's hard to tell whether you're making progress, or to really focus on making any one thing go extremely well. And finally, holistic programmes can be impractically expensive — Karen tells the story of a wonderful 'holistic school health' programme that, if continued, was going to cost 3.5 times the school's entire budget.

In this in-depth conversation, originally released in March 2022, Karen Levy and host Rob Wiblin chat about the above, as well as:

  • Why it pays to figure out how you'll interpret the results of an experiment ahead of time
  • The trouble with misaligned incentives within the development industry
  • Projects that don't deliver value for money and should be scaled down
  • How Karen accidentally became a leading figure in the push to deworm tens of millions of schoolchildren
  • Logistical challenges in reaching huge numbers of people with essential services
  • Lessons from Karen's decades-long career
  • And much more

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:01:33)
  • The interview begins (00:02:21)
  • Funding for effective altruist–mentality development projects (00:04:59)
  • Pre-policy plans (00:08:36)
  • ‘Sustainability’, and other myths in typical international development practice (00:21:37)
  • ‘Participatoriness’ (00:36:20)
  • ‘Holistic approaches’ (00:40:20)
  • How the development industry sees evidence-based development (00:51:31)
  • Initiatives in Africa that should be significantly curtailed (00:56:30)
  • Misaligned incentives within the development industry (01:05:46)
  • Deworming: the early days (01:21:09)
  • The problem of deworming (01:34:27)
  • Deworm the World (01:45:43)
  • Where the majority of the work was happening (01:55:38)
  • Logistical issues (02:20:41)
  • The importance of a theory of change (02:31:46)
  • Ways that things have changed since 2006 (02:36:07)
  • Academic work vs policy work (02:38:33)
  • Fit for Purpose (02:43:40)
  • Living in Kenya (03:00:32)
  • Underrated life advice (03:05:29)
  • Rob’s outro (03:09:18)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Ryan Kessler
Transcriptions: Katy Moore

Episodes (314)

#142 – John McWhorter on key lessons from linguistics, the virtue of creoles, and language extinction

John McWhorter is a linguistics professor at Columbia University specialising in research on creole languages. He's also a content-producing machine, never afraid to give his frank opinion on anything and everything. On top of his academic work he's also written 22 books, produced five online university courses, hosts one and a half podcasts, and now writes a regular New York Times op-ed column.

Links to learn more, summary, and full transcript. Video version of the interview. Lecture: Why the world looks the same in any language.

Our show is mostly about the world's most pressing problems and what you can do to solve them. But what's the point of hosting a podcast if you can't occasionally just talk about something fascinating with someone whose work you appreciate? So today, just before the holidays, we're sharing this interview with John about language and linguistics — including what we think are some of the most important things everyone ought to know about those topics. We ask him:

  • Can you communicate faster in some languages than others, or is there some constraint that prevents that?
  • Does learning a second or third language make you smarter or not?
  • Can a language decay and get worse at communicating what people want to say?
  • If children aren't taught a language, how many generations does it take them to invent a fully fledged one of their own?
  • Did Shakespeare write in a foreign language, and if so, should we translate his plays?
  • How much does language really shape the way we think?
  • Are creoles the best languages in the world — languages that ideally we would all speak?
  • What would be the optimal number of languages globally?
  • Does trying to save dying languages do their speakers a favour, or is it more of an imposition?
  • Should we bother to teach foreign languages in UK and US schools?
  • Is it possible to save the important cultural aspects embedded in a dying language without saving the language itself?
  • Will AI models speak a language of their own in the future, one that humans can't understand but which better serves the tradeoffs AI models need to make?

We then put some of these questions to ChatGPT itself, asking it to play the role of a linguistics professor at Columbia University.

We've also added John's talk "Why the World Looks the Same in Any Language" to the end of this episode. So stick around after the credits! And if you'd rather see Rob and John's facial expressions or beautiful high cheekbones while listening to this conversation, you can watch the video of the full conversation here.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Video editing: Ryan Kessler
Transcriptions: Katy Moore

20 December 2022 · 1h 47min

#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well

Large language models like GPT-3, and now ChatGPT, are neural networks trained on a large fraction of all text available on the internet to do one thing: predict the next word in a passage. This simple technique has led to something extraordinary — black boxes able to write TV scripts, explain jokes, produce satirical poetry, answer common factual questions, argue sensibly for political positions, and more. Every month their capabilities grow.

But do they really 'understand' what they're saying, or do they just give the illusion of understanding? Today's guest, Richard Ngo, thinks that in the most important sense they understand many things. Richard is a researcher at OpenAI — the company that created ChatGPT — who works to foresee where AI advances are going and develop strategies that will keep these models from 'acting out' as they become more powerful, are deployed, and are ultimately given power in society.

Links to learn more, summary and full transcript.

One way to think about 'understanding' is as a subjective experience. Whether it feels like something to be a large language model is an important question, but one we currently have no way to answer. However, as Richard explains, another way to think about 'understanding' is as a functional matter. If you really understand an idea you're able to use it to reason and draw inferences in new situations. And that kind of understanding is observable and testable.

Richard argues that language models are developing sophisticated representations of the world which can be manipulated to draw sensible conclusions — maybe not so different from what happens in the human mind. And experiments have found that, as models get more parameters and are trained on more data, these types of capabilities consistently improve. We might feel reluctant to say a computer understands something the way that we do. But if it walks like a duck and it quacks like a duck, we should consider that maybe we have a duck, or at least something sufficiently close to a duck that it doesn't matter.

In today's conversation we discuss the above, as well as:

  • Could speeding up AI development be a bad thing?
  • The balance between excitement and fear when it comes to AI advances
  • Why OpenAI focuses its efforts where it does
  • Common misconceptions about machine learning
  • How many computer chips it might require to be able to do most of the things humans do
  • How Richard understands the 'alignment problem' differently than other people
  • Why 'situational awareness' may be a key concept for understanding the behaviour of AI models
  • What work to positively shape the development of AI Richard is and isn't excited about
  • The AGI Safety Fundamentals course that Richard developed to help people learn more about this field

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Milo McGuire and Ben Cordell
Transcriptions: Katy Moore

13 December 2022 · 2h 44min

My experience with imposter syndrome — and how to (partly) overcome it (Article)

Today’s release is a reading of our article called My experience with imposter syndrome — and how to (partly) overcome it, written and narrated by Luisa Rodriguez.

If you want to check out the links, footnotes and figures in today’s article, you can find those here.

And if you like this article, you’ll probably enjoy episode #100 of this show: Having a successful career with depression, anxiety, and imposter syndrome.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app.

Producer: Keiran Harris
Audio mastering and editing for this episode: Milo McGuire

8 December 2022 · 44min

Rob's thoughts on the FTX bankruptcy

In this episode, usual host of the show Rob Wiblin gives his thoughts on the recent collapse of FTX.

Click here for an official 80,000 Hours statement. And here are links to some potentially relevant 80,000 Hours pieces:

  • Episode #24 of this show – Stefan Schubert on why it’s a bad idea to break the rules, even if it’s for a good cause
  • Is it ever OK to take a harmful job in order to do more good? An in-depth analysis
  • What are the 10 most harmful jobs?
  • Ways people trying to do good accidentally make things worse, and how to avoid them

23 November 2022 · 5min

#140 – Bear Braumoeller on the case that war isn't in decline

Is war in long-term decline? Steven Pinker's The Better Angels of Our Nature brought this previously obscure academic question to the centre of public debate, and pointed to rates of death in war to argue energetically that war is on the way out. But that idea divides war scholars and statisticians, and so Better Angels has prompted a spirited debate, with datasets and statistical analyses exchanged back and forth year after year. The lack of consensus has left a somewhat bewildered public (including host Rob Wiblin) unsure quite what to believe.

Today's guest, professor of political science Bear Braumoeller, is one of the scholars who believes we lack convincing evidence that warlikeness is in long-term decline. He collected the analysis that led him to that conclusion in his 2019 book, Only the Dead: The Persistence of War in the Modern Age.

Links to learn more, summary and full transcript.

The question is of great practical importance. The US and PRC are entering a period of renewed great power competition, with Taiwan as a potential trigger for war, and Russia is once more invading and attempting to annex the territory of its neighbours. If war has been going out of fashion since the start of the Enlightenment, we might console ourselves that however nerve-wracking these present circumstances may feel, modern culture will throw up powerful barriers to another world war. But if we're as war-prone as we ever have been, one need only inspect the record of the 20th century to recoil in horror at what might await us in the 21st.

Bear argues that the second reaction is the appropriate one. The world has gone up in flames many times through history, with roughly 0.5% of the population dying in the Napoleonic Wars, 1% in World War I, 3% in World War II, and perhaps 10% during the Mongol conquests. And with no reason to think similar catastrophes are any less likely today, complacency could lead us to sleepwalk into disaster.

He gets to this conclusion primarily by analysing the datasets of the decades-old Correlates of War project, which aspires to track all interstate conflicts and battlefield deaths since 1815. In Only the Dead, he chops up and inspects this data dozens of different ways, to test if there are any shifts over time which seem larger than what could be explained by chance variation alone. In a nutshell, Bear simply finds no general trend in either direction from 1815 through today. It seems like, as philosopher George Santayana lamented in 1922, "only the dead have seen the end of war".

In today's conversation, Bear and Rob discuss all of the above in more detail than even a usual 80,000 Hours podcast episode, as well as:

  • Why haven't modern ideas about the immorality of violence led to the decline of war, when it's such a natural thing to expect?
  • What would Bear's critics say in response to all this?
  • What do the optimists get right?
  • How does one do proper statistical tests for events that are clumped together, like war deaths?
  • Why are deaths in war so concentrated in a handful of the most extreme events?
  • Did the ideas of the Enlightenment promote nonviolence, on balance?
  • Were early states more or less violent than groups of hunter-gatherers?
  • If Bear is right, what can be done?
  • How did the 'Concert of Europe' or 'Bismarckian system' maintain peace in the 19th century?
  • Which wars are remarkable but largely unknown?

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:03:32)
  • Only the Dead (00:06:28)
  • The Enlightenment (00:16:47)
  • Democratic peace theory (00:26:22)
  • Is religion a key driver of war? (00:29:27)
  • International orders (00:33:07)
  • The Concert of Europe (00:42:15)
  • The Bismarckian system (00:53:43)
  • The current international order (00:58:16)
  • The Better Angels of Our Nature (01:17:30)
  • War datasets (01:32:03)
  • Seeing patterns in data where none exist (01:45:32)
  • Change-point analysis (01:49:33)
  • Rates of violent death throughout history (01:54:32)
  • War initiation (02:02:55)
  • Escalation (02:17:57)
  • Getting massively different results from the same data (02:28:38)
  • How worried we should be (02:34:07)
  • Most likely ways Only the Dead is wrong (02:36:25)
  • Astonishing smaller wars (02:40:39)

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

8 November 2022 · 2h 47min

#139 – Alan Hájek on puzzles and paradoxes in probability and expected value

A casino offers you a game. A coin will be tossed repeatedly. If it comes up heads on the first flip you win $2. If heads first appears on the second flip you win $4. If on the third, you win $8; the fourth, $16; and so on. How much should you be willing to pay to play?

The standard way of analysing gambling problems, ‘expected value’ — in which you multiply probabilities by the value of each outcome and then sum them up — says your expected earnings are infinite. You have a 50% chance of winning $2, for '0.5 * $2 = $1' in expected earnings. A 25% chance of winning $4, for '0.25 * $4 = $1' in expected earnings, and on and on. A never-ending series of $1s added together comes to infinity. And that's despite the fact that you know with certainty you can only ever win a finite amount!

Today's guest — philosopher Alan Hájek of the Australian National University — thinks of much of philosophy as “the demolition of common sense followed by damage control” and is an expert on paradoxes related to probability and decision-making rules like “maximise expected value.”

Links to learn more, summary and full transcript.

The problem described above, known as the St. Petersburg paradox, has been a staple of the field since the 18th century, with many proposed solutions. In the interview, Alan explains how very natural attempts to resolve the paradox — such as factoring in the low likelihood that the casino can pay out very large sums, or the fact that money becomes less and less valuable the more of it you already have — fail to work as hoped.

We might reject the setup as a hypothetical that could never exist in the real world, and therefore of mere intellectual curiosity. But Alan doesn't find that objection persuasive. If expected value fails in extreme cases, that should make us worry that something could be rotten at the heart of the standard procedure we use to make decisions in government, business, and nonprofits.

These issues regularly show up in 80,000 Hours' efforts to try to find the best ways to improve the world, as the best approach will arguably involve long-shot attempts to do very large amounts of good. Consider which is better: saving one life for sure, or three lives with 50% probability? Expected value says the second, which will probably strike you as reasonable enough. But what if we repeat this process and evaluate the chance to save nine lives with 25% probability, or 27 lives with 12.5% probability, or, after 17 more iterations, 3,486,784,401 lives with a 0.000095% chance? Expected value says this final offer is better than the others — over 3,000 times better, in fact.

Ultimately Alan leans towards the view that our best choice is to “bite the bullet” and stick with expected value, even with its sometimes counterintuitive implications. Where we want to do damage control, we're better off looking for ways our probability estimates might be wrong.

In today's conversation, Alan and Rob explore these issues and many others:

  • Simple rules of thumb for having philosophical insights
  • A key flaw that hid in Pascal's wager from the very beginning
  • Whether we have to simply ignore infinities because they mess everything up
  • What fundamentally is 'probability'?
  • Some of the many reasons 'frequentism' doesn't work as an account of probability
  • Why the standard account of counterfactuals in philosophy is deeply flawed
  • And why counterfactuals present a fatal problem for one sort of consequentialism

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:48)
  • Philosophical methodology (00:02:54)
  • Theories of probability (00:37:17)
  • Everyday Bayesianism (00:46:01)
  • Frequentism (01:04:56)
  • Ranges of probabilities (01:16:23)
  • Implications for how to live (01:21:24)
  • Expected value (01:26:58)
  • The St. Petersburg paradox (01:31:40)
  • Pascal's wager (01:49:44)
  • Using expected value in everyday life (02:03:53)
  • Counterfactuals (02:16:38)
  • Most counterfactuals are false (02:52:25)
  • Relevance to objective consequentialism (03:09:47)
  • Marker 18 (03:10:21)
  • Alan’s best conference story (03:33:37)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Ryan Kessler
Transcriptions: Katy Moore
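The expected value arithmetic above is easy to check numerically. Here is a minimal Python sketch (an illustration of the games as described, not code from the episode; the function names are ours): each term of the St. Petersburg series contributes exactly $1 of expected value, and each step of the iterated life-saving gamble multiplies the expected number of lives saved by 1.5.

```python
def st_petersburg_ev(n_flips: int) -> float:
    """Expected value of the St. Petersburg game truncated at n_flips tosses.

    P(first heads on flip k) = 0.5**k and the payout is $2**k,
    so every term contributes exactly $1 and the full series diverges.
    """
    return sum(0.5**k * 2**k for k in range(1, n_flips + 1))


def lives_gamble(steps: int) -> tuple[int, float, float]:
    """The iterated gamble: each step triples the lives saved and halves
    the probability. Returns (lives, probability, expected lives saved)."""
    lives = 3**steps
    prob = 0.5**steps
    return lives, prob, lives * prob


print(st_petersburg_ev(10))        # 10.0: $1 of expected value per flip
lives, prob, ev = lives_gamble(20)
print(lives)                       # 3486784401, the figure quoted above
print(ev)                          # about 3,325 expected lives vs 1 life for sure
```

Truncating the casino game after n flips gives an expected value of exactly $n, which is why no finite price for a ticket ever looks 'too high' under the standard rule.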

28 October 2022 · 3h 38min

Preventing an AI-related catastrophe (Article)

Today’s release is a professional reading of our new problem profile on preventing an AI-related catastrophe, written by Benjamin Hilton.

We expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). We think more work needs to be done to reduce these risks.

Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity. There have not yet been any satisfying answers to concerns about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is very neglected, and may well be tractable. We estimate that there are around 300 people worldwide working directly on this. As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well placed to contribute.

Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. If worthwhile policies are developed, we’ll need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more.

If you want to check out the links, footnotes and figures in today’s article, you can find those here.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app.

Producer: Keiran Harris
Editing and narration: Perrin Walker and Shaun Acker
Audio proofing: Katy Moore

14 October 2022 · 2h 24min

#138 – Sharon Hewitt Rawlette on why pleasure and pain are the only things that intrinsically matter

What in the world is intrinsically good — good in itself even if it has no other effects? Over the millennia, people have offered many answers: joy, justice, equality, accomplishment, loving god, wisdom, and plenty more.

The question is a classic that makes for great dorm-room philosophy discussion. But it's hardly just of academic interest. The issue of what (if anything) is intrinsically valuable bears on every action we take, whether we’re looking to improve our own lives, or to help others. The wrong answer might lead us to the wrong project and render our efforts to improve the world entirely ineffective.

Today's guest, Sharon Hewitt Rawlette — philosopher and author of The Feeling of Value: Moral Realism Grounded in Phenomenal Consciousness — wants to resuscitate an answer to this question that is as old as philosophy itself.

Links to learn more, summary, full transcript, and full version of this blog post.

That idea, in a nutshell, is that there is only one thing of true intrinsic value: positive feelings and sensations. And similarly, there is only one thing that is intrinsically of negative value: suffering, pain, and other unpleasant sensations. Lots of other things are valuable too: friendship, fairness, loyalty, integrity, wealth, patience, houses, and so on. But they are only instrumentally valuable — that is to say, they’re valuable as means to the end of ensuring that all conscious beings experience more pleasure and other positive sensations, and less suffering.

As Sharon notes, from Athens in 400 BC to Britain in 1850, the idea that only subjective experiences can be good or bad in themselves — a position known as 'philosophical hedonism' — has been one of the most enduringly popular ideas in ethics. And few will be taken aback by the notion that, all else equal, more pleasure is good and less suffering is bad. But can they really be the only intrinsically valuable things?

Over the 20th century, philosophical hedonism became increasingly controversial in the face of some seemingly very counterintuitive implications. For this reason the famous philosopher of mind Thomas Nagel called The Feeling of Value "a radical and important philosophical contribution."

In today's interview, Sharon explains the case for a theory of value grounded in subjective experiences, and why she believes the most popular counterarguments are misguided. Host Rob Wiblin and Sharon also cover:

  • The essential need to disentangle intrinsic, instrumental, and other sorts of value
  • Why Sharon’s arguments lead to hedonistic utilitarianism rather than hedonistic egoism (in which we only care about our own feelings)
  • How do people react to the 'experience machine' thought experiment when surveyed?
  • Why hedonism recommends often thinking and acting as though it were false
  • Whether it's crazy to think that relationships are only useful because of their effects on our subjective experiences
  • Whether it will ever be possible to eliminate pain, and whether doing so would be desirable
  • If we didn't have positive or negative experiences, whether that would cause us to simply never talk about goodness and badness
  • Whether the plausibility of hedonism is affected by our theory of mind
  • And plenty more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:45)
  • Metaethics (00:04:16)
  • Anti-realism (00:10:39)
  • Sharon's theory of moral realism (00:16:17)
  • The history of hedonism (00:23:11)
  • Intrinsic value vs instrumental value (00:28:49)
  • Egoistic hedonism (00:36:30)
  • Single axis of value (00:42:19)
  • Key objections to Sharon’s brand of hedonism (00:56:18)
  • The experience machine (01:06:08)
  • Robot spouses (01:22:29)
  • Most common misunderstanding of Sharon’s view (01:27:10)
  • How might a hedonist actually live (01:37:46)
  • The organ transplant case (01:53:34)
  • Counterintuitive implications of hedonistic utilitarianism (02:03:40)
  • How could we discover moral facts? (02:18:05)

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

30 September 2022 · 2h 24min
