#30 - Eva Vivalt on how little social science findings generalize from one study to another

If we have a study on the impact of a social program in a particular place and time, how confident can we be that we’ll get a similar result if we study the same program again somewhere else?

Dr Eva Vivalt is a lecturer in the Research School of Economics at the Australian National University. She compiled a huge database of impact evaluations in global development - including 15,024 estimates from 635 papers across 20 types of intervention - to help answer this question.

Her finding: not confident at all.

The typical study result differs from the average effect found in similar studies so far by almost 100%. That is to say, if existing studies of a particular education program find that on average it improves test scores by 10 points, the next result is as likely to be negative or above 20 points as it is to fall between 0 and 20 points.
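
To make that metric concrete, here is a minimal Python sketch of one way to measure this kind of deviation. This is an illustration only, not Vivalt's actual estimator, and the function name and effect sizes below are hypothetical:

```python
# Illustrative only: for each study, how far does its effect estimate fall
# from the average of the *other* studies of the same intervention,
# as a share of that average?

def absolute_relative_deviations(effects: list[float]) -> list[float]:
    """Leave-one-out deviation |x - mean(rest)| / |mean(rest)| for each estimate."""
    deviations = []
    for i, x in enumerate(effects):
        rest = effects[:i] + effects[i + 1:]
        mean_rest = sum(rest) / len(rest)  # assumes mean_rest != 0
        deviations.append(abs(x - mean_rest) / abs(mean_rest))
    return deviations

# Hypothetical effect sizes (test-score points) for one education program.
effects = [10.0, -2.0, 22.0, 8.0, 14.0]
print([f"{d:.0%}" for d in absolute_relative_deviations(effects)])
# -> ['5%', '115%', '193%', '27%', '47%']: widely scattered, so the
#    studies run so far tell you little about what the next one will find.
```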

She also observed that results from smaller studies done with an NGO - often pilot studies - were more likely to look promising. But when governments tried to implement scaled-up versions of those programs, their performance would drop considerably.

For researchers hoping to figure out what works and then take those programs global, these failures of generalizability and ‘external validity’ should be disconcerting.

Is ‘evidence-based development’ writing a cheque its methodology can’t cash? Should this make us invest less in empirical research, or more to get actually reliable results?

Or as some critics say, is interest in impact evaluation distracting us from more important issues, like national or macroeconomic reforms that can’t be easily trialled?

We discuss this as well as Eva’s other research, including Y Combinator’s basic income study, on which she is a principal investigator.

A full transcript, links to related papers, and highlights from the conversation are available on our website.

Links mentioned at the start of the show:
* 80,000 Hours Job Board
* 2018 Effective Altruism Survey

**Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type *80,000 Hours* into your podcasting app.**

Questions include:

* What is the YC basic income study looking at, and what motivates it?
* How do we get people to accept clean meat?
* How much can we generalize from impact evaluations?
* How much can we generalize from studies in development economics?
* Should we be running more or fewer studies?
* Do most social programs work or not?
* The academic incentives around data aggregation
* How much can impact evaluations inform policy decisions?
* How often do people change their minds?
* Do policy makers update too much or too little in the real world?
* How good or bad are the predictions of experts? How does that change when looking at individuals versus the average of a group?
* How often should we believe positive results?
* What’s the state of development economics?
* Eva’s thoughts on our article on social interventions
* How much can we really learn from being empirical?
* How much should we really value RCTs?
* Is an Economics PhD overrated or underrated?


The 80,000 Hours Podcast is produced by Keiran Harris.
