Podme
#147 – Spencer Greenberg on stopping valueless papers from getting into top journals

02:38:08 · 2023-03-24

About the episode

Can you trust the things you read in published scientific research? Not really. About 40% of experiments in top social science journals don't get the same result if the experiments are repeated.

Two key reasons are 'p-hacking' and 'publication bias'. P-hacking is when researchers run a lot of slightly different statistical tests until they find a way to make findings appear statistically significant when they're actually not, a problem first discussed over 50 years ago. And because journals are more likely to publish positive than negative results, you might be reading about the one time an experiment worked, while the 10 times it was run and got a 'null result' never saw the light of day. The resulting phenomenon of publication bias is one we've understood for 60 years.

Today's repeat guest, social scientist and entrepreneur Spencer Greenberg, has followed these issues closely for years.

Links to learn more, summary and full transcript.

He recently checked whether p-values, an indicator of how likely a result was to occur by pure chance, could tell us how likely an outcome would be to recur if an experiment were repeated. From his sample of 325 replications of psychology studies, the answer seemed to be yes. According to Spencer, "when the original study's p-value was less than 0.01 about 72% replicated — not bad. On the other hand, when the p-value is greater than 0.01, only about 48% replicated. A pretty big difference."

To do his bit to help get these numbers up, Spencer has launched an effort to repeat almost every social science experiment published in the journals Nature and Science, and see if they find the same results.

But while progress is being made on some fronts, Spencer thinks there are other serious problems with published research that aren't yet fully appreciated. One of these Spencer calls 'importance hacking': passing off obvious or unimportant results as surprising and meaningful.
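The p-hacking mechanism described above can be illustrated with a toy simulation (not from the episode; the setup, sample sizes, and test count are illustrative assumptions): both groups are drawn from the same distribution, so every 'significant' result is a false positive, yet trying many tests and reporting only the best one makes such results common.

```python
import random
import math

random.seed(0)

def two_sample_p(a, b):
    # Two-sided p-value for a difference in means, using a normal
    # approximation to Welch's t-test (reasonable for n >= 30).
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))
    z = abs(ma - mb) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def study(n_tests):
    # Both groups come from the SAME distribution, so the null
    # hypothesis is true and any "significant" p-value is spurious.
    pvals = []
    for _ in range(n_tests):
        a = [random.gauss(0, 1) for _ in range(30)]
        b = [random.gauss(0, 1) for _ in range(30)]
        pvals.append(two_sample_p(a, b))
    return min(pvals)  # the p-hacker reports only the best-looking test

trials = 2000
for n_tests in (1, 20):
    hits = sum(study(n_tests) < 0.05 for _ in range(trials))
    print(f"{n_tests:2d} test(s): {hits / trials:.1%} of null studies look 'significant'")
```

With a single honest test, roughly 5% of null studies clear the p < 0.05 bar by definition; with 20 tries and selective reporting, the false-positive rate climbs far higher, which is exactly why a reported p-value alone can't be taken at face value.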
Spencer suspects that importance hacking of this kind causes a similar amount of damage as the issues mentioned above, like p-hacking and publication bias, but is much less discussed. His replication project tries to identify importance hacking by comparing how a paper’s findings are described in the abstract to what the experiment actually showed. But the cat-and-mouse game between academics and journal reviewers is fierce, and it's far from easy to stop people exaggerating the importance of their work.

In this wide-ranging conversation, Rob and Spencer discuss the above as well as:

  • When you should and shouldn't use intuition to make decisions.
  • How to properly model why some people succeed more than others.
  • The difference between “Soldier Altruists” and “Scout Altruists.”
  • A paper that tested dozens of methods for forming the habit of going to the gym, why Spencer thinks it was presented in a very misleading way, and what it really found.
  • Whether a 15-minute intervention could make people more likely to sustain a new habit two months later.
  • The most common way for groups with good intentions to turn bad and cause harm.
  • And Spencer's approach to a fulfilling life and doing good, which he calls “Valuism.”

Here are two flashcard decks that might make it easier to fully integrate the most important ideas they talk about:

  • The first covers 18 core concepts from the episode.
  • The second includes 16 definitions of unusual terms.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:16)
  • Social science reform (00:08:46)
  • Importance hacking (00:18:23)
  • How often papers replicate with different p-values (00:43:31)
  • The Transparent Replications project (00:48:17)
  • How do we predict high levels of success? (00:55:26)
  • Soldier Altruists vs. Scout Altruists (01:08:18)
  • The Clearer Thinking podcast (01:16:27)
  • Creating habits more reliably (01:18:16)
  • Behaviour change is incredibly hard (01:32:27)
  • The FIRE Framework (01:46:21)
  • How ideology eats itself (01:54:56)
  • Valuism (02:08:31)
  • “I dropped the whip” (02:35:06)
  • Rob’s outro (02:36:40)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Milo McGuire
Transcriptions: Katy Moore

Latest episodes (80,000 Hours Podcast)

  • #202 – Venki Ramakrishnan on the cutting edge of anti-ageing science (2024-09-19, 2h 20min)
  • #201 – Ken Goldberg on why your robot butler isn’t here yet (2024-09-13, 2h 1min)
  • #200 – Ezra Karger on what superforecasters and experts think about existential risks (2024-09-04, 2h 49min)
  • #199 – Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy (2024-08-29, 1h 12min)
  • #198 – Meghan Barrett on challenging our assumptions about insects (2024-08-26, 3h 48min)
  • #197 – Nick Joseph on whether Anthropic's AI safety policy is up to the task (2024-08-22, 2h 29min)
  • #196 – Jonathan Birch on the edge cases of sentience and why they matter (2024-08-15, 2h 1min)
  • #195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them (2024-08-01, 2h 8min)
  • #194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government (2024-07-26, 3h 4min)
  • #193 – Sihao Huang on the risk that US–China AI competition leads to war (2024-07-18, 2h 23min)