#121 – Matthew Yglesias on avoiding the pundit's fallacy and how much military intervention can be used for good

If you read polls saying that the public supports a carbon tax, should you believe them? According to today's guest — journalist and blogger Matthew Yglesias — it's complicated, but probably not.

Links to learn more, summary and full transcript.

Interpreting opinion polls about specific policies can be a challenge, and it's easy to trick yourself into believing what you want to believe. Matthew invented a term for a particular type of self-delusion called the 'pundit's fallacy': "the belief that what a politician needs to do to improve his or her political standing is do what the pundit wants substantively."

If we want to advocate not just for ideas that would be good if implemented, but ideas that have a real shot at getting implemented, we should do our best to understand public opinion as it really is.

The least trustworthy polls are published by think tanks and advocacy campaigns that would love to make their preferred policy seem popular. These surveys can be designed to nudge respondents toward the desired result — for example, by tinkering with question wording and order or shifting how participants are sampled. And if a poll produces the 'wrong answer', there's no need to publish it at all, so the 'publication bias' with these sorts of surveys is large.

Matthew says polling run by firms or researchers without any particular desired outcome can be taken more seriously. But the results we ought to give by far the most weight are those from professional political campaigns trying to win votes and get their candidate elected, because they have both the expertise to do polling properly and a very strong incentive to understand what the public really thinks.

The problem is, campaigns run these expensive surveys because they think that having exclusive access to reliable information will give them a competitive advantage. As a result, they often don’t publish the findings, and instead use them to shape what their candidate says and does.

Journalists like Matthew can call up their contacts and get a summary from people they trust. But since they can't publish the polling itself, they're unlikely to persuade sceptics.

When assessing which ideas are winners, one thing Matthew would like everyone to keep in mind is that politics is competitive, and politicians aren't (all) stupid. If advocating for your pet idea were a great way to win elections, someone would try it and win, and others would copy.

Another check, more reliable than polling, is real-world experience. For example, voters may say they like a carbon tax on the phone — but the very liberal Washington State roundly rejected one in ballot initiatives in 2016 and 2018.

Of course you may want to advocate for what you think is best, even if it wouldn't pass a popular vote in the face of organised opposition. The public's ideas can shift, sometimes dramatically and unexpectedly. But at least you'll be going into the debate with your eyes wide open.

In this extensive conversation, host Rob Wiblin and Matthew also cover:

• How should a humanitarian think about US military interventions overseas?
• From an 'effective altruist' perspective, was the US wrong to withdraw from Afghanistan?
• Has NATO ultimately screwed over Ukrainians by misrepresenting the extent of its commitment to their independence?
• What philosopher does Matthew think is underrated?
• How big a risk is ubiquitous surveillance?
• What does Matthew think about wild animal suffering, anti-ageing research, and autonomous weapons?
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:05)
  • Autonomous weapons (00:04:42)
  • India and the US (00:07:25)
  • Evidence-backed interventions for reducing the harm done by racial prejudices (00:08:38)
  • Factory farming (00:10:44)
  • Wild animal suffering (00:12:41)
  • Vaccine development (00:15:20)
  • Anti-ageing research (00:16:27)
  • Should the US develop a semiconductor industry? (00:19:13)
  • What we should do about various existential risks (00:21:58)
  • What governments should do to stop the next pandemic (00:24:00)
  • Comets and supervolcanoes (00:31:30)
  • Nuclear weapons (00:34:25)
  • Advances in AI (00:35:46)
  • Surveillance systems (00:38:45)
  • How Matt thinks about public opinion research (00:43:22)
  • Issues with trusting public opinion polls (00:51:18)
  • The influence of prior beliefs (01:05:53)
  • Loss aversion (01:12:19)
  • Matt's take on military adventurism (01:18:54)
  • How military intervention looks as a humanitarian intervention (01:29:12)
  • Where Matt does favour military intervention (01:38:27)
  • Why smart people disagree (01:44:24)
  • The case for NATO taking an active stance in Ukraine (01:57:34)
  • One Billion Americans (02:08:02)
  • Matt’s views on the effective altruism community (02:11:46)
  • Matt’s views on the longtermist community (02:19:48)
  • Matt’s struggle to become more of a rationalist (02:22:42)
  • Megaprojects (02:26:20)
  • The impact of Matt’s work (02:32:28)
  • Matt’s philosophical views (02:47:58)
  • The value of formal education (02:56:59)
  • Worst thing Matt’s ever advocated for (03:02:25)
  • Rob’s outro (03:03:22)


Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore
