#228 – Eileen Yam on how we're completely out of touch with what the public thinks about AI
80,000 Hours Podcast · 20 November 2025


If you work in AI, you probably think it’s going to boost productivity, create wealth, advance science, and improve your life. If you’re a member of the American public, you probably strongly disagree.

In three major reports released over the last year, the Pew Research Center surveyed over 5,000 US adults and 1,000 AI experts. They found that the general public holds many beliefs about AI that are virtually nonexistent in Silicon Valley, and that the tech industry’s pitch about the likely benefits of their work has thus far failed to convince many people at all. AI is, in fact, a rare topic that mostly unites Americans — regardless of politics, race, age, or gender.

Links to learn more, video, and full transcript: https://80k.info/ey

Today’s guest, Eileen Yam, director of science and society research at Pew, walks us through some of the eye-watering gaps in perception:

  • Jobs: 73% of AI experts see a positive impact on how people do their jobs. Only 23% of the public agrees.
  • Productivity: 74% of experts say AI is very likely to make humans more productive. Just 17% of the public agrees.
  • Personal benefit: 76% of experts expect AI to benefit them personally. Only 24% of the public expects the same (while 43% expect it to harm them).
  • Happiness: 22% of experts think AI is very likely to make humans happier, which is already surprisingly low — but a mere 6% of the public expects the same.

For the experts building these systems, the vision is one of human empowerment and efficiency. But outside the Silicon Valley bubble, the mood is one of anxiety — not only about Terminator scenarios, but about AI eroding their children’s curiosity, problem-solving, critical thinking, and creativity, while they themselves are replaced and devalued:

  • 53% of Americans say AI will worsen people’s ability to think creatively.
  • 50% believe it will hurt our ability to form meaningful relationships.
  • 38% think it will worsen our ability to solve problems.

Open-ended responses to the surveys reveal a poignant fear: that by offloading cognitive work to algorithms, we are reshaping childhood so profoundly that we can no longer predict what kind of adults it will produce. As one teacher quoted in the study noted, we risk raising a generation that relies on AI so much it never “grows its own curiosity, problem-solving skills, critical thinking skills and creativity.”

If the people building the future are this out of sync with the people living in it, the impending “techlash” might be more severe than industry anticipates.

In this episode, Eileen and host Rob Wiblin break down the data on where these groups disagree, where they actually align (nobody trusts the government or companies to regulate this), and why the “digital natives” might actually be the most worried of all.

This episode was recorded on September 25, 2025.

Chapters:

  • Cold open (00:00:00)
  • Who’s Eileen Yam? (00:01:30)
  • Is it premature to care what the public says about AI? (00:02:26)
  • The top few feelings the US public has about AI (00:06:34)
  • The public and AI insiders disagree enormously on some things (00:16:25)
  • Fear #1: Erosion of human abilities and connections (00:20:03)
  • Fear #2: Loss of control of AI (00:28:50)
  • Americans don't want AI in their personal lives (00:33:13)
  • AI at work and job loss (00:40:56)
  • Does the public always feel this way about new things? (00:44:52)
  • The public doesn't think AI is overhyped (00:51:49)
  • The AI industry seems on a collision course with the public (00:58:16)
  • Is the survey methodology good? (01:05:26)
  • Where people are positive about AI: saving time, policing, and science (01:12:51)
  • Biggest gaps between experts and the general public, and where they agree (01:18:44)
  • Demographic groups agree to a surprising degree (01:28:58)
  • Eileen’s favourite bits of the survey and what Pew will ask next (01:37:29)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

