#193 – Sihao Huang on navigating the geopolitics of US–China AI competition
80,000 Hours Podcast · 18 Jul 2024

"You don’t necessarily need world-leading compute to create highly risky AI systems. The biggest biological design tools right now, like AlphaFold’s, are orders of magnitude smaller in terms of compute requirements than the frontier large language models. And China has the compute to train these systems. And if you’re, for instance, building a cyber agent or something that conducts cyberattacks, perhaps you also don’t need the general reasoning or mathematical ability of a large language model. You train on a much smaller subset of data. You fine-tune it on a smaller subset of data. And those systems — one, if China intentionally misuses them, and two, if they get proliferated because China just releases them as open source, or China does not have as comprehensive AI regulations — this could cause a lot of harm in the world." —Sihao Huang

In today’s episode, host Luisa Rodriguez speaks to Sihao Huang about his work on AI governance and tech policy in China, what’s happening on the ground in China in AI development and regulation, and the importance of US–China cooperation on AI governance.

Links to learn more, highlights, video, and full transcript.

They cover:

  • Whether the US and China are in an AI race, and the global implications if they are.
  • The state of the art of AI in China.
  • China’s response to American export controls, and whether China is on track to indigenise its semiconductor supply chain.
  • How China’s current AI regulations try to maintain a delicate balance between fostering innovation and keeping strict information control over the Chinese people.
  • Whether China’s extensive AI regulations signal real commitment to safety or just censorship — and how AI is already used in China for surveillance and authoritarian control.
  • How advancements in AI could reshape global power dynamics, and Sihao’s vision of international cooperation to manage this responsibly.
  • And plenty more.

Chapters:

  • Cold open (00:00:00)
  • Luisa's intro (00:01:02)
  • The interview begins (00:02:06)
  • Is China in an AI race with the West? (00:03:20)
  • How advanced is Chinese AI? (00:15:21)
  • Bottlenecks in Chinese AI development (00:22:30)
  • China and AI risks (00:27:41)
  • Information control and censorship (00:31:32)
  • AI safety research in China (00:36:31)
  • Could China be a source of catastrophic AI risk? (00:41:58)
  • AI enabling human rights abuses and undermining democracy (00:50:10)
  • China’s semiconductor industry (00:59:47)
  • China’s domestic AI governance landscape (01:29:22)
  • China’s international AI governance strategy (01:49:56)
  • Coordination (01:53:56)
  • Track two dialogues (02:03:04)
  • Misunderstandings Western actors have about Chinese approaches (02:07:34)
  • Complexity thinking (02:14:40)
  • Sihao’s pet bacteria hobby (02:20:34)
  • Luisa's outro (02:22:47)


Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Episodes (320)

OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)

Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and k...

11 Nov 2025 · 1h 56min

#227 – Helen Toner on the geopolitics of AGI in China and the Middle East

With the US racing to develop AGI and superintelligence ahead of China, you might expect the two countries to be negotiating how they’ll deploy AI, including in the military, without coming to blows. ...

5 Nov 2025 · 2h 20min

#226 – Holden Karnofsky on unexploited opportunities to make AI safer — and all his AGI takes

For years, working on AI safety usually meant theorising about the ‘alignment problem’ or trying to convince other people to give a damn. If you could find any way to help, the work was frustrating an...

30 Oct 2025 · 4h 30min

#225 – Daniel Kokotajlo on what a hyperspeed robot economy might look like

When Daniel Kokotajlo talks to security experts at major AI labs, they tell him something chilling: “Of course we’re probably penetrated by the CCP already, and if they really wanted something, they c...

27 Oct 2025 · 2h 12min

#224 – There's a cheap and low-tech way to save humanity from any engineered disease | Andrew Snyder-Beattie

Conventional wisdom is that safeguarding humanity from the worst biological risks — microbes optimised to kill as many as possible — is difficult bordering on impossible, making bioweapons humanity’s ...

2 Oct 2025 · 2h 31min

Inside the Biden admin’s AI policy approach | Jake Sullivan, Biden’s NSA | via The Cognitive Revolution

Jake Sullivan was the US National Security Advisor from 2021-2025. He joined our friends on The Cognitive Revolution podcast in August to discuss AI as a critical national security issue. We thought i...

26 Sep 2025 · 1h 5min

#223 – Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)

At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret?...

15 Sep 2025 · 1h 46min

#222 – Can we tell if an AI is loyal by reading its mind? DeepMind's Neel Nanda (part 1)

We don’t know how AIs think or why they do what they do. Or at least, we don’t know much. That fact is only becoming more troubling as AIs grow more capable and appear on track to wield enormous cultu...

8 Sep 2025 · 3h 1min
