#193 – Sihao Huang on navigating the geopolitics of US–China AI competition

"You don’t necessarily need world-leading compute to create highly risky AI systems. The biggest biological design tools right now, like AlphaFold, are orders of magnitude smaller in terms of compute requirements than the frontier large language models. And China has the compute to train these systems. And if you’re, for instance, building a cyber agent or something that conducts cyberattacks, perhaps you also don’t need the general reasoning or mathematical ability of a large language model. You train on a much smaller subset of data. You fine-tune it on a smaller subset of data. And those systems — one, if China intentionally misuses them, and two, if they get proliferated because China just releases them as open source, or China does not have as comprehensive AI regulations — this could cause a lot of harm in the world." —Sihao Huang

In today’s episode, host Luisa Rodriguez speaks to Sihao Huang about his work on AI governance and tech policy in China, what’s happening on the ground in China in AI development and regulation, and the importance of US–China cooperation on AI governance.

Links to learn more, highlights, video, and full transcript.

They cover:

  • Whether the US and China are in an AI race, and the global implications if they are.
  • The state of the art of AI in China.
  • China’s response to American export controls, and whether China is on track to indigenise its semiconductor supply chain.
  • How China’s current AI regulations try to maintain a delicate balance between fostering innovation and keeping strict information control over the Chinese people.
  • Whether China’s extensive AI regulations signal real commitment to safety or just censorship — and how AI is already used in China for surveillance and authoritarian control.
  • How advancements in AI could reshape global power dynamics, and Sihao’s vision of international cooperation to manage this responsibly.
  • And plenty more.

Chapters:

  • Cold open (00:00:00)
  • Luisa's intro (00:01:02)
  • The interview begins (00:02:06)
  • Is China in an AI race with the West? (00:03:20)
  • How advanced is Chinese AI? (00:15:21)
  • Bottlenecks in Chinese AI development (00:22:30)
  • China and AI risks (00:27:41)
  • Information control and censorship (00:31:32)
  • AI safety research in China (00:36:31)
  • Could China be a source of catastrophic AI risk? (00:41:58)
  • AI enabling human rights abuses and undermining democracy (00:50:10)
  • China’s semiconductor industry (00:59:47)
  • China’s domestic AI governance landscape (01:29:22)
  • China’s international AI governance strategy (01:49:56)
  • Coordination (01:53:56)
  • Track two dialogues (02:03:04)
  • Misunderstandings Western actors have about Chinese approaches (02:07:34)
  • Complexity thinking (02:14:40)
  • Sihao’s pet bacteria hobby (02:20:34)
  • Luisa's outro (02:22:47)


Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

