#227 – Helen Toner on the geopolitics of AGI in China and the Middle East

With the US racing to develop AGI and superintelligence ahead of China, you might expect the two countries to be negotiating how they’ll deploy AI, including in the military, without coming to blows. But according to Helen Toner, director of the Center for Security and Emerging Technology in DC, “the US and Chinese governments are barely talking at all.”

Links to learn more, video, and full transcript: https://80k.info/ht25

In her role as a founder, and now leader, of DC’s top think tank focused on the geopolitical and military implications of AI, Helen has been closely tracking the US’s AI diplomacy since 2019.

“Over the last couple of years there have been some direct [US–China] talks on some small number of issues, but they’ve also often been completely suspended.” China knows the US wants to talk more, so “that becomes a bargaining chip for China to say, ‘We don’t want to talk to you. We’re not going to do these military-to-military talks about extremely sensitive, important issues, because we’re mad.'”

Helen isn’t sure the groundwork exists for productive dialogue in any case. “At the government level, [there’s] very little agreement” on what AGI is, whether it’s possible soon, whether it poses major risks. Without shared understanding of the problem, negotiating solutions is very difficult.

Another issue is that so far the Chinese Communist Party doesn’t seem especially “AGI-pilled.” While a few Chinese companies like DeepSeek are betting on scaling, she sees little evidence Chinese leadership shares Silicon Valley’s conviction that AGI will arrive any minute now, and export controls have made it very difficult for those companies to access enough compute to match their US competitors.

When DeepSeek released R1 just three months after OpenAI’s o1, observers declared the US–China gap on AI had all but disappeared. But Helen notes OpenAI has since scaled to o3 and o4-mini, with nothing to match on the Chinese side. “We’re now at something like a nine-month gap, and that might be longer.”

To find a properly AGI-pilled autocracy, we might need to look at nominal US allies. The US has approved massive data centres in the UAE and Saudi Arabia with “hundreds of thousands of next-generation Nvidia chips” — delivering colossal levels of computing power.

When OpenAI announced this deal with the UAE, they celebrated that it was “rooted in democratic values,” and would advance “democratic AI rails” and provide “a clear alternative to authoritarian versions of AI.”

But the UAE scores 18 out of 100 on Freedom House’s Freedom in the World index. “This is really not a country that respects rule of law,” Helen observes. Political parties are banned, elections are fake, dissidents are persecuted.

If AI access really determines future national power, handing world-class supercomputers to Gulf autocracies seems pretty questionable. The justification is typically that “if we don’t sell it, China will” — a transparently false claim, given how severely constrained China’s own chip production is. It also raises eyebrows that Gulf countries conduct joint military exercises with China, and that their rulers have “very tight personal and commercial relationships with Chinese political leaders and business leaders.”

In today’s episode, host Rob Wiblin and Helen discuss all that and more.

This episode was recorded on September 25, 2025.

CSET is hiring a frontier AI research fellow! https://80k.info/cset-role
Check out its careers page for current roles: https://cset.georgetown.edu/careers/

Chapters:

  • Cold open (00:00:00)
  • Who’s Helen Toner? (00:01:02)
  • Helen’s role on the OpenAI board, and what happened with Sam Altman (00:01:31)
  • The Center for Security and Emerging Technology (CSET) (00:07:35)
  • CSET’s role in export controls against China (00:10:43)
  • Does it matter if the world uses US AI models? (00:21:24)
  • Is China actually racing to build AGI? (00:27:10)
  • Could China easily steal AI model weights from US companies? (00:38:14)
  • The next big thing is probably robotics (00:46:42)
  • Why is the Trump administration sabotaging the US high-tech sector? (00:48:17)
  • Are data centres in the UAE “good for democracy”? (00:51:31)
  • Will AI inevitably concentrate power? (01:06:20)
  • “Adaptation buffers” vs non-proliferation (01:28:16)
  • Will the military use AI for decision-making? (01:36:09)
  • “Alignment” is (usually) a terrible term (01:42:51)
  • Is Congress starting to take superintelligence seriously? (01:45:19)
  • AI progress isn’t actually slowing down (01:47:44)
  • What’s legit vs not about OpenAI’s restructure (01:55:28)
  • Is Helen unusually “normal”? (01:58:57)
  • How to keep up with rapid changes in AI and geopolitics (02:02:42)
  • What CSET can uniquely add to the DC policy world (02:05:51)
  • Talent bottlenecks in DC (02:13:26)
  • What evidence, if any, could settle how worried we should be about AI risk? (02:16:28)
  • Is CSET hiring? (02:18:22)

Video editing: Luke Monsour and Simon Monsour
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore

Episodes (305)

#0 – Introducing the 80,000 Hours Podcast

80,000 Hours is a non-profit that provides research and other support to help people switch into careers that effectively tackle the world's most pressing problems. This podcast is just one of many things we offer; you can find the rest at 80000hours.org. Since 2017 this show has been putting out interviews about the world's most pressing problems and how to solve them — which some people enjoy because they love to learn about important things, and others use to figure out what they want to do with their careers or their charitable giving.

If you haven't yet spent much time with 80,000 Hours or our general style of thinking, called effective altruism, it's probably helpful to first go through the episodes that set the scene, explain our overall perspective, and offer the background information you need to get the most out of the episodes we're making now. That's why we've made a new feed with ten carefully selected episodes from the show's archives, called 'Effective Altruism: An Introduction'. You can find it by searching for 'Effective Altruism' in your podcasting app or at 80000hours.org/intro. Or, if you'd rather listen on this feed, here are the ten episodes we recommend you listen to first:

  • #21 – Holden Karnofsky on the world's most intellectual foundation and how philanthropy can have maximum impact by taking big risks
  • #6 – Toby Ord on why the long-term future of humanity matters more than anything else and what we should do about it
  • #17 – Will MacAskill on why our descendants might view us as moral monsters
  • #39 – Spencer Greenberg on the scientific approach to updating your beliefs when you get new evidence
  • #44 – Paul Christiano on developing real solutions to the 'AI alignment problem'
  • #60 – What Professor Tetlock learned from 40 years studying how to predict the future
  • #46 – Hilary Greaves on moral cluelessness, population ethics and tackling global issues in academia
  • #71 – Benjamin Todd on the key ideas of 80,000 Hours
  • #50 – Dave Denkenberger on how we might feed all 8 billion people through a nuclear winter
  • 80,000 Hours Team chat #3 – Koehler and Todd on the core idea of effective altruism and how to argue for it

1 May 2017 · 3 min
