#69 – Jeffrey Ding on China, its AI dream, and what we get wrong about both

China's AI planning began with the State Council's 2017 AI plan; China's approach to AI is top-down and monolithic; China is winning the AI arms race; and there is little to no discussion of AI ethics and safety in China. How many of these ideas have you heard?

In his paper Deciphering China's AI Dream, today's guest, PhD student Jeff Ding, outlines why he believes none of these claims are true.

Links to learn more, summary and full transcript.

He first places China’s new AI strategy in the context of its past science and technology plans, as well as other countries’ AI plans. What is China actually doing in the space of AI development?

Jeff emphasises that China's AI strategy did not appear out of nowhere with the 2017 State Council AI development plan, which attracted a lot of overseas attention. Rather, that plan was just another step in a long trajectory of increasing focus on science and technology. It's connected with a plan to develop an 'Internet of Things', and linked to a history of strategic planning for technology in areas like aerospace and biotechnology.

And it was not just the central government that was moving in this space: companies were already pushing forward in AI development, and local-level governments already had their own AI plans. You could argue that the central government was following their lead on AI more than the reverse.

What are the different levers that China is pulling to try to spur AI development?

Here, Jeff wanted to challenge the myth that China's AI development is driven by a monolithic central plan requiring people to develop AI. In fact, bureaucratic agencies, companies, academic labs, and local governments each set up their own strategies, which sometimes conflict with those of the central government.

Are China's AI capabilities especially impressive? In the paper Jeff develops a new index to measure and compare the US and China's progress in AI.

Jeff’s AI Potential Index — which incorporates trends and capabilities in data, hardware, research and talent, and the commercial AI ecosystem — indicates China’s AI capabilities are about half those of America. His measure, though imperfect, dispels the notion that China's AI capabilities have surpassed those of the US or made it the world's leading AI power.

Following that 2017 plan, many Western observers concluded that a good national AI strategy meant figuring out how to play catch-up with China. Yet Chinese strategic thinkers and writers at the time actually thought that they were the ones behind — because the Obama administration had issued a series of three AI white papers in 2016.

Finally, Jeff turns to the potential consequences of China’s AI dream for issues of national security, economic development, AI safety and social governance.

He claims that, despite the widespread belief to the contrary, substantive discussions about AI safety and ethics are indeed emerging in China. For instance, a new book from Tencent’s Research Institute is proactive in calling for stronger awareness of AI safety issues.

In today’s episode, Rob and Jeff go through this widely-discussed report, and also cover:

• The best analogies for thinking about the growing influence of AI
• How prominent Chinese figures think about AI
• Coordination with China
• China’s social credit system
• Suggestions for people who want to become professional China specialists
• And more.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:02)
  • Deciphering China’s AI Dream (00:04:17)
  • Analogies for thinking about AI (00:12:30)
  • How do prominent Chinese figures think about AI? (00:16:15)
  • Cultural cliches in the West and China (00:18:59)
  • Coordination with China on AI (00:24:03)
  • Private companies vs. government research (00:28:55)
  • Compute (00:31:58)
  • China’s social credit system (00:41:26)
  • Relationship between China and other countries beyond AI (00:43:51)
  • Careers advice (00:54:40)
  • Jeffrey’s talk at EAG (01:16:01)
  • Rob’s outro (01:37:12)


Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.
