#155 – Lennart Heim on the compute governance era and what has to come after

As AI advances ever more quickly, concerns about potential misuse of highly capable models are growing. From hostile foreign governments and terrorists to reckless entrepreneurs, the threat of AI falling into the wrong hands is top of mind for the national security community.

With growing concerns about the use of AI in military applications, the US has banned the export of certain types of chips to China.

But unlike the uranium required to make nuclear weapons, or the material inputs to a bioweapons programme, computer chips and machine learning models are absolutely everywhere. So is it actually possible to keep dangerous capabilities out of the wrong hands?

In today's interview, Lennart Heim — who researches compute governance at the Centre for the Governance of AI — explains why limiting access to supercomputers may represent our best shot.

Links to learn more, summary and full transcript.

As Lennart explains, an AI research project requires many inputs, including the classic triad of compute, algorithms, and data.

If we want to limit access to the most advanced AI models, focusing on access to supercomputing resources -- usually called 'compute' -- might be the way to go. Both algorithms and data are hard to control because they live on hard drives and can be easily copied. By contrast, advanced chips are physical items that can't be used by multiple people at once and come from a small number of sources.

According to Lennart, the hope would be to enforce AI safety regulations by controlling access to the most advanced chips specialised for AI applications. For instance, projects training 'frontier' AI models — the newest and most capable models — might only gain access to the supercomputers they need if they obtain a licence and follow industry best practices.

We have similar safety rules for companies that fly planes or manufacture volatile chemicals — so why not for people producing the most powerful and perhaps the most dangerous technology humanity has ever played with?

But Lennart is quick to note that the approach faces many practical challenges. Currently, AI chips are readily available and untracked. Changing that will require the collaboration of many actors, which might be difficult, especially given that some of them aren't convinced of the seriousness of the problem.

Host Rob Wiblin is particularly concerned about a different challenge: the increasing efficiency of AI training algorithms. As these algorithms become more efficient, what once required a specialised AI supercomputer to train might soon be achievable with a home computer.

By that point, tracking every aggregation of compute that could prove to be very dangerous would be both impractical and invasive.

With only a decade or two left before that becomes a reality, the window during which compute governance is a viable solution may be a brief one. Top AI labs have already stopped publishing their latest algorithms, which might extend this 'compute governance era', but not for very long.
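To get a feel for why that window might last only a decade or two, here is a rough back-of-the-envelope sketch in Python. It is not from the episode: every number in it (today's frontier training compute, a home machine's yearly throughput, and the rates at which algorithms and consumer hardware improve) is a hypothetical assumption chosen purely for illustration.

```python
# Hypothetical sketch: how long until a home computer can match what a
# frontier training run needs today, as algorithms get more efficient and
# consumer hardware gets faster. All figures below are illustrative
# assumptions, not numbers cited by Lennart or Rob.

frontier_training_flop = 1e25      # assumed compute for a frontier training run today
consumer_flop_per_year = 1e21      # assumed compute one home machine delivers in a year
algorithmic_halving_years = 2.0    # assumed: compute needed for fixed capability halves every 2 years
hardware_doubling_years = 2.5      # assumed: consumer throughput doubles every 2.5 years

years = 0
needed = frontier_training_flop
available = consumer_flop_per_year
while needed > available:
    years += 1
    needed /= 2 ** (1 / algorithmic_halving_years)    # algorithms get more efficient
    available *= 2 ** (1 / hardware_doubling_years)   # home hardware gets faster

print(f"Under these assumptions, a home computer catches up in ~{years} years.")
```

With these made-up parameters the gap closes in roughly 15 years; slower algorithmic progress stretches the window, faster progress shortens it.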

If compute governance is only a temporary phase between the era of difficult-to-train superhuman AI models and the time when such models are widely accessible, what can we do to prevent misuse of AI systems after that point?

Lennart and Rob both think the only enduring approach requires taking advantage of the AI capabilities that should be in the hands of police and governments — which will hopefully remain superior to those held by criminals, terrorists, or fools. But as they describe, this means maintaining a peaceful standoff between AI models with conflicting goals that can act and fight with one another on the microsecond timescale. Being far too slow to follow what's happening -- let alone participate -- humans would have to be cut out of any defensive decision-making.

Both agree that while this may be our best option, such a vision of the future is more terrifying than reassuring.

Lennart and Rob discuss the above as well as:

  • How can we best categorise all the ways AI could go wrong?
  • Why did the US restrict the export of some chips to China and what impact has that had?
  • Is the US in an 'arms race' with China or is that more an illusion?
  • What is the deal with chips specialised for AI applications?
  • How is the 'compute' industry organised?
  • Downsides of using compute as a target for regulations
  • Could safety mechanisms be built into computer chips themselves?
  • Who would have the legal authority to govern compute if some disaster made it seem necessary?
  • The reasons Rob doubts that any of this stuff will work
  • Could AI be trained to operate as a far more severe computer worm than any we've seen before?
  • What does the world look like when sluggish human reaction times leave us completely outclassed?
  • And plenty more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:04:35)
  • What is compute exactly? (00:09:46)
  • Structural risks (00:13:25)
  • Why focus on compute? (00:21:43)
  • Weaknesses of targeting compute (00:30:41)
  • Chip specialisation (00:37:11)
  • Export restrictions (00:40:13)
  • Compute governance is happening (00:59:00)
  • Reactions to AI regulation (01:05:03)
  • Creating legal authority to intervene quickly (01:10:09)
  • Building mechanisms into chips themselves (01:18:57)
  • Rob not buying that any of this will work (01:39:28)
  • Are we doomed to become irrelevant? (01:59:10)
  • Rob’s computer security bad dreams (02:10:22)
  • Concrete advice (02:26:58)
  • Article reading: Information security in high-impact areas (02:49:36)
  • Rob’s outro (03:10:38)

Producer: Keiran Harris

Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell

Transcriptions: Katy Moore

Episodes (325)

#12 - Beth Cameron works to stop you dying in a pandemic. Here’s what keeps her up at night.

“When you're in the middle of a crisis and you have to ask for money, you're already too late.” That’s Dr Beth Cameron, who leads Global Biological Policy and Programs at the Nuclear Threat Initiative...

25 Oct 2017 · 1h 45min

#11 - Spencer Greenberg on speeding up social science 10-fold & why plenty of startups cause harm

Do most meat eaters think it’s wrong to hurt animals? Do Americans think climate change is likely to cause human extinction? What is the best, state-of-the-art therapy for depression? How can we make ...

17 Oct 2017 · 1h 29min

#10 - Nick Beckstead on how to spend billions of dollars preventing human extinction

What if you were in a position to give away billions of dollars to improve the world? What would you do with it? This is the problem facing Program Officers at the Open Philanthropy Project - people l...

11 Oct 2017 · 1h 51min

#9 - Christine Peterson on how insecure computers could lead to global disaster, and how to fix it

Take a trip to Silicon Valley in the 70s and 80s, when going to space sounded like a good way to get around environmental limits, people started cryogenically freezing themselves, and nanotechnology l...

4 Oct 2017 · 1h 45min

#8 - Lewis Bollard on how to end factory farming in our lifetimes

Every year tens of billions of animals are raised in terrible conditions in factory farms before being killed for human consumption. Over the last two years Lewis Bollard – Project Officer for Farm An...

27 Sep 2017 · 3h 16min

#7 - Julia Galef on making humanity more rational, what EA does wrong, and why Twitter isn’t all bad

The scientific revolution in the 16th century was one of the biggest societal shifts in human history, driven by the discovery of new and better methods of figuring out who was right and who was wrong...

13 Sep 2017 · 1h 14min

#6 - Toby Ord on why the long-term future matters more than anything else & what to do about it

Of all the people whose well-being we should care about, only a small fraction are alive today. The rest are members of future generations who are yet to exist. Whether they’ll be born into a world th...

6 Sep 2017 · 2h 8min

#5 - Alex Gordon-Brown on how to donate millions in your 20s working in quantitative trading

Quantitative financial trading is one of the highest paying parts of the world’s highest paying industry. 25 to 30 year olds with outstanding maths skills can earn millions a year in an obscure set of...

28 Aug 2017 · 1h 45min
