#155 – Lennart Heim on the compute governance era and what has to come after

As AI advances ever more quickly, concerns about potential misuse of highly capable models are growing. From hostile foreign governments and terrorists to reckless entrepreneurs, the threat of AI falling into the wrong hands is top of mind for the national security community.

With growing concerns about the use of AI in military applications, the US has banned the export of certain types of chips to China.

But unlike the uranium required to make nuclear weapons, or the material inputs to a bioweapons programme, computer chips and machine learning models are absolutely everywhere. So is it actually possible to keep dangerous capabilities out of the wrong hands?

In today's interview, Lennart Heim — who researches compute governance at the Centre for the Governance of AI — explains why limiting access to supercomputers may represent our best shot.

Links to learn more, summary and full transcript.

As Lennart explains, an AI research project requires many inputs, including the classic triad of compute, algorithms, and data.

If we want to limit access to the most advanced AI models, focusing on access to supercomputing resources (usually called 'compute') might be the way to go. Both algorithms and data are hard to control because they live on hard drives and can be easily copied. By contrast, advanced chips are physical items that can't be used by multiple people at once and come from a small number of sources.

According to Lennart, the hope would be to enforce AI safety regulations by controlling access to the most advanced chips specialised for AI applications. For instance, projects training 'frontier' AI models — the newest and most capable models — might only gain access to the supercomputers they need if they obtain a licence and follow industry best practices.

We have similar safety rules for companies that fly planes or manufacture volatile chemicals — so why not for people producing the most powerful and perhaps the most dangerous technology humanity has ever played with?

But Lennart is quick to note that the approach faces many practical challenges. Currently, AI chips are readily available and untracked. Changing that will require the collaboration of many actors, which might be difficult, especially given that some of them aren't convinced of the seriousness of the problem.

Host Rob Wiblin is particularly concerned about a different challenge: the increasing efficiency of AI training algorithms. As these algorithms become more efficient, what once required a specialised AI supercomputer to train might soon be achievable with a home computer.

By that point, tracking every aggregation of compute large enough to be dangerous would be both impractical and invasive.

With only a decade or two left before that becomes a reality, the window during which compute governance is a viable solution may be a brief one. Top AI labs have already stopped publishing their latest algorithms, which might extend this 'compute governance era', but not for very long.
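To make that timeline concrete, here's a minimal back-of-envelope sketch in Python. Every number in it is an illustrative assumption rather than a figure from the episode: the compute cost of a frontier training run today, the sustained throughput of a home computer, and the rate at which hardware plus algorithmic progress shrinks the effective compute needed to reach a given capability.

```python
# Toy estimate of how long the 'compute governance era' might last.
# All constants below are illustrative assumptions, not figures from the episode.

SECONDS_PER_YEAR = 3.15e7
FRONTIER_TRAINING_FLOP = 1e25        # assumed cost of a frontier training run today
HOME_FLOP_PER_SECOND = 1e14          # assumed sustained throughput of a home machine
HALVINGS_PER_YEAR = 1.0              # assumed rate at which hardware + algorithmic
                                     # progress halves the effective compute required

home_flop_per_year = HOME_FLOP_PER_SECOND * SECONDS_PER_YEAR  # ~3e21 FLOP

years = 0
required = FRONTIER_TRAINING_FLOP
while required > home_flop_per_year:
    required /= 2 ** HALVINGS_PER_YEAR   # one year of assumed progress
    years += 1

print(f"Under these assumptions, a home computer matches today's frontier run in ~{years} years.")
```

Halve the assumed rate of progress and the window roughly doubles; the point is only that, under plausible-looking inputs, the answer lands in the 'decade or two' range Rob worries about.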

If compute governance is only a temporary phase between the era of difficult-to-train superhuman AI models and the time when such models are widely accessible, what can we do to prevent misuse of AI systems after that point?

Lennart and Rob both think the only enduring approach requires taking advantage of the AI capabilities that should be in the hands of police and governments — which will hopefully remain superior to those held by criminals, terrorists, or fools. But as they describe, this means maintaining a peaceful standoff between AI models with conflicting goals that can act and fight with one another on the microsecond timescale. Being far too slow to follow what's happening, let alone participate, humans would have to be cut out of any defensive decision-making.

Both agree that while this may be our best option, such a vision of the future is more terrifying than reassuring.

Lennart and Rob discuss the above as well as:

  • How can we best categorise all the ways AI could go wrong?
  • Why did the US restrict the export of some chips to China, and what impact has that had?
  • Is the US in an 'arms race' with China, or is that more of an illusion?
  • What is the deal with chips specialised for AI applications?
  • How is the 'compute' industry organised?
  • Downsides of using compute as a target for regulations
  • Could safety mechanisms be built into computer chips themselves?
  • Who would have the legal authority to govern compute if some disaster made it seem necessary?
  • The reasons Rob doubts that any of this stuff will work
  • Could AI be trained to operate as a far more severe computer worm than any we've seen before?
  • What does the world look like when sluggish human reaction times leave us completely outclassed?
  • And plenty more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:04:35)
  • What is compute exactly? (00:09:46)
  • Structural risks (00:13:25)
  • Why focus on compute? (00:21:43)
  • Weaknesses of targeting compute (00:30:41)
  • Chip specialisation (00:37:11)
  • Export restrictions (00:40:13)
  • Compute governance is happening (00:59:00)
  • Reactions to AI regulation (01:05:03)
  • Creating legal authority to intervene quickly (01:10:09)
  • Building mechanisms into chips themselves (01:18:57)
  • Rob not buying that any of this will work (01:39:28)
  • Are we doomed to become irrelevant? (01:59:10)
  • Rob’s computer security bad dreams (02:10:22)
  • Concrete advice (02:26:58)
  • Article reading: Information security in high-impact areas (02:49:36)
  • Rob’s outro (03:10:38)

Producer: Keiran Harris

Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell

Transcriptions: Katy Moore

Episodes (320)

#66 – Peter Singer on being provocative, effective altruism, & how his moral views have changed

In 1989, the professor of moral philosophy Peter Singer was all over the news for his inflammatory opinions about abortion. But the controversy stemmed from Practical Ethics — a book he’d actually rel...

5 Dec 2019 · 2h 1min

#65 – Ambassador Bonnie Jenkins on 8 years pursuing WMD arms control, & diversity in diplomacy

"…it started when the Soviet Union fell apart and there was a real desire to ensure security of nuclear materials and pathogens, and that scientists with [WMD-related] knowledge could get paid so that...

19 Nov 2019 · 1h 40min

#64 – Bruce Schneier on how insecure electronic voting could break the United States — and surveillance without tyranny

November 3 2020, 10:32PM: CNN, NBC, and FOX report that Donald Trump has narrowly won Florida, and with it, re-election.  November 3 2020, 11:46PM: The NY Times and Wall Street Journal report that so...

25 Oct 2019 · 2h 11min

Rob Wiblin on plastic straws, nicotine, doping, & whether changing the long-term is really possible

Today's episode is a compilation of interviews I recently recorded for two other shows, Love Your Work and The Neoliberal Podcast.  If you've listened to absolutely everything on this podcast feed, y...

25 Sep 2019 · 3h 14min

Have we helped you have a bigger social impact? Our annual survey, plus other ways we can help you.

1. Fill out our annual impact survey here. 2. Find a great vacancy on our job board. 3. Learn about our key ideas, and get links to our top articles. 4. Join our newsletter for an email about what's n...

16 Sep 2019 · 3min

#63 – Vitalik Buterin on better ways to fund public goods, blockchain's failures, & effective giving

Historically, progress in the field of cryptography has had major consequences. It has changed the course of major wars, made it possible to do business on the internet, and enabled private communicat...

3 Sep 2019 · 3h 18min

#62 – Paul Christiano on messaging the future, increasing compute, & how CO2 impacts your brain

Imagine that – one day – humanity dies out. At some point, many millions of years later, intelligent life might well evolve again. Is there any message we could leave that would reliably help them out...

5 Aug 2019 · 2h 11min

#61 - Helen Toner on emerging technology, national security, and China

From 1870 to 1950, the introduction of electricity transformed life in the US and UK, as people gained access to lighting, radio and a wide range of household appliances for the first time. Electricit...

17 Jul 2019 · 1h 54min
