#155 – Lennart Heim on the compute governance era and what has to come after

As AI advances ever more quickly, concerns about potential misuse of highly capable models are growing. From hostile foreign governments and terrorists to reckless entrepreneurs, the threat of AI falling into the wrong hands is top of mind for the national security community.

With growing concerns about the use of AI in military applications, the US has banned the export of certain types of chips to China.

But unlike the uranium required to make nuclear weapons, or the material inputs to a bioweapons programme, computer chips and machine learning models are absolutely everywhere. So is it actually possible to keep dangerous capabilities out of the wrong hands?

In today's interview, Lennart Heim — who researches compute governance at the Centre for the Governance of AI — explains why limiting access to supercomputers may represent our best shot.

Links to learn more, summary and full transcript.

As Lennart explains, an AI research project requires many inputs, including the classic triad of compute, algorithms, and data.

If we want to limit access to the most advanced AI models, focusing on access to supercomputing resources -- usually called 'compute' -- might be the way to go. Both algorithms and data are hard to control because they live on hard drives and can be easily copied. By contrast, advanced chips are physical items that can't be used by multiple people at once and come from a small number of sources.

According to Lennart, the hope would be to enforce AI safety regulations by controlling access to the most advanced chips specialised for AI applications. For instance, projects training 'frontier' AI models — the newest and most capable models — might only gain access to the supercomputers they need if they obtain a licence and follow industry best practices.

We have similar safety rules for companies that fly planes or manufacture volatile chemicals — so why not for people producing the most powerful and perhaps the most dangerous technology humanity has ever played with?

But Lennart is quick to note that the approach faces many practical challenges. Currently, AI chips are readily available and untracked. Changing that will require the collaboration of many actors, which might be difficult, especially given that some of them aren't convinced of the seriousness of the problem.

Host Rob Wiblin is particularly concerned about a different challenge: the increasing efficiency of AI training algorithms. As these algorithms become more efficient, what once required a specialised AI supercomputer to train might soon be achievable with a home computer.

By that point, tracking every potentially dangerous aggregation of compute would be both impractical and invasive.

With only a decade or two left before that becomes a reality, the window during which compute governance is a viable solution may be a brief one. Top AI labs have already stopped publishing their latest algorithms, which might extend this 'compute governance era', but not for very long.

If compute governance is only a temporary phase between the era of difficult-to-train superhuman AI models and the time when such models are widely accessible, what can we do to prevent misuse of AI systems after that point?

Lennart and Rob both think the only enduring approach requires taking advantage of the AI capabilities that should be in the hands of police and governments — which will hopefully remain superior to those held by criminals, terrorists, or fools. But as they describe, this means maintaining a peaceful standoff between AI models with conflicting goals that can act and fight with one another on the microsecond timescale. Being far too slow to follow what's happening -- let alone participate -- humans would have to be cut out of any defensive decision-making.

Both agree that while this may be our best option, such a vision of the future is more terrifying than reassuring.

Lennart and Rob discuss the above as well as:

  • How can we best categorise all the ways AI could go wrong?
  • Why did the US restrict the export of some chips to China and what impact has that had?
  • Is the US in an 'arms race' with China, or is that more of an illusion?
  • What is the deal with chips specialised for AI applications?
  • How is the 'compute' industry organised?
  • Downsides of using compute as a target for regulations
  • Could safety mechanisms be built into computer chips themselves?
  • Who would have the legal authority to govern compute if some disaster made it seem necessary?
  • The reasons Rob doubts that any of this stuff will work
  • Could AI be trained to operate as a far more severe computer worm than any we've seen before?
  • What does the world look like when sluggish human reaction times leave us completely outclassed?
  • And plenty more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:04:35)
  • What is compute exactly? (00:09:46)
  • Structural risks (00:13:25)
  • Why focus on compute? (00:21:43)
  • Weaknesses of targeting compute (00:30:41)
  • Chip specialisation (00:37:11)
  • Export restrictions (00:40:13)
  • Compute governance is happening (00:59:00)
  • Reactions to AI regulation (01:05:03)
  • Creating legal authority to intervene quickly (01:10:09)
  • Building mechanisms into chips themselves (01:18:57)
  • Rob not buying that any of this will work (01:39:28)
  • Are we doomed to become irrelevant? (01:59:10)
  • Rob’s computer security bad dreams (02:10:22)
  • Concrete advice (02:26:58)
  • Article reading: Information security in high-impact areas (02:49:36)
  • Rob’s outro (03:10:38)

Producer: Keiran Harris

Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell

Transcriptions: Katy Moore
