#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI

Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world's largest supercomputers to train a large language model on 10–100x the compute used to train ChatGPT.

But far from the stereotype of the incorrigibly optimistic tech founder, Mustafa is deeply worried about the future, for reasons he lays out in his new book The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma (coauthored with Michael Bhaskar). The future could be really good, but only if we grab the bull by the horns and solve the new problems technology is throwing at us.

Links to learn more, summary and full transcript.

On Mustafa's telling, AI and biotechnology will soon be a huge aid to criminals and terrorists, empowering small groups to cause harm on previously unimaginable scales. Democratic countries have learned to walk a 'narrow path' between chaos on the one hand and authoritarianism on the other, avoiding the downsides that come from both extreme openness and extreme closure. AI could easily destabilise that present equilibrium, throwing us off dangerously in either direction. And ultimately, within our lifetimes humans may not need to work to live any more -- or indeed, even have the option to do so.

And those are just three of the challenges confronting us. In Mustafa's view, 'misaligned' AI that goes rogue and pursues its own agenda won't be an issue for the next few years, and it isn't a problem for the current style of large language models. But he thinks that at some point -- in eight, ten, or twelve years -- it will become an entirely legitimate concern, and says that we need to be planning ahead.

In The Coming Wave, Mustafa lays out a 10-part agenda for 'containment' -- that is to say, for limiting the negative and unforeseen consequences of emerging technologies:

1. Developing an Apollo programme for technical AI safety
2. Instituting capability audits for AI models
3. Buying time by exploiting hardware choke points
4. Getting critics involved in directly engineering AI models
5. Getting AI labs to be guided by motives other than profit
6. Radically increasing governments’ understanding of AI and their capabilities to sensibly regulate it
7. Creating international treaties to prevent proliferation of the most dangerous AI capabilities
8. Building a self-critical culture in AI labs of openly accepting when the status quo isn't working
9. Creating a mass public movement that understands AI and can demand the necessary controls
10. Not relying too much on delay, but instead seeking to move into a new somewhat-stable equilibrium

As Mustafa put it, "AI is a technology with almost every use case imaginable," and that will demand that, in time, we rethink everything.

Rob and Mustafa discuss the above, as well as:

  • Whether we should be open sourcing AI models
  • Whether Mustafa's policy views are consistent with his timelines for transformative AI
  • How people with very different views on these issues get along at AI labs
  • The failed efforts (so far) to get a wider range of people involved in these decisions
  • Whether it's dangerous for Mustafa's new company to be training far larger models than GPT-4
  • Whether we'll be blown away by AI progress over the next year
  • What mandatory regulations governments should be imposing on AI labs right now
  • Appropriate priorities for the UK's upcoming AI safety summit

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

