#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI

Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world's largest supercomputers to train a large language model on 10–100x the compute used to train ChatGPT.

But far from the stereotype of the incorrigibly optimistic tech founder, Mustafa is deeply worried about the future, for reasons he lays out in his new book The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma (coauthored with Michael Bhaskar). The future could be really good, but only if we grab the bull by the horns and solve the new problems technology is throwing at us.

Links to learn more, summary and full transcript.

On Mustafa's telling, AI and biotechnology will soon be a huge aid to criminals and terrorists, empowering small groups to cause harm on previously unimaginable scales. Democratic countries have learned to walk a 'narrow path' between chaos on the one hand and authoritarianism on the other, avoiding the downsides that come from both extreme openness and extreme closure. AI could easily destabilise that present equilibrium, throwing us off dangerously in either direction. And ultimately, within our lifetimes humans may not need to work to live any more -- or indeed, even have the option to do so.

And those are just three of the challenges confronting us. In Mustafa's view, 'misaligned' AI that goes rogue and pursues its own agenda won't be an issue for the next few years, and it isn't a problem for the current style of large language models. But he thinks that at some point -- in eight, ten, or twelve years -- it will become an entirely legitimate concern, and says that we need to be planning ahead.

In The Coming Wave, Mustafa lays out a 10-part agenda for 'containment' -- that is to say, for limiting the negative and unforeseen consequences of emerging technologies:

1. Developing an Apollo programme for technical AI safety
2. Instituting capability audits for AI models
3. Buying time by exploiting hardware choke points
4. Getting critics involved in directly engineering AI models
5. Getting AI labs to be guided by motives other than profit
6. Radically increasing governments’ understanding of AI and their capacity to regulate it sensibly
7. Creating international treaties to prevent proliferation of the most dangerous AI capabilities
8. Building a self-critical culture in AI labs of openly accepting when the status quo isn't working
9. Creating a mass public movement that understands AI and can demand the necessary controls
10. Not relying too much on delay, but instead seeking to move into a new, somewhat stable equilibrium

As Mustafa puts it, "AI is a technology with almost every use case imaginable," and that will, in time, demand that we rethink everything.

Rob and Mustafa discuss the above, as well as:

  • Whether we should be open sourcing AI models
  • Whether Mustafa's policy views are consistent with his timelines for transformative AI
  • How people with very different views on these issues get along at AI labs
  • The failed efforts (so far) to get a wider range of people involved in these decisions
  • Whether it's dangerous for Mustafa's new company to be training far larger models than GPT-4
  • Whether we'll be blown away by AI progress over the next year
  • What mandatory regulations governments should be imposing on AI labs right now
  • Appropriate priorities for the UK's upcoming AI safety summit

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

