#194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government

"If you’re a power that is an island and that goes by sea, then you’re more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trade. If you are on the Mongolian steppes, then your entire mindset is kill or be killed, conquer or be conquered … the breeding ground for basically everything that all of us consider to be dystopian governance. If you want more utopian governance and less dystopian governance, then find ways to basically change the landscape, to try to make the world look more like mountains and rivers and less like the Mongolian steppes." —Vitalik Buterin

Can ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable.

Links to learn more, highlights, video, and full transcript.

Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive.

Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously.

But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions.

The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive’ technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination.

Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply by August 2, 2024. You don’t need a business idea yet — just the hustle to start a technology company.

In addition to all of that, host Rob Wiblin and Vitalik discuss:

  • AI regulation disagreements being less about AI in particular, and more about whether you’re typically more scared of anarchy or totalitarianism.
  • Vitalik’s updated p(doom).
  • Whether the social impact of blockchain and crypto has been a disappointment.
  • Whether humans can merge with AI, and if that’s even desirable.
  • The most valuable defensive technologies to accelerate.
  • How to trustlessly identify what everyone will agree is misinformation.
  • Whether AGI is offence-dominant or defence-dominant.
  • Vitalik’s updated take on effective altruism.
  • Plenty more.

Chapters:

  • Cold open (00:00:00)
  • Rob’s intro (00:00:56)
  • The interview begins (00:04:47)
  • Three different views on technology (00:05:46)
  • Vitalik’s updated probability of doom (00:09:25)
  • Technology is amazing, and AI is fundamentally different from other tech (00:15:55)
  • Fear of totalitarianism and finding middle ground (00:22:44)
  • Should AI be more centralised or more decentralised? (00:42:20)
  • Humans merging with AIs to remain relevant (01:06:59)
  • Vitalik’s “d/acc” alternative (01:18:48)
  • Biodefence (01:24:01)
  • Pushback on Vitalik’s vision (01:37:09)
  • How much do people actually disagree? (01:42:14)
  • Cybersecurity (01:47:28)
  • Information defence (02:01:44)
  • Is AI more offence-dominant or defence-dominant? (02:21:00)
  • How Vitalik communicates among different camps (02:25:44)
  • Blockchain applications with social impact (02:34:37)
  • Rob’s outro (03:01:00)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
