#194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government

"If you’re a power that is an island and that goes by sea, then you’re more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trade. If you are on the Mongolian steppes, then your entire mindset is kill or be killed, conquer or be conquered … the breeding ground for basically everything that all of us consider to be dystopian governance. If you want more utopian governance and less dystopian governance, then find ways to basically change the landscape, to try to make the world look more like mountains and rivers and less like the Mongolian steppes." —Vitalik Buterin

Can ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable.

Links to learn more, highlights, video, and full transcript.

Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive.

Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously.

But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions.

The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive’ technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination.

Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply by August 2, 2024. You don’t need a business idea yet — just the hustle to start a technology company.

In addition to all of that, host Rob Wiblin and Vitalik discuss:

  • AI regulation disagreements being less about AI in particular, and more about whether you’re typically more scared of anarchy or totalitarianism.
  • Vitalik’s updated p(doom).
  • Whether the social impact of blockchain and crypto has been a disappointment.
  • Whether humans can merge with AI, and if that’s even desirable.
  • The most valuable defensive technologies to accelerate.
  • How to trustlessly identify what everyone will agree is misinformation.
  • Whether AGI is offence-dominant or defence-dominant.
  • Vitalik’s updated take on effective altruism.
  • Plenty more.

Chapters:

  • Cold open (00:00:00)
  • Rob’s intro (00:00:56)
  • The interview begins (00:04:47)
  • Three different views on technology (00:05:46)
  • Vitalik’s updated probability of doom (00:09:25)
  • Technology is amazing, and AI is fundamentally different from other tech (00:15:55)
  • Fear of totalitarianism and finding middle ground (00:22:44)
  • Should AI be more centralised or more decentralised? (00:42:20)
  • Humans merging with AIs to remain relevant (01:06:59)
  • Vitalik’s “d/acc” alternative (01:18:48)
  • Biodefence (01:24:01)
  • Pushback on Vitalik’s vision (01:37:09)
  • How much do people actually disagree? (01:42:14)
  • Cybersecurity (01:47:28)
  • Information defence (02:01:44)
  • Is AI more offence-dominant or defence-dominant? (02:21:00)
  • How Vitalik communicates among different camps (02:25:44)
  • Blockchain applications with social impact (02:34:37)
  • Rob’s outro (03:01:00)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Episodes (325)

#28 - Owen Cotton-Barratt on why scientists should need insurance, PhD strategy & fast AI progress

A researcher is working on creating a new virus – one more dangerous than any that exist naturally. They believe they’re being as careful as possible. After all, if things go wrong, their own life and...

27 Apr 2018 · 1h 3min

#27 - Dr Tom Inglesby on careers and policies that reduce global catastrophic biological risks

How about this for a movie idea: a main character has to prevent a new contagious strain of Ebola spreading around the world. She’s the best of the best. So good in fact, that her work on early detect...

18 Apr 2018 · 2h 16min

#26 - Marie Gibbons on how exactly clean meat is made & what's needed to get it in every supermarket

First, decide on the type of animal. Next, pick the cell type. Then take a small, painless biopsy, and put the cells in a solution that makes them feel like they’re still in the body. Once the cells a...

10 Apr 2018 · 1h 44min

#25 - Robin Hanson on why we have to lie to ourselves about why we do what we do

On February 2, 1685, England’s King Charles II was struck by a sudden illness. Fortunately his physicians were the best of the best. To reassure the public they kept them abreast of the King’s treatme...

28 Mar 2018 · 2h 39min

#24 - Stefan Schubert on why it’s a bad idea to break the rules, even if it’s for a good cause

How honest should we be? How helpful? How friendly? If our society claims to value honesty, for instance, but in reality accepts an awful lot of lying – should we go along with those lax standards? Or...

20 Mar 2018 · 55min

#23 - How to actually become an AI alignment researcher, according to Dr Jan Leike

Want to help steer the 21st century’s most transformative technology? First complete an undergrad degree in computer science and mathematics. Prioritize harder courses over easier ones. Publish at lea...

16 Mar 2018 · 45min

#22 - Leah Utyasheva on the non-profit that figured out how to massively cut suicide rates

How people kill themselves varies enormously depending on which means are most easily available. In the United States, suicide by firearm stands out. In Hong Kong, where most people live in high rise ...

7 Mar 2018 · 1h 8min

#21 - Holden Karnofsky on times philanthropy transformed the world & Open Phil’s plan to do the same

The Green Revolution averted mass famine during the 20th century. The contraceptive pill gave women unprecedented freedom in planning their own lives. Both are widely recognised as scientific breakthr...

27 Feb 2018 · 2h 35min
