#194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government
80,000 Hours Podcast · 26 July 2024

"If you’re a power that is an island and that goes by sea, then you’re more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trade. If you are on the Mongolian steppes, then your entire mindset is kill or be killed, conquer or be conquered … the breeding ground for basically everything that all of us consider to be dystopian governance. If you want more utopian governance and less dystopian governance, then find ways to basically change the landscape, to try to make the world look more like mountains and rivers and less like the Mongolian steppes." —Vitalik Buterin

Can ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable.

Links to learn more, highlights, video, and full transcript.

Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive.

Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously.

But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions.

The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive’ technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination.

Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply by August 2, 2024. You don’t need a business idea yet — just the hustle to start a technology company.

In addition to all of that, host Rob Wiblin and Vitalik discuss:

  • AI regulation disagreements being less about AI in particular, and more about whether you’re typically more scared of anarchy or totalitarianism.
  • Vitalik’s updated p(doom).
  • Whether the social impact of blockchain and crypto has been a disappointment.
  • Whether humans can merge with AI, and if that’s even desirable.
  • The most valuable defensive technologies to accelerate.
  • How to trustlessly identify what everyone will agree is misinformation.
  • Whether AGI is offence-dominant or defence-dominant.
  • Vitalik’s updated take on effective altruism.
  • Plenty more.

Chapters:

  • Cold open (00:00:00)
  • Rob’s intro (00:00:56)
  • The interview begins (00:04:47)
  • Three different views on technology (00:05:46)
  • Vitalik’s updated probability of doom (00:09:25)
  • Technology is amazing, and AI is fundamentally different from other tech (00:15:55)
  • Fear of totalitarianism and finding middle ground (00:22:44)
  • Should AI be more centralised or more decentralised? (00:42:20)
  • Humans merging with AIs to remain relevant (01:06:59)
  • Vitalik’s “d/acc” alternative (01:18:48)
  • Biodefence (01:24:01)
  • Pushback on Vitalik’s vision (01:37:09)
  • How much do people actually disagree? (01:42:14)
  • Cybersecurity (01:47:28)
  • Information defence (02:01:44)
  • Is AI more offence-dominant or defence-dominant? (02:21:00)
  • How Vitalik communicates among different camps (02:25:44)
  • Blockchain applications with social impact (02:34:37)
  • Rob’s outro (03:01:00)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
