#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway

Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through.

That’s how today’s guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.

Links to learn more, highlights, video, and full transcript.

This dynamic played out dramatically in 1853 when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up.

Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don’t, less careful actors will develop transformative AI capabilities at around the same time anyway.

But Allan argues this technological determinism isn’t absolute. While broad patterns may be inevitable, history shows we retain some ability to steer how technologies are developed, by whom, and what they’re used for first.

As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild.

As of mid-2024 they didn’t seem dangerous at all, but we’ve learned that our ability to measure these capabilities is good but imperfect: if testers don’t find the right way to ‘elicit’ an ability, they can miss that it’s there.

Subsequent research from Anthropic and Redwood Research suggests there’s even a risk that future models may play dumb to avoid their goals being altered.

That has led DeepMind to a “defence in depth” approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they’re necessary.

But with much more powerful and general models on the way, individual company policies won’t be sufficient by themselves. Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible.

Host Rob and Allan also cover:

  • The most exciting beneficial applications of AI
  • Whether and how we can influence the development of technology
  • What DeepMind is doing to evaluate and mitigate risks from frontier AI systems
  • Why cooperative AI may be as important as aligned AI
  • The role of democratic input in AI governance
  • What kinds of experts are most needed in AI safety and governance
  • And much more

Chapters:

  • Cold open (00:00:00)
  • Who's Allan Dafoe? (00:00:48)
  • Allan's role at DeepMind (00:01:27)
  • Why join DeepMind over everyone else? (00:04:27)
  • Do humans control technological change? (00:09:17)
  • Arguments for technological determinism (00:20:24)
  • The synthesis of agency with tech determinism (00:26:29)
  • Competition took away Japan's choice (00:37:13)
  • Can speeding up one tech redirect history? (00:42:09)
  • Structural pushback against alignment efforts (00:47:55)
  • Do AIs need to be 'cooperatively skilled'? (00:52:25)
  • How AI could boost cooperation between people and states (01:01:59)
  • The super-cooperative AGI hypothesis and backdoor risks (01:06:58)
  • Aren’t today’s models already very cooperative? (01:13:22)
  • How would we make AIs cooperative anyway? (01:16:22)
  • Ways making AI more cooperative could backfire (01:22:24)
  • AGI is an essential idea we should define well (01:30:16)
  • It matters what AGI learns first vs last (01:41:01)
  • How Google tests for dangerous capabilities (01:45:39)
  • Evals 'in the wild' (01:57:46)
  • What to do given no single approach works that well (02:01:44)
  • We don't, but could, forecast AI capabilities (02:05:34)
  • DeepMind's strategy for ensuring its frontier models don't cause harm (02:11:25)
  • How 'structural risks' can force everyone into a worse world (02:15:01)
  • Is AI being built democratically? Should it? (02:19:35)
  • How much do AI companies really want external regulation? (02:24:34)
  • Social science can contribute a lot here (02:33:21)
  • How AI could make life way better: self-driving cars, medicine, education, and sustainability (02:35:55)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions: Katy Moore

