#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway

Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through.

That’s how today’s guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.

Links to learn more, highlights, video, and full transcript.

This dynamic played out dramatically in 1853 when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up.

Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don’t, less careful actors will develop transformative AI capabilities at around the same time anyway.

But Allan argues this technological determinism isn’t absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by whom, and what they’re used for first.

As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild.

As of mid-2024 the models didn’t seem dangerous at all, but we’ve learned that our ability to measure these capabilities is imperfect: if we don’t find the right way to ‘elicit’ an ability, we can miss that it’s there.

Subsequent research from Anthropic and Redwood Research suggests there’s even a risk that future models may play dumb to avoid their goals being altered.

That has led DeepMind to a “defence in depth” approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they’re necessary.

But with much more powerful and general models on the way, individual company policies won’t be sufficient by themselves. Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible.

Host Rob and Allan also cover:

  • The most exciting beneficial applications of AI
  • Whether and how we can influence the development of technology
  • What DeepMind is doing to evaluate and mitigate risks from frontier AI systems
  • Why cooperative AI may be as important as aligned AI
  • The role of democratic input in AI governance
  • What kinds of experts are most needed in AI safety and governance
  • And much more

Chapters:

  • Cold open (00:00:00)
  • Who's Allan Dafoe? (00:00:48)
  • Allan's role at DeepMind (00:01:27)
  • Why join DeepMind over everyone else? (00:04:27)
  • Do humans control technological change? (00:09:17)
  • Arguments for technological determinism (00:20:24)
  • The synthesis of agency with tech determinism (00:26:29)
  • Competition took away Japan's choice (00:37:13)
  • Can speeding up one tech redirect history? (00:42:09)
  • Structural pushback against alignment efforts (00:47:55)
  • Do AIs need to be 'cooperatively skilled'? (00:52:25)
  • How AI could boost cooperation between people and states (01:01:59)
  • The super-cooperative AGI hypothesis and backdoor risks (01:06:58)
  • Aren’t today’s models already very cooperative? (01:13:22)
  • How would we make AIs cooperative anyway? (01:16:22)
  • Ways making AI more cooperative could backfire (01:22:24)
  • AGI is an essential idea we should define well (01:30:16)
  • It matters what AGI learns first vs last (01:41:01)
  • How Google tests for dangerous capabilities (01:45:39)
  • Evals 'in the wild' (01:57:46)
  • What to do given no single approach works that well (02:01:44)
  • We don't, but could, forecast AI capabilities (02:05:34)
  • DeepMind's strategy for ensuring its frontier models don't cause harm (02:11:25)
  • How 'structural risks' can force everyone into a worse world (02:15:01)
  • Is AI being built democratically? Should it? (02:19:35)
  • How much do AI companies really want external regulation? (02:24:34)
  • Social science can contribute a lot here (02:33:21)
  • How AI could make life way better: self-driving cars, medicine, education, and sustainability (02:35:55)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions: Katy Moore
