#132 Classic episode – Nova DasSarma on why information security may be critical to the safe development of AI systems

If a business has spent $100 million developing a product, it’s a fair bet that they don’t want it stolen in two seconds and uploaded to the web where anyone can use it for free.

This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.

Today’s guest, the computer scientist and polymath Nova DasSarma, works on computer and information security at the AI company Anthropic. One of her jobs is to stop hackers from exfiltrating Anthropic’s incredibly expensive intellectual property, as recently happened to Nvidia.

Rebroadcast: this episode was originally released in June 2022.

Links to learn more, highlights, and full transcript.

As she explains, given models’ small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.

The worries aren’t purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we’ll develop so-called artificial ‘general’ intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.

If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.

If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally ‘go rogue,’ breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can’t be shut off.

As Nova explains, in either case, we don’t want such models disseminated all over the world before we’ve confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic — perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly, something we can only speculate on at this point.

If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world.

We’ll soon need the ability to ‘sandbox’ (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:00:52)
  • The interview begins (00:02:44)
  • Why computer security matters for AI safety (00:07:39)
  • State of the art in information security (00:17:21)
  • The hack of Nvidia (00:26:50)
  • The most secure systems that exist (00:36:27)
  • Formal verification (00:48:03)
  • How organisations can protect against hacks (00:54:18)
  • Is ML making security better or worse? (00:58:11)
  • Motivated 14-year-old hackers (01:01:08)
  • Disincentivising actors from attacking in the first place (01:05:48)
  • Hofvarpnir Studios (01:12:40)
  • Capabilities vs safety (01:19:47)
  • Interesting design choices with big ML models (01:28:44)
  • Nova’s work and how she got into it (01:45:21)
  • Anthropic and career advice (02:05:52)
  • $600M Ethereum hack (02:18:37)
  • Personal computer security advice (02:23:06)
  • LastPass (02:31:04)
  • Stuxnet (02:38:07)
  • Rob's outro (02:40:18)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Beppe Rådvik
Transcriptions: Katy Moore
