#132 Classic episode – Nova DasSarma on why information security may be critical to the safe development of AI systems

If a business has spent $100 million developing a product, it’s a fair bet that they don’t want it stolen in two seconds and uploaded to the web where anyone can use it for free.

This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.

Today’s guest, the computer scientist and polymath Nova DasSarma, works on computer and information security at the AI company Anthropic. One of her jobs is to stop hackers exfiltrating Anthropic’s incredibly expensive intellectual property, as recently happened to Nvidia.

Rebroadcast: this episode was originally released in June 2022.

Links to learn more, highlights, and full transcript.

As she explains, given models’ small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.

The worries aren’t purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we’ll develop so-called artificial ‘general’ intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.

If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.

If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally ‘go rogue,’ breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can’t be shut off.

As Nova explains, in either case, we don’t want such models disseminated all over the world before we’ve confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic — perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly, something we can only speculate on at this point.

If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world.

We’ll soon need the ability to ‘sandbox’ (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:00:52)
  • The interview begins (00:02:44)
  • Why computer security matters for AI safety (00:07:39)
  • State of the art in information security (00:17:21)
  • The hack of Nvidia (00:26:50)
  • The most secure systems that exist (00:36:27)
  • Formal verification (00:48:03)
  • How organisations can protect against hacks (00:54:18)
  • Is ML making security better or worse? (00:58:11)
  • Motivated 14-year-old hackers (01:01:08)
  • Disincentivising actors from attacking in the first place (01:05:48)
  • Hofvarpnir Studios (01:12:40)
  • Capabilities vs safety (01:19:47)
  • Interesting design choices with big ML models (01:28:44)
  • Nova’s work and how she got into it (01:45:21)
  • Anthropic and career advice (02:05:52)
  • $600M Ethereum hack (02:18:37)
  • Personal computer security advice (02:23:06)
  • LastPass (02:31:04)
  • Stuxnet (02:38:07)
  • Rob's outro (02:40:18)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Beppe Rådvik
Transcriptions: Katy Moore
