#132 Classic episode – Nova DasSarma on why information security may be critical to the safe development of AI systems

If a business has spent $100 million developing a product, it’s a fair bet that they don’t want it stolen in two seconds and uploaded to the web where anyone can use it for free.

This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.

Today’s guest, the computer scientist and polymath Nova DasSarma, works on the security team at the AI company Anthropic, focusing on computer and information security. One of her jobs is to stop hackers exfiltrating Anthropic’s incredibly expensive intellectual property, as recently happened to Nvidia.

Rebroadcast: this episode was originally released in June 2022.

Links to learn more, highlights, and full transcript.

As she explains, given models’ small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.

The worries aren’t purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we’ll develop so-called artificial ‘general’ intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.

If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.

If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally ‘go rogue,’ breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can’t be shut off.

As Nova explains, in either case, we don’t want such models disseminated all over the world before we’ve confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic — perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly, something we can only speculate on at this point.

If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world.

We’ll soon need the ability to ‘sandbox’ (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:00:52)
  • The interview begins (00:02:44)
  • Why computer security matters for AI safety (00:07:39)
  • State of the art in information security (00:17:21)
  • The hack of Nvidia (00:26:50)
  • The most secure systems that exist (00:36:27)
  • Formal verification (00:48:03)
  • How organisations can protect against hacks (00:54:18)
  • Is ML making security better or worse? (00:58:11)
  • Motivated 14-year-old hackers (01:01:08)
  • Disincentivising actors from attacking in the first place (01:05:48)
  • Hofvarpnir Studios (01:12:40)
  • Capabilities vs safety (01:19:47)
  • Interesting design choices with big ML models (01:28:44)
  • Nova’s work and how she got into it (01:45:21)
  • Anthropic and career advice (02:05:52)
  • $600M Ethereum hack (02:18:37)
  • Personal computer security advice (02:23:06)
  • LastPass (02:31:04)
  • Stuxnet (02:38:07)
  • Rob's outro (02:40:18)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Beppe Rådvik
Transcriptions: Katy Moore

Episodes (320)

#15 - Phil Tetlock on how chimps beat Berkeley undergrads and when it’s wise to defer to the wise

Prof Philip Tetlock is a social science legend. Over forty years he has researched whose predictions we can trust, whose we can’t, and why — and developed methods that allow all of us to be better at p...

20 Nov 2017 · 1h 24min

#14 - Sharon Nunez & Jose Valle on going undercover to expose animal abuse

What if you knew that ducks were being killed with pitchforks? Rabbits dumped alive into containers? Or pigs being strangled with forklifts? Would you be willing to go undercover to expose the crime? ...

13 Nov 2017 · 1h 25min

#13 - Claire Walsh on testing which policies work & how to get governments to listen to the results

In both rich and poor countries, government policy is often based on no evidence at all and many programs don’t work. This has particularly harsh effects on the global poor - in some countries governm...

31 Oct 2017 · 52min

#12 - Beth Cameron works to stop you dying in a pandemic. Here’s what keeps her up at night.

“When you're in the middle of a crisis and you have to ask for money, you're already too late.” That’s Dr Beth Cameron, who leads Global Biological Policy and Programs at the Nuclear Threat Initiative...

25 Oct 2017 · 1h 45min

#11 - Spencer Greenberg on speeding up social science 10-fold & why plenty of startups cause harm

Do most meat eaters think it’s wrong to hurt animals? Do Americans think climate change is likely to cause human extinction? What is the best, state-of-the-art therapy for depression? How can we make ...

17 Oct 2017 · 1h 29min

#10 - Nick Beckstead on how to spend billions of dollars preventing human extinction

What if you were in a position to give away billions of dollars to improve the world? What would you do with it? This is the problem facing Program Officers at the Open Philanthropy Project - people l...

11 Oct 2017 · 1h 51min

#9 - Christine Peterson on how insecure computers could lead to global disaster, and how to fix it

Take a trip to Silicon Valley in the 70s and 80s, when going to space sounded like a good way to get around environmental limits, people started cryogenically freezing themselves, and nanotechnology l...

4 Oct 2017 · 1h 45min

#8 - Lewis Bollard on how to end factory farming in our lifetimes

Every year tens of billions of animals are raised in terrible conditions in factory farms before being killed for human consumption. Over the last two years Lewis Bollard – Project Officer for Farm An...

27 Sep 2017 · 3h 16min
