15 expert takes on infosec in the age of AI

"There’s almost no story of the future going well that doesn’t have a part that’s like '…and no evil person steals the AI weights and goes and does evil stuff.' So it has highlighted the importance of information security: 'You’re training a powerful AI system; you should make it hard for someone to steal' has popped out to me as a thing that just keeps coming up in these stories, keeps being present. It’s hard to tell a story where it’s not a factor. It’s easy to tell a story where it is a factor." — Holden Karnofsky

What happens when a USB cable can secretly control your system? Are we hurtling toward a security nightmare as critical infrastructure connects to the internet? Is it possible to secure AI model weights from sophisticated attackers? And could AI actually make computer security better rather than worse?

With AI security concerns becoming increasingly urgent, we bring you insights from 15 top experts across information security, AI safety, and governance, examining the challenges of protecting our most powerful AI models and digital infrastructure — including a sneak peek from a not-yet-released episode with Tom Davidson, in which he explains why we should be more worried about “secret loyalties” in AI agents.

You’ll hear:

  • Holden Karnofsky on why every good future relies on strong infosec, and how hard it’s been to hire security experts (from episode #158)
  • Tantum Collins on why infosec might be the rare issue everyone agrees on (episode #166)
  • Nick Joseph on whether AI companies can develop frontier models safely with the current state of information security (episode #197)
  • Sella Nevo on why AI model weights are so valuable to steal, the weaknesses of air-gapped networks, and the risks of USBs (episode #195)
  • Kevin Esvelt on what cryptographers can teach biosecurity experts (episode #164)
  • Lennart Heim on Rob’s computer security nightmares (episode #155)
  • Zvi Mowshowitz on the insane lack of security mindset at some AI companies (episode #184)
  • Nova DasSarma on the best current defences against well-funded adversaries, politically motivated cyberattacks, and exciting progress in infosecurity (episode #132)
  • Bruce Schneier on whether AI could eliminate software bugs for good, and why it’s bad to hook everything up to the internet (episode #64)
  • Nita Farahany on the dystopian risks of hacked neurotech (episode #174)
  • Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (episode #194)
  • Nathan Labenz on how even internal teams at AI companies may not know what they’re building (episode #176)
  • Allan Dafoe on backdooring your own AI to prevent theft (episode #212)
  • Tom Davidson on how dangerous “secret loyalties” in AI models could be (episode to be released!)
  • Carl Shulman on the challenge of trusting foreign AI models (episode #191, part 2)
  • Plus lots of concrete advice on how to get into this field and find your fit

Check out the full transcript on the 80,000 Hours website.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:00:49)
  • Holden Karnofsky on why infosec could be the issue on which the future of humanity pivots (00:03:21)
  • Tantum Collins on why infosec is a rare AI issue that unifies everyone (00:12:39)
  • Nick Joseph on whether the current state of information security makes it impossible to responsibly train AGI (00:16:23)
  • Nova DasSarma on the best available defences against well-funded adversaries (00:22:10)
  • Sella Nevo on why AI model weights are so valuable to steal (00:28:56)
  • Kevin Esvelt on what cryptographers can teach biosecurity experts (00:32:24)
  • Lennart Heim on the possibility of an autonomously replicating AI computer worm (00:34:56)
  • Zvi Mowshowitz on the absurd lack of security mindset at some AI companies (00:48:22)
  • Sella Nevo on the weaknesses of air-gapped networks and the risks of USB devices (00:49:54)
  • Bruce Schneier on why it’s bad to hook everything up to the internet (00:55:54)
  • Nita Farahany on the possibility of hacking neural implants (01:04:47)
  • Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (01:10:48)
  • Nova DasSarma on exciting progress in information security (01:19:28)
  • Nathan Labenz on how even internal teams at AI companies may not know what they’re building (01:30:47)
  • Allan Dafoe on backdooring your own AI to prevent someone else from stealing it (01:33:51)
  • Tom Davidson on how dangerous “secret loyalties” in AI models could get (01:35:57)
  • Carl Shulman on whether we should be worried about backdoors as governments adopt AI technology (01:52:45)
  • Nova DasSarma on politically motivated cyberattacks (02:03:44)
  • Bruce Schneier on the day-to-day benefits of improved security and recognising that there’s never zero risk (02:07:27)
  • Holden Karnofsky on why it’s so hard to hire security people despite the massive need (02:13:59)
  • Nova DasSarma on practical steps to getting into this field (02:16:37)
  • Bruce Schneier on finding your personal fit in a range of security careers (02:24:42)
  • Rob's outro (02:34:46)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore

Episodes (321)

Why Teaching AI Right from Wrong Could Get Everyone Killed | Max Harms, MIRI

Most people in AI are trying to give AIs ‘good’ values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, ...

24 Feb 2h 41min

Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra

Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almos...

17 Feb 2h 54min

What the hell happened with AGI timelines in 2025?

In early 2025, after OpenAI put out the first-ever reasoning models — o1 and o3 — short timelines to transformative artificial general intelligence swept the AI world. But then, in the second half of ...

10 Feb 25min

#179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young p...

3 Feb 2h 51min

#234 – David Duvenaud on why 'aligned AI' would still kill democracy

Democracy might be a brief historical blip. That’s the unsettling thesis of a recent paper, which argues AI that can do all the work a human can do inevitably leads to the “gradual disempowerment” of ...

27 Jan 2h 31min

#145 Classic episode – Christopher Brown on why slavery abolition wasn't inevitable

In many ways, humanity seems to have become more humane and inclusive over time. While there’s still a lot of progress to be made, campaigns to give people of different genders, races, sexualities, et...

20 Jan 2h 56min

#233 – James Smith on how to prevent a mirror life catastrophe

When James Smith first heard about mirror bacteria, he was sceptical. But within two weeks, he’d dropped everything to work on it full time, considering it the worst biothreat that he’d seen described...

13 Jan 2h 9min

#144 Classic episode – Athena Aktipis on why cancer is a fundamental universal phenomenon

What’s the opposite of cancer? If you answered “cure,” “antidote,” or “antivenom” — you’ve obviously been reading the antonym section at www.merriam-webster.com/thesaurus/cancer. But today’s guest Athe...

9 Jan 3h 30min
