#223 – Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)

At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret? “It’s mostly luck,” he says, but “another part is what I think of as maximising my luck surface area.”

Video, full transcript, and links to learn more: https://80k.info/nn2

This means creating as many opportunities as possible for surprisingly good things to happen:

  • Write publicly.
  • Reach out to researchers whose work you admire.
  • Say yes to unusual projects that seem a little scary.

Nanda’s own path illustrates this perfectly. He started a challenge to write one blog post per day for a month to overcome perfectionist paralysis. Those posts helped seed the field of mechanistic interpretability and, incidentally, led to meeting his partner of four years.

His YouTube channel features unedited three-hour videos of him reading through famous papers and sharing thoughts. One has 30,000 views. “People were into it,” he shrugs.

Most remarkably, he ended up running DeepMind’s mechanistic interpretability team. He’d joined expecting to be an individual contributor, but when the team lead stepped down, he stepped up despite having no management experience. “I did not know if I was going to be good at this. I think it’s gone reasonably well.”

His core lesson: “You can just do things.” This sounds trite but is a useful reminder all the same. Doing things is a skill that improves with practice. Most people overestimate the risks and underestimate their ability to recover from failures. And as Neel explains, junior researchers today have a superpower previous generations lacked: large language models that can dramatically accelerate learning and research.

In this extended conversation, Neel and host Rob Wiblin discuss all that, along with some other hot takes from Neel’s four years at Google DeepMind. (And be sure to check out part one of Rob and Neel’s conversation!)


What did you think of the episode? https://forms.gle/6binZivKmjjiHU6dA

Chapters:

  • Cold open (00:00:00)
  • Who’s Neel Nanda? (00:01:12)
  • Luck surface area and making the right opportunities (00:01:46)
  • Writing cold emails that aren't insta-deleted (00:03:50)
  • How Neel uses LLMs to get much more done (00:09:08)
  • “If your safety work doesn't advance capabilities, it's probably bad safety work” (00:23:22)
  • Why Neel refuses to share his p(doom) (00:27:22)
  • How Neel went from the couch to an alignment rocketship (00:31:24)
  • Navigating towards impact at a frontier AI company (00:39:24)
  • How does impact differ inside and outside frontier companies? (00:49:56)
  • Is a special skill set needed to guide large companies? (00:56:06)
  • The benefit of risk frameworks: early preparation (01:00:05)
  • Should people work at the safest or most reckless company? (01:05:21)
  • Advice for getting hired by a frontier AI company (01:08:40)
  • What makes for a good ML researcher? (01:12:57)
  • Three stages of the research process (01:19:40)
  • How do supervisors actually add value? (01:31:53)
  • An AI PhD – with these timelines?! (01:34:11)
  • Is career advice generalisable, or does everyone get the advice they don't need? (01:40:52)
  • Remember: You can just do things (01:43:51)

This episode was recorded on July 21.

Video editing: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Coordination, transcriptions, and web: Katy Moore

Episodes (325)

Emergency episode: Rob & Howie on the menace of COVID-19, and what both governments & individuals might do to help

From home isolation, Rob and Howie just recorded an episode on: 1. How many could die in the crisis, and the risk to your health personally. 2. What individuals might be able to do to help tackle the coro...

19 March 2020 · 1h 52min

#73 – Phil Trammell on patient philanthropy and waiting to do good

To do good, most of us look to use our time and money to affect the world around us today. But perhaps that's all wrong. If you took $1,000 you were going to donate and instead put it in the stock mar...

17 March 2020 · 2h 35min

#72 - Toby Ord on the precipice and humanity's potential futures

This week Oxford academic and 80,000 Hours trustee Dr Toby Ord released his new book The Precipice: Existential Risk and the Future of Humanity. It's about how our long-term future could be better tha...

7 March 2020 · 3h 14min

#71 - Benjamin Todd on the key ideas of 80,000 Hours

The 80,000 Hours Podcast is about “the world’s most pressing problems and how you can use your career to solve them”, and in this episode we tackle that question in the most direct way possible. Las...

2 March 2020 · 2h 57min

Arden & Rob on demandingness, work-life balance & injustice (80k team chat #1)

Today's bonus episode of the podcast is a quick conversation between me and my fellow 80,000 Hours researcher Arden Koehler about a few topics, including the demandingness of morality, work-life balan...

25 February 2020 · 44min

#70 - Dr Cassidy Nelson on the 12 best ways to stop the next pandemic (and limit nCoV)

nCoV is alarming governments and citizens around the world. It has killed more than 1,000 people, brought the Chinese economy to a standstill, and continues to show up in more and more places. But bad...

13 February 2020 · 2h 26min

#69 – Jeffrey Ding on China, its AI dream, and what we get wrong about both

The State Council of China's 2017 AI plan was the starting point of China’s AI planning; China’s approach to AI is defined by its top-down and monolithic nature; China is winning the AI arms race; and...

6 February 2020 · 1h 37min

Rob & Howie on what we do and don't know about 2019-nCoV

Two 80,000 Hours researchers, Robert Wiblin and Howie Lempel, record an experimental bonus episode about the new 2019-nCoV virus. See this list of resources, including many discussed in the episode, to...

3 February 2020 · 1h 18min
