#47 - Catherine Olsson & Daniel Ziegler on the fast path into high-impact ML engineering roles

After dropping out of a machine learning PhD at Stanford, Daniel Ziegler needed to decide what to do next. He’d always enjoyed building stuff and wanted to shape the development of AI, so he thought a research engineering position at an org dedicated to aligning AI with human interests could be his best option.

He decided to apply to OpenAI, and spent about 6 weeks preparing for the interview before landing the job. His PhD, by contrast, might have taken 6 years. Daniel thinks this highly accelerated career path may be possible for many others.

On today’s episode Daniel is joined by Catherine Olsson, who has also worked at OpenAI and left her computational neuroscience PhD to become a research engineer at Google Brain. She and Daniel offer the same advice to anyone curious about this career path: just dive in. If you're trying to get good at something, start doing that thing, and figure out along the way what you need in order to do it well.

Catherine has even created a simple step-by-step guide for 80,000 Hours, to make it as easy as possible for others to copy her and Daniel's success.

Please let us know how we've helped you: fill out our 2018 annual impact survey so that 80,000 Hours can continue to operate and grow.

Blog post with links to learn more, a summary & full transcript.

Daniel thinks the key for him was nailing the job interview.

OpenAI needed him to demonstrate that he could do the kind of work he'd be doing day-to-day. So his approach was to take a list of 50 key deep reinforcement learning papers, read one or two a day, and pick a handful to actually reproduce. He spent a lot of time coding in Python and TensorFlow, sometimes 12 hours a day, debugging and tuning things until they actually worked.
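
To give a flavour of that exercise, here's a minimal sketch of the simplest algorithm in the family Daniel was reproducing: REINFORCE (vanilla policy gradient), applied to a toy bandit problem. This isn't Daniel's code, and it uses plain NumPy rather than TensorFlow to stay self-contained; reproducing an actual deep RL paper means scaling the same idea up to neural-network policies, real environments, and a lot of careful debugging and tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: a 3-armed bandit with unknown mean rewards.
true_means = np.array([0.2, 0.5, 0.8])

def pull(arm):
    """Sample a noisy reward for the chosen arm."""
    return true_means[arm] + 0.1 * rng.standard_normal()

# Softmax policy over arms, parameterised by one logit per arm.
theta = np.zeros(3)
baseline = 0.0   # running average reward, used to reduce variance
lr = 0.1

for step in range(2000):
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    arm = rng.choice(3, p=probs)
    reward = pull(arm)

    # REINFORCE: the gradient of log pi(arm) w.r.t. theta is onehot(arm) - probs.
    grad_log_pi = -probs
    grad_log_pi[arm] += 1.0
    theta += lr * (reward - baseline) * grad_log_pi
    baseline += 0.05 * (reward - baseline)

final_probs = np.exp(theta - theta.max())
final_probs /= final_probs.sum()
print("learned policy:", final_probs.round(3))  # should concentrate on the best arm
```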

Daniel emphasizes that the most important thing was to practice *exactly* those things that he knew he needed to be able to do. His dedicated preparation also led to an offer from the Machine Intelligence Research Institute, and so he had the opportunity to decide between two organisations focused on the global problem that most concerns him.

Daniel’s path might seem unusual, but both he and Catherine expect it can be replicated by others. If they're right, it could greatly increase our ability to get new people into important ML roles in which they can make a difference, as quickly as possible.

Catherine says that her move from OpenAI to an ML research team at Google now allows her to bring a different set of skills to the table. Technical AI safety is a multifaceted area of research, and the many sub-questions in areas such as reward learning, robustness, and interpretability all need to be answered to maximize the probability that AI development goes well for humanity.

Today’s episode combines the expertise of two pioneers and is a key resource for anyone wanting to follow in their footsteps. We cover:

* What are OpenAI and Google Brain doing?
* Why work on AI?
* Do you learn more on the job, or while doing a PhD?
* Controversial issues within ML
* Is replicating papers a good way of determining suitability?
* What % of software developers could make similar transitions?
* How in-demand are research engineers?
* The development of Dota 2 bots
* Do research scientists have more influence on the vision of an org?
* Has learning more made you more or less worried about the future?

Get this episode by subscribing: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

Episodes (317)

#216 – Ian Dunt on why governments in Britain and elsewhere can't get anything done – and how to fix it

When you have a system where ministers almost never understand their portfolios, civil servants change jobs every few months, and MPs don't grasp parliamentary procedure even after decades in office —...

2 May 2025 · 3h 14min

Serendipity, weird bets, & cold emails that actually work: Career advice from 16 former guests

How do you navigate a career path when the future of work is uncertain? How important is mentorship versus immediate impact? Is it better to focus on your strengths or on the world’s most pressing pro...

24 April 2025 · 2h 18min

#215 – Tom Davidson on how AI-enabled coups could allow a tiny group to seize power

Throughout history, technological revolutions have fundamentally shifted the balance of power in society. The Industrial Revolution created conditions where democracies could flourish for the first ti...

16 April 2025 · 3h 22min

Guilt, imposter syndrome & doing good: 16 past guests share their mental health journeys

"We are aiming for a place where we can decouple the scorecard from our worthiness. It’s of course the case that in trying to optimise the good, we will always be falling short. The question is how mu...

11 April 2025 · 1h 47min

#214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway

Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-leve...

4 April 2025 · 2h 16min

15 expert takes on infosec in the age of AI

"There’s almost no story of the future going well that doesn’t have a part that’s like '…and no evil person steals the AI weights and goes and does evil stuff.' So it has highlighted the importance of...

28 March 2025 · 2h 35min

#213 – Will MacAskill on AI causing a “century in a decade” – and how we're completely unprepared

The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang...

11 March 2025 · 3h 57min

Emergency pod: Judge plants a legal time bomb under OpenAI (with Rose Chan Loui)

When OpenAI announced plans to convert from nonprofit to for-profit control last October, it likely didn’t anticipate the legal labyrinth it now faces. A recent court order in Elon Musk’s lawsuit agai...

7 March 2025 · 36min
