#47 - Catherine Olsson & Daniel Ziegler on the fast path into high-impact ML engineering roles
About the episode
After dropping out of a machine learning PhD at Stanford, Daniel Ziegler needed to decide what to do next. He’d always enjoyed building things and wanted to help shape the development of AI, so he thought a research engineering position at an org dedicated to aligning AI with human interests could be his best option. He decided to apply to OpenAI, and spent about 6 weeks preparing for the interview before landing the job. His PhD, by contrast, might have taken 6 years.

Daniel thinks this highly accelerated career path may be possible for many others. On today’s episode Daniel is joined by Catherine Olsson, who has also worked at OpenAI, and who left her computational neuroscience PhD to become a research engineer at Google Brain.

She and Daniel share this piece of advice for those curious about this career path: just dive in. If you're trying to get good at something, start doing that thing, and figure out along the way what's necessary to do it well. Catherine has even created a simple step-by-step guide for 80,000 Hours, to make it as easy as possible for others to copy her and Daniel's success.

Please let us know how we've helped you: fill out our 2018 annual impact survey so that 80,000 Hours can continue to operate and grow.

Blog post with links to learn more, a summary & full transcript.

Daniel thinks the key for him was nailing the job interview. OpenAI needed him to demonstrate that he could do the kind of work he'd be doing day-to-day. So his approach was to take a list of 50 key deep reinforcement learning papers, read one or two a day, and pick a handful to actually reproduce. He spent a lot of time coding in Python and TensorFlow, sometimes 12 hours a day, debugging and tuning things until they actually worked (a minimal sketch of this kind of replication exercise appears at the end of these notes). Daniel emphasizes that the most important thing was to practice *exactly* those things that he knew he needed to be able to do.

His dedicated preparation also led to an offer from the Machine Intelligence Research Institute, and so he had the opportunity to decide between two organisations focused on the global problem that most concerns him.

Daniel’s path might seem unusual, but both he and Catherine expect it can be replicated by others. If they're right, it could greatly increase our ability to get new people into important ML roles where they can make a difference, as quickly as possible.

Catherine says that her move from OpenAI to an ML research team at Google now allows her to bring a different set of skills to the table. Technical AI safety is a multifaceted area of research, and the many sub-questions in areas such as reward learning, robustness, and interpretability all need to be answered to maximize the probability that AI development goes well for humanity.

Today’s episode combines the expertise of two pioneers and is a key resource for anyone wanting to follow in their footsteps.

We cover:

* What are OpenAI and Google Brain doing?
* Why work on AI?
* Do you learn more on the job, or while doing a PhD?
* Controversial issues within ML
* Is replicating papers a good way of determining suitability?
* What % of software developers could make similar transitions?
* How in-demand are research engineers?
* The development of Dota 2 bots
* Do research scientists have more influence on the vision of an org?
* Has learning more made you more or less worried about the future?

Get this episode by subscribing: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
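To give a flavour of the replication exercise Daniel describes, here is a minimal sketch of a vanilla policy gradient (REINFORCE) agent on CartPole, written in Python with TensorFlow. This is not Daniel's actual interview prep code: it assumes TensorFlow 2 and the `gymnasium` package are installed, the network architecture and hyperparameters are illustrative rather than tuned, and a real paper reproduction would be far more involved.

```python
# Minimal REINFORCE sketch on CartPole, in the spirit of reproducing a deep RL
# paper end-to-end. Assumes TensorFlow 2 and gymnasium; values are illustrative.
import gymnasium as gym
import numpy as np
import tensorflow as tf

env = gym.make("CartPole-v1")
n_actions = env.action_space.n

# Small policy network: observation -> action probabilities.
policy = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(n_actions, activation="softmax"),
])
optimizer = tf.keras.optimizers.Adam(1e-2)

def run_episode():
    """Roll out one episode, returning observations, actions, and rewards."""
    obs, _ = env.reset()
    observations, actions, rewards = [], [], []
    done = False
    while not done:
        probs = policy(obs[None, :].astype(np.float32)).numpy()[0]
        action = np.random.choice(n_actions, p=probs)
        next_obs, reward, terminated, truncated, _ = env.step(action)
        observations.append(obs)
        actions.append(action)
        rewards.append(reward)
        obs = next_obs
        done = terminated or truncated
    return (np.array(observations, np.float32),
            np.array(actions),
            np.array(rewards, np.float32))

def discounted_returns(rewards, gamma=0.99):
    """Reward-to-go for each timestep."""
    returns = np.zeros_like(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

for episode in range(500):
    obs_batch, act_batch, rew_batch = run_episode()
    returns = discounted_returns(rew_batch)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    with tf.GradientTape() as tape:
        probs = policy(obs_batch)
        action_probs = tf.reduce_sum(probs * tf.one_hot(act_batch, n_actions), axis=1)
        # Policy gradient loss: -log pi(a|s) weighted by the return.
        loss = -tf.reduce_mean(tf.math.log(action_probs + 1e-8) * returns)
    grads = tape.gradient(loss, policy.trainable_variables)
    optimizer.apply_gradients(zip(grads, policy.trainable_variables))
    if episode % 50 == 0:
        print(f"episode {episode}: return {rew_batch.sum():.0f}")
```

Getting even a small agent like this to train reliably involves the same debugging and tuning loop Daniel describes, which is exactly why he found replication such useful interview practice.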