#176 – Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models

03:46:52 · 2023-12-22

About the episode

OpenAI says its mission is to build AGI — an AI system that is better than human beings at everything. Should the world trust them to do that safely?

That’s the central theme of today’s episode with Nathan Labenz — entrepreneur, AI scout, and host of The Cognitive Revolution podcast.

Links to learn more, video, highlights, and full transcript.

Nathan saw the AI revolution coming years ago, and, astonished by the research he was seeing, set aside his role as CEO of Waymark and made it his full-time job to understand AI capabilities across every domain. He has been obsessively tracking the AI world since — including joining OpenAI’s “red team” that probed GPT-4 to find ways it could be abused, long before it was public.

Whether OpenAI was taking AI safety seriously enough became a topic of dinner table conversation around the world after the shocking firing and reinstatement of Sam Altman as CEO last month.

Nathan’s view: it’s complicated. Discussion of this topic has often been heated, polarising, and personal. But Nathan wants to avoid that and simply lay out, in a way that is impartial and fair to everyone involved, what OpenAI has done right and how it could do better in his view.

When he started on the GPT-4 red team, the model would do anything from diagnose a skin condition to plan a terrorist attack without the slightest reservation or objection. When later shown a “Safety” version of GPT-4 that was almost the same, he approached a member of OpenAI’s board to share his concerns and tell them they really needed to try out GPT-4 for themselves and form an opinion.

In today’s episode, we share this story as Nathan told it on his own show, The Cognitive Revolution, which he did in the hope that it would provide useful background to understanding the OpenAI board’s reservations about Sam Altman, which to this day have not been laid out in any detail.

But while he feared throughout 2022 that OpenAI and Sam Altman didn’t understand the power and risk of their own system, he has since been repeatedly impressed, and came to think of OpenAI as among the better companies that could hypothetically be working to build AGI.

Their efforts to make GPT-4 safe turned out to be much larger and more successful than Nathan was seeing. Sam Altman and other leaders at OpenAI seem to sincerely believe they’re playing with fire, and take the threat posed by their work very seriously. With the benefit of hindsight, Nathan suspects OpenAI’s decision to release GPT-4 when it did was for the best.

On top of that, OpenAI has been among the most sane and sophisticated voices advocating for AI regulations that would target just the most powerful AI systems — the type they themselves are building — and that could make a real difference. They’ve also invested major resources into new ‘Superalignment’ and ‘Preparedness’ teams, while avoiding using competition with China as an excuse for recklessness.

At the same time, it’s very hard to know whether it’s all enough. The challenge of making an AGI safe and beneficial may require much more than they hope or have bargained for. Given that, Nathan poses the question of whether it makes sense to try to build a fully general AGI that can outclass humans in every domain at the first opportunity.
Maybe in the short term, we should focus on harvesting the enormous possible economic and humanitarian benefits of narrow applied AI models, and wait until we not only have a way to build AGI, but a good way to build AGI — an AGI that we’re confident we want, which we can prove will remain safe as its capabilities get ever greater.

By threatening to follow Sam Altman to Microsoft before his reinstatement as OpenAI CEO, OpenAI’s research team has proven they have enormous influence over the direction of the company. If they put their minds to it, they’re also better placed than maybe anyone in the world to assess if the company’s strategy is on the right track and serving the interests of humanity as a whole. Nathan concludes that this power and insight only adds to the enormous weight of responsibility already resting on their shoulders.

In today’s extensive conversation, Nathan and host Rob Wiblin discuss not only all of the above, but also:

  • Speculation about the OpenAI boardroom drama with Sam Altman, given Nathan’s interactions with the board when he raised concerns from his red teaming efforts.
  • Which AI applications we should be urgently rolling out, with less worry about safety.
  • Whether governance issues at OpenAI demonstrate AI research can only be slowed by governments.
  • Whether AI capabilities are advancing faster than safety efforts and controls.
  • The costs and benefits of releasing powerful models like GPT-4.
  • Nathan’s view on the game theory of AI arms races and China.
  • Whether it’s worth taking some risk with AI for huge potential upside.
  • The need for more “AI scouts” to understand and communicate AI progress.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore
