The Benefit of Bottlenecks in Evolving Artificial Intelligence with David Ha - #535

Today we’re joined by David Ha, a research scientist at Google. In nature, there are many examples of “bottlenecks,” or constraints, that have shaped our development as a species. Building on this idea, David posits that similar evolutionary bottlenecks could benefit the training of neural network models as well. In our conversation with David, we cover a TON of ground, starting with the aforementioned biological inspiration for his work, then digging deeper into the different types of constraints he’s applied to ML systems. We explore abstract generative models and how far the training of agents inside generative models has advanced, and discuss quite a few papers, including “Neuroevolution of Self-Interpretable Agents,” “World Models,” “Attention for Reinforcement Learning,” and “The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning.” This interview is Nerd Alert certified, so get your notes ready! PS. David is one of our favorite follows on Twitter (@hardmaru), so check him out and share your thoughts on this interview and his work! The complete show notes for this episode can be found at twimlai.com/go/535

Episodes (782)

An Agentic Mixture of Experts for DevOps with Sunil Mallya - #708

Today we're joined by Sunil Mallya, CTO and co-founder of Flip AI. We discuss Flip’s incident debugging system for DevOps, which was built using a custom mixture-of-experts (MoE) large language model ...

4 Nov 2024 · 1h 15min

Building AI Voice Agents with Scott Stephenson - #707

Today, we're joined by Scott Stephenson, co-founder and CEO of Deepgram, to discuss voice AI agents. We explore the importance of perception, understanding, and interaction and how these key components...

28 Oct 2024 · 1h 1min

Is Artificial Superintelligence Imminent? with Tim Rocktäschel - #706

Today, we're joined by Tim Rocktäschel, senior staff research scientist at Google DeepMind, professor of Artificial Intelligence at University College London, and author of the recently published popu...

21 Oct 2024 · 55min

ML Models for Safety-Critical Systems with Lucas García - #705

Today, we're joined by Lucas García, principal product manager for deep learning at MathWorks, to discuss incorporating ML models into safety-critical systems. We begin by exploring the critical role o...

14 Oct 2024 · 1h 16min

AI Agents: Substance or Snake Oil with Arvind Narayanan - #704

Today, we're joined by Arvind Narayanan, professor of Computer Science at Princeton University, to discuss his recent works, AI Agents That Matter and AI Snake Oil. In “AI Agents That Matter”, we explo...

7 Oct 2024 · 54min

AI Agents for Data Analysis with Shreya Shankar - #703

Today, we're joined by Shreya Shankar, a PhD student at UC Berkeley, to discuss DocETL, a declarative system for building and optimizing LLM-powered data processing pipelines for large-scale and comple...

30 Sep 2024 · 48min

Stealing Part of a Production Language Model with Nicholas Carlini - #702

Today, we're joined by Nicholas Carlini, research scientist at Google DeepMind, to discuss adversarial machine learning and model security, focusing on his 2024 ICML best paper winner, “Stealing part o...

23 Sep 2024 · 1h 3min

Supercharging Developer Productivity with ChatGPT and Claude with Simon Willison - #701

Today, we're joined by Simon Willison, independent researcher and creator of Datasette, to discuss the many ways software developers and engineers can take advantage of large language models (LLMs) to ...

16 Sep 2024 · 1h 14min
