Inside Nano Banana 🍌 and the Future of Vision-Language Models with Oliver Wang - #748

Today, we’re joined by Oliver Wang, principal scientist at Google DeepMind and tech lead for Gemini 2.5 Flash Image—better known by its code name, “Nano Banana.” We dive into the development and capabilities of this newly released frontier vision-language model, beginning with the broader shift from specialized image generators to general-purpose multimodal agents that can use both visual and textual data for a variety of tasks. Oliver explains how Nano Banana can generate and iteratively edit images while maintaining consistency, and how its integration with Gemini’s world knowledge expands creative and practical use cases. We discuss the tension between aesthetics and accuracy, the relative maturity of image models compared to text-based LLMs, and scaling as a driver of progress. Oliver also shares surprising emergent behaviors, the challenges of evaluating vision-language models, and the risks of training on AI-generated data. Finally, we look ahead to interactive world models and VLMs that may one day “think” and “reason” in images. The complete show notes for this episode can be found at https://twimlai.com/go/748.

Episodes (781)

The EU AI Act and Mitigating Bias in Automated Decisioning with Peter van der Putten - #699

Today, we're joined by Peter van der Putten, director of the AI Lab at Pega and assistant professor of AI at Leiden University. We discuss the newly adopted European AI Act and the challenges of apply...

27 Aug 2024 · 45min

The Building Blocks of Agentic Systems with Harrison Chase - #698

Today, we're joined by Harrison Chase, co-founder and CEO of LangChain to discuss LLM frameworks, agentic systems, RAG, evaluation, and more. We dig into the elements of a modern LLM framework, includ...

19 Aug 2024 · 59min

Simplifying On-Device AI for Developers with Siddhika Nevrekar - #697

Today, we're joined by Siddhika Nevrekar, AI Hub head at Qualcomm Technologies, to discuss on-device AI and how to make it easier for developers to take advantage of device capabilities. We unpack the...

12 Aug 2024 · 46min

Genie: Generative Interactive Environments with Ashley Edwards - #696

Today, we're joined by Ashley Edwards, a member of technical staff at Runway, to discuss Genie: Generative Interactive Environments, a system for creating ‘playable’ video environments for training de...

5 Aug 2024 · 46min

Bridging the Sim2real Gap in Robotics with Marius Memmel - #695

Today, we're joined by Marius Memmel, a PhD student at the University of Washington, to discuss his research on sim-to-real transfer approaches for developing autonomous robotic agents in unstructured...

30 July 2024 · 57min

Building Real-World LLM Products with Fine-Tuning and More with Hamel Husain - #694

Today, we're joined by Hamel Husain, founder of Parlance Labs, to discuss the ins and outs of building real-world products using large language models (LLMs). We kick things off discussing novel appli...

23 July 2024 · 1h 20min

Mamba, Mamba-2 and Post-Transformer Architectures for Generative AI with Albert Gu - #693

Today, we're joined by Albert Gu, assistant professor at Carnegie Mellon University, to discuss his research on post-transformer architectures for multi-modal foundation models, with a focus on state-...

17 July 2024 · 57min

Decoding Animal Behavior to Train Robots with EgoPet with Amir Bar - #692

Today, we're joined by Amir Bar, a PhD candidate at Tel Aviv University and UC Berkeley to discuss his research on visual-based learning, including his recent paper, “EgoPet: Egomotion and Interaction...

9 July 2024 · 43min
