Diogo Almeida - Deep Learning: Modular in Theory, Inflexible in Practice - TWiML Talk #8

My guest this time is Diogo Almeida, senior data scientist at healthcare startup Enlitic. Diogo and I met at the O'Reilly AI conference, where he delivered a great in-the-trenches deep learning presentation titled “Deep Learning: Modular in Theory, Inflexible in Practice,” which we discuss in this interview. Diogo is also a first-place Kaggle competition winner, and we spend some time discussing that competition and the approach he took. The notes for this show can be found at twimlai.com/talk/8.

Episodes (763)

Autoformalization and Verifiable Superintelligence with Christian Szegedy - #745

In this episode, Christian Szegedy, Chief Scientist at Morph Labs, joins us to discuss how the application of formal mathematics and reasoning enables the creation of more robust and safer AI systems. A pioneer behind concepts like the Inception architecture and adversarial examples, Christian now focuses on autoformalization—the AI-driven process of translating mathematical concepts from their human-readable form into rigorously formal, machine-verifiable logic. We explore the critical distinction between the informal reasoning of current LLMs, which can be prone to errors and subversion, and the provably correct reasoning enabled by formal systems. Christian outlines how this approach provides a robust path toward AI safety and also creates the high-quality, verifiable data needed to train models capable of surpassing human scientists in specialized domains. We also delve into his predictions for achieving this superintelligence and his ultimate vision for AI as a tool that helps humanity understand itself. The complete show notes for this episode can be found at https://twimlai.com/go/745.

2 Sep 1h 11min

Multimodal AI Models on Apple Silicon with MLX with Prince Canuma - #744

Today, we're joined by Prince Canuma, an ML engineer and open-source developer focused on optimizing AI inference on Apple Silicon devices. Prince shares his journey to becoming one of the most prolific contributors to Apple’s MLX ecosystem, having published over 1,000 models and libraries that make open, multimodal AI accessible and performant on Apple devices. We explore his workflow for adapting new models in MLX, the trade-offs between the GPU and Neural Engine, and how optimization methods like pruning and quantization enhance performance. We also cover his work on "Fusion," a weight-space method for combining model behaviors without retraining, and his popular packages—MLX-Audio, MLX-Embeddings, and MLX-VLM—which streamline the use of MLX across different modalities. Finally, Prince introduces Marvis, a real-time speech-to-speech voice agent, and shares his vision for the future of AI, emphasizing the move towards "media models" that can handle multiple modalities, and more. The complete show notes for this episode can be found at https://twimlai.com/go/744.

26 Aug 1h 10min

Genie 3: A New Frontier for World Models with Jack Parker-Holder and Shlomi Fruchter - #743

Today, we're joined by Jack Parker-Holder and Shlomi Fruchter, researchers at Google DeepMind, to discuss the recent release of Genie 3, a model capable of generating “playable” virtual worlds. We dig into the evolution of the Genie project and review the current model’s scaled-up capabilities, including creating real-time, interactive, and high-resolution environments. Jack and Shlomi share their perspectives on what defines a world model, the model's architecture, and key technical challenges and breakthroughs, including Genie 3’s visual memory and ability to handle “promptable world events.” Jack, Shlomi, and Sam share their favorite Genie 3 demos and discuss its potential as a dynamic training environment for embodied AI agents. Finally, we explore future directions for Genie research. The complete show notes for this episode can be found at https://twimlai.com/go/743.

19 Aug 1h 1min

Closing the Loop Between AI Training and Inference with Lin Qiao - #742

In this episode, we're joined by Lin Qiao, CEO and co-founder of Fireworks AI. Drawing on key lessons from her time building PyTorch, Lin shares her perspective on the modern generative AI development lifecycle. She explains why aligning training and inference systems is essential for creating a seamless, fast-moving production pipeline, preventing the friction that often stalls deployment. We explore the strategic shift from treating models as commodities to viewing them as core product assets. Lin details how post-training methods, like reinforcement fine-tuning (RFT), allow teams to leverage their own proprietary data to continuously improve these assets. Lin also breaks down the complex challenge of what she calls "3D optimization"—balancing cost, latency, and quality—and emphasizes the role of clear evaluation criteria to guide this process, moving beyond unreliable methods like "vibe checking." Finally, we discuss the path toward the future of AI development: designing a closed-loop system for automated model improvement, a vision made more attainable by the exciting convergence of open and closed-source model capabilities. The complete show notes for this episode can be found at https://twimlai.com/go/742.

12 Aug 1h 1min

Context Engineering for Productive AI Agents with Filip Kozera - #741

In this episode, Filip Kozera, founder and CEO of Wordware, explains his approach to building agentic workflows where natural language serves as the new programming interface. Filip breaks down the architecture of these "background agents," explaining how they use a reflection loop and tool-calling to execute complex tasks. He discusses the current limitations of agent protocols like MCPs and how developers can extend them to handle the required context and authority. The conversation challenges the idea that more powerful models lead to more autonomous agents, arguing instead for "graceful recovery" systems that proactively bring humans into the loop when the agent "knows what it doesn't know." We also get into the "application layer" fight, exploring how SaaS platforms are creating data silos and what this means for the future of interoperable AI agents. Filip also shares his vision for the "word artisan"—the non-technical user who can now build and manage a fleet of AI agents, fundamentally changing the nature of knowledge work. The complete show notes for this episode can be found at https://twimlai.com/go/741.

29 July 46min

Infrastructure Scaling and Compound AI Systems with Jared Quincy Davis - #740

In this episode, Jared Quincy Davis, founder and CEO at Foundry, introduces the concept of "compound AI systems," which allows users to create powerful, efficient applications by composing multiple, often diverse, AI models and services. We discuss how these "networks of networks" can push the Pareto frontier, delivering results that are simultaneously faster, more accurate, and even cheaper than single-model approaches. Using examples like "laconic decoding," Jared explains the practical techniques for building these systems and the underlying principles of inference-time scaling. The conversation also delves into the critical role of co-design, where the evolution of AI algorithms and the underlying cloud infrastructure are deeply intertwined, shaping the future of agentic AI and the compute landscape. The complete show notes for this episode can be found at https://twimlai.com/go/740.

22 July 1h 13min

Building Voice AI Agents That Don’t Suck with Kwindla Kramer - #739

In this episode, Kwindla Kramer, co-founder and CEO of Daily and creator of the open source Pipecat framework, joins us to discuss the architecture and challenges of building real-time, production-ready conversational voice AI. Kwin breaks down the full stack for voice agents—from the models and APIs to the critical orchestration layer that manages the complexities of multi-turn conversations. We explore why many production systems favor a modular, multi-model approach over the end-to-end models demonstrated by large AI labs, and how this impacts everything from latency and cost to observability and evaluation. Kwin also digs into the core challenges of interruption handling, turn-taking, and creating truly natural conversational dynamics, and how to overcome them. We discuss use cases, thoughts on where the technology is headed, the move toward hybrid edge-cloud pipelines, and the exciting future of real-time video avatars, and much more. The complete show notes for this episode can be found at https://twimlai.com/go/739.

15 July 1h 13min

Distilling Transformers and Diffusion Models for Robust Edge Use Cases with Fatih Porikli - #738

Today, we're joined by Fatih Porikli, senior director of technology at Qualcomm AI Research, for an in-depth look at several of Qualcomm's accepted papers and demos featured at this year’s CVPR conference. We start with “DiMA: Distilling Multi-modal Large Language Models for Autonomous Driving,” an end-to-end autonomous driving system that distills large language models for structured scene understanding and safe motion planning in critical "long-tail" scenarios. We explore how DiMA uses LLMs' world knowledge and efficient transformer-based models to significantly reduce collision rates and trajectory errors. We then discuss “SharpDepth: Sharpening Metric Depth Predictions Using Diffusion Distillation,” a diffusion-distilled approach that combines generative models with metric depth estimation to produce sharp, accurate monocular depth maps. Fatih also shares a look at Qualcomm’s on-device demos, including text-to-3D mesh generation, real-time image-to-video and video-to-video generation, and a multi-modal visual question-answering assistant. The complete show notes for this episode can be found at https://twimlai.com/go/738.

9 July 1h
