Deep Learning is Eating 5G. Here’s How, w/ Joseph Soriaga - #525

Today we’re joined by Joseph Soriaga, a senior director of technology at Qualcomm. In our conversation with Joseph, we focus on a pair of papers that he and his team will be presenting at Globecom later this year. The first, Neural Augmentation of Kalman Filter with Hypernetwork for Channel Tracking, details the use of deep learning to augment a classical algorithm to address model mismatch, allowing for more efficient training and making models more interpretable and predictable. The second paper, WiCluster: Passive Indoor 2D/3D Positioning using WiFi without Precise Labels, explores the use of RF signals to infer what the environment looks like, allowing for estimation of a person’s movement. We also discuss how machine learning and AI can help enable 5G and make it more efficient for these applications, the scenarios in which ML allows for more effective delivery of connected services, and what might be possible in the near future. The complete show notes for this episode can be found at twimlai.com/go/525.

Episodes (775)

Scaling Agentic Inference Across Heterogeneous Compute with Zain Asgar - #757

In this episode, Zain Asgar, co-founder and CEO of Gimlet Labs, joins us to discuss heterogeneous AI inference across diverse hardware. Zain argues that the current industry standard of running all AI workloads on high-end GPUs is unsustainable for agents, which consume significantly more tokens than traditional LLM applications. We explore Gimlet’s approach to heterogeneous inference, which involves disaggregating workloads across a mix of hardware—from H100s to older GPUs and CPUs—to optimize unit economics without sacrificing performance. We dive into their "three-layer cake" architecture: workload disaggregation, a compilation layer that maps models to specific hardware targets, and a novel system that uses LLMs to autonomously rewrite and optimize compute kernels. Finally, we discuss the complexities of networking in heterogeneous environments, the trade-offs between numerical precision and application accuracy, and the future of hardware-aware scheduling. The complete show notes for this episode can be found at https://twimlai.com/go/757.

2 Dec 48min

Proactive Agents for the Web with Devi Parikh - #756

Today, we're joined by Devi Parikh, co-founder and co-CEO of Yutori, to discuss browser use models and a future where we interact with the web through proactive, autonomous agents. We explore the technical challenges of creating reliable web agents, the advantages of visually-grounded models that operate on screenshots rather than the browser’s more brittle document object model, or DOM, and why this counterintuitive choice has proven far more robust and generalizable for handling complex web interfaces. Devi also shares insights into Yutori’s training pipeline, which has evolved from supervised fine-tuning to include rejection sampling and reinforcement learning. Finally, we discuss how Yutori’s “Scouts” agents orchestrate multiple tools and sub-agents to handle complex queries, the importance of background, "ambient" operation for these systems, and what the path looks like from simple monitoring to full task automation on the web. The complete show notes for this episode can be found at https://twimlai.com/go/756.

19 Nov 56min

AI Orchestration for Smart Cities and the Enterprise with Robin Braun and Luke Norris - #755

Today, we're joined by Robin Braun, VP of AI business development for hybrid cloud at HPE, and Luke Norris, co-founder and CEO of Kamiwaza, to discuss how AI systems can be used to automate complex workflows and unlock value from legacy enterprise data. Robin and Luke detail high-impact use cases from HPE and Kamiwaza’s collaboration on an “Agentic Smart City” project for Vail, Colorado, including remediation and automation of website accessibility for 508 compliance, digitization and understanding of deed restrictions, and combining contextual information with camera feeds for fire detection and risk assessment. Additionally, we discuss the role of private cloud infrastructure in overcoming challenges like cost, data privacy, and compliance. Robin and Luke also share their lessons learned, including the importance of fresh data, and the value of a "mud puddle by mud puddle" approach in achieving practical AI wins. The complete show notes for this episode can be found at https://twimlai.com/go/755.

12 Nov 54min

Building an AI Mathematician with Carina Hong - #754

In this episode, Carina Hong, founder and CEO of Axiom, joins us to discuss her work building an "AI Mathematician." Carina explains why this is a pivotal moment for AI in mathematics, citing a convergence of three key areas: the advanced reasoning capabilities of modern LLMs, the rise of formal proof languages like Lean, and breakthroughs in code generation. We explore the core technical challenges, including the massive data gap between general-purpose code and formal math code, and the difficult problem of "autoformalization," or translating natural language proofs into a machine-verifiable format. Carina also shares Axiom's vision for a self-improving system that uses a self-play loop of conjecturing and proving to discover new mathematical knowledge. Finally, we discuss the broader applications of this technology in areas like formal verification for high-stakes software and hardware. The complete show notes for this episode can be found at https://twimlai.com/go/754.

4 Nov 55min

High-Efficiency Diffusion Models for On-Device Image Generation and Editing with Hung Bui - #753

In this episode, Hung Bui, Technology Vice President at Qualcomm, joins us to explore the latest high-efficiency techniques for running generative AI, particularly diffusion models, on-device. We dive deep into the technical challenges of deploying these models, which are powerful but computationally expensive due to their iterative sampling process. Hung details his team's work on SwiftBrush and SwiftEdit, which enable high-quality text-to-image generation and editing in a single inference step. He explains their novel distillation framework, where a multi-step teacher model guides the training of an efficient, single-step student model. We explore the architecture and training, including the use of a secondary 'coach' network that aligns the student's denoising function with the teacher's, allowing the model to bypass the iterative process entirely. Finally, we discuss how these efficiency breakthroughs pave the way for personalized on-device agents and the challenges of running reasoning models with techniques like inference-time scaling under a fixed compute budget. The complete show notes for this episode can be found at https://twimlai.com/go/753.

28 Oct 52min

Vibe Coding's Uncanny Valley with Alexandre Pesant - #752

Today, we're joined by Alexandre Pesant, AI lead at Lovable, to discuss the evolution and practice of vibe coding. Alex shares his take on how AI is enabling a shift in software development from typing characters to expressing intent, creating a new layer of abstraction similar to how high-level code compiles to machine code. We explore the current capabilities and limitations of coding agents, the importance of context engineering, and the practices that separate successful vibe coders from frustrated ones. Alex also shares Lovable’s technical journey, from an early, complex agent architecture that failed, to a simpler workflow-based system, and back again to an agentic approach as foundation models improved. He also details the company's massive scaling challenges—like accidentally taking down GitHub—and makes the case for why robust evaluations and more expressive user interfaces are the most critical components for AI-native development tools to succeed in the near future. The complete show notes for this episode can be found at https://twimlai.com/go/752.

22 Oct 1h 12min

Dataflow Computing for AI Inference with Kunle Olukotun - #751

In this episode, we're joined by Kunle Olukotun, professor of electrical engineering and computer science at Stanford University and co-founder and chief technologist at SambaNova Systems, to discuss reconfigurable dataflow architectures for AI inference. Kunle explains the core idea of building computers that are dynamically configured to match the dataflow graph of an AI model, moving beyond the traditional instruction-fetch paradigm of CPUs and GPUs. We explore how this architecture is well-suited for LLM inference, reducing memory bandwidth bottlenecks and improving performance. Kunle reviews how this system also enables efficient multi-model serving and agentic workflows through its large, tiered memory and fast model-switching capabilities. Finally, we discuss his research into future dynamic reconfigurable architectures, and the use of AI agents to build compilers for new hardware. The complete show notes for this episode can be found at https://twimlai.com/go/751.

14 Oct 57min

Recurrence and Attention for Long-Context Transformers with Jacob Buckman - #750

Today, we're joined by Jacob Buckman, co-founder and CEO of Manifest AI, to discuss achieving long context in transformers. We discuss the bottlenecks of scaling context length and recent techniques to overcome them, including windowed attention, grouped query attention, and latent space attention. We explore the idea of weight-state balance and the weight-state FLOP ratio as a way of reasoning about the optimality of compute architectures, and we dig into the Power Retention architecture, which blends the parallelization of attention with the linear scaling of recurrence and promises speedups of >10x during training and >100x during inference. We also review Manifest AI’s recent open source projects: Vidrial, a custom CUDA framework for building highly optimized GPU kernels in Python, and PowerCoder, a 3B-parameter coding model fine-tuned from StarCoder to use power retention. Our chat also covers the use of metrics like in-context learning curves and negative log likelihood to measure context utility, the implications of scaling laws, and the future of long context lengths in AI applications. The complete show notes for this episode can be found at https://twimlai.com/go/750.

7 Oct 57min
