Scaling Agentic Inference Across Heterogeneous Compute with Zain Asgar - #757

In this episode, Zain Asgar, co-founder and CEO of Gimlet Labs, joins us to discuss heterogeneous AI inference across diverse hardware. Zain argues that the current industry standard of running all AI workloads on high-end GPUs is unsustainable for agents, which consume significantly more tokens than traditional LLM applications. We explore Gimlet’s approach to heterogeneous inference, which involves disaggregating workloads across a mix of hardware—from H100s to older GPUs and CPUs—to optimize unit economics without sacrificing performance. We dive into their "three-layer cake" architecture: workload disaggregation, a compilation layer that maps models to specific hardware targets, and a novel system that uses LLMs to autonomously rewrite and optimize compute kernels. Finally, we discuss the complexities of networking in heterogeneous environments, the trade-offs between numerical precision and application accuracy, and the future of hardware-aware scheduling. The complete show notes for this episode can be found at https://twimlai.com/go/757.
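To give a concrete flavor of the hardware-aware scheduling discussed in the episode, here is a minimal, purely illustrative Python sketch (not Gimlet's system): it places each inference stage on the cheapest device that still meets a per-request latency budget. All device specs, stage profiles, and numbers are made-up assumptions for illustration only.

```python
# Illustrative sketch of hardware-aware placement across heterogeneous devices.
# Not Gimlet's implementation; device specs and stage profiles are hypothetical.

from dataclasses import dataclass


@dataclass
class Device:
    name: str
    tokens_per_sec: float   # assumed sustained throughput
    cost_per_hour: float    # assumed $/hour


@dataclass
class Stage:
    name: str
    tokens: int              # tokens this stage processes per request
    latency_budget_s: float  # per-request latency allowance


def cost_per_request(device: Device, stage: Stage) -> float:
    """Dollar cost of running one request's worth of this stage on a device."""
    seconds = stage.tokens / device.tokens_per_sec
    return seconds / 3600.0 * device.cost_per_hour


def schedule(stages: list[Stage], devices: list[Device]) -> dict[str, Device]:
    """Greedy placement: cheapest device that still meets the latency budget."""
    plan = {}
    for stage in stages:
        feasible = [
            d for d in devices
            if stage.tokens / d.tokens_per_sec <= stage.latency_budget_s
        ]
        if not feasible:
            raise RuntimeError(f"no device meets the latency budget for {stage.name}")
        plan[stage.name] = min(feasible, key=lambda d: cost_per_request(d, stage))
    return plan


if __name__ == "__main__":
    devices = [
        Device("H100", tokens_per_sec=12_000, cost_per_hour=8.0),
        Device("A10",  tokens_per_sec=2_500,  cost_per_hour=1.2),
        Device("CPU",  tokens_per_sec=300,    cost_per_hour=0.1),
    ]
    stages = [
        Stage("prefill", tokens=4_000, latency_budget_s=0.5),
        Stage("decode",  tokens=800,   latency_budget_s=2.0),
        Stage("embed",   tokens=200,   latency_budget_s=1.0),
    ]
    for name, device in schedule(stages, devices).items():
        print(f"{name:8s} -> {device.name}")
```

With these toy numbers, the latency-critical prefill stage lands on the H100, decode falls back to the cheaper A10, and the lightweight embedding stage runs on CPU, which is the basic unit-economics argument for not sending every agent token to the most expensive accelerator.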
