d-Matrix - Ultra-low Latency Batched Inference for Gen AI

What happens when the real bottleneck in artificial intelligence is no longer training models, but running them at scale?

In this episode of Tech Talks Daily, I sit down with Satyam Srivastava from d-Matrix to explore a shift that is quietly reshaping the entire AI infrastructure landscape. While much of the early AI race focused on training ever larger models, the next phase of AI adoption is increasingly defined by inference. That is the moment when trained models are deployed and used to generate real-world results millions of times a day.

Satyam brings a unique perspective shaped by years of experience in signal processing, machine learning, and hardware architecture, including time spent at NVIDIA and Intel working on graphics, media technologies, and AI systems. Now at d-Matrix, he is helping design next-generation computing architectures focused on one of the biggest challenges facing the AI industry today: efficiently running large language models without overwhelming data centers with unsustainable power and infrastructure demands.

During our conversation, we explored why the industry underestimated the infrastructure implications of inference at scale. While training large models grabs headlines, the real operational pressure often comes later when those models must serve millions of queries in real time. That shift places enormous strain on memory bandwidth, energy consumption, and data movement inside modern data centers.

Satyam explains how d-Matrix identified this challenge years before generative AI exploded into the mainstream. Instead of focusing on training hardware like many AI startups at the time, the company concentrated on inference efficiency. That decision is becoming increasingly relevant as organizations begin to realize that simply adding more GPUs to data centers is not a sustainable long-term strategy.

We also discuss the growing power constraints surrounding AI infrastructure, and why efficiency-driven design may be the only realistic path forward. With electricity supply, cooling capacity, and semiconductor availability all becoming limiting factors, the industry is being forced to rethink how AI systems are architected. Custom silicon, purpose-built accelerators, and heterogeneous computing environments are now emerging as key pieces of the puzzle.

The conversation also touches on the geopolitical and economic importance of AI semiconductor leadership, and why the relationship between frontier AI labs, infrastructure providers, and chip designers is becoming increasingly strategic. As governments and companies compete to maintain technological leadership, the question of who controls the hardware powering AI may prove just as important as the models themselves.

Looking ahead, Satyam shares his perspective on how the role of engineers will evolve as AI infrastructure becomes more specialized and energy-aware. Foundational engineering skills remain essential, but the next generation of engineers will also need to think in terms of entire systems, combining software, hardware, and AI tools to build more efficient computing environments.

As AI continues to move from research labs into everyday products and services, are organizations prepared for the infrastructure shift that comes with an inference-driven future? And could efficiency, rather than raw computing power, become the defining metric of the next phase of the AI race?

Episodes (2000)

BlackBerry - A Strategy For Post Quantum Secure Communications

How prepared are organizations for a world where today's encrypted communications could be quietly stored and cracked years from now? In this episode of Tech Talks Daily, I sat down with Nate Jenniges...

16 March, 24 min

Inside Ricoh's Research On Workflow Friction And Document Chaos

Why are employees still drowning in administrative work despite years of digital transformation, new software platforms, and constant promises that technology will make work easier? In this episode of...

15 March, 22 min

From NASA Engineer To Drata CEO: Adam Markowitz On Building Trust In The AI Age

How do you build trust in a business environment where security reviews, compliance demands, and vendor risk checks can slow everything down just when companies are trying to move faster? In this epis...

15 March, 26 min

Natterbox And The Future Of Voice AI In Customer Experience

14 March, 26 min

Pendo CEO Todd Olson On How AI Is Redefining The Product-Led Organization

How do you turn trillions of user interactions into meaningful decisions without drowning in data? In this episode of Tech Talks Daily, I sit down with Todd Olson, co-founder and CEO of Pendo, to talk...

13 March, 30 min

Genesys Agentic Virtual Agent Powered by LAMs for Enterprise CX

Have you ever contacted customer support with a simple request, only to find yourself trapped in a loop of scripted chatbot responses that never actually solve the problem? It's an experience many of ...

12 March, 25 min

Inside o9 Solutions And The AI Systems Powering Modern Supply Chains

11 March, 31 min

How Gensler Is Designing Data Centers For A Faster AI Future

What does it take to design a data center for a world where the technology inside it may change several times before the building even opens? In this episode of Tech Talks Daily, I sit down with Jacks...

11 March, 37 min
