Speculative Decoding and Efficient LLM Inference with Chris Lott - #717

Today, we're joined by Chris Lott, senior director of engineering at Qualcomm AI Research, to discuss accelerating large language model inference. We explore the challenges presented by the LLM encoding and decoding (aka generation) phases, and how these interact with hardware constraints such as FLOPS, memory footprint, and memory bandwidth to limit key inference metrics such as time-to-first-token, tokens per second, and tokens per joule. We then dig into a variety of techniques that can be used to accelerate inference, such as KV cache compression, quantization, pruning, speculative decoding, and leveraging small language models (SLMs). We also discuss future directions for enabling on-device agentic experiences, such as parallel generation and software tools like Qualcomm AI Orchestrator. The complete show notes for this episode can be found at https://twimlai.com/go/717.
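Since speculative decoding is the episode's headline technique, here is a minimal sketch of the idea: a small, cheap draft model proposes several tokens ahead, and the large target model verifies them, so multiple tokens can be committed per expensive model invocation. Everything below (`VOCAB`, `draft_model`, `target_model`) is a hypothetical toy stand-in, not Qualcomm's implementation; the real algorithm verifies all draft tokens in a single parallel forward pass and accepts or rejects them via a modified rejection-sampling rule over token probabilities, whereas this sketch uses a simple greedy match for brevity.

```python
import random

# Toy stand-ins for the two models in speculative decoding. In a real
# system, draft_model would be a small/fast LLM and target_model a
# large/accurate one; here they are hypothetical functions over a tiny
# vocabulary, used only to illustrate the control flow.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def draft_model(context):
    """Cheap draft model: a deterministic next-token rule."""
    return VOCAB[len(context) % len(VOCAB)]

def target_model(context):
    """Expensive target model: usually agrees with the draft,
    sometimes diverges (divergence simulated with randomness)."""
    if random.random() < 0.8:
        return VOCAB[len(context) % len(VOCAB)]
    return random.choice(VOCAB)

def speculative_decode(prompt, num_tokens, k=4):
    """Generate num_tokens tokens. Each round, the draft model proposes
    k tokens; the target model checks them. Matching tokens are accepted
    in order; the first mismatch is replaced by the target's own token
    and the rest of the draft is discarded."""
    out = list(prompt)
    while len(out) - len(prompt) < num_tokens:
        # 1) Draft k tokens autoregressively with the cheap model.
        ctx = list(out)
        proposal = []
        for _ in range(k):
            tok = draft_model(ctx)
            proposal.append(tok)
            ctx.append(tok)
        # 2) Verify with the expensive model (sequential here for
        #    clarity; a real system checks all k in one parallel pass).
        for tok in proposal:
            verified = target_model(out)
            if verified == tok:
                out.append(tok)       # accepted draft token
            else:
                out.append(verified)  # target's correction
                break                 # discard remaining draft tokens
    return out[len(prompt):len(prompt) + num_tokens]

random.seed(0)
print(speculative_decode(["the"], num_tokens=10))
```

The speedup connects directly to the bandwidth constraints discussed above: decoding is typically memory-bandwidth-bound, so verifying k drafted tokens in one pass of the large model amortizes the cost of reading its weights across several output tokens instead of just one.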
