Speculative Decoding and Efficient LLM Inference with Chris Lott - #717

Today, we're joined by Chris Lott, senior director of engineering at Qualcomm AI Research, to discuss accelerating large language model inference. We explore the challenges presented by the LLM encoding and decoding (aka generation) phases and how these interact with hardware constraints such as FLOPS, memory footprint, and memory bandwidth to limit key inference metrics such as time-to-first-token, tokens per second, and tokens per joule. We then dig into a variety of techniques that can be used to accelerate inference, such as KV compression, quantization, pruning, speculative decoding, and leveraging small language models (SLMs). We also discuss future directions for enabling on-device agentic experiences, such as parallel generation and software tools like Qualcomm AI Orchestrator. The complete show notes for this episode can be found at https://twimlai.com/go/717.
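Since speculative decoding is the episode's headline technique, here is a minimal, hypothetical sketch of the core idea in Python: a small draft model cheaply proposes a few tokens, and the large target model verifies them, so accepted tokens cost roughly one large-model pass instead of several sequential ones. The `draft_model` and `target_model` callables and the greedy acceptance rule are illustrative assumptions, not the specific method discussed on the show; production implementations verify sampled probability distributions and batch the verification pass.

```python
# Illustrative sketch only: a toy, greedy form of speculative decoding.
# "draft_model" and "target_model" are hypothetical stand-ins for a small,
# fast model and a large, accurate model; each maps a token sequence to a
# greedily chosen next token.

from typing import Callable, List

Token = int
Model = Callable[[List[Token]], Token]  # returns the next token greedily


def speculative_decode(prompt: List[Token],
                       draft_model: Model,
                       target_model: Model,
                       k: int = 4,
                       max_new_tokens: int = 32) -> List[Token]:
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new_tokens:
        # 1. Draft: the small model cheaply proposes k candidate tokens.
        draft: List[Token] = []
        ctx = list(tokens)
        for _ in range(k):
            t = draft_model(ctx)
            draft.append(t)
            ctx.append(t)

        # 2. Verify: the large model checks each proposal. In practice the
        #    k verification positions are scored in one batched forward
        #    pass, which is why accepted tokens are nearly free compared to
        #    plain autoregressive decoding.
        for i, t in enumerate(draft):
            expected = target_model(tokens + draft[:i])
            if expected != t:
                # First mismatch: keep the target model's token and redraft.
                tokens.append(expected)
                break
            tokens.append(t)
        else:
            # All k drafts accepted: the verification pass also yields one
            # extra target-model token "for free".
            tokens.append(target_model(tokens))
    return tokens[:len(prompt) + max_new_tokens]
```

Because decoding is typically memory-bandwidth bound (the model's weights must be streamed from memory for every generated token), verifying several tokens per weight pass is what translates into higher tokens per second and tokens per joule on-device.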
