STAR ATTENTION: EFFICIENT LLM INFERENCE OVER LONG SEQUENCES | #ai #2024 #genai
AI Today · 4 Dec 2024

Paper: https://arxiv.org/pdf/2411.17116

The paper introduces Star Attention, a novel two-phase attention mechanism for efficient Large Language Model (LLM) inference over long sequences. It improves computational efficiency by sharding attention across multiple hosts: the first phase applies blockwise-local attention to the context, and the second phase applies sequence-global attention to the query and generated tokens. This approach achieves up to an 11x speedup in inference time while retaining 95-100% of full-attention accuracy. The effectiveness of Star Attention is demonstrated through experiments on several LLMs and benchmarks, exploring the trade-off between speed and accuracy as a function of block size and anchor-block design. The research also analyzes the algorithm's performance across different task categories.
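As a rough illustration of the two-phase idea (a sketch, not the paper's implementation), the code below splits a key/value cache into per-host blocks and then merges each host's local softmax attention exactly via a log-sum-exp reduction, which is the kind of sequence-global aggregation the second phase relies on. The anchor-block mechanism of phase one is omitted for brevity, and all function names here are hypothetical.

```python
import numpy as np

def softmax_attention(q, K, V):
    """Scaled dot-product attention for a single query vector.

    Also returns the local max score and exp-sum, so that partial
    results from different hosts can be merged exactly later.
    """
    scores = q @ K.T / np.sqrt(q.shape[-1])
    m = scores.max()                     # local max for numerical stability
    e = np.exp(scores - m)
    return (e @ V) / e.sum(), m, e.sum()

def star_attention(ctx_K, ctx_V, q, block=4):
    """Hypothetical sketch of Star Attention's two phases for one query.

    Phase 1 stands in for the sharded context: the KV cache is split
    into contiguous blocks, one per host (anchor blocks omitted).
    Phase 2 runs local attention on each host and merges the partial
    softmaxes with a log-sum-exp reduction, matching full attention.
    """
    n = ctx_K.shape[0]
    hosts = [(ctx_K[i:i + block], ctx_V[i:i + block])
             for i in range(0, n, block)]

    outs, maxes, sums = [], [], []
    for K, V in hosts:
        o, m, s = softmax_attention(q, K, V)
        outs.append(o); maxes.append(m); sums.append(s)

    g = max(maxes)                       # global max over all hosts
    # Rescale each host's exp-sum into the global frame, then average.
    w = np.array([s * np.exp(m - g) for m, s in zip(maxes, sums)])
    return sum(wi * oi for wi, oi in zip(w, outs)) / w.sum()
```

Because the merge rescales each host's partial softmax by its own max before summing, the distributed result is numerically identical to running full attention over the whole sequence, which is why the method can keep accuracy high while never materializing global attention on a single host.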
