Byte Latent Transformer: Scaling Language Models with Patches | #ai #2024 #genai
AI Today · 27 December 2024

Paper: https://arxiv.org/pdf/2412.09871v1.pdf The paper introduces the Byte Latent Transformer (BLT), a novel large language model architecture that processes raw byte data without tokenization. BLT dynamically groups bytes into patches based on predicted next-byte entropy, allocating more compute to the harder-to-predict sections of text. This approach matches the performance of tokenization-based models while significantly improving inference efficiency and robustness to noisy input. The authors present a scaling study demonstrating BLT's superior scaling properties and its enhanced performance on downstream tasks, particularly those requiring sub-word understanding. Finally, the study explores methods to leverage pre-trained models to improve BLT training. ai, artificial intelligence, arxiv, research, paper, publication, llm, genai, generative ai, large visual models, large language models, large multimodal models, nlp, text, machine learning, ml, nvidia, openai, anthropic, microsoft, google, technology, cutting-edge, meta, llama, chatgpt, gpt, elon musk, sam altman, deployment, engineering, scholar, science, apple, samsung, turing
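The patching rule is simple enough to sketch. The following is a minimal illustration, not the paper's implementation: it assumes a small byte-level LM has already produced a per-position next-byte entropy, and the 2.0-bit threshold is an arbitrary placeholder.

```python
import math

def next_byte_entropy(probs):
    """Shannon entropy (bits) of one next-byte distribution, e.g. the
    softmax output of a small byte-level LM at one position."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def entropy_patches(byte_seq, entropies, threshold=2.0):
    """Group a byte sequence into patches: start a new patch whenever
    the byte-level LM's predicted next-byte entropy crosses the
    threshold. Predictable runs become long, cheap patches; hard,
    high-entropy regions become short patches that receive more of
    the main model's compute."""
    patches, current = [], []
    for byte, h in zip(byte_seq, entropies):
        if current and h > threshold:
            patches.append(bytes(current))
            current = []
        current.append(byte)
    if current:
        patches.append(bytes(current))
    return patches

# Toy usage with made-up entropies: the spike at the word boundary
# starts a new patch there.
seq = b"the cat"
ents = [0.3, 0.2, 0.2, 2.8, 0.4, 0.3, 0.2]
print(entropy_patches(seq, ents))  # [b'the', b' cat']
```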

Episodes (30)

SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory | #2024

Paper: https://arxiv.org/pdf/2411.11922 Github: https://github.com/yangchris11/samurai Blog: https://yangchris11.github.io/samurai/ The paper introduces SAMURAI, a novel visual object tracking method...
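The "motion-aware" idea in the title can be illustrated with a toy sketch. This is not SAMURAI's implementation (which adapts the Segment Anything Model's memory mechanism); it only shows the general pattern of mixing an appearance score with agreement against a constant-velocity motion prediction. All names and the weight `w` are hypothetical.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def select_candidate(prev_box, velocity, candidates, affinities, w=0.5):
    """Score each candidate box by mixing its appearance affinity with
    agreement against a constant-velocity motion prediction, then pick
    the best. `w` trades off appearance vs. motion (placeholder value)."""
    dx, dy = velocity
    predicted = (prev_box[0] + dx, prev_box[1] + dy,
                 prev_box[2] + dx, prev_box[3] + dy)
    scores = [w * aff + (1 - w) * iou(cand, predicted)
              for cand, aff in zip(candidates, affinities)]
    best = max(range(len(candidates)), key=scores.__getitem__)
    return candidates[best], scores[best]
```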

27 November 2024 · 14 min

Adding Error Bars to Evals: A Statistical Approach to LM Evaluations | #llm #genai #anthropic #2024

Paper: https://arxiv.org/pdf/2411.00640 This research paper advocates for incorporating rigorous statistical methods into the evaluation of large language models (LLMs). It introduces formulas for c...
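The core recommendation is easy to demonstrate. Below is a minimal sketch of the simplest case, not the paper's exact formulas: treat each question's score as an independent draw and report the mean together with a standard error and 95% confidence interval. The benchmark numbers here are made up.

```python
import math

def eval_with_error_bars(scores, z=1.96):
    """Mean eval score with a 95% confidence interval.

    Treats each question's score (e.g. 0/1 correctness) as an
    independent draw, so the standard error of the mean is
    sqrt(var / n) and the CI is mean +/- z * SEM. The paper's full
    treatment goes further (e.g. paired comparisons, clustering)."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    sem = math.sqrt(var / n)
    return mean, (mean - z * sem, mean + z * sem)

# Hypothetical per-question results from a 1,000-question benchmark.
results = [1] * 780 + [0] * 220
mean, (lo, hi) = eval_with_error_bars(results)
print(f"accuracy = {mean:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

With 1,000 questions the interval is roughly ±2.6 points, which is exactly the kind of uncertainty that single-number leaderboard scores hide.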

27 November 2024 · 14 min

Marco-o1: Towards Open Reasoning Models for Open-Ended Solutions | #ai #llm #alibaba #genai #2024

Paper: https://arxiv.org/pdf/2411.14405 Github: https://github.com/AIDC-AI/Marco-o1 The Alibaba MarcoPolo team introduces Marco-o1, a large reasoning model designed to excel in open-ended problem-sol...

27 November 2024 · 14 min

FLUX.1 Tools | #ai #computervision #cv #BlackForestLabs #2024

Github: https://github.com/black-forest-labs/... Black Forest Labs announced FLUX.1 Tools, a suite of four open-access and API-based models enhancing their FLUX.1 text-to-image model. FLUX.1 Fill exc...

27 November 2024 · 14 min

Tülu 3 opens language model post-training up to more tasks and more people | #ai #llm #allenai #2024

Blog: https://allenai.org/blog/tulu-3 The Allen Institute for Artificial Intelligence (Ai2) has released Tülu 3, an open-source family of post-trained language models. Unlike closed models fr...

27 November 2024 · 14 min

Multimodal Autoregressive Pre-training of Large Vision Encoders | #ai #computervision #apple #2024

Paper: https://arxiv.org/pdf/2411.14402 Github: https://github.com/apple/ml-aim This research introduces AIMV2, a family of large-scale vision encoders pre-trained using a novel multimodal auto...

27 November 2024 · 14 min