Generative Benchmarking with Kelly Hong - #728
In this episode, Kelly Hong, a researcher at Chroma, joins us to discuss "Generative Benchmarking," a novel approach to evaluating retrieval systems, like RAG applications, using synthetic data. Kelly explains how traditional benchmarks like MTEB fail to represent real-world query patterns and how embedding models that perform well on public benchmarks often underperform in production. The conversation explores the two-step process of Generative Benchmarking: filtering documents to focus on relevant content and generating queries that mimic actual user behavior. Kelly shares insights from applying this approach to Weights & Biases' technical support bot, revealing how domain-specific evaluation provides more accurate assessments of embedding model performance. We also discuss the importance of aligning LLM judges with human preferences, the impact of chunking strategies on retrieval effectiveness, and how production queries differ from benchmark queries in ambiguity and style. Throughout the episode, Kelly emphasizes the need for systematic evaluation approaches that go beyond "vibe checks" to help developers build more effective RAG applications. The complete show notes for this episode can be found at https://twimlai.com/go/728.

Episodes (781)

Scaling Up Test-Time Compute with Latent Reasoning with Jonas Geiping - #723

Today, we're joined by Jonas Geiping, research group leader at Ellis Institute and the Max Planck Institute for Intelligent Systems to discuss his recent paper, “Scaling up Test-Time Compute with Late...

17 Mar 2025 · 58min

Imagine while Reasoning in Space: Multimodal Visualization-of-Thought with Chengzu Li - #722

Today, we're joined by Chengzu Li, PhD student at the University of Cambridge to discuss his recent paper, “Imagine while Reasoning in Space: Multimodal Visualization-of-Thought.” We explore the motiv...

10 Mar 2025 · 42min

Inside s1: An o1-Style Reasoning Model That Cost Under $50 to Train with Niklas Muennighoff - #721

Today, we're joined by Niklas Muennighoff, a PhD student at Stanford University, to discuss his paper, “s1: Simple Test-Time Scaling.” We explore the motivations behind s1, as well as how it compares ...

3 Mar 2025 · 49min

Accelerating AI Training and Inference with AWS Trainium2 with Ron Diamant - #720

Today, we're joined by Ron Diamant, chief architect for Trainium at Amazon Web Services, to discuss hardware acceleration for generative AI and the design and role of the recently released Trainium2 c...

24 Feb 2025 · 1h 7min

π0: A Foundation Model for Robotics with Sergey Levine - #719

Today, we're joined by Sergey Levine, associate professor at UC Berkeley and co-founder of Physical Intelligence, to discuss π0 (pi-zero), a general-purpose robotic foundation model. We dig into the m...

18 Feb 2025 · 52min

AI Trends 2025: AI Agents and Multi-Agent Systems with Victor Dibia - #718

Today we’re joined by Victor Dibia, principal research software engineer at Microsoft Research, to explore the key trends and advancements in AI agents and multi-agent systems shaping 2025 and beyond....

10 Feb 2025 · 1h 44min

Speculative Decoding and Efficient LLM Inference with Chris Lott - #717

Today, we're joined by Chris Lott, senior director of engineering at Qualcomm AI Research, to discuss accelerating large language model inference. We explore the challenges presented by the LLM encodin...

4 Feb 2025 · 1h 16min

Ensuring Privacy for Any LLM with Patricia Thaine - #716

Today, we're joined by Patricia Thaine, co-founder and CEO of Private AI, to discuss techniques for ensuring privacy, data minimization, and compliance when using third-party large language models (LLMs)...

28 Jan 2025 · 51min
