Evals, error analysis, and better prompts: A systematic approach to improving your AI products | Hamel Husain (ML engineer)
How I AI · October 13, 2025

Hamel Husain, an AI consultant and educator, shares his systematic approach to improving AI product quality through error analysis, evaluation frameworks, and prompt engineering. In this episode, he demonstrates how product teams can move beyond “vibe checking” their AI systems to implement data-driven quality improvement processes that identify and fix the most common errors. Using real examples from client work with Nurture Boss (an AI assistant for property managers), Hamel walks through practical techniques that product managers can implement immediately to dramatically improve their AI products.


What you’ll learn:

1. A step-by-step error analysis framework that helps identify and categorize the most common AI failures in your product

2. How to create custom annotation systems that make reviewing AI conversations faster and more insightful

3. Why binary evaluations (pass/fail) are more useful than arbitrary quality scores for measuring AI performance

4. Techniques for validating your LLM judges to ensure they align with human quality expectations

5. A practical approach to prioritizing fixes based on frequency counting rather than intuition

6. Why looking at real user conversations (not just ideal test cases) is critical for understanding AI product failures

7. How to build a comprehensive quality system that spans from manual review to automated evaluation
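Items 1, 3, 4, and 5 above boil down to a simple loop: annotate real traces with binary pass/fail judgments, count failure modes to decide what to fix first, and check that an LLM judge agrees with your human labels. A minimal sketch of that loop in Python (the trace data, failure-mode names, and labels here are hypothetical, not from Nurture Boss or Hamel's actual tooling):

```python
from collections import Counter

# Hypothetical annotated traces: each reviewed conversation gets a binary
# pass/fail judgment (item 3: easier to act on than a 1-5 quality score)
# plus a short failure-mode note when it fails.
annotations = [
    {"trace_id": 1, "passed": True,  "failure_mode": None},
    {"trace_id": 2, "passed": False, "failure_mode": "wrong tour date"},
    {"trace_id": 3, "passed": False, "failure_mode": "ignored handoff request"},
    {"trace_id": 4, "passed": False, "failure_mode": "wrong tour date"},
    {"trace_id": 5, "passed": False, "failure_mode": "wrong tour date"},
]

# Item 5: prioritize fixes by frequency counting, not intuition.
failures = Counter(a["failure_mode"] for a in annotations if not a["passed"])
for mode, count in failures.most_common():
    print(f"{mode}: {count}")  # most frequent failure mode is fixed first

# Item 4: validate an LLM judge by measuring agreement with human labels
# on the same traces (labels below are made up for illustration).
human = [True, False, False, False, False]
judge = [True, False, True, False, False]
agreement = sum(h == j for h, j in zip(human, judge)) / len(human)
print(f"judge agreement: {agreement:.0%}")
```

The point of the binary judgments is that they compose: pass rates, failure-mode counts, and judge agreement are all just counting, so the whole pipeline stays auditable by a product manager reading a spreadsheet.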

Brought to you by:

GoFundMe Giving Funds—One account. Zero hassle: https://gofundme.com/howiai

Persona—Trusted identity verification for any use case: https://withpersona.com/lp/howiai

Where to find Hamel Husain:

Website: https://hamel.dev/

Twitter: https://twitter.com/HamelHusain

Course: https://maven.com/parlance-labs/evals

GitHub: https://github.com/hamelsmu

Where to find Claire Vo:

ChatPRD: https://www.chatprd.ai/

Website: https://clairevo.com/

LinkedIn: https://www.linkedin.com/in/clairevo/

X: https://x.com/clairevo

In this episode, we cover:

(00:00) Introduction to Hamel Husain

(03:05) The fundamentals: why data analysis is critical for AI products

(06:58) Understanding traces and examining real user interactions

(13:35) Error analysis: a systematic approach to finding AI failures

(17:40) Creating custom annotation systems for faster review

(22:23) The impact of this process

(25:15) Different types of evaluations

(29:30) LLM-as-a-Judge

(33:58) Improving prompts and system instructions

(38:15) Analyzing agent workflows

(40:38) Hamel’s personal AI tools and workflows

(48:02) Lightning round and final thoughts

Tools referenced:

• Claude: https://claude.ai/

• Braintrust: https://www.braintrust.dev/docs/start

• Phoenix: https://phoenix.arize.com/

• AI Studio: https://aistudio.google.com/

• ChatGPT: https://chat.openai.com/

• Gemini: https://gemini.google.com/

Other references:

• Who Validates the Validators? Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences: https://dl.acm.org/doi/10.1145/3654777.3676450

• Nurture Boss: https://nurtureboss.io

• Rechat: https://rechat.com/

• Your AI Product Needs Evals: https://hamel.dev/blog/posts/evals/

• A Field Guide to Rapidly Improving AI Products: https://hamel.dev/blog/posts/field-guide/

• Creating a LLM-as-a-Judge That Drives Business Results: https://hamel.dev/blog/posts/llm-judge/

• Lenny’s List on Maven: https://maven.com/lenny

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email jordan@penname.co.
