How I AI

Evals, error analysis, and better prompts: A systematic approach to improving your AI products

Hamel Husain, an AI consultant and educator, shares his systematic approach to improving AI product quality through error analysis, evaluation frameworks, and prompt engineering. In this episode, he demonstrates how product teams can move beyond “vibe checking” their AI systems to implement data-driven quality improvement processes that identify and fix the most common errors. Using real examples from client work with Nurture Boss (an AI assistant for property managers), Hamel walks through practical techniques that product managers can implement immediately to dramatically improve their AI products.

*What you’ll learn:*

  1. A step-by-step error analysis framework that helps identify and categorize the most common AI failures in your product
  2. How to create custom annotation systems that make reviewing AI conversations faster and more insightful
  3. Why binary evaluations (pass/fail) are more useful than arbitrary quality scores for measuring AI performance
  4. Techniques for validating your LLM judges to ensure they align with human quality expectations
  5. A practical approach to prioritizing fixes based on frequency counting rather than intuition
  6. Why looking at real user conversations (not just ideal test cases) is critical for understanding AI product failures
  7. How to build a comprehensive quality system that spans from manual review to automated evaluation

*Brought to you by:*

  • GoFundMe Giving Funds—One account. Zero hassle: https://gofundme.com/howiai
  • Persona—Trusted identity verification for any use case: https://withpersona.com/lp/howiai

*Where to find Hamel Husain:*

  • Website: https://hamel.dev/
  • Twitter: https://twitter.com/HamelHusain
  • Course: https://maven.com/parlance-labs/evals
  • GitHub: https://github.com/hamelsmu

*Where to find Claire Vo:*

  • ChatPRD: https://www.chatprd.ai/
  • Website: https://clairevo.com/
  • LinkedIn: https://www.linkedin.com/in/clairevo/
  • X: https://x.com/clairevo

*In this episode, we cover:*

(00:00) Introduction to Hamel Husain
(03:05) The fundamentals: why data analysis is critical for AI products
(06:58) Understanding traces and examining real user interactions
(13:35) Error analysis: a systematic approach to finding AI failures
(17:40) Creating custom annotation systems for faster review
(22:23) The impact of this process
(25:15) Different types of evaluations
(29:30) LLM-as-a-Judge
(33:58) Improving prompts and system instructions
(38:15) Analyzing agent workflows
(40:38) Hamel’s personal AI tools and workflows
(48:02) Lightning round and final thoughts

*Tools referenced:*

  • Claude: https://claude.ai/
  • Braintrust: https://www.braintrust.dev/docs/start
  • Phoenix: https://phoenix.arize.com/
  • AI Studio: https://aistudio.google.com/
  • ChatGPT: https://chat.openai.com/
  • Gemini: https://gemini.google.com/

*Other references:*

  • Who Validates the Validators? Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences: https://dl.acm.org/doi/10.1145/3654777.3676450
  • Nurture Boss: https://nurtureboss.io
  • Rechat: https://rechat.com/
  • Your AI Product Needs Evals: https://hamel.dev/blog/posts/evals/
  • A Field Guide to Rapidly Improving AI Products: https://hamel.dev/blog/posts/field-guide/
  • Creating a LLM-as-a-Judge That Drives Business Results: https://hamel.dev/blog/posts/llm-judge/
  • Lenny’s List on Maven: https://maven.com/lenny

_Production and marketing by https://penname.co/._
_For inquiries about sponsoring the podcast, email jordan@penname.co._
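The core loop in the episode (binary pass/fail labels plus frequency counting of failure modes) is simple enough to sketch in a few lines. This is a minimal illustration with hypothetical annotated traces and made-up failure categories, not Hamel's actual tooling:

```python
from collections import Counter

# Hypothetical annotated traces: each reviewed conversation gets a
# binary pass/fail label and, on failure, a free-text failure mode.
traces = [
    {"id": 1, "passed": True,  "failure_mode": None},
    {"id": 2, "passed": False, "failure_mode": "hallucinated availability"},
    {"id": 3, "passed": False, "failure_mode": "missed handoff to human"},
    {"id": 4, "passed": False, "failure_mode": "hallucinated availability"},
    {"id": 5, "passed": True,  "failure_mode": None},
]

# Binary pass rate -- no arbitrary 1-5 quality scores, just pass/fail.
pass_rate = sum(t["passed"] for t in traces) / len(traces)

# Frequency-count failure modes so the most common error, not
# intuition, decides what gets fixed first.
top_failures = Counter(
    t["failure_mode"] for t in traces if not t["passed"]
).most_common()

print(f"pass rate: {pass_rate:.0%}")
print("fix first:", top_failures)
```

The point of the sketch is the prioritization: `most_common()` surfaces the highest-frequency failure mode at the top of the list, which is the fix that moves the pass rate most.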

Claire Vo (host) · Hamel Husain (guest)
Oct 13, 2025 · 54m · Watch on YouTube ↗

Episode Details

EPISODE INFO

Released
October 13, 2025
Duration
54m
Channel
How I AI
Watch on YouTube ↗


SPEAKERS

  • Claire Vo

    host
  • Hamel Husain

    guest

EPISODE SUMMARY

In this episode of How I AI, host Claire Vo talks with Hamel Husain about how to systematically improve AI products with traces, error analysis, and evals. Hamel argues the biggest unlock for higher-quality AI products is the same as classic product work: look at real data—especially real user interactions—rather than relying on “vibe checks.”
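One of the episode's recommendations (validating your LLM judges) can be made concrete: before trusting an automated pass/fail judge, score a set of traces with both the judge and a human, then measure agreement. A minimal sketch with made-up labels, assuming `True` means the trace passed:

```python
# Hypothetical human labels vs. LLM-judge verdicts on the same traces.
human = [True, False, True, True, False, False, True, False]
judge = [True, False, True, False, False, True, True, False]

# Overall agreement between judge and human.
agreement = sum(h == j for h, j in zip(human, judge)) / len(human)

# Agreement on failures matters most: a judge that misses real
# failures gives false confidence in the product.
failure_pairs = [(h, j) for h, j in zip(human, judge) if not h]
failure_recall = sum(not j for _, j in failure_pairs) / len(failure_pairs)

print(f"overall agreement: {agreement:.0%}")
print(f"failure recall:    {failure_recall:.0%}")
```

Only once these numbers are acceptably high does it make sense to let the judge replace manual review at scale; until then, the disagreements themselves are good material for refining the judge's prompt.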

RELATED EPISODES

Claude Code Just Got WAY More Powerful

Quests, token leaderboards, and a skills marketplace: the elite AI adoption playbook | John Kim

The internal AI tool that's transforming how Stripe designs products | Owen Williams

A complete beginner's guide to coding with AI: From PRD to generating your very first lines of code

How Microsoft's AI VP automates everything with Warp | Marco Casalaina

How to turn meeting notes into prototypes that your sales team can immediately demo to customers
