How I AI: Evals, error analysis, and better prompts - a systematic approach to improving your AI products
At a glance
WHAT IT’S REALLY ABOUT
Systematically improve AI products with traces, error analysis, and evals
- Hamel argues the biggest unlock for higher-quality AI products is the same as classic product work: look at real data—especially real user interactions—rather than relying on “vibe checks.”
- He demonstrates how “traces” (logged multi-step AI interactions including prompts, tool calls, and outputs) make failures debuggable and reveal surprising user behavior (vague, typo-heavy, ambiguous inputs).
- The core method is error analysis: sample ~100 traces, write brief human notes on the first upstream failure, then categorize and count issues to create a prioritized defect backlog.
- Only after this grounding should teams write evals: use code-based checks for objective issues, and LLM-as-a-judge for subjective ones—designed as specific binary pass/fail tests and validated against human labels to avoid misleading dashboards.
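A quick way to make that last point concrete: before an LLM judge goes on a dashboard, score its binary verdicts against a sample of human labels. Below is a minimal sketch in Python; the assumption that labels are plain pass/fail booleans keyed by trace ID is illustrative, not from the episode.

```python
def judge_agreement(human: dict[str, bool], judge: dict[str, bool]) -> dict[str, float]:
    """Compare an LLM judge's binary pass/fail verdicts against human labels
    on the same traces. Assumes the sample contains both passes and failures."""
    ids = human.keys() & judge.keys()
    passes = [i for i in ids if human[i]]
    fails = [i for i in ids if not human[i]]
    return {
        "agreement": sum(human[i] == judge[i] for i in ids) / len(ids),
        # Report pass and fail agreement separately: a judge that passes
        # everything can still post a high overall agreement number.
        "agreement_on_passes": sum(judge[i] for i in passes) / len(passes),
        "agreement_on_fails": sum(not judge[i] for i in fails) / len(fails),
    }
```

If per-class agreement is low, the judge prompt needs iteration before anyone should trust its numbers.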
IDEAS WORTH REMEMBERING
5 ideas
Real user traces are the foundation of AI quality work.
Before you optimize prompts or add evals, you need logs of what users actually do (including messy, ambiguous language) and what the system actually executed (system prompt, tool calls, retrieved context).
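In code, a trace is just a structured log of one interaction end to end. The field names below are an assumption for illustration; observability tools each have their own schema, but the shape is roughly this.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    kind: str    # e.g. "llm_call", "tool_call", or "retrieval"
    input: str   # prompt text or tool arguments
    output: str  # completion, tool result, or retrieved context

@dataclass
class Trace:
    trace_id: str
    user_message: str    # the raw user input, typos and all
    system_prompt: str   # exactly what the model was given
    steps: list[Step] = field(default_factory=list)
    final_response: str = ""  # what the user actually saw
```

Logging the system prompt and every intermediate step is what makes a failure debuggable later, rather than a mysterious final answer.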
Error analysis makes an intractable problem tractable.
Randomly sample a manageable set (e.g., ~100 traces), write one-sentence notes, and stop at the most upstream failure to avoid getting lost in downstream artifacts of earlier mistakes.
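As a sketch (the function is hypothetical), the sampling step is a few lines; the discipline is in the note-taking rule that follows it.

```python
import random

def sample_for_review(traces: list, n: int = 100, seed: int = 0) -> list:
    """Draw a reproducible random sample of traces for open-ended review."""
    rng = random.Random(seed)  # fixed seed so teammates review the same set
    return rng.sample(traces, min(n, len(traces)))

# For each sampled trace: write ONE short note on the first upstream failure,
# then stop. Downstream errors are often just artifacts of that first mistake.
```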
Categorize and count failures to build a prioritized roadmap.
After open-coded notes, bucket issues (manually or with an LLM) and count frequency; this turns “the model is weird” into a ranked list like “handoff failures” and “tour scheduling errors.”
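Once the notes exist, the categorize-and-count step can be as plain as a Counter. The notes and category names below are invented to echo the episode's examples.

```python
from collections import Counter

notes = [
    "failed to hand off to a human when the user asked",
    "offered a tour slot outside office hours",
    "failed to hand off to a human when the user asked",
    "ignored the bedroom count the user specified",
]

# Map each open-coded note to a bucket (drafted by hand here; an LLM can
# propose the buckets, but a human should sanity-check the mapping).
bucket = {
    notes[0]: "handoff failure",
    notes[1]: "tour scheduling error",
    notes[3]: "unmet user constraint",
}

counts = Counter(bucket[n] for n in notes)
print(counts.most_common())
# [('handoff failure', 2), ('tour scheduling error', 1), ('unmet user constraint', 1)]
```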
Custom annotation UIs can be worth it to remove friction.
Off-the-shelf observability tools work, but lightweight internal tools tailored to your channels and filters can speed human review, standardize labels, and improve throughput.
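Even a terminal loop captures much of the value; the sketch below (function and field names are assumptions, reusing the trace shape from above as plain dicts) shows how little "custom annotation tool" can mean.

```python
import json

def annotate(traces: list[dict], out_path: str = "labels.jsonl") -> None:
    """Bare-bones annotator: show each trace, capture a one-line note."""
    with open(out_path, "a") as f:
        for t in traces:
            print(f"\n--- trace {t['trace_id']} ---")
            print("user:", t["user_message"])
            print("bot: ", t["final_response"])
            note = input("first upstream failure (Enter = looks fine): ").strip()
            f.write(json.dumps({"trace_id": t["trace_id"], "note": note}) + "\n")
```

The internal tools described in the episode add channel filters and standardized labels on top of this idea; the point is removing every second of friction between a reviewer and the next trace.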
Write evals only after you know what to measure.
There are infinite possible evals; error analysis tells you which ones matter. In the Nurture Boss example, they wrote evals specifically for tour scheduling and transfer/handoff behaviors.
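A code-based check for the handoff behavior, for instance, can be a single binary function per trace. Everything here is an assumption for illustration: the field names, the regex, and the transfer_to_human tool name are hypothetical, not Nurture Boss's actual implementation.

```python
import re

HANDOFF_REQUEST = re.compile(r"\b(human|agent|person|representative)\b", re.I)

def eval_handoff(trace: dict) -> bool:
    """Pass/fail: if the user asked for a human, was a handoff tool called?"""
    if not HANDOFF_REQUEST.search(trace["user_message"]):
        return True  # check doesn't apply; filter such traces out upstream if preferred
    return any(
        s["kind"] == "tool_call" and s.get("name") == "transfer_to_human"
        for s in trace["steps"]
    )
```

Keeping each eval this narrow and binary is what makes a failure rate interpretable: one number per named defect from the error analysis, not a blended quality score.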
WORDS WORTH SAVING
5 quotes
The most important thing is looking at data.
— Hamel Husain
Just spend three hours of your afternoon, go through, read some of these chats, look at some of them with your human eyes... and get to work.
— Claire Vo
Error analysis has two steps. The first step is writing notes... basically like journaling what is wrong.
— Hamel Husain
The last thing you wanna do is throw up a judge on the dashboard... and people don't know if they can trust it.
— Hamel Husain
If you do all this eval stuff, fine-tuning is basically free... those difficult examples... are exactly the stuff you want to fine-tune on.
— Hamel Husain