How I AI: Evals, error analysis, and better prompts - A systematic approach to improving your AI products
CHAPTERS
Hamel Husain’s core premise: quality comes from looking at data
Hamel frames AI product quality as a data analysis problem, not a “prompt magic” problem. The twist with LLMs is that the data is messy, stochastic, and often multi-step, but the fundamentals of product analytics still apply.
Case study setup: Nurture Boss virtual leasing assistant and why it’s hard to scale
Hamel introduces a real client, Nurture Boss, an AI assistant handling inbound leasing communications across channels (SMS, email, web chat). The team’s pain: prompt tweaks felt risky because they couldn’t tell if changes improved overall behavior or broke something else.
Traces and observability: capturing what actually happened in an AI interaction
Hamel explains “traces” as the log of end-to-end AI behavior: system prompt, user message(s), tool calls, tool responses, and final assistant output. He shows how observability tools (e.g., Braintrust, Arize Phoenix) make these sequences inspectable.
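A trace like this can be modeled as an ordered list of events. A minimal sketch, assuming a generic schema (the field names and the example leasing conversation here are illustrative, not any particular tool's format):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TraceEvent:
    """One step in an end-to-end AI interaction."""
    role: str                        # "system", "user", "assistant", or "tool"
    content: str                     # message text or serialized tool payload
    tool_name: Optional[str] = None  # set when the event is a tool call/response

@dataclass
class Trace:
    """A full trace: everything that happened in one interaction."""
    trace_id: str
    events: list[TraceEvent] = field(default_factory=list)

    def final_output(self) -> str:
        """Return the last assistant message, i.e., what the user actually saw."""
        for event in reversed(self.events):
            if event.role == "assistant":
                return event.content
        return ""

# Hypothetical leasing-assistant trace for illustration.
trace = Trace("t-001", [
    TraceEvent("system", "You are a leasing assistant."),
    TraceEvent("user", "can i tour sat?"),
    TraceEvent("tool", '{"slots": ["Sat 10am"]}', tool_name="check_availability"),
    TraceEvent("assistant", "Yes! Saturday at 10am is open. Want me to book it?"),
])
print(trace.final_output())
```

Keeping the whole sequence together, rather than just the final answer, is what makes it possible to find the *earliest* point where an interaction went wrong.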
Reality check from user logs: vague, typo-filled, ambiguous prompts
Reviewing real conversations reveals how differently users interact than builders expect. Hamel and Claire highlight an example where the user’s message is unclear, and the assistant responds with something plausibly helpful but likely wrong for the user’s intent.
Error analysis, step 1 (open coding): write quick notes on the first upstream failure
Hamel introduces error analysis as a simple, high-leverage process borrowed from classical ML practice. The first step is “open coding”: sample ~100 traces and write short notes describing what went wrong, stopping at the earliest (most causal) error in the chain.
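The sampling step can be as simple as a reproducible random draw; a sketch under that assumption (the placeholder corpus stands in for traces exported from an observability tool):

```python
import random

def sample_for_open_coding(traces, n=100, seed=42):
    """Draw a reproducible random sample of traces to review by hand."""
    rng = random.Random(seed)
    return rng.sample(traces, min(n, len(traces)))

# Placeholder corpus; in practice this comes from your logging/observability tool.
all_traces = [{"id": f"t-{i:04d}"} for i in range(1200)]

sample = sample_for_open_coding(all_traces)

# During review, write one short free-text note per trace, stopping at the
# earliest (most causal) failure in the chain rather than downstream symptoms.
notes = [{"trace_id": t["id"], "note": ""} for t in sample]
print(len(notes))
```

The notes stay deliberately unstructured at this stage; structure emerges in the next step when they are bucketed into categories.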
Error analysis, step 2: categorize notes and count—turn observations into priorities
After collecting notes, you bucket them into issue categories (optionally with LLM help) and then quantify frequency. Counting transforms qualitative review into a prioritized roadmap of fixes, replacing paralysis with clarity.
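Once notes carry category labels, the counting step is a one-liner; a sketch with invented category names for illustration:

```python
from collections import Counter

# Open-coding notes after bucketing into issue categories
# (both the traces and the categories here are hypothetical).
labeled_notes = [
    {"trace_id": "t-0001", "category": "missed_handoff_to_human"},
    {"trace_id": "t-0002", "category": "wrong_tour_date"},
    {"trace_id": "t-0003", "category": "wrong_tour_date"},
    {"trace_id": "t-0004", "category": "ignored_user_question"},
    {"trace_id": "t-0005", "category": "wrong_tour_date"},
]

counts = Counter(n["category"] for n in labeled_notes)

# The most frequent failure modes become the prioritized roadmap.
for category, count in counts.most_common():
    print(f"{category}: {count}")
```

Sorting by frequency is what turns a pile of qualitative observations into a ranked list of what to fix first.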
Building custom annotation UIs: reducing friction for faster review and labeling
Hamel explains that off-the-shelf observability UIs are helpful but sometimes too generic or slow for high-throughput review. For Nurture Boss, they quickly “vibe coded” a tailored annotation tool with filters, channel views, and lightweight labeling to speed analysis.
Impact of the process: clients get immediate quality gains and clearer next steps
Hamel notes many clients find error analysis alone transformative—sometimes enough to ship meaningful improvements. It also prevents premature obsession with eval tooling by clarifying which evals matter and what “good” looks like for real failures.
Choosing evaluation types: code-based checks vs reference-based vs subjective judging
With prioritized failures in hand, the next step is writing evaluations that match the problem type. Hamel distinguishes deterministic/code-based evals (unit-test-like), reference-based evals (known right answers), and LLM-judge evals for subjective or nuanced criteria.
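The first two types are plain functions over model output; a minimal sketch of both (the checks and reference answers are hypothetical examples, not from the episode):

```python
import re

def check_no_broken_placeholders(output: str) -> bool:
    """Deterministic, code-based eval (unit-test-like): fail if the reply
    leaked an unfilled template variable like {{property_name}}."""
    return re.search(r"\{\{.*?\}\}", output) is None

def check_matches_reference(output: str, reference: str) -> bool:
    """Reference-based eval: compare against a known-good answer.
    Exact match here; real checks are often fuzzier (normalization, overlap)."""
    return output.strip().lower() == reference.strip().lower()

assert check_no_broken_placeholders("Tours run daily at Maple Court.")
assert not check_no_broken_placeholders("Tours run daily at {{property_name}}.")
assert check_matches_reference("Pet deposit is $300.", "pet deposit is $300.")
```

Anything that can be checked this cheaply and deterministically should be; LLM judges are reserved for the criteria that genuinely need judgment.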
LLM-as-a-Judge done right: binary, task-specific, and validated against human labels
Hamel critiques vague dashboards (helpfulness/truthfulness scores) as hard to interpret and easy to misuse. Instead, he recommends judge prompts that output binary pass/fail for specific failure modes, and validating judge agreement with hand-labeled examples to build trust.
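Validating a judge against human labels can start with simple agreement rate; a sketch with hypothetical binary labels (True = pass, False = fail for one specific failure mode):

```python
def judge_agreement(judge_labels, human_labels):
    """Fraction of examples where the LLM judge's binary pass/fail verdict
    matches a human's hand label for the same trace."""
    assert len(judge_labels) == len(human_labels)
    matches = sum(j == h for j, h in zip(judge_labels, human_labels))
    return matches / len(judge_labels)

# Hypothetical verdicts on eight hand-labeled traces.
judge = [True, False, True, True, False, True, True, False]
human = [True, False, True, False, False, True, True, True]

print(f"agreement: {judge_agreement(judge, human):.0%}")  # agreement: 75%
```

If agreement is low, the judge prompt (not the product) is what needs iteration; the binary, per-failure-mode framing is what makes disagreements easy to inspect one trace at a time.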
Improving prompts and system instructions: fix obvious gaps, then iterate systematically
Once evals reveal failure clusters, teams decide what to change—prompting, retrieval, examples, tool specs, or eventually fine-tuning. Hamel emphasizes there’s no universal prompt trick; progress comes from targeted experimentation guided by measured errors (e.g., missing today’s date causing scheduling mistakes).
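The missing-date example reduces to a small, targeted prompt change; a sketch (the base prompt and formatting are illustrative assumptions):

```python
from datetime import date

def build_system_prompt(base_prompt: str) -> str:
    """Targeted fix for a measured failure mode: scheduling mistakes
    caused by the model not knowing today's date."""
    today = date.today().strftime("%A, %B %d, %Y")
    return f"{base_prompt}\n\nToday's date is {today}."

prompt = build_system_prompt("You are a leasing assistant for Maple Court.")
print(prompt)
```

The point is the workflow, not the trick: the error counts identified scheduling failures as frequent, inspection traced them to a missing fact, and the fix is verified by rerunning the relevant evals.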
Evaluating and analyzing agents: tool-to-tool handoffs, transition matrices, and workflow insight
For agentic systems, Hamel highlights advanced analysis techniques like mapping transitions between steps and identifying where failures cluster. Claire adds that the same telemetry helps product discovery—seeing where users seek value and where workflows bottleneck.
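Transition analysis starts by counting step-to-step pairs across traces; a sketch with hypothetical tool-call sequences (step names are invented for illustration):

```python
from collections import Counter

def transition_counts(step_sequences):
    """Count step -> step transitions across many agent traces, to see
    which paths are common and where failures cluster."""
    counts = Counter()
    for steps in step_sequences:
        for a, b in zip(steps, steps[1:]):
            counts[(a, b)] += 1
    return counts

# Hypothetical per-trace tool-call sequences, with "ERROR" as a terminal step.
sequences = [
    ["classify_intent", "check_availability", "book_tour"],
    ["classify_intent", "check_availability", "ERROR"],
    ["classify_intent", "answer_faq"],
]

for (src, dst), n in transition_counts(sequences).most_common():
    print(f"{src} -> {dst}: {n}")
```

Rendering these counts as a matrix or heatmap makes problem edges obvious at a glance, e.g. a step that frequently transitions into an error state, and the same data doubles as product telemetry about which workflows users actually exercise.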
Hamel’s personal AI workflows: Claude Projects, Gemini for video, and a monorepo for prompts
Hamel shares how he runs his business with AI: Claude Projects for repeated tasks (proposals, copywriting, course FAQs, legal), Gemini for turning videos into consumable notes, and a GitHub monorepo that centralizes prompts, rules, content, and tooling to avoid vendor lock-in.
Who should do annotation & a practical writing prompting tip: outline → draft → edit inline
In the closing lightning round, Hamel argues subject matter expertise is central—often PMs, sometimes ops/function experts, and data scientists as analysis scales. For writing quality, he recommends stepwise workflows and tools that support inline editing to create better examples and preserve human voice.