
How to Build AI Evals in 2026 (Step-by-Step, No Hype)

Hamel Husain and Shreya Shankar are back with the definitive guide to AI evals. Step-by-step walkthrough using real production data from Nurture Boss. Error analysis, LLM judges, and the mistakes 90% of teams make.

Full Writeup: https://www.news.aakashg.com/p/hamel-shreya-podcast-2
Transcript: https://www.aakashg.com/how-to-master-ai-evals-a-step-by-step-guide-with-hamel-husain-shreya-shankar/

----

Timestamps:
0:00 - Intro
2:09 - Why Every AI Product Needs Evals
3:11 - Real Example: Nurture Boss Case Study
5:26 - Starting with Observability
11:24 - Ad Start
13:05 - Ad End: Analyzing Traces
24:55 - Error Analysis Introduction
27:00 - Axial Coding Explained
30:53 - Ad Start
32:40 - Ad End: Counting Issues
42:26 - Building Your LLM Judge
48:02 - Measuring the Judge
56:38 - PM vs AI Engineer Roles
1:01:29 - Common Mistakes to Avoid
1:06:31 - Outro

----

🏆 Thanks to our sponsors:
1. The AI Evals Course for PMs & Engineers: You get $800 off with this link: https://maven.com/parlance-labs/evals?promoCode=ag-product-growth
2. Vanta: Automate compliance. Get $1,000 off with my link: https://www.vanta.com/lp/demo-1k?utm_campaign=1k_offer&utm_source=product-growth&utm_medium=podcast
3. Jira Product Discovery: Plan with purpose, ship with confidence - https://www.atlassian.com/software/jira/product-discovery
4. Land PM Job: a 12-week experience to master getting a PM job - https://www.landpmjob.com/
5. Pendo: the #1 Software Experience Management Platform - http://www.pendo.com/aakash

----

Key Takeaways:
1. AI evals are the #1 most important new skill for PMs - Even the Claude Code team does evals upstream. For custom applications, systematic evaluation is non-negotiable. Dogfooding alone isn't enough at scale.
2. Error analysis is the secret weapon most teams skip - Looking at 100 traces teaches you more than any generic metric. Hamel: "If you try to use helpfulness scores, the LLM won't catch the real product issues."
3. Use observability tools but don't depend on them completely - Braintrust, LangSmith, and Arize all work. But Shreya and Hamel teach students to vibe-code their own trace viewers. Sometimes CSV files are enough to start.
4. Never use agreement as your eval metric - It's a trap. A judge that always says "pass" can have 90% accuracy if failures are rare. Use TPR (true positive rate) and TNR (true negative rate) instead.
5. Open coding then axial coding reveals patterns - Write notes on 100 traces without root-cause analysis. Then categorize into 5-6 actionable themes. Use LLMs to help but refine manually.
6. Product managers must do the error analysis themselves - Don't outsource it to developers. Engineers lack domain context. Hamel: "It's almost a tragedy to separate the prompt from the product manager because it's English."
7. Real traces reveal what demos hide - ChatGPT said the assistant was correct but missed: wrong bathroom configuration, markdown in SMS, double-booked tours, ignored handoff requests.
8. Binary scores beat 1-5 scales for LLM judges - Easier to validate alignment. Business decisions are binary anyway. LLMs struggle with nuanced numerical scoring.
9. Code-based evals for formatting, LLM judges for subjective calls - Markdown in text messages? Write a simple assertion. Human-handoff quality? That needs an LLM judge with a proper rubric.
10. Start with traces even before launch - Dogfood your own app. Recruit friends as beta testers. Generate synthetic inputs only as a last resort. Error analysis works best with real user behavior.
----

👨‍💻 Where to find Hamel Husain:
Website: https://hamel.dev
Twitter/X: https://x.com/HamelHusain
Course: https://evals.info

👨‍💻 Where to find Shreya Shankar:
Website: https://www.shreya-shankar.com
Twitter/X: https://x.com/sh_reya
Course: https://evals.info

👨‍💻 Where to find Aakash:
Twitter: https://www.x.com/aakashg0
LinkedIn: https://www.linkedin.com/in/aagupta/
Newsletter: https://www.news.aakashg.com

#aievals #aipm #productmanagement

----

🧠 About Product Growth: The world's largest podcast focused solely on product + growth, with 200K+ listeners.

🔔 Subscribe and turn on notifications to get more videos like this.

Host: Aakash Gupta · Guests: Hamel Husain, Shreya Shankar
Jan 14, 2026 · 1h 7m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Step-by-step evals workflow: traces, error analysis, and LLM judges

  1. The speakers argue that most real AI products need evals, and “no evals” claims often rely on upstream testing or informal dogfooding rather than rigorous measurement.
  2. They demonstrate starting with observability by collecting and reviewing real production traces (even via simple logging) to see what users actually experience beyond polished demos.
  3. They emphasize error analysis as the core leverage point: manually “open code” trace issues, then “axial code” them into actionable categories and count frequency to prioritize work.
  4. They show how to turn high-impact error categories into automated evaluators, including code-based checks for objective issues and LLM-as-judge prompts for subjective product failures.
  5. They stress that LLM judges must be validated against human labels using metrics beyond simple agreement (e.g., true positive/true negative rates) to avoid misleading confidence and stakeholder mistrust.
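To make that last validation step concrete, here is a minimal Python sketch, assuming you have human pass/fail labels and judge verdicts for the same traces (the record shape and field names are hypothetical, not from the episode):

```python
# Validate an LLM judge against human labels using TPR/TNR rather than raw
# agreement. With rare failures, a judge that always says "pass" scores high
# accuracy but has a TNR of 0 -- exactly the trap discussed in the episode.
# Field names (human_label, judge_label) are illustrative.

def judge_metrics(labeled: list[dict]) -> dict:
    tp = sum(1 for r in labeled if r["human_label"] == "pass" and r["judge_label"] == "pass")
    fn = sum(1 for r in labeled if r["human_label"] == "pass" and r["judge_label"] == "fail")
    tn = sum(1 for r in labeled if r["human_label"] == "fail" and r["judge_label"] == "fail")
    fp = sum(1 for r in labeled if r["human_label"] == "fail" and r["judge_label"] == "pass")
    return {
        "tpr": tp / (tp + fn) if (tp + fn) else float("nan"),  # judge catches real passes
        "tnr": tn / (tn + fp) if (tn + fp) else float("nan"),  # judge catches real failures
        "accuracy": (tp + tn) / len(labeled),  # misleading when failures are rare
    }

# 90 real passes and 10 real failures, judged by an always-"pass" judge:
rows = [{"human_label": "pass", "judge_label": "pass"}] * 90 \
     + [{"human_label": "fail", "judge_label": "pass"}] * 10
print(judge_metrics(rows))  # accuracy 0.9, TPR 1.0, but TNR 0.0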

IDEAS WORTH REMEMBERING

5 ideas

Start with traces, not abstract metrics.

Review real production conversations (including tool calls/RAG/multi-turn) to see the messy failures that “helpfulness” scores and generic dashboards routinely miss.

You don’t need fancy observability to begin.

An observability platform can help, but logging to CSV/JSON/DataDog is sufficient if you can reliably inspect traces and attach notes to them.
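For instance, a minimal DIY logging sketch in Python, assuming you control the application code and can append each interaction to a JSONL file (the schema here is illustrative, not from the episode):

```python
import json
import time
import uuid

def log_trace(path: str, messages: list[dict], tool_calls: list[dict], notes: str = "") -> None:
    """Append one trace (a full multi-turn interaction, including tool calls)
    to a JSONL file so it can be inspected and annotated later."""
    record = {
        "trace_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "messages": messages,      # full conversation, not just the last turn
        "tool_calls": tool_calls,  # e.g. RAG lookups, scheduling calls
        "notes": notes,            # free-text field for open coding later
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_trace(
    "traces.jsonl",
    messages=[
        {"role": "user", "content": "Do you have any 2-bed units?"},
        {"role": "assistant", "content": "Yes! We have 2-bed/1-bath units from $1,400."},
    ],
    tool_calls=[{"name": "search_listings", "args": {"beds": 2}}],
)
```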

Do “open coding” quickly to build intuition and a dataset.

Scan ~100 traces and write brief notes on what went wrong (or skip if fine) without debating root cause; speed and coverage matter more than perfection.
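One way that pass can look in practice, assuming traces logged as JSONL (as in the sketch above); the terminal-based workflow is an illustration, not the hosts' prescribed tooling:

```python
import json

# Minimal open-coding loop: print each trace, type a short note on what went
# wrong (or press Enter to skip), and save the notes alongside the traces.
# No root-cause analysis at this stage -- just fast, first-pass observations.
with open("traces.jsonl") as f:
    traces = [json.loads(line) for line in f]

for trace in traces[:100]:  # ~100 traces is the suggested starting point
    for msg in trace["messages"]:
        print(f'{msg["role"]}: {msg["content"]}')
    trace["notes"] = input("Note (Enter if fine): ").strip()
    print("-" * 40)

with open("traces_open_coded.jsonl", "w") as f:
    for trace in traces:
        f.write(json.dumps(trace) + "\n")
```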

Convert notes into categories (axial codes) that are specific and labelable.

Vague buckets like “quality” or “temporal issues” don’t help teams label consistently; use concrete, actionable categories (and a “none of the above” option) to discover missing buckets.
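A hedged sketch of LLM-assisted axial coding, using the OpenAI Python client as an illustrative choice (the episode does not prescribe a specific API); the returned categories are a draft to refine manually:

```python
import json
from openai import OpenAI  # assumes `pip install openai` and OPENAI_API_KEY set

client = OpenAI()

with open("traces_open_coded.jsonl") as f:
    notes = [json.loads(line)["notes"] for line in f]
notes = [n for n in notes if n]  # keep only traces where something went wrong

# Ask a model for a first draft of axial codes; treat its output as a
# starting point to edit by hand, not a final taxonomy.
prompt = (
    "Group these error notes from an AI assistant into 5-6 specific, "
    "labelable categories (no vague buckets like 'quality'). Include a "
    "'none of the above' category.\n\n" + "\n".join(f"- {n}" for n in notes)
)
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # review and refine these manually
```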

Count issues to escape prioritization paralysis.

Once errors are categorized, pivot tables (and optional hierarchical subcategories) reveal the most frequent failure modes and enable PM-driven prioritization by frequency and severity.
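A minimal pandas sketch of that counting step, assuming labeled traces exported to a CSV with hypothetical column names:

```python
import pandas as pd

# Count labeled failure modes to drive prioritization. Assumes a CSV where
# each open-coded trace received a category (and optional subcategory) label.
df = pd.read_csv("traces_labeled.csv")  # columns: trace_id, category, subcategory

# Frequency of each top-level failure mode:
print(df["category"].value_counts())

# Optional hierarchical view, category x subcategory:
print(pd.pivot_table(df, index="category", columns="subcategory",
                     values="trace_id", aggfunc="count", fill_value=0))
```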

WORDS WORTH SAVING

5 quotes

“This is what your AI agents are actually doing out there in production.”

Aakash Gupta

“If you try to put helpfulness score… it’s not gonna catch stuff like this very well at all.”

Hamel Husain

“ChatGPT will say, ‘Yeah, absolutely,’ but it will miss all of this nuance.”

Shreya Shankar

“The main thing that’s inhibiting people is not doing the error analysis.”

Hamel Husain

“It’s almost a tragedy to separate the prompt from the product manager ’cause it’s English.”

Hamel Husain

TOPICS COVERED

  - Why evals are necessary beyond demos and vibe checks
  - Tracing and observability (tools vs DIY logging)
  - Open coding: fast note-taking on trace failures
  - Axial coding: categorizing issues into actionable buckets
  - Counting and prioritization with pivot tables and subcategories
  - Code-based evals vs LLM-as-judge evals
  - Validating judges: TPR/TNR vs accuracy/overall agreement
  - PM vs AI engineer responsibilities and prompt ownership
  - Common mistakes: skipping error analysis, outsourcing judgment, generic metrics
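To make the code-based-evals topic concrete, here is a simple assertion-style check for one objective failure mode mentioned in the episode, markdown leaking into SMS replies (the regex is an illustrative sketch, not the team's actual check):

```python
import re

# Code-based eval for an objective formatting failure: markdown syntax
# appearing in an SMS reply. Matches bold/underline markers, heading
# prefixes, and markdown links. The pattern is a hypothetical example.
MARKDOWN_PATTERNS = re.compile(r"(\*\*|__|^#{1,6}\s|\[.+?\]\(.+?\))", re.MULTILINE)

def sms_has_markdown(text: str) -> bool:
    return bool(MARKDOWN_PATTERNS.search(text))

assert sms_has_markdown("**Great news!** Your tour is booked.")
assert not sms_has_markdown("Great news! Your tour is booked.")
```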
