How to Build AI Evals in 2026 (Step-by-Step, No Hype)
Aakash Gupta
At a glance
WHAT IT’S REALLY ABOUT
Step-by-step evals workflow: traces, error analysis, and LLM judges
- The speakers argue that most real AI products need evals, and “no evals” claims often rely on upstream testing or informal dogfooding rather than rigorous measurement.
- They demonstrate starting with observability by collecting and reviewing real production traces (even via simple logging) to see what users actually experience beyond polished demos.
- They emphasize error analysis as the core leverage point: manually “open code” trace issues, then “axial code” them into actionable categories and count frequency to prioritize work.
- They show how to turn high-impact error categories into automated evaluators, including code-based checks for objective issues and LLM-as-judge prompts for subjective product failures.
- They stress that LLM judges must be validated against human labels using metrics beyond simple agreement (e.g., true positive/true negative rates) to avoid misleading confidence and stakeholder mistrust.
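The validation point above can be made concrete. A minimal sketch (not the speakers' actual tooling): given parallel human and judge labels where `True` means "this trace has the failure mode", compute true positive and true negative rates alongside raw agreement, since agreement alone hides class imbalance (a judge that always says "fine" looks accurate when most traces are fine).

```python
from typing import List


def judge_validation_metrics(human: List[bool], judge: List[bool]) -> dict:
    """Compare an LLM judge's labels against human labels.

    True = "trace exhibits the failure mode". True positive rate shows how
    often the judge catches real failures; true negative rate shows how
    often it correctly clears fine traces. Raw agreement alone can look
    high even when the judge misses most real failures.
    """
    tp = sum(h and j for h, j in zip(human, judge))
    tn = sum((not h) and (not j) for h, j in zip(human, judge))
    pos = sum(human)            # traces humans flagged as failures
    neg = len(human) - pos      # traces humans marked as fine
    return {
        "agreement": (tp + tn) / len(human),
        "true_positive_rate": tp / pos if pos else float("nan"),
        "true_negative_rate": tn / neg if neg else float("nan"),
    }
```

A judge with 80% agreement but a 30% true positive rate is mostly rubber-stamping the majority class, which is exactly the misleading confidence the speakers warn about.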
IDEAS WORTH REMEMBERING
Start with traces, not abstract metrics.
Review real production conversations (including tool calls/RAG/multi-turn) to see the messy failures that “helpfulness” scores and generic dashboards routinely miss.
You don’t need fancy observability to begin.
An observability platform can help, but logging to CSV/JSON/DataDog is sufficient if you can reliably inspect traces and attach notes to them.
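To illustrate how little tooling this requires, here is a hypothetical sketch of trace logging to a JSONL file (field names are my own, not from the talk): one JSON record per interaction, with a `notes` field reserved for error-analysis annotations.

```python
import json
import time
import uuid


def log_trace(path, user_input, model_output, tool_calls=None, notes=""):
    """Append one production trace as a JSON line.

    This is the bare minimum: enough structure to inspect the trace later
    and attach an error-analysis note to it by trace_id.
    """
    record = {
        "trace_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_input": user_input,
        "model_output": model_output,
        "tool_calls": tool_calls or [],  # RAG/tool steps, if any
        "notes": notes,                  # filled in during open coding
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["trace_id"]
```

Append-only JSONL keeps each trace self-contained and greppable, and loads straight into a spreadsheet or dataframe when it is time to review.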
Do “open coding” quickly to build intuition and a dataset.
Scan ~100 traces and write brief notes on what went wrong (or skip if fine) without debating root cause; speed and coverage matter more than perfection.
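A sketch of the sampling step, assuming traces were logged as one JSON object per line (as in the logging example above, though any format works): draw a random ~100 traces, then read each and jot a brief free-text note on what went wrong, leaving it blank if the trace is fine.

```python
import json
import random


def sample_for_open_coding(jsonl_path, n=100, seed=0):
    """Draw a random sample of logged traces for open coding.

    Random sampling (rather than cherry-picking recent or flagged traces)
    gives an honest picture of what users actually experience.
    """
    with open(jsonl_path, encoding="utf-8") as f:
        traces = [json.loads(line) for line in f]
    random.seed(seed)  # fixed seed so the review set is reproducible
    return random.sample(traces, min(n, len(traces)))
```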
Convert notes into categories (axial codes) that are specific and labelable.
Vague buckets like “quality” or “temporal issues” don’t help teams label consistently; use concrete, actionable categories (and a “none of the above” option) to discover missing buckets.
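One way to enforce that discipline in code (the category names here are hypothetical examples, not from the talk): keep a closed list of concrete, labelable axial codes plus a "none of the above" escape hatch, and reject anything outside it so vague labels like "quality" never creep into the dataset.

```python
# Hypothetical axial codes — concrete enough that two labelers
# would apply them the same way. "none_of_the_above" is the escape
# hatch that surfaces missing buckets during labeling.
AXIAL_CODES = [
    "wrong_tool_called",
    "hallucinated_citation",
    "ignored_user_constraint",
    "stale_retrieval_result",
    "none_of_the_above",
]


def axial_code(note: str, code: str) -> dict:
    """Attach a category label to an open-coding note.

    Rejects labels outside the agreed list, so vague buckets like
    "quality" can't sneak into the labeled dataset.
    """
    if code not in AXIAL_CODES:
        raise ValueError(
            f"unknown code {code!r}; add it to AXIAL_CODES or use none_of_the_above"
        )
    return {"note": note, "code": code}
```

When "none_of_the_above" starts accumulating, that is the signal to revisit the notes and mint a new, specific category.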
Count issues to escape prioritization paralysis.
Once errors are categorized, pivot tables (and optional hierarchical subcategories) reveal the most frequent failure modes and enable PM-driven prioritization by frequency and severity.
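The pivot itself is trivial once labels exist; a minimal stand-in for the spreadsheet version is a frequency count over the assigned codes, sorted most frequent first.

```python
from collections import Counter


def failure_mode_pivot(labels):
    """Count labeled failure modes, most frequent first.

    A one-line stand-in for a spreadsheet pivot table: the top entries
    are the failure modes worth automating evaluators for first.
    """
    return Counter(labels).most_common()
```

Frequency alone isn't the whole story (a rare but severe failure can outrank a common cosmetic one), but it turns a pile of notes into a ranked list a PM can act on.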
WORDS WORTH SAVING
“This is what your AI agents are actually doing out there in production.”
— Aakash Gupta
“If you try to put helpfulness score… it’s not gonna catch stuff like this very well at all.”
— Hamel Husain
“ChatGPT will say, ‘Yeah, absolutely,’ but it will miss all of this nuance.”
— Shreya Shankar
“The main thing that’s inhibiting people is not doing the error analysis.”
— Hamel Husain
“It’s almost a tragedy to separate the prompt from the product manager ’cause it’s English.”
— Hamel Husain
High-quality AI-generated summary created from a speaker-labeled transcript.