At a glance
WHAT IT’S REALLY ABOUT
A five-skill crash course for becoming an AI product manager
- An AI PM is framed as either using AI to accelerate PM workflows or building AI into products, and Aman argues virtually every PM role will become “X + AI” rather than a separate job track.
- The core learning path starts with AI prototyping using code-capable agents (notably Cursor) to build a working agentic app quickly, while building comfort with debugging and iteration.
- Observability is positioned as the bridge from “it works” to “we understand why,” using tracing to visualize agent graphs, tool calls, prompts, latency, and failure points in production-like workflows.
- Evals are presented as the mechanism to move from subjective “vibe coding” to measurable product quality, combining human labels, code checks, and LLM-as-judge grading—with emphasis on validating the judge against human annotations.
- Prompt engineering, RAG, and fine-tuning are compared by goal, effort, and impact, and the episode closes with career guidance: build side projects, don’t wait for better models, avoid over-automating early, and invest two hours weekly in tools + intuition + application.
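The eval step above hinges on one check: before trusting an LLM-as-judge grader, measure how often it agrees with human labels. A minimal sketch, using hypothetical labels (real ones would come from annotating transcripts and from prompting an LLM grader):

```python
# Hypothetical example labels; in practice, human labels come from
# reviewing real outputs and judge labels from an LLM grading prompt.
human = ["pass", "fail", "pass", "pass", "fail", "pass"]
judge = ["pass", "fail", "fail", "pass", "fail", "pass"]

def agreement(human_labels, judge_labels):
    """Fraction of examples where the LLM judge matches the human label.
    If this is low, fix the judge prompt before trusting its scores."""
    matches = sum(h == j for h, j in zip(human_labels, judge_labels))
    return matches / len(human_labels)

print(f"judge/human agreement: {agreement(human, judge):.0%}")  # 83%
```

Once agreement is high enough, the judge can grade at scale and the human labels become a periodic spot-check rather than the bottleneck.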
IDEAS WORTH REMEMBERING
5 ideas
Treat "AI PM" as an overlay on your domain, not a new identity.
Aman’s model is “fintech × AI” or “healthcare × AI,” where AI accelerates existing PM strengths (domain insight, customer understanding) rather than replacing them.
Start with prototyping to compress learning time and raise credibility.
By building a real prototype (even messy) with tools like Cursor, PMs learn the stack through iteration—prompts, code, dependencies, and debugging—while producing an artifact they can demo.
Cursor is slower at true “0→1 UI,” but wins for depth and control.
Bolt/Lovable/v0 can produce quick mock UIs, but Cursor (VS Code fork) enables deeper edits, agent frameworks, file-level control, and iterative expansion beyond a shallow demo.
Expect things to break; your real skill is recovery loops.
The episode repeatedly shows dependency/version issues (Python packages, Node engines, ports), and the recommended workflow is to copy errors from the terminal, paste them to the agent, and iterate calmly.
Observability turns an agent demo into an engineerable system.
Tracing (often just installing a package and adding decorators) provides a visual graph of parallel agents, tool calls, prompts, and timings—making it possible to debug latency, cost, and correctness.
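The "install a package, add decorators" pattern can be sketched without any specific vendor's SDK. This is a minimal stand-in, assuming a hypothetical in-memory `TRACE_LOG`; real tracing packages export spans to a visual graph UI instead, but the decorator shape is the same:

```python
import functools
import time

TRACE_LOG = []  # hypothetical sink; real tools ship spans to a UI

def trace(step_name):
    """Record the name, latency, and status of each decorated agent step."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            status = "ok"
            try:
                return fn(*args, **kwargs)
            except Exception:
                status = "error"
                raise
            finally:
                TRACE_LOG.append({
                    "step": step_name,
                    "latency_s": round(time.perf_counter() - start, 4),
                    "status": status,
                })
        return wrapper
    return decorator

@trace("search_tool")
def search(query):
    return f"results for {query}"

search("agent frameworks")
print(TRACE_LOG)  # one record per traced call: step, latency, status
```

Wrapping each tool call and agent step this way is what turns a black-box demo into something you can interrogate for latency, cost, and failure points.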
WORDS WORTH SAVING
5 quotes
I think every PM will become some flavor of AI PM, either using those tools or building around them if you aren't already.
— Aman Khan
Being an AI PM is not an either/or. I really view it more as an X, meaning you can think of yourself as a fintech × AI PM or a healthcare × AI PM.
— Aman Khan
Don't be scared about things breaking. They're going to break. What matters is how you can work with the agent to fix your problems.
— Aman Khan
I like to joke, it's like going from vibe coding to thrive coding because you're going one step deeper, right?
— Aman Khan
What if evals were your requirements instead of your AI product?
— Aman Khan
High quality AI-generated summary created from speaker-labeled transcript.