Aakash Gupta

If you only have 2 hrs, this is how to become an AI PM

Every PM has to build AI features these days. And that means a completely new skill set: AI prototyping, observability akin to telemetry, AI evals as the new PRDs, understanding RAG vs fine-tuning vs prompt engineering, and working with AI engineers. So this week, I bring you a 2-hour crash course in becoming a better AI PM. I teamed up with Aman Khan. Among people creating AI PM content, Aman is one of the most insightful and informed, because he's been an AI PM since 2019. He worked at Cruise on self-driving cars, he's worked with Spotify on their AI systems, and now he works at Arize, one of the leading observability and evals companies.

🎥 Timestamps:
Can Anyone Become AIPM? - 0:00
5 AIPM Skills Overview - 5:52
Skill 1: AI Prototyping - 6:31
Ad: Miro - 13:35
Ad: Atlassian - 14:50
Building Trip Planner Agent - 15:27
Ad: Maven - 29:46
Ad: Amplitude - 30:40
Skill 2: Observability - 50:34
Skill 3: Evals - 1:10:10
RAG vs Fine-Tuning vs Prompt Engineering - 1:29:54
Bolt Teardown - 1:30:32
Skill 5: Working With Engineers - 1:43:24
Don't Make These Mistakes - 1:48:33
2 Hours Weekly Plan - 1:53:55
AIPM Jobs Exist - 1:57:45
Aman's Resources - 2:00:48
Outro - 2:04:00

Podcast transcript: https://www.news.aakashg.com/p/aman-khan-podcast

💼 Check out our sponsors:
1. Miro: The innovation workspace is your team’s new canvas - http://miro.pxf.io/PO4WZX
2. Jira Product Discovery: Plan with purpose, ship with confidence - https://www.atlassian.com/software/jira/product-discovery
3. Maven: Get $100 off Aman’s course with my code ‘AAKASHxMAVEN’ - https://maven.com/aman-khan/thriving-as-an-ai-pm?utm_campaign=aakash-gupta&utm_medium=affiliate&utm_source=maven&promoCode=AAKASHxMAVEN
4. Amplitude: Test out the #1 product analytics and replay tool in the market - https://bit.ly/4hl25RG

👀 Where to Find Aman:
LinkedIn: https://www.linkedin.com/in/amanberkeley/
X: https://x.com/_amankhan
Substack: https://amankhan1.substack.com/
Company: https://arize.com/
Course: https://maven.com/aman-khan/thriving-as-an-ai-pm?utm_campaign=aakash-gupta&utm_medium=affiliate&utm_source=maven&promoCode=AAKASHxMAVEN

👨‍💻 Where to find Aakash:
Twitter: https://www.twitter.com/aakashg0
LinkedIn: https://www.linkedin.com/in/aagupta/
Instagram: https://www.instagram.com/aakashg0/

🔑 Key Takeaways:
1. Cursor beats Bolt for serious AI PMs. While Bolt is great for quick mockups, Cursor gives you the control you need to build real agent systems and understand what's happening under the hood.
2. Observability comes before evals. Just like regular products need telemetry for analytics, AI products need traces for evals. Point Cursor to documentation and it adds what you need.
3. Vibe coding doesn't scale. Looking at outputs and deciding if they "feel good" works for prototypes, but not production. You need systematic evals to measure what "good" actually means.
4. Most PMs fine-tune too early. Aman showed a prompt outperforming a fine-tuned model. Start with prompting (95% of results), add RAG for external data, and only fine-tune for cost/speed.
5. Your evals need evals. When your LLM judge marks outputs as "friendly" while your human labels say "robotic," that mismatch tells you exactly where to improve your system.
6. Use text labels, not numbers. LLMs understand "friendly vs robotic" better than 1-5 scales. They're trained on language, not mathematics.
7. AI engineers want data, not docs. Stop sending Google Docs with requirements. They want you labeling datasets and defining success through evals.
8. Bolt is just a really good prompt. Aman tore down Bolt's architecture: it's system prompts + tool calling + code generation. The "magic" isn't magic.
9. Side projects are your interview hack. When Aman asks "What are you building?" he can immediately gauge curiosity, initiative, and hands-on experience.
10. Don't automate yourself too early. Use AI as a second brain for analysis, but don't try to automate your entire job. Learn to work with reasoning models to push your thinking.
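Takeaways 5 and 6 can be made concrete with a small sketch: compare an LLM judge's text labels against human labels and measure where they disagree. This is a toy illustration, not the episode's tooling; in practice `judge_labels` would come from an LLM-as-judge prompt, and here they are hard-coded so the example runs on its own.

```python
# Hypothetical sketch of "your evals need evals": measuring how well an
# LLM judge's labels align with human labels, and where they diverge.
from collections import Counter

def judge_alignment(judge_labels, human_labels):
    """Return the agreement rate and a breakdown of disagreement pairs."""
    assert len(judge_labels) == len(human_labels)
    matches = sum(j == h for j, h in zip(judge_labels, human_labels))
    disagreements = Counter(
        (j, h) for j, h in zip(judge_labels, human_labels) if j != h
    )
    return matches / len(judge_labels), disagreements

# Text labels ("friendly" vs "robotic"), not 1-5 scores: LLMs are trained
# on language, so categorical words tend to make more reliable rubrics.
judge = ["friendly", "friendly", "robotic", "friendly"]
human = ["friendly", "robotic", "robotic", "friendly"]

rate, misses = judge_alignment(judge, human)
print(rate)    # 0.75
print(misses)  # one case where the judge said "friendly" but humans said "robotic"
```

The disagreement breakdown is the actionable part: a cluster of ("friendly", "robotic") mismatches tells you exactly which judgment your judge prompt (or your product) is getting wrong.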

Aman Khan (guest) · Aakash Gupta (host)
Jun 14, 2025 · 2h 4m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

A five-skill crash course for becoming an AI product manager

  1. An AI PM is framed as either using AI to accelerate PM workflows or building AI into products, and Aman argues virtually every PM role will become “X + AI” rather than a separate job track.
  2. The core learning path starts with AI prototyping using code-capable agents (notably Cursor) to build a working agentic app quickly, while building comfort with debugging and iteration.
  3. Observability is positioned as the bridge from “it works” to “we understand why,” using tracing to visualize agent graphs, tool calls, prompts, latency, and failure points in production-like workflows.
  4. Evals are presented as the mechanism to move from subjective “vibe coding” to measurable product quality, combining human labels, code checks, and LLM-as-judge grading—with emphasis on validating the judge against human annotations.
  5. Prompt engineering, RAG, and fine-tuning are compared by goal, effort, and impact, and the episode closes with career guidance: build side projects, don’t wait for better models, avoid over-automating early, and invest two hours weekly in tools + intuition + application.
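The prompting-first heuristic in point 5 can be written down as a toy decision function. The function and parameter names are mine, not the episode's; it simply encodes "start with prompting, add RAG for external data, fine-tune only for cost/speed."

```python
# Toy decision function encoding the episode's heuristic for choosing
# between prompt engineering, RAG, and fine-tuning. Names are
# illustrative, not from the episode.

def choose_techniques(needs_external_data: bool, cost_or_speed_bound: bool) -> list[str]:
    techniques = ["prompt engineering"]  # always start here (~95% of results)
    if needs_external_data:
        techniques.append("RAG")         # ground answers in data the model lacks
    if cost_or_speed_bound:
        techniques.append("fine-tuning")  # last resort, for latency/cost
    return techniques

print(choose_techniques(needs_external_data=True, cost_or_speed_bound=False))
# ['prompt engineering', 'RAG']
```

The point of writing it this way is the ordering: fine-tuning never replaces the earlier steps, it gets layered on only when a cost or speed constraint forces it.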

IDEAS WORTH REMEMBERING

5 ideas

Treat “AI PM” as an overlay on your domain, not a new identity.

Aman’s model is “fintech × AI” or “healthcare × AI,” where AI accelerates existing PM strengths (domain insight, customer understanding) rather than replacing them.

Start with prototyping to compress learning time and raise credibility.

By building a real prototype (even messy) with tools like Cursor, PMs learn the stack through iteration—prompts, code, dependencies, and debugging—while producing an artifact they can demo.

Cursor is slower at true “0→1 UI,” but wins for depth and control.

Bolt/Lovable/v0 can produce quick mock UIs, but Cursor (VS Code fork) enables deeper edits, agent frameworks, file-level control, and iterative expansion beyond a shallow demo.

Expect things to break; your real skill is recovery loops.

The episode repeatedly shows dependency/version issues (Python packages, Node engines, ports), and the recommended workflow is: copy the errors from the terminal, paste them to the agent, and iterate calmly.

Observability turns an agent demo into an engineerable system.

Tracing (often just installing a package and adding decorators) provides a visual graph of parallel agents, tool calls, prompts, and timings—making it possible to debug latency, cost, and correctness.
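The "installing a package and adding decorators" pattern can be sketched generically. Real observability SDKs (such as Arize's tooling mentioned in the episode) export spans to a backend and render the agent graph; this toy version, with names of my own choosing, just collects spans in a list to show what a decorator captures.

```python
# Generic sketch of decorator-based tracing, illustrating the pattern the
# episode describes. A real tracer would export spans to a backend; this
# toy version appends them to an in-memory list.
import functools
import time

TRACE = []  # collected spans

def traced(fn):
    """Record span name, latency, inputs, and output for each call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE.append({
            "span": fn.__name__,
            "latency_s": time.perf_counter() - start,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
        })
        return result
    return wrapper

@traced
def plan_trip(city):
    # Stand-in for an agent step that would call an LLM or a tool.
    return f"3-day itinerary for {city}"

plan_trip("Lisbon")
print(TRACE[0]["span"])  # plan_trip
```

With every agent step wrapped like this, the collected spans are exactly the raw material for debugging latency and cost, and later for running evals over real traces.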

WORDS WORTH SAVING

5 quotes

I think every PM will become some flavor of AI PM, either using those tools or building around them if you aren't already.

Aman Khan

I really don't think, you know, that being an AI PM is an either/or. I really view it more as an X, meaning, like, you can think of yourself as a fintech X AI PM or a healthcare X AI PM.

Aman Khan

Don't be scared about things breaking. They're going to break. What matters is how you can work with the agent to fix your problems.

Aman Khan

I like to joke, it's like going from vibe coding to thrive coding because you're going one step deeper, right?

Aman Khan

What if evals were your requirements for your AI product?

Aman Khan

Definition of AI PM: AI-powered PM vs AI product PM
Cursor vs Bolt/Lovable/Replit/v0/Windsurf tradeoffs
Agentic prototyping example: Trip Planner with LangGraph
Tracing/observability and visual agent graphs
Prompt iteration, model swaps, latency and UX impacts
Evals: datasets, experiments, LLM-as-judge, human labels, judge alignment
RAG vs prompt engineering vs fine-tuning mental models
Working with AI engineers/researchers; evals as requirements
Career strategy: side projects, second-brain prompts, job market signals

High quality AI-generated summary created from speaker-labeled transcript.
