Aakash Gupta

What AI PMs REALLY Need to KNOW in 2026 (Agents, Discovery, EVERYTHING)

Todd Olson spent 28 years in product management and built Pendo to $2.5B. He reveals why AI PM jobs doubled to 20% of all postings (and pay 30-40% more), the exact 5-layer technical pyramid for upskilling from core PM to AI PM, and how to ship AI features at scale with proper evals, cost optimization, and the right product strategy.

Full Writeup: https://www.news.aakashg.com/p/todd-olson-podcast
Transcript: https://www.aakashg.com/the-complete-ai-pm-roadmap-how-to-upskill-from-core-pm-to-ai-pm-with-pendo-ceo-todd-olson/

----

Timestamps:
0:00 - Intro
1:29 - Episode Begins
3:24 - Why AI PMs Get Paid 30-40% More
6:07 - How to Upskill into AI PM
11:50 - Ad
12:54 - The 5-Layer Technical Pyramid
16:30 - AI/ML Fundamentals
23:00 - Data Pipelines & RAG
33:02 - Trace Analysis & PM-Eng Tension
40:44 - Cost & Performance Optimization
48:56 - Evals Are Your Domain
56:03 - AI Product Roadmap
1:04:16 - Live Demo: Pendo's AI Features
1:13:07 - Ad
1:14:12 - Stakeholder & Board Management
1:22:03 - Outro

----

🏆 Thanks to our sponsor:
Reforge: Get 1 month free of Reforge Build with code BUILD - https://reforge.com/aakash

----

Key Takeaways:

1. The AI PM market has exploded - Last year, 10% of PM jobs were AI PM jobs. This year it's 20%. They pay 30-40% more because of scarcity and skill level. But Todd warns: "You better damn well be good and know what you're talking about if you're gonna call yourself an AI PM because we are going to interrogate the hell out of it."

2. The real requirement is production at scale - Not "I built a prototype at a 1-person startup." Hiring managers want 20,000 paying B2B customers successfully experiencing your AI feature. To get there: upskill internally at your current company by shipping AI features on your roadmap.

3. The 5-layer technical pyramid - Foundation: AI/ML fundamentals, data pipelines, prompt engineering. Middle: observability (trace analysis), cost optimization, evals. Top: product strategy, stakeholder management, leadership. You need to climb all 5 layers. Most PMs stop at layer 1.

4. RAG is table stakes - "RAG is the de facto way to build." You ingest data, create embeddings, feed them into a vector database, look up relevant context, and pass it to the LLM. Todd: "If you put too much in the context window, just like a human, you get confused. You want to give the right context."

5. PM-engineering tension is real - At startups, PMs do trace analysis. At large companies, engineering managers push back: "This is my world. I don't want some PM shadowing me." It's similar to Datadog - most PMs don't have a login. Know the line. Be fluent, but respect boundaries.

6. But evals are YOUR domain - Unlike trace analysis, evals are where PMs are the expert. "The PM is probably the best-suited human being to author and manage eval sets." You understand user and business needs; engineers don't have that context. This is a must-have competency now.

7. Cost optimization will matter - Some AI companies have sub-15% gross margins; traditional software is 70-80%. Todd: "It's not a business at sub-15%." Eventually you'll rearchitect systems because the infrastructure is too costly. Rule of thumb: when something's faster, it's usually cheaper, since both come down to how much compute you're buying.

8. Solve hard problems, not shiny objects - Todd's test: "Are we gonna do a much better job than ChatGPT out of the box? Why would we just wrap that and slap a Pendo logo on it?" His discovery agent example: the hard part isn't interviewing customers - it's finding which customers to interview, prioritizing, and scheduling. Automate that workflow.

9. Kill bad features ruthlessly - Todd shipped features a couple of years ago that weren't great and turned them off. "Too often we hold on to something. Turn them off. Be unafraid. The more stuff in your product, the worse the experience is by default."

10. Control the narrative with boards - Don't show up with no story and get crushed with random requests. Todd: "Show them how you actually run your business. I want to see what you're looking at, not something just made for me." Think deeply about how each bet drives shareholder value.

----

👨‍💻 Where to find Todd Olson:
LinkedIn: https://www.linkedin.com/in/toddaolson/
Twitter/X: https://x.com/tolson
Company: https://www.pendo.io

👨‍💻 Where to find Aakash:
Twitter: https://www.x.com/aakashg0
LinkedIn: https://www.linkedin.com/in/aagupta/
Newsletter: https://www.news.aakashg.com

#aipm #productmanagement #pendo

----

🧠 About Product Growth:
The world's largest podcast focused solely on product + growth, with over 200K listeners.

🔔 Subscribe and turn on notifications to get more videos like this.

Aakash Gupta (host) · Todd Olson (guest)
Dec 3, 2025 · 1h 21m · Watch on YouTube ↗

CHAPTERS

  1. AI PM job boom and the danger of “AI” as a label

    Aakash opens with data showing AI PM job postings doubling year-over-year, and Todd explains why job posts are partly marketing signals. Todd warns that calling yourself an AI PM invites extra scrutiny because of rampant “AI washing.”

  2. Upskilling mindset: using AI at work vs building AI into products

    Todd separates two AI tracks PMs must master: personal productivity and shipping user-facing AI features. He argues that PMs who don’t use modern AI tools for research/prototyping will fall behind on speed and discovery.

  3. Why AI PMs earn 30–40% more (and the resume “AI washing” trap)

    Todd explains AI compensation premiums as a mix of market heat and skill scarcity, similar to other deep technical PM specializations. He stresses that employers will “interrogate” AI claims because fake credentials and superficial exposure are common.

  4. The upskilling roadmap: a 5-layer “technical pyramid” for AI PMs

    Aakash lays out a step-by-step competency pyramid: fundamentals first, then observability/cost, then evals, then roadmap/stakeholders, and finally ethics/leadership. This frames the rest of the episode as a structured learning path.

  5. AI/ML fundamentals: model choice, token economics, privacy, and constant change

    Todd emphasizes hands-on experimentation and understanding model tradeoffs (quality, speed, cost). They discuss multimodality, open-source/self-hosting, data residency, and how vendor constraints shape product decisions.

  6. Data pipelines & RAG: getting the right context at scale

    They argue data pipelines belong in the foundation because most real products need RAG-style context injection. Todd explains embeddings, vector databases, and the performance/governance challenges of shipping this reliably at enterprise scale.
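The ingest-embed-retrieve flow described here can be sketched with toy stand-ins: a bag-of-words function in place of a real embedding model, and an in-memory list in place of a vector database. The document strings and query are illustrative, not from the episode.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1. Ingest documents and store their embeddings (a list plays the vector DB).
docs = [
    "Rage prompts flag moments where users get frustrated with an agent",
    "Retention by use case shows which AI features users come back to",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    # 2. Look up the most relevant stored context for the query.
    q = embed(query)
    ranked = sorted(index, key=lambda p: cosine(q, p[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

# 3. Pass only the retrieved context (not everything) to the LLM,
#    echoing Todd's point about not overstuffing the context window.
context = retrieve("why are users frustrated with the agent?")
prompt = f"Context:\n{context[0]}\n\nQuestion: why are users frustrated?"
```

A production system would swap in a real embedding model and a vector store, but the shape of the pipeline is the same.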

  7. Prompt engineering: not hype—instruction quality and platform vs domain PM roles

    Todd frames prompting as the skill of precise instruction and contextual setup, analogous to being good at search. They predict further specialization (AI platform PMs enabling domain PMs) while still requiring broad prompt fluency.

  8. Trace analysis, agents, and PM–engineering boundary lines

    As orchestration grows (agents calling tools/other agents), trace-level understanding helps diagnose failures and performance issues. Todd notes real tension: some engineering leaders resist PMs “shadowing” technical debugging, so PMs should be fluent without overstepping.
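The kind of trace-level view discussed here can be illustrated with a toy recorder: each agent or tool call becomes a span with timing and error info, so a slow or failing step is visible when reading the trace. This is a minimal sketch, not how any specific tracing product works; the function names are made up.

```python
import time
import uuid

# Toy trace store: each agent/tool call appends one span.
TRACE: list[dict] = []

def traced(name: str):
    """Decorator that records a timing span for every call."""
    def wrap(fn):
        def inner(*args, **kwargs):
            span = {"id": uuid.uuid4().hex[:8], "name": name,
                    "start": time.perf_counter(), "error": None}
            try:
                return fn(*args, **kwargs)
            except Exception as e:
                span["error"] = repr(e)
                raise
            finally:
                span["ms"] = (time.perf_counter() - span["start"]) * 1000
                TRACE.append(span)
        return inner
    return wrap

@traced("lookup_account")
def lookup_account(name):            # a "tool" the agent calls
    return {"account": name, "arr": 120_000}

@traced("agent_answer")
def agent_answer(question):          # the orchestrating agent
    data = lookup_account("Acme")
    return f"{question} -> ARR is ${data['arr']:,}"

answer = agent_answer("What is Acme's ARR?")
slowest = max(TRACE, key=lambda s: s["ms"])   # where to look first
```

Reading the trace bottom-up is exactly the fluency PMs need, even in orgs where hands-on access to the real tooling is restricted.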

  9. Production monitoring realities: ops/SRE, access controls, and company context

    Todd explains that monitoring is often owned by ops/SRE teams, and PM access may be restricted by contracts and background-check requirements. The takeaway: understand monitoring concepts, but adapt expectations to your org’s operational model.

  10. Cost & performance optimization: COGS, gross margins, and the path from speed to efficiency

    Todd argues AI economics will increasingly matter as companies must reach sustainable margins. They discuss how early builds over-optimize for speed, then require re-architecture (caching, smaller models, tuned systems) to reduce costs and improve latency.
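Two of the levers mentioned (caching and routing to smaller models) can be sketched in a few lines. The per-token prices, model names, and the length-based routing heuristic below are all assumptions for illustration, not figures from the episode.

```python
import hashlib

# Hypothetical per-1K-token prices; real rates vary by model and vendor.
PRICE_PER_1K = {"large-model": 0.010, "small-model": 0.001}

class CachedRouter:
    """Cache repeated prompts and route easy ones to a cheaper model."""

    def __init__(self):
        self.cache: dict[str, str] = {}
        self.spend = 0.0

    def complete(self, prompt: str, call_llm) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.cache:          # cache hit: zero marginal cost
            return self.cache[key]
        # Crude routing heuristic (an assumption, not a recommendation):
        # short prompts go to the cheaper, faster model.
        model = "small-model" if len(prompt) < 200 else "large-model"
        answer = call_llm(model, prompt)
        tokens = len(prompt.split())   # rough token estimate
        self.spend += tokens / 1000 * PRICE_PER_1K[model]
        self.cache[key] = answer
        return answer

router = CachedRouter()
fake_llm = lambda model, prompt: f"[{model}] summary"
first = router.complete("summarize this account", fake_llm)
second = router.complete("summarize this account", fake_llm)  # cache hit
```

The second call spends nothing, which is the COGS argument in miniature: faster paths (cache hits, smaller models) are also the cheaper paths.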

  11. Evals are the PM’s domain: AI QA, metrics, and experimentation cadence

    Todd says eval design and management is where PMs should lead—AI grading AI requires product judgment about quality and outcomes. They also cover how AI lowers the cost of variants, making experimentation more mandatory and more frequent.
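A minimal eval harness makes the PM-owned artifact concrete: the eval set encodes product judgment, and the grader runs it against each model version. Here `judge` is a keyword check standing in for the LLM-as-judge step; the cases and names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    must_include: list[str]   # product-judgment criteria the PM defines

# The PM-authored eval set: what "good" means for this feature.
EVAL_SET = [
    EvalCase("Summarize churn risk for Acme", ["churn", "Acme"]),
    EvalCase("Draft outreach to an interview target", ["interview"]),
]

def judge(output: str, case: EvalCase) -> bool:
    # Stand-in for an LLM grading the output against the criteria.
    return all(term.lower() in output.lower() for term in case.must_include)

def run_evals(model_fn) -> float:
    # Pass rate to track release over release, variant over variant.
    passed = sum(judge(model_fn(c.prompt), c) for c in EVAL_SET)
    return passed / len(EVAL_SET)

# A trivial "model" that echoes the prompt satisfies both cases.
score = run_evals(lambda p: f"Response about: {p}")
```

Because AI makes variants cheap, a harness like this gets run constantly: every prompt tweak or model swap is scored against the same PM-defined bar.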

  12. AI product roadmap & discovery: solve hard problems, avoid shiny objects, and kill weak features

    Roadmapping starts with hard, high-leverage workflows and unique data/context advantages—otherwise you’re just “wrapping ChatGPT.” Todd stresses rapidly sunsetting low-retention AI features and building a distinct point of view (workflow-centric vs agent job titles).

  13. Live demo: Pendo’s AI—agent analytics, rage prompts, dashboards, and agent mode workflows

    Todd demonstrates how Pendo measures and improves agent experiences: conversation analytics, topic clustering, retention by use case, and “rage prompts” with replay context. He also shows an integrated dashboard approach and agent mode that executes cross-platform analysis with guardrails and multimodal outputs.

  14. Discovery acceleration & enterprise synthesis: customer finder, MCP, and aggregating insights across systems

    The demo continues with AI-assisted discovery: identifying interview targets, generating outreach guides, and automating scheduling workflows. Todd also highlights MCP as a real integration standard and shows AI summarizing feature requests from sources like Gong, support tickets, Salesforce, and CSVs into actionable themes and linked ideas.

  15. Stakeholder & board management for AI: control the narrative, align to outcomes, and build for regulatory change

    Todd advises proactively framing the AI strategy for boards, using them as helpers rather than approval machines. They cover aligning AI bets to business objectives and designing AI systems with toggles/guardrails to adapt across regions, industries, and evolving regulation.
