Aakash Gupta: What AI PMs REALLY Need to KNOW in 2026 (Agents, Discovery, EVERYTHING)
CHAPTERS
AI PM job boom and the danger of “AI” as a label
Aakash opens with data showing AI PM job postings doubling year-over-year, and Todd explains why job posts are partly marketing signals. Todd warns that calling yourself an AI PM invites extra scrutiny because of rampant “AI washing.”
Upskilling mindset: using AI at work vs building AI into products
Todd separates two AI tracks PMs must master: personal productivity and shipping user-facing AI features. He argues that PMs who don’t use modern AI tools for research/prototyping will fall behind on speed and discovery.
Why AI PMs earn 30–40% more (and the resume “AI washing” trap)
Todd explains AI compensation premiums as a mix of market heat and skill scarcity, similar to other deep technical PM specializations. He stresses that employers will “interrogate” AI claims because fake credentials and superficial exposure are common.
The upskilling roadmap: a 5-layer “technical pyramid” for AI PMs
Aakash lays out a step-by-step competency pyramid: fundamentals first, then observability/cost, then evals, then roadmap/stakeholders, and finally ethics/leadership. This frames the rest of the episode as a structured learning path.
AI/ML fundamentals: model choice, token economics, privacy, and constant change
Todd emphasizes hands-on experimentation and understanding model tradeoffs (quality, speed, cost). They discuss multimodality, open-source/self-hosting, data residency, and how vendor constraints shape product decisions.
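The quality/speed/cost tradeoff discussed here comes down to simple token arithmetic. A minimal sketch of per-request cost math, using made-up per-million-token prices (the numbers and the "large"/"small" labels are placeholders, not real vendor rates):

```python
# Back-of-envelope token economics. Prices are hypothetical placeholders.

def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Dollar cost of one request at the given per-million-token prices."""
    return (input_tokens * in_price_per_m + output_tokens * out_price_per_m) / 1_000_000

# Same workload on a "large" vs "small" model, with invented prices:
large = request_cost(2_000, 500, in_price_per_m=3.00, out_price_per_m=15.00)
small = request_cost(2_000, 500, in_price_per_m=0.15, out_price_per_m=0.60)

print(f"large model: ${large:.4f}/request")
print(f"small model: ${small:.4f}/request")
```

Multiplying by expected request volume turns this into a COGS line, which is why model choice is a product decision and not just an engineering one.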
Data pipelines & RAG: getting the right context at scale
They argue data pipelines belong in the foundation because most real products need RAG-style context injection. Todd explains embeddings, vector databases, and the performance/governance challenges of shipping this reliably at enterprise scale.
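The RAG pattern Todd describes is: embed documents, index them, retrieve the closest matches to a query, and inject those into the prompt. A toy sketch of that loop, where the bag-of-words embed() stands in for a real embedding model and a Python list stands in for a vector database:

```python
# Minimal RAG-style retrieval sketch: embed, index, retrieve top-k,
# inject as context. embed() is a toy stand-in for an embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "reset your password from the account settings page",
    "invoices are emailed on the first of each month",
    "enable two factor authentication under security settings",
]
index = [(d, embed(d)) for d in docs]  # in production: a vector database

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

context = retrieve("how do I reset my password")[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: how do I reset my password"
print(context)
```

The enterprise-scale challenges mentioned in the episode (freshness, permissions, governance) live in the indexing step, which is why the pipeline belongs in the foundation layer.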
Prompt engineering: not hype—instruction quality and platform vs domain PM roles
Todd frames prompting as the skill of precise instruction and contextual setup, analogous to being good at search. They predict further specialization (AI platform PMs enabling domain PMs) while still requiring broad prompt fluency.
Trace analysis, agents, and PM–engineering boundary lines
As orchestration grows (agents calling tools/other agents), trace-level understanding helps diagnose failures and performance issues. Todd notes real tension: some engineering leaders resist PMs “shadowing” technical debugging, so PMs should be fluent without overstepping.
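Trace-level fluency means knowing that an agent run decomposes into spans (plan step, tool calls, sub-agent calls), each with a duration and an error status. A hypothetical sketch of that instrumentation, not any real tracing API:

```python
# Illustrative trace instrumentation for an agent run: each step is
# recorded as a span so latency and failures can be attributed.
import time
import uuid

class Trace:
    def __init__(self):
        self.trace_id = uuid.uuid4().hex
        self.spans = []

    def span(self, name: str) -> "_Span":
        return _Span(self, name)

class _Span:
    def __init__(self, trace: Trace, name: str):
        self.trace, self.name = trace, name

    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.trace.spans.append({
            "name": self.name,
            "ms": (time.perf_counter() - self.start) * 1000,
            "error": repr(exc) if exc else None,
        })
        return False  # never swallow exceptions

trace = Trace()
with trace.span("plan"):
    pass  # stand-in for the model call that picks the next tool
with trace.span("tool:search"):
    time.sleep(0.01)  # stand-in for an actual tool invocation

slowest = max(trace.spans, key=lambda s: s["ms"])
print(slowest["name"])
```

Reading traces like this lets a PM ask "which tool call is slow or failing?" without touching the debugger, which is roughly the boundary Todd suggests PMs stay on the right side of.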
Production monitoring realities: ops/SRE, access controls, and company context
Todd explains that monitoring is often owned by ops/SRE teams, and PM access may be restricted by contracts and background-check requirements. The takeaway: understand monitoring concepts, but adapt expectations to your org’s operational model.
Cost & performance optimization: COGS, gross margins, and the path from speed to efficiency
Todd argues AI economics will increasingly matter as companies must reach sustainable margins. They discuss how early builds over-optimize for speed, then require re-architecture (caching, smaller models, tuned systems) to reduce costs and improve latency.
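One of the re-architecture moves mentioned (caching) can be sketched in a few lines: put a cache in front of the model call so identical prompts never pay for a second generation. The call_model() stub below is hypothetical:

```python
# Sketch of a response cache in front of a model call. Repeated identical
# prompts hit the cache instead of incurring token cost and latency.
import hashlib

cache: dict[str, str] = {}
calls = 0  # counts real (billable) model calls

def call_model(prompt: str) -> str:
    global calls
    calls += 1
    return f"answer to: {prompt}"  # stand-in for a real completion

def cached_completion(prompt: str) -> str:
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in cache:
        cache[key] = call_model(prompt)
    return cache[key]

cached_completion("summarize Q3 churn")
cached_completion("summarize Q3 churn")  # cache hit: no second model call
print(calls)
```

Real systems add eviction, TTLs, and semantic (similarity-based) matching, but the margin logic is the same: every cache hit is COGS avoided.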
Evals are the PM’s domain: AI QA, metrics, and experimentation cadence
Todd says eval design and management is where PMs should lead—AI grading AI requires product judgment about quality and outcomes. They also cover how AI lowers the cost of variants, making experimentation more mandatory and more frequent.
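The eval ownership Todd describes amounts to maintaining a fixed test set, running product outputs through a grader, and tracking the pass rate over time. A minimal sketch under invented data, where a keyword check stands in for the LLM-as-judge grading step:

```python
# Minimal eval harness sketch: fixed cases, a grader, a pass rate.
# product() and the cases are invented; grade() is a keyword check
# standing in for an LLM-as-judge call.

cases = [
    {"prompt": "refund policy?", "must_mention": "30 days"},
    {"prompt": "support hours?", "must_mention": "9am-5pm"},
]

def product(prompt: str) -> str:  # stand-in for the real AI feature
    return {
        "refund policy?": "Refunds within 30 days of purchase.",
        "support hours?": "We answer email within a day.",
    }[prompt]

def grade(output: str, must_mention: str) -> bool:
    return must_mention.lower() in output.lower()

results = [grade(product(c["prompt"]), c["must_mention"]) for c in cases]
pass_rate = sum(results) / len(results)
print(f"pass rate: {pass_rate:.0%}")
```

Defining what "must_mention" (or a judge rubric) should contain is exactly the product-judgment work the episode argues PMs should own; the harness itself is the easy part.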
AI product roadmap & discovery: solve hard problems, avoid shiny objects, and kill weak features
Roadmapping starts with hard, high-leverage workflows and unique data/context advantages—otherwise you’re just “wrapping ChatGPT.” Todd stresses rapidly sunsetting low-retention AI features and building a distinct point of view (e.g., workflow-centric framing rather than “agent-as-job-title” positioning).
Live demo: Pendo’s AI—agent analytics, rage prompts, dashboards, and agent mode workflows
Todd demonstrates how Pendo measures and improves agent experiences: conversation analytics, topic clustering, retention by use case, and “rage prompts” with replay context. He also shows an integrated dashboard approach and agent mode that executes cross-platform analysis with guardrails and multimodal outputs.
Discovery acceleration & enterprise synthesis: customer finder, MCP, and aggregating insights across systems
The demo continues with AI-assisted discovery: identifying interview targets, generating outreach guides, and automating scheduling workflows. Todd also highlights MCP as a real integration standard and shows AI summarizing feature requests from sources like Gong, support tickets, Salesforce, and CSVs into actionable themes and linked ideas.
Stakeholder & board management for AI: control the narrative, align to outcomes, and build for regulatory change
Todd advises proactively framing the AI strategy for boards, using them as helpers rather than approval machines. They cover aligning AI bets to business objectives and designing AI systems with toggles/guardrails to adapt across regions, industries, and evolving regulation.