Aakash Gupta

The AI-Native PM Operating System [Live Demo]

Mike Bal (Head of Product at David's Bridal) shows his complete AI-native PM operating system: MCP integrations explained, live demos, and how to stop drowning in 20 different tools.

Full Writeup: https://www.news.aakashg.com/p/mike-bal-podcast
Transcript: https://www.aakashg.com/the-ai-native-pm-operating-system-how-to-connect-all-your-tools/

---

Timestamps:
0:00 - Intro
1:44 - What Makes an AI-Native PM
2:43 - Operating System vs Tool Stack
4:52 - Cursor and MCP Demo
12:14 - Connecting Tools Through MCP
15:23 - Design with Figma Make
20:14 - Google AI Studio
24:01 - Confluence and Figma Integration
30:51 - Research with Manus
37:11 - Manus vs Claude Research
41:47 - Email and Communications
47:19 - Licenses and IT
55:42 - PM Lifecycle and Mistakes
1:00:23 - Outro

---

🏆 Thanks to our sponsor:
Linear: Plan and build products like the best - https://linear.app/partners/aakash

---

Key Takeaways:

1. Operating systems beat tool stacks - Stop logging into 20 different UIs. Build one central interface through Cursor or Claude Desktop that connects to everything. The composable mindset adapts to your needs.
2. MCP changes PM workflows forever - Model Context Protocol lets you connect Jira, Figma, GitHub, Notion, and Confluence through natural language. Check ticket status without opening Jira. Compare designs without manual cross-referencing.
3. Design validation takes 30 seconds now - "Find my Confluence doc about Feature X, load this Figma design, compare them and tell me what I missed." This used to take 1-2 hours of manual comparison work.
4. Manus dominates heavy research - It gives you multiple file outputs: sample CSVs, combined datasets, a data-sources report, a quick-start guide, and a markdown summary, all traceable back to sources. ChatGPT just gives responses.
5. Research must stay external until vetted - The "conspiracy theorist LLM" problem is real. If you automatically feed everything into your system, the AI anchors to wrong information. Vet research separately, then bring validated context in.
6. PMs can build what used to require engineers - Mike built a colorization app for e-commerce in one morning and migrated content to Sanity CMS in a few hours, all from natural-language prompts in Cursor.
7. Context switching kills productivity - Every time you open a new tab, you lose flow state. The operating system keeps you in one interface; the AI handles the context switching for you.
8. Corporate IT restrictions become irrelevant - You already have Cursor or Claude Desktop, and you already use Jira, Figma, and GitHub. Connect them through a better interface - no new tool approvals needed.
9. Analytics workflows save massive time - Export Clarity data, upload it to Cursor, and prompt "analyze drop-offs and create visualizations." That takes 10 minutes instead of hours of manual Excel work.
10. AI-native PMs think in prompts - "What do I need to do? What are the steps? What tools will help?" Treat AI as an extension of yourself, not a separate tool to learn.

---

👨‍💻 Where to find Mike Bal:
LinkedIn: https://www.linkedin.com/in/mikebal/
YouTube: @thatmikebal
Website: https://mikebal.com/

👨‍💻 Where to find Aakash:
Twitter: https://www.x.com/aakashg0
LinkedIn: https://www.linkedin.com/in/aagupta/
Newsletter: https://www.news.aakashg.com

#aipm #cursor

---

🧠 About Product Growth:
The world's largest podcast focused solely on product + growth, with over 200K listeners.

🔔 Subscribe and turn on notifications to get more videos like this.
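The analytics workflow in takeaway 9 boils down to simple funnel math. Here is a minimal sketch in plain Python of the "analyze drop-offs" step; the step names and session counts are invented for illustration (a real Clarity export has its own schema), not taken from the episode:

```python
# Hypothetical funnel data standing in for a Microsoft Clarity export;
# step names and session counts are invented for illustration.
funnel = [
    ("landing", 10_000),
    ("product", 6_200),
    ("cart", 2_900),
    ("checkout", 1_400),
    ("purchase", 1_100),
]

# Percentage drop-off between each pair of consecutive steps.
drop_offs = {
    step: round((1 - sessions / prev_sessions) * 100, 1)
    for (_, prev_sessions), (step, sessions) in zip(funnel, funnel[1:])
}

# The step with the biggest drop-off is the first place to investigate.
worst_step = max(drop_offs, key=drop_offs.get)
print(worst_step, drop_offs[worst_step])
```

In practice the AI generates code like this (plus visualizations) from the prompt; the point is that the underlying computation is trivial once the export is in one place.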

Aakash Gupta (host) · Mike Bal (guest)
Feb 3, 2026 · 1h 1m · Watch on YouTube ↗

CHAPTERS

  1. Why PMs need an AI-native workflow (and why Claude over ChatGPT)

    Aakash introduces Mike Bal (Head of Product at David’s Bridal) and frames the core question: where AI tools actually belong in a PM workflow. Mike explains why he’s migrated toward Claude for reliability and depth, and hints at how to work around corporate constraints when tool access is limited.

  2. What makes an AI-native PM: thinking in steps, prompts, and tools

    Mike defines “AI-native PM” as a shift from tool-first to outcome-first work: identify the job to be done, translate it into steps/instructions, then select the best tools to execute. The biggest unlock is overcoming the mental barrier that certain tasks are “too technical.”

  3. Operating system vs tool stack: central home base + connected apps

    They distinguish an operating system from a scattered “tool stack.” Instead of constant context switching across tabs, AI-native PMs use a central hub (e.g., Claude Desktop or Cursor) that can pull from—and act on—other systems like Jira, GitHub, and docs via connectors.

  4. Live demo: Cursor + MCP for real work (Sanity CMS example)

    Mike demos Cursor as a central workspace and shows how MCP (Model Context Protocol) lets the AI assistant read/write into external tools. He uses Sanity (CMS) to query recent changes and create a new task document without opening Sanity’s UI.
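For readers who want to try this, an MCP connection in Cursor is declared in a JSON config (`~/.cursor/mcp.json`; Claude Desktop uses the same `mcpServers` shape in its own config file). The sketch below shows the general pattern; the Sanity package name and environment variables are assumptions based on Sanity's published MCP server, not details from the episode, so check the official docs before copying:

```json
{
  "mcpServers": {
    "sanity": {
      "command": "npx",
      "args": ["-y", "@sanity/mcp-server@latest"],
      "env": {
        "SANITY_PROJECT_ID": "your-project-id",
        "SANITY_DATASET": "production",
        "SANITY_API_TOKEN": "your-token"
      }
    }
  }
}
```

Once a server like this is registered, the assistant can call its tools (query documents, create drafts) directly from chat, which is what makes the "no UI switching" demo possible.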

  5. Connecting more tools through MCP: databases, hosting, and shipping apps

    Building on the Sanity demo, Mike describes broader MCP use cases—like database schema changes in Supabase and deploying apps using Render—without deep DevOps knowledge. They also note Claude Desktop can run similar connector workflows via settings.

  6. Claude projects, custom instructions, and memory for multi-initiative work

    Mike explains how he structures Claude Projects with initiative-specific context to avoid muddling across domains. He uses a “memory MCP” (and Claude’s evolving memory features) to retain relationships over time and to intentionally pull cross-project knowledge when needed.

  7. Claude vs ChatGPT (again): MCP support, connectors, and reliability tradeoffs

    They revisit why Claude is central: Anthropic created and supported MCP early, and Claude is preferred for deep work and writing. They acknowledge OpenAI is adding connectors, and mention “gateway” tools that let you attach MCPs and swap underlying models.

  8. Design iteration workflow: turning static visuals into editable Figma assets (Figma Make)

    They demo a practical design workflow: take a flat image/diagram and convert it into editable, layered components in Figma using Figma Make. Mike frames Figma Make less as production-ready prototyping and more as a fast way to explore variations, edge states, and flows to bring back to design.

  9. Rapid prototyping with Google AI Studio: from idea to runnable app in minutes

    Mike recommends Google AI Studio as his go-to for quick, functional prototypes and experimentation with the newest models. He highlights the developer-grade UX (context handling, iteration) and shows how a small internal tool can be built quickly, then exported to GitHub/Cursor for a normal dev loop.

  10. Knowledge + progress in one place: Confluence + Figma gap analysis via MCP

    Mike demonstrates pulling product truth from Confluence (requirements/vision) and comparing it against a specific Figma frame using the Figma MCP. The result is a “gap analysis” that flags mismatches PMs often miss when manually cross-checking docs and designs.

  11. Research as context gathering: Manus workflows and why agent traces matter

    They shift to research and context gathering: Mike uses Manus for thorough, asynchronous agentic research that outputs multiple artifacts (CSVs, reports, markdown) and exposes its trace/sources. He contrasts this with Claude Research mode, which he finds can burn limits without comparable transparency.

  12. Manus vs Claude Research and the risk of “bad memory anchors”

    Mike explains why he doesn’t default to Claude Research despite having a high-tier plan: limits and insufficient “show your work.” He emphasizes selective ingestion—choosing what enters the main operating system—to avoid models anchoring on noisy assumptions and drifting into unhelpful priors.

  13. Email/comms automation via connectors: Gmail/Drive/Calendar and beyond

    They cover communication workflows using Claude connectors (Gmail, Calendar, Drive) to retrieve context like scheduling details or documents—often faster than native search. They discuss connector availability across ecosystems and how MCP/connector UX is being simplified for less technical users.

  14. Licenses, IT, and rollout strategy: read vs write access + usage-based alternatives

    Aakash raises the cost/IT burden of many AI tools. Mike suggests pragmatic governance: start with read access, use personal/free tiers for non-sensitive ideation, and expand privileges as teams demonstrate value; where possible, prefer usage-based billing via API keys over multiple subscriptions.

  15. PM lifecycle coverage + common AI mistakes (and how to sell access internally)

    Mike argues AI can help across the entire PM lifecycle—research, validation, writing, design checks, ticket quality, and delivery monitoring—so long as PMs stay intentional and skeptical. They close with common failure modes (overprompting, lazy ingestion, no gut-checking), plus a practical pitch to leadership focused on velocity and impact.
