Aakash Gupta

The AI-Native PM Operating System [Live Demo]

Mike Bal (Head of Product at David's Bridal) shows his complete AI-native PM operating system: MCP integrations explained, live demos, and how to stop drowning in 20 different tools.

Full writeup: https://www.news.aakashg.com/p/mike-bal-podcast
Transcript: https://www.aakashg.com/the-ai-native-pm-operating-system-how-to-connect-all-your-tools/

Timestamps:
0:00 - Intro
1:44 - What Makes an AI-Native PM
2:43 - Operating System vs Tool Stack
4:52 - Cursor and MCP Demo
12:14 - Connecting Tools Through MCP
15:23 - Design with Figma Make
20:14 - Google AI Studio
24:01 - Confluence and Figma Integration
30:51 - Research with Manus
37:11 - Manus vs Claude Research
41:47 - Email and Communications
47:19 - Licenses and IT
55:42 - PM Lifecycle and Mistakes
1:00:23 - Outro

🏆 Thanks to our sponsor:
Linear: Plan and build products like the best - https://linear.app/partners/aakash

Key Takeaways:

1. Operating systems beat tool stacks - Stop logging into 20 different UIs. Build one central interface through Cursor and Claude Desktop that connects to everything. The composable mindset adapts to your needs.
2. MCP changes PM workflows forever - Model Context Protocol lets you connect Jira, Figma, GitHub, Notion, and Confluence through natural language. Check ticket status without opening Jira. Compare designs without manual cross-referencing.
3. Design validation takes 30 seconds now - "Find my Confluence doc about Feature X, load this Figma design, compare them and tell me what I missed." This used to take 1-2 hours of manual comparison work.
4. Manus dominates heavy research - It gives you multiple file outputs: sample CSVs, combined datasets, a data-sources report, a quick-start guide, and a markdown summary, all traceable back to sources. ChatGPT just gives responses.
5. Research must stay external until vetted - The "conspiracy theorist LLM" problem is real. If you automatically feed everything into your system, the AI anchors to wrong information. Vet research separately, then bring validated context in.
6. PMs can build what used to require engineers - Mike built a colorization app for e-commerce in one morning and migrated content to Sanity CMS in a few hours, all from natural-language prompts in Cursor.
7. Context switching kills productivity - Every time you open a new tab, you lose flow state. The operating system keeps you in one interface; the AI handles the context switching for you.
8. Corporate IT restrictions become irrelevant - You already have Cursor or Claude Desktop, and you already use Jira, Figma, and GitHub. Connect them through a better interface. No new tool approvals needed.
9. Analytics workflows save massive time - Export Clarity data, upload it to Cursor, and prompt "analyze drop-offs and create visualizations." Takes 10 minutes instead of hours of manual Excel work.
10. AI-native PMs think in prompts - "What do I need to do? What are the steps? What tools will help?" Treat AI as an extension of yourself, not a separate tool to learn.

👨‍💻 Where to find Mike Bal:
LinkedIn: https://www.linkedin.com/in/mikebal/
YouTube: @thatmikebal
Website: https://mikebal.com/

👨‍💻 Where to find Aakash:
Twitter: https://www.x.com/aakashg0
LinkedIn: https://www.linkedin.com/in/aagupta/
Newsletter: https://www.news.aakashg.com

#aipm #cursor

🧠 About Product Growth: The world's largest podcast focused solely on product + growth, with over 200K listeners.

🔔 Subscribe and turn on notifications to get more videos like this.
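Takeaway 9's analytics workflow amounts to a few lines once the export is loaded into a dataframe. A minimal sketch, assuming a Clarity-style export with one row per funnel step; the step names and visit counts here are made up for illustration:

```python
import pandas as pd

# Inlined stand-in for an analytics export (normally: pd.read_csv("export.csv"))
funnel = pd.DataFrame({
    "step": ["Landing", "Search", "Product page", "Cart", "Checkout"],
    "visits": [10000, 6200, 4100, 1500, 600],
})

# Drop-off = share of users lost between consecutive funnel steps
funnel["drop_off_pct"] = (
    (1 - funnel["visits"] / funnel["visits"].shift(1)) * 100
).round(1)

print(funnel.to_string(index=False))
```

From here, a plotting prompt in Cursor typically produces a bar chart of `drop_off_pct` per step; the point is that the drop-off math itself is trivial once the data leaves the analytics UI.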

Aakash Gupta (host) · Mike Bal (guest)
Feb 2, 2026 · 1h 1m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Building an AI-native PM OS with MCP-connected tools and workflows

  1. AI-native PMs “think in prompts,” translating outcomes into steps and selecting the best AI+tool combination to execute them.
  2. An “operating system” mindset replaces a loose tool stack by using a central hub (Claude Desktop or Cursor) connected to systems of record (Jira/Confluence/Figma/GitHub/CMS) via Model Context Protocol (MCP).
  3. Live demos show AI performing real work across tools—querying/updating a CMS, comparing PRDs to designs, and pulling knowledge from Confluence and Figma without opening those apps.
  4. For prototyping, the workflow emphasizes fast concept-to-code loops (Google AI Studio → export to GitHub → iterate in Cursor) while using Figma Make mainly for design variations and edge-case states.
  5. Research is treated as “context gathering,” with Manus preferred for traceable, multi-asset outputs and controllable ingestion into the main OS to avoid polluting model memory.

IDEAS WORTH REMEMBERING

5 ideas

Treat AI as a workflow hub, not just a chat tool.

They argue the real leverage comes from keeping a “home base” (Claude Desktop/Cursor) and pulling data/actions from Jira/Confluence/Figma/GitHub/CMS into that interface to avoid constant tab-switching.

MCP turns “ask and do” into cross-tool execution.

With MCP and API keys, the assistant can query or modify external tools (e.g., creating a new CMS entry in Sanity) while staying in the same conversational context—often with read/write permission gating.
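As a concrete illustration of this wiring (not the exact setup from the episode), MCP clients such as Claude Desktop are typically pointed at servers through a JSON config. The GitHub entry below uses the official reference server package; the Atlassian package name, URLs, and tokens are placeholders:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    },
    "atlassian": {
      "command": "npx",
      "args": ["-y", "<atlassian-mcp-server-package>"],
      "env": {
        "JIRA_URL": "https://yourcompany.atlassian.net",
        "JIRA_API_TOKEN": "<your-token>"
      }
    }
  }
}
```

Each server exposes a set of tools the assistant can call; read/write gating happens at the server and token level, which is how "query Jira but don't modify it" setups are enforced.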

Use project-level context and memory intentionally.

Mike sets custom instructions per project and uses memory tooling to preserve relationships, but warns against dumping unvetted research into the core environment because model memory can anchor on the wrong assumptions.

Use Figma Make for design variation, not production-ready code.

They find Figma Make valuable for quickly generating editable, layered variations and edge-case states that designers can reuse, but not reliable enough yet for code you’d ship.

Prototype fastest by moving from AI Studio to a real dev loop.

Google AI Studio is positioned as the quickest place to one-shot prototypes and test new models, then export to GitHub/Cloud Run and continue iteration in Cursor like a standard developer workflow.

WORDS WORTH SAVING

5 quotes

AI-native PMs are actually working from what do I need to do, to what are the steps that I need to get done, to what are the best tools to get me those things.

Mike Bal

You were actually operating with a layer of abstraction from UI.

Mike Bal

I don't have to leave the tool that I'm in right now to get an answer for that, which is nice.

Mike Bal

I still feel like my frustration with Claude is chat length limits, and then usage limits.

Mike Bal

The bad thing about a lot of the LLMs is they'll pick and choose what's in the memory to really anchor themselves to.

Mike Bal

Topics: What “AI-native PM” means (thinking in prompts) · Operating system vs tool stack (central hub) · Model Context Protocol (MCP) connectors · Cursor + Claude Code workflow · CMS/DB/hosting integrations (Sanity, Supabase, Render) · Design workflows (Figma Make, Figma MCP) · Prototyping with Google AI Studio → GitHub → Cursor · Knowledge retrieval (Confluence/Atlassian Rovo) + PRD/design gap checks · Research and context gathering (Manus vs Claude Research) · Email/calendar/Drive connectors (Gmail, Google Drive) · Licensing, permissions, and enterprise constraints · PM lifecycle usage and common AI mistakes

High quality AI-generated summary created from speaker-labeled transcript.
