At a glance
WHAT IT’S REALLY ABOUT
Building an AI-native PM OS with MCP-connected tools and workflows
- AI-native PMs “think in prompts,” translating outcomes into steps and selecting the best AI+tool combination to execute them.
- An “operating system” mindset replaces a loose tool stack by using a central hub (Claude Desktop or Cursor) connected to systems of record (Jira/Confluence/Figma/GitHub/CMS) via Model Context Protocol (MCP).
- Live demos show AI performing real work across tools—querying/updating a CMS, comparing PRDs to designs, and pulling knowledge from Confluence and Figma without opening those apps.
- For prototyping, the workflow emphasizes fast concept-to-code loops (Google AI Studio → export to GitHub → iterate in Cursor) while using Figma Make mainly for design variations and edge-case states.
- Research is treated as “context gathering,” with Manus preferred for traceable, multi-asset outputs and controllable ingestion into the main OS to avoid polluting model memory.
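The hub-plus-MCP setup described above is typically wired through the client's MCP configuration file. A minimal sketch for Claude Desktop's `claude_desktop_config.json` follows; the `mcpServers`/`command`/`args`/`env` shape is the client's standard config format, but the server package name and token variable are hypothetical placeholders, not names confirmed by the talk:

```json
{
  "mcpServers": {
    "sanity": {
      "command": "npx",
      "args": ["-y", "sanity-mcp-server"],
      "env": { "SANITY_API_TOKEN": "<your-token>" }
    }
  }
}
```

Each entry launches one MCP server as a subprocess; the hub (Claude Desktop or Cursor) then exposes that server's tools inside the conversation, which is what lets the demos query and update external systems without tab-switching.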
IDEAS WORTH REMEMBERING
5 ideas
Treat AI as a workflow hub, not just a chat tool.
They argue the real leverage comes from keeping a “home base” (Claude Desktop/Cursor) and pulling data/actions from Jira/Confluence/Figma/GitHub/CMS into that interface to avoid constant tab-switching.
MCP turns “ask and do” into cross-tool execution.
With MCP and API keys, the assistant can query or modify external tools (e.g., creating a new CMS entry in Sanity) while staying in the same conversational context—often with read/write permission gating.
Use project-level context and memory intentionally.
Mike sets custom instructions per project and uses memory tooling to preserve relationships, but warns against dumping unvetted research into the core environment because model memory can anchor on the wrong assumptions.
Use Figma Make for design variation, not production-ready code.
They find Figma Make valuable for quickly generating editable, layered variations and edge-case states that designers can reuse, but not reliable enough yet for code you’d ship.
Prototype fastest by moving from AI Studio to a real dev loop.
Google AI Studio is positioned as the quickest place to one-shot prototypes and test new models, then export to GitHub/Cloud Run and continue iteration in Cursor like a standard developer workflow.
WORDS WORTH SAVING
5 quotes
AI-native PMs are actually working from what do I need to do, to what are the steps that I need to get done, to what are the best tools to get me those things.
— Mike Bal
You were actually operating with a layer of abstraction from UI.
— Mike Bal
I don't have to leave the tool that I'm in right now to get an answer for that, which is nice.
— Mike Bal
I still feel like my frustration with Claude is chat length limits, and then usage limits.
— Mike Bal
The bad thing about a lot of the LLMs is they'll pick and choose what's in the memory to really anchor themselves to.
— Mike Bal
High-quality AI-generated summary created from a speaker-labeled transcript.