CHAPTERS
Why PMs need an AI-native workflow (and why Claude over ChatGPT)
Aakash introduces Mike Bal (Head of Product at David’s Bridal) and frames the core question: where AI tools actually belong in a PM workflow. Mike explains why he’s migrated toward Claude for reliability and depth, and hints at how to work around corporate constraints when tool access is limited.
What makes an AI-native PM: thinking in steps, prompts, and tools
Mike defines “AI-native PM” as a shift from tool-first to outcome-first work: identify the job to be done, translate it into steps/instructions, then select the best tools to execute. The biggest unlock is overcoming the mental barrier that certain tasks are “too technical.”
Operating system vs tool stack: central home base + connected apps
They distinguish an operating system from a scattered “tool stack.” Instead of constant context switching across tabs, AI-native PMs use a central hub (e.g., Claude Desktop or Cursor) that can pull from—and act on—other systems like Jira, GitHub, and docs via connectors.
Live demo: Cursor + MCP for real work (Sanity CMS example)
Mike demos Cursor as a central workspace and shows how MCP (Model Context Protocol) lets the AI assistant read from and write to external tools. He uses Sanity (a headless CMS) to query recent changes and create a new task document without ever opening Sanity’s UI.
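The wiring behind a demo like this is typically a small JSON config that tells Cursor which MCP servers to launch. The sketch below is illustrative only: the package name, env var names, and values are assumptions, not details taken from the episode.

```json
{
  "mcpServers": {
    "sanity": {
      "command": "npx",
      "args": ["-y", "@sanity/mcp-server"],
      "env": {
        "SANITY_PROJECT_ID": "your-project-id",
        "SANITY_DATASET": "production",
        "SANITY_API_TOKEN": "your-token"
      }
    }
  }
}
```

Once a config like this is in place (Cursor reads it from an `mcp.json` settings file), the assistant can call the server’s exposed tools, such as querying documents or creating new ones, as part of a normal chat session.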
Connecting more tools through MCP: databases, hosting, and shipping apps
Building on the Sanity demo, Mike describes broader MCP use cases—like database schema changes in Supabase and deploying apps using Render—without deep DevOps knowledge. They also note Claude Desktop can run similar connector workflows via settings.
Claude projects, custom instructions, and memory for multi-initiative work
Mike explains how he structures Claude Projects with initiative-specific context to keep knowledge from bleeding across domains. He uses a “memory MCP” (alongside Claude’s evolving memory features) to retain relationships over time and to intentionally pull in cross-project knowledge when needed.
Claude vs ChatGPT (again): MCP support, connectors, and reliability tradeoffs
They revisit why Claude is central: Anthropic created and supported MCP early, and Claude is preferred for deep work and writing. They acknowledge OpenAI is adding connectors, and mention “gateway” tools that let you attach MCPs and swap underlying models.
Design iteration workflow: turning static visuals into editable Figma assets (Figma Make)
They demo a practical design workflow: take a flat image/diagram and convert it into editable, layered components in Figma using Figma Make. Mike frames Figma Make less as production-ready prototyping and more as a fast way to explore variations, edge states, and flows to bring back to design.
Rapid prototyping with Google AI Studio: from idea to runnable app in minutes
Mike recommends Google AI Studio as his go-to for quick, functional prototypes and experimentation with the newest models. He highlights the developer-grade UX (context handling, iteration) and shows how a small internal tool can be built quickly, then exported to GitHub/Cursor for a normal dev loop.
Knowledge + progress in one place: Confluence + Figma gap analysis via MCP
Mike demonstrates pulling product truth from Confluence (requirements/vision) and comparing it against a specific Figma frame using the Figma MCP. The result is a “gap analysis” that flags mismatches PMs often miss when manually cross-checking docs and designs.
Research as context gathering: Manus workflows and why agent traces matter
They shift to research and context gathering: Mike uses Manus for thorough, asynchronous agentic research that outputs multiple artifacts (CSVs, reports, markdown) and exposes its trace/sources. He contrasts this with Claude Research mode, which he finds can burn limits without comparable transparency.
Manus vs Claude Research and the risk of “bad memory anchors”
Mike explains why he doesn’t default to Claude Research despite having a high-tier plan: limits and insufficient “show your work.” He emphasizes selective ingestion—choosing what enters the main operating system—to avoid models anchoring on noisy assumptions and drifting into unhelpful priors.
Email/comms automation via connectors: Gmail/Drive/Calendar and beyond
They cover communication workflows using Claude connectors (Gmail, Calendar, Drive) to retrieve context like scheduling details or documents—often faster than native search. They discuss connector availability across ecosystems and how MCP/connector UX is being simplified for less technical users.
Licenses, IT, and rollout strategy: read vs write access + usage-based alternatives
Aakash raises the cost/IT burden of many AI tools. Mike suggests pragmatic governance: start with read access, use personal/free tiers for non-sensitive ideation, and expand privileges as teams demonstrate value; where possible, prefer usage-based billing via API keys over multiple subscriptions.
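The usage-based-billing argument can be made concrete with back-of-the-envelope arithmetic. All prices below are illustrative placeholders for the sake of the comparison, not current list prices for any vendor:

```python
# Rough comparison: per-seat subscription vs. usage-based API billing.
# All prices are illustrative assumptions, not real vendor pricing.

SEAT_PRICE_PER_MONTH = 30.00        # hypothetical per-seat subscription (USD)
PRICE_PER_1K_INPUT_TOKENS = 0.003   # hypothetical API input price (USD)
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # hypothetical API output price (USD)

def monthly_api_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate one user's monthly API spend from their token volume."""
    return (input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
            + output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS)

def cheaper_on_api(input_tokens: int, output_tokens: int) -> bool:
    """True if usage-based billing beats a seat at this volume."""
    return monthly_api_cost(input_tokens, output_tokens) < SEAT_PRICE_PER_MONTH

# A light user (occasional ideation) vs. a heavy daily user:
light = monthly_api_cost(200_000, 50_000)       # roughly $1.35
heavy = monthly_api_cost(5_000_000, 1_000_000)  # roughly $30
```

Under these assumptions, light or occasional users are dramatically cheaper on API keys, which is why the read-access-first, expand-as-value-appears rollout pairs naturally with usage-based billing.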
PM lifecycle coverage + common AI mistakes (and how to sell access internally)
Mike argues AI can help across the entire PM lifecycle—research, validation, writing, design checks, ticket quality, and delivery monitoring—so long as PMs stay intentional and skeptical. They close with common failure modes (overprompting, lazy ingestion, no gut-checking), plus a practical pitch to leadership focused on velocity and impact.