The Claude Code Analytics Workflow Top AI PMs Don’t Want You to Know
Aakash Gupta and Frank Lee on how Claude Code + MCP turn analytics into end-to-end PM automation.
In this episode, The Claude Code Analytics Workflow Top AI PMs Don’t Want You to Know, Aakash Gupta and Frank Lee explore how Claude Code + MCP turn analytics into end-to-end PM automation. Frank Lee demonstrates a “vibe PMing” workflow in which Claude Code or Cursor, connected via MCP, pulls product context and analytics data to answer questions and generate deliverables end-to-end.
At a glance
WHAT IT’S REALLY ABOUT
Claude Code + MCP turns analytics into end-to-end PM automation
- Frank Lee demonstrates a “vibe PMing” workflow in which Claude Code/Cursor, connected via MCP, pulls product context and analytics data to answer questions and generate deliverables end-to-end.
- The setup centers on organizing durable product context (plans, specs, templates, meeting notes) in a GitHub-backed repo that Claude Code and Cursor can reference and update.
- Custom “Skills” (structured prompts + tool permissions) operationalize repeatable tasks like chart anomaly analysis, dashboard summaries, and feedback synthesis with consistent output formats.
- The workflow expands from quantitative analysis (charts/dashboards) to qualitative synthesis (Zendesk/Slack/surveys) and then into execution by drafting PRDs and pushing tickets or prototypes into Linear/code.
- They discuss common MCP pitfalls (too many irrelevant tools, wrong expectations) and rebut criticisms by highlighting tool optimization, improving auth/managed connectors, and dynamic tool calling to reduce context waste.
IDEAS WORTH REMEMBERING
10 ideas
Treat MCP as the data/action bridge, not the whole workflow.
MCP primarily lets models interact with external tools and data; the workflow power comes from layering Skills, good prompts, and curated toolsets on top of MCP connections.
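To make the “bridge” concrete, here is a minimal sketch of an MCP server using the Python SDK’s FastMCP helper. The server name, tool signature, and returned data are illustrative placeholders, not part of Lee’s actual setup.

```python
# Minimal MCP server sketch: exposes one tool the model can call over MCP.
# Tool name, signature, and data are placeholders for illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("analytics-bridge")

@mcp.tool()
def get_chart_series(chart_id: str, days: int = 30) -> list[dict]:
    """Return daily values for a chart so the agent can look for spikes and anomalies."""
    # A real server would call your analytics API here (Amplitude, Mixpanel, etc.).
    return [{"day": d, "value": 100 + d} for d in range(days)]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; Claude Code/Cursor launches and talks to it
```

The Skills, prompts, and curated toolsets described in the rest of this episode sit on top of connections like this one.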
Build a “product repo” as your persistent context layer.
Store roadmap notes, initiative folders, PRD templates, and terminology in markdown inside a GitHub repo so agents can reliably reference and generate consistent outputs across sessions and devices.
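One possible layout for such a repo (folder and file names below are hypothetical, shown only to illustrate the idea):

```
product-repo/
├── roadmap/              # quarterly plans and roadmap notes
├── initiatives/
│   └── checkout-revamp/  # one folder per initiative: specs, decisions, meeting notes
├── templates/
│   ├── prd-template.md
│   └── wbr-summary-template.md
└── glossary.md           # product terminology so agent outputs stay consistent
```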
Codify recurring PM tasks as Skills with heuristics and output formats.
Lee’s chart/dashboard Skills specify what patterns to look for (spikes, seasonality, anomalies), how to query (e.g., charts three-at-a-time), and a standard narrative structure for business-ready summaries.
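As a rough illustration (not Lee’s actual prompt), a chart-analysis Skill might be captured as a SKILL.md file with a short frontmatter and the heuristics and output format in the body; the exact file location and frontmatter fields depend on your Claude Code version.

```markdown
---
name: analyze-chart
description: Analyze a chart for spikes, dips, seasonality, and anomalies, then produce a business-ready narrative summary.
---

When asked to analyze charts:
1. Query charts at most three at a time to keep context manageable.
2. For each chart, look for spikes, dips, step changes, seasonality, and anomalies.
3. Cross-reference release notes or annotations around any anomaly.

Output format:
- Headline: one-sentence takeaway
- What changed: metric, magnitude, time window
- Likely causes: ranked hypotheses with supporting evidence
- Suggested next steps
```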
Automate weekly reporting by scheduling dashboard agents into where teams work.
Instead of building WBR slides manually, schedule agents to synthesize dashboards and push insights into Slack/email so meetings focus on decisions and solutions rather than reporting.
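A hedged sketch of how such a schedule could look, assuming Claude Code’s headless print mode (`claude -p`), jq, and a Slack incoming webhook; the prompt, script name, and webhook variable are placeholders:

```bash
#!/usr/bin/env bash
# weekly_dashboard_digest.sh -- run from cron (e.g. "0 7 * * 1") before Monday standup.
# Assumes the Claude Code CLI, MCP servers configured in the repo, jq installed,
# and SLACK_WEBHOOK_URL pointing at a Slack incoming webhook. All names are placeholders.
cd ~/product-repo || exit 1
digest=$(claude -p "Run the dashboard summary for last week and write a short Slack-ready digest")
jq -n --arg text "$digest" '{text: $text}' \
  | curl -s -X POST -H 'Content-Type: application/json' --data-binary @- "$SLACK_WEBHOOK_URL"
```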
Connect quant changes to qual evidence in the same agent run.
After dashboard/chart analysis, the agent can pull related feedback, release notes/annotations, experiments, or session replay signals to form hypotheses about why metrics moved.
Use feedback MCP access to turn messy inputs into actionable clusters fast.
By unifying Zendesk/Intercom/Slack/surveys (and more) and exposing both pre-processed insights and raw items, agents can produce weekly “urgent issues, bugs, top loves” reports tailored to your question.
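For example, a weekly feedback run might be driven by a prompt along these lines (wording illustrative):

```
Using the feedback MCP tools, pull last week's Zendesk tickets, Intercom conversations,
#customer-feedback Slack messages, and survey responses. Cluster them and report:
1. Urgent issues (with counts and example quotes)
2. New or recurring bugs
3. Top "loves" and feature requests
Link each cluster to raw items so I can spot-check.
```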
Route outcomes flexibly: PRDs, tickets, messages, or prototypes.
Once insights are generated, you can draft PRDs via templates, push structured work into Linear/Jira via MCP, message engineers directly, or even prototype in code with Claude Code/Cursor depending on complexity.
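For the Linear route, a typical setup step is registering Linear’s MCP server with Claude Code; the command below assumes Linear’s hosted SSE endpoint and the current `claude mcp add` syntax, both of which may differ in your environment.

```bash
# Register Linear's hosted MCP server (endpoint and flags may vary by version).
claude mcp add --transport sse linear https://mcp.linear.app/sse

# Then, inside a session, insights can be routed into execution, e.g.:
#   "Draft a PRD from templates/prd-template.md for the checkout drop-off issue,
#    then create Linear issues for the top three fixes."
```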
Manage context/tool bloat to avoid latency and degraded answers.
Loading many MCP servers adds tool descriptions and instructions into context, which can skew tool choice and slow responses; keep only relevant tools enabled and remove ambiguous tool names/descriptions.
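In Claude Code this often comes down to keeping the project-level MCP config small. A hypothetical trimmed `.mcp.json` might keep only the two servers the repo actually needs; the server names, command, and URL below are placeholders, and the exact config keys may vary by version.

```json
{
  "mcpServers": {
    "amplitude": {
      "command": "npx",
      "args": ["-y", "amplitude-mcp-server"],
      "env": { "AMPLITUDE_API_KEY": "${AMPLITUDE_API_KEY}" }
    },
    "linear": {
      "type": "sse",
      "url": "https://mcp.linear.app/sse"
    }
  }
}
```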
Have a fail-safe for long threads: export state before compaction fails.
Aakash’s tactic is to ask the agent to write a markdown “progress + next steps” file when nearing ~90% context, so work can be resumed in a fresh session if /compact can’t run.
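The handoff file itself can be as simple as a markdown note the agent writes before context runs out; a hypothetical example:

```markdown
# HANDOFF.md — session state (written before compaction)
## Done
- Analyzed signup funnel charts; found a drop at the email-verification step starting Tuesday
## In progress
- Pulling Zendesk tickets mentioning "verification email" for the same window
## Next steps
- Correlate with recent release notes; draft PRD section on remediation options
```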
MCP reliability improves with optimization and client-side advancements.
They argue criticisms (context waste, auth pain, setup complexity) are easing via managed connectors, dynamic tool calling/RAG-based tool selection, and Skills that load instructions only when relevant.
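A toy sketch of the dynamic-tool-selection idea: instead of exposing every MCP tool on every call, rank tool descriptions against the user’s question and pass only the top few to the model. The scoring below is deliberately simple (token overlap), and the tool names and descriptions are placeholders; a production version would use embeddings (RAG-style retrieval over tool descriptions).

```python
# Toy sketch of dynamic tool selection: rank tool descriptions against the question
# and expose only the top-k to the model, so unused tool schemas don't eat context.
def score(question: str, description: str) -> float:
    q = set(question.lower().split())
    d = set(description.lower().split())
    return len(q & d) / (len(q) or 1)

def select_tools(question: str, tools: dict[str, str], k: int = 3) -> list[str]:
    ranked = sorted(tools, key=lambda name: score(question, tools[name]), reverse=True)
    return ranked[:k]

tools = {
    "get_chart_series": "fetch daily metric values for an analytics chart",
    "search_feedback": "search zendesk intercom slack survey feedback items",
    "create_linear_issue": "create a ticket in linear with title and description",
    "list_releases": "list recent release notes and deploy annotations",
}
print(select_tools("why did the signup chart spike last week", tools))
```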
WORDS WORTH SAVING
5 quotes
“The most powerful thing is managing my product process in Claude Code and Cursor using a bunch of MCPs.”
— Frank Lee
“MCP… is the easiest way to connect your AI models with any external tools, action, and data.”
— Frank Lee
“At Amazon, I used to spend all of my Sundays… Now… Monday morning… [dashboards are] automatically synthesized.”
— Frank Lee
“They’re… correct. They’re too verbose. So I can go back to my prompt and say, ‘Hey, dramatically cut down the words.’”
— Frank Lee
“Sometimes people think that MCPs can do everything… [but] MCPs are easy ways for your AI to interact with external systems.”
— Frank Lee
QUESTIONS ANSWERED IN THIS EPISODE
5 questions
What does your “analyse chart” Skill prompt look like end-to-end (including the exact heuristics and the final report template)?
When would you choose Claude Code over Cursor for the same PM task, and what quality differences do you consistently observe?
How do you decide the minimal set of MCP tools to enable for a given repo to avoid tool-choice confusion and latency?
Can you share a concrete example where the agent’s anomaly hypothesis was wrong, and how you tuned tool names/descriptions or the Skill to prevent repeats?
What’s the best practice for connecting quant insights (Amplitude charts) to qual evidence (feedback, session replay, releases) without overfitting to anecdotes?