Aakash Gupta
The Claude Code Analytics Workflow Top AI PMs Don’t Want You to Know
EVERY SPOKEN WORD
60 min read · 12,089 words
- 0:00 – 1:37
Intro
- AGAakash Gupta
What is the most powerful AI workflow you've seen?
- FLFrank Lee
I think the most powerful thing is managing my product process in Claude Code and Cursor using a bunch of MCPs.
- AGAakash Gupta
What does that mean in plain English?
- FLFrank Lee
So basically, you can surface all of your product context into your agent or tool of choice.
- AGAakash Gupta
What can you do with this?
- FLFrank Lee
Oh, man, I think you can do so many things. You can analyze charts, you can automate your weekly reporting, you can synthesize a huge amount of qualitative feedback, you can turn all of those insights into a bunch of specs, and then you can even push them to Claude Code or Cursor to start prototyping directly in code.
- AGAakash Gupta
So I've got one of the heads of AIPM at Amplitude, the principal PM in charge of their agents and MCP products here today, and we're gonna walk you through everything you need to know to master analytics in Claude Code.
- FLFrank Lee
I automate a lot of my weekly business reporting using a bunch of dashboard agents. I can easily analyze, run successive queries, pull in a bunch of data, and just run a bunch of analysis on my product data.
- AGAakash Gupta
Wow.
- FLFrank Lee
They made this conscious decision on, like, how do we make the most powerful agent, or what are the most powerful features and set of context they can provide the model so that people can code in ways that they've never been able to do before.
- AGAakash Gupta
What is the biggest mistake people make when using MCP? Before we go any further, do me a favor and check that you are subscribed on YouTube and following on Apple and Spotify podcasts. And if you want to get access to amazing AI tools, check out my bundle, where if you become an annual subscriber to my newsletter, you get a full year free of the paid plans of Mobbin, Arise, Relay app, Dovetail, Linear, Magic Patterns, DeepSky, Reforge Build, Descript, and Speechify. So be sure to check that out at bundle.aakashg.com, and now into today's episode.
- 1:37 – 1:47
Guest Introduction
- AGAakash Gupta
Frank, welcome to the podcast.
- FLFrank Lee
I'm stoked to be here. Thanks for having me.
- AGAakash Gupta
My pleasure. What is the most powerful AIPM workflow you have seen people use
- 1:47 – 3:45
Most Powerful AIPM Workflow
- AGAakash Gupta
these days?
- FLFrank Lee
It's gotta be managing your whole product process out of Claude Code, Cursor, and hooking it up to some product analytics providers like Amplitude's MCP.
- AGAakash Gupta
What does connecting Claude Code via MCP really mean in plain English?
- FLFrank Lee
Okay, so there's two different tools. Claude Code, it's a terminal-based coding agent with access to a bunch of different Claude models. Um, but it has some pretty cool stuff like MCPs, Skills, a bunch of other actions. MCP, it stands for Model Context Protocol, and the simplest way to put it, it's the easiest way to connect your AI models with any external tools, actions, and data. So we've used MCP a lot at Amplitude to connect a bunch of tools and data that we have to a bunch of external AI clients. So big ones include Claude, Cursor, ChatGPT, and we use that to power a bunch of pretty complex coding and product workflows.
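To make the "connect models with tools, actions, and data" idea concrete, here is a minimal conceptual sketch in Python. It is not the real MCP SDK or Amplitude's server; the tool names (`query_chart`, `search_dashboards`) and handlers are hypothetical stand-ins for the pattern: each server advertises tools as a name plus a description, the model picks one by matching the request against the descriptions, and the client executes the call.

```python
# Conceptual sketch of the MCP pattern (NOT the real MCP SDK):
# a server exposes tools as name + description + handler, and the
# agent client executes whichever tool the model decides to call.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str          # what the model reads to pick a tool
    handler: Callable[..., object]

# Hypothetical tools an analytics MCP server might expose.
REGISTRY = {
    "query_chart": Tool(
        "query_chart",
        "Fetch the underlying data for a chart URL",
        lambda url: {"url": url, "points": []},
    ),
    "search_dashboards": Tool(
        "search_dashboards",
        "Find dashboards matching a natural-language query",
        lambda q: [f"dashboard matching {q!r}"],
    ),
}

def call_tool(name: str, *args):
    """What the agent client does once the model emits a tool call."""
    return REGISTRY[name].handler(*args)
```

The real protocol adds transports, schemas, and per-tool instructions (which, as Frank notes later, is exactly what eats into the context window), but the shape is the same.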
- AGAakash Gupta
So what does this setup allow PMs to do?
- FLFrank Lee
So this setup lets you do a lot of things. So I automate a lot of my weekly business reporting using a, a bunch of dashboard agents. I can easily analyze, uh, run successive queries, pull in a bunch of data, and just run a bunch of analysis on my product data. I can deeply investigate different trends that are happening in charts. So where I used to be creating a bunch of manual charts, the model is doing all of that for me now. It's really easy for me to explain why my metrics are moving, 'cause I can pull in all of the underlying feedback data to try to hypothesize about what's, what, why things are changing. I can drill into specific account health o- of our users, and also some timelines about what people are doing, uh, within our product, just to see, like, what their behaviors are like. I've been actually using these models to draft a bunch of PRDs based off of this analyzed product context that MCP makes so accessible in Claude Code. And then if you're really adventurous, you can actually take some of those insights that you're generating, convert them into PRDs. If you wanna route them to a human, uh, you can push them into Jira or Linear via MCP. Um, but if you're really daring, you can also push them directly into Claude Code, Cursor, and try to draft some prototypes on your own.
- AGAakash Gupta
Okay, so it's really the gateway drug to all of the 10X AIPM workflows that people talk about these
- 3:45 – 6:28
Setting Up Claude Code + MCP
- AGAakash Gupta
days. Before we really get into this, can you show us how to set all this up?
- FLFrank Lee
All right, sounds good. Let me walk you through how this is set up. So I'm actually gonna jump to Cursor. If you s- see, like, this repo, I've spun up my own Amplitude product repo. So on the left-hand side, I have a bunch of context that I've already actually aggregated about all the different product lines that I work on. So for example, we've been working on agents, AI feedback, Ask AI, like, a lot of our enterprise control features, MCP. So I have, um, specific folders and contexts and files detailing about some of the initiatives that we might have, like, within our, like, our roadmap, um, some of the specs that I might be thinking about, and also just a bunch of these different learnings and, like, verbiage that refer to, like, the specific parts of this product. So within Claude Code or within Cursor, I can easily refer to some of those pieces of context, brainstorm about it, draft new spec. But I think the most interesting thing from the most recent changes in Claude Code and, uh, Claude, iterating on this, like, new protocol called Skills. Um, so I'll just jump into a specific example, but I've created this specific skill. And the way that I simply think about this is it's a clear, like, name about what this skill is. There's a little description about, like, what this skill is and, like, when to perform it, and so that's some metadata that the models can use to determine when to use that function. But at the simplest nature, this is just a prompt. There's a prompt about what is this specific task that I'm accomplishing, what is the set of heuristics or steps that I might take. Uh, and then also you can configure what set of tools you might use that is connected to MCP in order to accomplish that objective.
- AGAakash Gupta
Mm.
- FLFrank Lee
And so, like, we, we actually had this model, like, this tool release recently called, like, Auto Insights, which I can point it to a specific chart. We'll pull, like, anywhere from, like, 20 to 30 charts that are similar to that based on, like, relevant properties, and then we'll do a bunch of analysis, try to find the specific segments or the specific, like, group-bys, and then try to figure out why, why things are changing, hypothesize about, like, what the biggest drivers are. And for a human, that would've taken hours for me to spin up all these charts. We've actually replicated a lot of that using this Claude skill. So I, I taught, like, this model basically, "Hey, look at this chart. Pull this data using Amplitude's MCP from the URL." And basically, "Here's some heuristics about, like, how to identify and, like, look at specific patterns. So look for spikes, look for seasonality, look for anomalies." And then basically taking some of that context, trying to look at other tools, like within our feedback, within our releases, within our annotations on, like, some Amplitude data. Can I explain that analysis? And then I give it a format. Like, tell me what happened, when this occurred, what is your primary hypothesis about what changed, some plausible explanations, uh, your evidence, and then what this means for us as a business. And so I'm gonna jump into Claude Code and terminal real quick, where I c- I can kind of show you, uh, an example of, like, where, where this ran.
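Anthropic's Skills format is a `SKILL.md` file: YAML frontmatter with a `name` and a `description` (the metadata the model uses to decide when to invoke the skill), followed by the prompt body. The file below is a reconstruction from Frank's description of his chart-analysis skill, not his actual file; the name, heuristics, and report format are illustrative.

```markdown
---
name: analyze-chart
description: Deep-dive a single Amplitude chart URL to explain a spike
  or dip. Use when the user pastes a chart link and asks why a metric
  changed.
---

1. Parse the chart URL and pull its data via the Amplitude MCP.
2. Pull 20–30 related charts (shared events, properties, segments).
3. Look for spikes, seasonality, and anomalies; break the metric down
   by the most promising properties and group-bys.
4. Cross-reference feedback, releases, and chart annotations.
5. Report: what happened, when it occurred, primary hypothesis, other
   plausible explanations, supporting evidence, and what it means for
   the business.
```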
- AGAakash Gupta
All right, so I have sort of two separate follow-up questions here.
- FLFrank Lee
Yep.
- AGAakash Gupta
Let's first start with the context. So
- 6:28 – 9:41
Context Management in Claude Code
- AGAakash Gupta
through trial and error, what have you learned is the right context you should be including here for Claude Code? Because I know there's always this concern about just using up your context window also. So there is a concept of too much context, but what is the right amount of context, and how do you organize that so that Claude Code can ingest it well?
- FLFrank Lee
Yeah. I actually don't worry about it too much because you basically can refresh a lot of the context every time you start a different session, right? Um, so within Claude Code, you can, like within Terminal, if I feel like I'm running out of context, I can just exit out and then basically it'll refresh the context.
- AGAakash Gupta
Mm.
- FLFrank Lee
Similarly, with, uh, in Cursor, if I wanted to spin up a new agent, I could just jump to the top, like, right, spin up a new tab, and it basically refreshes, like it forgets what was happening in like a prior thread, and then I have a fresh set of context in order to do that.
- AGAakash Gupta
Mm.
- FLFrank Lee
That being said, there is a bit of context management that you do have to think about. One of the trade-offs of MCP is that a lot of the tools that you might have loaded... So let me show you an example. I have Amplitude's MCP, uh, hooked up, so there's a bunch of tools that are here. Uh, same with Linear, I have that hooked up. Um, so the, the context from what tools are available, the names, the descriptions. There's also some instructions tied to each MCP on like what's the specific format that, um, like this MCP uses, or how do we want the data to be used, or like some preferences on when or how the tool should get used. So that does fill up some of the context depending on what MCPs you have loaded. That being said, most folks don't have a huge amount of MCPs loaded up. I think even with Amplitude and Linear loaded up, I might be using up anywhere from five to ten percent of my context. So I haven't actually run into that many limitations when doing some of, like, this product-specific analysis.
- AGAakash Gupta
Mm.
- FLFrank Lee
That being said, if you are doing a really long analysis, so for example, pulling in a huge amount of chart data, doing a bunch of successive queries, pulling in huge reams of feedback, sometimes you might run into context issues. Um, the one benefit on Claude Code is when you start running out of context, they have this feature called compaction, where it starts trying to summarize some of what happened in that thread, consolidates that into some smaller summary, and then lets you move on given the fact that you had, like, reached the previous set of context. And then there's also just like a pretty big, like, bitter lesson take on it. You can worry about, like, context management now. The reality is in, like, the next few months, a lot of the new models release, they'll probably expand their context windows or have some different mechanisms within the agent client to, to handle some of that complexity for you. So just build for the future and, like, think about pulling in, like, what, what's most relevant to give your agent, like, the best types of responses.
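The compaction behavior Frank describes can be sketched in a few lines. This is a toy model, not Claude Code's implementation: the real feature uses the model itself to summarize the older turns, where here a stub string stands in for that LLM call, and the 4-characters-per-token estimate is a rough rule of thumb.

```python
# Toy sketch of conversation compaction: when the transcript nears its
# token budget, fold the oldest turns into a summary and keep the most
# recent turns verbatim. A stub string stands in for the LLM summary.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return max(1, len(text) // 4)

def compact(turns: list[str], budget: int, keep_recent: int = 2) -> list[str]:
    """Return the turns unchanged if they fit, else summarize the old ones."""
    if sum(estimate_tokens(t) for t in turns) <= budget:
        return turns
    old, recent = turns[:-keep_recent], turns[-keep_recent:]
    summary = f"[summary of {len(old)} earlier turns]"  # stub for an LLM call
    return [summary] + recent
```

Aakash's markdown-handoff hack in the next exchange is effectively a manual version of this: when compaction itself fails, you ask the agent to write the summary to a file you control.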
- AGAakash Gupta
Yeah, you can use the /compact command. They actually show you how much percentage of your context window you're using up in Claude Code now. So I recommend for people, when you're hitting that ninety percent or something, don't just rely on the /compact. Actually, I ran into this problem at about one AM last night where it said, "Conversation too long, can't compact." And so my newest sort of hack is when I'm at like ninety percent or so, because I don't want it to just fail at compacting, I will say, "Write a markdown file of your process and your progress and what you have left to do." That's a good way also to kind of transfer the context over if you need to. So you answered that we can put almost as much context as we want. So what context... Like, can you walk us through specifically, like what sort of methods are you using to get context, like meeting notes and stuff in there, and how have you structured those folders and files in here?
- FLFrank Lee
Yeah. Sounds good. At the simplest, like I have a product folder.
- AGAakash Gupta
Today's
- 9:41 – 11:08
Ads
- AGAakash Gupta
episode is brought to you by Amplitude. Replays of mobile user engagement are critical to building better products and experiences, but many session replay tools don't capture the full picture. Some tools take screenshots every second, leading to choppy replays and high storage costs from enormous capture sizes. Others use wireframes, but key moments go missing, creating gaps in your understanding. Neither approach gives you a truly mobile experience. Amplitude does things differently. Their mobile replays capture the full experience, every tap, every scroll, and every gesture, with no lag and no performance hit. It's the most accurate way to understand mobile behavior. See the full story with Amplitude. Today's podcast is brought to you by Pendo, the leading software experience management platform. McKinsey found that seventy-eight percent of companies are using gen AI, but just as many have reported no bottom-line improvements. So how do you know if your AI agents are actually working? Are they giving users the wrong answers, creating more work instead of less, improving retention or hurting it? When your software data and AI data are disconnected, you can't answer these questions. But when you bring all your usage data together in one place, you can see what users do before, during, and after they use AI, showing you when agents work, how they help you grow, and when to prioritize on your roadmap. Pendo Agent Analytics is the only solution built to do this for product teams. Start measuring your AI's performance with Agent Analytics at pendo.io/aakash. That's P-E-N-D-O.I-O/A-A-K-A-S-H.
- 11:08 – 19:29
Top 5 Use Cases for PMs
- FLFrank Lee
Um, so these are some different product lines that I'm working on. So agents, AI feedback, Ask AI, enterprise features, MCP. So within each of them, I might have some context about what is our, like, Q1 plans. I won't show it 'cause there, there's a bunch of stuff there, but-
- AGAakash Gupta
[chuckles]
- FLFrank Lee
... for plans, like we're thinking about forms of memory, like ambient agents, different types of agents that we're thinking about. So these are basically specs that I've converted into markdown files that I can refer to pretty easily using like the at-command, uh, if I wanted to brainstorm from a given sp- like PRD or just say, "Hey, we already have a bunch of planning within, like, our Q1 plan. Can you actually just fetch all of the specific items and then turn those into drafts of PRDs?" Um, so as an example, I have a few templates actually for some, like, pretty recurring types of tasks I might do. So-
- AGAakash Gupta
Mm
- FLFrank Lee
... like for example, draft short PRD. So give me like a sense of the problem context. Talk through, like in narrative form, what a solution or workflow might look like. Here's how I want you to format or break down that response. Here's some heuristics on like how do you think and strategize about what to build. Uh, so there's some of, like some light heuristics that I typically might have on selecting features, emphasizing simplicity, like it's okay to say no. Here's how to think about some of the design, like principles for some of these features. Uh, and then taking some of those requirements that might be in the doc or might even live elsewhere and turn that into acceptance criteria here. Um, so that's just a sample. But if, for example, in Cursor, there's like, like draft, like, "Draft a quick PRD using the PRD template." Um, so that it has, it pulls this into context immediately, has, knows what, like, my, the format I'm looking at, and then I could talk about, "Hey, talk about this specif-specific topic," or refer to this doc where we have a bunch of ideas and, like, structure that into a better format. Uh, so that's, that's one example. I guess the other thing, you touched a little bit on Granola, and I've been playing around with this. I use Claude Code actually to try to create some of these light little automations or connections that I prev-previously wouldn't have. So my understanding as of now is Granola d- actually does not have a dedicated MCP right now, so I, I try to hack Claude Code to, to build some type of automation so I can adjust some of my calls. So there's, if you see here, there's some, like, notes, say like the year, month, uh, and then there's, there's some, like, meetings that I've, I've had. And so it tries to inject some of, like, the, the summarized notes there. So this, for example, this is like our MCP standup that happened. I basically could run a command, "Can you pull in my recent, um, Granola notes using the script we wrote?" 
Um, let's hope this works. I, I, I haven't run this in the last two days, but basically, like, that's, that's also a part of my product repo where we have... I use Claude Code to create the script, and it's gonna run, like, this command terminal. And then, yeah, you see there's a bunch of these meeting notes that, like, were, were previously ingested that auto get dumped in. Um, so if you want, you can refer to, like, the whole folder. So if I'm like, "Hey, @2026," um, or that's like a specific one, but, like, the @202601 folder here, you immediately pull all of those notes in context. If I wanted to say, "Summarize the main takeaways, give me the specific action items," I can put that into another folder. But basically all this stuff is immediately accessible if you need it to, 'cause it's all, like, consolidated in that one system.
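Frank's actual script isn't shown, so here is a hedged sketch of the kind of thing he describes Claude Code writing: since Granola has no dedicated MCP (per the transcript), a small script files exported meeting notes into `notes/<YYYYMM>/` folders that you can then pull into context with an @-reference like `@202601`. The export directory layout and date-in-filename convention are assumptions for illustration.

```python
# Hedged sketch (assumed file layout): file exported Granola .md notes
# into notes/<YYYYMM>/ folders so an agent can @-reference a month of
# meetings at once, e.g. @202601.

import re
import shutil
from pathlib import Path

# Assumes exports are named like 2026-01-15-mcp-standup.md
DATE_RE = re.compile(r"(\d{4})-(\d{2})-\d{2}")

def file_notes(export_dir: Path, repo_notes: Path) -> list[Path]:
    """Copy exported .md notes into year-month folders; return new paths."""
    filed = []
    for note in sorted(export_dir.glob("*.md")):
        m = DATE_RE.search(note.name)
        if not m:
            continue  # skip files without a parseable date
        dest_dir = repo_notes / f"{m.group(1)}{m.group(2)}"
        dest_dir.mkdir(parents=True, exist_ok=True)
        filed.append(Path(shutil.copy2(note, dest_dir / note.name)))
    return filed
```

Once the notes land in the repo, "summarize the main takeaways from @202601" works the same as any other folder reference.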
- AGAakash Gupta
Amazing.
- FLFrank Lee
Yeah.
- AGAakash Gupta
Love this use case. I think most people should be using this specific one. So how do you get around the missing MCP? He created this little function or skill or script in this case that he referred to with Claude Code so that he could access those Granola meeting notes. And how are you using Obsidian and Conductor in your workflow?
- FLFrank Lee
To be quite honest, not too much. Um, I think those are powerful if there are, like, specific nuances on, for example, Conductor, right? So I, I have it pulled up. Conductor is, like, a great tool when you have a bunch of coding agents that you want to run in parallel. The problem is oftentimes if you have one branch of your GitHub and you're trying to push changes, if you kick off a bunch of, like, Claude Codes in, like, the same branch, oftentimes they might have a bunch of changes that conflict with each other. They don't really, they're not really talking to each other. Um, so the thing that they nailed on Conductor was they made use of this feature called a Git worktree, which essentially copies your repo into a separate, like, worktree. Uh, so then you can run multiple Claude Codes in parallel branches. They won't conflict, and basically y- you can decide to change, like, to merge some of those changes, like, together. If you're c- like, going into the huge amount of, like, agent orchestration workflows, if you're running, you're trying to run five to 10 of these coding agents in parallel, Conductor's probably the best in-class tool that I've seen for that. I know there's a lot of excitement in the industry on what this orchestration, like, framework looks like, but this is the prettiest and best that I've seen so far. But for just a call out, for specifically product management, I, I don't really need too many tools. Like, it's not a big deal for, like, this model to conflict. Like, if I'm kicking off a task, like, usually I'm, I want it to create one PRD here for this team, or I want you to do an analysis here. So there's not as much of an issue where, like, the code conflicts, uh, within this branch. And so it's, it's honestly probably easier to just do it directly within different tabs in terminal instead of having to spin up a bunch of Git worktrees.
But if you're doing specifically these coding tasks, trying to do it all within the same branch is basically just going to lead to a huge amount of messiness.
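The worktree mechanism Frank credits Conductor with is plain `git worktree` underneath, so you can get the one-checkout-per-agent isolation yourself. The sketch below (hypothetical repo and branch naming) creates one worktree per task, each on its own fresh branch, so parallel agents can't clobber each other's checkout:

```python
# Sketch: one git worktree per parallel agent, each on its own new
# branch, so coding agents working in parallel don't conflict.
# Branch/path naming is illustrative.

import subprocess
from pathlib import Path

def make_worktrees(repo: Path, tasks: list[str]) -> list[Path]:
    """Create ../<repo>-<task> worktrees, each on a new agent/<task> branch."""
    created = []
    for task in tasks:
        path = repo.parent / f"{repo.name}-{task}"
        subprocess.run(
            ["git", "-C", str(repo), "worktree", "add",
             "-b", f"agent/{task}", str(path)],
            check=True,
        )
        created.append(path)
    return created
```

Each returned path is a full working copy sharing the same object store, which is why merging the agents' branches back together afterwards is cheap.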
- AGAakash Gupta
And where does Obsidian come into the workflow?
- FLFrank Lee
Yeah. So it's, it comes down to personal preference. Obsidian basically has access to your file system the same way that Cursor also does. If you hate staring at markdown, then Obsidian is, like, a pretty wrapper to, to directly make edits. Like, it's more akin to, like, a typical text editor. But me personally, I love playing around with markdown specifically. I, like, spend a lot of time coding myself, and, like, it reads pretty well to me, so it's kind of just nicer to, to have the same tool where I can push code, push changes directly, and use the agents natively using Cursor or within Claude Code.
- AGAakash Gupta
Yeah. And you can also just double-click or right-click on any of these MD files and hit Open Preview, or I hit Apple-Shift-V, and you can look at these previews.
- FLFrank Lee
Right.
- AGAakash Gupta
So I feel like the Obsidian hack, it's not really necessary. So-
- FLFrank Lee
Yeah
- AGAakash Gupta
... we got people to set it up. What are the top use cases of Claude Code and MCP with analytics for product managers and product analysts?
- FLFrank Lee
I think there's five things that I'm doing really heavily. So first one, anytime there's an anomaly or some type of big change in my metrics, I'm using some skills and tools within Claude Code in our MCP to analyze and, like, do some deep chart analysis. So I drop in a link, and it, basically the agent can figure out exactly what specific segment or what little group by or what specific related metric might hypothesize and lead to that change. So that saves me a bunch of time. I think the second thing is I've automated basically all of my, like, weekly business reviews and dashboard reporting. At Amazon, I used to spend all of my Sundays just coming up with, with our metrics and explaining why things are happening. Now I've pointed this to Claude Code and some of these, like, a- Amplitude dashboard agents, and Monday morning I come, all of the five to six dashboards I look at are automatically synthesized. I know exactly the three to five top insights for me, and I know the one specific urgent issue, um, to tackle with my team. So don't need to analyze, like, the dashboards anymore. I can just start, like, focusing on solutions with the team. And the third one is doing a bunch of deep qualitative customer feedback analysis. With Amplitude's MCP and the AI feedback product, all of our tickets from Zendesk, Slack, Amplitude surveys, that's all unified in one place now. So I can just point Claude Code, tell it to analyze the feedback related to AI, agents, MCP, and then the agent will navigate all of our tools, consolidate all the data, do its analysis and clustering, and then give me a clean report of exactly what the urgent issues are, what the bugs are, uh, and then what's the single most thing that people love about our product that week. Fourth one, I spend a lot of time taking those insights coming in from those dashboard agents or, like, this Claude Code chart analysis. 
And then given this context, can you think about what's the right set of actions or plan to do to r-resolve this space? Um, so I've given this agent some heuristics on how I think about coming up with solutions, and the agents, the agent can go at length, explore a bunch of different ideas. I can drop in some images, and then it can just turn that into a spec so that I don't have to write it anymore. So Opus is, like, the best thought partner that I've ever had on com-coming up with some of these, like, design and product ideas. And then lastly, I've kind of totally reshaped, like, how I think about what I route to the team to build versus what I just do myself now. I can take these PRDs or take some of these problem statements that the agent had surfaced. If it's easy, I just drop it into Claude Code or, or Cursor directly, and then it could pre-draft some of the solutions for me while I'm in a meeting. Um, but if it's a really tough problem, I can go ahead and hit the linear MCP and say, "Hey, I want to route this to the AI capabilities team or, like, the MCP team. File it under, like, this specific project, and, like, assign it to this engineer, and then assign it to Cursor as, like, a courtesy just to hopefully, like, soften the, the workflow on their end." So these are the five things I'm h- doing the most frequently.
- AGAakash Gupta
I love these. This is the new vibe PMing workflow, these five use cases. Can you demo each of these for us?
- 19:29 – 30:48
Automating Dashboard Reporting
- FLFrank Lee
So just for context, this is one chart that I'm looking at on the MCP side. We're curious about, like, what is the average number of tool calls that might happen, uh, in, in a given week? And I'm seeing this anomaly where, like, the average number of tools being hit is 8.1 for this week, which is quite different. It's something, like, that's worth investigating, 'cause I need to figure out how do we manage the right tools, the descriptions, and make sure nothing's broken, right? So I can take this link. If I jump into Claude Code, like I said, I had a bunch of these different skills that I had. So there was the analyse chart skill, which I'll just preview real quick for you, uh, here. Analyse chart. So remember, there's a name, description, and a bunch of heuristics on what to do about it. If I do the slash command, basically that can trigger that specific skill for you.
- AGAakash Gupta
Yeah.
- FLFrank Lee
So analyse this chart and explain why, uh, there's a jump this week. So stepping in here. So this might take a bunch of time, but now the agent is connected to the Amplitude MCP. Um, so you can see here, it's parsing the URL. It's starting to pull in some data on the chart, and then we have some tools that lets you navigate the data. Um, so there's chart data. We can query the underlying data set, and then basically the model starts showing its work on how it's navigating your taxonomy, how it's navigating relevant charts. Maybe it's navigating the specific events and properties related to that. So this might take a few minutes and, but basically you, it's, you can see the agent reason about which charts it's looking at, which events it's looking at, or what are some hypotheses about, like, where this might change.
- AGAakash Gupta
Mm-hmm.
- FLFrank Lee
And so it's navigating the properties. It's breaking it down by, like, what it thinks might be that spike. Now it's pulled in a different chart. And if you think about the human equivalent of this, it's the same as me having five different versions of this chart. I had to go, like, manually group and cut this, um, by a specific property that I have in my head, or I might have a, create a separate chart, uh, where I, I look at a different event that's, like, somewhat related to it. Or, like, there's a bunch of this manually, manual, like, iteration on the GUI that I had to do, like creating a bunch of different of these charts. And now the agent is doing all of this just via, via terminal. So you can see here, this is the format that I, I specified for it, but it says, "Here's the chart, here's the time range I looked at, um, here's the metrics that it's analysed," and it, it walks you through, like, that a spike happened, when it happened, the underlying data that led to that, and then some hypotheses about it. So detected that there's a change within, like, the types of tools that we had, uh, analysed, uh, enabled. It has access to our experiments and feature flags data, so it saw that these, there is some changes within what feature flags we had enabled, uh-
- AGAakash Gupta
Mm-hmm
- FLFrank Lee
... which provides different tool access. And then, so you see some flag activity. There's some differences in historical precedents, and then hypothesises about something else that might have changed. Maybe it's data quality, a different power user. So basically, like what I would've had to do manually to investigate and explain what's happening, the agent just, I think it literally did this in, like, a minute and a half, and it, this would've taken me probably, like, one to, like, one to three hours, depending on how complex the query was.
- AGAakash Gupta
Yeah. And that's with having a background as a data analyst. If you're just-
- FLFrank Lee
Yeah
- AGAakash Gupta
... not having that background, it might take you half a day.
- FLFrank Lee
Yeah. And, and that's assuming you know th-which, which properties to look at. Like, basically because, at least on the Am- the Amplitude MCP side, we've done a huge amount of, like, configurations on how do you simplify the names of the tools and the descriptors, or how do we improve our underlying search infrastructure, or how do we capture the right context about when to use a specific chart, or when to use a specific dashboard, or when to use a specific event, and that context is all provided, um, to the agent, like, via these MCP tool calls, so that it can more accurately, like, navigate that data for you. And so if you compare this to an expert of the taxonomy, it's probably the difference of this running in one and a half minutes versus them spending 10 minutes navigating the chart. If this was a person that did not know that taxonomy, they would've either gotten the wr-wrong answer, or they would've had to spend, like, two days chasing down the right person in the org that could tell them what to look at. This agent basically was able to figure it out itself, given a lot of this, like, underlying, like, semantic infra.
- AGAakash Gupta
Very powerful. So deep chart analysis, I think I understand this use case. Really, the powerful thing here is you write yourself a skill and teach it how to navigate the MCP and make the right tool calls.
- FLFrank Lee
Yep. Exactly.
- AGAakash Gupta
Totally understand this workflow. What's the next workflow? Well, that was around automating dashboards, right?
- FLFrank Lee
Yeah. The next one's around au-automating dashboards. So similarly, I have an analyse dashboard skill. So here's when I use it, meeting prep, dashboard investigations, connecting quant to qual. And I can show, like, I instructed the agent to look for the URL, query the underlying charts, fetching them, like, at three at a time, 'cause there's some context limitations, like, when you're interacting with some of these tools. Look for these specific things. Like, if it's a KPI chart, like, look at this and the percentage change. If it's a bar chart, look at concentrations or gaps. So teaching them some heuristics on, like, how I as a human would've thought about, like, this data.
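The per-chart-type heuristics Frank mentions (percent change for KPI charts, concentrations or gaps for bar charts) are simple enough to sketch directly. This is an illustrative Python version of those rules, not Amplitude's skill; the 20% investigation threshold and 50% concentration cutoff are made-up numbers for the sketch.

```python
# Illustrative versions of the dashboard heuristics described above:
# KPI charts get a week-over-week percent change check, bar charts get
# a concentration check. Thresholds are arbitrary, not Amplitude's.

def kpi_change(prev: float, curr: float) -> str:
    """Flag a KPI whose week-over-week change exceeds 20%."""
    pct = (curr - prev) / prev * 100
    flag = "investigate" if abs(pct) >= 20 else "normal"
    return f"{pct:+.1f}% week over week ({flag})"

def bar_concentration(counts: dict[str, int]) -> str:
    """Flag a bar chart where one category holds at least half the volume."""
    total = sum(counts.values())
    top, top_n = max(counts.items(), key=lambda kv: kv[1])
    share = top_n / total
    if share >= 0.5:
        return f"concentrated: {top!r} holds {share:.0%} of volume"
    return "spread evenly across categories"
```

In the skill file these rules live as prose instructions rather than code; the point is that you are handing the agent the same per-chart-type checklist a human analyst would run.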
- AGAakash Gupta
Mm-hmm.
- FLFrank Lee
Uh, and then what's cool is, like, now that the, this agent has analysed a bunch of charts and figured out, like, what's happening in the dashboard, I, I pointed it to, "Hey, go, go pull the feedback data recently and see if there's anything relevant there." Or we don't have it here, but, like, we also have access to, like, session replays or experiments. So basically, depending on what data you have and, like, whether it's relevant to a specific dashboard, you can instruct it to also query that as part of its agentic workflow. Uh-
- AGAakash Gupta
Mm-hmm
- FLFrank Lee
... and then here's the takeaway of how I wanted it presented, just personal preference on what I think is helpful, some best practices.
- AGAakash Gupta
Mm-hmm.
- FLFrank Lee
But in reality, let me show you. So I'll start over, just given the context piece, but here's Claude: analyze dashboards. Let's pull in... So we're talking about MCP, so: can you give me a TLDR on my external MCP usage dashboard? And this is different than last time, 'cause last time I gave it an explicit URL-
- AGAakash Gupta
Mm-hmm
- FLFrank Lee
... and this time I'm giving it a natural language query, right? So if you are an MCP PM, you have to spend a lot of time thinking about this: a lot of folks are gonna give you pretty ambiguous natural language questions. How do you take those queries and teach the model to use your search infrastructure or navigate your data so that it pulls the right thing? I have a dashboard that's called "external MCP usage dashboard," so it's kind of cheating. But if I said, "Hey, pull the MCP dashboard," you can teach it on the tools: "Here's how you think about using this search endpoint," or "Here's how to filter the data properly," or "Run a few different searches and then reason about what's the best response." So there's a lot of different little experiments you can do to optimize the tool.
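The "run a few different searches, then reason about the best response" pattern can be sketched with fuzzy matching. This is a stand-in for whatever search infrastructure the real MCP server uses; the dashboard names and query variants are invented for the example.

```python
import difflib

# Pretend saved-dashboard index.
DASHBOARDS = ["external MCP usage dashboard", "weekly growth dashboard"]

def best_dashboard(query: str) -> str:
    """Fan the fuzzy query out into a few variants, score every
    (variant, dashboard) pair, and keep the best-scoring dashboard."""
    variants = [query, f"external {query}", query.replace("dashboard", "").strip()]
    best = max(
        (difflib.SequenceMatcher(None, v.lower(), name.lower()).ratio(), name)
        for v in variants for name in DASHBOARDS
    )
    return best[1]

print(best_dashboard("MCP dashboard"))  # external MCP usage dashboard
```

A production search endpoint would use real ranking rather than `difflib`, but the shape is the same: multiple candidate queries, a scoring pass, then a single chosen result handed back to the agent.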
- AGAakash Gupta
Mm.
- FLFrank Lee
But now you can see it started looking for "external MCP usage," "MCP usage." These are different search queries it was running on our search endpoint over the dashboard data. It pulled it, it identified the dashboard, and then it has a different tool for querying those charts, so it's pulling in those different chart IDs three at a time, 'cause that was the instruction I gave it. And now, once it's pulled all of that data into context, it gives me a quick overview of what some of the metrics look like. I won't spend too much time on the metrics, 'cause those are our metrics, but-
- AGAakash Gupta
[laughs]
- FLFrank Lee
... you kind of see: once again, I could have spent the 10 minutes scanning through all of the charts in my dashboard, and then if I found anything interesting, I would have had to copy-paste some of those individual data points, pull them into the bullet points I was summarizing, and think about the narrative. I'd have had to jump into my AI feedback and run a search on MCP to see if there's anything relevant. But the model, given those instructions, did it all in, what was that, a minute and a half as well? Like-
- AGAakash Gupta
Yeah
- FLFrank Lee
... that, that saved my whole Sunday again.
- AGAakash Gupta
[chuckles] So that was the promise, right? We were gonna save our whole Sunday. If I recall, I used to do this too when I was at Epic Games. PMs pretty much had to wear the analyst hat as well. So I remember, on Sundays before my Monday meeting, I'd spend like six hours going to various Tableau dashboards, waiting for them to load, copy-pasting them, interpreting them, analyzing them. How do you put together that whole workflow of, "These are my 25 typical WBR slides. Go pull those all for me"?
- 30:48 – 33:01
Ads
- AGAakash Gupta
AI is writing code faster than ever, but can your testing keep up? Testkube is the Kubernetes native platform that scales testing at the pace of AI-accelerated development. One dashboard, all your tools, full oversight. Run functional and load tests in minutes, not hours, across any framework, any environment. No vendor lock-in, no bottlenecks, just confidence that your AI-driven releases are tested, reliable, and ready to ship. Testkube, scale testing for the AI era. See more at testkube.io/aakash. That's T-E-S-T-K-U-B-E.IO/A-A-K-A-S-H. I hope you're enjoying today's episode. Are you interested in becoming an AI product manager, making hundreds of thousands of dollars more, joining OpenAI and Anthropic? Then you might want to do a course that I've taken myself, the AIPM certificate run by OpenAI Product Leader Miqdad Jaffer. If you use my code and my link, you get a special discount on this course. It is a course that I highly recommend. We have done a lot of collaborations together on things like AI product strategy, so check out our newsletter articles if you want to see the quality of the type of thinking you'll get. One of my frequent collaborators, Pavel Hern, is the Build Labs leader, so you're gonna live build an AI product with Pavel's feedback if you take this AIPM certificate. So be sure to check that out. Be sure to use my code and my link in order to get a special discount. And now back into today's episode. Here's the dirty secret about prototyping. You spend two weeks building a prototype. You validate your assumptions. Engineering loves the direction. Then what happens? You throw the whole thing away. Bolt changes this completely. When you prototype in Bolt, you're not building a throwaway mock-up. You're building real front-end code that integrates with your existing design system. So when you hand it to engineering, they don't throw it away, they ship on top of what you've built. I use Bolt every single day.
I host my LAN PM job cohort on it, and honestly, I'm up till 2:00 AM some days just vibing in the tool, having fun, and building. That's when you know a product is good, when you're using it past midnight, not because you need to, but because you want to. Check out Bolt at bolt.new/aakash. That's B-O-L-T.N-E-W/A-A-K-A-S-H. Link in the show notes.
- 33:01 – 33:54
Customer Feedback Analysis
- FLFrank Lee
Basically, there are a lot of people that have to do this aggregation and analysis manually. So you can see-
- AGAakash Gupta
Yeah
- FLFrank Lee
... this is a format that I specifically like, but you can also give the agent free-form context: "I actually don't wanna look at urgent issues and feature requests. I only wanna look at praises," or "I only wanna look at references to competitors." Just explain that in natural language, and the agent can figure out the right response. This is actually what people are saying about some of the agent products: they want more context, there are some inaccuracies, sometimes there's degradation in quality. These are all things we hear sporadically here or there in different channels. But having it all in one place, having it in a specific report, and being able to change the format if I need to, it's so easy via Claude Code. And the next thing I'll demo for you is: okay, can you convert all of these recommendations into specs and place them in the agents folder?
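Steering the feedback analysis with a plain-language category filter ("only praises," "only competitor mentions") can be approximated like this. The tagged-feedback structure and category names are assumptions for illustration; the real agent infers categories from raw text.

```python
# Toy pre-tagged feedback corpus.
FEEDBACK = [
    {"text": "Love the new agent!", "category": "praise"},
    {"text": "Quality degrades on long sessions", "category": "issue"},
    {"text": "Competitor X ships this already", "category": "competitor"},
]

def filter_feedback(request: str):
    """Keep only the categories the natural-language request mentions."""
    wanted = {c for c in ("praise", "issue", "competitor") if c in request.lower()}
    return [f for f in FEEDBACK if f["category"] in wanted]

print([f["text"] for f in filter_feedback("only show praise and competitor mentions")])
```

An LLM-backed agent replaces both the tagging and the keyword matching with model calls, but the contract is the same: free-form request in, filtered slice of the feedback out.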
- 33:54 – 39:24
Converting Insights into PRDs
- FLFrank Lee
So let's see if this one rips correctly, but-
- AGAakash Gupta
So this is that fourth workflow. Once we've gotten an insight from our feedback, customer feedback themes, let's convert it into actions or specs.
- FLFrank Lee
Yeah. So this is a live example. Typically, I actually do this in Cursor 'cause I wanna use my specific draft PRD template.
- AGAakash Gupta
Mm.
- FLFrank Lee
But you can also do this in Claude Code, because it's all tied to the same thing, right? I have this whole product repo in my GitHub, so you can navigate there, and you can navigate it in Cursor. I typically find that Claude Code's a little bit more powerful. It takes a bit longer, but the response quality is marginally better. But with Cursor, there is some benefit to having a dedicated IDE experience: doing the @ references, being able to swap between different models and test around, seeing my specific file system and making edits directly. So I always get hesitant when people say it's one or the other. I use both extremely heavily.
- AGAakash Gupta
Yeah. I pretty much use Cursor because I can use their really fast models if I want to, and then I use Claude in Cursor, which you can also do.
- FLFrank Lee
Yeah. So you can see, I think they're starting to add a few within this folder. But yeah, you can basically create a bunch of specs using one or the other, based off the templates that you have. And then, once again, I have a few MCPs connected. There's the Amplitude one. There's Context7, which makes it pretty easy to fetch docs from other repos to explain what's happening. And then I have our Linear connected if I want to push tickets. So for some of these long-running tasks where I need to coordinate a lot with the team, I'll push them to Linear so that people can track it. We're also kind of in a weird era where there are a lot of tickets I don't even push into Linear anymore. I might just take this context and message the engineer directly in Slack. Uh-
- AGAakash Gupta
Mm-hmm
- FLFrank Lee
... it just removes one step, or a bit of overhead, there.
- AGAakash Gupta
And it can even-- Slack has an MCP, so you can even do that through here.
- FLFrank Lee
Yeah. So this one takes a minute. There are like six different things, but basically it's drafting these specs and putting them into markdown files. I didn't specify the specific PRD template, but the first draft is being generated based off of this analysis and the context that I pulled in via the feedback.
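The "themes in, markdown spec files out" step can be sketched as a small render-and-write loop. The template, folder name, and theme fields here are illustrative assumptions, not Frank's actual PRD template.

```python
from pathlib import Path

# Hypothetical stub of a draft-PRD template.
TEMPLATE = """# PRD: {title}

## Problem
{problem}

## Acceptance criteria
- TBD
"""

def write_specs(themes, out_dir="agent-specs"):
    """Render each feedback theme into a markdown file in out_dir."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    paths = []
    for t in themes:
        p = out / f"{t['title'].lower().replace(' ', '-')}.md"
        p.write_text(TEMPLATE.format(**t))
        paths.append(p)
    return paths

paths = write_specs([{"title": "More Context",
                      "problem": "Users want richer agent context."}])
print(paths[0].read_text().splitlines()[0])  # # PRD: More Context
```

Because the output is plain markdown in a repo, the same files can then be opened in Cursor, pushed to GitHub, or handed to Linear, which is what makes the repo-as-product-system idea work.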
- AGAakash Gupta
Can we look at one of those and see? Realistically, do you think those are quality? Would you use those? And right now we're using Sonnet, guys, so it's a little bit faster, but it might be a little bit lower quality. Let's put it to the test. All right, so it's generating these specs. Can we take a look at some of these and see if they really hit our quality bar? Let's do it. And by the way, can you explain that branching GitHub toggle that you're doing in the top left, for people who don't know?
- FLFrank Lee
Right here?
- AGAakash Gupta
Yeah.
- FLFrank Lee
Basically, all of these changes that are happening in Claude Code or in Cursor, they're happening in a local branch on GitHub. So there's a copy of this whole repo that's on my local desktop.
- AGAakash Gupta
Mm-hmm.
- FLFrank Lee
When I feel like I'm happy with the set of changes, I can just push it directly into the cloud, like my GitHub cloud instance of this repo.
- AGAakash Gupta
Nice.
- FLFrank Lee
The same way that you'd think about a code base, like, I kind of just have my own product system that's in GitHub so that I can access it wherever. 'Cause-
- AGAakash Gupta
Nice
- FLFrank Lee
... the other thing this unlocks is: if you're on Claude Code on mobile, you can refer to your GitHub repo that's in the cloud, and then also come up with ideas and kick off asynchronous tasks as well.
- AGAakash Gupta
Oh, that's super powerful. Okay.
- FLFrank Lee
Yeah. 'Cause you're not always gonna have access to Cursor on your phone, or Claude Code and a terminal on your phone. For example, all this stuff about Claude bots, I'd be hesitant to put that onto my work laptop and try to steer Claude Code via a remote branch. But if I have all of my repo and all my context in the cloud, I can just kick off Claude Code directly within the Claude app.
- AGAakash Gupta
Nice.
- FLFrank Lee
Yeah.
- AGAakash Gupta
What's the command for somebody to set this up the first time? Duplicate my repo into GitHub or something, just simple like that?
- FLFrank Lee
I just went into GitHub and created a new repository there.
- AGAakash Gupta
Mm.
- FLFrank Lee
So when I set this up, I just followed the typical instructions GitHub has on how to clone a repo locally.
- AGAakash Gupta
Yeah.
- 39:24 – 40:35
Pushing to Linear or Code
- AGAakash Gupta
I think we said the fifth workflow is either pushing these into Linear or doing them yourself. So can you kind of show us both of those?
- FLFrank Lee
I'd be nervous to kick it off, but, like-
- AGAakash Gupta
Mm-hmm
- FLFrank Lee
... one example of something we have: we have this channel where we're basically calling out to Cursor or Claude Code, within an open channel, with instructions that we want to push to our code base. So-
- AGAakash Gupta
Mm
- FLFrank Lee
... I might take this PRD, or the specific acceptance criteria, and say, "@Cursor, in the JavaScript repo, make this change to the AI feedback page," or "Prototype this change there." And then you can spin up a background agent there. Or, depending on whether you have Claude Code set up in your terminal, this is me pointing to our JavaScript repo, so I can basically pull up Claude and push a change there. Or lastly, I have the JavaScript repo as well, so assuming I'm on the relevant branch, I would just dump that in here and say, "Create a branch for that."
- AGAakash Gupta
Mm-hmm.
- FLFrank Lee
So I don't want to kick it off. Basically, I don't want to hand-hold the actual code being-
- AGAakash Gupta
Yeah
- FLFrank Lee
... drafted. So that would be one route.
- AGAakash Gupta
Yep. So basically, guys, that's the end-to-end workflow. Starts with the deep chart analysis. You've got your automated reporting. You've investigated customer feedback themes. You've converted that into an idea, and then you take action. You might even code it or a prototype of it yourself. What is the biggest mistake people make when using MCP?
- 40:35 – 45:19
Biggest Mistakes with MCP
- FLFrank Lee
I think there are two big ones. One is just the wrong expectation setting for MCP. Sometimes people think that MCPs can do everything. The simplest way to think about it is that MCPs are easy ways for your AI to interact with external systems. They're not for complex workflows or complex tasks on their own; there's a bunch of other tools more relevant for that. So one: set the expectation that being able to pull in data is the first step. I think the second problem is that sometimes people get excited and connect a bunch of MCP servers. Some have a lot of tools, and some of them are just totally irrelevant to a given workflow or repo. And this comes back to the problem with MCP: if you have a huge amount of MCPs connected, with too many tools that are not being used, all of those descriptions about what tools are available, what their descriptions are, what formats are needed, start being provided as context to the model on a bunch of queries where they might not be relevant. That might lead to higher latency on a given response, or slightly skewed or inconsistent responses, 'cause the agent is thinking about these tools even though they're not relevant to the task at hand. So just be thoughtful about when you decide to incorporate an MCP tool. For stuff that's not as relevant, you can hide or remove those tools, typically within Cursor, Claude Code, et cetera. And if you're actively developing yourself, you also need to optimize the set of tools you want to make available in your MCP server, what they're named, making sure there's no ambiguity for the model on which tool to use, and that there are accurate descriptors and accurate instructions on what formatting is required depending on what you're using it for.
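The cost of over-connecting servers, every unused tool's description riding along as context, can be made concrete with a crude sketch. Tool names are invented, and word counts stand in for real token counts.

```python
# Pretend tool registry spanning several connected MCP servers.
TOOLS = {
    "amplitude.search_charts": "Find saved charts by keyword.",
    "amplitude.query_events": "Query raw product events.",
    "jira.create_ticket": "Create a ticket in Jira.",
    "figma.export_frame": "Export a Figma frame as PNG.",
}

def context_cost(tools):
    """Approximate context overhead: words across all names + descriptions."""
    return sum(len(f"{name} {desc}".split()) for name, desc in tools.items())

def prune(tools, allowlist):
    """Keep only the tools relevant to the current workflow."""
    return {n: d for n, d in tools.items() if n in allowlist}

analytics_only = prune(TOOLS, {"amplitude.search_charts",
                               "amplitude.query_events"})
print(context_cost(TOOLS), "->", context_cost(analytics_only))
```

This is the same lever Cursor and Claude Code expose when they let you disable individual tools per project: a smaller tool surface means less irrelevant context on every request.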
- AGAakash Gupta
What you just said there, these breaking-the-fourth-wall, behind-the-curtain insights of what it's like to be an MCP PM, have been one of my favorite parts of this conversation. So I think you're uniquely qualified to answer this question, which is that people say MCP is an unreliable or a bad standard. What do you say to that?
- FLFrank Lee
All right, I came prepared; I have a slide. So I see a bunch of criticisms of MCP. One, people have just been overhyped about what MCP can do. The second is that MCPs waste a lot of context. The third is that there are a lot of painful experiences around how people set up or have to deal with authentication when interacting with MCPs. And then lastly, and probably the most important thing, it's pretty complicated to actually configure an MCP. So those are the four criticisms I see most frequently. And I think the main response is: set the right expectations for yourself on what MCP is used for. It is by far the easiest way to connect external systems with most of the AI clients that you're using. Instead of, as a PM, having to pull forward this huge roadmap of integrations and spend a lot of engineering resources building them out, most AI-native tools at this point are exposing a lot of their tools, data, and actions via MCP. By far the easiest way to get started. The second piece is that MCPs require some amount of optimization. You need to go back and forth on the right set of tools you want to make available in your server. You need to optimize their names, and the datasets or actions you make available. And whenever you find edge cases, it's similar to the eval-driven development you see for agents: once you find those edge cases, figure out how you can optimize the tool names and the descriptors. Once you come to the right set of tools, those servers and those integrations work very seamlessly and power a huge amount of these complex workflows. I think the last thing is that most of these AI clients, Claude, ChatGPT, Cursor, everybody's converged around the same standard on MCP now. It's the standard.
Most people have some type of managed connector flow, where instead of having to deal with the manual URL setting that was in the first version of these tools, you can just click one button and those MCPs are set up for you, assuming you're logged into the right tool that you're trying to connect to. Auth is handled. So now that it's the native standard and everyone's converging on the same standard, it's actually pretty seamless to get started. And then lastly, there's been a lot of criticism about context rot when there's a huge amount of tools. Within the last month and a half, that's dramatically changed, depending on the AI client. There are pioneers like Cursor that have done experiments around dynamic tool calling: instead of having all the tools loaded every single time, they actually use RAG to find the right set of tools to call given a natural language question, so you're not using all of your context all the time. Same within Claude; they've adopted some similar standard for not using all of your context across these tools. And that's also one of the reasons they built Skills. A skill gives you instruction on a specific task: here are the instructions and the prompts. That only gets loaded into context when the model thinks it's relevant to pull that skill. So you can circumvent a lot of these problems with MCP by pairing it with the right changes on the client side, or with a skill to instruct when to use that specific tool.
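Dynamic tool calling, retrieving only the top-k tools relevant to a question instead of sending every description to the model, can be sketched with naive word overlap standing in for embeddings. Tool names and descriptions are invented for the example; real clients use proper embedding-based retrieval.

```python
# Pretend tool index: name -> keyword-style description.
TOOLS = {
    "search_charts": "find saved analytics charts dashboards by keyword",
    "query_events": "query raw product event data counts funnels",
    "create_ticket": "create linear ticket issue for engineering",
    "export_frame": "export figma design frame image",
}

def retrieve_tools(question: str, k: int = 2):
    """RAG-lite: rank tools by word overlap with the question, keep top k."""
    q = set(question.lower().split())
    ranked = sorted(TOOLS, key=lambda n: -len(q & set(TOOLS[n].split())))
    return ranked[:k]

print(retrieve_tools("find charts and dashboards for event counts"))
# ['search_charts', 'query_events']
```

Only the retrieved tools' descriptions go into the model's context, which is exactly how the context-rot criticism gets addressed on the client side.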
- AGAakash Gupta
Yeah. For my money, Skills is one of the most important releases, so we started with it, we're ending with it. Make sure you guys are learning those skills. All right, so we've talked a lot about MCP and
- 45:19 – 50:28
What Amplitude Is Shipping
- AGAakash Gupta
Claude Code. What else is Amplitude cooking up? What are the most powerful agents you guys are building and releasing?
- FLFrank Lee
So we're super excited about February seventeenth, because we're bringing agents across the whole platform. I wanted to give some context about Amplitude and its journey. Internally, we actually think about this launch as our Cursor moment. And the reason why this matters is, when we think about Cursor, they made coding accessible to so many folks that had never coded before. It was the first foray for a lot of folks into spinning up their own GitHub PRs, or pulling AI and agents into a coding workflow. It brought in so many people and raised the floor for a bunch of non-technical folks that were never able to code before. But what's interesting on the Cursor side is that when they think about product development, they're always trying to raise the ceiling of what's possible within software engineering. They made this conscious decision on how to make the most powerful agent, and what the most powerful features and set of context are that they can provide the model, so that people can code in ways they've never been able to before. So with our agents launch, we genuinely think this is our Cursor moment. There have historically been problems where folks are not super familiar with the data taxonomy, or how to navigate the Amplitude UI, or what actions and ways to improve their product are available on the platform. We've built an agent that's embedded across the whole platform. It has access to every single data point, every single action and improvement that can be taken in Amplitude, and we make it possible for people to navigate the whole product just using chat now, using this agent. So really, basically raising the floor of what's possible for them.
- AGAakash Gupta
Mm.
- FLFrank Lee
But what's even more powerful is, given how powerful these tools and actions are, we're actually seeing these 10X analysts, 10X marketers, 10X product managers that can do so much more than they used to, given some of those workflows that I showed you in Claude Code. More people are creating so many new guides and surveys, or running a bunch of new experiments, or deeply analyzing and understanding trends happening across their data, all without the same amount of manual time, and navigating that data in ways they were never able to before. So there are these power PMs, these power engineers, that have access to way more data than they've ever had previously and can accomplish and orchestrate those actions within Amplitude, but also within external places like Cursor, Claude Code, Figma, et cetera. So, jumping directly into what we're shipping. There are four parts of this launch. One is this embedded Amplitude agent, which we call the global agent. It lives across the platform. You can pull it up in a side panel in chat, and you can also navigate on any single page. It has access, similarly to Cursor, to whatever's on the page and all of the underlying tools and data we have available in the platform. The second part of the launch is what we're calling sub-agents. These are more specialized, asynchronous agents. There's a dashboard agent that auto-monitors your dashboards for anomalies and automates your reporting. We have a session replay agent that analyzes hundreds to thousands of your session replays and understands qualitatively what people are doing in your platform, what bugs people are running into, what issues they're stumbling on, what conversion blockers, and tells you exactly what people are doing.
And we built a feedback agent that pulls in all this qualitative data, cuts it by different heuristics, and tells you exactly what customers are feeling and saying. And then we have a website optimization agent that scrapes your website, understands ways to optimize your copy and different components on your site, and then recommends a bunch of different experiments you could run. What's underpinning all of this is MCP. We have MCPs provided to this Amplitude agent and these sub-agents, but we've also made all of these tools available externally. So if you wanna spin up your own bespoke workflows and agents, you have access to all of those create, read, and edit types of tools across all of the data and actions available in the Amplitude platform. And we've seen power users build super complex internal workflows. They're mixing and matching data from one specific digital analytics platform with their Tableau data, and then they're auto-piping these insights and anomalies into Cursor, similarly to how I showed you, and building a lot of these bespoke agents using our MCP. And then lastly, we want these agents everywhere you work, in Slack, everywhere. The first thing that we're releasing in February is this Amplitude embedded agent available in Slack. You can ping it. I think about this like The Big Short. You know, this is my quant. Anytime I have a question about what data needs to be pulled, what analysis I wanna run, I can just hit the Amplitude Agent directly in Slack. It's gonna run for a few minutes, and it's gonna tell me exactly that analysis without me doing any of the work. I can link back to exactly what charts it looked at, and it can also change anything within the platform.
So if I need to edit a feature flag, if I wanna run a different experiment, if I wanna launch a survey or watch a session replay, you can kick it all off in Slack and then have follow-on conversations with it.
- AGAakash Gupta
Super powerful. If people wanna learn more or contact you, where can they find you?
- FLFrank Lee
If people wanna get started, you should jump to amplitude.com. If you wanna see what's happening with this launch, go to amplitude.com/ai. And once you're in the product, regardless of what plan you're on, you can interact with our agents and set them up in Slack. You can connect to all of our data within Claude, ChatGPT, Cursor, and Claude Code using MCP. And if you wanna hit me up directly, I'm franklee on Twitter, but you can also hit me on LinkedIn.
- AGAakash Gupta
Frank, thank you so much for this masterclass in using Claude Code, using analytics, and the future of agentic analytics.
- 50:28 – 53:00
Outro
- FLFrank Lee
Thank you.
- AGAakash Gupta
All right, guys, so we've walked you through what I think is the future of PMing. We were joking off air, this is vibe PMing. Just like there's vibe engineering, vibe coding, vibe designing, all these different trends, this is vibe PMing. We walked you through five workflows using Claude Code and MCP connected to your analytics tool. And by the way, if you don't have an analytics tool that is pulling in your Gong data, your Salesforce data, your Zendesk data, et cetera, use those MCPs to pull in that data and inform your insights, create specs, even potentially code something like he showed you using a Cursor or Claude Code agent, or create the relevant tickets in Linear to get your engineering team to do it. This is the future way PMs are going to be working. If you are not in an org that is giving you this access, that is allowing you to do this stuff, you should be making the request. And if you're a product leader, you should be using this video as an example to get everything through your IT departments, because this makes PMs much more effective. When we talk about all PMs becoming AI PMs, this is exactly why. Everybody should be getting Claude Code access. Everybody should be getting Cursor access. Everybody should be getting GitHub access to the code base. This is what not just teams like Amplitude will be doing, but the Toyotas and the Fords and the UnitedHealth Groups and the Mount Sinai health systems. In two or three years, they'll be doing that too, and there's gonna be a huge advantage if you are one of the teams that adopts this sooner. So if you need more help on this, be sure to check out my newsletter, where we're gonna have a deep walkthrough of everything he presented today. Check out my other tutorials on MCP, which is with the CTO of Zapier, with Claude Code, with the full stack PM on this channel to learn more, to make sure you're comfortable with it, and send me what you built.
Share the insights so that we can continue to create better content for you. And I'll see you in the next episode. I hope you enjoyed that episode. If you could take a moment to double-check that you have followed on Apple and Spotify podcasts, subscribed on YouTube, left a rating or review on Apple or Spotify, and commented on YouTube, all these things will help the algorithm distribute the show to more and more people. As we distribute the show to more people, we can grow the show, improve the quality of the content and the production to get you better insights to stay ahead in your career. Finally, do check out my bundle at bundle.aakashg.com to get access to nine AI products for an entire year for free. This includes Dovetail, Mobbin, Linear, Reforge Build, Descript, and many other amazing tools that will help you as an AI product manager or builder succeed. I'll see you in the next episode.
Episode duration: 53:10
Transcript of episode WK0bZrS8pVs