
What AI PMs REALLY Need to KNOW in 2026 (Agents, Discovery, EVERYTHING)

Todd Olson spent 28 years in product management and built Pendo to $2.5B. He reveals why AI PM jobs doubled to 20% of all postings (and pay 30-40% more), the exact 5-layer technical pyramid to upskill from core PM to AI PM, and how to ship AI features at scale with proper evals, cost optimization, and the right product strategy.

Full Writeup: https://www.news.aakashg.com/p/todd-olson-podcast
Transcript: https://www.aakashg.com/the-complete-ai-pm-roadmap-how-to-upskill-from-core-pm-to-ai-pm-with-pendo-ceo-todd-olson/

----

Timestamps:
0:00 - Intro
1:29 - Episode Begins
3:24 - Why AI PMs Get Paid 30-40% More
6:07 - How to Upskill into AI PM
11:50 - Ad
12:54 - The 5-Layer Technical Pyramid
16:30 - AI/ML Fundamentals
23:00 - Data Pipelines & RAG
33:02 - Trace Analysis & PM-Eng Tension
40:44 - Cost & Performance Optimization
48:56 - Evals Are Your Domain
56:03 - AI Product Roadmap
1:04:16 - Live Demo: Pendo's AI Features
1:13:07 - Ad
1:14:12 - Stakeholder & Board Management
1:22:03 - Outro

----

🏆 Thanks to our sponsor:
Reforge: Get 1 month free of Reforge Build with code BUILD - https://reforge.com/aakash

----

Key Takeaways:

1. The AI PM market exploded - Last year, 10% of PM jobs were AI PM jobs. This year it's 20%. They pay 30-40% more because of scarcity and skill level. But Todd warns: "You better damn well be good and know what you're talking about if you're gonna call yourself an AI PM because we are going to interrogate the hell out of it."

2. The real requirement is production at scale - Not "I built a prototype at a 1-person startup." Hiring managers want 20,000 paying B2B customers successfully experiencing your AI feature. To get there: upskill internally at your current company by shipping AI features on your roadmap.

3. The 5-layer technical pyramid - Foundation: AI/ML fundamentals, data pipelines, prompt engineering. Middle: observability (trace analysis), cost optimization, evals. Top: product strategy, stakeholder management, leadership. You need to climb all 5 layers. Most PMs stop at layer 1.

4. RAG is table stakes - "RAG is the de facto way to build." You ingest data, create embeddings, feed them into a vector database, look up relevant context, and pass it to the LLM. Todd: "If you put too much in the context window, just like a human, you get confused. You want to give the right context."

5. PM-engineering tension is real - At startups, PMs do trace analysis. At large companies, engineering managers push back: "This is my world. I don't want some PM shadowing me." Similar to Datadog - most PMs don't have a login. Know the line. Be fluent but respect boundaries.

6. But evals are YOUR domain - Unlike trace analysis, evals are where PMs are the expert. "The PM is probably the best-suited human being to author and manage eval sets." You understand user and business needs. Engineers don't have that context. This is a must-have competency now.

7. Cost optimization will matter - Some AI companies have sub-15% gross margins. Traditional software is 70-80%. Todd: "It's not a business at sub-15%." Eventually you'll rearchitect systems because the infrastructure is too costly. Rule of thumb: when something's faster, it's cheaper (both come down to buying compute).

8. Solve hard problems, not shiny objects - Todd's test: "Are we gonna do a much better job than ChatGPT out of the box? Why would we just wrap that and slap a Pendo logo on it?" His discovery agent example: the hard part isn't interviewing customers, it's finding which customers to interview, prioritizing, and scheduling. Automate that workflow.

9. Kill bad features ruthlessly - Todd shipped features a couple of years ago that weren't great and turned them off. "Too often we hold on to something. Turn them off. Be unafraid. The more stuff in your product, the worse the experience is by default."

10. Control the narrative with boards - Don't show up with no story and get crushed with random requests. Todd: "Show them how you actually run your business. I want to see what you're looking at, not something just made for me." Think deeply about how each bet drives shareholder value.

----

👨‍💻 Where to find Todd Olson:
LinkedIn: https://www.linkedin.com/in/toddaolson/
Twitter/X: https://x.com/tolson
Company: https://www.pendo.io

👨‍💻 Where to find Aakash:
Twitter: https://www.x.com/aakashg0
LinkedIn: https://www.linkedin.com/in/aagupta/
Newsletter: https://www.news.aakashg.com

#aipm #productmanagement #pendo

----

🧠 About Product Growth:
The world's largest podcast focused solely on product + growth, with over 200K listeners.

🔔 Subscribe and turn on notifications to get more videos like this.

Aakash Gupta (host) · Todd Olson (guest)
Dec 3, 2025 · 1h 21m

EVERY SPOKEN WORD

  1. 0:00 - 1:29

    Intro

    1. AG

      Last year, about 10% of the posted PM jobs were AI PM jobs. This year, it's double that, nearly 20%. What do you think of this data?

    2. TO

      Well, what we're talking about is you're measuring job postings. That's two things. It's a company saying what type of person they want/what type of job listing is going to attract certain types of people. Here's a word of caution to all of you out there who are hearing this and seeing this. You better damn well be good and know what you're talking about if you're gonna call yourself an AI PM.

    3. AG

      Todd Olson spent 28 years in product management. Now he leads one of the top product management tool companies out there, so he has a unique vantage point to help you learn, step by step, how to become an AI product manager and what it takes to become a CPO or a CEO in AI. Todd, what should people be focusing on when it comes to the fundamentals?

    4. TO

      I think the more firsthand experience you have touching all this technology, the better.

    5. AG

      Cost and performance optimization. I think you guys are on the bleeding edge of thinking about this. What do people need to focus on here?

    6. TO

      This is a real issue that a lot of people don't think about.

    7. AG

      How do you actually build a good AI product roadmap?

    8. TO

      It starts fundamentally with solving hard problems.

    9. AG

      We're having the CEO of a two and a half billion dollar company demystify AI PM for you. What's hype? What's real? Really quickly, I think a crazy stat is that more than 50% of you listening are not subscribed. If you can subscribe on YouTube, follow on Apple or Spotify podcasts, my commitment to you is that we'll continue to make this content better and better.

  2. 1:29 - 3:24

    Episode Begins

    1. AG

      And now on to today's episode. Todd, welcome to the podcast.

    2. TO

      It's great to be here. When you say veteran, it just makes me feel old, but, uh, [laughs] but it's all good. It's all good.

    3. AG

      So, as I said, the hype is around AI product management.

    4. TO

      Yep.

    5. AG

      When I talk to anybody about the field who hasn't been in the field, they don't actually talk to me about roadmaps and strategy, the things that I used to hear about. They talk to me about AI, AI, AI. And so I was trying to demystify this myself, and what I did is I went on LinkedIn, and I searched product management jobs last August, and then I searched it this August. And I broke it down into growth product management, core product management, and of course, AI product management, and the numbers really astounded me. Whereas last year about 10% of the posted PM jobs were AI PM jobs, this year it's double that, nearly 20%. What do you think of this data?

    6. TO

      Yeah. Well, look, I think it doesn't surprise me. I guess that's the first thing I think of. What we're talking about is you're measuring job postings, so that's two things. It's a company saying what type of person they want, and what type of job listing is going to attract certain types of people. So really, if you think about it, this is a marketing game between companies and prospective employees, and both are trying to maximize both sides of this marketing game. So look, with AI, you could have just taken out PM and said everything AI, and everything you said at the onset of this would be relevant, because the truth is AI's hot. AI companies are hot. The AI economy is hot. Therefore the PM flavor of it, of course, is hot. So yeah, it doesn't surprise me. Now, the line between AI PM and other

  3. 3:24 - 6:07

    Why AI PMs Get Paid 30-40% More

    1. TO

      PMs, I think, is an interesting question. It implies, of course, some level of AI fluency: I understand the tools, I play around with them, I use them regularly, I understand maybe the technology stack and how it works. But let's be honest, all this stuff is so new. You only have so much experience, and it's changing literally every week. One of the most fascinating things is, and we'll probably talk about it, we've done two acquisitions in the last roughly 12 to 14 months, both AI companies. And one of the criteria we were looking for is we wanted to acquire a company founded post-ChatGPT, not pre. Why? Because most companies pre-ChatGPT had to rewrite their entire application. They had to relearn all their own skills. There was a sense that if you were just too early, early mind you, you were obsolete, or your skills were less relevant. Maybe that's a better way of thinking of it. So sometimes having more recency to technology is actually better for your success rate, because you're learning things for the first time. So I think it's an interesting trend. Do I think this is gonna last? I don't know. But nearly every company I talk to is doing some level of AI work, and there's no way they're only relying on AI PMs to do it. Obviously normal PMs are also doing AI. So yeah, it'll be interesting to see what wins out long-term here.

    2. AG

      To that point, normal PMs also building AI, it seems more important than ever if you are a PM to upskill into these AI features, even if there isn't an AI feature on your roadmap for the next three or six months.

    3. TO

      Well, yeah, and look, there's two sides of AI, generally speaking, for all of us. One is how we use AI to run our lives, but certainly our professional life, our work life. And two is how do we incorporate AI into the products we build to serve our end users, right? Those are two separate thoughts. Yes, if you are not using a solution to do prototyping, like a Lovable or a Bolt or a v0 or a Replit, you're gonna be left behind, because other PMs are, right? And it's really about speed, and the ability to not wait on other individuals to help get to a prototype, to go do discovery, to validate it with end users. I mean, if I'm doing market research on a market, if you're not using something like Deep Research within ChatGPT

  4. 6:07 - 11:50

    How to Upskill into AI PM

    1. TO

      to just do high-level cursory competitive analysis to give me a lay of the landscape, that is the way you should be working, period, regardless of what you're building, regardless of what you call yourself. And then there's the question of how do I use AI in the product. It's all available via APIs now. It's an API you can call, right? So I don't care what you're building, but if you're dealing with any level of text, you should be using large language models. I don't know if that makes you an AI PM, but you should be using them. You'd be kind of dumb not to. Even things like generating an email using an LLM: it's gonna be a better email than you trying to do it yourself, right? So the way we think about it is most of our PMs, regardless of what they're working on, are using AI in some regard in their customer-facing features. Does that mean every single solitary feature has it? No. Like, I was just messing around, we're shipping this dark mode for guides. You know, we have our in-app messages, and you can toggle them. I think AI actually does generate the dark-mode CSS from the existing CSS, so it is [laughs] actually AI generated. It is funny: I was trying to pick something that wasn't AI related, and it somehow magically was, because nearly everything is. And think about it, it's actually a pretty cool use case for AI. But not every feature is gonna have AI in it. And to be honest, we're not marketing it as an AI feature. It's not an AI dark mode. It's just dark mode. [laughs] It just happens that under the covers, that's one of the technologies we use, because it's just a better way to do it.

    2. AG

      Mind blown.

    3. TO

      [laughs]

    4. AG

      So many insights packed into just a few minutes there. I wanna reemphasize a couple of points for folks. You may not call yourself an AI PM. We're gonna talk about that in a second, because there might be a reason you want to call yourself an AI PM. And then the second point: even if you aren't building an AI feature, you should be thinking about whether hitting an LLM API is gonna enhance the feature you're building, even if you don't market it as such. I think that's such a phenomenal insight. So let's talk about this point you made around calling yourself an AI PM. A lot of people I talk to want to call themselves an AI PM, because it seems like AI PMs get paid more. Why is this happening? Is it simply a matter of putting that branding label on yourself? What is going on here? Why is this data showing that AI PMs are getting paid 30 to 40% more than regular PMs?

    5. TO

      Because AI's hot. The market's hot. Why are AI companies valued greater than other companies? Some of it's due to growth rate, but yeah, I think that's why. I think there is also this idea that in an AI world where everything's around labor arbitrage, we're gonna have fewer people doing more, and hence they should be paid more as well. Like, if I'm doing the work of one and a half PMs, should I be paid one and a half PM salaries? I don't know. Maybe. Maybe I can make that case. Now, my argument would be all of us should be working that way, so there shouldn't be some situation where you're somehow 50% more productive than I am. But maybe right now, for the time being, there is this disparity between folks who use AI technology and/or understand it more, and the ones who do not. But look, if you were a PM for a really sophisticated technology in the past, would you be paid more than one who wasn't? Yeah, probably. It comes down to scarcity of skill set. If I'm looking for someone who I know has used all the various AI tooling versus one who's never had that experience, am I willing to pay a little bit more for someone who has that? Yeah. Even us: we're an analytics platform, so I'll give you a different example. We have analytics PMs. These are people who know about statistics and are pretty deep technically on data. Some of them have advanced degrees in data science, and are PMs. Are they scarcer, more highly skilled, honestly more highly paid than PMs who are more UI-oriented? Yeah, I'd probably say that. So even technical PMs, I think, cost more, because often if you're a technical PM, maybe you have this alternative to be a software engineer, which has its own career pathing with it. So yeah, look: one, it's hot, so yes, it's gonna be more expensive. Two, there's scarcity because of the newness of the technology. So none of that surprised me. And does that mean you wanna call yourself an AI PM? Yeah, it probably does. But here's a word of caution to all of you out there who are hearing and seeing this. You better damn well be good and know what you're talking about if you're gonna call yourself an AI PM. Because I can tell you, if I'm gonna be paying any percent more for that skill, we are going to interrogate the hell out of it, right? Because there's so much AI washing in companies now, people calling themselves AI companies when they're not, or people saying they use AI when they're not. I am sure there's gonna be AI washing on resumes. "I'm an AI PM. I'm an AI this." They did some crappy two-hour certificate program that MIT put out, and all of a sudden that means they went to MIT, which, by the way, they didn't. They went to some state school and are calling themselves an MIT grad online. You're gonna get all of that, right? So don't be that person, because all of us can smell through it. We've seen this before. You better damn well know what you're talking about if you're gonna try to catch this wave. That's my take, at least.

  5. 11:50 - 12:54

    Ad

    1. AG

      Yeah. I hope you're enjoying this masterclass on AI product management with Todd Olson. Today's episode is brought to you by Reforge. You know that feeling when you try to prototype something with AI, and it spits out something completely generic? The problem is that most AI app builders aren't built for product teams or product managers like me and you. They're built for founders who are starting from scratch. But product teams aren't building from zero. You have an existing product, real customers, design guidelines. Reforge's new product, Reforge Build, generates prototypes that reflect your real pricing tiers, real features, real customer language, not generic placeholders. So stop fighting tools built for founders. Start using a tool purpose-built for product managers to AI prototype well, in the way that we're teaching this episode with Todd. Try it for free at reforge.com/aakash. That's reforge.com/A-A-K-A-S-H, and use the code BUILD for one free month of premium.

  6. 12:54 - 16:30

    The 5-Layer Technical Pyramid

    1. AG

      The people who are fibbing on their resume, they aren't landing these jobs. Like, all these jobs-

    2. TO

      Yeah

    3. AG

      ... involve pretty rigorous processes. And in general, what I've found is that if you wanna land one of these AI PM jobs, you need to have had production-at-scale, successful AI feature experience, and all three elements of that are key. It's not production and successful at a 1-person startup that you created last month, but production at scale, with 20,000 paying B2B customers. That's what they're looking for, right? And so for a lot of people, it's gonna be, "How do I develop that experience internally at my own company, in my current PM role, via upskilling, in order to land one of these roles?"

    4. TO

      100%, yeah. Makes sense.

    5. AG

      So that's what we wanna help you do for the rest of this video, guys. This is a high-level view of how to think about upskilling into AI features. Maybe on my roadmap right now, I can't think at the level of the example Todd just gave of tapping into an LLM to generate the CSS. I'm not at that intuitive level of building AI features yet. You need to work your way up this pyramid. You start with foundational skills, which we're gonna dive into: AI/ML fundamentals, data pipeline understanding, and prompt engineering. These are the basics. Then you move up into observability and monitoring. How do you do trace analysis? Did you know what trace analysis is when I just said that, or is only the word debugging ringing a bell? Then you need to understand debugging for AI, what production monitoring looks like, and cost optimization. Pendo has some really cool features we're even gonna show you for that. Then you move into evaluation and QA. It's funny: as I got more senior as a PM, I did less and less QA, especially once I hit group product manager and director of product management. Now I talk to those folks and they're back into it, but what are they doing? They're doing evals. And what are evals? How you A/B test different prompt-engineered versions of a fine-tuned LLM, how you look at the quality metrics and KPIs. That's the next level. And then we get to the level Todd's probably operating at: product strategy. What is your AI product roadmap? How are you managing your stakeholders? And then your leadership skills, AI ethics, and team building and culture. So this is the roadmap, everybody. Let's start with AI/ML fundamentals. Todd, what should people be focusing on when it comes to the fundamentals?

    6. TO

      Yeah. I mean, look, you need to know how it works, what it does, what it's capable of. Honestly, you just need to play around with it a lot. So I think the more firsthand experience you have touching all this technology, the better. You have this concept of token economics; we'll get to that more around costing and things like that. But you need to start understanding the trade-off decisions between using different models, and different levels of different models. When do I use GPT-5 versus going back to 4, right? What's the difference between Anthropic and OpenAI in different use cases, or even Gemini and other ones? And understanding some of the open source options and self-hosted options that are out there: privacy and security is actually a real issue when I talk to customers with respect to AI, and maybe that's another one of these things. But understanding what's being trained on, where your data's going, all those sorts of things: these are all valuable skills.
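The token economics Todd mentions come down to simple arithmetic: per-token rates times volume, compared across models. A minimal sketch in Python, where the model names and prices are made-up placeholders for illustration, not real vendor rates:

```python
# Back-of-envelope token economics: compare models under illustrative pricing.
# The model names and USD-per-1M-token rates below are invented placeholders.

PRICING = {  # model -> (input rate, output rate), USD per 1M tokens
    "big-model": (5.00, 15.00),
    "small-model": (0.15, 0.60),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request under the placeholder pricing table."""
    in_rate, out_rate = PRICING[model]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

def monthly_cost(model: str, requests: int, in_tok: int, out_tok: int) -> float:
    """Total monthly cost for a fixed request volume and token profile."""
    return requests * request_cost(model, in_tok, out_tok)

# 1M requests/month, 2,000 input tokens and 500 output tokens each:
big = monthly_cost("big-model", 1_000_000, 2000, 500)      # 17,500.00
small = monthly_cost("small-model", 1_000_000, 2000, 500)  # 600.00
```

Even with invented numbers, the shape of the trade-off is real: at high volume, routing traffic that doesn't need the frontier model to a cheaper one changes the cost by an order of magnitude, which is exactly the GPT-5-versus-4 decision Todd describes.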

    7. AG

      I think you brought up a

  7. 16:30 - 23:00

    AI/ML Fundamentals

    1. AG

      really interesting point: Gemini isn't even included here, but Gemini is a really interesting model. You should know what Gemini is useful for. In my opinion, one of the things it's really useful for is that it actually understands video. So anytime I'm building a product that needs to be truly multimodal, like I want you to input a YouTube video: I just recently built a script analyzer for my competitors' podcasts, and I built that on Gemini. So you need to understand what Gemini is useful for. Another thing you mentioned there is open source. I think it was Brian Chesky of Airbnb who just made waves talking about how they don't use OpenAI models in production. They're using a Chinese open source model, I believe Alibaba's Qwen. How do people think about that? What's your take on what Brian Chesky said?

    2. TO

      Look, I think you gotta test different models for different applications. There's performance characteristics, there's quality characteristics. Maybe you don't need the quality of GPT-5, and performance, speed, and cost become a much bigger issue. And with respect to data residency and privacy, and what you're trying to do with your customer base, that's a very relevant decision as part of this. We've had issues with data residency where certain models aren't available in certain countries, and we have various data centers across the world that we have to be cognizant of. So if I can say it's a Pendo-hosted open source model that's all within our infrastructure, already within the DPA that you've already signed, that may be easier for me than convincing a customer to add OpenAI to their DPA. And OpenAI for a while wasn't even supporting certain countries, only the Microsoft version of it, which, by the way, isn't the exact same version you're gonna get, because they weren't literally at parity with respect to the versions. These are all little details: you may come up with some cool idea, you may go build and ship it, and it may not actually apply to your [chuckles] customer base at the level you expected. And yeah, we're a huge Google shop, so that's why we know Gemini, and Gemini is easier for us to include because Google's already a provider for us, versus adding a new provider.

    3. AG

      Mm.

    4. TO

      And if the quality is good in certain use cases, which, by the way, it was in some and wasn't in others, as you were just referring to, we actually have both. And then we use Claude for a lot of code, because it seems to be the best at code generation right now. And, by the way, they're constantly changing.

    5. AG

      Yeah.

    6. TO

      So we, we test against new models all the time, so.

    7. AG

      So if you're watching this episode a year from now, all of that may already be out of date, and that's what we mean in this section: staying up to date is part of AI fundamentals. It's a new skill set for you. What are the newsletters? What are the news sources you're subscribed to? The second area is data pipelines. And I know that when I first published this and sent it to my editor, they were like, "This is the first area?" [chuckles] They were like, "What? Why is this in the foundation? Why do PMs need to know this?" I would argue it's very important. I think that RAG, Retrieval-Augmented Generation, is basically the most important thing people are building into their product features: you hook the LLM up to a vector database in order to get real-time information. Let's say Pendo's building a customer support agent. Their support content on Zendesk or wherever it is, is really good and up to date, so they need to hook the agent up to that, so that when customer support updates an article, you know, that can be reflected in real time. And so that's why this foundational understanding of data pipelines is so important. Do you agree? Should PMs really be focused-

    8. TO

      Yeah

    9. AG

      ... on this, and what do they need to know?

    10. TO

      Yeah. This is actually huge. I think RAG is kind of the de facto way to build, and it comes back, you know, on the previous slide you had context window as one of the terms. So it comes down to: how do we supply the right level of context for the LLMs to do their jobs well? A lot of times what these solutions are doing is passing everything you need in upfront, and kind of munging it right there on the fly. But it's about getting the right context to it. If you put too much in the context window, just like a human, if you give them too much context, you get really, really confused. So you wanna give the right context. The way a lot of this works is: we're ingesting data, we're creating embeddings based on the content we're ingesting, and we're feeding those into a vector database. Then when someone asks a question, we look up what context is relevant to that question that we can pass to the LLM to answer it. So it's a core way to build: understanding how it works, what the performance considerations are, when to do it. And I think one thing that's kind of implied in all this, beyond governance, is just scale. How do we get these systems fast? How does it actually work at scale? This is the thing when we're doing acquisitions of AI companies, or thinking about AI features: some of the largest web apps in the world use us. It's just gotta work at scale, and if it falls over because everything's too big, that's a problem. I don't think we have scale in there, but yeah, I think this is huge. I think this is a must-have. I totally agree with you.
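The ingest-embed-retrieve flow Todd describes fits in a few lines of code. A minimal sketch in Python: the bag-of-words "embedding" and the tiny in-memory store are stand-ins invented for illustration; a real system would call an embedding model and a vector database.

```python
# Minimal sketch of the RAG flow: ingest docs, embed them, store the vectors,
# retrieve the most relevant context for a question, build a prompt.
# The "embedding" is a toy bag-of-words vector, NOT a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    def __init__(self):
        self.docs = []  # list of (embedding, original text) pairs

    def ingest(self, text: str):
        self.docs.append((embed(text), text))

    def retrieve(self, query: str, k: int = 2):
        """Return the k most similar docs: the 'right context', not everything."""
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]

store = VectorStore()
store.ingest("Guides support dark mode via a CSS toggle.")
store.ingest("Session replay captures user clicks.")
store.ingest("Dark mode styles are generated from existing CSS.")

context = store.retrieve("How does dark mode work?", k=2)
prompt = "Answer using only this context:\n" + "\n".join(context)
```

Note that the irrelevant document never reaches the prompt: that is Todd's point about not confusing the model with too much context.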

    11. AG

      And so this is where Todd was mentioning those technical PMs getting paid more. Part of being an AI PM is being a little bit technical. And the final component of the bottom layer is prompt engineering. I think a lot of people are gonna roll their eyes at this, Todd, because they've seen so many posts on social media, "Check out this killer prompt. Check out my prompt framework." Tell me why prompt engineering actually matters.

    12. TO

      Because, again, it comes down to context and instruction. The better the context and instruction we give the LLM, the better the response we're gonna get. So I think there is a bit of a skill set to it. You know, people probably would've laughed at me if I'd said 10 years ago that there's a skill set in how I use Google Search to go look for things. But there probably was. Some people are probably more effective than others, and I think this is a similar thing. So, understanding what a good prompt looks like, what level of detail. I mean, I think this

  8. 23:00 - 33:02

    Data Pipelines & RAG

    1. TO

      is sort of a must-have as well. And it's interesting you said something earlier: in your first graph, you have this concept of a platform PM. We have platform PMs. I wonder if we're gonna see a world where there's an AI platform PM, who understands the embeddings piece, RAG, maybe some of the core fundamentals, and exposes services to other PMs, who are more domain experts, subject matter experts on a particular vertical or on certain areas. But those folks, everyone, will still have to understand prompt engineering. Because you're gonna want a subject matter expert or domain expert in a certain area to use these prompts to teach the LLM how to think about the problem it's solving, how to use the data you're passing, et cetera.
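Todd's "better context and instruction" point can be made concrete with a structured prompt: role, instructions, retrieved context, and the user question as separate labeled sections, rather than one undifferentiated blob. A minimal sketch in Python; the section labels and example strings are our own convention for illustration, not any vendor's required format.

```python
# Sketch of a structured prompt builder: separate, labeled sections make it
# obvious what the model is being told to do versus what data it's given.
def build_prompt(role: str, instructions: list[str], context: str, question: str) -> str:
    lines = [
        f"ROLE: {role}",
        "INSTRUCTIONS:",
        *[f"- {i}" for i in instructions],  # one bullet per instruction
        "CONTEXT:",
        context,
        f"QUESTION: {question}",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    role="You are a support assistant for a product analytics tool.",
    instructions=[
        "Answer only from the provided context.",
        "If the context is insufficient, say so instead of guessing.",
    ],
    context="Dark mode is enabled per-guide via a toggle in settings.",
    question="How do I turn on dark mode?",
)
```

This is where the domain-expert PM Todd describes contributes: the instructions and context encode how the business wants the problem approached, and changing them is a product decision, not an engineering one.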

    2. AG

      You guys heard it here. Everybody [chuckles] needs to learn prompt engineering, and I love this point you made about different types of AI PMs. We didn't talk too much about that, but the role is already specializing, and I think you just forecasted the future where we're gonna have AI platform PMs, we're gonna have more end user PMs. At the AI companies already, we're seeing that there's research PMs and product PMs, so that's already a very clear separation. We're gonna start to see more and more bifurcation, I think, as the role grows.

    3. TO

      Yeah. Yeah, I think so too.

    4. AG

      So we're moving up one layer. As we said, most people understand debugging, but they don't [chuckles] understand trace analysis. What do people need to know about this, and why is it important to building AI products?

    5. TO

      Yeah, look, I think if you have, um, more orchestration, you have agents calling other agents, calling tools, really understanding what's happening behind the scenes, what's getting passed from agent to agent, I think is useful. Yeah, I mean, performance... to get to high levels of performance, understanding where things break down. Sometimes it's just errors within a large chain, so understanding where the error was and how it recovers, 'cause if it tries to redo itself and fix itself, there's a lot of optimization within there. I think this is interesting. I mean, I think this is one where I do see a partnership between PM and dev. This is an area where I'm actually seeing tension amongst teams. Like, some engineering managers are like, "No, this is my world. I don't want-"

    6. AG

      Mm.

    7. TO

      "... some PM sitting next to me shadowing it. I want you to tell me what to build and why to build it, not how to build it. I'm gonna make sure it's performant. I'm gonna make sure it works well. I'm gonna make sure you hit your requirements." Which, then, the evals are gonna cover that piece. So I think this one's a question mark for me in the sense that, at a startup, yes, you'll be doing this. At a large company, there may be a little bit of standoffishness from engineering, or some division of labor that may try to exist. Maybe not. Like, if you're best buddies with your engineering leader, you're gonna sit next to him. But you've just gotta think about a team of [inhales] four to six engineers and one PM, like, how does that work with these systems? But yeah. I mean, look, first of all, more knowledge is power, so I would definitely know how it works. If I'm talking to an engineer, I wanna say, "Hey, what's the trace say?" 'Cause then I can ask the question and get the results back. But maybe you're not the one actually doing it, so.
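What "what's the trace say?" might look like in practice: a minimal sketch that walks a chain of agent and tool spans, finds where it errored, and checks whether the chain recovered via a retry. The span schema here is an assumption for illustration, not the format of any specific observability tool:

```python
# Illustrative trace-analysis sketch over a hypothetical agent chain.
# Each span records a step (agent or tool call), its status, and latency.

spans = [
    {"name": "planner_agent", "status": "ok", "latency_ms": 420},
    {"name": "search_tool", "status": "error", "latency_ms": 3100,
     "error": "timeout"},
    {"name": "search_tool", "status": "ok", "latency_ms": 900,
     "retry_of": "search_tool"},           # the chain fixed itself
    {"name": "writer_agent", "status": "ok", "latency_ms": 1500},
]

def summarize_trace(spans):
    """Report where the chain broke, whether it recovered, and the cost."""
    errors = [s for s in spans if s["status"] == "error"]
    retries = [s for s in spans if "retry_of" in s]
    return {
        "errors": [s["name"] for s in errors],
        "recovered": len(retries) >= len(errors),
        "total_ms": sum(s["latency_ms"] for s in spans),
        # Latency spent on failed calls before the retry succeeded.
        "wasted_ms": sum(s["latency_ms"] for s in errors),
    }

summary = summarize_trace(spans)
print(summary)
```

Even this toy summary surfaces the optimization Todd mentions: the failed call burned 3.1 seconds before the self-repair, which is exactly the kind of thing a PM can spot and bring to engineering.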

    8. AG

      This is where your vantage point is giving us so much insight. As a PM, you need to tread very lightly. You need to learn this skill on your own agents that you're making for your own personal productivity. But you're not expecting, "I'm gonna be the one deciding what observability platform we use, whether we buy Arize or Braintrust." You're not the person who's actually gonna be implementing Arize and then doing the majority of the trace analysis. But you need to be fluent in it, right? And sometimes maybe the feature breaks in production, everybody's off on a P0 bug, and you don't wanna be the PM who's just there helping coordinate; you wanna actually jump in and help. And that might be a point where you go in and you say, "Oh, yep, I checked the traces in Arize. These are some of the errors I'm seeing. Are you guys also seeing this?" So it's almost like you only pitch in occasionally; you're not actually doing this work.

    9. TO

      Yeah, 100%. I mean, it's just like Datadog. Most PMs don't make the Datadog decision. Some of them probably don't even have a login to Datadog. And I know no PM wants to be on the PagerDuty end, [chuckles] where they're getting called in the middle of the night. So yeah, it's an interesting one. I agree, though. If you can help out, if you can lend a hand, always do so. It always makes you more valuable as a PM. But yeah, this one's an interesting one.

    10. AG

      What are the other areas of PM-engineering tension that people should be aware of?

    11. TO

      Yeah, I think who owns what is always a level of PM-engineering tension. You know, engineers want some autonomy, and they want to be valued and skilled as well, and they wanna be able to play around and experiment with things, and I think they should have the right to do so. There's a craft to being an engineer as well, right? And we wanna respect that craft. So I do think some level of separation of concerns is valuable. And look, with AI, I think we would all say everything's sort of smashing together. But a PM's job is, by necessity, a role of influence, where a lot of people are influenced by your decision-making but do not report to you directly. I mean, that engineer has a boss, and probably a boss's boss, and that person probably made the decision on what telemetry tool you're using. [laughs] It's not even that engineer. It's probably multiple levels up, right? And so be respectful that that person has a whole chain, because the engineering organization is so much bigger and there are a lot more levels and a lot more complexity there. Um, yeah. I mean, it's interesting you're talking about, like, if there's a P0 related to something, it's the engineers that are getting pulled in and fixing it, not you.

    12. AG

      Yeah. [chuckles]

    13. TO

      You know? And I think it's useful for you to be able to see some of these things, because bugs do affect your capacity as a PM indirectly. If a bunch of bugs are coming in, it's ultimately gonna affect your innovation roadmap. So it's good to understand this balance in your head, because if you take a less quality-oriented approach, you're gonna be the one who gets affected by rework and things like that. So yeah.

    14. AG

      So wise. So production monitoring. This is another one I'm not sure. Where is the line of the PM here? How deep should they be going into these tools?

    15. TO

      Yeah, I'd say not as much, you know, unless you're a platform PM. And we like to have PMs on platform teams, but actually a lot of our engineers don't even do this. We have a whole team of ops people; that is their job. Ops, SREs, you know, and they're really, really good at their jobs. And there are a lot more considerations for how all the infrastructure runs globally, but they're the ones that are detecting this and monitoring it and managing it. Same with, say, traffic patterns. Yeah, if there's a denial-of-service attack, it's our ops team that's handling it. We're not bothering PMs with it. You know, a head of ops is calling me on a weekend, not our PM team. There's not much the PMs can even do. But here's the other consideration that I failed to mention before, but it's certainly very relevant here. By contract with our customers, only a very small number of people can touch customer-related systems and customer-related data, so they all have to be background checked. And we don't wanna make every single person at Pendo get a background check, 'cause that feels onerous, right? But the people that do this work all are, and that's why there is a separation. So yeah, if you're a tiny company, you're probably able to do this. If you're at a larger company, you won't be anywhere near it. [laughs] So.

    16. AG

      Yeah. There's this huge difference of what type of PM you are, what size company you're at, how sophisticated they are. You might even find some of the smaller companies where the SREs, the DevOps teams, they're handling this. So be very sensitive to what product management is at your company. [chuckles] We're talking about product management abstractly, but in the five product management jobs I had in my career, it was completely different at each company, and that's probably the case for every PM out there.

    17. TO

      100%.

    18. AG

      Cost and performance optimization, I think you guys are on the bleeding edge of thinking about this. What do people need to focus on here?

    19. TO

      Yeah, I mean, this is a real issue that a lot of people don't think of, and we think of it because we deal in the data world, and so there's just a lot of data and complexity. And the truth is, how you build and design systems affects your cost of goods sold, your COGS, which ultimately affects your gross margin, which ultimately affects the success of the business. Now, it's an interesting time and place, 'cause I joke about gross margin, but some of the fastest-growing AI companies in the world have, like, very unattractive gross margins right now. One could argue that it's trading tokens. You're paying tokens to them, and they're paying them potentially to Anthropic or someone else, which is a fascinating circle that we have. We have these AI apps that pass tokens to the model providers. Those model providers just go to GPUs. The GPUs just [chuckles] you know, it's a vicious circle. Um, but as we know... So

  9. 33:02–40:44

    Trace Analysis & PM-Eng Tension

    1. TO

      why should this matter? Because over time you will have to get to a rational gross margin, and we're gonna see radical innovation in this area. At some point, we're gonna get diminishing marginal returns on quality. There's only so good it's gonna have to be, where everything's gonna shift to size and performance. You know, if you listen to, like, Ali Ghodsi, he'll talk about tiny models as maybe the future instead of large models. So a lot of people think the future will be more in smaller, more tuned models, because that's the only way you're gonna hit the gross margin characteristics of a successful, viable long-term entity. Like, some of these companies are operating at a sub-15% gross margin. It's not a business. [laughs] I mean, no offense, it just isn't.

    2. AG

      It's a restaurant. [chuckles]

    3. TO

      So, um, you know, a typical software company, like a Pendo-like company, is, like, high 70s, you know? And a great company can be in the 80s. So I think that's why it all matters, and you're gonna have to whittle it down. And look, here's the good news: we've been through this before. When Pendo started, its gross margin was much worse than it is now. Why? Because we optimized more for speed of innovation than we did for cost, so that means we overspent on a bunch of infrastructure in the early days of the company, and over time it became important for us to find efficiencies, and now we actually have teams of people that do it. So this is maybe a warning for all you AI companies out there that are focusing only on speed and growth. Which is great. Do that now. Eventually, you're gonna have to worry about, like, "Ooh, we gotta re-architect this entire system. It's gonna take us a year, because the infrastructure we picked, while easy to use and really fast to market, is just too costly." And that's kind of where we've now thankfully gone through a lot of that work today. But yeah, I think everyone's gonna have to be thinking about that. And oh, I love your example, because this is often the case. Very often the path to better performance leads to cost savings, almost always. Almost always when something's faster, it's cheaper. That's just a good rule, because you're essentially buying compute if you think about it, and anything at some level goes down to compute. So there's always, like, a twofer. Now, none of us like investing in this stuff, because, like, ugh, you know? We all like building new features for customers. Let's be honest. Most PMs like growth. They're growth-minded humans. I'm a growth-minded human. Um, but it is cool when you save money and make things faster. [laughs] Very, very cool.
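The COGS-to-gross-margin chain Todd walks through can be put into a back-of-envelope calculation. All prices and volumes below are made-up illustration numbers, not Pendo's costs or any provider's actual rates:

```python
# Back-of-envelope sketch of how per-request LLM spend rolls up into
# gross margin. Every number here is illustrative.

def request_cost(input_tokens: int, output_tokens: int,
                 price_in_per_1k: float = 0.003,
                 price_out_per_1k: float = 0.015) -> float:
    """Token spend for one request at hypothetical per-1k-token prices."""
    return (input_tokens / 1000 * price_in_per_1k
            + output_tokens / 1000 * price_out_per_1k)

def gross_margin(monthly_revenue: float, requests_per_month: int,
                 cost_per_request: float) -> float:
    """(revenue - COGS) / revenue, with LLM spend as the only COGS here."""
    cogs = requests_per_month * cost_per_request
    return (monthly_revenue - cogs) / monthly_revenue

per_req = request_cost(input_tokens=4000, output_tokens=800)
margin = gross_margin(monthly_revenue=100_000,
                      requests_per_month=2_000_000,
                      cost_per_request=per_req)
print(f"cost/request ${per_req:.4f}, gross margin {margin:.0%}")
```

With these toy numbers, 2.4 cents per request at two million requests a month eats roughly half the revenue, which is exactly the "sub-high-70s" problem Todd describes: shrinking either the token counts or the per-token price is what moves the margin.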

    4. AG

      And I've noticed this typical pattern when building AI features. Certainly this was the case at Apollo when we were building the email AI writer, for instance. You can hill-climb up your evals, and then once you reach a good level... like, in the email writer example for Apollo, humans were accepting the draft we were creating at 70% without any edits. That's, like, an amazing level. We were at 20% to start, so we'd gone from 20 to 70. Then you can go in and start to cost-optimize. And so what some of our teams did, specifically AI research: they tuned the prompt so well, and found a cheaper model that they fine-tuned so well, that they were able to hit that same eval result of 70% at much lower cost. So sometimes you can go up the hill and then kinda climb down on cost.

    5. TO

      Yeah, I mean, you mentioned caching strategies too. Like, what do you cache, and what don't you cache? Do we really need to recompute, you know, some cluster algorithm every single time someone comes to a page? No. Wildly inefficient and expensive, but a lot of people do it because it's the fastest route to market. And oh, by the way, folks, for all of you using code generators, like, "Oh, I used Replit to generate my app," they don't usually take into account the cost-savings piece in these generated apps. It just sort of works. So usually all these things need to be re-architected to, you know, think differently, have caching, things like that. So...
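A minimal sketch of the caching point: compute the expensive result (here a stand-in for a clustering pass) once per data fingerprint, instead of on every page view. The names and the fingerprint scheme are illustrative assumptions:

```python
# Sketch: cache an expensive computation keyed by a fingerprint of its
# inputs, so repeat page loads don't recompute it.

import functools
import hashlib
import json

calls = {"count": 0}  # instrumentation to show how often we recompute

@functools.lru_cache(maxsize=128)
def clustered_view(data_fingerprint: str) -> str:
    calls["count"] += 1          # stands in for the expensive clustering pass
    return f"clusters-for-{data_fingerprint}"

def fingerprint(rows) -> str:
    """Stable hash of the input data; same data -> same cache key."""
    return hashlib.sha256(json.dumps(rows, sort_keys=True).encode()).hexdigest()

rows = [{"user": "a"}, {"user": "b"}]
fp = fingerprint(rows)
clustered_view(fp)   # computed once
clustered_view(fp)   # repeat page load: served from cache
print(calls["count"])
```

The design choice worth noticing: keying the cache on a fingerprint of the data means you only pay for recomputation when the underlying data actually changes, which is the "do we really need to recompute every time?" question in code form.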

    6. AG

      Mm. Yeah, so especially with anything vibe coded.

    7. TO

      Correct.

    8. AG

      So we've been mentioning evals. We've finally gotten to the topic. I think this is another one where I'd be really interested to understand, like how deep should the PMs be going? How should they be inputting into the eval process?

    9. TO

      I think more so than some of the other pieces is the short answer. I mean, I think this is a real area that people should be focused on. A lot of what I'm seeing is custom frameworks. But unlike automated test suites... you know, if you go back to automated test suites, engineers would be focusing on unit tests, just fundamentally, do these methods work; then integration tests, do these systems work together; then maybe some level of UI automation tests. You have testing teams that work alongside your PM teams, but this is a little different. I mean, this is AI grading AI, and the quality of the evals I think matters a lot, and the PM is probably the best-suited human being to author and manage these sets. So this is an area where I think it will be a must-have competency for all PMs, whether an AI PM or not. Because all of us are gonna, I do believe, be using LLMs in the future in regards to what we're building, and I think we're gonna have a lot more innovation in this area to make it easier to do. But yeah, I think this is gonna be a hot area. So...

    10. AG

      The PM is the best positioned human. I love that insight here. Like, you are the expert in the user, you're the expert in what the business needs. When we're talking about things like trace analysis or some of the other stuff, the engineers are clearly the expert, and so it's more about how you work with them. This is an area where you need to be the expert.

    11. TO

      Yes. I agree. Now, the engineers are probably gonna have to supply the frameworks. There's probably gonna have to be some, you know, harness situations set up. There's gonna be some tooling there. But yes, you are the person that understands how to do this, better than the engineer. So I would definitely invest here.
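One way the PM-authored eval set and the engineer-supplied harness could fit together, as a hedged sketch. The grader below is a keyword stand-in for what would more realistically be an LLM-as-judge or a rubric; the part a PM would own is the shape: the cases, the pass criteria, and the pass rate:

```python
# Sketch of a tiny eval harness. The cases (PM-authored) define inputs
# and what a good output must contain; the harness runs a model over
# them and reports a pass rate. All names and cases are illustrative.

eval_cases = [
    {"input": "Summarize this churned account",
     "must_include": ["churn reason", "renewal date"]},
    {"input": "Draft outreach email",
     "must_include": ["call to action"]},
]

def grade(output: str, case: dict) -> bool:
    # Stand-in grader: real systems often use an LLM judge or a rubric.
    return all(term in output.lower() for term in case["must_include"])

def run_evals(model_fn, cases) -> float:
    results = [grade(model_fn(c["input"]), c) for c in cases]
    return sum(results) / len(results)

# Fake "model" so the harness runs end to end without an API key.
def fake_model(prompt: str) -> str:
    return ("Churn reason: pricing. Renewal date: 2025-06-01. "
            "Call to action: book a demo.")

score = run_evals(fake_model, eval_cases)
print(f"pass rate: {score:.0%}")
```

Swapping `fake_model` for a real LLM call and `grade` for an LLM judge is the engineering work; deciding what belongs in `eval_cases` and `must_include` is the user-and-business judgment Todd says the PM is best positioned to own.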

    12. AG

      Prioritize this. A/B testing and experimentation, it's a classic topic. You guys helped develop this field for product managers to begin with over the last decade plus. What do they need to know about it when it comes to AI?

    13. TO

      Yeah, I mean, I don't think it's significantly different than what it normally would be. Like, you have to know what statistical significance is. You know, there's a variety of different tests. There's split tests, there's classic A/B, A/B/C tests. There are different ways to test depending on your user base. This will also depend on whether you're a B2C product or a B2B product, 'cause that'll determine, essentially, the N, the number of test subjects you're looking at. But, um, basic statistics, yeah. And you're gonna test a variety of things, just like you would test a UI or a button. You're gonna test a prompt. You're gonna test an LLM provider. You're gonna test different things, and, you know, this is basic stuff, so I don't think this is particularly complicated, but in the world of AI, it's really, really useful. What makes this more relevant is, let's say you weren't doing A/B testing and experimentation before. In a world of AI, the cost to generate a variant is much, much lower. So you wouldn't have tested as much in the past, and I think that's the interesting mindset shift. We'll probably talk about discovery a little later. I still think discovery's critically important. But maybe in the olden days you would spend a little more time in discovery

  10. 40:44–48:56

    Cost & Performance Optimization

    1. TO

      and a little less time experimenting, 'cause it was just so damn costly to generate more variants. Whereas now, hell with it. Get something out there, run a bunch of experiments, and iterate your way to success. I mean, for some products, that's gonna be an okay strategy. It just depends on your user base, but I think this is pretty interesting, so...
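The "statistical significance" check Todd mentions, for a classic two-variant split, is commonly a two-proportion z-test. A stdlib-only sketch with illustrative numbers (e.g. draft-acceptance events for an old prompt vs. a new prompt):

```python
# Two-proportion z-test for an A/B split, pure standard library.
# All counts are made-up illustration numbers.

from math import erf, sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """z statistic and two-sided p-value for proportions B vs A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF: Phi(x) = 0.5*(1+erf(x/sqrt(2))).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B (new prompt) vs. variant A (old prompt).
z, p = two_proportion_z(conv_a=200, n_a=1000, conv_b=260, n_b=1000)
print(f"z={z:.2f}, p={p:.4f}")
```

With 20% vs. 26% on a thousand users each, z clears the usual 1.96 threshold comfortably, so at a 5% significance level you'd call the lift real. The B2B-vs-B2C point maps directly onto `n_a` and `n_b`: with B2B-sized samples the same lift often isn't detectable.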

    2. AG

      There were so many core product teams I've been on or managed where we had some experiments on the roadmap, but as the realities of building things set in, we ended up launching, like, a couple experiments a quarter instead of the 12 to 15 we promised at the beginning of the quarter. I feel like with AI reducing how much time it takes to create, especially these front-end-only changes, there's really no excuse for PM teams to go entire quarters and months without experimenting.

    3. TO

      Yeah, I think... So maybe, so maybe the really hot take on this slide is less that there's something new with respect to AI and experimentation. I think what's new is if you were not doing it before, you probably have to do it now, and that's the new thing. Like, a lot of people could avoid this [laughs] in their day-to-day jobs. They didn't need to know how to do A/B testing or experimentation because they... You're right, they never had the capacity to do so. I think all of us are gonna have... That'll, that'll be a tool in all of our tool bags that we're just gonna have to like exercise more and more than we ever have before.

    4. AG

      And the good news is I made my entire career on these small little growth experiments, so they will work for you guys. Still probably the biggest dollar-impact change I ever made in my career was changing the search ghost text at thredup.com. You know? These little things can really give you a huge impact. So I think you guys will all benefit a lot from putting these little things into your roadmap with AI. Let's move into quality metrics and KPIs. How does the metrics understanding that a PM needs to have change for these AI features?

    5. TO

      Um, well, we talked a lot about the costing implications and things like that. I think, look, the leading indicators may be a little different. Like, for example, if you're doing an autonomous agent to do a bunch of work for folks, do you care if you have daily active users? Like, no, maybe not, right?

    6. AG

      Yeah.

    7. TO

      So I think what's gonna shift is, we're gonna care a lot more about outcomes. And by the way, we always should have been caring about outcomes. But I think daily, weekly active use is gonna, for some products, be less important. Now, for some products, it may be more important. Like, if you want the end user to be interacting with your agent a ton, then that obviously is a really important metric for you. But if you don't care, you just wanna get something done for the user, which is a lot of B2B products, I think it's gonna shift a lot more toward that. And one of the examples I love from this AI world and the B2B world is, you know, Fin, which is this customer experience agent, has this kind of 99-cents-per-support-ticket-closed pricing. Doesn't matter whether the human touched it or not, you know?

    8. AG

      Yeah.

    9. TO

      That's the kind of model that's super interesting to me. Now, look, not every product has something as clean as "support ticket closed." That's a very clean and unambiguous outcome. But we all have some outcomes we're trying to drive, so I think that's what we all should be thinking about. What's the goal of your product? What does success look like, and how do we make sure we're measuring that?
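The Fin-style outcome pricing can be made concrete with toy numbers (the 99 cents is from the conversation; the ticket volumes are invented):

```python
# Toy sketch of an outcome-priced metric: charge per resolved ticket
# rather than per seat or per active user. Volumes are illustrative.

PRICE_PER_RESOLUTION = 0.99  # the per-ticket price cited in the episode

def monthly_outcome_metrics(tickets: int, resolved_by_agent: int):
    """Resolution rate (the outcome metric) and the revenue it drives."""
    resolution_rate = resolved_by_agent / tickets
    revenue = resolved_by_agent * PRICE_PER_RESOLUTION
    return resolution_rate, revenue

rate, revenue = monthly_outcome_metrics(tickets=50_000,
                                        resolved_by_agent=32_000)
print(f"resolution rate {rate:.0%}, revenue ${revenue:,.0f}")
```

Note what the North Star becomes: not DAU, but the resolution rate, since both customer value and revenue move with it.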

    10. AG

      This seems like one of the ones where... that's why I didn't put any time-to-proficiency here. It's less about, like, "I need to go study something for three months just to upskill." It's actually more about applying your common sense, your logic, but to these new sets of features. And the way you improve this is by shipping those features at your company; but if you don't have access to that, think about how you would have measured a change that somebody else shipped. You know, recently Notion launched AI agents. It seemed to get a lot of hype. Maybe you tried it out. How would you measure that? Because as you walk through that process, you realize, "Oh, wow, measuring this agent isn't the same as the other features." And so I think that's the main thing here.

    11. TO

      Yeah. And I'll talk more about this, and hopefully I'll show a little bit of it. But, like, we're looking at what they're using the agent for, how good the retention rate is. We're looking at things like frustration signals right now for agents. Like, are people pissed off? How often does someone have a short, pithy one-line response like, "Just do this for me," you know? Because if I'm irritated when using something, that's a negative level of sentiment. So I think, actually, in this AI world, I don't know if we're gonna talk about it in one of the little blocks here, but I think being close to the customer is even more important. And that's why you're hearing about this forward deployed engineer, or, you know, we're talking here about Pendo forward deployed PMs, where we go on site, sit alongside our customers, and we tweak the context window. We tweak what sort of embeddings we can pull back. So go back to your core fundamentals. Of course, you have to know those in order to sit on site with a customer and tweak, but we wanna tweak. We wanna obsess over some workflow and say, "We're gonna automate this whole thing so it is flawless for you, and we're gonna send someone on site to go do it." And their success measure is successful workflow completion, taking something that would have taken this user, I don't know, hours down to seconds or minutes. That is a success measure. And how satisfied is the user? Does this actually replace you doing it yourself? I think that's where we're going as an industry. And it's kind of exciting. To me, it feels far more transformative than "I'm gonna create this page with a bunch of tables and data cells and, like, some forms and stuff," right? [chuckles] It's like, how do we make sure we obsess over this workflow and fully automate it?
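The frustration-signal idea (short, pithy one-liners like "Just do this for me") could be prototyped as a simple heuristic over session logs. The markers and thresholds below are invented for illustration; a production version would more likely use an LLM-based sentiment classifier:

```python
# Illustrative heuristic for "frustration signals" in agent chat logs:
# short, imperative one-liners containing frustration markers.
# Markers and the word-count threshold are made-up assumptions.

FRUSTRATION_MARKERS = ("just do", "no,", "that's wrong", "again")

def frustration_rate(user_messages: list[str]) -> float:
    """Fraction of user turns that look like terse, irritated replies."""
    def frustrated(msg: str) -> bool:
        short = len(msg.split()) <= 6
        marker = any(m in msg.lower() for m in FRUSTRATION_MARKERS)
        return short and marker
    return sum(frustrated(m) for m in user_messages) / len(user_messages)

session = [
    "Can you pull last quarter's churn numbers by segment?",
    "Just do it for EMEA.",
    "No, the other report.",
    "Thanks, that's exactly what I needed.",
]
rate = frustration_rate(session)
print(f"frustration rate: {rate:.0%}")
```

Even a crude rate like this, tracked per session over time, gives the negative-sentiment leading indicator Todd describes, something to alarm on long before retention numbers move.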

    12. AG

      That's huge. I love this idea of a forward deployed product manager. I wanna do a quick plug for one of yours, I believe, Dave Geline, field CPO. He has this amazing podcast. I think that's the type of thing that is what's gonna happen. So now we're moving up. I think part of roadmapping is discovery.

    13. TO

      Yep.

    14. AG

      This is, I think, probably the closest bucket I put for feature prioritization. So how do people think about this? You know, a lot of teams, they wanna just build, like, the shiniest object, right? They have the shiny object syndrome around AI that, okay, well, now that, um, Claude Skills are the new thing, now that Claude MCP is the new thing, now that AI agents are the new thing, and they're just running after each shiny object. How do you actually build a good AI product roadmap?

    15. TO

      Yeah. Look, I think it starts fundamentally with solving hard problems. This is where this whole AI washing can come into effect. I mean, there are a lot of times where I think about building something and I'm like, "Are we gonna do a much better job than, like, ChatGPT out of the box? Why would we just wrap that, slap a Pendo logo on it, and ship it to a customer?" Like, no. And so then it becomes, okay, well, what unique asset, data, context do we have that we could provide that would add something a step level above? And go back to what problem you're actually solving for end users. So, you know, some of our early AI bots, I'll show some of them. But we were talking about discovery. And what's a part of discovery that's hard? Honestly, just finding which damn customers to go interview, and prioritizing and setting up all the interviews. That's the painful part. Like, you think a PM wants to be a scheduler too? No. One, you gotta figure out who are good people to talk to, for what reasons. You need a thesis around it. Then you need to get on people's schedules. So one of our first takes is automating that entire workflow so it just gets done for you. That, to me, seems valuable. It's a hard problem, and it would take literally weeks of work to do. So that's kinda like the types of things that I focus

  11. 48:56–56:03

    Evals Are Your Domain

    1. TO

      on, is like: let's look at something that's a hard problem, it's kinda tedious, no one likes doing it, and let's really obsess there. So yeah, that's kinda how we think of all these AI features and bets. It fundamentally has to go back to that, and it can't just be AI for AI's sake. "Oh, we can redo this text." I mean, I always tell teams, if I'm just cutting and pasting something and throwing it in ChatGPT and throwing it back into our product, just let them do that versus, like-

    2. AG

      Yeah

    3. TO

      ... like, that's not a game-changing feature. So, um, yeah, I think a lot about that. And then, you know, I like what you have around tech debt here. Be willing to throw things away that aren't working, or that are already obsolete. This technology is changing so much that you may try something, and you may realize it doesn't work, and you're gonna throw it away. Do not-

    4. AG

      Yeah

    5. TO

      ... hold onto it. Throw it away. Too often we hold onto something, you know. We shipped a few features, like, a year and a half ago that weren't great, and we just turned them off. You know, turn them off. If they're not great, they don't solve a hard problem, people really aren't using them, turn them off. Be unafraid, because the more stuff you have in your product, the worse the experience is just by default, right? So I think we have to be really vigilant here about how good is this thing, what's the quality of the problem it's solving, what's the retention rate for it, and go from there. And then, yeah, I think having a point of view is really, really valuable. You have this competitive positioning, strategic technology bets. What's Pendo's AI story versus other companies? Are we just following everyone else? Everyone has an agent, we have an agent. I don't wanna follow people. I think you'll see that we've kind of taken a different approach than shipping agents per se; we have this agent mode. We see it more as a modality. Like, we're using our products differently, versus individual agents. My vision was: while some companies are talking about digital workers, like "the AI PM," I fundamentally think org charts are gonna be less important in the future.

    6. AG

      Mm.

    7. TO

      That's my vision. I think what matters in business is like series of workflows. You know, um, like how do we get something from idea to, to customer? That's a workflow, right? And how we operate across that workflow, it's gonna be a set of humans and agents working together, coding agents-

    8. AG

      Mm-hmm

    9. TO

      ... discovery agents, maybe, you know, PRD-writing agents. But it's the workflow that really, really matters. When you think about recruiting in your business, those are key workflows. And some of these things, you need humans involved. Like, would you take a job, Aakash, if you literally never met a single person face-to-face?

    10. AG

      Of course not.

    11. TO

      No. And we hear these horror stories of people hiring people without meeting them face-to-face, and it turns out that they're working, like, six jobs and what-

    12. AG

      [laughs]

    13. TO

      ... whatever, right? So it doesn't seem to pan out in that direction either, right? But would I hire an agent to sift through 1,000 resumes that I got through my inbound portal? Yeah. I don't like doing that. So again, these are all workflows; those are really what matter, and roles are gonna blend. But that's our positioning in the market against other folks. Some of our competitors are gonna create agents, and they're gonna put titles on them. I know they are, and that's fine. That's their vision for their future. Our vision's gonna be different. We're just taking a different tack, and I think customers then self-select the companies they wanna work with based on the alignment of vision, and that's really how it all works. To me, all great companies have a strong point of view, you know. And ours has always been: we're a platform first. We went very, very wide as a company very early, while everyone else went deep. But now we're probably a combination of deep and wide across different areas. And you'll see our AI strategy is very wide. These use cases, these workflows, cross-cut our solutions. What's fun is, what was defined as a product, we're kind of eliminating the lines between, which, by the way, is a really hard problem. When you had a product manager owning a product, and now you're telling them to own a workflow that cross-cuts four products, that's a different skill, different technology, different stacks you gotta learn. But that's, to me... if you solve that and nail that well, you're gonna have an amazing experience, so.

    14. AG

      Yeah. This is super messy to actually execute on the ground. There were so many insights in there, but one I would reemphasize for you guys is killing those bad features. Nothing is gonna kill adoption of your new AI feature faster than a really bad existing AI feature sitting right in users' faces. So you need to kill those features. That might mean, if you're the PM on that feature, going up to your product leader and saying, "Hey, the retention on this feature is bad enough. We're at 10% four-month retention. It's time to kill this feature." You're gonna be the one closest to that data, so you're gonna have to work it proactively. And sometimes, when you're solving these workflows, you might not have a core surface area in the product that you own and are comfortable with. So with AI, you need to be thinking next level: what are these problems, what are these workflows, what does this mean for what I should focus on? Should I really own this surface area or not? Some PMs ask that question. Most struggle with it. When it comes to stakeholder management, this is a core product skill. Everybody [chuckles] knows about it.
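The kill-threshold check described above can be sketched in a few lines of Python. This is a minimal illustration, not anyone's actual implementation: the function names and data shapes are invented, and the 10% cutoff is just the figure mentioned in the conversation.

```python
def four_month_retention(cohort_users: set, retained_users: set) -> float:
    """Share of the feature's original cohort still active four months later."""
    if not cohort_users:
        return 0.0
    return len(cohort_users & retained_users) / len(cohort_users)

def kill_candidates(features: dict, threshold: float = 0.10) -> list:
    """Return feature names whose four-month retention falls below the threshold."""
    flagged = []
    for name, (cohort, retained) in features.items():
        if four_month_retention(cohort, retained) < threshold:
            flagged.append(name)
    return flagged

# Toy data: each feature maps to (users who adopted it, users still active 4 months on).
features = {
    "ai_summary": ({"u1", "u2", "u3", "u4", "u5"}, {"u1"}),  # 20% retained, keep
    "ai_autotag": ({"u1", "u2", "u3", "u4", "u5"}, set()),   # 0% retained, flag
}
print(kill_candidates(features))  # only features under the 10% cutoff are flagged
```

The point of the sketch is that the kill decision becomes a mechanical check the PM closest to the data can run and bring to their product leader.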

    15. TO

      Yeah.

    16. AG

      Um, but the funny thing, and the reason I have it here, is that as you become a product leader, managing your investor and board-level requests matters, and you know this better than anybody 'cause you've been managing your board for so many years. How do people think about that? Because boards are demanding more and more AI features, but you don't wanna just run after every shiny AI object.

    17. TO

      Yeah. Yeah, look, I mean, the board's reacting to what you're giving them and what you're presenting, so I'd encourage you to control the narrative. Like, if you show up at a board meeting with no narrative-

    18. AG

      Wow

    19. TO

      ... yeah, you're gonna get, like, crushed with, "Why don't you do this, and why don't you do this? Have you thought about this?" I think the key thing I always focus on is, um, I have kind of a first-principles approach to boards: these are really smart people, they're here to help, we have vested interests, and we're aligned on incentives, so everything they say is valuable. So if they bring something up, "Why aren't you doing this?" or, "Have you thought about this?", my standard response is: one, if we haven't researched it, "Let's go take some time. We'll come back to you at the next board meeting." Or two, if we have, sharing what we know: "Yeah, great point. We looked at that. Here's what we found out." And we talk through those examples specifically. And look, maybe there's some bias in how we tested it, or some audience set that was slightly different than what they anticipated, but we always try to share the why. Um, the other thing is, if you're showing up to your board just looking for approval, like you just want a stamp

  12. 56:03 - 1:04:16

    AI Product Roadmap

    1. TO

      on your report card, "All looks good," you're using them wrong. Like, they're there to work for you.

    2. AG

      Mm-hmm.

    3. TO

      Like, they're getting equity in the company for a reason. So I always think about what we wanna bring the board that we want their feedback on, where we want to see what they're seeing. Because here's the other thing: a lot of them sit on other boards or are involved in other companies. So while we're all in our company, it's very easy for us to have groupthink and only think about our own world, but they are seeing a whole other world out there. Now, it's, of course, couched in their bias. We have some early-stage investors on our board who see all the Series A companies, and they ask me why I can't build Pendo with, like, 10 people, and I'm like-

    4. AG

      Right

    5. TO

      ... "It's a little more complicated than that, folks." Uh [laughs] , um, but then we have folks that are on later stage boards, which, which, you know, see, see totally different things. And, and so, so your job is, like, to come in and, and, and, uh, bring, bring areas for conversation and for feedback, and I think that, that, that's what, how I, I treat it. But, um, I think there's a, there's a... I agree, this is one of the most critical skills in product, and this is where a lot of product leaders fall down, is that they, they have a hard time elevating to that next level around how do I communicate what success look like? How do I align what I'm doing to the business objectives that we have? Like, understand where's the business going and how does your piece roll up to that? So, so, so, so critical. Um, and that, that's essentially what a, what a board needs, is like how does this, um, how is this ultimately driving growth for our business? How is this ultimately [laughs] increasing shareholder value? That's ultimately what the board needs to hear about. And so, so I think a lot of... Honestly, I think a lot, just in general, about how each one of our bet drives shareholder alignment, so that way I'm thinking the same thing they're thinking.

    6. AG

      Yeah.

    7. TO

      So, like, that's the other feedback I'll often tell people: on no board I've ever been part of, or wanted to be on, do I wanna see something made just for me, um, although most people make things just for the board. [laughs]

    8. AG

      Yeah. [laughs]

    9. TO

      But I think the lesson in this is thinking very deeply about how you manage your business and what you wanna see. And why aren't you looking at this more often? Why are you only creating it for a board? When someone comes to meet with me and says, "I created this just for you," it's like, well, how else were you managing your business? If you just created this for me, what were you looking at? I wanna see what you're looking at.

    10. AG

      Mm-hmm.

    11. TO

      Maybe it is smarter than what I wanted to see, but if it's not, we should all be talking about why we're not looking at the same stuff, right? So when I think about stakeholder management, I'm thinking: to be great at running your business, what do you need to be looking at? And I wanna see the same stuff you do. And maybe you synthesize it, maybe you summarize a tab, but what I'm testing is, are you running your business well?

    12. AG

      Hmm.

    13. TO

      And that's what the board's testing of me. Am I running my business well? Am I looking at everything? Am I focused on the right things? Um, that's ultimately the measure.

    14. AG

      That was a mini masterclass in board management. So we move up to the final layer, which I think you need to understand, and this is probably most important for people at your level, the CPO level, the higher-level people. You need to manage the overall AI roadmap to make sure that teams are given the right direction on your position on data privacy, ethics, and safety. What do people need to learn here?

    15. TO

      Yeah, I mean, well, this goes back to, again, what I'd call first principles. What do we do with data? What data's secure? What's private? One is, your company probably already has a privacy and compliance position and stance. You need to make sure that whatever you're doing in AI respects it and abides by it. You can't do something different and crazy. Um, I think understanding bias and fairness is really, really useful and interesting. Like, if I ask an unrelated question to my AI, is it fully ungated? Uh, you know, one test we like to do on our AI is we'll ask a question about who won some Olympic event.

    16. AG

      [laughs]

    17. TO

      And it should say, "Sorry, I can't help you with that," right? Uh, 'cause I've tried asking the really nefarious questions around, you know, um, very sensitive topics. But what if, just by mistake, a product manager released something that didn't have any guardrails? You know, ChatGPT was in the paper this weekend around some kid that was talking to it about suicide, and it gave him, like, suggestions, right? I don't know if you saw that article, but it was a terrible story.
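The scope guardrail Todd describes, refusing off-topic prompts before they reach the model, can be sketched roughly like this. Real products typically use an LLM or trained classifier for the gate; the keyword allowlist and the `answer_with_model` stand-in here are purely illustrative assumptions.

```python
# Topics the product's assistant is allowed to answer about (illustrative only).
ALLOWED_TOPICS = {"analytics", "retention", "dashboard", "segment", "guide"}

def answer_with_model(prompt: str) -> str:
    """Stand-in for the real LLM call; hypothetical, not a real API."""
    return f"[model answer for: {prompt}]"

def guarded_answer(prompt: str) -> str:
    """Refuse any prompt with no overlap with the allowed product topics."""
    words = {w.strip("?.,!").lower() for w in prompt.split()}
    if words & ALLOWED_TOPICS:
        return answer_with_model(prompt)
    return "Sorry, I can't help you with that."

print(guarded_answer("Who won the Olympic 100m?"))   # refused: out of scope
print(guarded_answer("Why did my retention dip?"))   # passes the gate
```

The design point is that the refusal happens deterministically in your code, so a model update or jailbreak can't silently widen the assistant's scope.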

    18. AG

      Oof.

    19. TO

      And look, we should expect some things like this with technology. I'm sure there've been Reddit threads about bad things happening with earlier technologies too. Yeah. Well, I don't wanna talk about [laughs] whether we have moderation or things like that.

    20. AG

      [laughs]

    21. TO

      Like, that's a whole other conversation. That's a loaded term, so I'm just gonna parking-lot that. But you need to understand these are all things that could happen, and how do you deal with them? And how does it work in different countries? You know, if you're working with companies in Germany or even Austria, they have workers' councils. What is allowed amongst workers' councils? 'Cause you may come up with this amazing feature that the largest economy in Europe may not be able to use. [laughs] Because they have these things that nowhere else really in the world has, and they have a lot of power. And similarly, are you working with financial services? Are you working with healthcare, HIPAA? These are all things that, if you're in those markets, you have to understand. Do they have a special posture towards AI? And this is a tough one, because you're seeing a lot of conversation around how much AI will be regulated, and right now, with the current US administration, I doubt there'll be a lot of regulation based on what I currently see. But administrations change, and the thing I always worry about is that sometimes pendulums swing too far. That's one argument some folks are making for at least a little bit of regulation now, so that when perhaps someone else comes into power, they don't completely overregulate. So maybe somewhere in the middle is the right answer. I don't know. I'm not a politician. But I'm saying these are all the considerations you have to have. And here's the thing: if you build it in a way where you can't change how it works, you are screwed. You may have to take the product off the market.
      So we build a lot of toggles, switches. All the AI we're building, we're building in a way that we can adapt fast, like on a dime. If it's hardwired in and you can't turn things off for different customers in different regions, in different locales, in different industries, you're screwed. So when you're building it, I would encourage everyone to treat it like quicksand, because it kind of is.
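The "toggles and switches" idea above amounts to gating every AI feature behind a per-customer policy check so it can be switched off by region or industry without a redeploy. This is a minimal sketch under assumed names; the blocked regions and industries are invented examples, not Pendo's actual policy.

```python
# Illustrative policy table: which regions/industries each AI feature is blocked for.
BLOCKED = {
    "ai_session_summary": {
        "regions": {"DE", "AT"},          # e.g. works-council constraints
        "industries": {"healthcare"},     # e.g. a stricter HIPAA posture
    },
}

def ai_feature_enabled(feature: str, customer: dict) -> bool:
    """Return False if this customer's region or industry is blocked for the feature."""
    policy = BLOCKED.get(feature)
    if policy is None:
        return True  # no restrictions registered for this feature
    if customer.get("region") in policy["regions"]:
        return False
    if customer.get("industry") in policy["industries"]:
        return False
    return True

print(ai_feature_enabled("ai_session_summary", {"region": "DE", "industry": "retail"}))  # False
print(ai_feature_enabled("ai_session_summary", {"region": "US", "industry": "retail"}))  # True
```

Because the policy lives in data rather than code, turning a feature off for one locale or market is a configuration change, which is the "adapt on a dime" property Todd is after.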

    22. AG

      Amazing. So we have walked people through a roadmap of how to upskill themselves. As we said at the beginning, you can't just slap an AI PM label on yourself. You need to go learn these things, but then, more importantly, you need to ship them in production at scale successfully. Can you walk us through some examples of how you guys have done this and what it looks like to ship successful AI features?

    23. TO

      Yeah, so this is Pendo. You know, basic homepage, logging in. The first thing I wanted to talk about: we were talking about how to measure success of AI agents, and we talked about evals, we talked about observability and traces. But what we haven't talked about is what the user is experiencing and how people are actually using it. And that's an area we've been investing in. So look, we have a new type of analytics that we're calling agent analytics. Let me just work into an agent. Imagine I'm a company and I have a variety of agents, some of which are customer facing, some of which are employee facing, and I just want to get data on how people are using them. I can log in, and I can see number of conversations, number of prompts, unique visitors, growth, overall retention. Um, I can see use cases, so it'll actually use topics and themes

  13. 1:04:16 - 1:13:07

    Live Demo: Pendo's AI Features

    1. TO

      and start grouping and organizing conversations. You can see, if it's a B2B product, I have customers, I have users, I have retention rates. Retention rates are, "Hey, do people who ask this type of question come back and ask it again?" Which is a pretty good indication of quality, you know?
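The retention metric Todd describes, the share of users who asked a given type of question and later came back to ask it again, can be sketched like this. The event shape `(user, topic, day)` and the function name are assumptions made for illustration.

```python
from collections import defaultdict

def topic_retention(events: list) -> dict:
    """events: (user, topic, day) tuples. Retention per topic = fraction of its
    askers who asked about that topic on at least two distinct days."""
    days = defaultdict(set)  # (user, topic) -> set of days they asked about it
    for user, topic, day in events:
        days[(user, topic)].add(day)

    askers = defaultdict(set)
    returners = defaultdict(set)
    for (user, topic), d in days.items():
        askers[topic].add(user)
        if len(d) >= 2:          # came back on a later day
            returners[topic].add(user)

    return {t: len(returners[t]) / len(askers[t]) for t in askers}

events = [
    ("u1", "billing", 1), ("u1", "billing", 8),  # u1 returns to the billing topic
    ("u2", "billing", 2),                        # u2 does not come back
    ("u3", "export", 3),
]
print(topic_retention(events))  # billing retains 0.5, export retains 0.0
```

A topic with high ask-again retention is one users found worth returning to, which is why it serves as a rough proxy for answer quality.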

    2. AG

      Mm-hmm.

    3. TO

      Uh, um, you know, I have some here with a 0% success rate, so you can see how it works. I also have this concept of rage prompts, which, just like we had rage clicks back in the replay days, is the same concept here. If I look at some of these rage prompts, you know, "Why can't you find my confirmation number?", you can click right into the conversation, see the information of what's happening, and then actually go and watch a replay as well. So you can see what the user's doing while they're doing it. This also highlights the connectedness of our platform. I told you earlier that part of our view on building any feature is taking a platform-first approach, but here's something where we have integration with AI agents. It leverages our existing install, it's capturing full conversations, and it's giving some level of success metrics down to the conversation level. And of course, you can slice and dice this, group by segments, group by accounts, et cetera. Then the other thing we have is this concept of analyzing paths. This is really powerful in that you can actually show what activity was happening prior to getting to the AI agent. I'm just gonna quickly run this. But all these AI agents, as people roll them out, live in the context of existing software. A lot of this is what we're calling hybrid applications, and I think the key will be: how do I optimize the experience across these various paths? You can see here it's running, and I'll let it run.
      Um, but the general view here is that this will give you context between traditional UIs and non-traditional ones. You can see here a very quick path, you can see how people got to it, and you can watch replays along the way to see how people traverse the application. So in a world where you're introducing a new concept, a new UI element, you want to understand: how are people getting to it? Where are they going? This is a really good way to visualize it. So that's one piece I wanted to focus on. The other piece, since we're talking about hybrid, let's jump into our dashboards, because this is also an area where it's useful to see how AI can be used in the context of broader applications. So I showed a path; these are funnels. And part of this idea is: I have a thesis as a product manager that as we introduce an AI assistant, it's going to improve conversion on certain outcomes. And we talked about outcomes earlier. I can see complete booking before the AI assistant, and it's like 9%. Now, with the AI assistant, confirmed booking is 46%. So this is a good way, and we were also talking about how to tell a story to a higher executive audience: what's the ROI of investing in AI? Well, it's the improvement in people actually confirming a booking. That's the ROI. And you can see, this dashboard also talks about, generally speaking, success for our product. The way we think of dashboards is, I want systems that are self-service for my team. If I have a question on this AI assistant, I'm not gonna Slack someone. I'm not gonna call a meeting. That's a waste of my time and their time.
      I'm gonna come here, and I can see high-level metrics: prompts, conversations, retention, some of which we saw earlier. I have a goal set up for the assistant, so I can see how we're tracking against it. Um, I have use cases, the emergent use cases we talked about. I have, you know, information if I want it about the team: the designer, the product manager, the program manager. So this is one-stop shopping. If I have a question around this, I know who to go talk with. Um, [lip smack] I have high-level us-
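The before/after funnel comparison in the demo (9% booking completion before the AI assistant versus 46% with it) reduces to a simple computation. This sketch uses invented session data and field names; the demo's figures are just the kind of output it would produce.

```python
def funnel_conversion(sessions: list) -> float:
    """Fraction of sessions that reach 'complete_booking' out of those that start."""
    started = [s for s in sessions if "start_booking" in s["steps"]]
    if not started:
        return 0.0
    completed = [s for s in started if "complete_booking" in s["steps"]]
    return len(completed) / len(started)

# Toy sessions, split by whether the user interacted with the AI assistant.
sessions = [
    {"used_ai": False, "steps": ["start_booking"]},
    {"used_ai": False, "steps": ["start_booking", "complete_booking"]},
    {"used_ai": True,  "steps": ["start_booking", "complete_booking"]},
    {"used_ai": True,  "steps": ["start_booking", "complete_booking"]},
]
before = funnel_conversion([s for s in sessions if not s["used_ai"]])
after = funnel_conversion([s for s in sessions if s["used_ai"]])
print(before, after)  # 0.5 1.0
```

The lift between the two numbers is exactly the ROI story Todd describes telling an executive audience.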

    4. AG

      Kind of a platform approach, I think, is what you're talking about. You're bringing in so many different elements here.

    5. TO

      Exactly. It's qualitative, it's quantitative. I can see weekly active visitors growing, and it actually talks a little bit about what the metric is and why it matters. Um, user adoption of the overall assistant. So this is just a good, solid PM dashboard that looks at all aspects of what's going on. And if I see something I like... Let me go back to this chart. I can actually ask a question right in line, you know, "Why did this dip?" And it'll go off to someone who can answer me. That's all built in, and it's a great way to talk actively about what's going on. Uh, retention charts-

    6. AG

      So this can become, like, your artifact that you guys are communicating about the results of, let's say, like, at a weekly meeting, and it's basically automated for you.

    7. TO

      Exactly. I mean, time to first use: how long does it take to get there? I already showed some paths and journeys; you can see that. So this is one-stop shopping for everything you need on a given feature area. That's one start to how we run our business with Pendo. But the other thing is how we released our own agents. For that, we have what we've talked about as agent mode, and you can see it's just a new way of working within Pendo. It's not necessarily a PM agent. We have previous chats, things like that, built in. But you can very easily say, "Hey, what can I do?" And it'll go in, start working, and essentially answer what it can do. And you can see here it's taken an absolutely platform approach: usage insights, adoption and engagement, customer discovery, guide and survey performance, replays, debugging. It's the breadth of the platform, everything from qualitative analysis to quantitative analysis. So I can ask certain questions, like, "Hey, can you compare two segments?" Let's compare, let's see, gold customers versus silver customers. And that's what it'll do. It'll just go in and start doing the analysis comparing these two segments. Now, look, it's taking a few minutes. It's gonna do a little bit of work. Oh, now it's gonna ask me a question. Um, let's look.

    8. AG

      Which it needs to do if it's, like, some analytics-based thing versus just hallucinating.

    9. TO

      Yeah.

    10. AG

      So that's probably a guardrail you guys built into the product.

    11. TO

      Yeah. And look, it's actually smart. It's like, "What do you wanna compare?" Do you wanna compare how they're using guides? Do you wanna compare their overall engagement, their qualitative data? You kinda need to specify. Let's look at overall activity, say, visitors. Uh, yeah. Let's say unique visitors per account.

    12. AG

      And I think this goes to the point you were making: you don't wanna just paste ChatGPT into your product. You actually sculpted the underlying LLMs to create an experience that matches users' expectations of your platform, which are that it's a very high-trust analytics and user-insights platform, so you can't just get things wrong.

    13. TO

      Yeah, exactly. And now I can go in and show you using Pendo via Claude and our MCP server. That is totally ungated: that's Claude using Pendo's APIs to try to answer some of these questions intelligently. And it'll actually get to a lot of the same answers. It's just gonna take a lot longer, because it's pulling raw data and running Python over it. It's doing a lot of really interesting work, and I think that's one of the areas that's pretty exciting, actually. Um, but, yeah. And I am wildly impatient, so I'm gonna move over to another tab just to show you faster.

    14. AG

      [laughs]

  14. 1:13:07 - 1:14:12

    Ad

    1. AG

      You know that feeling when you try to prototype something with AI and it spits out something completely generic? Then you spend hours tweaking colors, fonts, copy, and features just to make it feel like your actual product? Here's the problem: most AI app builders aren't built for product teams. They're built for those starting from scratch. But product teams aren't building from zero. You have an existing product, real customers, design guidelines, a backlog full of ideas you need to explore and validate fast. That's what Reforge Build does: AI prototyping that starts from your product. Add your customer feedback, strategy docs, and product features as context. Create reusable templates using your product design. Explore multiple variants side by side. Collaborate with your team in one place. Reforge Build generates prototypes that reflect your real pricing tiers, real features, real customer language, not generic placeholders. Stop fighting tools built for founders. Start prototyping like a product team. Reforge Build, AI prototyping built for product teams. Try it free at reforge.com/aakash. That's R-E-F-O-R-G-E.com/A-A-K-A-S-H, and use the code BUILD for one month free of premium.

  15. 1:14:12 - 1:22:03

    Stakeholder & Board Management

    1. AG

      Really quickly, though, I wanted to follow up on that MCP point, because I actually kind of made fun of MCP development a second ago, but it turns out it's actually kind of a real standard now. It wasn't just a shiny object. Have you guys been seeing big uptick of that? Has that been a really strategic feature for you guys to have built?

    2. TO

      Uh, yes. Yes, absolutely. I think MCP is gonna open up a whole world of possibilities, and I think we're just scratching the surface. We're looking at ways in which we can provide more data to MCP, and I can toggle over and maybe show you a little of this. But here's just a quick example of how easy it is for it to come back and give me an answer. You can also see our implementation, and this comes down to... We actually didn't talk about this concept of taste in PM, but if we ever talk about hiring, and how you hire for taste, I think that's one of the more interesting areas. I didn't want just a blank chat. Ours is multimodal. I can change this to a line chart. I can export it. So it is interactive, which I do think is gonna be a big part of the future: how do we have these hybrid interfaces going forward? And from there, I wanna walk over and show this concept of the customer finder. We talked about discovery as a use case, and here I just wanna show an actual chat I was doing just before we got on the phone, because I think it really highlights the power of this. I was working on this problem, and I asked it, "Hey, I wanna find a bunch of users to go speak with." Great: it identified 100 users who have visited this page, and it gives me justification: here's a user, here's an account, and here's why I picked them. Which is really powerful, because if I'm doing discovery, I wanna know why. And this was focusing on usage. I also coulda asked for people that have already provided me qualitative feedback, and it'll actually find those as well.
      But this is a really powerful way to do it. Now, what's also powerful here, as you'll see in our agent, is: what do you wanna do next? You can create a segment of them, which means you can target them later. You can download the CSV. You can actually even create a guide. So if I go here and create a guide, it's gonna prompt me for which app, and then I'll create a guide here. And if I come in here, it'll take me right to an AI guide editor, where it automatically has, "Hey, shape the future of Acme Dashboard." It'll have a call to action on it. That call to action will actually call a Calendly link that will auto-populate events on my calendar. So it's a really, really fast way: I just identified a bunch of customers, and the next time they log in, they're gonna get this piece of feedback. Of course, I can edit all this. This is all editable, et cetera. But this is a really quick way to accelerate your discovery process.

    3. AG

      [smacks lips] Wow, I like how that comes back full circle to what we were talking about, where that is one of the most important things you need to do, and this is how you speed up that workflow so that you actually get to it. [chuckles]

    4. TO

      Exactly, exactly. So, I mean, those are some, like, s- some high-level pieces. The other thing maybe worth showing you, um... Yeah, let's, let's just go back outta here. Like, we'll exit, exit out.

    5. AG

      And while you're pulling this up, I just wanna let you know we're over time, so you... I'm completely fine to go over as long as we need to, but just wanted to let you know. [chuckles]

    6. TO

      Appreciate it. You know, I think this is kind of fun to show, 'cause this is one of the areas we've invested a lot in. One of the biggest pain points people have is that there's so much interesting product insight spread across your enterprise, and how do I make sure I synthesize it and bring back the pieces that really matter? So let's say I wanna look at top feature requests, and I click this button. Let's remove the date range, and let's go at it. It's gonna go and pull data from polls I've run, Gong calls, support tickets, really from all over the enterprise, and it's gonna surface it. And I would argue one of the most painful parts of a PM job is just sifting through qualitative information. I mean, I'd often, in my past, tell PMs that one of their core competencies is going and sitting next to support to figure out what's going on. And here now, if I click this, you can see we have this feature. You can see what the source is. I can see information around it. So it's a really powerful way to go through all these sources, and here's one with more sources: you can see CSVs, Gong, internal forms, guides, portals, Salesforce. It found 152 responses and organized them for me really nicely. From there, I can automatically create and link ideas. Basically, I can push it right to Jira. So the vision is: I don't have to read a million pieces of qualitative feedback in Salesforce or in Gong, et cetera. I can pull it right back here and drive decision-making. It's another great example of how some of these AI solutions are really making product managers' lives easier.

    7. AG

      This is, like, hands down one of my favorite applications of AI, because it's something AI's really good at, especially when a tool like Pendo is purpose-built for it so it's not hallucinating or anything like that. There are so many sources of information we're getting, whether it's Gong, Zendesk, social media, Slack. If you can get AI to help you analyze those and find the needles in the haystack, you can find so many good ideas.

    8. TO

      Yeah, yeah. Yeah, well, that, that's a quick snapshot of what we're doing, but, um, yeah, yeah. I, uh... It's fun.

    9. AG

      Amazing. So this is the roadmap, folks. Check out Pendo's features if you wanna learn more about how they've built them. Now that you've heard from Todd, you can see how he thought about the vision, the safety, ethics, and guardrails, the strategy, the roadmap, and how they actually implemented those features. This is gonna be a really good exercise for you in building your product-thinking muscles, specifically your AI product-sense muscle, which I think is different from product sense. AI is non-deterministic. You have to think about safety, and about cost, which we showed you guys. There are so many new elements you need to develop. Use the toolkit we've given you in this episode. Go out there, analyze examples like this. Go try out Pendo's features. Come back and comment below what you learned. And Todd, thank you so much for delivering this masterclass.

    10. TO

      Thank you, Aakash. It was a lot of fun, so...

    11. AG

      Bye, everyone. So if you wanna learn more about how to shift to this way of working, check out our full conversation on Apple or Spotify Podcasts. And if you want the actual documents that we showed, the tools and frameworks and public links, be sure to check out my newsletter post with all of the details. Finally, thank you so much for watching. It would really mean a lot if you could make sure you are subscribed on YouTube, following on Apple or Spotify Podcasts, and leave us a review on those platforms. That really helps grow the podcast and support our work so that we can do bigger and better productions. I'll see you in the next one.

Episode duration: 1:21:53


Transcript of episode C9hL_4Hrr8E
