Aakash Gupta
The #1 Skill PMs Need in 2025: AI Product Discovery Masterclass by World's Leading Authority
55 min read · 11,069 words

- Aakash Gupta
What does discovery look like in the age of AI? Does it change everything or does it change nothing?
- Teresa Torres
You know, I've been getting asked a lot, like when delivery is free, do we still need to do discovery? And I actually think when delivery is free, discovery becomes more important.
- Aakash Gupta
In today's episode, I sat down with Teresa Torres, the legendary author of Continuous Discovery Habits. This is a book that I myself have read multiple times and marked up multiple times. So many PMs are doing customer interviews, yet their products and their features fail. Why? She has worked with over 17,000 PMs in over 100 countries, so she brings the insight you need to improve your discovery, not just for regular features, but for AI features too.
- Teresa Torres
Here's the challenge I see with prompt engineering. We all have experience like chatting with ChatGPT or Claude, and we're in a conversation. If we get that first prompt wrong, we can immediately refine. But when you're building a product, the prompt can't be refined by you. Once it's live in your product, there's no refinement. It's a one shot.
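To make Teresa's point concrete: in a product, the prompt is a template frozen at ship time, not something the user can iterate on. Here is a minimal sketch of that idea; the function names, prompt wording, and use case are hypothetical illustrations, not from the episode.

```python
# A fixed prompt template, frozen when the product ships.
# Unlike a live chat session, nobody can rephrase it after seeing a bad answer.
SUMMARIZE_PROMPT = (
    "Summarize the following support ticket in two sentences, "
    "preserving the customer's stated problem:\n\n{ticket}"
)

def build_prompt(ticket_text: str) -> str:
    """Every user request flows through the same frozen template."""
    return SUMMARIZE_PROMPT.format(ticket=ticket_text)

# In ChatGPT or Claude you would refine your wording after a bad reply;
# here the only way to "refine" is to ship a new version of the product.
print(build_prompt("App crashes whenever I upload a photo."))
```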
- Aakash Gupta
So what are the signs that PMs are doing fake discovery?
- Teresa Torres
Nothing in their backlog changes. They don't kill any ideas. There's a lot of discovery theater out there.
- Aakash Gupta
Before you go, Teresa, I have to ask, how big is the business of Teresa Torres?
- Teresa Torres
[laughs] Yeah, so I'm a [beep]
- Aakash Gupta
Teresa, welcome to the podcast.
- Teresa Torres
Thanks for having me. I'm excited to do this.
- Aakash Gupta
As I was saying off air, you are on my S tier of guests along with Marty Cagan. You are the two guests I wanted most when I dreamed of starting this podcast, and that's because I think you have probably advised more PMs on discovery than anyone else in the world. What would you say the number is at now?
- Teresa Torres
Yeah, it kind of depends on how we count. So when I was coaching teams directly, I would work with about 30 teams a year. I did that for over a decade, so probably over 300 teams. And what that means is weekly calls for multiple months, so that was pretty in-depth. Through the Product Talk Academy, we have over 17,000 students, which is pretty mind-blowing, and they come from over 100 countries.
- Aakash Gupta
Wow. 100 countries.
- Teresa Torres
Yeah.
- Aakash Gupta
I didn't even know PM was practiced in 100 countries. So you have really seen the whole world of product discovery. And it's interesting because so many PMs are doing customer interviews, yet their products and their features fail. Why?
- Teresa Torres
Yeah, this is a complicated topic. I think there's a lot of reasons for this. I think the primary reason is that we're not that good at interviewing. A lot of teams go into interviews with the intent of exploring their solution and getting feedback on their solution. That's not really the best way to get feedback on our solutions. Our goal in our customer interviews should be to learn about our customers. Even if we know that's the goal of the interviews and we don't talk about our solutions, we tend to ask really unreliable questions like, "What do you like and dislike about different things?" or "Tell me about your experience broadly." So one of the things that I teach, and I introduce this idea in the book and we teach it through all of our programs, is story-based interviewing: how do I talk to you and collect a reliable story about your past experience? So I learn about what you actually do, not what you think you do, not what you aspire to do, but what you did recently in reality, so that I can make sure I'm building a product that fits in your lived world.
- Aakash Gupta
So it sounds like people are asking too many hypothetical questions: "Here's a prototype, would you like to use this?" What would be the better way for them to approach that conversation?
- Teresa Torres
Yeah, so, like, let's look at this in tiers. So the first is a lot of people present a solution and say, "Would you use this?" That's terrible.
- Aakash Gupta
[laughs]
- Teresa Torres
Unreliable feedback. We're not good at predicting our future behavior. Also, humans wanna be nice, so we're gonna say, "Yeah, of course I would use that." And even if we think we're being honest, we're optimistic about our future time and about what we might do in the future, so we might genuinely think we're gonna use it, but it doesn't mean we are. There are actually better ways to test our solutions. I really like assumption testing when we're evaluating solutions, which is a whole different activity from interviewing. We can get into that if you want. What I like to use interviews for is: let me just learn about you. Then there's sort of the next level: people learn, "Okay, I should ask an open-ended question," so they'll say, "Tell me about your experience with my product." The challenge with that type of question is that it is open-ended, and I might learn a lot about you, but what I'm gonna learn is what you think you do, not necessarily what you actually do. To fix that, I wanna ask you, "Tell me about the last time you used the product," or, even better, "Tell me about the last time you solved the problem the product was designed to solve."
- Aakash Gupta
That is exactly what I've learned through hard trial and error and reading and rereading [laughs] this book. It actually works, folks. And you briefly mentioned assumption testing. Can you give us the 30-second overview of what that is?
- Teresa Torres
Yeah, so it's this idea that we tend to fall into the trap of big idea testing, which means we have to do all the design up front and then usability test it, or we have to build it and A/B test it. Those strategies are great to have in our toolbox, and we want to do them eventually. But the challenge is that when we're doing them in discovery, we're learning whether the idea would work only after we did all the work. I prefer to learn if something's gonna work or not before I do all the work. The key to that is to break the idea down into its underlying assumptions, so what needs to be true in order for this idea to work, and then to test those individually. We can tend to test assumptions much quicker. We don't have to do all the design work. We certainly don't have to do all the engineering work. And we can start to collect data on whether an idea would work or not before we do all the work.
- Aakash Gupta
Okay. And I think this comes together, although you can correct me if I'm wrong, in the Continuous Discovery Habits system. Can you break down what exactly is continuous discovery?
- Teresa Torres
Yeah. So this is a framework I developed to help newly empowered product teams figure out what in the world to do. Let's talk about that for a second. I think historically, product teams have been asked to deliver specific features, usually by specific dates. We ta-
- Aakash Gupta
[laughs]
- Teresa Torres
We call these roadmaps. Sometimes teams are creating those roadmaps themselves, and they've had to deal a little bit with the question of what to put on the roadmap. But for a lot of product managers, their stakeholders are putting things on their roadmap, and their job is literally to just deliver. And what I saw happen is that companies started to shift from an output focus to an outcome focus. So now they're saying, "Okay, teams, we get it. The future's uncertain. We don't know what you should build, but you need to reduce churn, or you need to increase retention, or you need to drive engagement." And these teams are like, "What? You've always told me what to build. I don't know how to do this." So the Continuous Discovery habits are about how we answer these evergreen, wide-open challenges like engagement and retention, or even customer acquisition.

It starts with having a clear idea of what your outcome is. That's typically a metric. It can be something like "increase engagement." Usually it's more specific than that; we're defining the types of engagement we want. But that's good enough for now. Then the habit on the left here is interviewing, and we're interviewing week over week with the goal of understanding our customers. We're not testing our solutions. We're not evaluating our solutions in our interviews. We're really trying to understand who we are building for, and we're setting up a good interviewing cadence.

The visual in the middle, I should have started there, is an Opportunity Solution Tree. This is a visual I designed to help people keep track of their messy discovery work. It starts with an outcome at the top. We're interviewing to uncover the opportunity space. Opportunities are unmet customer needs, pain points, and desires. So as we interview, as we learn about what people did in the past, we're gonna uncover friction, and we're gonna capture all of that in the opportunity space. Eventually, hopefully not after too long, we're gonna choose a target opportunity. I really am an advocate of compare-and-contrast decisions when evaluating solutions, so we're looking at multiple solutions for the same target opportunity, and then we're breaking those solutions down into their underlying assumptions and using assumption testing to evaluate which one looks like a winner.
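For readers who think in code, here is one way to sketch the Opportunity Solution Tree Teresa describes as a data structure: an outcome at the root, opportunities (and nested sub-opportunities) beneath it, multiple candidate solutions per target opportunity, and assumptions hanging off each solution. The class and field names are illustrative assumptions, not her notation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Assumption:
    statement: str                 # what needs to be true for the solution to work
    holds: Optional[bool] = None   # None until an assumption test produces evidence

@dataclass
class Solution:
    idea: str
    assumptions: list[Assumption] = field(default_factory=list)

@dataclass
class Opportunity:
    need: str                      # an unmet need, pain point, or desire from interviews
    children: list["Opportunity"] = field(default_factory=list)
    solutions: list[Solution] = field(default_factory=list)

@dataclass
class OpportunitySolutionTree:
    outcome: str                   # the metric at the top, e.g. "increase engagement"
    opportunities: list[Opportunity] = field(default_factory=list)

# Multiple solutions under one target opportunity support the
# compare-and-contrast decisions Teresa advocates.
tree = OpportunitySolutionTree(
    outcome="Increase weekly engagement",
    opportunities=[
        Opportunity(
            need="I forget to come back after my first session",
            solutions=[
                Solution("Weekly digest email",
                         [Assumption("Users open product emails")]),
                Solution("Calendar integration",
                         [Assumption("Users will connect their calendars")]),
            ],
        )
    ],
)
```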
- Aakash Gupta
So this diagram is timeless. I still refer people back to the core Continuous Discovery loop. But how has AI changed Continuous Discovery?
- Teresa Torres
I think it depends on who you ask, and I really am conflicted about this. So let's talk about two different paths of how AI impacts this. There's the path of how AI affects how I do my day-to-day job, and then there's the path of how my job changes when I'm building AI products. On the first path, I love AI as a thought partner. Even if I'm defining outcomes, I might be chatting with Claude or ChatGPT about my outcome and how I might better frame it or how I could better measure it. It's probably not telling me what my outcome is, because it needs a lot of business context for that. I mean, now people set up projects with all their business context, and maybe it could help with that. But typically, our outcomes are coming from our executives, so maybe we're using it to refine that activity. I do know some teams that are starting to experiment with having AI interview their customers for them. I actually think this ... We live in a world where this is very possible. I'm a little bit concerned about what it says to our customers: we really wanna learn about you, but not enough to spend our own time doing it. I also have some concern about losing one of the benefits of interviewing beyond what we learn from our customers: just the act of having firsthand exposure to our customers helps us build empathy for them.
- Aakash Gupta
Yeah.
- Teresa Torres
It helps us see the gap between how we think and how they think, and I don't think reading a transcript is gonna get you that benefit. So I'm still a big advocate of humans talking to humans. One area I'm really torn on is synthesis. I know a lot of teams are starting to use AI for synthesis. Here's what I like about this: I know a lot of teams that do no synthesis. They conduct interviews, the notes go into a folder, and they never look at them again. If that's what you're doing, I think AI can help. My caution is that I've experimented a lot with my synthesis versus Claude's synthesis or ChatGPT's synthesis, and it misses a lot. You have to work really hard to get it specific enough to really identify opportunities. I've been running these experiments, and I will probably release tools in this space eventually. It's, like, 60 to 80% good, and I worry about what we lose in that 20 to 40%. I also worry about what we lose when humans aren't in the data. I think our brains change when we spend that much time in our data.
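As one illustration of the synthesis workflow Teresa is torn on, here is a hedged sketch of asking a model for a first-pass list of opportunities from a single interview transcript, using the OpenAI Python SDK. The prompt wording and model name are assumptions; the point is that the draft still needs review by a human who was in the room, given her estimate that this kind of synthesis is only 60 to 80% good.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_opportunities(transcript: str) -> str:
    """First-pass synthesis: list candidate opportunities from one interview."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You synthesize customer interviews. List unmet needs, "
                    "pain points, and desires, each grounded in a direct quote "
                    "from the transcript."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# Treat the output as a draft: a teammate who conducted the interview
# should review it, since the model can miss the opportunities that matter most.
```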
Episode duration: 56:30