EVERY SPOKEN WORD
35 min read · 7,323 words
- 0:00 – 0:50
Intro
- AGAakash Gupta
The hardest new PM interview in 2026 is the AI product design interview. Whether you're interviewing at the top-paying AI PM companies like OpenAI, Meta, and Anthropic, which pay over $500,000 a year, or at the regular AI companies like Figma and Adobe and Miro, which pay around $300,000 a year, or even the lower-tier AI companies like Cisco and IBM, which pay $200,000 per year, these are great jobs, and they all ask AI product design questions. How would you design the new version of ChatGPT? How would you design an AI for space use cases? How would you address the 50% negative feedback from non-technical users on Claude Code's code generation? These are the types of questions that they're asking, and you need a skill set to answer them that isn't the same as typical product design. And on YouTube, there
- 0:50 – 2:24
Five Types of AI Product Design Questions
- AGAakash Gupta
are tons of mock interviews about product design, but zero about AI product design. So in today's episode, we are presenting you the very first product design for AI mock interview on YouTube. We are going to show you exactly how to answer this question, and I have brought along my friend Bart, who is one of the co-instructors of my Land PM Job cohort. This cohort is a three-month program with three live sessions per week and biweekly one-on-ones where we help people land PM jobs. In the first cohort, we've had tons of people interview at OpenAI, Anthropic, Meta, Google, Microsoft, Netflix, all the top companies, and we've learned, what do those companies ask? What are they looking for? What is a successful response versus a failing response? And so in today's episode, you're gonna get all the sauce you can't find anywhere else, how these companies interview, and most importantly, you're gonna get a full mock interview of a product design question.
- SPSpeaker
Hi, Dr. Bart here, and together with Aakash, we are co-instructors on landpmjob.com.
- AGAakash Gupta
So there are five major types of product design interview questions you might get: product improvement, new product design, platform API, constraint-based, and UX/UI for AI. All five of these are important to practice, and they're pretty different. In our experience in the cohort, most people are failing on new product design. And so we have been advising a lot of people on OpenAI. I've been advising people on OpenAI even back to 2024 and 2023, and OpenAI has this one question they ask all the time. They're still using it years later. So in today's video, we are going to cover that question that OpenAI asks.
- 2:24 – 3:02
The OpenAI Question
- AGAakash Gupta
So without further ado, into today's mock.
- SPSpeaker
You ready? Let's go. Aakash, how are you?
- AGAakash Gupta
Awesome. Thanks for having me here at this interview.
- SPSpeaker
Perfect. So I have only one question for you, and if you have any follow-ups, you have any doubts, I'm here, though I'm mostly interested in your answer. So tell me, if I asked you to design a new AI product that will help communicate with pets, how would you approach this topic, and what would you design?
- AGAakash Gupta
Wow, that's fascinating. Can I take a minute to just structure my
- 3:02 – 4:09
Clarification Questions
- AGAakash Gupta
thoughts on this?
- SPSpeaker
Of course, even two if you need.
- AGAakash Gupta
So I actually wanted to start with a couple clarification questions if that's okay with you.
- SPSpeaker
Go ahead.
- AGAakash Gupta
Yeah, I was starting to think about, oh, should we go talk about the users next? But I realized I wanted to clarify a couple things. Are we focused on a specific pet type, or should I just pick one?
- SPSpeaker
Pick one, or pick plenty. We feel that pet communication is a problem that hasn't really been tackled with technology, not with any mass adoption. So we believe, for the sake of this question, that there's a powerful market here which just waits for the right product to solve the right problem.
- AGAakash Gupta
Is this a standalone product or integrated within an existing OpenAI offering?
- SPSpeaker
Your pick.
- AGAakash Gupta
So what's our success metric? Engagement, revenue, or something else?
- SPSpeaker
Actually, our mission in OpenAI is to, is to move towards AGI. So if this moves us towards that goal, that's the only metric we are really interested in.
- AGAakash Gupta
Fascinating. So I think I have a lot of empty white space here. Let me go ahead and build out the rest of the structure.
- 4:09 – 4:47
Structuring the Framework
- AGAakash Gupta
Can I take a minute?
- SPSpeaker
Of course.
- AGAakash Gupta
All right. Thanks for that. So I just created a basic structure for us here, which I'm thinking through, which is let's focus on users, maybe pick a specific group or segment of users that we really care about. Let's come up with their problems deeply. Let's think about some creative solutions. Let's hopefully pick one or two or prioritize them. Then let's try to actually design one so that we get into the product design portion of the interview, and then end with the progress towards AGI and any risks that there might be. Does that sound good?
- SPSpeaker
Perfect. Let's move on. Blow me away.
- AGAakash Gupta
All right, so let's jump into
- 4:47 – 6:13
User Segmentation
- AGAakash Gupta
users, and let's just think about what are the segments of users that we care about, right? So I'm gonna start with one interesting segment, right, is kind of the new pet owners.
- SPSpeaker
Mm-hmm.
- AGAakash Gupta
That could be one interesting segment. If we think about them, they're learning to understand their pets' needs, right? Probably pet owners with behavioral issues. I'm thinking, like, any sort of issues really, but, you know, training related, aggression, anxiety, separation problems. Probably pet owners of, uh, aging or sick pets. We would also have pet owners... Let's see. Who else? Professional trainers. Probably multi-pet households might have slightly different needs as well. Then we'd probably have, like, pet considerers, people who haven't even thought about getting a pet. And then I think another interesting segment might be, like, sort of like health-conscious or sort of crunchy pet parents. So I think these might be some different archetypes of potential buyers. We, we need to think about really two categories now that I think about this. So I'm gonna put a box around this. I'm gonna say, okay, this is buyers, right? We also probably need to think about what types of pets we're talking about, right? So that's another really important segment.
- SPSpeaker
You tell me. Wouldn't it be better to focus on the problems to solve first rather than on, on users?
- 6:13 – 7:20
Selecting Pet Types
- AGAakash Gupta
Um, I think we need to figure out what sort of users we care about, right? And part of this is even just understanding that, oh, we need to think about what the pets are too. That was an insight we had. So I think I always like to start with the people involved.
- SPSpeaker
All right.
- AGAakash Gupta
I think dogs, cats, fish, I think these are probably representing, you know, seventy-five to eighty percent of the pet market. And then we also would have sort of like all of our exotic animals or our other categories, uh, depending, kind of, on your culture. But, you know, probably like, I think one other to maybe put in here is like horse or farm-related groups, but I, I maintain that this is probably twenty percent. So I think in terms of focus area, let's focus on the bigger market, right? The dogs, the cats, probably like these are the ones that I think people spend a lot of money on and matter the most to people and could be like good sort of MVPs. So maybe we start with dogs and cats. And then in terms of the user groups, I think all of these user groups are pretty interesting. Maybe we kind of use that knowledge of these user groups as we brainstorm problems. Does that sound good?
- SPSpeaker
Perfect.
- AGAakash Gupta
Awesome. And I would say that,
- 7:20 – 8:01
Picking the Target Buyer
- AGAakash Gupta
you know, one little thing I will add here is, you know, it sometimes is good to pick a buyer, and so we did have this goal, I guess, of picking one. If I had to pick one, for me, the group that I think is probably gonna be the most interesting in terms of like motivation as well as market size is gonna be pet owners with issues, right? Because they're gonna be facing so many specific problems. They're probably already spent tons of money on trainers and vets and behaviorists, and there's probably a measurable outcome, like a behavior change that we can create, and there's also that emotional connection they have that we can drive retention. So now that I think about it, we'll brainstorm for all the buyers, but we'll focus on pet owners with issues. Does that sound good?
- SPSpeaker
You tell me.
- AGAakash Gupta
Let's do that. So let's
- 8:01 – 9:18
Problem Brainstorming
- AGAakash Gupta
go into the problems, and I'm gonna keep a fast pace 'cause I really wanna make sure that we get to the design section here.
- SPSpeaker
Mm-hmm.
- AGAakash Gupta
So I'm just gonna brainstorm a little bit. Like, I guess one of them is I don't know why my pet is doing something. I guess a really common use case would be I don't know why my pet is doing something. Maybe they're peeing on the carpet or whatever it is. Maybe they're barking at nothing or the cat is attacking people's feet. You know, they're, they're not gonna have an interpretation layer that this product could really solve. I guess they may not know if they're making it worse or better. I think another thing is that professional help is just expensive, right? And it's also not that accessible versus something we could build could be super accessible. I guess that there's also a lot of contradicting advice out there. Sometimes it's hard to catch the behavior when it happens. For instance, they peed, but I wasn't there when they peed on the rug. Um, sometimes people are worried about, like, is the diet causing problems? You know, I see a lot of people just buying, like, all sorts of crazy expensive stuff for their pets. And then, I don't know, maybe sometimes people think, like, "Is this pet even right for me?" Like, "Am I able to handle this pet or not?" These would be some of the big problem areas. What do you think of that? Are there any others I should consider?
- SPSpeaker
No, this feels like a complete list to me. Carry on.
- AGAakash Gupta
So I guess we
- 9:18 – 10:09
Prioritizing Problems
- AGAakash Gupta
need to figure out, like, which ones do we wanna focus on in terms of our solutions. So I'm just thinking about these. I feel like there's this sort of foundational problem. If we think about this almost in terms of like a, a pyramid or something like that, like at the foundation is you need to know why they're doing something. So that's like number one, if we number these. And then you also need to get like some foundation of when it happened. So this is like the facts, right? That's like super important even before we get to like advice and is it right for me. So based on that, I would categorize the other five problems higher up in the pyramid, and I would focus on one and five. Does that make sense?
- SPSpeaker
Yeah. The only thing that's missing is: how do we connect that to the AGI goal?
- AGAakash Gupta
True. Um-
- 10:09 – 11:45
Connecting to the AGI Goal
- SPSpeaker
If that's not addressed, then it feels like we just ignored an important part and wandered off into creative solutions for the problem at hand, forgetting the main goal. If I were to put words in your mouth, I would focus on the connection aspect. So it's good that we're helping with behavioral issues, because this is a unique connection with a simpler organism, and understanding how to solve these problems effectively on a large scale perhaps allows us to build empathy in the AGI for subjects that aren't human. And, uh, though we are working on knowledge that's proven, that we scraped from the internet, and it's not unethical, it feels like a good stepping stone to then create something that would do the same for humans, who are way more complex than dogs and cats. And this is how we do-
- AGAakash Gupta
Yeah
- SPSpeaker
... the AGI connection.
- AGAakash Gupta
Nice. I like it.
- SPSpeaker
And then we-- And actually, then you can use that list and the AGI, uh, goal to prioritize the problem to solve.
- AGAakash Gupta
Yeah. Okay. So the other element in terms of the pyramid, of course, that we wanna think about is what is, like, the progress towards AGI? And I think that getting the facts right is the first step. It's like we need to be able to capture the data before we can build the knowledge layer on top. And it's also gonna further just improve our empathy with animals so that we can improve our empathy with the AI. So I would focus on one and five. Does that make sense?
- SPSpeaker
Yeah. Perfect.
- AGAakash Gupta
So in terms of the solutions, let me
- 11:45 – 15:25
Solution Brainstorming
- AGAakash Gupta
take a moment and just brainstorm these, and I'll come back to you. Does that sound good?
- SPSpeaker
Go ahead. Interview aside, Smart Collar sounds like the first AI hardware that actually makes sense. That's worth a patent or something.
- AGAakash Gupta
All right, I took a couple minutes there just to come up with this, 'cause I really like this. So I came up with, like, seven different solutions. So I'll walk you through these if that sounds good, and then we can prioritize one. So I came up with this idea of, like, a smart collar. [chuckles]
- SPSpeaker
Ah, that's interesting. Go. Tell me more about it.
- AGAakash Gupta
So I came up with this idea of a smart collar. So basically, this is like a wearable with different biosensors that it's really sensing the pet's physical state. So let's say their heart rate spikes.
- SPSpeaker
Mm-hmm.
- AGAakash Gupta
Then it could actually send you a notification or even it could speak to you like the pet. That's why I think it's a cool collar. That says, "Hey, Bart, your dog's feeling stressed out because you are running around cleaning up." You know, whatever it might be, it'll help you understand things from the pet. So I think that's one idea. This next idea I came up with is, is this PetGPT Vision app. So this could be like a smartphone app where you point your camera or your smartphone camera at your pet, and then a multimodal AI is gonna analyze their position, their body language, maybe if they're barking or not, and it's gonna give you, like, a readout. So there's no new hardware needed really, and it might say like, "Hey," like, "your dog seems unusually, um, calm today. Maybe he hasn't had enough to eat," something like that. Then we could have a two-way edge hub. This is my third idea. So this could be like a home device, kind of think like your Google Home or something like that, but it's constantly detecting what the pet is feeling or doing, and then it might even play back things. Like, maybe your pet really likes to see the color purple or it really likes a specific sound, so it might even help live at the edge help them. And then I have this fourth idea, which is like a real-time behavior coach. So basically, the AI is gonna observe both the pet owner and the pet during their interactions. It might say something to the pet owner like, "Hey, you have risen your voice, and that's really upset your dog." Or it might say something like, for your dog, it realizes that a specific response will help the dog, and so it plays that response. Then we have the AI dietician coach, so I think this is really around that specific problem of, like, some people are gonna find that their dog is having some behavioral issues. They're too fat. They're too skinny. They're not eating. And so I tacked it back to that. 
So it's gonna track their food intake, treats, behavior patterns to identify nutrition-behavior connections. So like, uh, "Luna's afternoon aggression correlates with her high-grain breakfast," something like that. "Here's an adjustable meal plan." Maybe we even integrate it with food delivery for really awesome execution.
- SPSpeaker
Mm-hmm.
- AGAakash Gupta
Then a pet match advisor, so this is to that problem of, is the pet right for me? We could have somebody, an AI that helps you choose the right pet, and maybe even, like, environmental modifications to improve compatibility. And then finally, a conversation simulator, so this is, like, you know, the most out there. I was just thinking, like, what is the AGI product, you know? So it's like a voice interface where you talk to your pet, and the AI responds as if it were your pet. So you might say something like, "Hey, Max, why do you bark at the mailman?" And then the AI would respond in character, "The uniform may trigger my territorial instinct. I'm trying to protect you." So obviously, we are not gonna be able to get dog-to-word translation. That's not a thing that I think is ever gonna happen with the AI, but we can get AI to speak on behalf of the dog. Do all these solutions make sense?
- SPSpeaker
Yeah, all clear. So only thing left is to prioritize the one you'd like to focus on. So tell me, how would you prioritize the solution that
- 15:25 – 18:53
Prioritization Framework
- SPSpeaker
you'd pursue as a PM in OpenAI?
- AGAakash Gupta
Yeah. So what I'm gonna do is, that's why I numbered these, let's just create a quick table of sorts, I think, and that's gonna help us kind of rate these. In a, in a real-life setting, I might go try to build some prototypes and stuff. Do we have time today to go build a prototype or pull out a laptop?
- SPSpeaker
No, let's just keep it, like, in person, no computers. I just want to s- see you work. If you'd like to, like, show me a mock, let's use pen and paper in front of us. That, that's good enough.
- AGAakash Gupta
Awesome. Perfect. So then I won't rush too much on this. So we need to-- So let's think about what are the criteria that I guess matter, right, across these. So I think that, um, the criteria that matter the most to me, we're gonna have, like, user impact. We'll have, I guess, tech feasibility, 'cause some of these are more or less feasible.
- SPSpeaker
Okay.
- AGAakash Gupta
Differentiation from existing products, engagement potential, because we really wanna build something that engages a lot of people. So we can kind of, like, look across all of these. So let's start with the smart collar, right? So I think very high user impact. I mean, low technical feasibility because it's hardware.
- SPSpeaker
Should you add the AGI part to-
- AGAakash Gupta
Differentiat-
- SPSpeaker
... to the table?
- AGAakash Gupta
Uh, no, let's not do that.
- SPSpeaker
Okay.
- AGAakash Gupta
Because I think that AGI will come from here, but once we have something that's making user impact, it'll, it'll drive towards that.
- SPSpeaker
Okay.
- AGAakash Gupta
So I think we get medium engagement potential. And then PetGPT Vision app. So I guess this would be, like, kind of like a medium user impact, high technical feasibility, a lot easier, medium differentiation, medium engagement potential. Then we get the two-way edge hub, so medium user impact. I think kind of low feasibility 'cause we're talking about hardware plus on-the-edge processing of that AI, so we need, like, a small model to build it. Differentiation would be quite high, though, I think. And then engagement potential, kind of low. Now, if we think about a real-time behavior coach, wow, like, this is the highest user impact, right? It could really make a difference in your pet's life. Tech feasibility is gonna be more on the medium side. Some of this stuff I don't even know if it'll work, like soothing sounds. High differentiation, of course. Engagement potential, I think this is, like, very, very high. So this is looking like a really, really promising product so far. AI dietician coach, I mean, that's high, right? It makes a big difference. A lot of people have these problems. I think technical feasibility, not as hard as the other ones. Differentiation, I mean, professionals can do it, so it's more medium. And then engagement potential, also more medium. Now, if we think about the pet match advisor, I mean, user impact, it's medium to low, honestly. I would say, like, low to medium. Um- Technical feasibility, very high. This is the easiest one. I mean, we could probably do this with the models that exist today. Differentiation is medium, and engagement potential, I think, is the problem here. I mean, it's one and done, right? It's not gonna re-engage people, and so that's also one of the reasons I knew this solution was here, that I included engagement potential. Then conversation simulator. This is our last feature. So, I mean, I think this is a medium user impact, much like two and three. Technical feasibility is high. It's pretty easy.
Very high differentiation, I think, for this. Very high engagement potential. So if I were to just look over all these features, I mean, the real-time behavior coach is coming in number one, and the conversation simulator is coming in two. Does that sound good? Is there anything else you want me to explain about my thinking there? I just kind of blew right through it quickly so we can get into the design.
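[Editor's note: the ratings in this walkthrough can be turned into a simple weighted scoring matrix. This is a sketch of the reasoning, not part of the interview: the 1–5 numeric scale, the 2× weight on user impact, and the smart collar's differentiation rating (cut off in the conversation) are all illustrative assumptions.]

```python
# Sketch of the prioritization table discussed above.
# Ratings are transcribed from the conversation; the numeric scale
# (L=2 ... VH=5) and the 2x weight on user impact are assumptions.
SCALE = {"VL": 1, "L": 2, "M": 3, "H": 4, "VH": 5}
WEIGHTS = {"impact": 2, "feasibility": 1, "differentiation": 1, "engagement": 1}

# (user impact, tech feasibility, differentiation, engagement potential)
solutions = {
    "Smart collar":             ("VH", "L",  "H",  "M"),   # differentiation assumed
    "PetGPT Vision app":        ("M",  "H",  "M",  "M"),
    "Two-way edge hub":         ("M",  "L",  "H",  "L"),
    "Real-time behavior coach": ("VH", "M",  "H",  "VH"),
    "AI dietician coach":       ("H",  "H",  "M",  "M"),
    "Pet match advisor":        ("L",  "VH", "M",  "L"),
    "Conversation simulator":   ("M",  "H",  "VH", "VH"),
}

def score(ratings):
    """Weighted sum of the four criteria for one solution."""
    keys = ["impact", "feasibility", "differentiation", "engagement"]
    return sum(WEIGHTS[k] * SCALE[r] for k, r in zip(keys, ratings))

ranked = sorted(solutions, key=lambda name: score(solutions[name]), reverse=True)
for name in ranked:
    print(f"{score(solutions[name]):2d}  {name}")
```

Under these assumed weights the real-time behavior coach comes out first and the conversation simulator second, matching the conclusion reached verbally above.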
- SPSpeaker
No. All clear, all transparent. Choose
- 18:53 – 20:21
Selecting the Final Solution
- SPSpeaker
your solution or problem to solve actually.
- AGAakash Gupta
Well, for me, I mean, it's not really a choice, it rates the highest. Let's go real-time behavioral coach. This is going back to our problem, right? If we go back to the problems we wanted to solve, numbers one and five, I don't know why they're doing something, and I want to catch behavior when it happens. This is solving those two problems the best. The conversation simulator, if I think about it, that's almost like a magic moment for this, right? That's gonna create the emotional stickiness and differentiation. So we can actually think about putting those two together. Let's put these into one product, I think, is what I'm ultimately leaning towards. One product where this is the magical moment, the conversation simulator. Both of those are gonna leverage OpenAI's core strength, which is multimodal understanding and natural language. Um, I think if we do a software-only approach on these, that's really gonna help us, and we-- maybe we think about phasing it that way in terms of the design. It's gonna lead to faster iteration, broader reach than a hardware play. And when you think about that AGI goal, we do wanna get it to everybody, and we want to use our nine hundred million weekly active users in ChatGPT. And then I'm thinking, like, actually this dietician coach, which rated pretty highly as well, that could be kind of like a follow-on. So that could be like-
- SPSpeaker
Almost feels like a subset of, of it, doesn't it?
- AGAakash Gupta
Yes, exactly. And that's why I think the framing of foundational problems actually helped us because now we're able to solve the framing of the foundational, and then the design of these other ones will kind of come naturally. So let's move into the design because I know we're limited on time here, and obviously this is a
- 20:21 – 24:22
Core Flows Design
- AGAakash Gupta
design interview, so we want to think about that. So what we need to think about, I guess, is the, the core flows, right? So let's write out these core flows together, and I'll just brainstorm, and interrupt me if I miss anything. So core flows. If I think about the core flows, we have the setup phase. So the user is gonna onboard with a pet profile, like species, breed, age, any known issues. Maybe they probably... We probably wanna have, like, a, uh, a video upload of the pet in their normal state if we're gonna have a real-time behavioral coach. And then probably the user is going to describe two to three issues in natural language. So we'll have a video for the state, and we'll have sort of like a quiz for pet age, that kind of thing. So that's our setup phase. So in terms of the next flow, we're gonna have, like, we're gonna have two types of use, really. We're gonna have a passive use and an active use. So we kind of need to design both of these. So let's see here. The passive use, this is gonna be like camera, audio monitors during key times. So this would be things like owner departure or mealtime. Then I guess the AI, we want it to build, like, a behavioral model, sort of like a baseline. I imagine, like, this type of a model, we want it to be pet specific, so it'll probably take, like, seven to fourteen days for us to build it. And then we probably want to give some sort of, like, summary to the user, maybe like a daily thirty-second summary to the user. So that would be the passive usage, and then we're gonna also have active usage. So if we think about active usage of a coach, right? This is when the user is gonna initiate during, uh, either bad behavior or training. And I guess what we want to do here is we want to show, we want to look at both the pet and the owner, and we want real-time voice coaching using, you know, the amazing ChatGPT real-time voice models. Maybe, like, slightly fine-tuned, of course, for this use case. 
And then we'll probably want some sort of a post-session replay with AI annotations, of course. So those are gonna be some of the keys. And then, as I said, the magic moment here is probably gonna be the conversation mode. So that would be the third mode we need to think about in defining this. So the third mode, let's think about conversation mode. So in conversation mode, what's gonna be happening? Well, we're gonna need some amount of sufficient data, right? So we're probably gonna have to say, like, I would guess, like, at least two weeks. So after two weeks, they can unlock talking to a pet. I don't know. We'll have to work with our research team to figure out whether two weeks is the right amount, but that's what I'm thinking, like, off the top of my head based on the AI products I've built. And then we're gonna need a voice interface. I think, like, we can probably rebuild a lot of what ChatGPT has now. Then we need to make sure that this model is really not hallucinating anything, and it's all observed. So we'll need to tune the model really specifically. So an example might be, like, if they ask Luna, "What do you need right now?" "I've been alone for six hours. I need movement and your attention." [chuckles] When you just look at your phone at home, it gets confusing. That would be some wow moment insight, right? Where the person is like, "Wow, me looking at my phone, it's losing that attention." So these would be the key use cases. So we need to think about, like, what are some of the key design decisions here, and then I'll go ahead and try to sketch some of this out. So looks like we're running out of space here. The key design decisions, I think we're gonna make this... So key design decisions. I think we want to make this voice first. Does that sound right to you? Not app first.
- SPSpeaker
Okay.
- AGAakash Gupta
And then we want to be really careful about anthropomorphization because they don't, as far as we know, think exactly like humans. And so we need to push the research team to make sure what we do anthropomorphize is correct. And then we probably want to have, like, some sort of progress visualization because we're talking about problematic... pets. And then I think we want to have a hardware-light approach. As I mentioned earlier, we'll focus on software first. Does that sound
- 24:22 – 26:56
Key Design Decisions
- AGAakash Gupta
good?
- SPSpeaker
Sounds good.
- AGAakash Gupta
So let me just-- S- You know, honestly, the way I would do this is I'd probably put this into an AI prototyping tool, but let's also sketch some of this out here, and I'll, maybe I'll sketch this out over here. So a set-- So let's look at, like, what the setup is, right? It's a video, it's a quiz, and it's, like, the three issues. So the way I'm thinking about this is, like, a typical onboarding flow where they, like, kind of see, like, a progress bar, right at top. Like, "Oh, you've made this amount of progress." And, you know, they'll say, like, "Okay, since we want to do it in a visual-first way, we'll have this sort of like video with, like, the camera button." And essentially, they just click the camera button, and it takes a video of the dog, and we're gonna say, "Okay, so we think that this is this pet, this age. Is this right?" And then we'll give them the opportunity to kind of make some edits. So we're kind of, like, inferring everything at, like, step one. And then we'll just have them-- Then we'll go into, like, the voice-first interface. So the voice-first interface is kind of like that big, you know, I think right now we have, like, that ChatGPT blue, like, bubble with, like, these things coming off of it, right? So that's like the well-known sort of voice interface that people have of ours. And so here we'll just have some instruction, like what are the key issues? And I'm just gonna put WKI. Like, what are the key issues? It's gonna say that up there, and the person's just gonna say it, and then we're gonna show it back to them in text, and they can edit it. Does that make sense for setup?
- SPSpeaker
Yeah. All good.
- AGAakash Gupta
All right, so I'm gonna go ahead and put a box around this and say we've designed the setup phase. And looking at the time here, I'm not gonna spend too much more time on designing every little detail of these, but we have the active, the conversational, and sort of this key moment, right? So let's talk about this-
- SPSpeaker
Pause real quick. Maybe, maybe now offer to flip to creating not an AI prototype, but at least, uh, word the prompt. So you say, "Uh, since we are short on time, let's not draw it, but let me provide you a prompt that I would use in, in Lovable," say, and then work off this rather than draw it manually. Like, offer alternatives and present that you can both do it from scratch and do it with the modern tools.
- AGAakash Gupta
Nice. I like it. So instead of, like, drawing out the rest of these, how about this? For this next one, for passive, active, and conversational, these three areas, why don't I just show you, like, how I would prompt this in a modern tool like Lovable? Does that sound good?
- 26:56 – 28:53
Prompting Lovable for Prototypes
- SPSpeaker
That sounds amazing. Let's-
- AGAakash Gupta
Okay. So I think for, like, pass-
- SPSpeaker
Show me how you do it. That's, that's something that no-- that I haven't heard before, so that, that's a unique approach, and I'd love to see you do it.
- AGAakash Gupta
Yeah. And even like, I even kind of like Cursor's visual editor, so I might do this in, like, Cursor and Claude Code. But let's go ahead and say, like, passive, like, Lovable prompt for now, right?
- SPSpeaker
Mm-hmm.
- AGAakash Gupta
So basically, what we need to do is we need to monitor the audio/video. Once it's past a threshold, we need to be able to, like, have them converse. So what I would do is I would, I would tell Lovable, "Hey, we are going to create the, uh, specific screens, like the home screen for this app," and I'll explain the app. And this home screen is gonna have two states. This home screen is gonna have one state where, uh, it's collecting data and then the state after. And so we need to consider both of these. And the state after, what it's basically gonna do is it's, it's gonna show people a daily summary of what the AI collected about their dog's or their pet's behavior. And it's also gonna have, like, a 30-second summary at the top, which they can play. And a lot of people are gonna come in because we are gonna probably send them some sort of notification, right? So we're gonna send them, like, a notification via a text or an email or something like that, like your daily summary, and they're gonna click into that. So when we build this, we kind of wanna have this page be deep linkable and have them-- allow them to then click in so that they can see more information. So these different blocks are all gonna be, like, clickable blocks. And I'm not gonna do too much else. Like, what I found in prompting Lovable is the most important thing is to really explain the different states of the product, exactly what the product is, what's clickable. Then I'll let it go ahead and give me a first layout design, and if I need to edit it, I'll edit it from there. Does that sound good?
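[Editor's note: written out, the Lovable prompt described above might look something like this. This is an illustrative sketch; the app name and exact wording are invented, but the two home-screen states, the playable 30-second summary, the deep link, and the clickable blocks all come from the description in the conversation.]

```text
You are building the home screen for "PetCoach" (placeholder name),
an AI pet behavioral coaching app.

The home screen has two states:
1. Collecting data — the AI is still building the pet's behavioral
   baseline. Show progress toward unlock (e.g. "Day 5 of 14").
2. After collection — show a daily summary of what the AI observed
   about the pet's behavior, with a playable 30-second audio summary
   at the top.

Most users arrive from a daily notification (text or email), so the
page must be deep-linkable. Each summary block is clickable and
expands into more detail.

Give me a first layout; I'll iterate on the design from there.
```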
- SPSpeaker
Sounds good.
- AGAakash Gupta
Awesome. So I want to end on this last point here because,
- 28:53 – 30:05
Progress Toward AGI
- AGAakash Gupta
you know, you did emphasize at the beginning that we need to think about progress toward OpenAI's mission of AGI. And when I think about AGI, artificial general intelligence, I want to break that phrase down. Artificial: here, we're creating a real-time behavioral coach. General: we're going into the animal space, specifically looking at pets' behaviors using audio/video. This is stuff that a human expert could do, but most people can't. And the third point, intelligence: hopefully with our research team, as we build out the team around this effort, we'll also have some new AI breakthroughs. So I really think it delivers across artificial general intelligence, but we need to make this safe, right? So we need to consider data privacy. We need to consider giving people the wrong information about their dog. We need to consider potentially doing something that's good for the human but not for the dog. We don't want that either. So I think a couple of the risks we need to think about here are that it gives bad advice,
- 30:05 – 30:54
Risks and Mitigations
- AGAakash Gupta
right? And so to mitigate this, we'll need clear disclaimers and confidence scores. We'll also want to make sure it doesn't anthropomorphize too much, so we'll want to frame it as interpretation, not translation, and probably have some periodic reminders about AI limitations. There will also be privacy concerns with an always-on camera, so we'll probably want things like on-device processing and easy delete. And another risk is that it doesn't work for all pets, so we'll probably want to roll it out to certain types at a time: dogs, then cats, et cetera. Dogs first, because that's where the best behavioral research is. So that's how I'd think about it. Imagine Sarah. She's been struggling with her rescue dog Max's separation anxiety for months. She spent
- 30:54 – 31:35
The Story: Sarah and Max
- AGAakash Gupta
six hundred dollars on trainers and still comes home to a destroyed apartment. With this product, she'll understand that Max's anxiety spikes exactly twenty-three minutes after she leaves. That's when he loses hope that she's coming back. The AI coaches her to run a brief training protocol at the twenty-minute mark via an automated treat dispenser and calming audio. After three weeks, destruction incidents drop eighty percent. And when she asks Max why he was so scared, she hears back, from the AI of course, "When the door closed and your smell faded, I didn't know if you'd come back. The treat in your voice recording helped me remember you always do." That's the product.
- SPSpeaker
So thank you. And how did Aakash do?
- 31:35 – 36:46
How the Interview Was Evaluated
- SPSpeaker
Well, it won't be a surprise if I tell you that was a really great interview, and any candidate who answered like that in an OpenAI interview would be rated very, very high. But how would they rate him? What framework could they use, and what could have gone wrong? Let me share my screen. So, execution: exceptional, right? We saw the narration. We saw the different blocks. We could easily follow a structured train of thought. Very often, PMs who are eager to answer, who think they have a great reply in hand, will wander off. They'll forget minor details. They'll repeat stuff. And without proper structure, the best ideas, the best answers, will be ruined. The visual way of narrating that Aakash demonstrated was beneficial for him, because he could focus on one thing at a time while also processing the next elements of the answer as he talked. And it was beneficial for me, the interviewer, who knew exactly what was going on. I could focus on the current chapter of the answer, and it was all clear from beginning to end. And while a proficient answer would still be good without the visual aid, the visual aid makes a big difference in clarity and in following the train of thought. Creative solutions: obviously, having presented eight different ideas was great, and they were what they were supposed to be at this point. Not too detailed; more like problems attached to solutions, with a clear explanation of what they're supposed to do, without making them convoluted, complicated, or hard to understand. It's just a job interview; it's akin to a brainstorming session with the team. Making it clear and understandable, and then following up with a very transparent prioritization using a clear framework, was amazing. And if you're not doing this in job interviews, you definitely have to do it at work and in job interviews. And user centricity.
The different potential stakeholders outlined as the very first step already show that Aakash was super user-focused, that there were different types of user personas he could pursue. And though some PMs might have started with problems to solve and then attached users who fit those problems, I think the other way around actually worked better for such a unique problem as pet owner communication. User centricity could easily have been wasted if a candidate had focused on technical aspects or on the AGI goal itself, or, even worse, identified the user poorly by focusing on the pets rather than the people who will pay for the product and actually use it. Even though it's called a pet communication AI solution, it's a bit like toys for the youngest children: they aren't really meant to resonate with the little children themselves, but with the parents who buy them, believing they'll solve a problem for the little one. That's a mistake that could easily have been made. And listen, Aakash and I have been through these types of questions with multiple people over and over again. In fact, in February we are starting the second edition of our cohort, focused on making sure you land your PM job. And why do we believe we can do this? This is not our first rodeo. Even now, during the first cohort, every fifth student has already landed his or her PM job while we were teaching them to optimize their process, improve their resume, improve their interview game, and find themselves in the new AI world. All of that is already yielding fruit, and it will get even better. We'll improve it in the second cohort because, as PMs, we've learned from our initial product, and the second edition will be much better. You will have around two or three live sessions per week. We'll do six or more mock interviews.
You'll get one-on-one coaching every two weeks, and every bit of what we've learned throughout cohort one will enhance cohort two, and hopefully even more people will get their new PM jobs in 2026 as we go on. So what are
- 36:46 – 39:08
Land PM Job Cohort Info
- SPSpeaker
we waiting for? Go to www.landpmjob.com and book a call so we can make sure this is the right fit for you. Because it's not about just bringing anyone into the cohort; it's about helping the people who will be our next success stories. We are here to help you with the cohort, and I can guarantee we'll do just that. See you in class. And Aakash, thank you for the great interview.
- AGAakash Gupta
Glad you liked it. I was a little nervous about whether the response was going to land, so I'm glad I was able to deliver for you guys. If you want mock interview coaching, if you want real mock interviews, if you wanna be able to perform like I did today, join our cohort, where we'll have five weeks of interview prep. It's a 12-week program: 36 sessions plus six one-on-ones, so 42 sessions of content, plus awesome recordings of past special topics. Everything you get in this cohort you cannot find anywhere else. It's not in our public writings; it's not in this YouTube content. What's in the cohort goes a layer deeper. It's the actual stuff that's going to help you land a PM job. It's how, in this cohort, we've had someone land an OpenAI PM job, and how we're going to have more and more of you land your dream OpenAI, Meta AI, Anthropic, whatever your dream job is. Whether that's AI PM or not, this cohort can help you. We've constructed it to be the only cohort in the world with this amount of support, and because of that, it is a premium cohort, so you have to apply online at landpmjob.com. We'll have more of these videos, so if you like them, make sure to like, comment, and subscribe, and we'll see you in the next one.
- SPSpeaker
See you then.
- AGAakash Gupta
I hope you enjoyed that episode. If you could take a moment to double-check that you have followed on Apple and Spotify podcasts, subscribed on YouTube, left a rating or review on Apple or Spotify, and commented on YouTube, all these things will help the algorithm distribute the show to more and more people. As we distribute the show to more people, we can grow the show, improve the quality of the content and the production to get you better insights to stay ahead in your career. Finally, do check out my bundle at bundle.aakashg.com to get access to nine AI products for an entire year for free. This includes Dovetail, Mobbin, Linear, Reforge Build, Descript, and many other amazing tools that will help you, as an AI product manager or builder, succeed. I'll see you in the next episode.
Episode duration: 39:18