Aakash Gupta

The AI PM Behavioral Interview Masterclass (Mock w/ Real Answers)

In this episode, Dr. Bart Jaworski and I run a full mock interview covering all four major AI PM behavioral interview categories: AI product experience, technical AI knowledge, ML team collaboration, and AI ethics and safety. Real questions. Real answers. Real-time feedback on every response.

Full Writeup: https://www.news.aakashg.com/p/ai-pm-interview-guide-2026
Transcript: https://www.aakashg.com/bart-jaworski-podcast/
Land PM Job Cohort: https://www.landpmjob.com

----

Timestamps:

0:00 - Why AI PM Jobs Matter
2:05 - Four Interview Categories
3:36 - Mock Interview Begins
6:03 - Feedback: Tell Me About Yourself
8:50 - AI Product Experience Question
12:22 - AI Bots in Fortnite
16:32 - Feedback: Storytelling Done Right
19:57 - Evaluating ML Models
23:44 - Feedback: Beyond Textbook Answers
25:55 - ML Team Conflict Question
34:41 - AI Ethics and Safety Question
41:30 - AI Strategy Question
50:13 - Six Overriding Interview Skills
52:05 - Land PM Job Program

----

Key Takeaways:

1. Answer the question behind the question. "Tell me about yourself" is really "Why should we hire you as an AI PM?" Be a skilled politician. Keep it under 2 minutes. Reference the company and the interviewer to differentiate yourself by 1-2%.

2. AI is the tool, not the story. Do not lead with AI. Lead with the problem, the user insight, the business context. Then show how AI was the right solution. This is what separates an 8/10 from a 10/10.

3. Use the STAR-M framework. Situation, Task, Action, Result, Metrics. Always end on metrics. PMs who get hired today drive measurable business impact; they don't just ship features.

4. Show real conflict with real resolution. Fake conflicts get spotted immediately. Show what sides people were on, why they disagreed, and how you used multiple methods to resolve it. Not just "leadership backed me."

5. Take your time to structure answers. Asking for a moment to organize your thoughts shows you are thoughtful, not unprepared. Write notes live. The interviewer sees you are not reading from AI or a script.

6. Reference your mentors and philosophy. Name-drop the people who shaped your thinking. Mention Hamel Husain, Shreya Shankar, Kevin Weil from OpenAI. This proves you live the craft, not just memorize frameworks.

7. Read the interviewer's signals. Watch their facial expressions. If they light up on a topic, go deeper. If they look disengaged, pivot. Adapt in real time. This is the difference between a rehearsed answer and a conversation.

8. Stand for AI ethics and safety. Companies do not want the PM who bulldozes through ethical concerns. Show that you paused, delayed shipping, and championed the right thing. It is how promotions happen.

9. Show iteration and failure honestly. If your AI product stories have no setbacks, they sound fabricated. Talk about pausing features, early prototypes being bad, and models improving over time.

10. Connect technical decisions to business outcomes. Every eval framework, every A/B test, every model improvement should ladder up to revenue, retention, or conversion. Interviewers need to see you think beyond the model.

----

👨‍💻 Where to find Dr. Bart Jaworski:
LinkedIn: https://www.linkedin.com/in/drbartpm/
Land PM Job: https://www.landpmjob.com

👨‍💻 Where to find Aakash:
Twitter: https://www.x.com/aakashg0
LinkedIn: https://www.linkedin.com/in/aagupta/
Newsletter: https://www.news.aakashg.com

#aipm #pminterview

----

🧠 About Product Growth: Aakash Gupta's newsletter with over 200K subscribers.

🔔 Subscribe and turn on notifications to get more videos like this.

Dr. Bart Jaworski (guest) · Aakash Gupta (host)
Apr 9, 2026 · 54m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00 – 2:05

    Why AI PM Jobs Matter

    1. BJ

      Companies are hiring AI PMs everywhere right now, but the problem is there isn't enough data on the internet to know what they're asking for.

    2. AG

      I've helped over 50 candidates get AI PM jobs paying over $300,000 in the last year. Here's what I've learned about those processes. AI PM jobs are now over 30% of PM jobs. Top AI companies like Anthropic and OpenAI are paying over one million dollars a year to these AI PMs.

    3. BJ

      So if you want to make more money in your career as a PM, you should consider AI PM. Unfortunately, the tactics you use to get a PM job and an AI PM job are dramatically different.

    4. AG

AI PM interviews ask questions in categories like AI product experience, AI technical knowledge, ML team collaboration, and AI ethics and safety.

    5. BJ

      Today's episode is literally everything you need to master all these question types, with real advice and real mock responses done by me and Aakash.

    6. AG

      So strap in, and if you stay till the end, you'll be able to nail each one of the major AI PM behavioral interview question types.

    7. BJ

      So Aakash, as part of the Land PM Job program that we run together, we've helped folks land jobs at OpenAI, Anthropic, Meta, Google, Stripe. What have you learned about the AI PM interview process since the program started?

    8. AG

The biggest finding is that case interviews only end up being 10% of the interviews you actually get. Yes, at OpenAI, at Meta, at Google, you will face the case interview. But for all of the other roles, you're highly unlikely to face the case interview. You're actually just gonna face a series of behavioral interview questions. And even at the OpenAIs of the world, you are gonna face behavioral interview questions. So realistically, everyone who wants an AI PM job has to master all of the categories of behavioral interview questions.

    9. BJ

      So what are the AI PM interview questions people should

  2. 2:05 – 3:36

    Four Interview Categories

    1. BJ

      know? What are the categories people should focus on?

    2. AG

So there's four broad categories of questions, and if you stay till the end, we will cover all four of these. The first, have you actually shipped AI products? And they're gonna ask you situational behavioral questions like, "Tell me about a time when you shipped an AI product and it had a big impact. Tell me about an AI strategy you've created. What was a time when you shipped an AI feature and the evals did poorly? When was a time when you shipped a failing AI feature?" So they have all of this category of behavioral situational questions. The next is working with ML and AI engineers. "What about a time you had a conflict with the AI research team? How did you build evals that helped the ML engineers hill climb?" You need to be able to answer these. The third category is AI-specific trade-offs. So how are you gonna trade off accuracy versus cost? How are you gonna trade off speed versus quality? How are you gonna trade off hallucinations versus actually answering their questions? These are tough choices, and they wanna see your actual experience. And then the final category is graceful failures. How do you deal with bias? How do you deal with hallucination? And a lot of times they'll ask about ethics and safety in this category as well. So you really need to make sure that you can strongly answer each of these, and you practice the one that you're weakest at.

    3. BJ

      Okay, great. So how about we show folks how it's done and do a mock for each of these?

  3. 3:36 – 6:03

    Mock Interview Begins

    1. BJ

      Okay, so let's say I work in OpenAI, and I'm interviewing you for a senior AI PM role for the, say, apps team. I'll be their head of product, Ami Vora. So Aakash, tell me about yourself.

    2. AG

Ami, it's such a pleasure to be in this interview with you. I've actually been reading your content in your Substack for years. I've been following all your podcast appearances. And of course, I've been a huge fan of OpenAI since ChatGPT launched in 2023, so it's a bit of a pinch me moment. But about myself, I started in PM back in 2008 working in SaaS. I've worked my way up from product manager to VP of product. In my latest role at Apollo.io, I managed a team of up to seven PMs, leading a cross-functional team of over 30 engineers and five designers and three analysts. And my charter there was really a culmination of everything I've learned throughout my career. So I started in SaaS in 2008. Then I worked at ThredUp, where I worked on search relevance and applications of ML to improve our pricing algorithms, because we had hundreds of thousands of items, and we needed to price them in real time. Then I worked at Epic Games, where I launched the very first AI in a major shooter. What these AI did is they acted as humans, and they solved the critical problem that beginner players were getting, quote-unquote, "stomped on" by expert players. Now they had somebody to stomp on, which were the AI. And the skill-based matchmaking that we launched along with that was the single biggest thing to move retention in my three years at Epic Games. Then I worked at Affirm, where our ML risk algorithms were responsible for making loan decisions, so very high-stakes decisions when it comes to an AI application, and I led the activation team. Eventually, I led the entirety of the growth experience team and sat on the senior leadership team. Most recently at Apollo, I've been working on, "How do we grow this company from $800 million in valuation to $2.5 billion in valuation?" And luckily, we did so successfully with a bunch of AI features that we launched on the growth team. Some of the most successful of those were the AI email writer and the AI agent.
So I've really spent my whole career applying ML and AI to build apps, and that's why I really think this role would be an exciting next step for me.

    3. BJ

      All right. So dear viewer, spend... Like, pause

  4. 6:03 – 8:50

    Feedback: Tell Me About Yourself

    1. BJ

the video for a second and tell me what you think Aakash did great here. Pause now. Back? Okay, so let's compare notes. Did you notice that Aakash didn't really talk that much about himself? I didn't hear about his children or hobbies. He didn't really talk about himself as a PM. He actually was a skilled politician who answered the question, "Why would we hire you at OpenAI as an AI PM?" Because that's the goal of this interview, and quite frankly, this is what's actually being asked between the lines. So while Aakash is a phenomenal person to spend time with, they didn't invite him to evaluate whether he's a good fellow to bring to a pizza party. They need to know whether he's a fit for the AI PM position or not, and Aakash, knowing precisely what's going on there, answered the question that was actually being asked in the room. And I would say that it was a little bit of a bold tactic to be like, "I admire you, admire the company." But let's be honest. When it comes to companies in the world, maybe FAANG and the OpenAIs and Anthropics of the world are the actual ones where you can say that and sound sincere, because that's the forefront of product development, and you wanna be there or be square.

    2. AG

I would say there are two or three things I attempted to do in this response, and it's up to you, the viewer, to see if I did them. Number one, be concise. I tried to keep my response to less than two minutes. Number two, tell a really clear story. So what I tried to do is say, "Hey, I've had this 16-year career arc. Here are the steps that are relevant to an AI apps position within that." And then I tried to show that I'm at a certain level of seniority. I tried to name check the amount of PMs that I managed, the valuation, the titles, so that I could position myself for a more senior, higher paying role. And then finally, I infused a little bit of the question behind the question, as Bart explained so wonderfully, which is, "Why are you here today, Aakash?" Not just tell me about yourself. And so that's where the specific references showing that I've studied the interviewer, Ami Vora, and her content, and the specific references to ChatGPT, in this case that it came out in 2023, are gonna differentiate me just 1 or 2% compared to the other candidates. And that little 1 to 2% can mean the difference between, "Hey, we liked you," versus, "Hey, we gave you the offer."

    3. BJ

      Yeah, especially nowadays where there are so many quality candidates on the market, those small percentages really can make a difference. So with that said, now let's move on to the first category of AI product experience.

  5. 8:50 – 12:22

    AI Product Experience Question

    1. BJ

      Aakash, tell me about a situation where you've shipped an AI product.

    2. AG

All right, so as we kind of saw in my tell me about yourself, I've been shipping AI products for the last 10 years. But if I had to pick one story that I think was a particularly fun story that highlights how I can think innovatively about AI applications that can actually drive results, I would highlight the one I mentioned in Fortnite. I told you already it was the biggest change to retention we'd ever seen. Let me unpack that issue a little bit more. So the problem we faced, and I actually uncovered this in the data and made this the product strategy for the upcoming season, was we had an abnormally high new user churn rate. We were three years into the life cycle of Fortnite, and when we started Fortnite, the 30-day retention was something insane for a game. It was like 90%. But when I started at Fortnite two years later, it had dropped to something like 75%, and then it continued to drop for a year to like 65%. And I said, "Okay, this is a problem for us." We're acquiring millions of new users a day [laughs] sometimes, especially when we had a new season launch, and these millions of people, not all of them are coming back as frequently as they used to. So naturally, like any good product manager, I watched some of them in their sessions, I watched some of their game replays, and I talked to some of them, and there were a couple key themes that came out. Number one, "It's hard for us to learn the mechanics." So we shipped things like walkthroughs and better videos illustrating the mechanics. Number two, "I feel like I'm getting placed against people who are way better than me." So we shipped skill-based matchmaking. Unfortunately, those two weren't quite enough. Even with the skill-based matchmaking, we faced-

    3. BJ

      Mm-hmm

    4. AG

... this third problem, which is, "I feel like I'm not getting enough time to learn these mechanics. People are just killing me." In Fortnite in particular, Fortnite is a first-person shooting game, so people usually have some skill in first-person shooters, but Fortnite has this very unique mechanic, which is called building. And so what an experienced player would do is they would, quote, unquote, "box up" the new player. They would build around the new player, and the new player would have no chance, and they would get crushed. [laughs] And the problem was people would learn building pretty fast, like in their second or third month. There wasn't a big enough pool for us to matchmake just amongst the first month players. And you might say, "Aakash, why not?" I also thought that, too. If there's millions of people a day, why isn't there a big enough pool? Well, it turns out in a first-person shooter, the latency, what's known as the ping, really matters, and so we actually matchmake with people in your local region. So people in New York State are not matching with California. People in New York State are matching with the seven or eight states around them. People in Israel are not matching with people all the way out in India. That's actually too far. You'd think those are similar parts of the world. People in Israel are matching with the few countries around them. Same with India and the rest of the world. And so the pools were actually really small. And so what I realized was that the fundamental problem was that our human pools for new players were too small. To compound onto this issue, Epic was fighting a war against Apple and Google, and we got kicked off of mobile [laughs]. And Fortnite mobile, which was generating 80% of our new players, was kaput.

    5. BJ

      Mm-hmm.

    6. AG

      So we had 20% of the player base then. When I saw all these factors, I realized we needed to think about how we can bring AI into the game,

  6. 12:22 – 16:32

    AI Bots in Fortnite

    1. AG

how we can use AI. And of course, games have always had AI in the form of non-player characters, but what if we had AI that was disguised as a human? That was the insight that I had. I worked with some game engineers to mock up some prototypes of this. Honestly, the initial prototypes were quite bad because they were based on mechanistic rules. If you get shot at, shoot at them, et cetera. They felt like AI. So when we actually built AI into the system, when we used neural networks, we were able to create players that acted and felt like humans. And the way we had play testing at Epic was truly amazing: we would have new players come almost every day, and I'd be able to watch them from behind a mirror. And once I saw in those play tests that people didn't realize they were playing against AI, I knew this was ready to ship. And so we launched this feature. I asked the AI engineers, "What percentage do you think we should launch at?" The AI engineers were like, "1%." I was like, "All right. Let's launch at 1%." And we monitored it at 1%, 5%, 10%. We ramped it to 50%. It was looking great, and so we ultimately ramped it to 100%. And when we did that, that's when we saw those amazing retention increases. And ultimately, it was something like a 7 to 8% retention lift, but that really compounded, and it generated hundreds of millions of dollars in revenue for us.
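The staged ramp Aakash describes (1% → 5% → 10% → 50% → 100%) is commonly implemented with stable hash-based bucketing, so a player stays in the same cohort as the ramp widens. This is a minimal illustrative sketch, not Epic's actual system; the function and feature names are invented:

```python
import hashlib

def in_rollout(user_id: str, feature: str, ramp_pct: float) -> bool:
    """Deterministically bucket a user into [0, 100) and check the ramp.

    Hashing (feature, user_id) keeps buckets stable across sessions:
    a user admitted at a 1% ramp remains admitted at 5%, 10%, and so on.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF * 100  # roughly uniform in [0, 100]
    return bucket < ramp_pct

# Widening the ramp only adds users; it never evicts earlier ones.
early = {u for u in map(str, range(10_000)) if in_rollout(u, "ai_bots", 1.0)}
later = {u for u in map(str, range(10_000)) if in_rollout(u, "ai_bots", 10.0)}
assert early <= later  # monotonic ramp
```

The key design choice is determinism: monitoring at 1%, then 5%, then 10% compares ever-larger supersets of the same exposed population, rather than reshuffling who sees the feature at each step.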

    2. BJ

Thank you. Well, this is not a part of the interview. I have two quick questions out of curiosity. Uh, first one, did the AI ever win a match, or was it, like, prohibited from being a winner?

    3. AG

      It did win, yes.

    4. BJ

      [laughs]

    5. AG

      It would occasionally win because our goal was to-

    6. BJ

      Mm-hmm

    7. AG

      ... to perform like a median new human so that they-

    8. BJ

      Mm-hmm

    9. AG

... couldn't tell that it was AI. [laughs]

    10. BJ

And my other question: I think one of the recent developments, to the same problem, is that the Fortnite team decided to create a no build mode where the skill cap would be lowered just because of the lack of building. Did you ever consider that, too? And if you did, why did you decide against it, in favor of AI?

    11. AG

      So I didn't... So the no build mode, I was actually responsible for that, too.

    12. BJ

      Oh, nice.

    13. AG

      Although I wasn't the driving product manager, I was one of the-

    14. BJ

      Mm-hmm

    15. AG

      ... product managers supporting that launch. The driving product manager, he ultimately shot up to almost become head of product. He was so good. But that was actually happening in what is called the Fortnite Creative world. So Fortnite basically had two elements to it, Battle Royale and Creative. Battle Royale generated 95% of the revenue [laughs] because we would launch a new season, and people would buy a battle pass. That was the entire monetization magic of Fortnite. Creative, people would just go grind and build their own maps. But they would have no reason really to buy something on a regular schedule, and so it monetized really poorly per hour. And so the strategic choice I thought was, let's focus less on Creative. Let's focus more on Battle Royale. In Battle Royale, building is a critical part of it. We had, you know, hundreds of designers building each new season, and so we focused there. Now, the genius of this team on Creative was they, this guy who eventually, I said, shot up the ranks, he was figuring out how to improve the monetization of Creative. And once the Creative monetization per hour got pretty good, that's when the Fortnite team strategically, actually led by the CEO all the way from the top, said, "Okay, now we can finally make a no build mode more prominent." And so when you open up Fortnite's homepage, now it's not just Battle Royale, Battle Royale, and a little slot for Creative, but it's actually a giant thing where Battle Royale and no build get equal prominence. And like you said, today, the active users and the concurrent users on no build mode is actually much higher than build mode. So once they solved the monetization problem, they were able to lean into that new mode. And my contributions to that, I actually helped work on that homepage, that design of figuring out what those tiles-

    16. BJ

      Mm-hmm

    17. AG

      ... looked like. And we took inspiration from Roblox and Minecraft, and we actually helped evolve Fortnite from a first-person shooter to more of a creative universe.

    18. BJ

      Thank you. So, dear viewer, what did you like about this, uh, reply? Pause the

  7. 16:32 – 19:57

    Feedback: Storytelling Done Right

    1. BJ

video, make some notes, and let's compare them in a second. Back? Perfect. What I liked about this reply was the storytelling. It wasn't jumping to conclusions. It wasn't focusing too much on AI technically. AI was what the AI in AI PM stands for: the element of using AI to solve real problems. So in order for me to fully embrace and immerse myself in this story, I needed to understand the background, the reasoning, the core problem being solved, and the evaluation metric, which was given a name and a value. The metric served as the backbone of the story, driving Aakash's motivation for using AI, why AI was used, and why it was successful. So everything there was great. If you spent a little less time or too much time on AI, this answer would not have been as well-received, because when you are in a behavioral interview, especially an AI one, it's a core thing to be understood, to make sure that what you have in your head travels successfully through your mouth to your interviewer's head, and Aakash absolutely nailed it. Very often, very viable stories and cases will be lost because of the chaos of too much technical language or not giving the right context about the problem and the solution that was ultimately achieved.

    2. AG

That was hopefully what I portrayed for you guys. As you can see, again, I tried to have a concise response. This time I might have been two minutes, 30 seconds, but that's because I was storytelling. I tried to keep it so that the interviewer was engaged. If you noticed, I'm not using ums and ahs. I'm also not reading off of a script. I'm actually looking at, how is Bart reacting? What are his facial expressions? And since he seemed engaged, I said, "Okay, I can go a little bit more than two minutes." It seems like he [laughs] understands what I'm talking about here.

    3. BJ

      Mm-hmm.

    4. AG

If Bart, or me in this case, was just stone-faced like, "I don't care about this gaming story," I might not have even chosen the gaming story. Actually, I chose the gaming story because in my tell me about yourself, I saw Bart light up [laughs] on that story.

    5. BJ

      [laughs]

    6. AG

      And so it's really about me being responsive, and you guys saw the follow-up questions he asked were so knowledgeable, right? So it was really about reading the interviewer's signal for me. That was number one. Number two was putting in how I as a PM operate, right? Really learning, leaning into user insights, teaming up with other PMs, building a strategy, teaming up with game designers and engineers. Some PMs, the way they tell stories, it's like, "I did everything." It's like, well, of course you didn't, right? [laughs] You worked in a team context. And so I tried to differentiate what I versus we drove. So that's what I attempted to do in this question. You should try the same.

    7. BJ

      Yeah, especially when you consulted the engineers about how to roll out the feature. That's, like, nailing it on the head. All righty. Let's carry on. Okay, let's move on to the second category, technical

  8. 19:57 – 23:44

    Evaluating ML Models

    1. BJ

AI knowledge. Aakash, how would you evaluate if an ML model is performing well?

    2. AG

      All right. Can I actually take a second to structure my response here?

    3. BJ

      Take your time. We're in no rush.

    4. AG

Okay. So this is a big question, of course, and we can talk about a specific example if you want. But I've written down for myself a three-level framework so that we can talk through it. So the first level is offline evaluation, the second level is online evaluation, and the third level is business impact. So offline evaluation, this is the topic that I know Kevin, the former CPO at OpenAI, has talked about a lot, evals, right? Evals are the new PRD is what some people say. I'm not sure I quite agree with that, but I do think evals are critically important. And personally, as a PM, I try to have a very active role versus just outsourcing the evals to my AI engineering team. So when I think about evals, I actually take the Hamel Husain-Shreya Shankar approach. I've taken their class, and what their class says is that you want to figure out what are the actual failure cases. A lot of people focus on precision and recall, and yes, I think those are important, and they use generic eval packages like the ones OpenAI has open-sourced, which I think are good and I think are a baseline. But I think the higher level is doing what they call axial coding of responses. So what you do is you look at either synthetic or real data. For my AI email writer at Apollo, I looked at real data on the 1% group, for instance. I know I'm calling these offline evals, but we tested with some people, and I looked at the real data and I said, "Hey, what are the failure cases?" And I axially coded them. So one example, it's referencing the wrong name of the recipient. Another example, it's referencing incorrect data about my company. Another example, it's not using the best practices we know that are gonna drive email open rates in the subject line. Another example, it's not using this best practice we know in the email that's gonna drive a reply.
So I put together this whole list where I figure out all of the errors, and I code them into groups of errors, and then I created an eval framework for my AI engineering team. I said, "Here's few-shot examples, like four or five really good, four or five okay, four or five bad." So I spent the time to generate these 15 examples for them across this group so that they know what good looks like, and then they could construct a metric-based eval to hill climb upon. So that's the first group, offline evals. The second group I have written down here, online evaluation. So this is your standard product testing, right? Having A/B tests, ramping it up from 1, 10, 25, 50%, looking at, hey, in the AI email writer's case, how many people are accepting this email without edits? How many people are sending this email? What is the open rate of this email and the reply rate of this email compared to a human-generated email? So looking at the entire set of product metrics for how it is succeeding in the A/B test. That's the second category. And then the final category, business impact, of course, right? So for the people that we have launched the AI email writer to, are we seeing more emails being sent, which results in more credits being used? Credits being used generally means that people buy more credits. So are we seeing people buy more credits? Are we seeing people potentially upgrade from one plan, we had four tiers of plans, to a higher tier of plan? And are we seeing people retain more? So I looked at the entire suite of output business metrics, and across these three levels, then I really make a decision: is the new AI email writer model good, or do we need to iterate on it more?
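The offline-eval step described above (axially code real failures, then turn each coded failure mode into a checkable metric the team can hill-climb against) can be sketched with simple programmatic checks. This is an illustrative toy, not Apollo's system; the failure modes, emails, and names are invented, and a real harness would typically combine such checks with LLM-as-judge grading of the few-shot rubric:

```python
# Toy offline eval: each axially-coded failure mode becomes a check,
# and the eval reports a per-mode failure rate to hill-climb against.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Example:
    recipient: str
    subject: str
    body: str

# Failure modes coded from reviewing real outputs (invented for illustration).
CHECKS: dict[str, Callable[[Example], bool]] = {
    "wrong_recipient_name": lambda e: e.recipient.split()[0] not in e.body,
    "empty_subject": lambda e: not e.subject.strip(),
    "no_call_to_action": lambda e: "?" not in e.body,  # crude reply-prompt proxy
}

def run_eval(examples: list[Example]) -> dict[str, float]:
    """Return the failure rate per coded mode across the eval set."""
    n = len(examples)
    return {name: sum(check(e) for e in examples) / n
            for name, check in CHECKS.items()}

emails = [
    Example("Ada Lovelace", "Quick question", "Hi Ada, open to a chat next week?"),
    Example("Alan Turing", "", "Hi Bob, buy our product."),
]
report = run_eval(emails)
# The first email passes all three checks; the second fails all three.
```

Tracking one rate per coded failure mode, rather than a single aggregate score, is what lets engineers hill-climb: each model iteration shows exactly which failure category improved or regressed.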

    5. BJ

      All righty. Thank you. So

  9. 23:44 – 25:55

    Feedback: Beyond Textbook Answers

    1. BJ

let me just jump straight into feedback, because this is a section that you can really expect, and while you could say, "Okay, I just need to learn that from a book, from a course or whatever," notice how Aakash not only provided the theory correctly, but structured it in a way that immediately implies that he knows it, understands it, and knows how to apply it; that it's not just something that he read, but he actually lives and breathes it, has his own interpretation and best practices, and leaves me with confidence that he knows how to evaluate an ML model.

    2. AG

That's exactly what I was trying to do: I was trying not to give a textbook ChatGPT or Claude response. I think that's the baseline. Yes, you can get a 7 out of 10, but your entire competition group is using that too. And so the two things I tried to do in this response, number one, reference, "Hey, these are my gurus. This is my philosophy on the topic," Hamel Husain and Shreya Shankar, and even mention that I know this is important because Kevin Weil at OpenAI has talked about it. So I've brought in some company-specific context, and then I've brought in some person-specific context. I've tried to make it a response that ChatGPT simply couldn't create. It's something that is demonstrating my own knowledge. And then I also didn't have a canned response. A lot of you guys are gonna take the list of 80 questions, you're gonna practice, and then you're gonna try to use, you know, your practiced response or AI to help you in the interview. That's a huge mistake. I'm not using any AI. I did take a minute to write down my thoughts, and I think that's very effective. It actually shows that you're thoughtful about it, and it also helps when my eyes go off the camera, right? You wanna be able to say, "Oh, I'm looking over at my notes."

    3. BJ

      Hmm.

    4. AG

      You don't want them to think, "Oh, he's looking over at his AI," right? These are the notes that you saw me create live.

    5. BJ

      Mm-hmm.

    6. AG

And then I tried to keep it concise. I've mock interviewed people who give me a six- or seven-minute [laughs] response to this.

    7. BJ

      Yes.

    8. AG

      Mine was less than two minutes.

    9. BJ

      Perfect. All right. So

  10. 25:55 – 34:41

    ML Team Conflict Question

    1. BJ

for the third category, ML team collaboration. Tell me about a situation when you had a conflict with your AI team, and how did you resolve it?

    2. AG

      [lip smack] Oh, wow. I guess I've had a couple. Can I take a second to choose the right story?

    3. BJ

      Take your time.

    4. AG

      I'm just thinking through what would be a fun one to talk about.

    5. BJ

      It's already promising if you have to decide which one of, is the fun one.

    6. AG

Also, I just wanna talk about something we haven't talked about yet. So I'm just thinking about all the details here. Okay. So I actually came up with an interesting story. We haven't talked much about my time at ThredUp yet, and I did reference that in my tell me about yourself, so I wanted to make sure that you could hear about what AI apps I built over there. One of the most significant projects I got to work on at ThredUp, and all credit to James Reinhart, our CEO, who said, "Invest in this," because I wouldn't have gotten the AI and ML engineer resources otherwise. I was the growth product lead. I wasn't actually leading these engineers originally, but he said, "Aakash is the right person to work on this project because Aakash is the one building all the systems to customize the experience for our new visitors. And so Aakash, I want you to build out our new pricing system." So what was the conflict with the pricing system? The big conflict, and believe it or not, this dates me. This is back in 2015, 2016. Should we use person-level information? That was the conflict. [laughs]

    7. BJ

      Ooh.

    8. AG

      AI engineering team says, "No, Aakash, we cannot do that." [laughs] Me, I'm saying, "We have to do that." And that's why I wanted to give you the context of the growth product team that I was leading. So basically, the reason James put me on this project is because I had just shipped a massively successful project using person-level information to customize what items, what homepage we were showing to people. So at the time, like in 2014, 2015, Amy, you would know this 'cause you headed up product at Meta, everyone was spending on Facebook Ads. We were spending, I don't know, I think like $10 million a month [laughs] on Facebook Ads-

    9. BJ

      Ooh

    10. AG

      ... bringing in, at the time, I think like less than 50 cents a visit, so like 20 million visitors to the site. Big number of visitors, right? And if you think about what we were, we were an online thrift shop, so we had 100,000 brands. Some people are shopping Anthropologie. They're buying $100 shirts. Other people are shopping Old Navy, and they're buying $1 shirts. So there's a huge difference in what an Anthropologie consumer wants to see on our homepage versus an Old Navy consumer: the types of styles, whether there are kids or it's just her, all of those things. And so we had built this huge system that was successful, so I had the conviction around it. The AI engineering team was more worried that people are gonna feel it's creepy, we're gonna run up against the law, it's not gonna be ethical, right? So these three concerns. So what I had to do was figure out, one by one, who really held which concern on the team. I had to take a person-level approach. So you had seven engineers on this team and one designer who was functioning like an engineer because she was so involved, so eight people on this team. For each of these eight people, I needed to figure out, "Hey, are you in the creepy camp? Are you in the legal camp? Or are you in this ethics camp?" And then for each of them, taking a person-by-person approach, not just calling a group meeting and saying, "Guys, this is why your three things against this are bunk," but actually trying to sit with them and say, "Hey, what's the problem with the creepiness? Let's document it." And then me myself facilitating the conversation with James, the CEO, and our COO, the head of operations, who was the other really important C-suite executive for pricing, and saying, "Hey, what do we think about the creepiness?" And getting the CEO and the COO to really say, "Nope, we don't care about the creepiness. We need to build it in a way that it's not creepy," that influenced the AI engineers more than any evidence I could provide. Then on the legal concern, a somewhat similar approach, except not using the C-suite but using our legal team. Me, again, steel-manning the legal argument, and then getting the legal team to tear it down. So I wasn't the one tearing it down. They were the ones tearing it down. My engineers felt like I was on their team. [laughs]

    11. BJ

      Mm-hmm.

    12. AG

      And then the final group, the group that was like, "This is not ethical," now this was a harder one, and I didn't rely on any category of executives or groups. I actually did this more in team meetings, where once we'd conquered the other two issues, I had us have discussions ourselves around the ethics of this. We said, "Okay, well, probably what the AI/ML is gonna do is it's gonna figure out, hey, you're richer, and it's gonna charge you more, and it's gonna figure out, hey, you have less money, and it's gonna charge you less. Is that unethical?" [laughs] We thought about this. We looked at other retailers, and I got the team to realize that actually we're gonna be charging people with less money less, [laughs] and so the ethics of this could actually be quite good. And by doing that, by taking this different approach to the different blockers, sort of an individualistic approach, I was able to get everybody on board, not just to build this system, but this was like a year-long project. Like, we shipped this, then we shipped iteration after iteration, and the whole team stayed on the project, and it was awesome. So the metric we cared about most was first visit to purchase conversion rate, right? 'Cause we're paying $10 million for these people to come in, let's make sure that they make a purchase from us. And we were able to increase that on a relative basis by nearly 15%, and so it was one of the best improvements we had in my time there.

    13. BJ

      Well, for one thing, it sounds more ethical than what PlayStation is doing now, where they will charge you more if you keep spending on the PlayStation Store, and I guess they would have gotten away with it if it wasn't for those pesky kids who leaked the [laughs] information. So kudos to that. And again, I really liked how you put me in the middle of the story with the right context, with everything I needed to know, and how you basically acted as a whip to get everyone's concerns approached and addressed. I really liked it, though it sort of goes over my next question about ethics and safety, but let me rephrase that question so it matches the theme. But tell me, what should the viewers spot in your reply?

    14. AG

      What I was hoping to do in this reply was very clearly show what the conflict is, because when I mock-interview people, sometimes it's not clear what the conflict is. So if you saw the weighting of my answer, I gave a lot of weight to helping Amy or Bart really understand the conflict, the two sides we sat on, why we sat on those sides, who the conflicts were with, and to make it seem like a legit conflict. Too many people give me a fake conflict, so make it seem like a legit conflict, number one. Number two, tie it back to huge impact. Sometimes people will just say, "Ah, I resolved the conflict," and that's it. Where did I end? I ended on a high note: "Hey, we increased relative first visit to purchase conversion rate by 15%," one of the best things we had done for that metric, right? And so you wanna end on a high note after telling the story in so much detail. And finally, I tried to show that I used multiple methods, so it's not just like, "Oh, the C-suite backed me." You know, that's, like, a terrible response. Or just like-

    15. BJ

      [laughs]

    16. AG

      ... "Oh, I overrode them and pushed through it," or, "I listened to them," right? I actually showed how I used three different techniques. That's what I was thinking about, actually, when I was choosing my story. I considered a story from Epic, and I was like, "Well, I kinda just used the design director to go over them," or, [laughs] "Oh, well, I kinda just let their concerns fade over time," or, "I kinda just-"

    17. BJ

      [laughs]

    18. AG

      ... Well, but here's an example where I did a lot of politicking, and I can talk about the politicking. So it's about understanding what they're looking for and thinking, "Well, how am I gonna differentiate myself vis-à-vis others?" I'll differentiate myself by showing multiple ways I can handle conflict.

    19. BJ

      Not to mention that the resolution was not shallow. It was ongoing, and the team kept on working on that product for a year, and you even specifically said that no one left. So if it was like, "Okay, okay, do it, but I'm reluctant and I don't like it," no, it was deep. It was an actual flip, so that it appears the AI engineers were on board at the end. So let's move on to the next category. As I mentioned, it's pretty

  11. 34:41 – 41:30

    AI Ethics and Safety Question

    1. BJ

      similar to what you've already said. So let's keep going in the area of ethics and safety. Tell me about advocating for AI safety versus shipping pressure.

    2. AG

      Okay. Well, I guess this does come up a lot. Let me take [laughs] another second. I know I'm always taking time, but I just wanna bring you a story that really helps highlight it, so...

    3. BJ

      You've proven that it's worth the wait.

    4. AG

      [laughs] Thank you. Oh, this is a juicy story. Okay, so we were just talking about ThredUp, and you just drew the parallel to ThredUp, so let's continue with ThredUp. I told you about the first three months of the conflict, right?

    5. BJ

      Mm-hmm.

    6. AG

      Let's fast-forward three months to month six. ThredUp acquired a company in Europe. Dun, dun, dun, right? [laughs]

    7. BJ

      [laughs]

    8. AG

      Europe. Uh-oh, regulations. Uh-oh, more concern about ethics. [laughs] And both of those things-

    9. BJ

      Mm-hmm

    10. AG

      ... came up in reality, and I'm sure you at OpenAI are facing these too, as you guys are global. The two concerns, right? Let me unpack each of them. Concern number one: an AI engineer on my team uncovered that we were surfacing racial bias, essentially, in our pricing algorithm, because certain zip codes with a higher percentage of white people were seeing higher prices, and certain zip codes with a higher percentage of people of color were seeing lower prices. And this could be a big problem, both for the ethics of what we were trying to do but, more importantly, also for the regulations. And the EU has some very specific regulations around this. I don't remember the exact details, but this was pre-GDPR. It was something they had around, like, a racial discrimination act, and it actually varied by country. I could pull up the details if I really thought about it, but the high level is that there was both the ethical concern and the legal concern around racial discrimination. Touchy subject, even for me to bring up in an interview, I know. But we had to deal with this issue. And so for me, the first thing was, I don't wanna become the PM who's just pushing aside these ethical and regulatory considerations. I'd already done that, effectively, six months ago, and so I already knew how I could be negatively perceived. And so I wanted to be the most upstanding for AI safety and ethics. And there was that additional wrinkle that we had acquired this European company, so they hadn't seen my baggage from six months ago around standing up for AI ethics and safety. So what I did is I said, "You know what? We're gonna have to delay this feature." [laughs] I told everybody, "You know what? This feature is on pause. It's on hold until the CEO of the acquirer, the legal team of the company we acquired, the new AI engineer we had gotten from them, and the preexisting AI engineer on my team who had surfaced the concern are all happy." And so the first thing I did was accept the delay. It was a delay like, who knows if we'll ever even bring this up again. But I kept it in the back of my mind. I said, "You know what? Bringing out this pricing system, a 15% improvement in first visit to purchase, is really gonna help us in Europe as well, so we need to do it eventually." So when the planning came up for the next quarter, for all of those people, I brought it up in exactly this way: "Hey, look, the data from America shows a 15% visit to purchase improvement, right? This is awesome, right? We want that, right?" And they were all like, "Yeah, yeah, we want that." And I was like, "But," and then I would voice their concern for them, the ethics and legal concern. And so the AI engineers actually came up with an ingenious solution to solve the racial bias problem, and they reworked the system so that we weren't doing that. They made it so specific, so person-specific, that we solved most of the racial bias issue in those zip codes, those countries, those areas. And once we solved it, because I stood for AI ethics and safety and was willing to almost shelve the feature if needed, they themselves were then going to the legal team and pinging the legal team even without me, which was amazing, and saying, "Hey, look, we solved this issue. Now is it legal? Can we do it?" [laughs]

    11. BJ

      [laughs]

    12. AG

      And so I think for me, taking a stand for AI ethics and safety but surfacing the data, planting it, kind of like Inception, like incepting the idea, helped push it forward. Ultimately, we shipped it, and, you know, it was a quarter late.

    13. BJ

      [laughs]

    14. AG

      The CEO wasn't so happy about that. I ate the blame on that. Frankly, I just told him, "Yeah, I basically delayed it." I didn't tell him the whole backstory, but he appreciated it. And when it came around in the 360 feedback system, we used 15Five over there, everyone said, "I loved how Aakash was, like, on our side for AI ethics and safety," and that ultimately led to me getting promoted. And so it was a virtuous cycle in the end.

    15. BJ

      So I love how you turned what could have been a very boring, technical, legal drama that no one cared for into your showstopper, turning what could have been a bump in the road into an actual success and an opportunity to get promoted. So kudos on that. And that's what you really want to do. If you are ever asked about your weakness, your failure, your struggle, you really want to turn that around and tell them what you've learned, what you've achieved, and how you made a bad situation great for you, the product, and the company. So choose your stories wisely, just like Aakash.

    16. AG

      What I was trying to do in this response, and what I think everybody should try to do, is, again... What I'm always thinking about is that I don't want the interviewer to feel like my responses are canned, highly practiced. So actually, I chose a brand-new story that I had never thought of prior to this recording. [laughs]

    17. BJ

      [laughs]

    18. AG

      But I was just riffing off the last question we had. I was trying to make it a conversation, right? And I took a little bit of time once I had chosen the story to figure out what details do I want to surface. How am I gonna surface relevant details to Bart in a concise storytelling sort of manner? And so that's when I wrote down for myself, okay, I probably need to mention that I got promoted because of this. [laughs]

    19. BJ

      [laughs]

    20. AG

      I probably need to mention that it was a delay, right? So it's a two-sided story in that I stood for AI ethics and safety. One thing that's very, very, very important for these companies and roles is they do not want the PM who is bulldozing AI ethics and safety. They actually want the PM who is making the righteous stand for these things, and so I tried to portray that.

    21. BJ

      Okay. For this final category,

  12. 41:30 – 50:13

    AI Strategy Question

    1. BJ

      tell me about an AI product strategy that you've created.

    2. AG

      Oh, this will be a fun one. Okay. I need to take a second just to make sure I choose the right story and I'm not repeating too much.

    3. BJ

      And viewers, it's for your own benefit that we acknowledge that some of the stories overlap with a few categories, and we want to present you with unique takes on each of them. But when you can skillfully merge a few of the questions, a few types at the same time, to build your case, to build your visible skill set and knowledge, it's to your own benefit. You are not there to talk for 60 minutes. You are there to prove that you are a great match to be an AI product manager, and if you can do it faster, in less time, kudos to you. They don't need to ask you all the questions they have on the sheet. They just need to know if you're the guy or the gal or not. And by all means, do your best. Try to highlight a different PM and AI facet in every story. And if you can do that effectively, you're just presenting yourself as a better candidate. In fact, my first PM interview for Skype was supposed to last an hour, and it only lasted 25 minutes because I managed to hit all the check marks immediately, so there was no need to extend the interview. So don't worry that we are pushing ourselves through more categories and there is an overlap. It's for your benefit, and if you do that, you will be golden in your interview. But let's see what Aakash did prepare for me in terms of AI strategy that he created. Over to you.

    4. AG

      So we talked a little bit about the AI email writer in the context of how I'd evaluate an ML model, but I never got to tell you about the strategy context around that. And actually, I was pretty involved in building this strategy, and it's-

    5. BJ

      Mm-hmm

    6. AG

      ... one of the strategies I'm most proud of, so let's talk about that. So to give you a little bit of recap, right, we said Apollo.io went from a $750 million to a $2.5 billion SaaS. What was the SaaS doing? It was a sales engagement tool. What does that mean? It's helping sales teams, and it wanted to be basically the main software that a sales team lives in outside of, of course, Salesforce, because everybody is using Salesforce. So what do people use Salesforce for? Of course, Amy, you'd understand this, right? It's the system of record. So what do they need on top of that? Well, Apollo had built an amazing engine before I came, I can't take any credit for it, around its contact database. So we had one of the world's best contact databases to get the emails and phone numbers of the people in your accounts that you should be reaching out to. So we had built this, but when I looked into the data on this feature, I found that it was very much a one-time type of feature. It wasn't a recurring feature. And actually, this was one of the insights that got me hired. [laughs] This is-

    7. BJ

      [laughs]

    8. AG

      I had written a product strategy in the interview process and shared it with the CPO and the CEO in the interview, and one of the things was around engagement. So when it came around to planning a year later, I said, "All right. It's time to really triple down on engagement." ChatGPT, you guys released it at the end of 2022. You guys changed the world, right? I know you were still at Meta at the time, but the team that you work with and lead now, they changed the world, and so we knew we needed to use it. So my critical insight pushing the team was, "Hey, let's not just focus on the contact database. We need to improve retention." The thing with Apollo was that Apollo, like everybody else, started at the bottom of the market. You know what's the problem with SaaS for small and medium-sized businesses? You know this, of course. They churn, right? They don't retain well. You need to improve the retention. So we needed to take them from contact database, sometimes people would download 21,000 contacts from us and then never use us again, to the actual tool they use to actually make contact. And at the time, there were tools like SalesLoft and Outreach, which were better established, and so the typical stack people would use was Salesforce plus Apollo plus Outreach. And so I said, "How do we replace Outreach? How do we make it Salesforce plus Apollo plus Apollo?" And we realized that AI could be the big wedge. I looked at the data, and I helped us come up with three major vectors for our AI strategy. And I'm just looking at my notes because I'd taken some notes on this. One was the email writer. The second was email warm-up, and the third was email responses. So email writer, I already told you a little bit of that story. Suffice it to say the email writer was a big success. Email warm-up, we had some huge fits and starts. We actually had to turn off warm-up for a while. It was a bit too early.
      Some of the emails were a bit too ridiculous. But as your models evolved at OpenAI, and as we improved our fine-tuning and our RAG pipelines to make sure that the emails weren't hallucinating and were fine-tuned for good emails, we relaunched email warm-up eventually, and that led to a massive uptake of our engagement tool. I mean massive, because the problem people faced before with using Apollo versus a SalesLoft or an Outreach is they would create a new email address for their salesperson, and the person would tank their own deliverability because they'd send a bunch of bad emails that would get marked as spam, and then the email providers would stop delivering their emails. With the email warm-up tool, once we fine-tuned the models enough, we were able to build back-and-forth email conversations that would essentially create credibility for an email address before it was ever used. Paired with that email writer I talked about earlier, this led to a huge increase in adoption of our engagement products on top of our contact products. And as that adoption increased, I don't remember the exact metrics, it was something like 10 to 20%, the retention increased. And we knew this mechanistically because, as I had done in building the strategy for it, we had shown that if we get people into higher-retention products, they retain better ultimately. So the end result, and they're still working on email responses, the third leg of this strategy, because it was really a two-year strategy, was a huge improvement in gross revenue retention. I can't share the exact metrics, but suffice it to say that-

    9. BJ

      Mm-hmm

    10. AG

      ... it was part of the reason they scored that $2.5 billion valuation. So my role in the strategy was really pushing the team to think beyond the initial product they had, to understand the new avenues, those three avenues that AI enabled, and then help drive the execution.

    11. BJ

      Perfect. I love how it was not a strategy to introduce AI into anything per se, but to solve a problem. Again, as an AI PM, you need to have those strong PM foundations and that focus in order not to fall into the trap of putting in AI for AI's sake. And in the era of AI, many solutions that were not possible in the past are now up for grabs. So it's all about choosing the right problems to solve, finding the right strategy, and using AI when it's called for. And it's great in Aakash's answer that AI was a clear solution to the problem presented, while we also went through the journey of improving the MVP, knowing its issues, and polishing it with the releases of newer and newer models that allowed the feature to flourish. I've seen tons of AI projects that were released, didn't meet the quality or eval standards, and were essentially shelved and forgotten as a failed experiment, rather than doing what your strategy and your team did: reevaluating every time the technology moves forward, and it moves forward at incredible rates. So stick to your guns. Have a strategy to solve a particular problem, and AI just might be the thing that will crack it and then let you land that AI PM job. Wouldn't you agree?

    12. AG

      I think so, and what I tried to do in this response was, number one, show what my role was in contributing to the strategy. So little anecdotes around how I built this in my interview, how I followed up, how I worked with the team. Number two, showing that it's a real strategy. Some people show a strategy that influences three months of work. I tried to show this influenced years of work. The final leg of this strategy is still ongoing, even though we've made incremental progress. And then I tied it back to metrics. I didn't remember the metrics off the top of my head, so I just gave a rough range, but it still helped people say, "Okay, he's focused on driving the actual business impact."

  13. 50:13 – 52:05

    Six Overriding Interview Skills

    1. BJ

      All right. Those were all the categories we had for today, but we are definitely not done. So tell me, what are the overriding skills for behavioral interviews that AI PMs should internalize from all those categories we've just talked about?

    2. AG

      So number one, see how specific I was? Real architecture, real outcomes, real stories. Number two, you see how I was connecting technical to business? A lot of times I see people not have that ending where I connect it back. Number three, you see how I showed iteration? I said, "Oh, we even paused the email warm-up for a while." Experimentation and failure are expected, and if your stories don't have that, people are gonna think they're fabricated. Number four, I demonstrated collaboration. I talked about how I worked with the CEO and the CPO. In every response, I mentioned the ML engineers or the designers. Nothing is done alone as a PM. Number five, include ongoing operations. So I talked about how they're still building it. I talked about how they're still working on this, how this is still happening. And then number six, use STAR-M. So what is that? Situation, task, action, result, metrics. And why do we have that M at the end, metrics? Well, you noticed in all my responses, I always ended on metrics. That's because the PMs who are getting hired today, they have to drive impact. Now, Boris Cherny said most software engineers are gonna start to get the title product builder or manager. Most designers are gonna start to become product managers. How do PMs differentiate as PMs anymore? [laughs] A lot of it is metrics, and that's why you need to have that business orientation, not just that user orientation.

    3. BJ

      All right. So where do we go from here?

    4. AG

      These are the top interview questions, but subscribe to my newsletter and

  14. 52:05 – 54:17

    Land PM Job Program

    1. AG

      you'll get constant articles updating you on everything. In this article I have here, I even cover the top interview questions across categories that you need to know. So be sure to check out my newsletter. And finally, be sure to apply to our Land PM Job program that Bart and I run. We help people land these jobs. We have helped people land jobs at OpenAI, Anthropic, Google, Meta, you name it. We are gonna walk you through building a LinkedIn, building a resume, building a GitHub, building a portfolio. We're gonna give you custom reviews on all of those. Bart has 200,000 LinkedIn followers. I have 300,000 LinkedIn followers. We're gonna help you write LinkedIn posts to get jobs. This is the kind of access and instruction that there is simply no other program out there providing, because there's no other program with people with this many followers or this much experience and this much insight into the PM job market. So be sure to subscribe to my newsletter and apply to the Land PM Job program. The third cohort starts in May. We have seen amazing success with our first and second cohorts. In our first cohort, one-third of people got a job before the cohort finished. In our second cohort, a couple of people have already gotten jobs. If you want to land a job that pays you more money, the ROI is a no-brainer.

    2. BJ

      And it's not only that you take classes with us. We are available for several one-on-ones with you, where we can discuss your specific difficulties. I'm available, and so are our other senior instructors who specialize in interviews, in resumes, in all the aspects that make you successful as a job seeker. So it's not just come to class and do the homework. It's about you: getting you the job, getting you at least two interviews. And trust us, we know how to do it. If you're with us, we'll make sure that you'll be as successful as possible.

    3. AG

      We are guaranteeing you two PM interviews, guys. There's nobody else out there on the market guaranteeing you these interviews. So check out the program. We would have made a 50-question mock interview video, but that would just be too long. So you'll have to join the program for that. And until then, we'll see you next time.

    4. BJ

      See you next time. Bye-bye.

Episode duration: 54:27


Transcript of episode vPQCsAxWJ70
