Aakash Gupta
Stop Applying to AI PM Jobs Until You Watch This Safety & Ethics Mock
EVERY SPOKEN WORD
30 min read · 6,463 words
- 0:00 → 1:36
The safety round most AI PM candidates underrate
- AGAakash Gupta
I think people are really underrating the safety and ethics interview for AI PM
- AVAnkit Virmani
The VP of product now says that we can't pull it because we have earnings next week. What would you do?
- AGAakash Gupta
The question for the VP isn't necessarily whether we can afford to act before earnings. It's actually if we can afford to have this headline
- PRPrasad Reddy
Candidates with 20 years of experience freeze on these questions because they have never had to formalize their safety reasoning.
- AGAakash Gupta
Ankit is an AI PM at Uber. Prasad Reddy is a former chief product officer at L-Nutra and VP at Danaher. And Dr. Bart has helped coach over 12,000 people for companies like Amazon, Microsoft, and Zalando. These guys, if you think about it historically, where they came from, was they left OpenAI because of safety concerns.
- BJDr. Bart Jaworski
By a margin of error, I think Prasad did the best. I think you all would have gotten the job.
- AGAakash Gupta
Prasad, you won. Do you agree? [laughs]
- PRPrasad Reddy
Yeah. Fantastic. Thank you, Bart. [laughs]
- PRPrasad Reddy
If you had to ask one question to every AI PM candidate, what would that be?
- AGAakash Gupta
Before we go any further, do me a favor and check that you are subscribed on YouTube and following on Apple and Spotify podcasts. And if you wanna get access to amazing AI tools, check out my bundle, where if you become an annual subscriber to my newsletter, you get a full year free of the paid plans of Mobbin, Arize, Relay App, Dovetail, Linear, Magic Patterns, Deepsky, Reforge Build, Descript, and Speechify. So be sure to check that out at bundle.aakashg.com. And now into today's episode.
- 1:36 → 3:39
Why senior candidates freeze on safety questions
- AGAakash Gupta
I think people are really underrating the AI PM safety and ethics round. That's why the four of us have gotten together to give you real mock questions, a framework that you can use, plus a little bit of intel on what the inside actually looks like for people interviewing. I wanna start, Ankit, with you. You've actually been on the interviewer side at Meta. You've completed an AI job search yourself. What happens when people don't prepare for the safety and ethics round?
- AVAnkit Virmani
That's a great question, Aakash. At Meta, safety thinking was very clearly embedded in product sense itself. So if you designed an AI feature and it did not proactively address harm scenarios and mitigations, you got dinged on product sense, not on some separate safety checkbox. What most candidates don't realize is that they are being evaluated on safety throughout their entire interview process. At Uber and other companies, the stakes are even more physical and real, so real rides, real people.
- AGAakash Gupta
Prasad, you've coached a lot of PMs. You've led a lot of these product orgs. What do you see from the other side when companies are evaluating CPO and VP-level candidates around ethics and safety?
- PRPrasad Reddy
Yeah. Thanks, Aakash. Whether it's at a CPO level or a VP level, or even at a product manager or senior product manager level, especially given healthcare, safety and ethics are non-negotiable. And if you're really talking about it at a CPO and a VP level, safety thinking is table stakes. If a CPO candidate, or any C-level candidate, can't articulate how they would handle a product liability scenario with board-level implications, the interview is just over. Candidates with 20 years of experience freeze on these questions because they have never had to formalize their safety reasoning.
- AVAnkit Virmani
And this is for Aakash, right? So Aakash, you've coached more than 200 candidates on safety questions.
- 3:39 → 6:34
The SHIR framework: severity, harm scope, immediacy, reversibility
- AVAnkit Virmani
If someone has just 10 minutes before their interview, is there a framework that you could suggest that they would use?
- AGAakash Gupta
So here's what you wanna remember: SHIR, severity, harm scope, immediacy, and reversibility. And I like this because those words basically tell you exactly what you're looking at. So severity, how bad is the worst case? As Prasad said, medical misinformation, that's an absolute no-no. That's a lot worse than a chatbot that is a little rude. Harm scope, how many people? A bug hitting 10 users versus 10 million is gonna be totally different. Immediacy, is this hurting people right now, or is it more of a latent risk? And then reversibility, can you undo it? A bad recommendation, reversible. A leaked data set, not so much. So a lot of times, what I'll do if I get a safety and ethics question is I'll say, "Can I have, like, 30 seconds to just gather my thoughts?" And then I'll run my story through SHIR to think about, "Okay, these are some of the details that I wanna pull out." Maybe I wanna talk through, really fast, the severity, the harm scope, the immediacy, and the reversibility. So then I would say something like, "Okay, this is hitting 5 million people. It's hitting them right now, but it is reversible." You don't have to crowd your answer by speaking to each of these at length if it's a short behavioral question, but if it is a big interview case, you can actually speak to each of these 10 minutes at a time.
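To make the checklist concrete, here is a minimal sketch of SHIR as a data structure in Python; the field names mirror the four factors, and the example values are illustrative, not scores given anywhere in the episode.

```python
# Illustrative sketch of the SHIR checklist described above.
# The example values are hypothetical, not scores from the episode.
from dataclasses import dataclass

@dataclass
class SHIRAssessment:
    severity: str       # how bad is the worst case?
    harm_scope: str     # how many people are affected?
    immediacy: str      # is harm happening now, or is it a latent risk?
    reversibility: str  # can the harm be undone?

    def summary(self) -> str:
        return (f"Severity: {self.severity}; scope: {self.harm_scope}; "
                f"immediacy: {self.immediacy}; reversibility: {self.reversibility}")

# Example: the quick out-loud sizing Aakash describes, captured as data.
chatbot_risk = SHIRAssessment(
    severity="critical - medical misinformation can cause physical harm",
    harm_scope="up to 5 million users hit",
    immediacy="happening right now",
    reversibility="reversible once a guardrail ships",
)
print(chatbot_risk.summary())
```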
- PRPrasad Reddy
That's, that's a great answer, um, Aakash. I would add one thing, though. At the executive level, the candidates who score highest are the ones who also size the business impact of each option. The SHIR framework gets you the risk assessment, but the interviewer also wants to see that you can quantify what each path costs. The pull costs $50M, the guardrails cost $200K and two weeks. The full retrain costs $2M and three months. When you lay those side by side against the severity, the decision becomes very obvious.
- AVAnkit Virmani
Yeah, that's, um, very correct. Uh, even at Meta, the candidates who scored the highest were the ones who very clearly sized the problem before solving it.
- AGAakash Gupta
So if you want to go deeper on this framework, if you want the full question bank and you want an AI grader to help you through it, check out my newsletter deep dive that accompanies this video. The link is in the description. Now that you have the framework, we are gonna get into a real mock interview so you can see what it looks like under pressure. So we're gonna do four interview rounds, and we're gonna rate each other across this rubric that's on the screen. Ankit is gonna interview me on a safety dilemma, I'll interview Prasad on a bias question, Prasad is gonna interview me on agent safety, then I will interview Ankit on a behavioral. After each answer, Bart will give out coaching notes. Let's see who practices what
- 6:34 → 11:06
Mock 1: Medical chatbot contradicting clinical guidelines
- AGAakash Gupta
they preach.
- AVAnkit Virmani
Very cool. Let's get started, Aakash. So you're the PM for a consumer AI chatbot, and, uh, your team discovers that the model occasionally generates medical advice that contradicts clinical guidelines, right? Pulling this feature would impact ten million monthly active users and fifty million in annual revenue. Can you please walk me through how you would handle the situation?
- AGAakash Gupta
First thing I wanna do is size this. So let's just talk through the sizing. The severity is pretty critical because medical misinformation can cause real physical harm. The harm scope is probably ten million monthly active users, but I probably need to know what percentage are actually making medical queries and what percentage of those contradict the guidelines. So I want that data soon. I'm not gonna pull the whole feature, I reckon. I think I'd ship a guardrail this week. Any response the model classifies as medical gets a disclaimer with a link to verified sources. That will preserve the experience for most users while we contain the risk. Then, parallel to that, I would want to audit the last thirty days of medical-adjacent queries to quantify the actual harm. If more than a threshold are contradicting guidelines, I wanna escalate that to a full content filter on medical topics until the model is retrained and the research team comes back to us. And then I would work with legal the same day because of the liability exposure. So this is really a safety decision, not a feature decision, and so I wouldn't wanna wait any longer.
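As a rough sketch of the guardrail-plus-audit plan described here, assuming a hypothetical medical classifier supplies the is_medical flag and using an assumed escalation threshold (neither is specified in the episode):

```python
# Sketch of the disclaimer guardrail and 30-day audit escalation logic.
# The disclaimer text, URL, and threshold are placeholder assumptions.

DISCLAIMER = ("This is not medical advice. Please consult a clinician "
              "or see verified sources: https://example.org/health")
CONTRADICTION_THRESHOLD = 0.02  # assumed escalation threshold (2% of medical answers)

def guard_response(response_text: str, is_medical: bool) -> str:
    """Append a disclaimer and source link to anything classified as medical."""
    if is_medical:
        return f"{response_text}\n\n{DISCLAIMER}"
    return response_text

def should_escalate(audited_medical_answers: int, contradicting_answers: int) -> bool:
    """After auditing the last 30 days, escalate to a full medical content
    filter if the contradiction rate crosses the threshold."""
    if audited_medical_answers == 0:
        return False
    return contradicting_answers / audited_medical_answers > CONTRADICTION_THRESHOLD
```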
- BJDr. Bart Jaworski
All right, so Aakash sized it. Everything was great, that's for sure. Most candidates would jump immediately to pulling it or canceling it, rather than trying to fix it without putting strain on the users or the feature, or putting a business risk on the entire business. This solution allows the best of both worlds. The business is secured, the safety is introduced, and we can go on and figure out how to make it better. And it's very important, especially when it comes to medical queries, both in legal terms and for, well, the medical conditions of our users.
- AVAnkit Virmani
Thank you, Bart. That's great feedback. Um, I have a follow-up question for you, Aakash. The VP of product now says that we can't pull it because we have earnings next week. What would you do?
- AGAakash Gupta
Hmm. Okay, so let's reframe this a little bit. The question for the VP isn't necessarily whether we can afford to act before earnings. It's actually whether we can afford to have this headline: that we knew our AI was giving dangerous medical advice and continued to allow it to do so. That's probably not just a "did we hit fifty million dollars this quarter" problem. That's like a five billion dollar brand problem. So if we reframe it properly, and then we say, "Hey, we're not asking for a full pull, we're actually asking for a guardrail," that will preserve the product for a lot of use cases. So it might not actually even be the fifty million dollar impact he's thinking about. It might be that ninety percent of queries are actually still going through, and it's just a five million dollar impact. So I'd try to impact-size it for him. If the VP still says no, I would probably document [chuckles] what I was saying and send it over to the safety team, so at least it's on the record that this was my documented approach. But we have to be team players, and whatever decision is made, we can agree to disagree.
- BJDr. Bart Jaworski
I really like it. The reframe from revenue to headline risk is very strong. It's also about keeping the facts straight. You can only oppose the higher authority figure for so long. So if you have, let's say, the paperwork to show that the risks were presented and accepted, and we move forward as agreed, then it's a team activity. That can happen in many companies, especially when the higher authority figure has additional pressures, and maybe that stress doesn't allow this person to see clearly the bigger risks. So yeah, I really liked it, and I hope that whatever manager would hear that would actually go with Aakash's recommendation.
- AGAakash Gupta
I'm glad you liked it, because I feel like I could've done better. With these interviews, there's always something. So all right, Prasad, you're in the hot seat. You've been CPO at companies where product liability was real. So here's the
- 11:06 → 16:50
Mock 2: Hiring tool with a 15% demographic gap
- AGAakash Gupta
question for you. Your AI-powered hiring tool is showing a fifteen percent lower recommendation rate for candidates from certain demographic backgrounds. Engineering says it's a data problem, not a model problem. You have a board presentation in two weeks. What's your plan?
- PRPrasad Reddy
Thanks, Aakash. Great question. You know, the way I would look at it, it's not about a data problem or a model problem. Whether it's a data problem or a model problem, the outcome is the same. The discussion should not be about what kind of problem it is, but about how we are gonna solve it. We are screening out qualified candidates from certain backgrounds. That's a liability under EEOC guidelines, and it's the kind of thing that becomes a class action. The immediate action is that I am pausing auto-reject for the affected segment. Human recruiters still see the recommendations, but no automated rejections until we complete an audit. For the board, I would like to be transparent, especially given this could potentially become a liability. I am presenting this head-on. We found this. Here's our response. Here's the timeline. Because boards respect transparency. Yes, these are hard conversations. Otherwise, boards punish surprises that come out later. We had a similar situation in the past when I was leading product at Danaher for the entire diagnostics business. We had to make a hard choice because we saw that diagnoses were being done incorrectly, so we ended up pulling the product, or taking out certain features of the product, and sent everything to a human in the loop until we actually fixed that issue.
- BJDr. Bart Jaworski
All righty. It's very important to approach this as a product manager. Sticking to the engineering angle and trying to decide what the source of the problem is, that's not the product way. It's a great thing that Prasad decided to switch to what we are going to do about it, and transparency is always a very important part of it, where the communication allows all the relevant parties to understand where you are and what you need to do in order to get things straight. So really, really liked it.
- AGAakash Gupta
So I wanna follow up on that, Prasad. What if the CEO says, "Our competitors don't do this level of testing, and we're moving too slow"? How would you react?
- PRPrasad Reddy
Fantastic. [laughs] You know, this always comes up, right? "Yeah, other companies are not doing this. Why are we bothered about it?" Again, this is a short-term versus long-term question. You know, competitors are looking at a short-term hit. We, as an organization and a company, are always looking at a competitive advantage and taking a more long-term approach. The EEOC has been filing AI discrimination cases since 2023, and the volume keeps going up. Enterprise customers increasingly require bias audits in vendor selection. The audit adds 10 days. A class action costs years and hundreds of millions of dollars.
- BJDr. Bart Jaworski
And that's how we should be framing it, dear viewers. Safety is not about slowing things down or making things harder. It's about, for one thing, keeping the business safe, and also having the competitive advantage of the moral high ground. Imagine Facebook being a morality-first company. How different would it be if they prioritized safety and truth over wall impressions? It's good to put that up front as a business advantage, not anything else.
- AGAakash Gupta
I like that, Prasad. [laughs] I think you're gonna score higher than me, but we'll have to wait and see for Bart's reveal. So it's your turn, Prasad, to interview. This one's about a question that we see a lot of our mentees really fail.
- PRPrasad Reddy
Yeah. Thanks, Aakash, and Bart, I'm really looking forward to the scoring, okay?
- AGAakash Gupta
Are you looking to land your next product management job? I am accepting a group of just 30 product managers into a 12-week cohort led by me, where every Monday for 90 minutes, I help you through your job search, creating your candidate market fit, updating your LinkedIn, updating your base resume. You're gonna get personalized feedback and one-on-one mentorship sessions with my co-teachers, Ankit Virmani, who is an AI PM at Atlassian and was a group product manager at Meta; Prasad Reddy, who is a CPO and has been in product for over 26 years; as well as my other live instructor, Bart Jaworski, who's gonna run another 90-minute session per week, where we really help you deliver on all of the deliverables in an actionable way and get you custom resume feedback, custom LinkedIn feedback. This program worked extremely well in cohort number one, which is just finishing up. 40% of the cohort got a job before the cohort even ended. We got jobs at places like OpenAI and Anthropic. So if you want to get a higher-paying PM job, be sure to check out my landpmjob.com cohort. The next cohort starts in February, runs through the end of April. The next time I'm opening up a cohort is in May. So if you want coaching from me to land a PM job, this cohort is a no-brainer. It is a premium-priced product. It is more expensive than the average product out there, but the return is huge. Most people who join the cohort see a salary raise anywhere from $10,000 to $100,000 in the first year, and so the ROI will be there within a year, and we guarantee two-plus interviews. So if you don't get two interviews after completing the 12-week program and following all the steps, we will refund the money to you. So it's a no-brainer. Check it out at landpmjob.com. And now back to today's
- 16:50 → 21:17
Mock 3: AI agent booking flights and sending emails
- AGAakash Gupta
episode.
- PRPrasad Reddy
Uh, so Aakash, you are building an AI agent that can book flights, make purchases, and send emails on behalf of users. What's your safety framework for autonomous actions with real-world financial consequences?
- AGAakash Gupta
Okay, so I put together a little bit of a framework just to answer this, Prasad. I wanna talk through the scope, the confirmation, and the reversibility, three different elements of the product design here. So scope-wise, I think we're gonna need users to give their agent a spending cap in their onboarding. So maybe that's $500 per transaction and $3,000 per trip. It's gonna depend on your geography and the person. The next is having some sort of confirmation, and I think the confirmation should fork based on how big of a purchase or what the stakes are. So a low-stakes action, maybe no confirmation is even needed. But a medium-stakes one, like, let's say, booking a $100 flight, which is way lower than their budget, maybe it's just a push notification with a 60-second undo window. And then for high stakes, like anything toward the cap or maybe even more than the cap, maybe a hard confirmation is required to take any action. So that's the confirmation step on top of the scope. Then finally, reversibility. We should probably build some reversibility into the product. It kind of reminds me of how Amazon gives you 30 minutes to cancel an order. What if bookings just go into a pending state for 15 minutes before finalizing, or emails that it's sending out get a send delay? The idea is that we're not gonna do anything irreversible without either user confirmation or a buffer window. And then the final thing I'd say, on top of this framework of scope, confirmation, reversibility, is that we should have some sort of anomaly detection on top. So let's say I'm always booking $500 flights. If I start to book a $2,000 flight, we have extra flagging and displays for people.
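A minimal sketch of the scope / confirmation / reversibility tiers described above, plus the anomaly check; the dollar thresholds, undo window, pending buffer, and anomaly multiplier are illustrative assumptions, not product numbers from the episode.

```python
# Sketch of tiered confirmation for an agent taking financial actions.
# All numeric thresholds below are invented for illustration.
from enum import Enum

class Confirmation(Enum):
    NONE = "no confirmation needed"
    UNDO_WINDOW = "push notification with a 60-second undo window"
    HARD_CONFIRM = "explicit user confirmation required before acting"

PER_TRANSACTION_CAP = 500   # set by the user at onboarding (example value)
PER_TRIP_CAP = 3000         # example value, enforced across a whole trip
PENDING_MINUTES = 15        # reversibility buffer before a booking finalizes
ANOMALY_MULTIPLIER = 3      # flag spend well above the user's typical amount

def confirmation_tier(amount: float, typical_spend: float) -> Confirmation:
    """Pick a confirmation tier from how close the action is to the cap,
    with an anomaly check against the user's usual spending."""
    if amount > PER_TRANSACTION_CAP or amount > ANOMALY_MULTIPLIER * typical_spend:
        return Confirmation.HARD_CONFIRM   # high stakes or anomalous spend
    if amount >= 0.2 * PER_TRANSACTION_CAP:
        return Confirmation.UNDO_WINDOW    # medium stakes, e.g. a $100 flight
    return Confirmation.NONE               # low stakes

# Example: a $100 flight for a user who typically spends around $500.
print(confirmation_tier(100, typical_spend=500))  # -> Confirmation.UNDO_WINDOW
```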
- BJDr. Bart Jaworski
This was such a 2026 answer. I mean, we need to remember that in the past, people were liable for their actions, and now we are using bots that have no legal entity. So while it is super convenient to use an agent to book your flights or do anything connected to a financial transaction, having such guardrails for an entity that's non-human is required by now. We've heard stories of, I think it was a Meta head of security, who used Claude Code and got her emails deleted from her machine because, well, it went wild. Imagine if that bot went wild on your credit card. Excellent thinking and the right call, given the state of technology and where we are in the world.
- PRPrasad Reddy
Aakash, great response, especially I like the way you actually frame the scope. Let's say in spite of all that, a user's agent accidentally books $5,000 in flights. Who is liable?
- AGAakash Gupta
So I guess there's gonna be the legal answer and then the business strategy answer. So from a legal point of view, we'd have to consult our legal team, but I imagine it's somewhat unsettled. It's not a clear line quite yet, at least based on what I understand. Probably it really depends on how the guardrails work. So if they accidentally booked a $5,000 flight, was this under the guardrail that we had in onboarding or not? Now, regardless of the legal scenario, I think what's more important is the user and business strategy that we as product managers and leaders are advocating to the rest of the company internally. And from my point of view, we probably want to build our financial budgets and build this product in such a way that we can give a full refund, no questions asked, to that user. And maybe we even try to create some sort of limit or relationships with our airline partners so that they know that these are coming in through an agent, this is a sort of a different type of booking, and maybe they'll even give us more favorable cancellation and refund policies. So that'd be the overall strategy I'd try to take on it.
- PRPrasad Reddy
Great, but I will have to push back on one thing. As a product lead and CPO, I've dealt with liability questions where "full refund, no questions" sounds generous but creates moral hazard. You know, users will test the limits if they know they can always get their money back. You need the refund policy and the guardrails that prevent the situation from happening in
- 21:17 → 27:09
Mock 4: Right for users, wrong for short-term metrics
- PRPrasad Reddy
the first place. The refund is the safety net, the scope limits are the railing.
- AGAakash Gupta
All right, last round. Ankit, you're in the hot seat.
- AVAnkit Virmani
Let's do it.
- AGAakash Gupta
Tell me about a time you made a product decision that was right for users but wrong for short-term business metrics. What happened?
- AVAnkit Virmani
Hmm. That's a fascinating one. I would like to talk about the time when I was leading Reels ranking in Facebook Feed, specifically for the in-feed unit. Aakash, at that point in time, our value model was pretty heavily optimized for clicks, and the system rewarded content that caught attention, not necessarily content that was satisfying for a user. Users told us that they were feeling that content was clickbaity, and young adult marginal users were seeing almost no topical diversity because of this. Now, my job was to restructure towards engagement quality, knowing that this would threaten ad click volume. And I knew that prior value model changes had already been reverted due to revenue regressions. So I partnered with our RDS team to prove that engaged users, specifically the ones that clicked and had positive downstream engagement, correlated a lot more strongly with long-term retention than just raw click volume. That truly reframed the short-term clicks versus long-term users debate. I restructured the value model from additive to multiplicative, and what that does is that content has to succeed at every single stage: click, then completion, and then downstream engagement. And sequencing it carefully, I launched a lower-risk signal first to build credibility and then pushed the full restructure. Now, the results were pretty awesome. Roughly a 15% to 20% lift in daily active users in the US and Canada, and about a 25% gain in marginal users, which was our hardest-to-grow segment. Value model changes converted to daily active gains at roughly twice the rate of other intervention types. So the revenue stabilized within a few weeks, as we saw higher quality sessions generated more ad inventory as well. What we learned from this was that protecting short-term metrics can actually trap you in a local maximum of sorts, and tactically, you don't just make the right call. It is very important to sequence the evidence as well, so that your organization and your peers can follow along with you.
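As a toy illustration of the additive-versus-multiplicative distinction Ankit describes, with invented signals and weights (this is not Meta's actual value model):

```python
# Toy contrast between an additive and a multiplicative value model, to make
# the "content has to succeed at every stage" point concrete.

def additive_score(p_click: float, p_complete: float, p_downstream: float) -> float:
    # Weighted sum: strong clickbait (high p_click) can compensate for weak
    # completion and downstream engagement.
    return 0.5 * p_click + 0.3 * p_complete + 0.2 * p_downstream

def multiplicative_score(p_click: float, p_complete: float, p_downstream: float) -> float:
    # Product: any stage near zero drags the whole score toward zero, so
    # content has to succeed at click AND completion AND downstream engagement.
    return p_click * p_complete * p_downstream

clickbait = dict(p_click=0.9, p_complete=0.1, p_downstream=0.05)
satisfying = dict(p_click=0.4, p_complete=0.7, p_downstream=0.6)

print(additive_score(**clickbait), additive_score(**satisfying))              # 0.49 vs 0.53
print(multiplicative_score(**clickbait), multiplicative_score(**satisfying))  # 0.0045 vs 0.168
```

Under the additive toy model the clickbaity item scores nearly as well as the satisfying one; under the multiplicative one it collapses, which is the behavior the restructure was after.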
- BJDr. Bart Jaworski
And that, dear viewer, was a job interview masterpiece, where Ankit really changed the question from "a decision that was right for users but wrong for the short-term metrics" into more of a "how did you discover that you were using the wrong metrics?" And it's very often in the product management world that we are chasing the wrong goals, goals that have too short of a horizon, making us do the wrong things and bet on very short-term investments. That is an excellent answer from Ankit, where again, he's turning a question about a potential failure into a story of his product success.
- AGAakash Gupta
Good one. So last question, if you joined a team and discovered a shipping feature had a known safety issue that leadership was aware of but chose not to fix, what would you do?
- AVAnkit Virmani
Okay. First, I'd love to understand why that decision was made. There's always a lot that you can find when decomposing a sequence of decisions, especially when it comes from leadership. And there might be context that I'm missing, maybe a fix that is in progress or another risk assessment that leadership went through while making this decision, and I would love to start by getting context before assuming anything at all. Now, after I've identified that the issue is real and leadership is choosing not to act, I would definitely document my concern formally. Not a Slack message, but a written memo to my manager, their manager, and any related teams, with the specific risks and my recommended fix. Now, if that doesn't move the needle, I would use the company's ethics channel. My personal line is that if users are experiencing active harm and internal paths of communication have failed, I would really want to consider whether this is a place that I can work or not. But most safety issues get fixed when someone puts the data in front of the right person in the right format, and I would be surprised if none of these escalations worked for me.
- BJDr. Bart Jaworski
And this is what usually happens, where we know what the right call is given what we know, but a person above us makes a different decision. Again, just like Aakash did previously, it's a great course of action to document the decision-making process, in order to keep it transparent in the future in case the senior decision backfires, to put it mildly, and a retrospective has to be made in order to avoid making such bad calls in the future.
- 27:09 → 32:50
Bart's full scoring reveal
- AGAakash Gupta
All right. That is all of the rounds done, guys. You saw three of us answer. Bart, meanwhile, was taking notes and scoring us. I'm really keen to see these, Bart. How are you gonna rate us?
- BJDr. Bart Jaworski
Guys, that wasn't easy. You're all pros. Like, guys, seriously, I would love to have, like, halves or three-quarters, because it was like something between four and five, and Aakash was first, so I landed with a four here. But seriously, it was very hard to rate. I wish you were one of our students, so we had some wiggle room between people that are expert in one field and worse in another, et cetera. Anyway, of course, framework application, five out of five. How could the author not follow his own framework? I put a four on trade-off reasoning, but again, it probably could have been a five, and it's like a moment's interpretation of what happened. Stakeholder quality, I also put a four here. Oh, not this one. I was-- How come it worked in the past and I'm now getting the wrong one? Oh, never mind. And communication clarity, oh, full clarity without any doubt. So that leaves us with 22 out of 25.
- AGAakash Gupta
I think you graded me nicer than I graded myself. [laughs]
- BJDr. Bart Jaworski
It's like I feel the pain of being the coach of, like, Olympic figure skaters. Like, they all look the same. How are you supposed to differentiate?
- AVAnkit Virmani
Aakash, you got an easy interviewer, I'll just say that.
- AGAakash Gupta
Reversibility... I feel like my communication could have been a lot better, especially when I was looking at my notes versus when I was looking at the interviewer, it was not very clean. And overall, I kept looking back at the notes that I created, and I needed 30 seconds for each one. So I feel like that's the one area where I would give myself like a three and a half. [laughs] And maybe I came off a little too structured, because there's this feedback I'm continuing to hear about candidates, that they come off too polished, too structured, so be careful. I could have done better on communication, I reckon. [laughs]
- BJDr. Bart Jaworski
And I'm biased myself. We've worked together for a long time, so I can fill in any gaps [chuckles] in my mind when Aakash speaks, so I always perfectly understand-
- AVAnkit Virmani
Aakash, you actually bring up a very important point about being over-polished. There are times when I've heard from candidates that they were asked in interviews if they were reading off something when they were overly polished. And especially with candidates now using AI tools to potentially answer, being overly polished can definitely hurt your chances.
- BJDr. Bart Jaworski
So let me put in the final scores, because I don't think it's super engaging for me to fill in every circle and narrate through it. So again, all those answers were really, really, really good. So, by a margin of, uh, error, I think Prasad did the best, but I think you all would have gotten the jobs eventually based on your answers. So it's probably not a good idea to score you out of 25; instead it's basically, would you be hired or not? Because that's all that counts, really. You could get all fives and still lose to a better candidate, or be that better candidate and leave someone who was perfectly matched without a new position.
- AGAakash Gupta
Prasad, you won. Do you agree? [laughs]
- PRPrasad Reddy
Yeah. Fantastic, thank you, Bart. [laughing] Lovely. I'm getting the job over, uh, Ankit and Aakash.
- AVAnkit Virmani
Prasad is like, "Yeah, I totally knew I'm gonna nail this. These two, these two guys weren't great."
- AGAakash Gupta
Yeah, one thing Prasad did really well, that I wish I had done when I reflect back on my two answers, was he actually brought up the example from when he was at Danaher, and I think that that's always very powerful. Ankit's questions, they had that specific historical element, and what I thought was really good about Ankit's answers, compared to the average answer I hear for those types of questions, is how easy he was to follow. So basically, the way Ankit speaks and structures information, the story was told in such a linear way. "Let me tell you about when I was on this team. This was the situation. These were some of the challenges we faced. These were some of the actions. These were the results." It was basically a really clean STAR format, but it didn't feel structured or overly polished. So good job, guys. If I had to rate you guys, I think I would rate you both above me, so I loved it. [laughs] I want you guys to give yourselves your own scores. It's important to always reflect critically on the mocks that you do with others and with yourself. Here's a handy trick, too, that I love to do. Create a Claude project that has all of your interview transcripts in there, and then ask it over time, you know, "How have I done? How have I improved over time?" If you want, in that transcript project you can even throw in the transcript of this YouTube video and say, "How does my answer compare to these answers?" So using AI in your job search is a huge unlock. Before we go, we're gonna do a quick rapid fire to answer those top-of-mind questions for you about this round. You wanna kick us off, Ankit?
- 32:50 → 33:38
The 40 minute rule for proactive safety mentions
- AVAnkit Virmani
Yeah, let's do it. So, um, Aakash, tell us about the biggest safety red flag in an AI PM interview that you can see.
- AGAakash Gupta
I think the biggest mistake most people make is that they don't talk about safety proactively. So I want you to remember this rule, guys: the 40-minute safety rule. If it's 40 minutes into a 60-minute interview and you haven't mentioned safety, figure out how you're gonna bring it in, because, as Ankit said at the very beginning of this video, safety may not be a specific round, but they are looking for you to talk about it. So try to bring it up proactively. And one of the mistakes I see is, let's say you have five interviews in an interview day. You talk about safety in one, and then you think you don't need to talk about it in the other four. Actually, safety is important enough that you wanna talk about it in almost every single interview.
- 33:38 → 34:48
Anthropic vs OpenAI vs Google: hardest safety round
- PRPrasad Reddy
Love the rapid fire. Thanks, Aakash, for taking the rapid fire questions. Anthropic versus OpenAI versus Google, which has the hardest safety round?
- AGAakash Gupta
By far, Anthropic. These guys, if you think about where they came from historically, they left OpenAI because of safety concerns. And so you need to understand what those safety concerns were, what their constitutional AI model is, and how you would answer some of these tough questions that we just showed you, both situational and historical behavioral. So at Anthropic, expect to talk about it for 45 to 60 minutes.
- AVAnkit Virmani
That's great to hear, Aakash. Now let's think through this from a candidate's perspective. If a candidate only had a couple of hours, say two hours, then how do they time box their preparation for a safety round specifically?
- AGAakash Gupta
So start with the framework, learn SHIR, and then practice out loud. [laughs] Just take some real questions and record yourself on video, and watch back the recording of yourself. That's how you'll get to see how your delivery looks, how your note-taking looks, and how it all comes together. The big mistake people make is they just dictate into ChatGPT or Claude, and they don't actually look at the video
- 34:48 → 38:16
The one question every AI PM candidate should be ready for
- AGAakash Gupta
themselves.
- PRPrasad Reddy
Aakash, last one. If you had to ask one question to every AI PM candidate, only one question, what would that be?
- AGAakash Gupta
"Tell me about a time your product caused unintended harm." What you learn from that answer tells you everything.
- AVAnkit Virmani
Thank you, Aakash. That's a, that's a deep question to ponder. Last one from me is somewhat related to the question that you answered previously. If AI agents can spend real money, then who's liable?
- AGAakash Gupta
Generally, guys, the answer is gonna be the platform, and it's all because you have designed the guardrails. So if those guardrails fail, that's on you. And so when you answer these questions, you wanna talk about how you can reduce the risk. But when the liability question comes up, generally you're gonna assume it, with kind of the legal gray area nuance I talked about, where, of course, we're gonna consult the legal team, we're gonna look at it jurisdiction by jurisdiction, and we're gonna try to minimize the risk.
- AVAnkit Virmani
Awesome. Thank you, Aakash.
- AGAakash Gupta
All right, guys. So if you enjoyed this, if you wanna practice live and get feedback from all of us, consider the Land PM Job Program that we put together. The first cohort, 30 students. The second cohort, 50. This third cohort is 75 students. It is a three-month program. Every Monday morning, I'm gonna help kick off your week with a Monday session. We're gonna start with understanding your candidate market fit. We're gonna work through optimizing your LinkedIn, your resume, your resume customization strategy, your portfolio, your GitHub. We're gonna give you all of the tools step by step to do all of that. And once we start generating interviews for you, we are going to help you succeed in those interviews. Just like what we did today, you're gonna do mock interviews with us. We're gonna give you all of the tools to succeed in those mock interviews. We're gonna go round by round through behavioral, AI product sense, product design, product execution, product success metrics, technical interviews, homework interview rounds, homework presentations. We're gonna go through the entire gamut. We're gonna have the focus on AI PM, but we're also gonna cover it for regular product managers. If you wanna work with the same group that you just watched, apply now.
- AVAnkit Virmani
I will help you prepare for product sense execution and behavioral interviews and be a rock star at these interview types.
- PRPrasad Reddy
Awesome. I am Prasad Reddy. I will help you with all the mock interviews, AI PM rounds, your LinkedIn, resume, and also portfolio.
- BJDr. Bart Jaworski
I'll teach you how to write LinkedIn posts and how to optimize your LinkedIn profile and help you be great at your job interviews.
- AGAakash Gupta
That's it, guys. Find all three of us on LinkedIn, Bart Jaworski, Prasad Reddy, Ankit Virmani. On behalf of them, thank you guys, and we'll see you in the next video. I hope you enjoyed that episode. If you could take a moment to double-check that you have followed on Apple and Spotify podcasts, subscribed on YouTube, left a rating or review on Apple or Spotify, and commented on YouTube, all these things will help the algorithm distribute the show to more and more people. As we distribute the show to more people, we can grow the show, improve the quality of the content and the production to get you better insights to stay ahead in your career. Finally, do check out my bundle at bundle.aakashg.com to get access to nine AI products for an entire year for free. This includes Dovetail, Mobbin, Linear, Reforge Build, Descript, and many other amazing tools that will help you as an AI product manager or builder succeed. I'll see you in the next episode.
Episode duration: 38:16