Aakash Gupta

Stop Applying to AI PM Jobs Until You Watch This Safety & Ethics Mock


Aakash Gupta (host) · Ankit Virmani (guest) · Prasad Reddy (guest) · Dr. Bart Jaworski (guest)
May 3, 2026 · 38m · Watch on YouTube ↗

Episode Details

EPISODE INFO

Released
May 3, 2026
Duration
38m
Channel
Aakash Gupta
Watch on YouTube
▶ Open ↗

EPISODE DESCRIPTION

Apply to Land a PM Job Cohort 3 (starts May 4): https://www.landpmjob.com/

Most AI PM candidates underrate the safety and ethics round. In this episode, Ankit Virmani (AI PM at Uber, formerly GPM at Meta), Prasad Reddy (former CPO at L-Nutra, ex-VP at Danaher), and Dr. Bart Jaworski (coach to 12,000+ at Amazon, Microsoft, and Zalando) join Aakash for four live mock rounds with real-time scoring, plus a framework you can use the next time medical chatbots, hiring bias, or autonomous agents come up in your loop.

Full Writeup: https://www.news.aakashg.com/p/safety-ethics-interview

---

Timestamps:

00:00 The safety round most AI PM candidates underrate
01:36 Why senior candidates freeze on safety questions
03:39 The SHIR framework: severity, harm scope, immediacy, reversibility
06:34 Mock 1: Medical chatbot contradicting clinical guidelines
11:06 Mock 2: Hiring tool with a 15% demographic gap
16:50 Mock 3: AI agent booking flights and sending emails
21:17 Mock 4: Right for users, wrong for short-term metrics
27:09 Bart's full scoring reveal
32:50 The 40-minute rule for proactive safety mentions
33:38 Anthropic vs OpenAI vs Google: hardest safety round
34:48 The one question every AI PM candidate should be ready for

--- 🏆 Two things to consider:

  1. AI Tools Bundle: A full year of Mobbin, Arize, Relay, Dovetail, Linear, Magic Patterns, Deepseek, Reforge, Build, Descript, and Speechify with an annual paid newsletter sub - https://bundle.aakashg.com
  2. Land a PM Job: 12-week cohort with Aakash, Ankit, Prasad, and Bart - https://www.landpmjob.com/

--- Key Takeaways:

1. SHIR is the framework that buys you 30 seconds of structured thinking - Severity, Harm scope, Immediacy, Reversibility. Run any safety question through these four words before you say a word about your solution. Most candidates jump to "pull the feature" or "ship it anyway." SHIR gets you to a guardrail-plus-audit answer that actually matches how senior PMs think.

2. At the CPO and VP level, sizing business impact is table stakes - The pull costs $50M. The guardrails cost $200K and two weeks. The full retrain costs $2M and three months. If you cannot put numbers next to each path, you are not interviewing at the right altitude.

3. Safety is evaluated across the entire loop, not in one round - Meta embedded safety thinking inside the product sense rubric itself. If you make it 40 minutes into a 60-minute interview without mentioning safety, you have probably already lost points you cannot recover.

4. Reframe revenue arguments as headline arguments - When the VP says "we cannot pull this before earnings," your move is to ask whether the company can afford the headline that you knew the AI was giving dangerous medical advice and let it ship anyway. That converts a $50M quarter risk into a $5B brand risk in one sentence.

5. Agent safety has three pillars - Scope (spending caps and category limits), confirmation (forked by stakes, with push notifications and undo windows for medium actions), and reversibility (pending states, send delays, anomaly detection on top). Memorize this stack for any agent question.

6. Liability for AI agents almost always lands on the platform - Because you designed the guardrails. Frame your answer around how you reduce risk through scope limits and confirmation flows, then acknowledge the legal gray area and the jurisdiction-by-jurisdiction nuance.

7. No-questions-asked refunds create moral hazard - Prasad's pushback on Aakash here is the lesson. Refunds are the safety net. Scope limits are the railing. Build both. If you only build the refund, users will test the limit.

8. Anthropic has the hardest safety round in the industry - Expect 45 to 60 minutes on safety alone. Read up on constitutional AI and the founding story before you walk in. Practice both situational and historical behavioral answers out loud, and watch the recording back.

---

👨‍💻 Where to find Ankit Virmani:
LinkedIn: https://www.linkedin.com/in/ankitvirmani/

👨‍💻 Where to find Prasad Reddy:
LinkedIn: https://www.linkedin.com/in/prasad-09/

👨‍💻 Where to find Dr. Bart Jaworski:
LinkedIn: https://www.linkedin.com/in/bart-jaworski/

👨‍💻 Where to find Aakash:
Twitter: https://www.x.com/aakashg0
LinkedIn: https://www.linkedin.com/in/aakashgupta/
Newsletter: https://www.news.aakashg.com

#aipm #pminterview

---

🧠 About Product Growth: The world's largest podcast focused solely on product + growth, with over 200K listeners.

🔔 Subscribe and turn on notifications to get more videos like this.

SPEAKERS

  • Aakash Gupta

    host

    AI product management coach and creator of the Aakash Gupta channel, focused on AI PM interview prep, safety, and ethics.

  • Ankit Virmani

    guest

AI product manager at Uber, with prior product experience at Meta/Facebook; discusses ranking and safety tradeoffs.

  • Prasad Reddy

    guest

Product leader and former CPO/VP who previously led product at Danaher diagnostics; focused on safety, bias, and risk management.

  • Dr. Bart Jaworski

    guest

    Interview coach and scorer providing structured feedback on AI PM safety/ethics mock interview performance.

EPISODE SUMMARY

In this episode of the Aakash Gupta channel, "Stop Applying to AI PM Jobs Until You Watch This Safety & Ethics Mock" features Aakash Gupta, Ankit Virmani, Prasad Reddy, and Dr. Bart Jaworski exploring AI PM safety interviews: the SHIR framework, live mock rounds, and winning strategies. Safety and ethics are evaluated across the entire AI PM interview loop, not just in a dedicated "safety round," and candidates often fail by not proactively surfacing harms and mitigations.

RELATED EPISODES

Inside a $400K AI Product Sense Interview (Amazon, Meta, Google, OpenAI)

The ONE AI Skill Every Product Manager NEEDS in 2026

Complete Course: AI Product Discovery

What this $2.45B CPO knows that you Don’t!

These 7 AI Tools Made Me $1,000,000+ In The Last 12 Months. Here's How:

Zoom Head of Product: How We Build Product
