
Stop Applying to AI PM Jobs Until You Watch This Safety & Ethics Mock

Apply to Land a PM Job Cohort 3 (starts May 4): https://www.landpmjob.com/

Most AI PM candidates underrate the safety and ethics round. In this episode, Ankit Virmani (AI PM at Uber, formerly GPM at Meta), Prasad Reddy (former CPO at L-Nutra, ex-VP at Danaher), and Dr. Bart Jaworski (coach to 12,000+ at Amazon, Microsoft, Zalando) join Aakash for four live mock rounds with real-time scoring, plus a framework you can use the next time medical chatbots, hiring bias, or autonomous agents come up in your loop.

Full Writeup: https://www.news.aakashg.com/p/safety-ethics-interview

---

Timestamps:
00:00 The safety round most AI PM candidates underrate
01:36 Why senior candidates freeze on safety questions
03:39 The SHIR framework: severity, harm scope, immediacy, reversibility
06:34 Mock 1: Medical chatbot contradicting clinical guidelines
11:06 Mock 2: Hiring tool with a 15% demographic gap
16:50 Mock 3: AI agent booking flights and sending emails
21:17 Mock 4: Right for users, wrong for short-term metrics
27:09 Bart's full scoring reveal
32:50 The 40-minute rule for proactive safety mentions
33:38 Anthropic vs OpenAI vs Google: hardest safety round
34:48 The one question every AI PM candidate should be ready for

---

🏆 Two things to consider:
1. AI Tools Bundle: A full year of Mobbin, Arize, Relay, Dovetail, Linear, Magic Patterns, Deepseek, Reforge, Build, Descript, and Speechify with an annual paid newsletter sub - https://bundle.aakashg.com
2. Land a PM Job: 12-week cohort with Aakash, Ankit, Prasad, and Bart - https://www.landpmjob.com/

---

Key Takeaways:

1. SHIR is the framework that buys you 30 seconds of structured thinking - Severity, Harm scope, Immediacy, Reversibility. Run any safety question through these four words before you say a word about your solution. Most candidates jump straight to "pull the feature" or "ship it anyway." SHIR gets you to a guardrail-plus-audit answer that actually matches how senior PMs think.

2. At the CPO and VP level, sizing business impact is table stakes - The pull costs $50M. The guardrails cost $200K and two weeks. The full retrain costs $2M and three months. If you cannot put numbers next to each path, you are not interviewing at the right altitude.

3. Safety is evaluated across the entire loop, not in one round - Meta embedded safety thinking inside the product sense rubric itself. If you make it 40 minutes into a 60-minute interview without mentioning safety, you have probably already lost points you cannot recover.

4. Reframe revenue arguments as headline arguments - When the VP says "we cannot pull this before earnings," your move is to ask whether the company can afford the headline that you knew the AI was giving dangerous medical advice and let it ship anyway. That converts a $50M quarterly risk into a $5B brand risk in one sentence.

5. Agent safety has three pillars - Scope (spending caps and category limits), confirmation (forked by stakes, with push notifications and undo windows for medium-stakes actions), and reversibility (pending states, send delays, anomaly detection on top). Memorize this stack for any agent question.

6. Liability for AI agents almost always lands on the platform - Because you designed the guardrails. Frame your answer around how you reduce risk through scope limits and confirmation flows, then acknowledge the legal gray area and the jurisdiction-by-jurisdiction nuance.

7. No-questions-asked refunds create moral hazard - Prasad's pushback on Aakash here is the lesson. Refunds are the safety net. Scope limits are the railing. Build both. If you only build the refund, users will test the limit.

8. Anthropic has the hardest safety round in the industry - Expect 45 to 60 minutes on safety alone. Read up on constitutional AI and the founding story before you walk in. Practice both situational and historical behavioral answers out loud, and watch the recording back.

---

👨‍💻 Where to find Ankit Virmani:
LinkedIn: https://www.linkedin.com/in/ankitvirmani/

👨‍💻 Where to find Prasad Reddy:
LinkedIn: https://www.linkedin.com/in/prasad-09/

👨‍💻 Where to find Dr. Bart Jaworski:
LinkedIn: https://www.linkedin.com/in/bart-jaworski/

👨‍💻 Where to find Aakash:
Twitter: https://www.x.com/aakashg0
LinkedIn: https://www.linkedin.com/in/aakashgupta/
Newsletter: https://www.news.aakashg.com

#aipm #pminterview

---

🧠 About Product Growth: The world's largest podcast focused solely on product + growth, with over 200K listeners.

🔔 Subscribe and turn on notifications to get more videos like this.

Aakash Gupta (host) · Ankit Virmani (guest) · Prasad Reddy (guest) · Dr. Bart Jaworski (guest)
May 2, 2026 · 38m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

AI PM safety interviews: SHIR framework, mocks, and winning strategies

  1. Safety and ethics are evaluated across the entire AI PM interview, not just in a dedicated “safety round,” and candidates often fail by not proactively surfacing harms and mitigations.
  2. The SHIR framework (Severity, Harm scope, Immediacy, Reversibility) is presented as a fast way to structure safety reasoning under time pressure, especially when paired with clear problem sizing.
  3. Mock cases show how strong answers blend risk assessment with pragmatic mitigations (guardrails, human-in-the-loop, anomaly detection) while quantifying business impact and legal/liability exposure.
  4. Stakeholder pushback (earnings pressure, competitive speed) is handled by reframing to downside risk (brand, litigation, regulatory) and documenting decisions/escalating appropriately when leadership won’t act.
  5. The panel emphasizes practice techniques (speaking out loud, recording yourself, avoiding overly polished “AI-written” delivery) and notes Anthropic tends to run the deepest, longest safety interviews.

IDEAS WORTH REMEMBERING

5 ideas

Treat safety as a continuous evaluation signal across interviews.

Interviewers may score you down in product sense/design even if there’s no explicit safety round; bring harms, mitigations, and monitoring into multiple answers rather than only one interview.

Use SHIR to quickly size risk before proposing solutions.

State the severity, how many users are affected, whether harm is happening now, and whether it can be undone; this prevents jumping to extremes like “pull it immediately” without context.

Pair safety reasoning with quantified business options.

Strong candidates compare choices with costs and timelines (e.g., pull vs guardrails vs retrain) so leadership can see an obvious decision path rather than a purely moral argument.

Default to risk-reducing guardrails while you measure true scope.

In the medical chatbot mock, the recommended approach is immediate containment (classification + disclaimers/links or topic filtering), parallel audit of recent queries, and legal involvement—without necessarily killing the entire product instantly.

For algorithmic bias, stop automated harm first, then audit transparently.

In the hiring tool mock, pausing auto-reject (while keeping humans in the loop) reduces immediate discrimination risk; transparency to the board is positioned as essential to avoid “surprise” liabilities later.

WORDS WORTH SAVING

5 quotes

If you designed an AI feature and did not proactively surface harm scenarios and mitigations, you got dinged on product sense, not on some separate safety checkbox.

Ankit Virmani

Candidates with 20 years of experience freeze on these questions because they have never had to formalize their safety reasoning.

Prasad Reddy

The question for the VP isn't necessarily whether we can afford to act before earnings. It's whether we can afford this headline: that we knew our AI was giving dangerous medical advice and continued to allow it to do so.

Aakash Gupta

We are screening out qualified candidates from certain backgrounds. That's a liability under EEOC guidelines, and it's the kind of thing that becomes a class action.

Prasad Reddy

Tell me about a time your product caused unintended harm. What you learn from that answer tells you everything.

Aakash Gupta

TOPICS

Safety embedded throughout PM interviews (not a checkbox)
SHIR: severity, harm scope, immediacy, reversibility
Quantifying tradeoffs: cost of pull vs guardrails vs retrain
Medical misinformation guardrails and escalation
Bias in hiring systems and EEOC/class-action risk
AI agent autonomy: spending caps, confirmations, undo windows
Escalation, documentation, and ethics channels

