Aakash Gupta: Stop Applying to AI PM Jobs Until You Watch This
Aakash Gupta and Jyothi Nookula on why AI PM isn't hype: master fundamentals, agents, RAG, and delivery.
In this episode, Jyothi Nookula joins Aakash Gupta to argue that AI PM isn't hype: the real work is mastering fundamentals, agents, RAG, and delivery. A central framing: AI PM roles split into "traditional PM + AI features" (most jobs) versus "AI-native PM," where AI is the product and behavior is probabilistic.
At a glance
WHAT IT’S REALLY ABOUT
AI PM isn’t hype—master fundamentals, agents, RAG, and delivery
- AI PM roles split into “traditional PM + AI features” (most jobs) versus “AI-native PM” where AI is the product and behavior is probabilistic.
- AI PM work differs from classic PM through probabilistic quality management, data as a first-class product dependency, iterative model behavior, variable unit economics, and responsible-AI guardrails.
- Choosing whether to use AI is a core PM skill: AI fits pattern recognition, prediction, and personalization at scale, while heuristics/rules fit domains needing explainability, clear rules, limited data, or fast MVPs.
- Selecting techniques should be a toolkit decision across traditional ML, deep learning, and GenAI, with prompts/context/RAG often outperforming premature fine-tuning.
- AI PM career progression is accelerated by building real "products not projects," showcasing agents and RAG in a portfolio, and understanding cultural differences across Amazon, Meta, and Netflix PM environments.
IDEAS WORTH REMEMBERING
10 ideas
Most "AI PM" jobs are still classic PM roles with AI bolted on.
Jyothi estimates ~80% of AI PM postings are existing products adding LLM features (chat, summarization), while ~20% are AI-native products like ChatGPT/Copilot where the product is fundamentally probabilistic.
Pick your entry point: application PM is the easiest on-ramp.
She frames the stack as ~60% application PM (end-user UX/trust), ~30% platform PM (tools like eval/observability), and ~10% infra PM (vector DB/GPU serving), with required depth increasing lower in the stack.
AI PMs must manage probability, not deterministic correctness.
Because identical inputs can yield different outputs, AI PMs define acceptable error rates, handle edge cases, and often design deterministic fallbacks to preserve user trust.
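The fallback pattern above can be sketched in a few lines. Everything here is hypothetical: `call_llm` stands in for a real model call, `is_acceptable` for whatever quality check the team agrees on, and the error budget is an illustrative number, not one from the episode.

```python
import random

def call_llm(prompt: str) -> str:
    """Stand-in for a probabilistic model call: identical inputs may yield different outputs."""
    return random.choice(["Refund approved.", "I can't help with that."])

def is_acceptable(answer: str) -> bool:
    """Hypothetical quality gate, e.g. a policy or format validator."""
    return answer.startswith("Refund")

def answer_with_fallback(prompt: str, retries: int = 2) -> str:
    """Retry the probabilistic model a bounded number of times,
    then fall back to a deterministic response that preserves user trust."""
    for _ in range(retries):
        answer = call_llm(prompt)
        if is_acceptable(answer):
            return answer
    return "Your request has been routed to a support agent."  # deterministic fallback
```

The point is not the retry loop itself but that the fallback branch is a product decision: the PM chooses what "good enough" means and what happens when the model misses.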
Data strategy is product strategy in AI systems.
“Garbage in, garbage out” becomes a product reality: poor pipelines, labeling, or training/eval data quality directly degrades user experience and must be treated as a core PM responsibility.
Know when to say ‘no’ to AI.
AI is strongest for complex pattern recognition, prediction from historical data, and personalization at scale, but heuristics/rules win when explainability is mandatory, domain rules are explicit (e.g., taxes), data is sparse, or speed-to-market is paramount.
Start with the simplest technique that fits the data and interface.
Use traditional ML for structured “spreadsheet” problems needing predict/classify with cost/explainability constraints; deep learning for perception tasks (image/audio/video); GenAI when natural language interaction, generation, or synthesis is required.
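That decision rule can be written down as a toy selector. The function and its inputs are illustrative simplifications of the episode's heuristic; real choices also weigh cost, data volume, and latency.

```python
def pick_technique(data_kind: str, needs_language: bool, needs_explainability: bool) -> str:
    """Toy decision rule: simplest technique that fits the data and interface."""
    if needs_language:
        return "GenAI"            # natural-language interaction, generation, synthesis
    if data_kind == "perception":
        return "deep learning"    # image / audio / video tasks
    if needs_explainability or data_kind == "tabular":
        return "traditional ML"   # structured "spreadsheet" problems
    return "heuristics"           # explicit rules, sparse data, fast MVPs
```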
RAG should precede fine-tuning in most product roadmaps.
Jyothi recommends a hierarchy: prompt optimization → context engineering → RAG, with RAG solving ~80% of enterprise knowledge grounding needs before considering fine-tuning.
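The RAG step in that hierarchy is simpler than it sounds. Here is a minimal sketch with a toy keyword retriever; production systems use vector search, but the shape is the same: retrieve, then ground the prompt in what you retrieved instead of fine-tuning. The docs and scoring are invented for illustration.

```python
DOCS = {
    "returns": "Items can be returned within 30 days of purchase.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Toy retriever: rank docs by keyword overlap (real systems use embeddings)."""
    words = query.lower().split()
    scored = sorted(DOCS.items(),
                    key=lambda kv: -sum(w in kv[1].lower() for w in words))
    return [text for _, text in scored[:k]]

def build_prompt(query: str) -> str:
    """Ground the model in retrieved context rather than retraining it."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Because the knowledge lives in the retrieval layer, updating it is a data change, not a training run, which is why RAG usually precedes fine-tuning on a roadmap.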
Context engineering is the production-grade skill that controls quality and cost.
Managing what goes into the context window (immediate/session/knowledge context) directly impacts latency and token spend; dynamic retrieval/orchestration prevents loading entire knowledge bases every turn.
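The budgeting idea can be made concrete. This sketch fills the window in priority order (current turn, then session history, then retrieved knowledge) and stops at a token budget; the 4-chars-per-token estimate and the layer names are assumptions for illustration, not a real tokenizer or API.

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token."""
    return max(1, len(text) // 4)

def assemble_context(immediate: str, session: list[str], knowledge: list[str],
                     budget_tokens: int = 200) -> str:
    """Fill the context window in priority order and stop at the budget,
    instead of loading the entire knowledge base every turn."""
    parts, used = [immediate], estimate_tokens(immediate)
    for chunk in session + knowledge:
        cost = estimate_tokens(chunk)
        if used + cost > budget_tokens:
            break  # dropping low-priority context bounds latency and token spend
        parts.append(chunk)
        used += cost
    return "\n".join(parts)
```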
Agents are for goals; workflows are for predictability.
Workflows run predefined steps and decision trees, while agents decide which tools to use to accomplish an objective using an orchestrator + model + memory + tools, as shown in the n8n demo where the agent chose whether to call Gmail.
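A stripped-down version of that orchestrator + model + memory + tools loop might look like the following. The tools, the `pick_tool` stub, and the single-step loop are all hypothetical; in the demo the decision is made by the LLM, which a hard-coded keyword check merely stands in for.

```python
# Hypothetical tools the agent may choose from (mirrors "call Gmail or not").
TOOLS = {
    "send_email": lambda goal: f"emailed: {goal}",
    "noop": lambda goal: "no action taken",
}

def pick_tool(goal: str) -> str:
    """Stand-in for the model: a real agent asks the LLM which tool fits the goal."""
    return "send_email" if "email" in goal.lower() else "noop"

def run_agent(goal: str, memory: list[str]) -> str:
    """Orchestrator: the model decides, the tool executes, memory records the step."""
    tool = pick_tool(goal)
    result = TOOLS[tool](goal)
    memory.append(f"{tool} -> {result}")
    return result
```

Contrast with a workflow, where the sequence of steps is fixed in advance: here the step taken depends on the goal, which is exactly what makes agent behavior harder to test and govern.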
To break into AI PM, build products with real users, not toy demos.
A strong portfolio includes an agent and a RAG system solving a real pain point, launched to friends/family so you can discuss breakpoints, iterations, tradeoffs, and metrics like a working PM.
WORDS WORTH SAVING
5 quotes
The core difference here is you see how traditional PM products are deterministic. However, AI products are probabilistic.
— Jyothi Nookula
Knowing when to say yes and when to say no is a very powerful skill that a PM should possess.
— Jyothi Nookula
Garbage in will lead to garbage out.
— Jyothi Nookula
RAG might solve 80% of your problems.
— Jyothi Nookula
Don’t think of it as projects. Think of it as building products.
— Jyothi Nookula
QUESTIONS ANSWERED IN THIS EPISODE
5 questions
If 80% of AI PM roles are "PM + AI features," what are the clearest signals in a job description that it's actually AI-native versus a feature-bolt-on role?
For application PMs building AI UX, what specific trust/reliability mechanisms (fallbacks, uncertainty UI, human-in-the-loop) did you see work best at Meta/Amazon/Netflix?
How do you translate “quality distributions” into concrete metrics—what KPIs or eval suites do you recommend for an LLM feature beyond pass/fail QA?
When AI unit economics are variable, what’s your practical framework for forecasting and controlling cost per task (token budgets, routing, caching, model tiers)?
In the ‘when not to use AI’ bucket, where do you see teams overestimating explainability needs and missing valuable AI opportunities?