
Stop Applying to AI PM Jobs Until You Watch This

Jyothi Nookula has 13.5 years in AI, 12 patents, and has been an AIPM at Amazon (SageMaker), Meta (PyTorch), Netflix (Developer Platform), and Etsy. In this masterclass episode, she breaks down the two types of AIPM roles, the three layers of the AI stack, when AI makes sense versus when heuristics win, how to pick between ML, deep learning, and Gen AI, and builds AI agents and RAG systems live.

Full Writeup: https://www.news.aakashg.com/p/jyothi-nookula-podcast
Transcript: https://www.aakashg.com/jyothi-nookula-podcast/

---

Timestamps:

0:00 - Intro
1:43 - Is AI PM actually real or is it BS?
4:22 - The roadmap to becoming an AIPM
7:11 - 5 core concepts every AIPM needs to know
10:06 - What differentiates a PM from an AIPM
11:50 - Ads
15:20 - When to use AI and when not to use AI
20:42 - How to select the right AI technique
26:32 - AI agents: building blocks, workflows vs agents
31:03 - Ads
33:26 - Building a workflow vs an agent in N8N
43:40 - Prompt engineering and context engineering
48:15 - RAG systems explained and built in Langflow
58:57 - The AIPM career playbook and portfolio strategy
1:02:00 - How PM cultures differ at Amazon, Meta, and Netflix
1:07:15 - Why Jyothi left Netflix
1:11:15 - Outro

---

🏆 Thanks to our sponsors:

1. Product Faculty: Get $550 off their #1 AI PM Certification with my link - https://maven.com/product-faculty/ai-product-management-certification?promoCode=AAKASH550C7
2. Amplitude: The most accurate mobile session replays with no performance hit - https://amplitude.com/session-replay?utm_campaign=session-replay-launch-2025&utm_source=linkedin&utm_medium=organic-social&utm_content=productgrowthpodcast
3. Pendo: Measure your AI agent performance with Pendo Agent Analytics - http://www.pendo.io/aakash
4. NayaOne: Airgapped cloud-agnostic sandbox to validate AI tools faster - https://nayaone.com/aakash/
5. Kameleoon: Prompt-based experimentation that turns days of dev time into minutes - http://www.kameleoon.com/

---

Key Takeaways:

1. Two types of AIPM roles exist - 80% are traditional PM roles with AI features added on, where the core product existed before AI. 20% are AI native roles where the product IS AI and the value proposition is impossible without it. Know which type before you apply.
2. The AI PM stack has three layers - Application PMs own user experience (60% of roles, easiest entry point). Platform PMs build tools for other builders (30%). Infra PMs build foundational systems like vector databases and GPU orchestration (10%).
3. 19 out of 20 AI pilots fail from wrong problem selection - AI makes sense for complex pattern recognition, prediction from historical data, and personalization at scale. If explainability is non-negotiable, rules exist, data is limited, or speed is critical, start with heuristics.
4. Most teams overcomplicate their AI technique choice - If you can put the problem in a spreadsheet with inputs and an output to predict, traditional ML is the answer. Perception problems need deep learning. Natural language reasoning needs Gen AI. These are not competitors, they are tools in your toolkit.
5. AI products are fundamentally probabilistic - The same input can produce different outputs. AIPMs must think in quality distributions and acceptable error rates, not binary success vs failure. Data is a first-class citizen, not a nice-to-have.
6. Agents decide, workflows follow steps - Workflows have predetermined sequences with deterministic outcomes. Agents receive goals and independently decide which tools to use. The live N8N demo showed identical tools producing completely different execution patterns.
7. Context engineering is the real production skill - Claude Sonnet has a 200K token context window but that fills fast with knowledge bases, conversation history, and real-time data. Every token costs money. Managing what to load and when directly impacts both quality and cost.
8. Follow the hierarchy before fine tuning - Prompt optimization first, then context engineering, then RAG. 80% of use cases get solved with RAG. Fine tuning should only be considered after exhausting all three.
9. Build products not projects - Launch your AI work, get real users, encounter real breakage. That gives you richer interview material than any course certificate. Build an agent, build a RAG system, and build an app that solves a real problem.

---

👨‍💻 Where to find Jyothi Nookula:

LinkedIn: https://www.linkedin.com/in/jyothinookula/
NextGen Product Manager: https://enterprisereadyaipmroadmap.com/

👨‍💻 Where to find Aakash:

Twitter: https://www.x.com/aakashg0
LinkedIn: https://www.linkedin.com/in/aakashgupta/
Newsletter: https://www.news.aakashg.com

#aipm #aiproductmanagement

---

🧠 About Product Growth: The world's largest podcast focused solely on product + growth, with over 200K listeners.

🔔 Subscribe and turn on notifications to get more videos like this.

Jyothi Nookula (guest) · Aakash Gupta (host)
Mar 23, 2026 · 1h 12m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00 - 1:43

    Intro

    1. JN

      Stop applying to PM jobs until you have a true understanding of the AI fundamentals

    2. AG

      AI product manager is BS. Is it actually real, or is it-

    3. JN

      When I look at the industry landscape for AIPMs, there's a few critical distinctions that many people miss

    4. AG

      This is Jyothi Nookula. She's been an AIPM at Meta, Amazon, and Netflix. She's one of the world's most experienced and senior AIPMs. Director of AIPM at Netflix feels like a dream job, so why did you leave Netflix?

    5. JN

      In a career-changing way, with a lot of opportunity out there, and with AI jobs increasing, I wanted to take the time to, like, go full-time into this.

    6. AG

      What is the roadmap to becoming an AIPM?

    7. JN

      The first is understanding the difference between what an AIPM does versus what a regular PM does. And the second would be-

    8. AG

      Today's episode is a master class in AI product management. If there's only one video you are going to watch, this is it. Do I need to learn these technical concepts like RAG and fine tuning to become an AIPM?

    9. JN

      I'm going to teach you everything, everything that you need to become an AI PM.

    10. AG

      So with all these rolling layoffs, is it really good to work at companies like Meta and Amazon? Before we go any further, do me a favor and check that you are subscribed on YouTube and following on Apple and Spotify podcasts. And if you want to get access to amazing AI tools, check out my bundle, where if you become an annual subscriber to my newsletter, you get a full year free of the paid plans of Mobbin, Arise, Relay app, Dovetail, Linear, Magic Patterns, DeepSky, Reforge Build, Descript, and Speechify. So be sure to check that out at bundle.aakashg.com. And now into today's episode.

  2. 1:43 - 4:22

    Is AI PM actually real or is it BS?

    1. AG

      Jyothi, welcome to the podcast.

    2. JN

      Thank you. I'm so excited to be here.

    3. AG

      So I want to start with the hard questions, okay? Um, you know, you had this title, AI product manager, but I keep hearing that AI product manager is BS. Is it actually real, or is it hype?

    4. JN

      Yeah. So let me give you a data-driven answer, because I've been on both sides of this: hiring AIPMs at Meta, Netflix, and Etsy, and now talking to dozens of companies about their AI strategies. When I look at the industry landscape for AIPMs, there's a few critical distinctions that many people miss. And I would say the kinds of roles that exist are twofold. One is a traditional PM with AI features added on. This is probably 80% of what's labeled as AIPM jobs out there right now, and these are where PMs are leading LLM capabilities and adding AI features to existing products. So think of it like a chatbot that you're adding to your customer service portal, or adding some AI summarization to your documents. Now, the core product existed even before you added or bolted an LLM onto it.

    5. AG

      Yeah

    6. JN

      ... so that's the traditional PM with AI features. The other type is the AI native PM, and the way I think about that is that this is a new category of PM roles that are opening up. I would say about 20% of the ones out there are AI native PM roles now. And here, the product is AI. It's not a feature. It's not something that you just bolt onto the product. So think of things like your ChatGPT or your GitHub Copilot or your Claude, your Cursor, your Perplexity.

    7. AG

      Yep.

    8. JN

      The key characteristic would be the product is fundamentally probabilistic, and so the value proposition is literally impossible without AI. Um, and you can't build your ChatGPT without an LLM. So AI here is not just enhancing the product, it is the product.

    9. AG

      Okay, so two different types, 80% in traditional products, 20% in AI

  3. 4:22 - 7:11

    The roadmap to becoming an AIPM

    1. AG

      native. So there's basically 4X more open roles for you in those traditional companies. And we heard some of the companies you work for, those products existed before AI, but you were working on AI within them. So if somebody wants to become an AIPM, what is the roadmap to becoming an AIPM?

    2. JN

      Yeah. And before I jump into the roadmap, I do want to talk about what types of AIPM roles exist along the stack. At the top, I call these the Application PMs. Here, the PMs own the end-to-end user experience. They're thinking about how users interact with AI. How do you build trust? How do you make AI reliable enough for everyday use? They need to understand the AI-human interface and interaction patterns. This is probably the easiest path for someone who wants to convert from a traditional PM role to an AIPM role, because it encompasses a lot of the existing product management skills along with AI knowledge. So this is the easiest to get into.

    3. AG

      Mm.

    4. JN

      The second is the Platform PM. Here the PMs are building tools that other teams, who are probably building application products, are using. Think of developer platforms, model orchestration systems, evaluation frameworks, or observability tools. Here, the PM needs to understand both the technical infrastructure and the developer experience. So maybe you're not building straight up for end users; you're building for other builders. And the last is the Infra PMs, where these PMs are building the foundational systems that power all of these AI products, like, say, vector databases, GPU orchestration, optimizing kernel-level compilation, or optimizing model serving. So as you see, the lower you get into the stack, the deeper your expertise needs to be.

    5. AG

      Okay.

    6. JN

      So this is harder.

    7. AG

      [laughs]

    8. JN

      So the easiest would be here at the top, and the hardest down here at the bottom.

    9. AG

      Okay, this makes sense. And what are roughly the percentages of roles at each of these three buckets?

    10. JN

      I would say you'd see about 60% of roles with Application PMs, about 30% with Platform PMs, and maybe 10% with Infra PMs.

    11. AG

      Okay. Makes sense. So the hardest roles are actually the smallest bucket, which is kind of the good news.

    12. JN

      Yeah.

    13. AG

      So can you walk us through, let's say somebody has their goal set on infra, what are the key concepts to know?

  4. 7:11 - 10:06

    5 core concepts every AIPM needs to know

    1. JN

      Yeah. Whether it is application, platform, or infra, some of the key concepts are the same across all of them. And that's what we are going to talk about today, where we're gonna talk through these five concepts. The first is understanding the difference between what an AIPM does versus what a regular PM does. And the second would be determining when to use AI, because I think now there is this hype around using AI, where it seems to be the technique that everybody wants to reach for. But knowing when to say yes and when to say no is a very powerful skill that a PM should possess.

    2. AG

      Mm-hmm.

    3. JN

      The third is we look into what the AI techniques are, and what options of AI techniques we can choose from. So we'll look at a menu. And the fourth is, if we then decide that, yes, we need to use Gen AI for this product, then we'll learn about a few core concepts around AI agents, prompt engineering, context, RAG, evaluations. And last but not least, we will learn about delivering AI products. So we'll learn all the way into deployment.

    4. AG

      Let's do it.

    5. JN

      Perfect. Let's get started. So what is, or who is, a product manager? A product manager is essentially the CEO of the product. A PM owns the product and its associated decisions. What product managers do is balance all three of these domains: the UX, the tech, the business. And remember, PMs lead all of these functions without authority. These teams don't report into them. So PMs need to be able to influence these teams and make hard calls. That's true irrespective of whether it's an AIPM or a PM. This is the baseline of what a PM does.

    6. AG

      Yep. Of course, varies a lot between companies. [laughs]

    7. JN

      Absolutely. Absolutely. And again, here are some traits of a good PM, just to bring us all onto the same foundation of what we mean when we say a good PM, right? Defining a clear vision for your team. Being customer obsessed, which is understanding what the pain point really is. Understanding the market landscape. Aligning with your stakeholders around the vision and building the vision for the product. The fifth one, the bread and butter for a product manager, is prioritizing product features and capabilities. And last but not least is creating a shared brain for your team to enable independent decision-making. So what is a core skill that differentiates

  5. 10:06 - 11:50

    What differentiates a PM from an AIPM

    1. JN

      a PM from an AIPM? The core difference here is that traditional PM products are deterministic, while AI products are probabilistic. Where traditional products have predictable behaviors, AI products are inherently probabilistic. That is, the same input can result in different outputs. Now, if I have a button in a traditional product, I click on it, and every single time it will open up the next page, for example. But every time you ask an AI product, because it's probabilistic, it can produce different outputs. So as AIPMs, you must think in terms of quality distributions and in terms of what your acceptable error rates are. It's no longer a binary of success versus failure. And so as an AIPM, you tackle questions like: what is the error rate that our users can tolerate before trust breaks for that user? How do we handle the edge cases that occur, say, 5% of the time? Do we need a fallback deterministic system to begin with? Also, what is different here is that data is your first-class citizen now. Where traditional PMs can focus on features and user flows, as an AIPM you must treat data as a product experience, because poor data will create poor experiences. And so having a good data strategy is a prerequisite before you even start implementing your

  6. 11:50 - 15:20

    Ads

    1. JN

      AI product.
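      To make that error-rate mindset concrete, here is a minimal JavaScript sketch, assuming a hypothetical callModel() stand-in for a real LLM call: sample the same prompt many times and track a pass rate, rather than a single pass/fail check.

        // Hypothetical stand-in for a real LLM API call -- replace with your provider's SDK.
        async function callModel(prompt) {
          return "stubbed model output for: " + prompt; // placeholder only
        }

        // Run the same prompt n times and measure how often the output meets a
        // quality bar, instead of asking "does it work?" once.
        async function passRate(prompt, meetsQualityBar, n = 20) {
          let passes = 0;
          for (let i = 0; i < n; i++) {
            const output = await callModel(prompt);
            if (meetsQualityBar(output)) passes++;
          }
          return passes / n; // e.g. 0.95 -> is a 5% error rate tolerable before trust breaks?
        }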

    2. AG

      I hope you're enjoying today's episode. Are you interested in becoming an AI product manager, making hundreds of thousands of dollars more, joining OpenAI and Anthropic? Then you might wanna do a course that I've taken myself, the AIPM certificate run by OpenAI Product Leader Miqdad Jaffer. If you use my code and my link, you get a special discount on this course. It is a course that I highly recommend. We have done a lot of collaborations together on things like AI product strategy, so check out our newsletter articles if you want to see the quality of the type of thinking you'll get. One of my frequent collaborators, Pavel Hern, is the Build Labs leader, so you're gonna live build an AI product with Pavel's feedback if you take this AIPM certificate. So be sure to check that out. Be sure to use my code and my link in order to get a special discount. And now back into today's episode. Today's episode is brought to you by Amplitude. Replays of mobile user engagement are critical to building better products and experiences, but many session replay tools don't capture the full picture. Some tools take screenshots every second, leading to choppy replays and high storage costs from enormous capture sizes. Others use wireframes, but key moments go missing, creating gaps in your understanding. Neither approach gives you a true picture of the mobile experience. Amplitude does things differently. Their mobile replays capture the full experience, every tap, every scroll, and every gesture with no lag and no performance hit. It's the most accurate way to understand mobile behavior. See the full story with Amplitude.

    3. JN

      And also because-

    4. AG

      Yeah

    5. JN

      ... go ahead, sorry.

    6. AG

      Yeah, the data piece is actually one of the underrated ones because people often hear data, and they kind of shrug their shoulders and say, "Oh, I understand basic statistics." But there's a lot more to it in terms of data pipelines, how the data's being cleaned, how it's being put in there, what's being used as training data. So there's a lot of nuance that people need to realize exists under each one of these topics.

    7. JN

      Yeah, and like I always say, garbage in will lead to garbage out. If your data is not of the quality that you expect, then your model outputs will also not be as close to reality as you'd expect.

    8. AG

      Yeah.

    9. JN

      And similarly, your model behavior with AI products is iterative, versus a fixed feature of a traditional product. In the earlier button example I was talking about, every time you click that purple button, it will lead to something similar. Versus here, you're iterating with your model. With any new change, you need to retest your model; you need to understand what is changing in your model behavior. So it's a very iterative process. And then your unit economics are also very different. Traditional products have predictable cost structures, but because of the probabilistic nature of AI products, the unit economics of AI products are variable. It depends on how long or how short an answer your LLM gives you. And last but not least, you now need to emphasize a lot more on responsible AI and guardrails. Because with traditional products it's easier to focus on bugs and edge cases, but AI products need to be able to handle potential harms, bias, misuse, and emergent behaviors that weren't explicitly programmed into the model itself.
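      A back-of-the-envelope sketch of those variable unit economics in JavaScript; the per-token prices below are illustrative placeholders, not real provider rates:

        // Hypothetical per-1K-token prices -- look up your provider's real rates.
        const PRICE_PER_1K_INPUT_TOKENS = 0.003;   // dollars, illustrative only
        const PRICE_PER_1K_OUTPUT_TOKENS = 0.015;  // dollars, illustrative only

        // Cost varies per request because output length is not fixed.
        function requestCost(inputTokens, outputTokens) {
          return (inputTokens / 1000) * PRICE_PER_1K_INPUT_TOKENS +
                 (outputTokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS;
        }

        console.log(requestCost(1200, 150)); // short answer: cheaper
        console.log(requestCost(1200, 900)); // same feature, longer answer: higher cost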

    10. AG

      Yep.

    11. JN

      So moving

  7. 15:20 - 20:42

    When to use AI and when not to use AI

    1. JN

      on, now that we have understood what an AIPM does, moving on to how you determine when to use AI and when not to use AI. Now, there's this report from MIT that many people are aware of. There could be several reasons why AI pilots fail, but one of the key factors the paper called out was picking the right opportunities to apply AI to go solve a problem. It seems like common sense, but it is not that common: several teams are choosing the wrong problems to apply AI to, the reason being that choosing good problems to apply AI to is difficult. And that's what we'll learn today: when do you use AI in a product?

    2. AG

      Because apparently 19 out of 20 teams are choosing the wrong one. [laughs]

    3. JN

      Absolutely, yeah. So here's when AI makes sense. You have to choose AI when it is well suited for specific patterns, like pattern recognition in complex data, when patterns exist in your data but they're too complex for humans to manually define. So for example, in products like YouTube, machine learning is used to identify the patterns of users who are watching videos, which would be impossible to capture with simple rules. The relationships between user behavior and content preferences are too multidimensional for a rule-based system to capture. So pattern recognition is a very good use case where AI could be applied. The second place AI really excels is when you have historical data over several years to predict future outcomes. For example, at Amazon, we used AI to forecast inventory needs, to predict based on a complex mix of seasonal trends, upcoming promotions, and even weather patterns. These models could consider hundreds of different variables in ways that humans simply cannot process effectively. So in prediction use cases, AI is a great choice. And also when you need to create personalized, individualized experiences for thousands or millions of users at scale, then AI becomes incredibly valuable. That ties back to pattern recognition, because there are several patterns and variables that could impact the experience. For example, content recommendation engines are classic examples of where AI thrives. So if your use case-

    4. AG

      Yeah

    5. JN

      ... about personalization, then that's a good place to look at applying AI.

    6. AG

      And what are the bad places?

    7. JN

      Yeah. And I wouldn't say bad, but here's where heuristics, which are rules-based, are probably sufficient, where you don't have to apply AI. Now, before I dive into this: what are heuristics? Heuristics are nothing but a simple set of rules, like your if-else: if this happens, then do that. If this, then that. These are probably based on your past experience, or something that works in that industry. I would say heuristics, or rules, are probably sufficient when explainability is non-negotiable in your industry, because it's really hard for AI models to have high explainability. There are interpretability tools, but explainability is still low. Or when there are clear rules in your domain, like, for example, tax calculation. We are at that time of the season where everyone's thinking about year-end taxes, and tax calculation software is a very good example. Tax codes are complex, but they're explicit, making them perfect for a rules-based implementation. So if there are clear, comprehensive rules in your domain, it's probably sufficient to start with heuristics. And when data is limited, because AI needs lots of data to be effective. If you're launching a new feature or entering a new market where historical data doesn't exist, then starting with heuristics and a rules-based approach is probably better than force-fitting AI to it. And also when development speed is critical, because AI systems generally take longer to build and implement. So for MVPs or time-sensitive features, starting with traditional methods could be the right business decision.
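      As a minimal illustration of the if-this-then-that idea in the tax example, here is a rules-based function in JavaScript. The numbers are illustrative only, not tax advice:

        // Heuristics are explicit rules: if this, then that. Every outcome is
        // explainable and testable -- no model required.
        function standardDeduction(filingStatus) {
          // Illustrative figures -- real values come from the tax code, which is explicit by design.
          if (filingStatus === "single") return 14600;
          if (filingStatus === "married_filing_jointly") return 29200;
          if (filingStatus === "head_of_household") return 21900;
          throw new Error("unknown filing status: " + filingStatus);
        }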

    8. AG

      So you've determined your AI usage makes sense. It's not one of those cases where explainability is non-negotiable or speed to market is critical. How do you select the right AI techniques?

    9. JN

      Yeah.

  8. 20:42 - 26:32

    How to select the right AI technique

    1. JN

      So let's dive into it. What are some AI techniques that we could look at? When we look at AI these days, people jump straight to, "Oh, let's use ChatGPT," or, "Let's build with an LLM." But honestly, a simple machine learning model would often have solved the problem in a week and at a fraction of the cost. So let's break this down in a way that's useful for product decisions. When I think of AI techniques, I think of them in three buckets. The first is traditional ML: your regression models, your random forests, your XGBoost, the stuff that's been around for years. It's mature, it's reliable, and honestly, it still powers most of the AI that you interact with daily. The second is deep learning. This is your neural networks, your computer vision, your speech recognition, and this thrives when you're dealing with image, video, audio, any form of unstructured data that requires sophisticated pattern recognition. And the third is where we get to Gen AI: your LLMs, your diffusion models, your Stable Diffusion, your ChatGPTs, your Claudes. Now, here's what's interesting. These aren't competitors. They are tools in your toolkit, and the best AI products usually combine multiple approaches. So I would say choose ML when you have structured data and you need to predict or classify something. Think in terms of spreadsheets. The sweet spot for ML is when you're predicting a number or a category, or you have historical data with clear patterns, or you need the model to explain its decisions, or speed and cost really matter. Some examples of where traditional ML techniques still thrive are fraud detection or predicting customer churn for your websites. So as a PM, a question you should ask is, "Can I put this problem in a spreadsheet with clear input columns and an output I want to predict?" If the answer is yes, then start with ML and don't overcomplicate it.
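      A quick sketch of that "spreadsheet test" in JavaScript; the column names and the baseline rule here are hypothetical:

        // The "spreadsheet test": rows of input columns plus one output column to predict.
        const rows = [
          { tenureMonths: 24, supportTickets: 1, monthlySpend: 40, churned: false },
          { tenureMonths: 2,  supportTickets: 5, monthlySpend: 15, churned: true  },
          // ...historical data with clear patterns
        ];

        // A naive rule-based baseline any ML model should have to beat:
        const predictChurn = (row) => row.tenureMonths < 6 && row.supportTickets >= 3;
        console.log(rows.map(predictChurn)); // [false, true]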

    2. AG

      Hmm.

    3. JN

      Looking into deep learning: use deep learning when you're dealing with perception tasks, like image, video, audio, or when the patterns are too complex for traditional machine learning to capture. And think about it this way: deep learning shines when humans can do the task easily, but it's really hard to write explicit rules for. For example, when I see your face, I know this is Aakash. It's easy for me, but if you ask me to write in code the features that make me think this is Aakash, it's impossible to translate this problem into if-then statements. And that's where deep learning comes in. So some examples are medical image diagnosis, or manufacturing defect detection, where computer vision can scan products on an assembly line and figure out: is this widget cracked, or is this label misaligned? Classic examples of where computer vision could be used. Voice assistants, which convert your speech to text, are also great with deep learning. So a question that you as a PM should ask is: is this a perception problem? Am I dealing with images, audio, video? If yes, and you need to understand what's in that media, deep learning is probably your friend. Now, here's the catch. Deep learning needs more data, more compute, and is less explainable than traditional machine learning.

    4. AG

      Right.

    5. JN

      A trade-off that you as a PM need to be aware of.

    6. AG

      And then Gen AI?

    7. JN

      Yeah, the hot topic. Use Gen AI when you want to understand, generate, or reason over natural language or images. Now, the breakthrough with LLMs isn't just that they can write; it's that they can read, comprehend context, reason across information, and respond appropriately, which is fundamentally different from any traditional AI system. So Gen AI is the right choice when you're dealing with a natural language interface, where your users need to interact with your product using conversational language, not just clicking buttons or filling forms. Gen AI is a good starting point there. Content generation is the other use case: you want to write copy, product descriptions, email drafts. If you're creating net-new text or images, Gen AI is a good fit. And when you need reasoning and synthesis: LLMs can take information from multiple sources, understand context, and make judgments. So for unstructured reasoning and synthesis, Gen AI is your friend. As a PM, the questions you have to ask are: does this task require reading or writing in natural language? Do I need common-sense reasoning, not just pattern matching? And are my users going to interact conversationally with this product? If the answer is yes to any of these questions, Gen AI is probably part of your solution.

    8. AG

      Yeah, and then there's the whole angle around AI agents as well, right? Where most people are building AI agents into their products, or they're building MCPs into their products so that agents can interact with their products. So you also probably need to be thinking about, you know, whether there are agents that can take all those skills you just talked about for Gen AI, the generative creating, the generative planning, into a

  9. 26:32 - 31:03

    AI agents: building blocks, workflows vs agents

    1. AG

      product.

    2. JN

      Yeah, and that's a great segue for us to get into the core building blocks that you have to know, starting with AI agents.

    3. AG

      Let's do it.

    4. JN

      So the first concept that we want to touch upon is: what are agents, or what is agentic AI? Agentic AI is a system that can make decisions and take actions, on your behalf or on its own, to achieve some goal. And you're not explicitly telling it what order it needs to follow. It understands your goal and tries to reason and find the path to go and achieve that goal. So the thing that truly differentiates AI agents is that they are goal-oriented.

    5. AG

      Yeah.

    6. JN

      Now, looking at the core building blocks for an AI agent, the first is perception. Perception is how the agent perceives information, like text input, images, sensor data, or API connections. This is basically how the agent receives input. The second building block of an agent is reasoning. This is how the agent processes information and makes decisions. This is where your models live, like your LLMs or your classification models or planning algorithms. The third building block is your execution or action system. This is how the agent affects its environment, whether through generating text, making API calls, or controlling hardware. This is how the agent actually takes action. And the fourth is learning. This is the feedback mechanism for how the agent evaluates outcomes and improves.

    7. AG

      So when do you use a workflow versus an agent?

    8. JN

      Yeah. So there's a huge difference between workflows and agents. Both are AI systems, by the way. Now, workflows are predetermined sequences of tasks where everything is defined in terms of how the process will execute. Think of these as automation pipelines where AI serves as a powerful component within that overall workflow. An example would be an invoice processing workflow, where you have step one, extract data from the PDF; step two, validate against these rules; step three, have the AI system evaluate; and step four, go and update multiple systems. Whereas agents are goal-oriented systems that can independently decide how to accomplish those objectives. So the key characteristics of workflows are predictable execution paths, human-defined decision trees of how things have to go, and deterministic outcomes, with a standard expectation of what the output will look like from one node to the next. Whereas the architecture of an agent is very different, so let me walk you through what that looks like. So here is an agent architecture. We have the agent; there is the model, there is memory, and then there are tools. The agent is the brain or the orchestrator. It controls the entire workflow, deciding what needs to be done and which tool needs to be called. The model here could be a language model or a machine learning model, so you could have your GPT or Claude or any of the models here. Memory is where it stores context and historical information. This is what allows your agent to be stateful, to be able to remember past conversations or previous actions. And tools are the general utilities that your agent can use to extend its capabilities beyond what the model alone could do. It could be a weather API or a booking system API. It could be a search API. It could be a code execution engine, any of these.
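      A minimal sketch of that agent architecture in JavaScript: the loop below assumes a hypothetical decideNextStep() call into a model (the brain), keeps stateful memory, and dispatches to stub tools named after the demo's weather and email tools.

        // Stub tools -- in a real system these would call actual APIs.
        const tools = {
          get_weather: async () => ({ tempC: 17 }),       // stub weather API
          send_email: async () => ({ status: "sent" }),   // stub email tool
        };

        // decideNextStep is a hypothetical model call that returns either
        // { done: true, answer } or { tool, args }.
        async function runAgent(goal, decideNextStep) {
          const memory = [{ role: "user", content: goal }];  // stateful memory
          for (let step = 0; step < 5; step++) {             // cap the loop
            const decision = await decideNextStep(memory);   // the brain decides
            if (decision.done) return decision.answer;       // goal reached
            const result = await tools[decision.tool](decision.args);
            memory.push({ role: "tool", content: JSON.stringify(result) });
          }
          return "stopped after 5 steps without reaching the goal";
        }

      Contrast this with a workflow, which would be a fixed sequence of those same calls with no decision-making in between.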

  10. 31:03 - 33:26

    Ads

    1. AG

      Today's podcast is brought to you by Pendo, the leading software experience management platform. McKinsey found that 78% of companies are using GenAI, but just as many have reported no bottom line improvements. So how do you know if your AI agents are actually working? Are they giving users the wrong answers, creating more work instead of less, improving retention or hurting it? When your software data and AI data are disconnected, you can't answer these questions. But when you bring all your usage data together in one place, you can see what users do before, during, and after they use AI, showing you when agents work, how they help you grow, and when to prioritize on your roadmap. Pendo Agent Analytics is the only solution built to do this for product teams. Start measuring your AI's performance with Agent Analytics at pendo.io/aakash. That's P-E-N-D-O.I-O/A-A-K-A-S-H. Today's episode is brought to you by NayaOne. In tech buying, speed is survival. How fast you can get a product in front of customers decides if you will win. If it takes you nine months to buy one piece of tech, you're dead in the water. Right now, financial services are under pressure to get AI live. But in a regulated industry, the roadblocks are real. NayaOne changes that. Their airgapped cloud-agnostic sandbox lets you find, test, and validate new AI tools much faster, from months to weeks, from stuck to shipped. If you're ready to accelerate AI adoption, check out NayaOne at nayaone.com/aakash. That's N-A-Y-A-O-N-E.com/A-A-K-A-S-H. Today's episode is brought to you by the experimentation platform Kameleoon. Nine out of 10 companies that see themselves as industry leaders and expect to grow this year say experimentation is critical to their business. But most companies still fail at it. Why? Because most experiments require too much developer involvement. Kameleoon handles experimentation differently. It enables product and growth teams to create and test prototypes in minutes with prompt-based experimentation. You describe what you want, Kameleoon builds a variation of your webpage, lets you target a cohort of users, choose KPIs, and runs the experiment for you. Prompt-based experimentation makes what used to take days of developer time turn into minutes. Try prompt-based experimentation on your own web apps. Visit kameleoon.com/prompt to join the wait list. That's K-A-M-E-L-E-O-O-N.com/prompt

  11. 33:26 - 43:40

    Building a workflow vs an agent in N8N

    1. JN

      And now let's do a hands-on exercise to go build a workflow, and then we'll also build an agent.

    2. AG

      Love this. Let's see it.

    3. AG

      Yeah.

    4. JN

      And for that, I will use N8N as my-

    5. AG

      And why N8N?

    6. JN

      A, it is low-code/no-code, so it's easy for anyone to go and build workflows or agents. It also has a very strong community, so you'll always find forums where, if you're stuck, you can ask questions and quickly get answers. So this really allows anyone to actually go build agents and workflows.

    7. AG

      Got it.

    8. JN

      So today, first we're going to build a workflow, just so we all understand what a workflow looks like. And then when we do the agent, we can see the difference. So in n8n, there are different types of nodes. There are trigger nodes, which are the nodes that start your automations. For example, trigger manually, or on a schedule, and so on. So there are different trigger events. First we'll start with our trigger event, which is to trigger manually. Next, on triggering, I want this to go and make an HTTP request. For weather, we're going to use something called Open-Meteo to get the information from that API. So here's this free weather API that we are going to use.

    9. AG

      And how do you find, like, good APIs?

    10. JN

      Good old plain Google search, where I start off with: all right, I want to build this, so what do I need? I need a weather API. Let's go and search. There are APIs for almost everything these days, so it's easiest to just search.

    11. AG

      Got it.

    12. JN

      So from here, I can search for what area I want to get weather details for. I live in Los Angeles, so I'm going to take Los Angeles information. So I'll say Try API, and I set the latitude and longitude, which I already have for Los Angeles, and I can say what all I need. It gives you a lot of options, like temperature; I can get rain, I can get cloud cover, a lot of things. I'm just okay with temperature for now. So I'm going to go ahead, and here's the API URL that I need to use. I'll take this API URL and go back to my n8n. Now, I'll go and create a node called HTTP Request. This node is what will go and make an HTTP request to a URL. I'm going to use the GET method, which pretty much calls this URL and gets the information back, and I'm going to use the API URL that we copied. And let's see. If I run the execute step here now, let's see what information we get. The right side is the output, so you can see the information that it captures from that API. I'm just going to pin it so we can use it later. Now, the information that we get is not in a form that's easy for a workflow to go and execute, so we're going to add a code node to do some modifications. I'm running the code in JavaScript. And now you may say, "All right, Jyothi, but I don't know how to code," which is great. What I pretty much did is I went to ChatGPT and said, "This is what I'm trying to code. I want to code in JavaScript. I'm going to paste this into n8n," and ChatGPT generated this code for me that I'm going to use now.
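      For reference, the HTTP Request node is doing the equivalent of this plain JavaScript GET call. The exact URL in the demo may differ; shown here is Open-Meteo's free forecast endpoint with the hourly temperature parameter and Los Angeles coordinates:

        // The same GET request the HTTP Request node makes, as plain JavaScript.
        // Open-Meteo needs no API key.
        const url =
          "https://api.open-meteo.com/v1/forecast" +
          "?latitude=34.05&longitude=-118.24&hourly=temperature_2m";

        const response = await fetch(url);
        const data = await response.json();
        // data.hourly.time and data.hourly.temperature_2m are parallel arrays.
        console.log(data.hourly.temperature_2m.slice(0, 3));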

    13. AG

      Easy enough.

    14. JN

      Very.

    15. AG

      So your normal go-to workflow is pretty much to use ChatGPT for all the coding that you need on n8n?

    16. JN

      Yeah, it's very easy that way. So it gave me this whole block of code to use, and I pretty much just took that and hit execute step so we can see how the node executes. So you see how it captured that information and turned it into this message: in Los Angeles, the high today is this temperature and the low today is this temperature. By the way, I'm still a Celsius person, so I do everything in Celsius.
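      A plausible version of the kind of code node ChatGPT generates for this step (not the actual code from the demo). In an n8n Code node, input items are read via $input and the node returns an array of { json } objects:

        // Read the Open-Meteo response from the previous node's output.
        const hourly = $input.first().json.hourly;
        const temps = hourly.temperature_2m;

        // Compute the day's high and low from the hourly temperatures.
        const high = Math.max(...temps);
        const low = Math.min(...temps);

        // n8n Code nodes return an array of items, each with a json payload.
        return [{
          json: {
            message: `In Los Angeles, the high today is ${high}°C and the low today is ${low}°C.`,
          },
        }];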

    17. AG

      [laughs]

    18. JN

      All right, so now we have this code, and now what I want to do is send me an email. You see there's no intelligence here. It is basically step after step after step. Now, one of the steps here could be an agent, which does something and then hands it over to the rest of the workflow. So here, n8n has great integrations. There's an integration for Gmail, so I can click on the Gmail integration, and then there are a lot of actions that it could take. I'm going to choose the send message node. I already have a credential, but if you don't have one, you just have to say Create new credential and do a Google auth, and that's about it. It'll create a credential for you. Easy peasy. And now let's say I want to send it to jyothi@nextgenproductmanager.com, and the subject should be Weather report for today. And for email type, we'll just keep it text, because HTML sometimes goes into spam, so we'll avoid all of that. And for the message, we'll just copy the message that we have from here. So I just dragged that message and put it into this field. And now if I execute this step, it has sent the email.

    19. AG

      That was easy. Wow.

    20. JN

      Very easy.

    21. AG

      So now we have a basic workflow.

    22. JN

      So you see I got this message saying, "Los Angeles, the high is this." So this is a basic workflow, but it has no intelligence, so we'll go and add some intelligence to it. Let's use the same n8n and now create an agentic workflow instead of a plain workflow. All right. We're starting from scratch again. We're going to add an on-chat-message trigger, and now we'll add an agent. By default it gives me an agent that I can now get started with. You can see how it has a model node, a memory node, and a tool node that we can add. Now I'm going to add the model node: an OpenAI chat model. This requires me to connect it to the OpenAI API. I have already done that, but if you have not, you could just create a new credential and add your OpenAI API key, and that will connect it. And you can also choose from the list what model you want. I'm okay with GPT-4.1 mini. So I've connected the agent to the model. I'll also give it a simple memory so that it remembers things and has a place to store them. And now let's add tools. One of the tools is a get weather tool, which is an HTTP request. I'm going to create an HTTP request. The same things that we did before, we are going to do again. The method is still the GET method, and I'm going to paste the same URL that we got from our weather API. And then I'm going to add one more tool, which is the email. I'm going to say Gmail, and I'm going to add the same information: Weather today. And unlike our workflow, where we had to define the message, here we can just let the model define it automatically: based on whatever message it's getting, the agent can decide what that message should be. I can also add a description and say, "Unique weather information," and that's it. You can see how we're not defining any code. We're not saying how to convert that into a particular phrase. We're not writing any of those steps. We go straight to saying: here is the HTTP request, here is a tool to send an email, and now we let the agent do all of these tasks.
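      Under the hood, "giving the agent a tool" amounts to handing the model a name, a description, and a parameter schema it can fill in. This is the common JSON function-calling shape; exact field names vary by provider, so treat this as a sketch:

        const agentTools = [
          {
            name: "get_weather",
            description: "Get today's temperatures for a latitude/longitude.",
            parameters: {
              type: "object",
              properties: {
                latitude: { type: "number" },
                longitude: { type: "number" },
              },
              required: ["latitude", "longitude"],
            },
          },
          {
            name: "send_email",
            description: "Send an email. Use only when the user asks to send one.",
            parameters: {
              type: "object",
              properties: {
                to: { type: "string" },
                subject: { type: "string" },
                body: { type: "string" },
              },
              required: ["to", "subject", "body"],
            },
          },
        ];
        // The description is what the agent reads to decide whether a tool is
        // relevant to the current request.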

    23. AG

      Okay.

    24. JN

      So now let's run this. I can type a message and say, "What is the weather today in Los Angeles?" You can see it's running. It went and called the HTTP request tool, and it gave me the information: the weather today in Los Angeles has temperatures from 14.5 Celsius early in the day, rising to about 17 Celsius in the morning. All right, so it gave me information about the weather, and you can see it didn't execute Gmail. I never said, "Don't execute this, don't execute that," but it didn't execute it, because the agent determined that all it needs is the get weather tool, and that's the only tool it needs to use. But now if I say send the message, it uses my Gmail tool to send me a message. So let's look at that. I got this message from the agent. So this is a classic example of how we are not telling the agent which tool to use. The agent determined it based on the question we asked and based on the task.

    25. AG

      That's what makes it actually AI, an actual AI workflow, not just a regular no-code workflow.

    26. JN

      Yeah.

    27. AG

      Awesome. So how do we go further here? What's next? Are we gonna learn RAG systems?

    28. JN

      Yeah, but before we get to RAG, I wanna talk about a critical concept, um, that every product manager working with

  12. 43:40 - 48:15

    Prompt engineering and context engineering

    1. JN

      AI agents needs to know, and that's prompt engineering and context engineering.

    2. AG

      Yes.

    3. JN

      So let me start with prompt engineering, because this is where most of us begin our journey with AI agents. Think of prompts as the primary interface between you and the AI system, and when I say primary interface, it literally is: the prompt is how you tell the AI agent what to do, how to behave, and what outcomes to expect. First, there are system prompts. These set the overall behavior and personality of your agent. So for example, if you're building a customer service agent, your system prompt might establish an empathetic personality, that the agent has to be professional and always verify customer identity. The second is the user prompt. These are the prompts that an actual user inputs to the LLM or chat product. Now, that's simple enough, but here's where it gets interesting: how you design your system to handle the unpredictable nature of user inputs is what determines how your agent responds, because users won't always ask questions the way you expect them to. And that's where the power of prompt engineering techniques like few-shot comes into the picture. Few-shot examples are where you show the AI what good responses look like by providing sample responses.
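      A minimal sketch of few-shot prompting in the widely used chat-messages shape (role/content pairs): show the model example exchanges before the real user input. The specific messages here are hypothetical:

        const messages = [
          {
            role: "system",
            content: "You are an empathetic, professional customer service agent. Always verify customer identity before discussing account details.",
          },
          // Few-shot example of what a good response looks like:
          { role: "user", content: "my order never showed up!!" },
          {
            role: "assistant",
            content: "I'm so sorry about that. Could you confirm the email on the order so I can look into it right away?",
          },
          // The real, unpredictable user input comes last:
          { role: "user", content: "I was charged twice this month." },
        ];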

    4. AG

      This is the really underrated one, where you actually put in an example, "This is a good response. This is a bad response." People think this is a lot of work, but when you're engineering the system prompt for an agent, it's actually worth it.

    5. JN

      Yeah, and I have found this to be incredibly powerful in production systems: provide your AI with concrete examples of what good responses look like. So instead of trying to describe what you want in abstract terms, you show the AI concrete examples of ideal interactions. Now that you know prompt engineering, let's move on to context engineering. Context engineering is where the magic happens in production systems, because context engineering is about managing everything the AI needs to know to do its job effectively, and doing it within the constraint of context windows. AI models have context windows, a limit on how much information they can process at once. Claude Sonnet, for example, has a 200K token context window. Now, that might sound like a lot until you start loading in your company's knowledge base, the conversation history, the real-time data, the user prompt. Suddenly you're making hard decisions about what stays and what goes. So I think about context engineering in three layers. First, there's immediate context: the current conversation or task the user is having. The second is session context: the user's recent interactions in this session. And the third is knowledge context: the broader information that your agent needs to reference. And here's something that I have learned the hard way: context window management directly impacts your cost, because every token you process costs money. If you're carelessly loading your entire knowledge base into every interaction, you're burning through your budget really fast. So context engineering is more like an art of knowing what to load and when.
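      A sketch of that budgeting idea in JavaScript, assuming a crude tokens ≈ characters / 4 estimate (real tokenizers differ): the immediate context loads first, then as much recent session history and knowledge as the budget allows.

        const estimateTokens = (text) => Math.ceil(text.length / 4);

        function buildContext(userPrompt, sessionHistory, knowledgeSnippets, budget = 8000) {
          const parts = [userPrompt];                    // immediate context first
          let used = estimateTokens(userPrompt);
          // Most recent history first, then knowledge-base snippets.
          const candidates = [...sessionHistory.slice().reverse(), ...knowledgeSnippets];
          for (const piece of candidates) {
            const cost = estimateTokens(piece);
            if (used + cost > budget) continue;          // over budget: leave it out
            parts.push(piece);
            used += cost;
          }
          return parts.join("\n\n");                     // every token here costs money
        }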

    6. AG

      And what's an example of that that you learned the hard way? Did you guys overspend at one of these companies 'cause you just had engineered way too much context into it?

    7. JN

      Yeah, and that's when we actually started to understand that there is a way to dynamically figure out, based on the request, what context is needed, what information from your knowledge base should be loaded in. And that's where prompt flow and orchestration patterns come in.

    8. AG

      Okay.

    9. JN

      So when people say prompt engineering is dead, it is not dead. It is part of this holistic context engineering, um, that encompasses prompting strategies as well.

    10. AG

      So how do we dynamically pull in this information

  13. 48:15 - 58:57

    RAG systems explained and built in Langflow

    1. AG

      quickly?

    2. JN

      So that's where we're going to learn about RAG, which helps you, based on the prompt, retrieve the right information and load it. So let's dive into RAG. So-

    3. AG

      And for my money, this is the most important skill, guys. This is the point to lift your eyes up off your phone and think deeply about how you're going to learn this concept, because every enterprise that's implementing AI internally for their workflows, every product, they're generally using a RAG system.

    4. JN

      Absolutely. And like I say, RAG is nothing but retrieval-augmented generation. It's very simple, but it provides a tremendous amount of value. So when people say, "Oh, should I go and fine-tune my model?" I'll be like, "No, let's start with RAG, because RAG might solve 80% of your problems." Now, like the word says, retrieval: it retrieves information from the knowledge base, and then it augments the user input with that information before passing it to the LLM. That allows the LLM to have the context to generate an output that is deeply rooted in the knowledge base of that company. So let's look at RAG systems. Let's say you have a document, several documents of course, in a company. You chunk them, and chunking is nothing but breaking them down into smaller pieces. Think of it like a storybook: if you say that after every 20 pages you just rip it, that could be one chunking strategy, a fixed chunking strategy. So you take the document, you break it down into smaller chunks, then you pass them through an embedding model and store them in a vector database. Now, when a user queries, the user query is also passed to this embedding model, converted into a vector, and this vector goes into the vector database to find the nearest neighbors, the similar chunks that would answer this user's question. It retrieves that information from the database, adds it along with the user input, and passes this to the LLM. The LLM now has the relevant documents from the vector database and the user input to generate a response that is fundamentally rooted in that information. And something to talk about here is fine-tuning. Many people reach out to me and ask, "Can we take an LLM and fine-tune it to our use case?" Fine-tuning should never be your first option, maybe not even your second option. It's something that you consider after you have exhausted prompt engineering, context engineering, and RAG approaches. A practical hierarchy that I recommend before you start fine-tuning: first, start with prompt optimizations to see if you can get better results. Then optimize your context engineering to figure out what context is being loaded into the context window of the LLM. Then implement RAG. In most of the cases that I see, almost 80% of use cases get solved with RAG versus fine-tuning.
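      Here is that whole pipeline in miniature, as a JavaScript sketch. The embed() function is a hypothetical stand-in for a real embedding model API, and the in-memory array stands in for a real vector database; everything else is plain JavaScript:

        // Hypothetical embedding call -- wire up a real model; must return number[].
        async function embed(text) {
          throw new Error("replace with a real embedding model call");
        }

        // 1. Chunk: fixed-size chunking -- the "rip the storybook every 20 pages" idea.
        function chunkDocument(doc, size = 500) {
          const chunks = [];
          for (let i = 0; i < doc.length; i += size) chunks.push(doc.slice(i, i + size));
          return chunks;
        }

        // 2. Ingest: embed each chunk and store text + vector (a tiny in-memory "vector database").
        async function ingest(doc) {
          return Promise.all(
            chunkDocument(doc).map(async (text) => ({ text, vector: await embed(text) }))
          );
        }

        // Cosine similarity: the "nearest neighbor" measure between two vectors.
        function cosine(a, b) {
          let dot = 0, na = 0, nb = 0;
          for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
          return dot / (Math.sqrt(na) * Math.sqrt(nb));
        }

        // 3. Retrieve + augment: embed the query, pull the top-k chunks, and
        // prepend them to the user input before it goes to the LLM.
        async function ragPrompt(query, store, k = 3) {
          const queryVector = await embed(query);
          const top = store
            .map((c) => ({ ...c, score: cosine(queryVector, c.vector) }))
            .sort((a, b) => b.score - a.score)
            .slice(0, k);
          return `Context:\n${top.map((c) => c.text).join("\n---\n")}\n\nQuestion: ${query}`;
        }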

    5. AG

      Especially if you've done really good prompt engineering at the top.

    6. JN

      Absolutely, and only if these three don't suffice, then you should go for fine tuning.

    7. AG

      I think because fine tuning is there in the API documentation, people immediately jump to it, but first follow the sequence.

    8. JN

      So let's see how to build a RAG system.

    9. AG

      I'm excited for this one.

    10. JN

      So we are going to use something called Langflow. Again, Langflow is a no-code tool that allows you to build RAG systems with just blocks and connections.

    11. AG

      And why Langflow? Why not N8N or any other tools?

    12. JN

      You could use N8N too. What I have seen is that Langflow takes a more AI-first approach to agentic systems, and therefore it's easier to build RAG systems in Langflow, but you can build RAG systems in N8N as well.

    13. AG

      Got it.

    14. JN

      I would say this is just another tool that I'm introducing, so anyone who is comfortable with N8N could try it there too. Langflow is another tool that also sits in very nicely with the LangChain and LangSmith community, so some of your tracing capabilities and all of that can come through easily as well.

    15. AG

      Got it.

    16. JN

      So, starting with an empty blank slate, first we are going to build the flow where we take a document and chunk it up into pieces. For that, we do the load data flow, which starts with a file. Given a file node, I can select a file and add it. In this case, I'm adding the State of AI in Business 2025 report. So I'm adding that, and then I need to split text. This is where I'm chunking my document. You can see it provides different options, like chunk overlap and chunk size. I'm just going to keep them as is, and I'm going to connect this file to this input, so this file will go into the split text node and get chunked up. I'm also going to add an OpenAI embedding node, because I'm going to use OpenAI's embedding model, and I'm going to use text-embedding-3-small. Again, I've already given my API key, but if you haven't and you're using it for the first time, you'll have to give your OpenAI API key here. Next, you need a vector database. There are lots of options in Langflow. You could use Pinecone, you could use ChromaDB. I'm just going to use Astra DB, and Astra DB also has an API key, which I have already provided here. In terms of the database, I've created a database called RAG Demo, but you can also create a database by clicking on Plus New Database and creating a fresh new one. Once you've selected the database, you have to choose the collection where these chunks will get stored. I am choosing Langflow as my collection, which I've created. You can go and create any new collection from here. Now with that, I'm ready to make my connections. The chunks that get passed from the split text, I'll connect into ingest data, and my embedding model, I'll connect to the embedding model input on the Astra DB. This is our load data flow. Let's run this. So the flow was built successfully. We don't have much to see here, because all it did was take the file, chunk it up, and save it into our database in Astra DB. So now let's build a retriever flow, where a user can ask a question and we retrieve answers from that document. We'll start with a chat input, because a user needs a way to ask a question. We are building a retriever flow. We'll also have our embedding model; remember, even the input is vectorized. So we'll add the same embedding model we used in the data flow. Now that vectorized input goes to your Astra DB to search across those documents. So here I'm choosing the same database, and I'm going to choose the same collection where it has records. Now I'm connecting my embedding model, and I am connecting my chat input as the search query. Now, the data that comes back from this needs to be parsed, so we'll add a parser and connect a data frame. In this case, if you see here, if I convert this into tool mode, here you see search results. If you just click on there, you can choose that, instead of search results, I want it to be a data frame. So I choose my data frame, and I connect it to the data frame piece of my parser. From here, I need to create a prompt template where I can give instructions. So I'm going to choose a prompt template, where I'm going to give system instructions, and I'm going to take the context from before. I'm going to say context. You can see that I've created this prompt variable, context, and I say, "Given the context above, answer the question as best as possible." And then we'll add our language model. We'll do our connections in a second, and then we'll have an output from the language model. All right, we have built all of these frames. Now let's just start connecting them. So the prompt from here goes into the input. We have the context here.

    17. AG

      So we're making that a prompt variable.

    18. JN

      Correct, and that way I can add the question too. I can take my chat input and connect it to the question, so it receives that input. It also takes the context, which is connected to the input, and now we just have to connect the model response to the output.

    19. AG

      Okay.

    20. JN

      That's about it. Now let's run it and see. The flow was built successfully, so if I go to the playground, I can ask a question. This is where we'll go back to what I built before and show it.

    21. AG

      Yep.

    22. JN

      And so if I ask, "What is this document about?", it finds the document by its report title, The Gen AI Divide: State of AI in Business 2025, and gives me more information about what this document is.

    23. AG

      Nice. We've built a RAG system, and people have got to see the basics of RAG. So that about covers all of the building blocks: we went through when you should even build with AI, which AI techniques to choose, the key building blocks of AI, prompt engineering, context engineering, workflows versus agentic workflows, and finally RAG systems. This is the roadmap you want to follow, not just watching these podcasts but going out and implementing on your own, so that you know these cold, and so that when you land your AI PM job, you can help engineering teams actually build them. You're not going to use a no-code tool to build an actual product, but by building with a no-code tool, you learn the ins and outs and you build the intuition. Take the intuition we talked about, "always do RAG before fine-tuning": if you try fine-tuning for a problem and then try RAG for the same problem, you'll very quickly build that intuition on your own, and it'll make you an effective PM in these situations.
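If you want to see what Langflow is wiring together under the hood, here is a minimal sketch of the same two flows in plain Python. Treat it as a sketch under assumptions, not the exact demo: the file name state_of_ai_2025.txt is hypothetical, an in-memory numpy array stands in for the Astra DB collection, and gpt-4o-mini is an arbitrary model choice. Only the chunk-embed-store and embed-search-prompt structure, the text-embedding-3-small model, and the "Given the context above" instruction come from the episode.

```python
# Minimal RAG sketch mirroring the Langflow demo. Assumes the openai and
# numpy packages are installed and OPENAI_API_KEY is set in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def split_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Character-based chunking with overlap, like Langflow's Split Text."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


def embed(texts: list[str]) -> np.ndarray:
    """Embed with the model used in the demo, text-embedding-3-small."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])


# Load data flow: file -> split text -> embeddings -> vector store.
with open("state_of_ai_2025.txt") as f:  # hypothetical local export of the report
    chunks = split_text(f.read())
chunk_vectors = embed(chunks)  # in-memory stand-in for the Astra DB collection


# Retriever flow: chat input -> embed query -> search -> prompt -> LLM.
def answer(question: str, k: int = 4) -> str:
    q = embed([question])[0]
    # Brute-force cosine similarity; a real vector database such as Astra DB
    # or Pinecone runs this search against an approximate index instead.
    sims = chunk_vectors @ q / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n\n".join(chunks[i] for i in np.argsort(sims)[-k:])
    prompt = (
        f"{context}\n\n"
        "Given the context above, answer the question as best as possible.\n\n"
        f"Question: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the demo doesn't pin one
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


print(answer("What is this document about?"))
```

Calling answer("What is this document about?") mirrors the playground query from the episode: the question is embedded, the nearest chunks come back as context, and the language model answers from that context.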

  14. 58:57 - 1:02:00

    The AIPM career playbook and portfolio strategy

    1. AG

      So, you have cracked into AI PM, and a lot of people want to crack in. What is the right roadmap? What are the best hands-on projects to build to become an AI PM?

    2. JN

      I would say: don't think of them as projects; think of it as building products. Pick a use case, a pain point that you have, and build for that use case. Then, rather than stopping once you've built it, actually take it forward. Think of it as a product. Launch it. Have your friends and family try to use it. Now, all of a sudden, you have real users, and you'll have things that break, so you are doing what a real product manager would do. Taking it from a project to a product gives you the confidence to put it on your resume, and you can give recruiters much more information and clarity when you talk to them, rather than saying, "Oh, I attended this course," or, "I did this project." Now you have much richer details: it breaks in these use cases, and here were the challenges I had to overcome. That gives you much richer information and data to go with.

    3. AG

      So products, not projects. Should people be creating a portfolio, and if so, what does a good portfolio look like?

    4. JN

      Yeah. Your portfolio should definitely include some sort of app you've built that solves a real problem, and there are a lot of no-code prototyping tools today that you could use. Build an agent; we just built a simple one, so build an agent that solves a problem. Build a RAG system; we just saw how. Look for problems within your own area and build examples of those for your portfolio, and don't just call them projects: get real users and convert them into products. Now, all of a sudden, your resume has three products that you have orchestrated.

    5. AG

      How important are certificates? What does an AWS ML certification get you?

    6. JN

      Yeah. Certificates are a great way to signal to recruiters and hiring managers that what you have learned is not just theoretical but also credible and accredited. For example, I offer the NextGen Product Manager AI product management course, where we teach everything you need to learn about AI product management. You could learn those concepts, do a lot of hands-on projects, and then go take the AWS AI Practitioner certificate. That gives you a credible certificate to show hiring managers that what you've learned isn't just self-taught; it's also accredited by AWS.

    7. AG

      Speaking of AWS, I wanna talk about you and your career a little bit. You worked at AWS. You worked at Meta. You worked at Netflix. How do the

  15. 1:02:00 - 1:07:15

    How PM cultures differ at Amazon, Meta, and Netflix

    1. AG

      AIPM cultures differ at those three companies?

    2. JN

      Yeah, very different. Let me start with Amazon, or AWS, which is where I started my career. It's a very customer-obsessed, document-writing kind of culture. Amazon invented the PR FAQ and the six-pager, where everything starts with a press release and a frequently asked questions document before engineering writes a single line of code or a design mockup is created. The philosophy is that we work backwards from the customer: you start from the customer problem, articulate why existing solutions don't work, and then explain how your product solves it. That's the PR FAQ, or the six-pager used for strategic reviews. And it's not just a document for the sake of a document; it's taken very seriously. The PR FAQ is reviewed all the way up by your VP, or sometimes even Andy Jassy. It's a very writing-heavy culture; I think Amazon PMs spend probably 40 to 50% of their time writing documents.

    3. AG

      Whoa.

    4. JN

      So you become an exceptional writer at Amazon; there's just no way around it. Meta is the complete opposite in terms of process. If Amazon is about rigorous upfront planning, Meta is all about experimentation and iteration, and the cultural ethos reflects that: move fast. At Meta, product managers are expected to be deeply technical. You're able to understand the code base, dig into how something was implemented, and talk about how to ship multiple variants and test them against your control groups, and you let the data tell you what works. Of all the companies I've worked at, Meta has the most sophisticated experimentation infrastructure in the industry, and as a PM there, you live and breathe statistical significance. At Netflix, the philosophy is context over control, perhaps the most unique approach to product management among big tech. Instead of rigid processes, documentation requirements, or approval hierarchies, Netflix invests heavily in making sure everyone understands the strategic context, and then they trust you to operate independently within that context. So you're expected to be an exceptional communicator. You don't always have to be as formal with documents as Amazon expects, but it's all about building alignment through conversation, presentation, and shared understanding, so you need to be very comfortable with ambiguity and able to define your own scope.
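Since the Meta section comes down to shipping variants against control groups and living in statistical significance, here is a minimal sketch of what that check often reduces to in practice: a two-proportion z-test on conversion rates. The counts below are made-up example numbers, and statsmodels stands in for whatever internal experimentation tooling a company like Meta actually uses.

```python
# Minimal variant-vs-control significance check, assuming statsmodels is
# installed. The conversion and exposure counts below are invented examples.
from statsmodels.stats.proportion import proportions_ztest

conversions = [1180, 1050]   # successes: variant, control
exposures = [20000, 20000]   # users exposed to each arm

# One-sided test: did the variant convert better than the control?
z, p_value = proportions_ztest(conversions, exposures, alternative="larger")
print(f"z = {z:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests a real lift
```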

    5. AG

      All three of those companies, Meta, Amazon, and Netflix, are kind of notorious for having hard, performance-oriented cultures. Amazon just laid off 30,000 people. Meta has the rolling layoffs. Netflix is known for it too; even Reed Hastings has slowly stepped back from his own role, and different people will leave. There's pressure everywhere. How is it working at these companies? Would you recommend it to other people?

    6. JN

      It's an absolutely phenomenal experience, and I've learned a lot from working at these companies. At Amazon, I built the documentation and customer-thinking rigor; the first thing you learn, and it gets ingrained in you, is working backwards from a customer pain point. At Meta, it's all about how to test quickly: once I know what I want to build, how do I test it quickly, and what should that experimentation culture look like? And at Netflix, I truly learned what autonomy means, the power of autonomy, and the shared experience of building consensus and working together toward a shared vision. It shapes who you are as a person and the kind of insights you get as a product manager, and the scale is phenomenal across Amazon, Meta, and Netflix: every feature you build is probably used by millions of users. The scale you get to work with is amazing, and that's an experience I would encourage everyone to have at some point in their career.

    7. AG

      Hmm.

  16. 1:07:15 - 1:11:15

    Why Jyothi left Netflix

    1. AG

      So why did you leave Netflix? Director of AIPM at Netflix feels like a dream job. What was the story?

    2. JN

      Yeah. So I have been an AI PM for the past 13 and a half years; believe it or not, AI existed before LLMs. I've been in the field for that long, and I have about 12 patents in AI. With so much AI growth happening, I thought to myself, "I really derive a lot of satisfaction from teaching product professionals how to transition into being an AI PM." I've been teaching for the past two and a half years, and the greatest satisfaction I get is when someone says, "Your experience and your insights were so powerful that I was able to go crack that interview, and now my pay is 2X what I used to get." Immense satisfaction, and in a career-changing way. So I said, "You know what? With so much opportunity out there and AI jobs increasing, I want to take the time to go full-time into this," to spend my time teaching, and consulting with companies to draft their AI strategy, applying the learnings I have from leading scaled AI businesses and products to help their portfolios. So I took the jump.

    3. AG

      I ask this question knowing you can share as much or as little as you want. Obviously Netflix is known to pay well, and you've worked at places like Meta and Amazon, which are also known to pay well, so people would assume you were raking in the dough. What can you share with us? How is the business of Jyothi doing now that she's no longer a full-time PM?

    4. JN

      So I'm a newcomer to doing this full-time. Although I've been teaching for the past two and a half years, I did it part-time, and back then I only taught AI PM because I just didn't have the bandwidth for more. Now that I'm going full-time, I've added two new courses. One dives deep on agentic AI; it's for someone who already knows the AI PM fundamentals and is now looking to lead and build agentic AI products. I also introduced a PM accelerator, specifically helping professionals crack product interviews, be it product sense, product execution, or behavioral. I'm seeing great interest across all three from different groups. Most of the folks I work with are getting into AI for the first time, and although I don't advertise the agentic AI course or the accelerator externally, those courses run full because my previous students who took AI PM continue down the funnel to agentic AI and the PM accelerator. It's been a great experience going full-time. I'm just two months in, so it's probably too early to say how things will go, but I'm really excited about it.

    5. AG

      If I'm reading between the lines, you might not have hit Director of AI PM at Netflix money yet, but you clearly see a path to getting to more. Is that fair to say?

    6. JN

      Absolutely, yes.

    7. AG

      All right. That is the potential you can reach as a course instructor. Jyothi, thank you for sharing your knowledge so in-depth and so freely with all of us. I really appreciate having you on the podcast.

    8. JN

      Thank you so much for having me. I am thrilled to be here.

  17. 1:11:15 - 1:12:04

    Outro

    1. AG

      All right, everyone, we'll catch you later. I hope you enjoyed that episode. If you could take a moment to double-check that you've followed on Apple Podcasts and Spotify, subscribed on YouTube, left a rating or review on Apple or Spotify, and commented on YouTube, all of these things help the algorithm distribute the show to more and more people. As we distribute the show to more people, we can grow the show and improve the quality of the content and production to get you better insights to stay ahead in your career. Finally, do check out my bundle at bundle.aakashg.com to get access to nine AI products for an entire year for free. This includes Dovetail, Mobbin, Linear, Reforge Build, Descript, and many other amazing tools that will help you, as an AI product manager or builder, succeed. I'll see you in the next episode.

Episode duration: 1:12:13
