Lenny's Podcast

Reganti & Badam: Why most AI products fail in production

Why treating LLMs as non-deterministic APIs and earning autonomy beats hype; human-in-the-loop calibration prevents the failures that sink AI products.

Lenny Rachitsky (host) · Aishwarya Naresh Reganti (guest) · Kiriti Badam (guest)
Jan 11, 2026 · 1h 26m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–5:03

    Introduction to Aishwarya and Kiriti

    1. LR

      We worked on a guest post together. They had this really key insight that building AI products is very different from building non-AI products.

    2. AR

      Most people tend to ignore the non-determinism. You don't know how the user might behave with your product, and you also don't know how the LLM might respond to that. The second difference is the agency control trade-off. Every time you hand over decision-making capabilities to agentic systems, you're kind of relinquishing some amount of control on your end.

    3. LR

      This significantly changes the way you should be building product.

    4. KB

So we recommend building step by step. When you start small, it forces you to think about: what is the problem that I'm gonna solve? In all these advancements of AI, one easy slippery slope is to keep thinking about the complexities of the solution and forget the problem that you're trying to solve.

    5. AR

      It's not about being the first company to have an agent among your competitors. It's about, have you built the right flywheels in place so that you can improve over time?

    6. LR

      What kind of ways of working do you see in companies that build AI products successfully?

    7. AR

I used to work with the now-CEO of Rackspace. He would have this block every day in the morning, which would say, "Catching up with AI, four to six AM." Leaders have to get back to being hands-on. You must be comfortable with the fact that your intuitions might not be right, and that you probably are the dumbest person in the room, and you want to learn from everyone.

    8. LR

      What do you think the next year of AI is gonna look like?

    9. KB

Persistence is extremely valuable. Successful companies building in any new area right now are going through the pain of learning this, implementing this, and understanding what works and what doesn't. Pain is the new moat.

    10. LR

[upbeat music] Today, my guests are Aishwarya Reganti and Kiriti Badam. Kiriti works on Codex at OpenAI and has spent the last decade building AI and ML infrastructure at Google and at Kumo. Ash was an early AI researcher at Alexa and Microsoft and has published over thirty-five research papers. Together, they've led and supported over fifty AI product deployments across companies like Amazon, Databricks, OpenAI, Google, and both startups and large enterprises. Together, they also teach the number one rated AI course on Maven, where they teach product leaders all of the key lessons they've learned about building successful AI products. The goal of this episode is to save you and your team a lot of pain and suffering and wasted time trying to build your AI product. Whether you are already struggling to make your product work or want to avoid that struggle, this episode is for you. If you enjoy this podcast, don't forget to subscribe and follow it in your favorite podcasting app or YouTube. It helps tremendously. And if you become an annual subscriber of my newsletter, you get a year free of a ton of incredible products, including a year free of Lovable, Replit, Bolt, Gamma, n8n, Linear, Devin, PostHog, Superhuman, Descript, Wispr Flow, Perplexity, Warp, Granola, MagicPad, AndroidCast, Chapter, dmobit, and Stripe Atlas. Head on over to lennysnewsletter.com and click Product Pass. With that, I bring you Aishwarya Reganti and Kiriti Badam, after a short word from our sponsors. This episode is brought to you by Merge. Product leaders hate building integrations. They're messy, they're slow to build, they're a huge drain on your roadmap, and they're definitely not why you got into product in the first place. Lucky for you, Merge is obsessed with integrations. With a single API, B2B SaaS companies embed Merge into their product and ship two hundred and twenty-plus customer-facing integrations in weeks, not quarters. 
Think of Merge like Plaid, but for everything B2B SaaS. Companies like Mistral AI, Ramp, and Drata use Merge to connect their customers' accounting, HR, ticketing, CRM, and file storage systems to power everything from automatic onboarding to AI-ready data pipelines. Even better, Merge now supports the secure deployment of connectors to AI agents with a new product, so that you can safely power AI workflows with real customer data. If your product needs customer data from dozens of systems, Merge is the fastest, safest way to get it. Book and attend a meeting at merge.dev/lenny, and they'll send you a fifty-dollar Amazon gift card. That's merge.dev/lenny. This episode is brought to you by Strella, the customer research platform built for the AI era. Here's the truth about user research: It's never been more important or more painful. Teams want to understand why customers do what they do, but recruiting users, running interviews, and analyzing insights takes weeks. By the time the results are in, the moment to act has passed. Strella changes that. It's the first platform that uses AI to run and analyze in-depth interviews automatically, bringing fast and continuous user research to every team. Strella's AI moderator asks real follow-up questions, probing deeper when answers are vague, and surfaces patterns across hundreds of conversations, all in a few hours, not weeks. Product, design, and research teams at companies like Amazon and Duolingo are already using Strella for Figma prototype testing, concept validation, and customer journey research, getting insights overnight instead of waiting for the next sprint. If your team wants to understand customers at the speed you ship products, try Strella. Run your next study at strella.io/lenny. That's S-T-R-E-L-L-A.io/lenny.

  2. 5:03–7:36

    Challenges in AI product development

    1. LR

      [upbeat music] Ash and Kiriti, thank you so much for being here, and welcome to the podcast.

    2. AR

      Thank you, Lenny.

    3. KB

      Thank you. Thank you for having us. Super excited for this.

    4. LR

      Let me set the stage for the conversation that we're gonna have today. So you two have built a bunch of AI products yourself. You've gone deep with a lot of companies who, uh, have built AI products, have struggled to build AI products, build AI agents. You also teach a course on building AI products successfully, that... And you're kind of, like, on this mission to just reduce pain and suffering and failure, uh, that you constantly see people go through when they're building AI products. So to set a little just foundation for the conversation we're gonna have, what are you seeing on the ground within companies trying to build AI products? What's going well? What's not going well?

    5. AR

I think 2025 has been significantly different than 2024. One, the skepticism has significantly reduced. There were tons of leaders last year who probably thought this would be yet another crypto wave and were kind of skeptical to get started. And a lot of the use cases that I saw last year were more of slap a chat on your data, right? And that was, you know, calling themselves an AI product. And this year, a ton of companies are really rethinking their user experiences and their workflows and all of that, and really understanding that you need to deconstruct and reconstruct your processes in order to build successful AI products, right? And that's the good stuff. The bad stuff is the execution is still all over the place. Think of it, right: this is a three-year-old field. There are no playbooks, there are no textbooks. So you really need to figure it out as you go. And the AI life cycle, both pre-deployment and post-deployment, is very different as compared to a traditional software life cycle. And so a lot of the old contracts and handoffs between traditional roles, like say, PMs and engineers and data folks, have now been broken. People are really adapting to this new way of working together and kind of owning the same feedback loop. Because previously, I feel like PMs and engineers and all of these folks had their own feedback loops to optimize, and now you probably need to be sitting in the same room. You're probably looking at agent traces together and deciding how your product should behave, so it's a tighter form of collaboration. So companies are still kind of figuring that out. That's kind of what I see in my consulting

  3. 7:36–13:19

    Key differences between AI and traditional software

    1. AR

      practice this year.

    2. LR

      So let me follow that thread. We worked on a guest post together that came out a few months ago, and the thing that stood out to me most, that stuck with me most after working on that post is, you have this really, uh, key insight that building AI products is very different from building non-AI products. And the thing that you're big on getting across is there's two very big differences. Talk about those two differences.

    3. AR

Yes, and again, I wanna make sure that we drive home the right point. There are tons of similarities between building AI systems and software systems as well, but then there are some things that kind of fundamentally change the way you build software systems versus AI systems, right? And one of them that most people tend to ignore is the non-determinism. You're pretty much working with a non-deterministic API as compared to traditional software. What does that mean, and why does it affect us? In traditional software, you pretty much have a very well-mapped decision engine or workflow. Think of something like Booking.com, right? You have an intention that you wanna make a booking in San Francisco for two nights, et cetera. The product has been built so that your intention can be converted into a particular action, and you are clicking through a bunch of buttons, options, forms, and all of that, and you finally achieve your intention. But now that layer in AI products has completely been replaced by a very fluid interface, which is mostly natural language. Which means the user can literally come up with a ton of ways of saying, or communicating, their intentions, right? And that changes a lot of things, because now you don't know how your user is going to behave. That's on the input side, and on the output side, you're working with a non-deterministic, probabilistic API, which is your LLM. And LLMs are pretty sensitive to prompt phrasings, and they're pretty much black boxes, so you don't even know what the outputs will look like, right? So you don't know how the user might behave with your product, and you also don't know how the LLM might respond to that. You're now working with an input, an output, and a process, and you don't understand all three very well. You're trying to anticipate behavior and build for it. 
And with agentic systems, this gets even harder, and that's where we talk about the second difference, which is the agency control trade-off, right? What we mean by that, and I'm kind of shocked so many people don't talk about this: they're extremely obsessed with building autonomous systems, agents that can do work for you. But every time you hand over decision-making capabilities or autonomy to agentic systems, you're relinquishing some amount of control on your end, right? And when you do that, you wanna make sure that your agent has gained your trust, or it is reliable enough that you can allow it to make decisions. And that's where we talk about this agency control trade-off, which is: if you give your AI agent or your AI system, whatever it is, more agency, which is the ability to make decisions, you're also losing some control. And you wanna make sure that the agent or the AI system has earned that ability or has built up trust over time.
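The "non-deterministic API" point above has a direct engineering consequence: the rest of your deterministic system should never trust raw model output. A minimal sketch of that defensive pattern, assuming a hypothetical `call_model` wrapper and the Booking.com-style intent from the example (all names here are illustrative, not from the episode):

```python
import json

# Treat the LLM as a non-deterministic API: validate every response against
# the shape the deterministic downstream system expects, retry a few times,
# and escalate to a human rather than pass unvetted output along.

REQUIRED_FIELDS = {"city": str, "nights": int}

def parse_booking_intent(raw: str):
    """Return a validated intent dict, or None if the output is malformed."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    for field, typ in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), typ):
            return None
    return data

def extract_intent(call_model, user_message: str, max_attempts: int = 3):
    """call_model is any function str -> str, e.g. a wrapper around an LLM."""
    for _ in range(max_attempts):
        intent = parse_booking_intent(call_model(user_message))
        if intent is not None:
            return intent
    # The model never produced valid output: keep the human in the loop.
    return {"escalate_to_human": True, "message": user_message}
```

The point is that both failure modes Aishwarya names (unpredictable user input and unpredictable model output) converge on the same safeguard: a deterministic validation layer with a human fallback.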

    4. LR

      So just to summarize what you're sharing here, essentially, people have been building product, software products for a long time. We're now in a world where the software you're building is, one, non-deterministic, can just do things differently. Ev-- like, you know, as you said, you go to Booking.com, you find a hotel, it's gonna be the same experience every time. You'll see different hotels, but it's a predictable experience. With AI, you can't predict that it's gonna be the exact same thing, the thing that you, uh, plan it to be every time. And then the other is there's this trade-off between agency and control. How much will the AI do for you, versus how much should the person still be in charge? And the... what I'm hearing is the big point here is this significantly changes the way you should be building product, and we're gonna talk about the impact on how the product development life cycle should change as a result. Is there anything else you wanna add there before we get into, into that?

    5. KB

Yeah, it's definitely one of the key points that this distinction needs to exist in your mind when you're starting to build. For example, think about if your objective is to hike Half Dome in Yosemite, right? You don't start by hiking it on day one; you train yourself in smaller parts, and then you slowly improve, and then you go for the end goal, right? I feel like that's extremely similar to how you want to build AI products, in the sense that you don't start with agents with all the tools and all the context that you have in the company on day one and expect it to work. You need to deliberately start in places where there is minimal impact and more human control, so that you have a good grip on what the current capabilities are and what you can do with them. And then slowly lean into more agency and less control. So this gives you that confidence that, okay, this is the particular problem that I'm facing, and the AI can solve this extent of it. And then let me next think through what context I need to bring in, what kind of tools I need to add, to improve the experience, right? So I feel like it's a good and a bad thing. It's good in the sense that you don't have to see the complexity of the outside world, of all of this fancy AI agents force, and feel like, I cannot do that. Everyone is starting from very minimalistic structures and then evolving. And the other side is that as you're trying to bring these one-click agents into your company, you don't have to be overwhelmed with this complexity. You can slowly graduate. So that's extremely important, and we see this as a repeating

  4. 13:19–15:23

    Building AI products: start small and scale

    1. KB

      pattern over and over.

    2. LR

Okay, so let's actually follow that thread, 'cause that's a really important component of how you recommend people build AI stuff. AI stuff, AI products, AI agents, all the AI things. So give us an example of what you're talking about here, this idea of starting slow with agency and control, and then moving up the rungs.

    3. KB

Yeah. For example, a very important, or very prevalent, application of AI agents is customer support, right? Imagine you are a company that has a lot of customer support tickets. And why even imagine: OpenAI faced the exact same thing when we were launching products. There was a huge spike of support volume as we launched successful products like image generation or GPT-5 and things like that. The kind of questions you get is different. The kind of problems that the customers bring to you is different. So it's not about just dumping all the help center articles that you have into the AI agent. You first understand what are the things that you can build. So initially, the first step would be something like: you keep your human support agents, but you suggest to them, "Okay, this is what the AI thinks is the right thing to do." And then you get that feedback loop from the humans: "Okay, this is actually a good suggestion for me in this particular case, and this is a bad suggestion." And then you can go back and understand, okay, this is what the drawbacks are, or this is where the blind spots are, and then how do I fix that? And once you get that, you can increase the autonomy to say, "Okay, I don't need to suggest to the human. I'll actually show the answer directly to the customer." And then we can add more complexity in terms of, okay, I was only answering questions based on help center articles, but now let me add new functionality. Like, I can actually issue refunds to the customers. I can actually raise feature requests with the engineering team and all of these things. So if you start with all of this on day one, it's incredibly hard to control the complexity. 
So we recommend building step by step, and then increasing it.
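The progression Kiriti describes, suggest first and earn autonomy from human feedback, can be sketched in a few lines. This is an illustrative sketch, not OpenAI's implementation; the class name, acceptance bar, and sample threshold are all assumptions:

```python
# Start in "suggest" mode, log whether human agents accept each AI draft, and
# only allow direct replies once the measured acceptance rate clears a bar.

class SupportAssistant:
    def __init__(self, draft_reply, acceptance_bar=0.95, min_samples=100):
        self.draft_reply = draft_reply      # function: ticket -> draft text
        self.acceptance_bar = acceptance_bar
        self.min_samples = min_samples
        self.feedback = []                  # True = human accepted the draft

    @property
    def autonomous(self):
        """Earn autonomy only after enough accepted suggestions."""
        if len(self.feedback) < self.min_samples:
            return False
        return sum(self.feedback) / len(self.feedback) >= self.acceptance_bar

    def handle(self, ticket):
        draft = self.draft_reply(ticket)
        if self.autonomous:
            return {"action": "reply_to_customer", "text": draft}
        return {"action": "suggest_to_human_agent", "text": draft}

    def record_review(self, accepted: bool):
        """Feedback loop: every human review is logged and moves the dial."""
        self.feedback.append(accepted)
```

The design choice worth noticing: autonomy is a measured property of logged feedback, not a launch decision, which is exactly the "earned trust" framing from the previous section.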

  5. 15:23–22:38

    The importance of human control in AI systems

    1. LR

Awesome, and you have a visual, actually, that we'll share of what this looks like. But just to mirror back what you're describing, this idea of start with high control, low agency: in the example you gave, the support agent is just giving suggestions, not able to do anything itself. The user is in charge. And then as that becomes useful, and you are confident it's doing the right sort of work, you give it a little more agency, and you pull back on the control the user has. And then, if that's going well, you give it more agency, and the user needs less control to control it.

    2. KB

      Yeah.

    3. LR

      Awesome.

    4. AR

I think the higher-level idea here is, with AI systems, it's all about behavior calibration. It's nearly impossible to predict upfront how your system behaves. Now, what do you do about it? You make sure that you don't ruin your customer experience or your end-user experience. You keep that as is, but then tune the amount of control that the human has, and there is no single right way of doing it. You can decide how to constrain that autonomy, right? A different example of how you could constrain autonomy is pre-authorization use cases. Insurance pre-authorization is a very ripe use case for AI, because clinicians spend a lot of time pre-authorizing things like blood tests, MRIs, and so on, right? And there are some cases which are more of low-hanging fruit, for instance, MRIs and blood tests, because as soon as you know a patient's information, it's easier to approve that, and AI could do that. Versus something like an invasive surgery, et cetera, is more high risk. You don't want to be doing that autonomously. So you can determine which of these use cases should go through that human-in-the-loop layer versus which of the use cases AI can conveniently handle. And then, all through this process, you're also logging what the human is doing, right? Because you wanna build a flywheel that you can use to improve your system. So you're essentially not ruining the user experience, not eroding trust, and at the same time, logging what humans would otherwise do, so that you can continuously improve your system.
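The pre-authorization pattern just described, low-risk cases routed to the AI, high-risk cases kept behind a human, every decision logged for the flywheel, can be sketched like this. The risk tiers and names are illustrative assumptions, not a real clinical policy:

```python
# Constrain autonomy by risk tier, and log every decision (human or AI)
# so the system accumulates training data to improve on.

LOW_RISK = {"blood_test", "mri"}   # low-hanging fruit: AI can auto-handle
# Anything not explicitly low risk (e.g. invasive surgery, or an unknown
# procedure type) is routed through the human-in-the-loop layer below.

decision_log = []                  # the flywheel: what humans would otherwise do

def route_preauth(request, ai_decide, human_decide):
    """ai_decide and human_decide are functions: request -> decision."""
    if request["procedure"] in LOW_RISK:
        decision, source = ai_decide(request), "ai"
    else:
        decision, source = human_decide(request), "human"
    decision_log.append({"request": request, "decision": decision, "source": source})
    return decision
```

Note that the log captures human decisions on exactly the cases the AI is not yet trusted with, which is what lets the low-risk set grow over time.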

    5. LR

So let me give you a few more examples of this kind of progression that you recommend. The reason I'm spending so much time here is this is a really key part of your recommendation to help people build more successful AI products, this idea of start slow with high control and low agency, and then build up over time once you've built confidence that it's doing the right sort of work. So a few more examples that you shared in your post that I'll just read. Say you're building a coding assistant. V1 would be just suggest inline completions and boilerplate snippets. V2 would be generate larger blocks, like tests or refactors, for humans to review. And then V3 is just apply the changes and open PRs autonomously. And then another example is a marketing assistant. So V1 would be draft emails or social copy, just like, "Here's what I would do." V2 is build a multi-step campaign and run the campaign. And V3 is just launch it, A/B test it, auto-optimize campaigns across channels. Awesome.

    6. AR

      Yeah.

    7. LR

And again, just to summarize where we're at, just to give people the advice we've shared so far. One is just important to understand: AI products are different, they're non-deterministic. And you pointed out, and I forgot to actually mirror back this point, both on the input and the output, the user experience is non-deterministic. [chuckles] Like, people will see different things, different outputs, different chat conversations, different maybe UI, if it's designing the UI for you. And also, the output obviously is gonna be non-deterministic, so that's a problem and a challenge. [chuckles] And then, uh-

    8. AR

      I mean, if you think of it, it's also-

    9. LR

      Yeah

    10. AR

      ... the most beautiful part of AI, which is, I mean, we're all much more comfortable talking than following a bunch of buttons and all of that, right? So the bar to using AI products is much lower because you can be as natural as you would be with humans. But that's also the problem, which is, there are tons of ways we communicate. Um, and it's... You wanna make sure that that intent is rightly communicated, and the right actions are taken, because most of your systems are deterministic, and you want to achieve a deterministic outcome, uh, but with non-deterministic technology, and that's where it gets a little messy.

    11. LR

      Awesome. Okay. That's, uh... I love, I love the, [chuckles] the optimistic version of the- [chuckles] why this is good. Okay, and then the other piece is this idea of this trade-off of autonomy versus control, and when you're designing a thing. And what I imagine what you're seeing is people try to jump to the idea, like the V3, immediately, and that's when they get into trouble. Both, it's probably a lot harder to build that, and it just doesn't work, and then they're just like: "Okay, this is a failure. What have we been doing?"

    12. KB

Exactly. I feel there are a bunch of things that you actually have to get confidence in before you get to V3. And it's easy to get overwhelmed that, "Oh, my AI agent is doing these things wrong in a hundred different ways," and you're not gonna actually tabulate all of them and fix them, right? Even if you've learned how to deal with evaluation practices and stuff like that, if you're starting in the wrong spot, you are going to have a hard time correcting things from there. And when you start small, when you start by building a very minimalistic version with high human control and low agency, it also forces you to think about: "What is the problem that I'm gonna solve?" We use this term called problem first, and to me it was obvious, in the sense that, yeah, I do need to think about the problem. But it's incredible how well it resonates with people: that in all these advancements of AI that we are seeing, one easy slippery slope is to just keep thinking about the complexities of the solution and forget the problem that you're trying to solve. So when you start at a smaller scale of autonomy, you start to really think about: "What is the problem that I'm trying to solve, and how do I break it down into levels of autonomy that I can build later?" That is incredibly useful, and we keep repeating this pattern over and over with everyone we talk to.

    13. LR

      Hmm. And there's so many other benefits to, uh, limiting autonomy, 'cause there, there's just danger also of the thing doing too much for you and just messing up your, I don't know, your database, sending out all these emails you never expected. Like, there's, like, so many reasons this is a good idea.

    14. AR

Yep. I recently read this paper from a bunch of folks at UC Berkeley, basically Matei Zaharia, Ion Stoica, and the folks at Databricks, and it said about seventy-four or seventy-five percent of the enterprises that they had spoken to said their biggest problem was reliability. And that's also why they weren't comfortable deploying products to their end users and building customer-facing products: because they just weren't sure, or they just weren't comfortable exposing their users to a bunch of these risks, right? And that's also why they think a lot of AI products today have to do with productivity, because it's much lower autonomy versus, you know, end-to-end agents that would replace workflows. And yeah, I love their work otherwise as well, but I think that's very in line with what, at least, we're seeing at my start-up

  6. 22:38–25:18

    Avoiding prompt injection and jailbreaking

    1. AR

      as well.

    2. LR

      Okay, very interesting. There's an episode that'll come out before this conversation, where we go deep into another problem that this avoids, which is around, uh, prompt injection and jailbreaking-

    3. AR

      Oh, wow. Yeah

    4. LR

... and just how big of a risk that is for AI products, where it's essentially an unsolved, and potentially unsolvable, problem. I'm not gonna go down that track, but that's-

    5. AR

      Yeah

    6. LR

      ... uh, it's a pretty scary conversation we had, but it'll be out before this conversation.

    7. AR

I think that will be a huge problem once systems go mainstream. We're still so busy building AI products that we're not worried about security, but it will be such a huge problem, especially with this non-deterministic API again, right? So you're kind of stuck, because there are tons of instructions that you could inject within your prompt, and then... Yeah, it's going to be bad.

    8. LR

Okay, let's actually spend a little time here, 'cause it's actually really interesting to me, and no one's talking about this stuff. The conversation we had showed it's pretty easy to trick AI into doing stuff it shouldn't do, and there's all these guardrail systems people put in place, but it turns out these guardrails aren't actually very good, and you can always get around them. And to your point, as agents become more autonomous, and robots, it gets pretty scary that you could get AI to do things it shouldn't do.

    9. KB

I think this is definitely a problem, but in the current spectrum of customers adopting AI, the extent to which companies can actually get advantage out of AI, or improve and streamline the existing processes that they have, is still in the very early stages. 2025 has been an extremely busy year for AI agents and customers trying to adopt AI, but I feel the penetration is still not at the point where you would actually get the full advantage out of it. So with the right sort of human-in-the-loop points in here, I feel we can avoid a bunch of these things and focus more towards streamlining the processes. And I am more on the optimist side, in the sense that you need to try and adopt this rather than only highlighting the negative aspects of what could go wrong. So I feel strongly that companies have to adopt this. With no company we talk to at OpenAI has it ever been the case that, "Oh, AI cannot help me in this case." It has always been, "Oh, there is this set of things that it can optimize for me, and then let me see how I can adopt it."

    10. LR

      Sweet. I always like the optimistic perspective. I'm excited to-- for you to listen to this and see what you think, 'cause it's really interesting.

    11. KB

      [chuckles]

    12. LR

      And, uh, and to your point, there's a lot of things to focus on. It's one of, one of many things to, [chuckles] to worry about-

    13. KB

      Yes

    14. LR

      ... and think about.

  7. 25:18–33:20

    Patterns for successful AI product development

    1. LR

Okay, let's get back on track here. So we've shared a bunch of pro tips and important pieces of advice. Let me ask: what other patterns and ways of working do you see in companies that do this well and teams that build AI products successfully? And then, what are the most common pitfalls people fall into? So we could maybe start with: what are other ways that companies do this well, build AI products successfully?

    2. AR

I always think of it as like a success triangle with three dimensions. It's never only technical. Every technology problem is a people problem first. And with the companies that we have worked with, it's these three dimensions, right? Great leaders, good culture, and technical prowess. On leaders: we work with a lot of companies on their AI transformation, training, strategy, and stuff like that. And I feel like at a lot of companies, the leaders have built intuitions over ten or fifteen years, and they are highly regarded for those intuitions. But now, with AI in the picture, those intuitions will have to be relearned, and leaders have to be vulnerable to do that, right? I used to work with the now-CEO of Rackspace, Gajen. He would have this block every day in the morning, which would say, "Catching up with AI, four to six AM," and he would not have any meetings or anything like that. And that was just his time to pick up on the latest AI podcast or information and all of that. And he would have-

    3. LR

      Yeah

    4. AR

... weekend vibe-coding sessions and stuff like that. So I think leaders have to get back to being hands-on, and that's not because they have to be implementing these things, but more to rebuild their intuitions, because you must be comfortable with the fact that your intuitions might not be right. You probably are the dumbest person in the room, and you wanna learn from everyone. And I've seen that being a very distinguishing factor of companies that build products which are successful, because you're bringing in that top-down approach. It's almost always impossible for it to be bottom up. You can't have a bunch of engineers go and get buy-in from a leader if they just don't trust the technology or if they have misaligned expectations about the technology, right? I've heard from so many folks who are building that their leaders just don't understand the extent to which AI can solve a particular problem, or they just vibe code something and assume it's easy to take it to production. And you really need to understand the range of what AI can solve today, so that you can guide decisions within the company. The second one is the culture itself, right? And again, I work with enterprises where AI is not their main thing, and they need to bring AI into their processes just because a competitor is doing it, and just because it does make sense, because there are use cases that are very ripe. Then, along the way, I feel a lot of companies have this culture of FOMO and you-will-be-replaced and those kinds of things, and people get really afraid. Subject matter experts are such a huge part of building AI products that work, because you really need to consult them to understand how your AI is behaving or what the ideal behavior should be. But then, I've spoken to a bunch of companies where the subject matter experts just don't wanna talk to you because they think their job is being replaced. 
So as... I mean, again, this comes from the leader itself. You want to build a culture of empowerment, of, um, augmenting AI into your own workflows, so that, you know, you can 10X in what you're doing, instead of saying that, you know, probably, uh, you'll be replaced if you don't adopt AI and stuff like that. So that kind of an empowering culture always helps. You wanna make, um, your entire organization be in it together and make AI work for you, instead of trying to, you know, guard their own jobs, et cetera. And with AI, it's also true that it opens up a lot more opportunities than before. So you could have your employees doing a lot more things than before and 10X their productivity. Um, and the third one is the technical part, which we talk about, right? I think folks that are successful are incredibly obsessed about understanding their workflows very well and augmenting parts, um, that could be, um, um, e- uh, that could be ripe for AI versus the ones that might need human in the loop somewhere, et cetera. Whenever you're, uh, trying to automate some part of a workflow, it's never the case that you could, you could use an AI agent and that will kind of solve your, uh, problems, right? It's always, you probably have a machine learning, uh, model that's going to do some part of the job. You have deterministic code doing some part of the job. So you really need to be obsessed with understanding that workflow, so you can choose the right tool for the problem, instead of being obsessed with the technology itself. And, um, I-... Another pattern I see is also folks really understand this idea of working with a non-deterministic API, which is your LLM. 
And what that means is they also understand the AI development lifecycle looks very different, and they iterate pretty quickly, which is, can I, um, can I build something, iterate, uh, quickly in a way that it doesn't ruin my customer experience, at the same time gives me enough amount of data so that I can estimate behavior, right? So they build that flywheel very quickly. As of today, it's not about being the first company to have an agent among your competitors. It's about have you built the right flywheels in place so that you can improve over time, right? When someone comes up to me and says: "We have this one-click agent, it's going to be deployed in your system, and then in, in two or three days, it'll start showing you significant gains," I would almost be skeptical because it's just not possible, and that's not because the models aren't there, but because enterprise data and infrastructure is very messy, and you need a bit to-- even the agent needs a bit to understand, um, how these systems work. There are very messy taxonomies everywhere. Um, people tend to do things like get_customer_data_v1, get_customer_data_v2, and these kind of things, and all those functions exist and, um, they are being called, and there's... Basically, there's a lot of tech debt that you need to deal with. So most of the times, if, if you're obsessed with the problem itself and you understand your workflows very well, you will know how to improve your agents over time, instead of just slapping in an agent and assuming that it'll work from day one. I probably will go as far to say that if someone's selling you one-click agents, it's, it's pure marketing. You don't want to buy into that. I would rather go with a company that says: "We're gonna build this pipeline for you," and that, that will learn over time and kind of build a flywheel to improve than something that's gonna work out of the box.
To replace any critical workflow or to, um, build something that can give you significant ROI, it easily takes four to six months of work, even if you have the best data layer and infrastructure layer.

    5. LR

      Amazing! There's a lot there that resonates so deeply with other conversations I've been having on this podcast. One is just for a company to be successful at seeing a lot of impact from AI, the founder CEO has to be deep into it. Uh, I had Dan Shipper on the podcast, and they work with a bunch of companies, helping them adopt AI, and he said that's the number one predictor of success, is the CEO chatting with ChatGPT, Claude, whatever, uh, m- many times a day. I love this example you gave of the Rackspace CEO [chuckles] as, like, catch up on AI news in the morning every day. [chuckles] I was imagining he'd be, like, chatting with, like, the chatbot versus, uh, like reading news.

    6. AR

      With the kind of information you have as of today, you could just, um... I mean, you wanna choose the right, um, channels as well, because everybody has an opinion. So whose opinion do you want to bank on? I feel like having that good quality set of people that you're listening to really makes sense. So he just has a list of two or three sources that he always looks at, and, and then he comes back with a bunch of questions and bounces it around with a bunch of AI experts to see what they think about it, and I was part of that group, so I kind of know, um-

    7. LR

      I love that

    8. AR

      ... about the questions that he comes up with. So-

    9. LR

      That's cool.

    10. AR

      It's pretty cool. I was like: "Why are you doing so much?" And then he says: "It trickles down into a bunch of decisions

  8. 33:2041:27

    The debate on evals and production monitoring

    1. AR

      that we would take."

    2. LR

      Okay, let me talk about, uh, another topic that's very, it's been a hot topic on this podcast. It was a hot topic on Twitter for a while: evals. [chuckles] A lot of people are obsessed with evals, think they're the solution to a lot of problems in AI. A lot of people think they're overrated. They say you don't need evals. You can just feel the vibes, and you'll, you'll be all right. What's your take on evals? How far does that take people in solving a lot of the problems that you talk about?

    3. KB

      In terms of, like, what is going on in the community, I, I feel there's just this false dichotomy of, like, there's either evals is going to solve everything, or online monitoring or production monitoring is gonna solve everything. And I find no reason to trust, like, one of the extremes, in the sense that I will entirely bank my application on this, uh, or like that, to solve the, uh, thing, right? So if you take a step back, uh, think of what are evals. Evals are basically your, uh, trusted product thinking or, like, your knowledge about the product that is going into this, uh, set of datasets that you're going to build, in the sense that this is what matters to me. Like, this is the kind of problems that my agent should not do, and let me build a list of datasets so that I'm going to do well on those. And in terms of production monitoring, what you're doing d- doing there is, uh, you're deploying an application, and then you're having this, some sort of key metrics that actually communicate back to you on how customers are using your product. Like, you could be deploying, uh, any agent, and, like, if the customer, customer is giving a thumbs up for your interaction, you better want to know that. So that is what production monitoring is going to do, right? And this production monitoring has existed for products like in, for a long time. Just that now, with AI agents, you will need to be monitoring, like, a, a lot more granularity. It's not just the customer always giving you explicit feedback, but there is many implicit feedback that you can get. Uh, for example, in ChatGPT, right? Like, if you are, uh, liking the answer, you can actually give a thumbs up, or if you don't like the answer, sometimes customers don't give you thumbs down, but actually re- regenerate the answer. So that is a clear indication that w- the, the initial answer that he generated is not meet, matching, uh, ma- meeting the customer's expectation, right? 
So these are the kind of implicit signals you always need to think about, and that spectrum has been increasing in terms of production monitoring. Now, let's come back to the initial topic of like: Okay, i- is it evals, or is it production monitoring? What does it matter? So I feel, again, we go back to this problem first approach of: What is your-- what is it that you're trying to build? Like, you're trying to build a reliable application for your customers that's not going to do a bad thing. Like, it's always going to do the right thing, or if it is doing a wrong thing, you are, uh, you're basically alerted, like, very quickly, right? So the-- I break this down into two parts. Like, one is you... Like, nobody goes into, uh, deploying an application without actually, like, you know, just testing that.... this testing could be vibes, or this testing could be, okay, I have this, like, ten questions that it should not go wrong, any-- no matter what changes I make, and let me build this, and th- let's call this an evaluation dataset. Now, let's say you build this, you deployed this, and then you figured, uh, okay, now I need to understand whether it's doing the right thing or not. So if you're a high, uh, high, uh, throughput or like a high, uh, transaction customer, you cannot practically sit and evaluate all the traces, right? You need some indication to understand what are the things that I should look at. And this is where production monitoring comes into the picture, that you cannot predict your, uh, the ways in which your agent could be doing wrong, but all of these other implicit signals and explicit signals, those are going to communicate back to you what, uh, y- what are the traces that you need to look at, and that is where production monitoring helps.
And once you get these kind of traces, you n- need to examine what are the failure patterns that you're seeing in these, uh, different types of interactions, and is there something that I really care about that should not happen? And if that kind of failure modes are happening, then I need to think about building an evaluation dataset for it. And, okay, let's say I built an evaluation dataset for my agent trying to offer refunds, where explicitly I have configured it not to. So I built this evaluation dataset, and then, like, I made my s- changes in tools or prompts or whatever, and then I deployed the second version of the product, right? Now, uh, there is no guarantee that this is the only problem that you're gonna see. You still need production monitoring to actually have, like, you know, catch different kinds of problems that you might encounter. So I feel evals are important, production monitoring is important, but this notion of only one of them is going to solve things for you, that is, uh, completely dismissible, in my opinion.
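Kiriti's mix of explicit and implicit feedback can be sketched in a few lines of Python. This is a hypothetical illustration, not a real monitoring SDK: the `Trace` fields and the flagging rule (thumbs-down, or a regeneration without a thumbs-up) are assumptions made up for the example.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical trace record; field names are illustrative only.
@dataclass
class Trace:
    trace_id: str
    thumbs: Optional[str] = None   # "up", "down", or None (no explicit feedback)
    regenerated: bool = False      # implicit negative: user asked for a new answer

def flag_for_review(traces):
    """Surface the traces worth a human look, from explicit + implicit signals."""
    flagged = []
    for t in traces:
        explicit_negative = t.thumbs == "down"
        implicit_negative = t.regenerated and t.thumbs != "up"
        if explicit_negative or implicit_negative:
            flagged.append(t.trace_id)
    return flagged

traces = [
    Trace("t1", thumbs="up"),
    Trace("t2", regenerated=True),   # no thumbs-down, but the user retried
    Trace("t3", thumbs="down"),
]
print(flag_for_review(traces))  # ['t2', 't3']
```

The point of the sketch is that the regeneration on `t2` gets surfaced even though the user never gave explicit negative feedback, which is exactly the kind of trace you would then examine for failure patterns.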

    4. LR

      All right. A very reasonable answer. And the point here isn't, uh, it's not just as simple as "do both," it's more that there are different things to catch, and one approach won't catch all the things you need to be paying attention to.

    5. KB

      Exactly.

    6. LR

      Awesome.

    7. AR

      I want to take two steps back and kind of talk about how much weight the term evals has had to take in the second, you know, half of twenty twenty-five. Because you go meet a data labeling company, and they tell you: "Our experts are writing evals." And then, uh, you have all of these, uh, folks saying that PMs should be writing evals, they're the new PRDs. And then you have folks saying that, um, evals is pretty much everything, which is the feedback loop you're supposed to be building to improve your products. Now, step back as a beginner and kind of think, like, "What are evals? Why is everyone saying evals?" And these are actually different parts of the process, and nobody is wrong in the sense that, yes, these are evals. But when a data labeling company is telling you that our, um, experts are writing evals, they're actually referring to error analysis or, you know, experts just leaving notes on what should be right. Lawyers and doctors write evals, that doesn't mean they're building LLM judges or they're building this entire feedback loop. And when you say that a PM should be writing evals, doesn't mean they have to write an LLM judge that's good enough for production. I think there's... There are also very prescriptive ways of doing this, and plus one to Kiriti, which is you cannot predict upfront if you need to be building an LLM judge versus you need to be using ex- um, implicit signals from production monitoring, et cetera. I think Martin Fowler at some point had this term called Semantic Diffusion back in the two thousands, um, um, which kind of means that someone comes up with a term, everybody starts butchering it with their own definitions, and then you kind of lose the actual definition of it. That is kind of what is happening to evals or agents or any word in AI as of today. Everybody kind of sees a different side to it, I guess.
Um, but if you make a bunch of practitioners sit together and ask them: "Is it important to build an actionable feedback loop for AI products?" I think all of them will agree. Now, how you do that really depends on your application itself. When you go to complex use cases, it's incredibly hard to build LLM judges because you see a lot of emerging patterns. If you built a judge that would, um, you know, test for verbosity or something like that, you-- turns out that you're seeing newer patterns that your LLM judge is not able to catch, and then you're just, um, you just end up building too many evals. And at that point, it just makes sense to, you know, look at your user signals, fix them, check if you have regressed, and move on, instead of actually building these judges. Um, so it all depends. I think one statement that every ML practitioner will tell you is, "It really depends on the context. Don't be obsessed with prescriptions, they're gonna change."
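Aishwarya's verbosity example hints that a "judge" does not have to be an LLM at all; an evaluation metric can start as plain deterministic code run over a small dataset. Here is a minimal sketch of that idea. The metric, the 120-word threshold, and the dataset are all made-up illustrations, not anyone's production setup.

```python
# A deterministic evaluation metric: flag answers over a word budget.
# The threshold is an illustrative choice, not a recommendation.
def verbosity_metric(answer: str, max_words: int = 120) -> dict:
    words = len(answer.split())
    return {"words": words, "pass": words <= max_words}

def run_eval(dataset, metric):
    """dataset: list of (input, model_answer) pairs; returns the failing cases."""
    failures = []
    for prompt, answer in dataset:
        result = metric(answer)
        if not result["pass"]:
            failures.append((prompt, result))
    return failures

dataset = [
    ("reset password", "Click 'Forgot password' on the login page."),
    ("refund policy", "word " * 200),   # a pathologically verbose answer
]
failures = run_eval(dataset, verbosity_metric)
print(len(failures))  # 1
```

Running a check like this on every change gives you the "check if you have regressed, and move on" step without committing to a full LLM-judge pipeline; you reach for a judge only when the failure pattern cannot be captured deterministically.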

    8. LR

      Uh, that's such an important point, this idea that, especially that evals just means many things to different people now. It's just like a term for so many things, and, uh, it, it's complicated to just talk about evals when you're think- when you see it as the stuff data labeling companies are giving you, and things PMs write, right? And there's also benchmarks. People call benchmarks a little bit evals. It's like-

    9. AR

      I, I recently spoke to a client who told me: "We do evals."

    10. LR

      Yeah.

    11. AR

      And I was like: "Okay, can you show me your dataset?" And he said: "No, we just checked LMArena and Artificial Analysis. These are y- you know, independent-

    12. LR

      Yeah

    13. AR

      ... benchmarks, and we know that this model is the right one for our use case." And I'm like: "You're not doing evals. That's not evals."

    14. LR

      [chuckles]

    15. AR

      "Those are model evals."

    16. LR

      But it makes sense, like the word, you know, it, like, could be used in that context. I get why people think that, but yeah, now it's just confusing it even more.

    17. AR

      Yep.

  9. 41:2745:41

    Codex team’s approach to evals and customer feedback

    1. LR

      Just, like, one more line of questioning here that I think, uh, that's on my mind is, the reason this became kind of a big debate is Claude Code. The head of Claude Code, Boris, was like: "Nah, we don't do evals on Claude Code. It's all vibes." What can you share, Kiriti, on Codex, on the Codex team, how you approach evals?

    2. KB

      So Codex, we have, like, this balanced approach of, like, you know, you need to have evals, and you need to definitely listen to your customers. And, uh, I think Alex has been on your podcast recently, and he's been talking about how we are extremely focused on building the right product, right? And a part of-- a big part of it is basically listening to your customers. And coding agents are extremely unique, uh, compared to agents for other domains, in the sense that these are actually built for customizability, and these are built for engineers. So coding agent is not a product which is going to solve, like, these top five workflows or, like, top six workflows or whatever, right? It's meant to be customizable in multi different ways, and the-... implication of that is that your product is going to be used in different integrations and different kinds of tools, and different kinds of things. So it gets really hard to build an evaluation dataset for all kinds, kinds of interactions that your customers are gonna use your product for, right? But that said, you also need to understand that, okay, if I'm gonna make a change, it's at least not going to, like, damage something that is really core to the product. So we have, like, evaluations, uh, for doing that, but at the same time, we have-- we take, like, extreme care on, like, understanding how the customers are using it. For example, uh, we built this code review product recently, and, uh, it has been gaining, like, extreme amount of traction. And, uh, I feel like many, many bugs in OpenAI, as well as, like, even our external customers are getting caught with this. And now, let's say, if I'm making a model change to the code review or, like, a different kinds of, uh, uh, RL mechanism that I trained with it, and, uh, now, if I'm going to deploy it, I definitely do want to A/B test and identify whether it's actually finding the right, uh, mistakes. And are users-- how are users reacting to it?
And sometimes, like, if users do get annoyed by your, like, you know, uh, incorrect code reviews, they go to the extent of just switching off the product, right? So those are the signals that you want to look at and make sure that your new changes are doing the right thing. And it's, uh, extremely hard for us to, you know, uh, think of these kind of scenarios beforehand and, uh, develop evaluation datasets for it. So I feel like there's a bit of both. Like, there's a lot of vibes, and there's a lot of, like, customer feedback. Uh, and we are super active on, like, the social media to understand if anybody's having certain types of problems, and quickly fix that. So I feel it's a, it's a, um... How do I put this? It's, it's like a domain of things that you do here.

    3. LR

      That makes so much sense. Okay, what I'm hearing, Codex, pro evals, but it's not enough. You need to-

    4. KB

      Yes.

    5. LR

      But also, uh, just watch customer behavior and feedback, and also, there's some vibes, just like, "Is this feeling good? Is this, as I'm using it, generating great code that I'm excited about?" That I think is great.

    6. KB

      I, I don't think, like, if anybody's coming and saying that, like, my-- I have this concrete set of evals that I can, like, bet my life on, and then I don't need to think about anything else. Like, it, it's not gonna work. And every new model that we're gonna launch, we, uh, get together as a team and, like, you know, test different things. Each, each, each person is, like, concentrating on something else. And, like, we have this list of hard problems that we have, and we throw that to the model and see how well they are progressing. So it's like, uh, custom evals for each engineer, you would say, and just, like, understand what the, uh, product is doing in this new model.

    7. LR

      [gentle music] If you're a founder, the hardest part of starting a company isn't having the idea, it's scaling the business without getting buried in back-office work. That's where Brex comes in. Brex is the intelligent finance platform for founders. With Brex, you get high-limit corporate cards, easy banking, high-yield treasury, plus a team of AI agents that handle manual finance tasks for you. They'll do all the stuff that you don't wanna do, like file your expenses, scour transactions for waste, and run reports, all according to your rules. With Brex's AI agents, you can move faster while staying in full control. One in three startups in the United States already runs on Brex. You can, too, at brex.com.

  10. 45:4158:07

    Continuous calibration, continuous development (CC/CD) framework

    1. LR

      We've been talking for almost an hour already, and we haven't even covered your extremely powerful software development workflow for building AI products that you two developed, that you teach in your course, that you... Basically combines all the stuff we've been talking about into a step-by-step approach to building AI products. You call it the continuous calibration, continuous development framework. Let's pull up a visual to show people what the heck we're talking about, and then just walk us through what this is, how this works, how teams can shift the way they build their AI products to this approach to help them avoid a lot of pain and suffering.

    2. AR

      Before we go about explaining, um, the life cycle, a quick story on why Kiriti and I came up with this is because, um, there are tons of, um, uh, companies that we keep talking to that have the pressure from their competitors because they're all building agents. We should be building agents that are entirely autonomous, and we-- I did end up working with a few customers, where we built these end-to-end agents. And turns out that because you start off at a place where you don't know how the user might interact with your system, and what kind of responses or actions the AI might come up with, it's really hard to fix problems when you have this really huge workflow, which is taking four or five steps, making tons of decisions. You're-- You just, you just end up debugging so much, and then kind of hotfixing, uh, to the point where at a, at a time, we were building for a customer support, um, use case, which is what-- which is the example that we give in the newsletter as well. And we had to shut down the product because we were doing so many hotfixes, and there was no way we could, um, count all the emerging or, uh, emerging problems that were coming up, right? And there's also quite some news online. Um, recently, I think Air Canada had this thing where, um, one of their agents predicted or hallucinated a policy, um, for a refund, which was not part of their original playbook, and they had to go by it because legal stuff. And there have been a ton of really, uh, scary incidents, and that's where the idea comes from, right? How can you build so that, um, you don't lose customer trust, and you don't end up... or your agent or, um, AI system doesn't end up making decisions that are super dangerous to the company itself? At the same time, build a flywheel so that you can improve your product as you go, right? And that's where we came up with this idea of continuous calibration, continuous development. 
The idea is pretty simple, which is, um, we have this right side of the loop, which is continuous development.... uh, where you scope capability and curate data, essentially get a data set of what your expected inputs are and what, um, your expected outputs should be looking at. This is a very good exercise before you start building any AI product, because many times you figured out that a lot of the folks within the team are just not aligned on how the product should behave. And that's where your PMs can really give in a lot more information, and your subject matter experts as well. So you have this data set that you know, um, your AI product should be doing really well on. It's, it's not comprehensive, but it lets you get started. And then you set up the application, and then design the right kind of evaluation metrics. And I intentionally use the term evaluation metrics, although we say evals, because I just wanna be very specific on what it is. Because evaluation is a process, evaluation metrics are dimensions that you want to, uh, focus on, um, during the process, right? And then you go about deploying, um, run your evaluation metrics. Um, and the second part is the continuous calibration, which is the part where you understand what, um, behavior you hadn't expected in the beginning, right? Because when you start the development process, you have this data set that you're optimizing for, but more often than not, you realize that that data set is not comprehensive enough, um, because users start behaving with your systems in ways that you did not predict, and that's where you want to do the calibration piece, right? I've deployed my system, now I see that there are patterns that I did not really expect, and your evaluation metrics should give you some insight into that, into those patterns. 
But sometimes you figure out that those metrics were also not enough, and you probably have new error patterns that you've not thought about, and that's where you analyze your behavior, spot error patterns. You apply fixes for issues that you see, but you also design newer evaluation metrics to figure out that there are emerging patterns. And that doesn't mean you should always design evaluation metrics. There are some errors that you can just fix and not really come back to, uh, because they're very spot errors. For instance, there's a, there's a, a tool calling error, just because your tool wasn't defined well and stuff like that. You can just fix it and move on, right? And this is pretty much how an AI product lifecycle would look like. But what we specifically also mention is, while you're going through these iterations, try to think of lower agency iterations in the beginning, um, and higher control iterations. What that means is constrain the number of decisions your AI systems can make, and, um, uh, make sure that there are humans in the loop, and then increase that over time, because you're kind of building a flywheel of behavior. And, uh, you're understanding what kind of use cases are coming in, or how your users are using the system, right? And one example I think we give in the newsletter itself is, um, the customer support. This is a nice image that kind of shows how you can think of agency and control as two dimensions, and each of your versions keep on increasing the agency, or the ability of your AI system to make decisions and lower the control as you go. And one example that we give is that of the, uh, customer support agent, where you can break it down into three versions. The first version is just routing, which is, is your agent able to classify and route a particular ticket to the right department? And sometimes when you read this, you probably think, "Is it so hard to just do routing? Why can't an agent easily do that?" 
And when you go to enterprises, routing itself can be a super complex problem. Any retail company, any popular retail company that you can think of, has hierarchical taxonomies. Most of the times, the taxonomies are incredibly messy. I have worked in, you know, use cases where you probably have taxonomy that says, um, you know, some tax- um, some kind of hierarchy, and then that says, "Shoes," and then "Women's shoes" and "Men's shoes," all at the same layer, where ideally you should be having shoes, and then women's shoes and men's shoes should be sub, uh, you know, classes, right? And then you're like, "Okay, fine, I could just merge that." And you go further, and you see that there's also another section under shoes that says, "For women and for men," and it's just not aggregated. It's not, uh, fixed for some reason. So if an agent kind of sees this kind of a taxonomy, what is it supposed to do? Where is it supposed to route? And a lot of the times, we are not aware of these problems until you actually go about building something and understanding it, right? So, um... And when these kind of problems, um, or, or, real human agents see these kind of problems, they know what to check next. Uh, maybe they realize that the, the node that says, "For women and for men," that's under shoes, was last updated in 2019, which means that it's just a dead node that's lying there and not being used. So they kind of know that, "Okay, we're supposed to be looking at a different node," and stuff like that. And I'm not saying agents cannot understand this, or models are not capable enough to understand this, but there are really weird rules within enterprises that are not documented anywhere, and you want to, um, make sure that the agents have all of that context, instead of just throwing the problem at them, right? Um, yeah, uh, coming back to the versions we had, routing was one where you have really high control. 
Because even if your agent routes to the wrong department, humans can take control and, you know, undo, uh, those actions. Um, and along the way, you also figure out that you probably are dealing with a ton of data issues that you need to fix and, you know, um, um, uh, make sure that your data layer is good enough for the agent to function. Uh, version two is what we said, a copilot, which is, now that you've figured out routing works fine after a few iterations, and you've fixed all of your data issues, you could go to the next step, which is: Can my agent provide suggestions, uh, based on some standard operating procedures that we have for the customer support agent, right? And it could just generate a draft that the human can make changes to. And when you do this, you're also logging human behavior, which means that how much of this draft was used by the customer support agent, or what was omitted? So you're actually getting error analysis for free when you do this, because you're literally logging everything that the user is doing, that you could then build back into your flywheel. And then we say, post that, once you've figured out that those drafts look good, and most of the times, maybe humans are not making too many changes, they're using these drafts as is, that's when you wanna go to your end-to-end resolution assistant that could, you know, um, draft a resolution that could, uh, solve the ticket as well, right? And those are the stages of agency, where you start with low agency, and then you go up high, right? Um, we also have this really nice table that we put together, which is, What do you do at each version, and what you learn that can enable you to go to the next step, and what information do you get that you can feed into the loop, right? When you're just doing your routing, you have better quality routing data. You also know what kind of prompts you need to be building to improve the routing system.
Essentially, you're figuring out your structure for context engineering and, um, building that flywheel that you want, right? And-... while I go through this, I wanna also be very clear that two things: One is, when you build with CC/CD in mind, it doesn't mean that you've fixed the problem all for once. It's possible that you've probably gone through V3, and you see a new distribution of data that you never previously imagined. But, um, this is just one way to lower your risk, which is you get enough information about how users behave with your system before going to a point of complete, um, autonomy. And the second thing is, um, you're also kind of, um, building this, um, uh, you know, implicit logging system. Uh, a, a lot of people come and tell us that, "Oh, wait, there are evals, right? Why do you need something like this?" The issue with just building a bunch of evaluation metrics and then having, um, them in production is, evaluation metrics catch only the errors that you're already awa-- uh, already aware of, but there can be a lot of emerging patterns that you understand only after you put things in production, right? So for those emerging patterns, you're kind of creating, um, um, uh, you know, a low risk, uh, kind of a framework, so that you could understand user behavior and not really be in a position where there are tons of errors, and you're trying to fix all of them at once. And this is not the only way to do it. There are tons of different ways. You wanna decide how you constrain your autonomy. It could be based on the number of actions that the agent is taking, which is what we do in this example. It could be based on topic. There are just some, um, domains where it's, uh, pretty high risk to make a system completely autonomous for, um, certain decisions, but for some other topics, it's okay to make them completely autonomous, and depending on the complexity of the problem. 
And that's where you really want your product managers, engineers, and subject matter experts to align on how to build this system and continuously improve it. The idea is behavior calibration, without losing user trust as you do that calibration.
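
The staged-agency flywheel described above can be sketched in code. This is a minimal illustration, not anything from the guests' post: the stage names, the `InteractionLog` class, and the word-overlap heuristic for "how much of the draft survived human review" are all my assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    """Stages of agency: each stage must earn the next."""
    ROUTER = 1      # agent only routes tickets; humans can undo mistakes
    COPILOT = 2     # agent drafts replies; humans edit before sending
    AUTONOMOUS = 3  # agent resolves tickets end to end

@dataclass
class InteractionLog:
    """Implicit logging layer: every human correction feeds the flywheel."""
    records: list = field(default_factory=list)

    def record(self, stage, agent_output, human_final):
        # How much of the draft survived human review is error analysis for free.
        self.records.append({"stage": stage,
                             "kept_ratio": _overlap(agent_output, human_final)})

    def edit_rate(self, stage):
        rows = [r for r in self.records if r["stage"] == stage]
        if not rows:
            return 1.0  # no evidence yet, so assume everything gets edited
        return 1.0 - sum(r["kept_ratio"] for r in rows) / len(rows)

def _overlap(draft, final):
    # Crude word-overlap proxy for "how much of the draft was used".
    draft_words = set(draft.split())
    return len(draft_words & set(final.split())) / max(len(draft_words), 1)

def ready_for_next_stage(log, stage, max_edit_rate=0.1):
    """Advance autonomy only once humans barely change the agent's output."""
    return log.edit_rate(stage) <= max_edit_rate
```

If drafts come back untouched, the edit rate drops toward zero and the system has earned the next stage; heavy edits keep it where it is.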

    3. LR

      We'll link folks to this actual post if they want to go really deep. You basically go through all of these step by step, with a bunch of examples. Everything you're describing here is about making it continuous and iterative, moving along this progression of higher autonomy, less control. And even the name, continuous calibration, continuous development, communicates that it's an iterative process. And just to be clear, this naming is an ode to CI/CD-

    4. AR

      CI/CD, yes.

    5. LR

      ... continuous integration, continuous deployment. Sweet. And the idea here is that this is the version of that for AI: instead of just integrating against unit tests and deploying constantly, it's running evals, looking at results, iterating on the metrics you're watching, figuring out where it's breaking, and iterating on that.
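
As a rough sketch of what one CC/CD cycle might look like in practice (the function shape, the field names, and the 90% deploy threshold are illustrative assumptions, not from the episode):

```python
def ccd_cycle(model_fn, eval_cases, production_logs, threshold=0.9):
    """One CC/CD iteration: run the evals you already have, then mine
    production logs for emerging inputs the suite doesn't cover yet."""
    # 1. Known-error evals catch only the failures you're already aware of.
    passed = sum(case["check"](model_fn(case["input"])) for case in eval_cases)
    score = passed / len(eval_cases)

    # 2. Emerging patterns: production inputs unlike anything in the suite.
    known_inputs = {case["input"] for case in eval_cases}
    new_cases = [x for x in production_logs if x not in known_inputs]

    # 3. Feed the loop: promote novel inputs into tomorrow's eval suite.
    return {"score": score, "deploy": score >= threshold, "new_cases": new_cases}
```

The point of the third step is exactly what Aishwarya argues: the eval suite is never finished, because production keeps surfacing inputs it doesn't cover.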

  11. 58:07–1:01:24

    Emerging patterns and calibration

    1. LR

      Awesome. Okay, so again, we'll point people to this post if they want to go deeper. That was a great overview. Before we go into a different topic: is there anything else about this framework specifically that you think is important for people to know?

    2. AR

      I think one of the most common questions we get is, "How do I know if I need to go to the next stage, or if this is calibrated enough?" There's not really a rule book you can follow, but it's all about minimizing surprise. Let's say you're calibrating every one or two days, and you've figured out that you're not seeing new data distribution patterns; your users have been pretty consistent in how they behave with the system. Then the amount of information you gain is very low, and that's when you know you can go to the next stage. It's somewhat about the vibes at that point: do you know you're ready? You're not receiving any new information. But it also really helps to understand that sometimes there are events that can completely throw off the calibration of your system. An example: GPT-4o doesn't exist anymore, or it's going to be deprecated in the APIs as well. So most companies that were using GPT-4o will have to switch to GPT-5, and GPT-5 has very different properties, so your calibration's off again. You want to go back and do this process again. Sometimes users also start behaving differently with systems over time; user behavior evolves. Even with consumer products, right? You don't talk to ChatGPT the same way you did, say, two years ago, just because the capabilities have increased so much. And people get excited when these systems can solve one task; they want to try them on other tasks as well. We built a system for underwriters at some point. Underwriting is a painful task. There are agreements, loan applications, that run 30 or 40 pages, and the idea for this bank was to build a system that could help underwriters pick policies and information about the bank, so that they could approve loans.
And for a good three or four months, everybody was pretty impressed with the system. We had underwriters actually report gains in terms of how much time they were spending, et cetera. And after three months, we realized that they were so excited with the product that they started asking very deep questions that we never anticipated. They would just throw the entire application document at the system and go, "For a case that looks like this, what did previous underwriters do?" For a user, that just seems like a natural extension of what they were doing, but the building behind it has to change significantly. Now you need to understand what "for a case like this" means in the context of the loan itself. Is it referring to people of a particular income range, or people in a particular geo, and so on? And then you need to pick up historical documents, analyze them, and tell the underwriter, "Okay, this is what it looks like," versus just saying there's a policy X, Y, and Z that you want to look up. So something that might seem very natural to an end user might be very hard to build as a product builder, and you see that user behavior also evolves over time, and that's when you know you want to go back and recalibrate.
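
The "minimizing surprise" idea can be made concrete with a toy drift check. Here, KL divergence between a baseline window and a recent window of user-behavior categories stands in for "information gain"; the metric choice and the 0.05 threshold are my illustrative assumptions, not from the episode.

```python
import math
from collections import Counter

def surprise(baseline_events, recent_events, eps=1e-9):
    """KL divergence of recent user behaviour from the baseline distribution.
    Near zero means users aren't doing anything new (low information gain)."""
    cats = set(baseline_events) | set(recent_events)
    p, q = Counter(recent_events), Counter(baseline_events)
    n_p, n_q = len(recent_events), len(baseline_events)
    kl = 0.0
    for c in cats:
        pc = p[c] / n_p if n_p else 0.0
        qc = (q[c] + eps) / (n_q + eps * len(cats))  # smooth unseen categories
        if pc > 0:
            kl += pc * math.log(pc / qc)
    return kl

def calibrated_enough(baseline, recent, threshold=0.05):
    """Consider moving to the next stage only while surprise stays low."""
    return surprise(baseline, recent) < threshold
```

A model swap or a new user question type (like the underwriters' "cases like this" queries) shows up as a spiking surprise score, which is the signal to go back and recalibrate.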

  12. 1:01:24–1:05:17

    Overhyped and under-hyped AI concepts

    1. LR

      ... What do you think is overhyped in the AI space right now? And even more importantly, what do you think is underhyped?

    2. KB

      I am, as I said, super optimistic about different things going on in AI, so I wouldn't say overhyped. But something that feels misunderstood is the concept of multi-agents. People have this notion of, "I have this incredibly complex problem; I'm going to break it down: you're this agent, take care of this; you're this agent, take care of that." And if they somehow connect all of these agents, they think that's the agent utopia, and it's never the case. There are incredibly successful multi-agent systems that have been built; there's no doubt about that. But a lot of it comes down to how you limit the ways in which the system can go off track. For example, if you're building a supervisor agent, with sub-agents that actually do the work for the supervisor agent, that is a very successful pattern. But coming in with the notion of "I'm going to divide the responsibilities based on functionality," and somehow expecting all of that to work together in some sort of gossip protocol, that's the misunderstanding. I don't think current ways of building, or current model capabilities, are there yet for building those kinds of applications. So I'd call that misunderstood rather than overrated. Underrated? It's probably hard to believe, but I still feel coding agents are underrated, in the sense that you can go on Twitter or Reddit and see a lot of chatter about coding agents. But talk to an engineer at any random company, especially outside the Bay Area, and you can see the amount of impact these coding agents could create, and the penetration is very low.
So I feel like 2025 and 2026 are going to be incredible years for optimizing all of these processes, and that is going to create a lot of value with AI.
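
The supervisor pattern Kiriti endorses, one orchestrator routing to narrow sub-agents with guardrails applied in a single place rather than scattered across peer-to-peer chatter, might be sketched like this (all class and agent names are illustrative):

```python
class SubAgent:
    """A narrow worker the supervisor delegates to (illustrative)."""
    def __init__(self, name, handle):
        self.name, self.handle = name, handle

    def run(self, task):
        return self.handle(task)

class Supervisor:
    """Single point of control: routes every task, applies guardrails once,
    and never lets sub-agents talk to each other peer-to-peer."""
    def __init__(self, agents, guardrail):
        self.agents = {a.name: a for a in agents}
        self.guardrail = guardrail

    def dispatch(self, agent_name, task):
        if agent_name not in self.agents:
            return "escalate-to-human"  # unknown route: fail safe
        reply = self.agents[agent_name].run(task)
        # Guardrails live in one place, not duplicated in every agent.
        return reply if self.guardrail(reply) else "escalate-to-human"
```

An unknown route or a guardrail failure falls back to a human, which mirrors the "humans can take control" stages discussed earlier, and it's why customer-reply control is tractable here and not in a gossip-style design.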

    3. LR

      That's really interesting, on that first point. So the idea is that you'll probably be more successful building and using an agent that can do its own sub-agent splitting of work, versus a bunch of, say, Codex agents: "Well, you do this task, you do that task."

    4. KB

      You can have agents do these things and, as a human, orchestrate them, or you can have one larger agent that orchestrates all of them. But letting the agents communicate in a peer-to-peer kind of protocol, especially in, say, a customer support use case, makes it incredibly hard to control what kind of agent is replying to your customer, because you need to shift your guardrails everywhere, and things like that.

    5. LR

      Yeah, okay. Uh, great picks. Okay, Ash, what do you got?

    6. AR

      Can I say evals? Will I be cancelled? [chuckles]

    7. KB

      [chuckles]

    8. LR

      On which-- in which category? Which, which bucket do they go?

    9. AR

      Overrated.

    10. LR

      Overrated.

    11. AR

      Yeah.

    12. LR

      Okay, go, go for it. You-- we won't let you get cancelled. [chuckles]

    13. AR

      Just kidding. I think evals are misunderstood. They are important, folks; I'm not saying they're not. But this mindset of "I'm going to keep jumping across tools, and pick up and learn a new tool" is overrated. I'm still old school and feel like you really need to be obsessed with the business problem you're trying to solve. AI is only a tool; I try to think of it that way. Of course you need to be learning about the latest and greatest, but don't be so obsessed with just building quickly. Building is really cheap today. Design is more expensive. Really thinking about your product, what you're going to build, whether it will really solve a pain point, is way more valuable today, and it will only become more true in the near future. So really obsessing about your problem and design is underrated, and rote building is overrated, I guess.

    14. LR

      Awesome. Okay,

  13. 1:05:17–1:08:41

    The future of AI

    1. LR

      Similar sort of question, from a product point of view: what do you think the next year of AI is going to look like? Give us a vision of where things are going by, say, the end of 2026.

    2. KB

      Yeah, I feel there's a lot of promise in background agents, or proactive agents, that basically understand your workflow even more. If you think of where AI is failing to create value today, it's mainly about not understanding the context. And the reason it's not understanding the context is that it's not plugged into the right places where actual work is happening. As you do more of this, you can give the agent more context, and then it starts to see the world around you and understand what set of metrics you're optimizing for, or what kinds of activities you're trying to do. It's a very easy extension from there to actually gain more out of it, and to let the agent prompt you back. We already see this with ChatGPT Pulse, which gives you a daily update of things you might care about. And it's very nice to have that jog your brain: "Oh, this is something I haven't thought about. Maybe this is good." Now extend this to more complex tasks, like a coding agent that says, "Okay, I fixed five of your Linear tickets, and here are the patches. Just review them at the start of your day." I feel that is going to be extremely useful, and I see that as a strong direction in which products are going to be built in 2026.

    3. LR

      That is so cool. So essentially, agents anticipating what you want to do and getting it going, getting ahead of you: "I've solved these problems for you," or "I think this is going to crash your site; maybe you should fix this thing right here," or "I see a spike here; let's refactor the database." Amazing. What a world! Okay, Ash, what do you got?

    4. AR

      I'm all in for multimodal experiences in 2026. I think we made quite a lot of progress in 2025, not just in terms of generation but also understanding. Until now, LLMs have been our most commonly used models, but as humans, we are multimodal creatures. Language is probably one of our last forms of evolution. As the three of us are talking, we're constantly getting so many signals: "Oh, Lenny is nodding his head, so I'll go in this direction," or "Lenny's bored, so let me stop talking." There's a chain of thought behind your chain of thought, and you're constantly altering it. With language alone, that dimension of expression isn't explored as well, so if we could build better multimodal experiences, that would get us closer to human-like conversational richness. And given the kinds of models we have, there's a bunch of boring tasks that are ripe for AI if multimodal understanding gets better. There are so many handwritten documents and really messy PDFs that cannot be parsed even by the best models today. If that becomes possible, there will be so much data we can tap into.

    5. LR

      Awesome. I just saw Demis from Google DeepMind, or whatever they call the whole org now, talking about this, where he thinks that's going to be a big part of where they're going: combining the image model work, the LLM, and also their world model stuff. Genie, I think it's called.

    6. AR

      Yes.

    7. LR

      So that's gonna be a wild, wild time. Okay, uh, last question:

  14. 1:08:41–1:14:04

    Skills and best practices for building AI products

    1. LR

      If someone wants to just get better at building AI products, what's just maybe one skill or maybe two skills that you think they should lean into and develop?

    2. AR

      I think we did cover a bunch of best practices for AI products: start small, get your iteration going well, build a flywheel, and all of that. But if you look at it from a 10,000-foot level, for anybody building today, like I was saying, implementation is going to be ridiculously cheap in the next few years. So really nail down your design, your judgment, your taste. And in general, if you're building a career, the first two or three years are always focused on execution, mechanics, and all of that, and now we have AI that can help you ramp pretty quickly. After a few years, I think everybody's job becomes about your taste, your judgment, and what is uniquely you. Nail down that part and try to figure out how you can bring in that kind of perspective. It doesn't mean you have to be significantly older or have years of experience. We recently hired someone, and we use this very popular app for tracking our tasks. We've been using it for years, and we pay a high subscription fee for it. And this guy just came to the meeting with his own vibe-coded app. He onboarded us to all of it and said, "Okay, let's start using this." That kind of agency, that kind of ownership to really rethink experiences, is what will set people apart. And I'm not blind to the fact that vibe-coded apps have high maintenance costs, and maybe as we scale as a company we'll have to replace it or think of better approaches. But given that we're a small company now... I was really shocked, because I never thought of it.
If you've been used to working in a certain way, you associate a cost with building, and I feel like folks who grew up in this age have a much lower cost associated in their minds. They just don't mind building something and going ahead with it. They're also very enthusiastic about trying out new tools; that's also probably why AI products have this retention problem, because everybody's so excited about trying out these new tools. But essentially, it's about having agency and ownership. And I think it's also going to be the end of the busy-work era. You can't be sitting in a corner doing something that doesn't move the needle for a company. You really need to be thinking about end-to-end workflows and how you can bring more impact. I think all of that will be super important.

    3. LR

      Hmm. That reminds me, I just had Jason Lemkin on the podcast. He's very smart on sales and go-to-market around SaaStr, and he replaced his whole sales team with agents. He had 10 salespeople; now he has 1.2 people and 20 agents. And one of the agents was just tracking everyone's updates to Salesforce and updating it automatically for them based on their calls, and one of the salespeople was like, "Okay, I quit." And it turned out he wasn't really doing anything.

    4. AR

      Oh, wow. [chuckles]

    5. LR

      He was just sitting around, and he's like: "Okay, this will catch me. I gotta get out of here." [chuckles]

    6. AR

      Yeah, yeah.

    7. LR

      So to your point: it'll be harder to sit around and twiddle your thumbs, which I think is really right.

    8. KB

      Yeah. To add on to that, I feel persistence is also extremely valuable, especially given that for anybody who wants to build something, the information is at your fingertips even more than in the past decade. You can learn anything overnight, that sort of Iron Man kind of approach. So having that persistence, going through the pain of learning this, implementing this, understanding what works and what doesn't, and developing multiple approaches as you solve the problem, I feel that is going to be the real moat as an individual. I like to call it "pain is the new moat." And I feel that's exactly what's super useful to have, especially in building these AI products.

    9. LR

      Say more about this. I love this concept: Pain is the new moat. Is there more there? [chuckles]

    10. KB

      Yeah. I feel that successful companies building in any new area right now are successful not because they're first to market, or because they have some fancy feature that more customers like. They've gone through the pain of understanding the set of non-negotiable things and trading them off against the features, or the model capabilities, they can use to solve that problem. This is not a straightforward process. There's no textbook for it, no known, well-trodden path to get here. So a lot of this pain I'm talking about is just going through the iteration of "Okay, let's try this, and if this doesn't work, let's try that." And the knowledge you build across the organisation, or across your own lived experiences, that pain is what translates into the moat of the company. It could be a product of evals, or something you've built, and I feel that is going to be the game changer.

    11. LR

      That is awesome. It's like, uh, turning, uh, coal into diamond.

    12. KB

      Yes. [chuckles]

  15. 1:14:04–1:26:22

    Lightning round and final thoughts

    1. LR

      Okay. Uh, I feel like we've done a great job helping people avoid some of the biggest issues people consistently run into building AI products. We've covered so many of the pitfalls and the ways to actually do it correctly. Before we get to our very exciting lightning round, is there anything else that you wanted to share, anything else you want to leave listeners with?

    2. AR

      Be obsessed with your customers; be obsessed with the problem. AI is just a tool. And try to make sure you're really understanding your workflows. So-called AI engineers and AI PMs spend eighty percent of their time actually understanding their workflows very well. They're not building the fanciest, coolest models or workflows around them; they're in the weeds, understanding their customers' behaviour and data. And whenever a software engineer who's never done AI before hears the term "look at your data," it's a huge revelation to them, but it's always been the case. You need to go there, look at your data, understand your users, and that's going to be a huge differentiator.

    3. LR

      That's a great way to close it. The AI isn't the answer; it's a tool to solve the problem. With that, we have reached our very exciting lightning round. [thunder rumbling] I've got five questions for both of you. Are you ready?

    4. AR

      Yay. Yes.

    5. LR

      All right, so you can both answer them, or pick which one of you answers; either way, up to you. What are two or three books you find yourself recommending most to other people?

    6. AR

      For me, it's this book called When Breath Becomes Air, Lenny, written by Paul Kalanithi. He was an Indian-origin neurosurgeon who was diagnosed with lung cancer, and the whole book is his memoir, written after he was diagnosed. It's really beautiful, especially because I read it during Covid, and all we ever wanted during Covid was to stay alive. There are a bunch of really nice quotes in the book, but I remember one where he was arguing against a very popular quote attributed to Socrates, "The unexamined life is not worth living," which means you really need to be thinking about your choices; you need to understand your values, your mission, and all of that. And Paul says, "If the unexamined life is not worth living, was the unlived life worth examining?" Which means: are you spending so much time understanding your mission and purpose that you've forgotten to live? I think everybody in the AI era, building and continuously going through this phase of reinventing themselves, needs to take a pause and live for a bit. They need to stop evaling life too much. [chuckles] That's what really-

    7. LR

      I was gonna say that.

    8. AR

      [chuckles]

    9. LR

      That's where my mind went. You've gotta write some evals for your life.

    10. AR

      [chuckles]

    11. LR

      Oh, my God, we've gone too far.

    12. AR

      Yep, yeah, yeah.

    13. LR

      Beautiful.

    14. AR

      That's, that's my favourite book.

    15. KB

      I like science fiction books more, so I really like the Three-Body Problem series. It's a three-book series, and it has elements grander than science fiction: life outside Earth and how it impacts the human decision-making process, elements of geopolitics, and how important, how valuable, abstract science is to human progress. When that gets stopped, it's not noticeable in everyday life, but it can cause devastating effects. So I feel AI helping in these areas, for example, is going to be extremely crucial, and that series is a nice example of what could happen otherwise.

    16. LR

      Completely agree. Absolutely love it. It might be my favourite sci-fi book, or series even, and it's three books; you have to read all three, by the way. I find that it only got really good about one and a half books in, so if anyone's-

    17. KB

      Yes

    18. LR

      ... tried it and thought, "What the heck is going on here?" Just keep reading, get to the middle of the second one, and then it gets mind-blowing.

    19. KB

      Yes.

    20. LR

      If you love sci-fi and you're in AI, you've gotta read this book called A Fire Upon the Deep by Vernor Vinge.

    21. KB

      Mm-hmm.

    22. LR

      Check it out. It's incredible. I saw Noah Smith recommend this book in his newsletter. There are sequels to it, but this is the one. It's so incredible, and it turns out it's about AGI and superintelligence and all these things. It's just so epic.

    23. KB

      Nice.

    24. LR

      And no one's heard of it.

    25. KB

      Thank you.

    26. LR

      There you go. I'm giving you one back. [chuckles] Okay, next question: What's a favourite recent movie or TV show that you've really enjoyed?

    27. AR

      I started rewatching Silicon Valley, and I think it's so true, it's so timeless. Everything is repeating all over again. Anybody who's watched it a few years ago should start rewatching it, and you'll see that it's eerily similar to everything that's happening right now with the AI wave.

    28. LR

      That's a good idea, to rewatch it. I love that their whole business was a compression algorithm. [chuckles]

    29. KB

      Yes.

    30. LR

      It's like maybe a precursor to LLMs in some small way. [chuckles] Oh, yeah. All right, Kiriti, what you got?

Episode duration: 1:26:22


Transcript of episode z7T1pCxgvlA
