Lenny's Podcast

OpenAI chair Bret Taylor: Why agents kill seat-based pricing

FriendFeed lost to Twitter as Twitter onboarded celebrities, not on product; Taylor argues the whole AI market is heading toward agents and outcomes-based pricing.

Lenny Rachitsky (host) · Bret Taylor (guest) · Christina Cacioppo (guest)
Jul 31, 2025 · 1h 28m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–4:10

    Introduction to Bret Taylor

    1. LR

      You were CTO of Meta. You were co-CEO of Salesforce. You're chairman of the board at OpenAI. How do you think the AI market is gonna play out?

    2. BT

      The whole market is going to go towards agents. I think the whole market is going to go towards outcomes-based pricing. That's just so obviously the correct way to build and sell software.

    3. LR

      This makes me think about it, I had Marc Benioff on the podcast, you guys were co-CEOs. He was extremely agent-pilled.

    4. BT

      It's so hard to sell productivity software, which I learned (laughs) the hard way.

    5. LR

      What's a story that comes to mind when you think about your biggest mistake?

    6. BT

      I was the product manager for what was called Google Local. Had a pretty tough product review with Marissa and Larry. And to not do that well with a link from the Google homepage is like kind of embarrassing.

    7. LR

      I think it's really empowering for people to hear it's possible to succeed in spite of a massive failure like this.

    8. BT

      They sort of gave me another shot to do the V2 of it that resulted in Google Maps. We got about 10 million people using it on the first day.

    9. LR

      What mindset contributed to you being successful in such a variety of roles?

    10. BT

      Waking up every morning, "What is the most impactful thing I can do today?"

    11. LR

      Today my guest is Bret Taylor. Bret is an absolute legendary builder and founder. He co-created Google Maps at Google. He co-founded the social network FriendFeed, which invented the like button and the real-time newsfeed, which he sold to Facebook. He then became CTO at Facebook. He then started a productivity company called Quip, which he sold to Salesforce for $750 million. He then became co-CEO of Salesforce. He's also currently chairman of the board at OpenAI. At one point he was chairman of the board at Twitter. Today he's co-founder and CEO of Sierra, an AI startup building agents to help companies with customer service, sales, and more. In our conversation, we cover so much ground, including what skills and mindsets have most helped Bret be so successful in so many roles, why we're all still sleeping on the impact that agents are gonna have on the business world, how coding is going to change in the coming years, where the biggest opportunities remain for startups, lessons on pricing and go-to-market in AI, the story behind the like button, and so much more. This is a truly epic conversation with a legendary builder. If you enjoy this podcast, don't forget to subscribe and follow it in your favorite podcasting app or YouTube. Also, if you become an annual subscriber of my newsletter, you get a year free of a bunch of incredible products, including Replit, Lovable, Bolt, n8n, Linear, Superhuman, Descript, Wispr Flow, Gamma, Perplexity, Warp, Granola, Magic Patterns, Raycast, ChatPRD, Mobbin, and more. Check it out at lennysnewsletter.com and click "bundle." With that, I bring you Bret Taylor. This episode is brought to you by CodeRabbit, the AI code review platform transforming how engineering teams ship faster with AI without sacrificing code quality. Code reviews are critical, but time-consuming. CodeRabbit acts as your AI co-pilot, providing instant code review comments and potential impacts of every pull request.
Beyond just flagging issues, CodeRabbit provides one-click fix suggestions and lets you define custom code quality rules using ast-grep patterns, catching subtle issues that traditional static analysis tools might miss. CodeRabbit also provides free AI code reviews directly in the IDE. It's available in VSCode, Cursor, and Windsurf. CodeRabbit has so far reviewed more than 10 million PRs, is installed on one million repositories, and is used by over 70,000 open source projects. Get CodeRabbit for free for an entire year at coderabbit.ai using code LENNY. That's coderabbit.ai. This episode is brought to you by Basecamp. Basecamp is the famously straightforward project management system from 37signals. Most project management systems are either inadequate or frustratingly complex, but Basecamp is refreshingly clear. It's simple to get started, easy to organize, and Basecamp's visual tools help you see exactly what everyone is working on and how all work is progressing. Keep all your files and conversations about projects directly connected to the projects themselves so that you always know where stuff is and you're not constantly switching contexts. Running a business is hard. Managing your projects should be easy. I've been a longtime fan of what 37signals has been up to and I'm really excited to be sharing this with you. Sign up for a free account at basecamp.com/lenny. Get somewhere with Basecamp.

  2. 4:10–8:24

    Bret’s early career and first major mistake

    1. LR

      Bret, thank you so much for being here, and welcome to the podcast.

    2. BT

      Thanks for having me.

    3. LR

      My pleasure. There's so much that I want to talk about. You've done so many incredible things over the course of your career, it just boggles the mind the things that you've done, and we're gonna talk about a lot of that sort of stuff. But I want to actually start with the opposite, I want to talk about a time that you messed up, a time that you screwed up in a big way. We have this recurring segment on the podcast I call Fail Corner, and so I thought it'd be fun to just start there before we get into all the great stuff you've done. What's a story that comes to mind when you think about maybe your biggest mistake in building a product?

    4. BT

      It may not be the biggest, but it was my first prominent mistake as a product manager at Google. So, um, it's, uh, for me it feels big because it was very formative, uh, for me as a, a product designer. So I joined Google in, uh, late 2002, early 2003, and I was one of the earliest associate product managers at the company, and first was working on the search system, uh, essentially expanding our index from one billion web pages to 10 billion, uh, which was a big deal at the time. It sort of seems quaint, uh, now. And then I did a decent job, and so my boss, Marissa Mayer, um, gave me the opportunity to lead a new product initiative, which was a big bet on me. And I was, you know, it was both an opportunity to do something for Google, but I was also being pretty scrutinized just, uh, as a young, new product manager. And the premise given to me was, work on local search. Uh, at the time, the Yellow Pages was still dominant, and while Google was really good at searching the web, it wasn't really good for finding a plumber or a restaurant just because it wasn't really a huge part of the internet at the time. So this content wasn't necessarily on the internet, and even if it was, it was, you really needed a different... Uh, you didn't really want to find, you know, plumbers in Manhattan, you wanted to find plumbers in San Francisco, if you're me. And so, it was a, kind of a, both a technical problem and a product problem and a content problem. We launched a, the first version of that product that, uh, I was the product manager for was called Google Local. And it was, you know, the... I'll be a little bit more critical now than I might have been at the time, but it was, uh, a little bit of a, a me too version of Yahoo! Yellow Pages.
You know, it was sort of a, um, essentially grafting on Yellow Pages search on top of Google Search, and with the properly crafted query you could, you know, see those listings at the top of your search results, but a standalone, uh, site at local.google.com. And it was actually, it was an important enough initiative that actually there was a... on the Google homepage, it had, you know, web images and, and Local was up there as well.

    5. LR

      Wow.

    6. BT

      So, you know, it's got top billing. I mean, you could put almost any link on the Google homepage and get a lot of traffic to it. And despite that, it didn't do that well. And to not do that well with a link from the Google homepage is, like, kind of embarrassing, you know? It's, it's, uh... I mean, there's not a, there's not much one can do other (laughs) like, more than giving you that kind of traffic to give you an at bat as a, as a product leader or product manager. And, um, the product is fine, like, it worked, but it really wasn't differentiated, and, uh... And I think in many ways, uh, I think, again, I think I've had these reflections more since then than I had at the time, but why use this instead of Yahoo! Yellow Pages? But more than anything else, like, why use this instead of the Yellow Pages, you know? It was sort of a digital version of, of something that had come before. Had a pretty tough product review with Marissa and Larry and others, and it was fine. I wasn't, like, about to get fired or something, but it was like, you know, the, uh, I don't know, the shine on the, uh, on my reputation was sort of, uh, waning a little bit, and they sort of gave me another shot to do, like, a V2 of it, uh, and, uh... And, and I sort of got the impression... It wasn't like my last shot, but it was sort of, you know... I- I certainly was feeling a little dejected from going from sort of a hotshot new PM to a new thing. So we spent a lot of time thinking about how can you make something that's just much more compelling and, and not just sort of a digital version of, of, uh, the Yellow Pages and not just so, so similar to some of the other products out there.

  3. 8:24–11:57

    The birth of Google Maps

    1. BT

      And that ended up being the thread that we pulled, uh, that, that resulted in Google Maps. Um, we had, uh, licensed from MapQuest the ability to put, like, this little map next to the search results. It was always the ugliest part of the product, and we always, you know, made sort of these, like, backhanded comments about it internally. And we spent a lot of time saying like, "What if we sort of inverted the hierarchy here and made the map the canvas?" We ended up finding Lars and Jens, uh, Rasmussen, who had been working on this Windows mapping product, and we sort of, uh, got them into the company, and started exploring this space. And, and, uh, it ended up where through that exploration we ended up integrating a lot of different products. We ended up integrating mapping, local search, driving directions. Like, all of these products at the time were actually separate product categories, and ended up with something that kind of, uh, redefined the industry and, and certainly my career. But it took kind of, uh... I think for me as a product leader, it changed the way I think about product, just because there's sort of feature and functionality and then there's like, "Why should I use this thing in the first place?" And it was notable, there was a couple of interesting moments. I mean, when we launched Google Maps, we got about 10 million people using it on the first day, which at that scale of the internet at the time was huge. And then in August of 2005, we integrated satellite imagery from a recent acquisition called Keyhole, which became Google Earth, and we got 90 million people using it on the same day. Everyone wanted to look at the top of their house, you know, when the imagery came out.

    2. LR

      (laughs) Mm-hmm.

    3. BT

      And it was really interesting, because there's so many subtle product lessons in there. Um, you know, first is, I think as you have these new technologies, rather than literally digitizing what came before, if you can create an entirely new experience, it, it creates, it sort of answers the question for a new customer like, "Why should I give this the time of day?" You know? And so, really disassembling the Lego set and reassembling it into something new rather than just digitizing what was there before, certainly that was a lesson I think in Google Maps. It, it really was native to the platform in a way that, like, a paper map couldn't be, you know? And, and that was, like, a, a really meaningful breakthrough. Um, and then with satellite imagery, it honestly wasn't the most important part of Google Maps, but it was sort of the sizzle to the steak, and it created, uh... You know, I don't think the term viral was a thing people said back then, but it created a viral moment. We were on Saturday Night Live, which is the coolest thing. Andy Samberg, in, I think it was called Lazy Sunday, you know, rapped about Google Maps, and Lars and I were texting each other, "We did it." (laughs)

    4. LR

      (laughs)

    5. BT

      "We're on Saturday Night Live. Mission accomplished." And it was also showing that, you know, as your

    6. NA

      (cheery music)

    7. BT

      ... there's... thinking about products, there's the, you know, why you decide to use a product and then what is this, the enduring value? And those are deeply related, but not all the same thing. And I just learned so many lessons that I took with me for, like, every subsequent product, um, that I worked on.

    8. LR

      That is, that is an awesome story. One, I, I think it's really empowering for people to hear. Uh, even you, Bret, who I'm gonna share all the successes you've had, have had a massive failure with, like, the CEO of Google, Marissa Mayer, just like, "Bret, you screwed up. This is..." And it was, like, such a big bet. So one, just, uh, like, it's possible to succeed as you have succeeded in spite of a massive failure like this. And then some of the product lessons you shared, just to highlight a few of these things, 'cause I think this is great, is just, uh, you will often not win if you just make something that's kind of a better copy of something else. What you wanna look for is something that is an entirely new experience, something that's differentiated, something that's a lot more compelling.

  4. 11:57–31:30

    Lessons from FriendFeed and the importance of honest feedback

    1. LR

      Um, let's flip to talk about what you've learned from actually being very successful at a lot of things. So I was looking at your resume, and you basically have been very successful at every level of the career ladder, and in such a huge variety of roles. So let me just read a few of these things for folks that aren't super... familiar with your background. You were CTO of Meta, you were co-CEO of Salesforce. You were also CPO and COO at Salesforce. At Google, you joined as an associate product manager, where you famously... You didn't mention this, but you rebuilt Google Maps in a weekend. (laughs) We're not gonna talk about that. You were chairman of the board at OpenAI. You were chairman of the board at Twitter. You have also founded three different companies; one social network, one productivity docs company called Quip, and now Sierra. Fun fact, at FriendFeed, you invented the like button. I don't know if people know that. And also just the newsfeed. I'll just throw that out there (laughs) to give you some credit. So, you're basically an associate product manager, an IC product manager, an engineer, CPO, COO, CTO, CEO of three different companies, including a public company. Very rare (laughs) that somebody is successful at all these types of roles and all these levels. So, let me just ask you this question. What mindsets, or habits, or just ways of working have you worked on building in yourself that you think have most contributed to you being successful in such a variety of roles and levels?

    2. BT

      Yeah. It's actually something I am proud of. I, I like the fact I've worn different hats. It's actually amusing, when I meet colleagues that I've known from one of those jobs, they'll often think of me through the lens of that job, you know?

    3. LR

      (laughs)

    4. BT

      And so, uh, you know, I'll go to meet folks from Facebook and they think of me largely as an engineer. They'll meet folks from Google, they think of me largely as a product, you know, person. At Salesforce, you know, a lot of the folks there interacted with me as like a, for lack of a better word, a suit. (laughs) You know? Like the boss, and I, I'm not sure they think of me as a, as an engineer at all, even though, you know, I was still probably coding on the weekends for fun. And one of the things that is a principle for me is to have a really flexible view of my own identity. I really think of myself... I probably would self-describe as an engineer, but more broadly, I think of myself as a builder, and I like to build products, and, and I think companies are one of the most effective ways to build products. There's also things like open source, but I think, uh, I'm a huge believer in the confluence of technology and capitalism to produce, you know, just incredible outcomes for customers. And as a consequence, I think to, to really build something of significance, uh, you know, I think to be a great founder, you really need to be able to, uh, not have such a ossified view of your identity that you can't transform into what the company needs you to be at that point. And every founder you'll talk to, you know, one day... I think selling is a big part of being a founder. You have to sell investors on wanting to invest in your company. You have to sell candidates on wanting to work at your company. You have to sell customers to want to use the product that your customer produces. Um, you have to have good design taste, um, not just for your product, but for your, your marketing and, you know, essentially soliciting new customers. Uh, you have to have, uh, uh, good engineering. I mean, if you're building a technology company, the technology comes first. It's, you know, why this industry is so transformative. 
      I probably credit, and I've told this story before, but I am, I'm very grateful for her, but I probably credit Sheryl Sandberg for, um, really changing the way I approached new jobs. Um, the story, and I might be embellishing a little bit, but I think it's broadly accurate. Um, so I had, uh, just become the chief technology officer of Facebook, and when I first got the job, it was, uh, sort of the flavor of CTO where I had a relatively small group reporting into me, but, uh, uh, contributed almost as like a very senior kind of architect, you know, on, on a number of projects. And then at some point, uh, Mark Zuckerberg reorganized the company and kind of split it into a bunch of different groups, and I ended up with a very large group, uh, under me. And I was essentially running our platform and mobile groups, uh, products, design, engineering. So, I went from, you know, a handful of reports to like, I don't know, over a thousand or something. It was a, it was a big group. And it was the largest, you know, management job I had. I had become a manager at Google, but a modest, uh, modest team. And so, uh... And I was doing okay, but not great. And I had this moment where Sheryl saw me. I was, I think I was editing a presentation for a partner just 'cause the, the presentation I got didn't meet my quality bar, and I was editing it and sort of griping about it. She sort of pulled me into a room and, um, kinda gave me a talking to, like a little bit about holding my team to as high of a standard as I have. Uh, if someone wasn't, you know, meeting my expectations, you know, what was my plan to, like, manage them out of the company? And, or, you know, just like kind of giving me management 101. Uh, and, and she, uh, she's a remarkable mentor in the sense she can kinda give you feedback that's very direct and, like, often a bit uncomfortable, and, uh, but you know she cares about you, you know? And so it was the type of feedback you listen to.
I sort of went home that night and I was kinda stewing on it and, like, not very happy. I was like, you know, you get sort of naturally a little defensive in those moments. Like, "Is that really true? Am I really fucking it up or is it, you know, is she overreacting?" And then I woke up the next day and I was like, "No, she's right." And I had realized sort of this subconscious, like, limiter that I, that was limiting my success in the job, which is I was trying to conform the job to the things I thought I liked to do. So, I was spending a lot of my time on some product and technology things that were... I was passionate about, thinking, you know, "I'm the boss," you know? "I should, you know, focus on what I want to focus on." Instead of thinking about, "Okay, I'm running the mobile and platform teams at, at Facebook. What's the most important thing to do today to make our mobile plat- mobile and, and developer platform successful?" And when I reframed the job that way, I did different things. And the thing that was the biggest pleasant surprise to me was I liked it. Uh, you know, I thought I liked engineering and product, but in fact, when I...... you know, changed an organization and it turned out to be more successful, I derived a great deal of joy from seeing that success. Uh, you know, our developer platform had a lot of partners and, you know, when there was an issue there, I'd spend time on partnerships and it worked and, you know, our platform became healthier, the partner became more successful. I was, took pride in that success. And then I just started being better at my job and I realized that, um, the actual act of engineering or product design or all the things I thought I liked, what I really liked is impact. And, and, uh, and so that conversation led to my sort of waking up every morning, sometimes literally, but certainly in the broadest sense of the word saying, "What is the most impactful thing I can do today?" 
      And really thinking, uh, almost like a, if you had an external board of advisors, you know, telling you, like, where are the, what are the things where if you focus on them, you can maximize the likelihood that what you're trying to achieve will happen? And sometimes it's recruiting, sometimes it's product, sometimes it's engineering, sometimes it's sales. And I've become much more self-reflective just about what is important to work on, and I have become much more receptive to doing things that I previously would have said aren't my favorite things to do, because I derive so much joy from having an impact that I enjoy a lot more things now. And, uh, so I really credit Sheryl. I'm so grateful. And actually it's interesting, I think a lot about this when I give feedback to people now, just, like, uh, those moments that can kind of like change the trajectory of your career, uh, I mean, I give her all the credit for it.

      LR: There's so many people that share stories of Sheryl Sandberg giving them advice and that changing their life.

      BT: Yeah. (laughs)

      LR: What a, what a mensch. Yeah. My biggest takeaway from this, uh, which is this question of what is the most impactful thing I could do today, such a powerful heuristic just to kind of keep in mind. To your point, you may realize you don't want to be doing sales or hiring, but if that's the most impactful thing and you end up doing it, you may realize, "I like this and I'm good at this." Can we maybe double-click on that though for a sec?

      BT: Absolutely. I think it's really hard. Um, one of the dangers for founders and product managers, uh, but I think particularly for founders, is incorrect storytelling. (laughs) Uh, people don't like my product because of X, and if you tell that to yourself and you tell it to your team, all of a sudden it goes from being an intuition to being a fact.
      Um, well, you better hope you're right, because if you orient your strategy around fixing that problem, uh, and you're wrong, your company is going to fail. Um, so, you know, why did you lose a deal? Uh, you know, you could talk to the salesperson who was on the account or perhaps maybe a product manager was involved in the conversation. It's very important to have intellectual honesty in those moments, because you could say something like, "Oh, uh, they didn't buy it because the platform costs too much." Um, that, and that's something a salesperson might say. Maybe the real reason is they didn't actually see much value in your platform, so it was communicated to the salesperson as it was too expensive, but in fact, it, the problem was product differentiation, and you could end up going into a discussion on pricing when in fact there was a much deeper, much harder problem to solve there, but it's not, you know, just like when you break up with someone, you don't say, "It's because I don't like you anymore." You say, "It's not you, it's me." You know, you say all these sort of pleasantries because we're all social, uh, animals and y- and you want to be pleasant with the people that you, around you. So, you know, literally taking what a customer says or what a user says in, like, a focus group or a usability study is rarely, uh, correct. Um, it often is, uh, uh, related to what the truth is, but it's very important to get right. And so I think one of the things I've observed with first-time founders in particular is, you're often a single issue voter based on your skill set. So if you're a great engineer, the answer to almost every problem in your business is engineering. If you're a product designer, the answer almost to, you know, y- the, the proverbial redesign, I joke it's like the dead cat bounce of a consumer product. Like, a re- this next redesign will fix all of our problems. I, I don't know if it's ever, ever worked.
      Um, and then y- if you, I met a lot of entrepreneurs who were like, come from sort of a business development background, they're always thinking about partnerships and, and, you know, "Oh, if we just get this partnership done for this distribution channel, everything's gonna change." And I think it's really important when you're a founder to be self-aware that you will naturally subconsciously pick the thing that is your strength, your superpower, as a solution to more problems, and in fact, if that, you think that's a solution to your problem, it may be right, but you probably by default should question it. Like, if you think the thing that you've been doing your whole career is the way to fix your problem, it's at least 30% likely that you've chosen that because of comfort and familiarity, uh, not truth. And so I think it's, like, one of those skills I think is, uh, it really goes around to, like, do you have a good co-founder? Do you have a good, you know, leadership team? Uh, if you're a product manager, like, your partner in engineering, your partner in marketing, you really want to have very real conversations, um, to ensure that you're actually working on the right, the actual correct thing, and I think it's easy to say, "What's the most impactful thing to do today?" My guess is a lot of people try that, they'll lie to themselves more often than not (laughs) and it's a very challenging question to answer. The question's interesting. Being able to answer it accurately is actually the hard part.

      LR: This feels like such an important lesson you've learned. Is there an example that comes to mind where you learned this the hard way, where you actually ended up getting the wrong answer?

      BT: Oh, yeah. Well, you're spending this whole thing on my failures, but I'm fine with that. (laughs) Um, so...

      LR: You've had too much success.

      BT: FriendFeed was my first company. Um, at our peak, we had 12 employees, um, 12 of the best people I've ever worked with.
      Um, started the company with Jim Norris, who was an engineer I've known since Stanford, and Paul Buchheit and Sanjiv Singh, who, um, Paul started Gmail, Sanjiv was the first engineer on Gmail. So we had the Google Maps people and, like, Gmail people. It was, like, pretty awesome, uh, founding team. We made a social network. As you said, we sort of invented a lot of concepts that, um, became popular in, in the news feed. We invented the like button. It was really neat. It was a fun time. We were only really popular in Turkey, Italy, and Iran, and at one point we were blocked in Iran so we were only popular in Turkey and Italy and Silicon Valley. Um, to this day actually, a lot of folks in Silicon Valley are like, "I love, love FriendFeed." I'm like, "That's awesome." But it wasn't really a successful business. There was a ... We were a follower-oriented social network, um, not a friendship-oriented social network, which meant a lot of our content was more like, uh, X or Twitter than it is Facebook in that respect, and a lot of sharing newspaper articles, interests, scientific communities, things like that. And, uh, there was a period when, um, Twitter, uh, which was one of our competitors at the time, there was a lot more social networks at the time. Uh, I, I'm probably screwing this up a little bit. I think Obama, Ashton Kutcher, and, like, Oprah Winfrey all went on Twitter, like, in a, in a summer, and we just got our ass kicked. You know, it's like ... And it was a great example of you ... I think 11 of those 12 people were engineers, and we were just making product. And, uh, I think it was Biz Stone, I mean, if you talk to the Twitter folks they could give you the history on this, but I think Biz was really focused on, like, getting celebrities and public figures onto Twitter, which is totally obvious. Like, if you have a, a social service that's oriented towards following people, put some people on there worth following. You know, like ...
      And instead, you know, we were exclusively focused on polishing the product, and we actually, I think, you know, at our, uh, uh, sort of peak of popularity, we were very confident, just, you know, I think it was a time when, like, Twitter had the fail whale and was down half the time and people couldn't even use it. And, you know, we, our product, we were innovating faster, we had more features, people liked it. We could ... And, and we were up 100% of the time. And we totally lost for no reason related to product at all. And, uh, and it was an example of, you know, I think, uh, somewhat famously, not, like, a lot of great entrepreneurs have come out of Google, because once you're ... Like, Google was so successful, I think it's hard as a product manager to sort of see, like, distribution and all, product design and even business model when you have AdWords and, you know, money's raining from the sky. It's hard to, you know, uh, you, there wasn't as much sort of scrutiny, and I think, like, it's folks like the PayPal mafia, I think learned a lot more about entrepreneurialism than, like, a typical PM at Google. So I ... We're just getting punched in the face, you know, and learning this the hard way. And so that was probably the most prominent example of it, you know, and I think we probably did have a ... I can tell you all the flaws of that product, but I don't think that was, like, the reason why we lost. There's a lot of reasons. I think there was a lot of flaws with the product. But it was a lot of other stuff. And so, um, I've learned, like, accumulated these skills over time. When I say the hard part of that question is answering it correctly, is it's hard when you don't have experience in something to have intuition in it. Um, so I think if there was probably a structural flaw, it wasn't that I ... I don't know if I could've figured out how to reach out to, uh, Ashton Kutcher (laughs) if I wanted to, right?
You know, it's not like he's on my, you know, uh, on my Rolodex. But I probably wasn't soliciting advice from the right people. You know, I think that what's great about the technology industry is there's a lot of advice. Choosing whom you listen to is actually quite difficult. But I think we were somewhat myopic. You know, we're kind of in our own little world, uh, uh, creating this product, and we weren't asking people to, like, from the outside in to say, like, "What, what are you seeing that could go wrong? What are you seeing that could go right? What are you seeing in the industry that we're not doing that you think we might wanna do?" And this is why boards are important. This is why, you know, finding the right advisors, the advisors who actually tell you what you, uh, not to say want to hear, but you need to hear. I think that was probably the missing part. I'm not sure I was great at marketing at the time, but if I had solicited the right advice, I would, you know, uh, could have learned that that was a shortcoming. Um, and I think that was, uh, a deep lesson I took from that, and I'm a huge believer in, in boards and- and getting good advice.

    5. LR

      Any kind of heuristics or advice for people (laughs) to know whose advice to listen to? What do you pay attention to when you're like, "Okay, ignore this person, but listen to this person"?

    6. BT

      Yeah. That one's tough. It definitely comes down to good judgment, and being a judge of people's character. One thing that is particularly hard is that there's not a strong correlation between the confidence with which someone expresses an opinion and the quality of that opinion. I don't want to say it's inversely correlated, but it's funny, with all the podcasts out now, on topics I know a lot about, sometimes the most eloquent, confident statements are the least accurate, and they sound extremely persuasive. So it does require very good judgment. One thing is not just asking for advice but asking people, "Who should I talk to to get good advice?" You'll find some common answers there, and that's often a really strong signal of good judgment. And then one thing I've found is, when you ask for advice, don't just ask what to do but why. Be like an obnoxious two-year-old kid: "Why, why, why, why, why?" Really try to understand the framework someone is using to give you advice. The interesting thing about advice is that people are often extrapolating from relatively few experiences. They'll say, "Never do this," or, "Always do that," because they had one experience where something backfired or could have gone better. So it's a useful anecdote, but if you don't ask why and understand that they had one experience and here's what happened, it can come across as a rule when in fact it's anecdata. And if you ask three people for advice and they all give very similar answers, you can construct a kind of first-principles framework from which that advice emerges, and when you start applying it, you're applying it with a degree of nuance that you couldn't if you were just following a rule.
So one is that it does come down to good judgment, and I don't know how to teach that. I'm a huge believer in good judgment; it's one of the things I hire for. It probably comes from a mix of self-reflection. You really need to hold yourself accountable as an entrepreneur, as a product manager. If you made a bad decision, spend time reflecting on it. Really try to understand why, and always try to improve your judgment. At the end of the day, that is why you are a good entrepreneur or a good product manager. And number two, when you get advice, really understand where it's coming from and why, so that you can form your own independent view of where that advice came from. And recognize that no one's advice is statistically significant, or very rarely is it. If you're getting advice on investing from Warren Buffett, okay, it's statistically significant. But most advice is, like, something happened to you once and you have regrets (laughs), so...

    7. LR

      I love that you're like, "I don't, I don't know if I have a great answer," and then you just give us an incredible answer to this

  5. 31:3045:26

    The future of coding and AI’s role

    1. LR

      question. I wanna go in kind of a different direction. You mentioned that you describe yourself as an engineer, and I heard you still code to relax. Let me ask you a question that a lot of people in college are thinking about: do you think it still makes sense to learn to code? Do you think this will significantly change in the next few years?

    2. BT

      I do. Studying computer science is a different answer than learning to code, but I would say I still think it's extremely valuable to study computer science. I say that because computer science is more than coding. It's understanding things like Big-O notation, complexity theory, algorithms, why a randomized algorithm works, why of two algorithms with the same Big-O complexity one can in practice perform better than the other, and why a cache miss matters. There's a lot more to coding than writing the code. The reason I think that is that the act of creating software is going to transform from typing into a terminal or into Visual Studio Code to operating a code-generating machine. I think that is the future of creating software. But operating a code-generating machine requires systems thinking, and computer science (there are other disciplines as well) is a wonderful major for learning systems thinking. At the end of the day, AI will facilitate creating the software, and it may do a lot more in the next few years than we can even imagine. But your job as the operator of that code-generating machine is to make a product or to solve a problem, and you really need great systems thinking. You're going to be managing a machine that's doing a lot of the tedious work of making the button or connecting to the network. But at the intersection of a technology and a business problem, you're trying to build a system that will solve that problem at scale for your customers. And that systems thinking is always the hardest part of creating products. I'll give you a cheesy, simple example, but I think it's representative.
At Facebook, we spent a lot of time designing the newsfeed. And if you had a really, really good designer and they showed you, at the time, a Photoshop mockup of the newsfeed, it was always beautiful. The family in the photo was happy and the photo was perfect. The posts were all perfectly grammatically correct and of a completely normal length. The comments, the like button, everything was just perfect. Then you'd implement that design, look at your own newsfeed, and it looked like shit, because it turns out not everyone's photos were made by a professional photographer. The posts were all these different lengths. The comments were like, "You suck," and so on. And all of a sudden you realize that designing a newsfeed in Photoshop is the easy part. You need to actually design a system that produces a delightful experience, both in content and visual design, given input you don't control. And that's a system. What we did practically (I'm sure it's changed a lot since I left in 2012) was make designers show their newsfeed designs with real, messy newsfeed data (laughs) rather than anything artificial, because it forced the process to be more realistic. I say that because, whether AI is writing the code or doing the design or all these other things, you need to learn how to hold a system in your head. You need to understand the basics of what's hard and what's easy, what's possible and what's impossible. AI can help you do that too, by the way. But I do think that's a really useful skill.
In general, with the advent of AI agents and AI approaching superintelligence in certain domains, the tools with which we do our jobs will change a lot. I think it's very important to have a very loose attachment to the way we do our jobs. That story we won't talk about, when I rewrote Google Maps: everyone talks about that story, I think because Paul Buchheit told it on some podcast and it made the rounds. I think it's going to end up a vestige of the past, almost like the human calculators at NASA before computers were invented. "Wow, a person was a calculator? Whoa, that's fun. Tell me that story." What I was good at will no longer be useful in the future, or certainly not valuable, and that's okay. So we need to hold a really loose view of it. But the idea that you shouldn't study these disciplines is like people saying, "I don't wanna study math 'cause I'm not gonna use it in my career." Well, studying math is quite important. It teaches you how to think; it teaches you how the world works, physics, math. And I think computer science, at least its foundations, will continue to be the foundation of how we build software. Understanding, when you're interacting with something that's smarter than you, producing code you might not completely understand, how you constrain it and how you get it to produce the outcomes you want will actually require a lot of sophistication.
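Taylor's point that two algorithms with identical Big-O cost can perform very differently in practice often comes down to memory access patterns. A toy Python sketch of the idea (illustrative only: in CPython, interpreter overhead masks most of the cache effect that a C or Rust version of these loops would show dramatically):

```python
from array import array

N = 512
# one flat buffer of doubles, laid out row by row
buf = array("d", [1.0] * (N * N))

def sum_row_major(a, n):
    # walks memory in layout order: consecutive addresses, cache-friendly
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += a[i * n + j]
    return total

def sum_col_major(a, n):
    # identical O(n^2) arithmetic, but each step jumps n elements,
    # so for large n nearly every access touches a new cache line
    total = 0.0
    for j in range(n):
        for i in range(n):
            total += a[i * n + j]
    return total
```

Both functions return the same answer; only the traversal order differs, which is exactly the kind of thing Big-O notation cannot see.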

    3. LR

      That's such a great answer. There's always this sense of a binary: should I learn to code or not? And your point is: learn to understand how engineering works, how systems work, what your code does and how it all interconnects, but the way you actually do the coding at your desk will change significantly. This reminds me of something you mentioned on a podcast recently, this idea that there's gonna be, or there should be, a new programming language designed for LLMs rather than humans. Can you talk about that? 'Cause I think a lot of people aren't thinking about it.

    4. BT

      I don't know if it's a language. I would call it a programming system-

    5. LR

      Hmm.

    6. BT

      ... because I think language might be too limited.

    7. LR

      Mm-hmm.

    8. BT

      My reductive version of the past 40 years of computers, maybe more: we created the hardware, then we created punch cards, which in the mid-to-late '70s were how you told a computer what to do. Then we invented early operating systems and time-sharing systems, and from the invention of things like Unix at Bell Labs and Berkeley, you ended up with the C programming language, Fortran, and a lot of higher-level programming languages. I think Fortran and then C. We've moved up the layers of abstraction, so no one does punch cards anymore, obviously. A few people write assembly language. Some people write C, some people write Rust, but a lot of people write Python and TypeScript and things like that. And as we've invented more and more abstractions, we've made it easier to do high-leverage things. Look at how remarkable Google was back in the day, or Google Maps: you could probably give a lot of React programmers the task of making a draggable map now, and a lot of them could do it. That was true R&D back in the day. When Salesforce was created in 1998, just putting a database in the cloud was hard; that alone was a technical moat that is now trivial with Amazon Web Services. That technical moat is comically narrow now, but the product moat is quite large. So if the act of writing code goes from something very costly to something whose marginal cost approaches zero, how many of the abstractions we've built are based on human programmer productivity? I think a ton.
I always laugh that Python is probably the most common generated code, just because of how much of it is in the training data, and data scientists love Python; I love Python too. But it's a comically bad thing for AI to generate, because it's one of the most inefficient programming languages of all time. There's the global interpreter lock, and it's just slow; I've written a lot of high-scale web services, and it's quite slow. And it's very hard to verify. It's not as bad as Perl, but if you have a big Python program, how many errors will you find at runtime versus before releasing it? Python was designed to be very ergonomic, to almost look like pseudocode, so humans like me could write code in a delightful way. That's why data scientists love it so much. So as we move to a world where, let's postulate (and I'm not sure this will be completely true), we're not going to write a lot of code as people, we're going to be operating these code-generating machines, we probably don't care how ergonomic the programming language is. What we care about is: when this machine generates code, do we know it did what we wanted it to do? And if it doesn't, can we change it easily? I think there are a lot of insights in programming languages that could serve this. Rust, I think, is interesting: if I asked you to look at a C program and say whether it leaks memory, you probably couldn't do it that well, because it's really hard, and if it's a million-line C program, it's very, very hard. If I asked you to verify that a Rust program doesn't leak memory, you would just have to compile it. Because it has compile-time memory safety, the act of compiling successfully tells you that's true.
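The "errors at runtime versus before releasing" point can be shown in a few lines. This sketch is hypothetical (the function and values are not from the episode): the module loads without complaint, and the bug only appears on the path that actually runs.

```python
def apply_discount(price, rate):
    # nothing checks the argument types before this line executes
    return price * (1 - rate)

# The file parses and imports cleanly. The bug only surfaces at
# runtime, when the faulty call is actually reached:
try:
    apply_discount("100", 0.1)  # price is accidentally a string
except TypeError as exc:
    print(f"caught only at runtime: {exc}")

print(apply_discount(100.0, 0.1))  # the happy path still works
```

A statically typed language would reject the bad call before the program ever ran, which is the trade-off Taylor is pointing at.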
I think we need more things like that, because if an AI is generating this code, then if you have to read every line, that's going to be the limiting factor for producing the code. Or worse, you're just not going to read every line, and you're going to emit a bunch of unsafe, unverified code into the wild. So the question is: how do you give humans as much leverage as possible, which means using computers to do the work on your behalf? The simplest form of this is AI supervising AI and doing code reviews, and that's great; self-reflection is a really effective way of improving the robustness of an AI system. But if it doesn't matter how tedious it is to write the code, you can probably layer on techniques that are somewhat out of fashion, like formal verification, unit testing, other things. And if you layer all these on... I sort of think about it like the guy in The Matrix with the green letters coming down. How can I, as the operator of the code-generating machine, produce incredibly complex, incredibly large-scale software incredibly quickly and know that it works? If you start with that as your design center, you'd probably change the languages, the systems, all of it, and you're going to bring a lot of techniques to bear. What's really fun is that you can loosen a lot of constraints. Coding is free, okay, so that's neat. With that in mind, what do you wanna do? What would be best suited for the language, the compiler, for testing, for self-reflection, for supervisor models, all these things? I think that's more of a programming system than a language. But when we create something like that, it can really enable creators and builders to make incredibly robust, incredibly complex systems.
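One minimal version of layering verification onto generated code: treat a generated function as untrusted and gate it behind machine-checkable properties instead of reading every line. Both functions below are hypothetical stand-ins, not anything from the episode.

```python
def generated_dedupe(xs):
    # stand-in for AI-generated code we don't want to review line by line
    seen, out = set(), []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_dedupe(fn, cases):
    # properties a reviewer would otherwise eyeball, checked mechanically
    for case in cases:
        out = fn(case)
        assert len(out) == len(set(out)), "duplicates remain"
        assert set(out) == set(case), "elements lost or invented"
        # surviving elements must keep their original relative order
        assert all(case.index(a) < case.index(b)
                   for a, b in zip(out, out[1:])), "order changed"
    return True

check_dedupe(generated_dedupe, [[1, 2, 1, 3], [], ["a", "a", "b"]])
```

The checker never reads the implementation; it only constrains the output, which is what makes this approach scale when the code itself is cheap to regenerate.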
And I'm super excited about vibe coding, but I don't know if generating a prototype has ever been the limiting factor in software. It's actually building increasingly complex systems and changing them with agility. If you look at the famous Netscape 1.0 to Netscape 2.0 rewrite, a lot of people attribute part of their failure against Internet Explorer to that. Making these things is not hard; maintaining them is hard, and ensuring they're robust is hard. I think we're in the very early phases of defining what this new system for developing software looks like, and I'm very excited to see what emerges.

    9. LR

      I feel like we're definitely living in the future when someone like you is suggesting we build a Matrix-like experience and that's potentially gonna be the future of (laughs) coding and building (laughs). I can't wait for that. It feels like a great opportunity and a fun project. This episode is brought to you by Vanta, and I am very excited to have Christina Cacioppo, CEO and co-founder of Vanta, joining me for this very short conversation.

    10. CC

      Great to be here. Big fan of the podcast and the newsletter.

    11. LR

      Vanta is a longtime sponsor of the show, but for some of our newer listeners, what does Vanta do and who is it for?

    12. CC

      Sure. So we started Vanta in 2018, focused on founders, helping them start to build out their security programs and get credit for all that hard security work with compliance certifications like SOC 2 or ISO 27001. Today we help over 9,000 companies, including some startup household names like Atlassian, Ramp, and LangChain, start and scale their security programs and ultimately build trust by automating compliance, centralizing GRC, and accelerating security reviews.

    13. LR

      That is awesome. I know from experience that these things take a lot of time and a lot of resources and nobody wants to spend time doing this.

    14. CC

      That is very much our experience, both before the company and to some extent during it. But the idea is that with automation, with AI, with software, we're helping customers build trust with prospects and customers in an efficient way. And you know our joke: we started this compliance company so you don't have to.

    15. LR

      We appreciate you for doing that. And you have a special discount for listeners. They can get $1,000 off Vanta at vanta.com/lenny. That's V-A-N-T-A .com/lenny for $1,000 off Vanta. Thanks for that, Christina.

    16. CC

      Thank you.

  6. 45:2648:46

    Preparing the next generation for an AI-driven world

    1. LR

      Okay, one more question along these lines, and then I wanna zoom out on where AI is heading. Something I love to ask folks like you who are at the cutting edge of AI is what you're teaching your kids. I know you have kids, and I feel like the world is gonna be very different when they grow up. What are you encouraging them to learn that's maybe different from previous generations, to help them be successful in a world of AI abundance?

    2. BT

      I don't know if I'm teaching them differently, but I'm really trying to encourage them to make AI a part of their lives. I was reflecting: when I took the AP calculus exams, AB and BC, in '97 and '98, I could use a graphing calculator. I haven't done the research (I was meaning to plug this into ChatGPT before our conversation; I'll do it after), but did the calculus exam change before and after they allowed the calculator? I assume it did. Essentially, when you allow the calculator in the exam, you need to make sure none of the questions reward having a calculator or not, which forces you to rethink the problems to test calculus knowledge that doesn't benefit from rote arithmetic or the other things you can do on a graphing calculator. A lot of education doesn't presume you have a superintelligence in your pocket (laughs). If you ask someone to write an essay on a book they read, they could probably hallucinate one pretty easily from one of the big providers like ChatGPT, and if they're skilled enough at prompting, maybe even their teacher won't know it was written by an AI. So what do you do? How do you teach kids differently? It's really hard for teachers right now, because we haven't gone through the transition of adding calculators to the exams. A lot of the mechanisms we have to evaluate students are broken by the existence of ChatGPT and the like. So we're in a very awkward phase, but I think we can still teach kids how to think and how to learn, and our education system can catch up. I actually think these models can be one of the most effective educational tools in history. I don't know if you're a visual learner or a reading learner. I like to read; I didn't love going to lectures.
I don't learn that well from them; I like to read the book. If you have a teacher who doesn't teach in your style, you can now go home and ask ChatGPT to teach you another way. My kids use ChatGPT to quiz them before a test; you can use audio mode or chat mode, and it's better than cue cards. My daughter took home a Shakespeare book, took a picture of the page she didn't understand, and ChatGPT explained it to her way better than I would have. Every child in this world has a personalized tutor that can teach them in the way they best learn: visually, over audio, reading. We have a platform that can test you, that can quiz you. I think it's really an amplifier of agency. The kids who have agency, who have aspirations to learn something,

  7. 48:4652:05

    AI in education

    1. BT

      I think you have the best combination of every teacher you've ever had in these models, and you can use it. So with my kids: my oldest daughter learned how to code and was making a website, and every time she had a question for me, I would just make her use ChatGPT. Not because I was trying to be an obnoxious father, but because she needs to learn to use this tool, because it's amazing. So I really am trying to have them learn how to use it constructively in their lives. All that said, I feel a ton of empathy for public school teachers right now. It's very hard, because the technology is moving faster than our educational system, and particularly as it relates to evaluation, it's really challenging for teachers. And I worry, because these technologies amplify agency, that the opposite can also be true: if you're a student trying not to learn something, these tools probably provide a lot of mechanisms to avoid it as well. So there's a challenge for parents and teachers, and I think we're gonna end up with a bumpy handful of years here. But I brought up the AP calculus exam because, obviously, a graphing calculator is not ChatGPT, don't get me wrong, but we've been able to figure out how to conform homework, in-class learning, and tests to the technologies available to us fairly successfully to date, and I'm fairly confident we'll figure it out. And on the much more positive side (I went to public schools; I don't know if you did too), you end up with some pretty bad teachers at times, and now you have an outlet. You don't need to be the rich kid who can afford a tutor anymore to get tutoring.
If you're a kid who excels in math and your school doesn't have advanced statistics classes, well, now you do. So I think this is an incredibly democratizing force for kids who have agency, and that's very exciting. I'm hopeful there's an 11-year-old right now who's going to start a really amazing company ten years from now, with ChatGPT as the primary tutor that led to that outcome, and I think that's pretty cool.

    2. LR

      I have a two-year-old, and it feels like there's a new set of milestones: when to give him a phone, when to give him Snapchat or whatever kids use these days, and then when to give him his first ChatGPT account. (laughs) Uh oh. I wonder how soon that's supposed to happen.

    3. BT

      I think ChatGPT... my personal take, 'cause it's different from the former two...

    4. LR

      Yeah.

    5. BT

      I don't think mobile phones are great in school or great for kids, and I personally advocate for waiting a long time. But I think ChatGPT is more like Google Search. It's one thing to have a device in your pocket that's addictive and has push notifications; it's another thing to use AI to learn. So I think the two are different. I really think of AI fundamentally as a utility. Not a lot of parents before ChatGPT asked, "When should I let my kid use Google Search?" That's a different type of tool, and that's the way I think about these technologies.

    6. LR

      And so is the form factor for your kids like an iPad or a laptop or something?

    7. BT

      Yeah, they use the computer on the desk, that type of thing.

    8. LR

      Got it. All right. Good tips. This is good for me to learn all these things as my kid

  8. 52:051:04:38

    Business strategies in the AI market

    1. LR

      ages. Okay, I'm gonna zoom out, and let's talk about business strategy and AI. One of the biggest questions a lot of founders think about these days is, "Where should I build? What will the foundation model companies not squash and do themselves?" Being someone building a very successful AI business and also sitting on the board of OpenAI, you have a really unique perspective on what's probably a good idea and what's probably not. How do you think the AI market is gonna play out, and where do you think founders should focus, and what should they avoid?

    2. BT

      I think there are three segments of the AI market that will end up fairly meaningful, and I'll end with how I think it plays out. First is the frontier model market, or foundation model market. I think this will end up with a small handful of hyperscalers and really big labs, just like the cloud infrastructure-as-a-service market. The reason is that creating a frontier model is entirely a function of CapEx, and you need a company with huge CapEx capacity to build one of these models. Almost all of the startups that tried to do this have already been consolidated: Inflection, Adept, Character, and others. It just doesn't appear to be a viable business model for a startup, because of the amount of CapEx required. There's not enough fundraising runway to get to escape velocity, and the models deteriorate in value fairly quickly as an asset class, so you need a lot of scale to make a return on the investment for a model that deteriorates in value so quickly. So my take is that probably no entrepreneur should build a frontier model.

    3. LR

      Unless you're Elon.

    4. BT

      Yeah, no, he's not... he's different, right?

    5. LR

      (laughs)

    6. BT

      And he has the capacity to raise billions in capital, and my guess is most of your other listeners don't. He's the greatest of all time for a reason, and he's different; you don't compare yourself to him. The other part of the market is the tooling. There are a lot of folks selling pickaxes in the gold rush: data labeling services, data platforms, eval tools, more specialized models. ElevenLabs, for example, has a great set of voice models that a lot of companies use that are really high quality. It's sort of: if you're trying to be successful in AI, what are the different tools and services you need? There is some risk in the tooling market, because it's pretty close to the sun. If you look at the infrastructure-as-a-service market and the cloud tooling market (Confluent, Databricks, Snowflake), Amazon and Azure and others have competing products in those areas, because they're very adjacent to the infrastructure itself, and every infrastructure provider is trying to differentiate by moving up the stack, and you're right there. So there are some really meaningful companies, as I mentioned, like Snowflake, Databricks, and Confluent, but a lot of others were obviated by technology from the infrastructure providers themselves. Those companies are probably most at risk of a developer day from one of the big foundation model companies releasing exactly what they do. So there are probably a lot of people who need your tool, but the question is when, not if, one of these large infrastructure providers introduces a competitor, why will people continue to choose you? It's a good market, but it's a little close to the sun, as I said.
And then there's the applied AI market. I think this will play out for companies that build agents. The agent is the new app, so I think that's going to be the product form factor. There are companies like Sierra: we help companies build agents to answer the phone or the chat for customer experience and customer service. There are companies like Harvey that make agents for the legal and paralegal profession, antitrust reviews, reviewing contracts, et cetera. There are companies that do content marketing, companies that do supply chain analysis. I think this is like the software-as-a-service market. They'll probably be higher-margin companies, because you're selling something that achieves a business outcome as opposed to a byproduct of the models themselves. They will almost certainly pay taxes down to the model providers, which is why those model providers will end up at extremely large scale, but probably slightly lower margin. And the market for them will probably be less technical. If you think about the purest form of software as a service, you don't ask, "What database do you use?" It's really about feature and function, and I think that's where agents will go: more about product than technology over time. Going back to my metaphor: in 1998, when Marc and Parker started Salesforce, just getting that database running in the cloud was a technical achievement. Nowadays no one asks about that, because you can spin up a database in AWS or Azure, no problem. Today, orchestrating an agentic process on top of the models sounds really fancy, and it's really hard and all that.
You know, I'm pretty sure that's gonna be easy in three or four years, just as the technology improves. And so over time, you ask, "What is an agent company?" Well, it looks a little bit more like software as a service. You're gonna talk a little less about how you deal with the models, in the same way that in modern SaaS few people ask what database you use. But you'll probably ask a lot about the workflows and what business outcomes you're driving. Are you generating leads for a sales team? Are you minimizing your procurement spend? Whatever value you're providing, it's going to slowly evolve towards that. I'm very excited. I don't think startups should probably build foundation models. I mean, you can shoot your shot; if you have a vision for the future, go for it, but I think it's probably a challenging market that's already consolidated. I'm very excited about the other two markets. I'm particularly excited, as building agents becomes easier, to see a lot of long-tail agent companies come out. I was looking at a website for the top software companies in the stock market, and obviously the top five are the big ones like Microsoft, Amazon, Google. But the next 50 are all SaaS companies, and some of them are very exciting, some of them are super boring, but this is how the software market has evolved. And I think we're gonna see something similar with agents. It's not just gonna be these huge markets like the ones we're in, like customer service and software engineering. It's gonna be a lot of things where people are spending a lot of time and resources that an agent can just solve, but it requires an entrepreneur who actually understands that business problem deeply.
And I think that's where a lot of the value is gonna be unlocked in the AI market.

    7. LR

      That is incredibly helpful. This makes me think: I had Marc Benioff on the podcast, you guys were co-CEOs, and he was extremely agent-pilled (laughs). All he wanted to talk about was Agentforce. Clearly you are also very agent-pilled. What is it that you think-

    8. BT

      I've never heard the term agent-pilled. Hold on, I'm gonna use that one, so...

    9. LR

      (laughs) Uh, clearly you guys saw something that was just like, "Okay, we need to go all in on agents. This is the future." What is it you think people are missing about why this is such a critical change in the way software is gonna work? What are people not seeing?

    10. BT

      If you talk to an economist like Larry Summers, who's on the OpenAI board with me, they'll talk about what the value of technology is: it helps drive productivity in the economy. One of the big jumps in productivity in the economy was in the '90s, and a lot of folks I talk to think it was actually that very first wave of computing, where people made ERP systems and just put accounting into computers and databases, even mainframes, and then the PC era, because it was such a huge step up. Just imagine the ledgers of numbers that you'd have for a large multinational company before; it truly transformed departments. I'll give you a little toy example. My dad just retired; he was a mechanical engineer. He was talking about when he first started his career, in the late '70s, and he went into a mechanical engineering firm, and the majority of the firm were draftspeople. Basically, you'd take an engineering design and draft it from all the different vantage points and for all the different floors, to give to the contractor to do the thing. Now there are zero draftspeople at his company. You just make the design in, first, AutoCAD and now Revit; it's a 3D model, and the drafting has actually been eliminated. It's just not a thing one needs to do anymore. It's just a design. That's true productivity gains, right? The job of the mechanical engineering firm was to do a design. 
The drafting was sort of this necessary output for the contractor, but it wasn't really adding value; it was just part of the supply chain. If you look at the history of the software industry from the PC on, there have been meaningful productivity gains, but just not nearly as meaningful as that first huge jump. I'm not smart enough to know exactly why, but it is interesting: the promise of productivity gains from technology hasn't been as realized, I think, as some people thought. I think agents will truly start to bend the curve again, like we did in the very early days of computing, because software is going from helping an individual be slightly more productive to actually accomplishing a job autonomously. And as a consequence, just like you don't need draftspeople in a mechanical engineering firm, you just won't need someone doing that thing anymore. It means they can do something else that's higher leverage and more productive, and a smaller group of people can accomplish more and truly drive productivity gains in the economy. And if you've ever sold enterprise software, you end up in these discussions as a vendor where you'll have a value conversation with the customer, and you'll do these somewhat convoluted things. Say you're selling a sales tool: "Okay, well, if every salesperson sells 5% more, da-da-da-da-da, you should pay us a million dollars." It's roughly that conversation. And it's so unattributable. It's why it's so hard to sell productivity software, which I learned (laughs) the hard way. It's just hard to know: what's the value of making everyone 10% more productive? Did you actually make them 10% more productive, or did something else change? 
You don't really know all these things. But now, with an agent actually accomplishing a job, not only is it truly driving productivity in a very real way, but it's measurable as well. So all those things combined mean I think this is actually a step change in how we think about software: it does a job autonomously, which is a more self-evident productivity driver, and it's measurable, so people value it differently as well, which is why I also believe in outcomes-based pricing for software. And all of that combined, to me, feels as significant as the cloud, or more, technologically, in terms of how it transforms the business model of the software industry. There's gonna be a before and after. I don't know how many people still sell perpetually licensed on-premises software, but it's de minimis at this point. I think we're gonna go through a similar transition. The whole market is gonna go towards agents. I think the whole market is going to go towards outcomes-based pricing. Not because it's the only way, but the market is gonna pull everyone there, because it's just so obviously the correct way to build and sell software.

    11. LR

      Let me pull on that last thread. We had Madhavan on the podcast recently, a pricing expert, a legend, the author of Monetizing Innovation, and he talked about pricing strategy for AI companies. He was very much in your camp: if you can, you need to price your product as an outcome-based product. And the criteria he used are exactly what you shared: you can do that if you can attribute the impact and it's autonomous, running on its own. And he actually used Sierra as one of the shining examples of this being successful.

  9. 1:04:381:09:15

    Outcome-based pricing in AI

    1. LR

      Can you just briefly just explain a little bit what is outcome-based pricing for people that haven't heard this term before? And then just how does it work for Sierra, to give an example.

    2. BT

      Yeah, I'll start with the example and then I'll broaden it. Um, so-

    3. LR

      Awesome.

    4. BT

      ... uh, at Sierra we help companies make customer-facing AI agents, primarily for customer service, but more broadly for customer experience. So if you have a problem with your SiriusXM radio, you'll call or chat with Harmony, their AI agent. If you have ADT home security and your alarm doesn't work, you can chat with their AI agent. Sonos speakers, a lot of different consumer brands. And if you think about running a call center, there's a cost for every phone call that you take. Most of it is labor cost. Let's just say a typical phone call is anywhere between $10 and $20 US. Some of it's software, some of it's telephony, but a lot of it is just the hourly wage of the person answering the phone. So if an AI agent can take that call and solve it, that is, in the industry, often called a call deflection or a containment. And that essentially means you saved, call it, $15, because you didn't have to have someone pick up the phone. So in our industry, basically, we say, "Hey, if the AI agent solves the customer's problem, they're happy with it, and you didn't have to pick up the phone, there's a pre-negotiated rate for that." We call it resolution-based. There are other outcomes as well. We have some sales agents being paid a sales commission, believe it or not. You know, we do. We-

    5. LR

      Wow. I didn't know that.

    6. BT

      ... really think of our agents as truly customer experience, like the concierge for your brand.

    7. LR

      Mm-hmm.

    8. BT

      And we wanna make sure that our business model is aligned with our customers' business model. As you said, these agents need to be autonomous and the outcome has to be measurable. That's not always possible, but I think it's broadly possible. And what's really neat about it is, if you talk to any CFO or head of procurement about their big vendors, they look at the bill of materials and it's overwhelming, and it's impossible to know if you're getting the value that you hoped for from that contract. I think consumption-based pricing, which was popular particularly in the infrastructure space, is closer to it, but I'm not sure a token is actually a good measure of value from AI either. I always use the analogy: right now most of the coding agents are priced per token or per utilization, but there's this famous story of an Apple engineer who had a bad manager who made everyone report how many lines of code they wrote every day, which every engineer in the world knows is an idiotic way to measure productivity. He famously went in with a report that had a negative number; I think he did a big refactoring and deleted a bunch. It was his way of saying, "Fuck you," to the man. I think tokens are similar. Yeah, you used a lot of tokens. Good for you. Did it produce a pull request that was good? And I think that's the whole point of all this. I think there's a huge difference between outcomes-based pricing and usage-based pricing, because, especially in AI, they're not necessarily even correlated. You could have a long phone call, not solve the customer's problem, and they give you a negative review online and call the call center again. All that effort was for nothing. In fact, you might have added negative value (laughs). And so I am a huge believer in this. 
And what's fun about it is, it really just aligns incentives. I think every technology company aspires to be a partner, not a vendor, and I think at Sierra we are truly a partner to every single one of our customers, because we're all aligned on what we want to achieve. I think that is really where the software industry should go. It requires a different shape of company. You have to be able to help your customers achieve those outcomes. You can't just throw software at the wall, 'cause you'll never get paid (laughs) if it doesn't work. Your orientation becomes so extremely customer-centric when you do this the right way. I think it's just a better version of the software industry. So I think it's right from first principles, it's right for procurement partners, and I think it's right for the world.
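The seat-based, usage-based, and outcome-based models Taylor contrasts can be sketched in a few lines of Python. All the figures below, the $5-per-resolution rate, the $15 cost of a human-handled call, and the call volumes, are illustrative assumptions, not Sierra's actual pricing.

```python
# Hypothetical comparison of three software pricing models.
# Every rate and volume here is invented for illustration.

def seat_based(seats: int, price_per_seat: float) -> float:
    """Classic SaaS: pay per licensed user, regardless of value delivered."""
    return seats * price_per_seat

def usage_based(tokens_used: int, price_per_1k_tokens: float) -> float:
    """Consumption pricing: cost tracks activity, not outcomes."""
    return tokens_used / 1000 * price_per_1k_tokens

def outcome_based(resolutions: int, rate_per_resolution: float) -> float:
    """Outcomes pricing: pay only for calls the agent fully resolved."""
    return resolutions * rate_per_resolution

# A month with 10,000 calls, a 65% autonomous resolution rate, a
# pre-negotiated $5 per resolution, vs. ~$15 per human-handled call:
calls, resolution_rate = 10_000, 0.65
resolved = int(calls * resolution_rate)
bill = outcome_based(resolved, 5.0)
savings = resolved * 15.0 - bill
print(f"bill=${bill:,.0f}, customer saves ${savings:,.0f}")
# -> bill=$32,500, customer saves $65,000
```

Note that only the last function's input is an outcome: a long, token-heavy call that fails to resolve the problem inflates the usage-based bill but adds nothing to the outcome-based one, which is exactly the incentive alignment Taylor is describing.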

    9. LR

      We've been chatting a little bit about productivity gains. There's a lot of skepticism in the headlines these days of just, like, what is AI actually doing? Is it actually helping people be more productive? There was a recent study, actually, I don't know if you saw it, showing engineers were less productive with AI, because it was just sending them in different directions and they had to research what was going wrong. So I think CX is a really good example where you clearly are seeing gains. Are you seeing actual gains at your company, or any other company you work with, outside of CX, in terms of productivity, where it's like, clearly, yes, this is working and a huge deal?

  10. 1:09:151:17:35

    Productivity gains and AI

    1. LR

    2. BT

      I'm extremely bullish on the productivity gains from AI, but I do think the tools and products right now are somewhat immature, and it's quite counterintuitive. So, for example, almost every software engineering firm I know uses something like Cursor to help their software engineers. Most people use Cursor right now as a kind of coding autocomplete. They have a lot of agentic solutions, and there's a lot coming: OpenAI has Codex, and Claude has... I can't remember the Anthropic product. So there are lots of agentic coding agents coming as well. One of the interesting things, because the technology is sort of immature, is that the code it produces often has problems, so there are a lot of people working on how to actually realize those productivity gains. Because as any engineer who's written a lot of code will tell you, it's pretty easy to look at and edit and fix code you wrote. Reviewing other people's code, particularly finding a subtle logical error in someone else's code, is actually really hard. It's much harder than editing code that you wrote yourself. So if the code produced by a coding agent is often incorrect, it can actually take a lot of cognitive load and time to fix. And if you end up shipping lots of issues to your customers, you could end up producing a lot of features but actually mucking up the machine a little bit, and having something that's not ideal. There are a couple of techniques that I think are interesting. First, there are a lot of AI startups now working on things like code review. I think this idea of self-reflection in agents is really important. Having AI supervise the AI is actually very effective. Just think about it this way: if you produce an AI agent that's right 90% of the time, that's not that great. 
But how hard would it be to make another AI agent that finds the errors the other 10% of the time? That might be a tractable problem. And if that thing's right 90% of the time, just for argument's sake, you can wire those things together and have something that's right 99% of the time. It's just a math problem. It turns out you can make something to generate code and something to review code, and you're essentially using compute for cognitive capacity. You can layer on more layers of cognition and reasoning and produce things that are increasingly robust. So I'm very excited about that. The other thing, though, is root cause analysis. We have an engineer at Sierra who exclusively focuses on the Model Context Protocol server serving our Cursor instance. And our whole philosophy is, if Cursor generated something incorrect, rather than just fixing it, try to root-cause it. Try to get it so that the next time, Cursor will produce the correct code. Essentially, it's context engineering: what context did Cursor not have that would have been necessary to produce the right outcome? So I think people who are trying to get productivity gains in departments like software engineering need to stop waiting for the models to magically work if they want to see the gains now. You really have to create root cause analysis and systems and say, "How do we go root-cause every bad line of code and actually give the right context and produce the right system so the models can do it today?" Over time that'll probably be less necessary, and you'll have less context engineering to do, but you really have to think of this as a system. And I think people are sort of waiting for the models to just magically get better, and I'm like, "Well, that'll happen eventually." 
But if you want the gains now, you gotta put in the work. That's essentially why applied AI companies exist. The work is non-trivial, but you can do it. And so, for customers using platforms like Sierra, yeah, AI agents aren't perfect, but we're creating a system that lets customers create a virtuous cycle of improvement. If you want to go from a 65% automated resolution rate to 75%, we have a billion tools to let AI help you do that: identify opportunities for improvement, figure out why people are frustrated, figure out what new capabilities we can add to the agent to improve the resolution rate. You sort of let AI put the needles at the top of the haystack on your behalf, and I think that's really the way to optimize these systems.
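The "AI supervising AI" arithmetic Taylor walks through is simple compounding of error rates. The 90% figures are his hypotheticals, and the key assumption, stated in the comment, is that the reviewer's misses are independent of the generator's mistakes; correlated failures would make the combined system less reliable than this sketch suggests.

```python
# Taylor's generator + reviewer arithmetic: an error only ships when
# BOTH stages fail, so the residual error rate is the product of the
# two failure rates. Assumes the failures are independent.

def combined_error_rate(generator_accuracy: float, reviewer_catch_rate: float) -> float:
    generator_errors = 1 - generator_accuracy      # e.g. 10% of outputs are wrong
    reviewer_misses = 1 - reviewer_catch_rate      # e.g. reviewer misses 10% of those
    return generator_errors * reviewer_misses

# 90%-accurate generator + reviewer that catches 90% of its errors:
err = combined_error_rate(0.90, 0.90)
print(f"residual error: {err:.0%}")  # 10% of 10%, i.e. the system is ~99% right
```

Each extra independent review layer multiplies in another small failure rate, which is what Taylor means by trading compute for cognitive capacity.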

    3. LR

      I've never heard of this technique of improving Cursor by adding additional context. What's the actual way of doing that? You build an MCP server that everything runs through, or is it, like, you add Cursor rules? What's the actual approach there?

    4. BT

      Uh, I'm probably out of my depth here, but it's essentially-

    5. LR

      Oh, okay.

    6. BT

      ... MCP, but it's essentially, you know-

    7. LR

      Huh, wow.

    8. BT

      ... because that's how you provide context to Cursor, and I think that almost always, when you have a good model making a poor decision, it's a lack of context. And so you really want to find the intersection of your particular product and codebase with the context available to these coding agents and systems, and fix it at the root. That's sort of the principle here.
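The root-cause loop described above, store the missing context when generated code goes wrong so the next attempt gets it up front, can be sketched without any particular tooling. This is a stdlib-only toy, not the real MCP SDK or Cursor's rules format; a real setup would expose something like `context_for` as an MCP tool, and every module name and note below is invented.

```python
# Toy illustration of "context engineering": a store of codebase-specific
# notes that a coding agent queries before generating code, plus a
# root-cause step that records whatever context was missing when the
# agent got something wrong. All names and notes are hypothetical.

CONTEXT_NOTES: dict[str, list[str]] = {
    "billing": [
        "All money amounts are integer cents, never floats.",
        "Use create_invoice(); direct writes to the invoices table are forbidden.",
    ],
}

def context_for(module: str) -> str:
    """Return the notes an agent should see before touching this module."""
    return "\n".join(CONTEXT_NOTES.get(module, ["(no notes recorded)"]))

def record_root_cause(module: str, lesson: str) -> None:
    """After fixing bad generated code, store the missing context so the
    next generation attempt is given it up front."""
    CONTEXT_NOTES.setdefault(module, []).append(lesson)

# The loop: agent produced non-idempotent invoice code; instead of just
# fixing the patch, record why it happened.
record_root_cause("billing", "Invoices must be idempotent on retry.")
print(context_for("billing"))
```

The point of the pattern is that the fix lands in the context store, not only in the code, so the same class of error should not recur.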

Episode duration: 1:28:57

Transcript of episode qImgGtnNbx0
