The Twenty Minute VC

Bret Taylor: Why Pre-Training is for Morons & Companies Will Build Their Own Software | E1209

Bret Taylor is CEO and Co-Founder of Sierra, a conversational AI platform for businesses. Previously, he served as Co-CEO of Salesforce. Prior to Salesforce, Bret founded Quip and was CTO of Facebook. He started his career at Google, where he co-created Google Maps. Bret serves on the board of OpenAI.

-----------------------------------------------

Timestamps:

(00:00) Intro
(06:46) Are We in Peak AI?
(12:48) The Threat of AI Models Replacing Traditional Software
(18:14) AI Services Companies & Their Role in Next-Gen Applications
(29:05) Balancing AGI Pursuit and Product Development
(34:04) Sustainable AI Business Models Amidst Commoditization & High Costs
(41:35) Is There Ever a Stop to the Escalating Costs in AI Development?
(44:36) AI Agents & the Future: The Decision to Build Sierra
(54:38) Unanticipated Challenges in Building Sierra
(55:38) Transitioning from Software Rules to Guardrails
(01:04:37) Content Verification & Trust: Concerns in a Misinformation-Driven Era
(01:08:55) Quick-Fire Round

-----------------------------------------------

In Today's Discussion with Bret Taylor:

1. The Biggest Misconceptions About AI Today: Does Bret believe we are in an AI bubble or not? Why does Bret believe it is BS that companies will all use AI to build their own software? What does no one realise about the cost of compute today in a world of AI?

2. Foundation Models: The Fastest Depreciating Asset in History? As a board member of OpenAI, does Bret agree that foundation models are the fastest depreciating asset in history? Will every application be subsumed by foundation models? What will be standalone? How does Bret think about the price dumping we are seeing in the foundation model landscape? Does Bret believe we will continue to see small foundation model companies (Character, Adept, Inflection) be acquired by larger incumbents?

3. The Biggest Opportunity in AI Today: The Death of the Phone + Website: What does Bret believe are the biggest opportunities in the application layer of AI today? Why does Bret put forward the case that we will continue to see the role of the phone reduce in consumer lives? How does AI make that happen? What does Bret mean when he says we are moving from a world of software rules to guardrails? What does AI mean for the future of websites? How does Bret expect consumers to interact with their favourite brands in 10 years?

4. Bret Taylor: Ask Me Anything: Zuck, Leadership, Fundraising: Bret has worked with Zuck, Tobi @ Shopify, Marc Benioff and more, what are his biggest lessons from each of them on great leadership? How did Bret come to choose Peter @ Benchmark to lead his first round? What advice does Bret have to other VCs on how to be a great VC? Bret is on the board of OpenAI, what have been his biggest lessons from OpenAI on what it takes to be a great board member?

-----------------------------------------------

Subscribe on Spotify: https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465
Follow Harry Stebbings on Twitter: https://twitter.com/HarryStebbings
Follow Bret Taylor on Twitter: https://twitter.com/btaylor
Follow 20VC on Instagram: https://www.instagram.com/20vchq
Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok
Visit our Website: https://www.20vc.com
Subscribe to our Newsletter: https://www.thetwentyminutevc.com/contact

-----------------------------------------------

#20vc #harrystebbings #brettaylor #sierra #openai #ai #venturecapital #founder #boardmember #fundraising #zuckerberg

Bret Taylor (guest) · Harry Stebbings (host)
Oct 2, 2024 · 1h 16m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–6:46

    Intro

    1. BT

      (instrumental music) I think we are in a bubble. I am inherently skeptical of companies doing pre-training. Unless you are a AGI research lab, (laughs) doing pre-training on a model, I believe, is just burning capital. Software is like a lawn, it needs to be tended to. It's not like you write software once and it just works forever.

    2. HS

      Ready to go? (instrumental music) Bret, I am so excited for this, my friend. I've been a fan from afar for a long time. You've had such an incredible career, so thank you so much for joining me.

    3. BT

      No, thanks for having me.

    4. HS

      I know this one's a little bit off-book, 'cause it's not even on the schedule. So you're like, you're breaking the rules from round one. But when I go through the different achievements you have, eh, it really is incredible. When you were young, did you know that you were gonna be successful? Did you have that innate feeling?

    5. BT

      I don't think so. You know, when I was young, first I wanted to be Indiana Jones, which I know is not a job, but to me, he was by far the coolest example of an adult that I'd ever seen. By the time that I, uh, you know, was in school and started thinking about a job, I wanted... I thought I wanted to be an attorney, um, in high school. And, uh, I'm happy to tell this story. It's actually kind of an interesting story, but I ended up getting a job at a gas station, and then, uh, sort of hustling my way into making a website for a mechanic that was nearby. Um, I was getting paid $4.25 at the gas station an hour, which was minimum wage at the time, and, uh, ended up getting paid $400 for the website. So I quit the gas station (laughs) job the next day, and ended up making websites for a lot of local businesses in my, my hometown. Most of those websites endured for decades, you know, because it turns out if you're a florist, it's not like you're actively SEOing your website, so... (laughs) You know, my, my fingerprints on the internet in 1996 and '97 lasted for longer than you'd expect. And even when I went to Stanford, I, um, I think if you'd met me that summer before, I probably would've said, "I probably want to be a lawyer." But then the combination of my accidental entrepreneurial, um, experience, plus going to Stanford in the dot-com bubble, I... Uh, my first quarter at Stanford, I took a class called CS106A, which was sort of the intro class, and the rest is history. I, I was so obsessed with software at that point. I would do it in my spare time. Um, it had nothing to do with school. I was just totally obsessed with the, the craft.

    6. HS

      Do you think people are born entrepreneurs, or do you think it can be learned?

    7. BT

      I think most things can be learned. Uh, I've, through, certainly through my career, I have... Most of the time when I've thought of something as an innate skill, um, I've later recalibrated and felt that with, like, enough focus, one can, um, improve at most things. I, I could never become, like, an Olympic track star, nor could I ever, you know, win the Fields Medal. So I, I, obviously I don't mean to trivialize true innate ability. But on skills like public speaking or leadership, uh, or, um, even things that aren't quite Fields Medal, like becoming, you know, good at finance, I think most people can whi- with focus. The thing that is unusual about being an entrepreneur is, um, how intense it is, and I do think there's a certain personality type that is conducive to that. You know, I think, uh, it's hard to be an entrepreneur if you're prone to anxiety, because everything's on fire all the time. You know, that's just the nature of the business. Um, and, uh, as a s- as a consequence, it probably... There's certainly some nature, not just nurture there. But I have met folks who, uh, you know, might not have identified as an entrepreneur earlier in their career who developed the confidence in their own, you know, resolve, uh, through early parts of their career and end up great. And it's really interesting too, if you look at the enterprise software industry versus, uh, consumer, you know, a lot of the greatest enterprise software companies were started by entrepreneurs later in their career, um, which is, I think, really interesting as well. I mean, if you look at PeopleSoft, and Aneel, I guess, was the second, you know, there's so many great examples. And so, I, I don't think there's a reductive stereotype of entrepreneur that resonates with me. 
Um, you know, I, I really think it's one of the defining characteristics of the United States and Silicon Valley, and I have a sense as like, we should keep the, the, the door with the open sign brightly lit for anyone who wants to come, you know, forge their path.

    8. HS

      Bret, what do most people think is an innate skill that actually can be learnt? So like, for me, people think, "Oh, I could never be an interviewer or do content." And I just say, "Listen, I was shit. I've just done 3,000 over 10 years." I learned to be a little bit better, hopefully, so I'm now tolerable. But it's just like going to the gym. What do you think everyone thinks is an innate skill, but actually can be learned?

    9. BT

      Leadership. I actually have made so many... I mean, you've probably heard the phrase, "Oh, this person's such a natural leader." And I... Clearly, if you're a sociopath, you probably won't be a great leader, you know? But if you have decent amount of EQ, uh, the innate skills of becoming a leader at different scales of organizations is absolutely something I believe can be learned. It's interesting, because if you meet, uh, people who have served in the Armed Forces, uh, in the Western world, most of the military treats leadership as a craft that you learn. Um, and it's in part because of your, you know, uh, growing through the ranks of, say, the Army. You know, at each step, you're managing larger and larger, uh, groups of, of soldiers, and, you know, they, they've formalized a lot of, like, principles of leadership. In contrast, I, I think the, you know, if you go into most large companies (laughs) and you go into, you know, a promotion discussion, you know, corporate promotion discussion, like, that person's just not a natural leader. Not... That person needs to, you know, uh, train or learn these skills. And I think we've, um... Well, it's not true of all of corporate America. Um, I do think it's one of those things that I think, you know, companies should invest in more, uh, which is, like, formal training of characteristics of leadership, uh, how to motivate, you know, uh, people who are different than you, how to motivate... I mean, if you've ever managed a researcher versus a sales leader-... the conversations you need to have to align and motivate are pretty, pretty damn different. And, and I think it's something that is fundamentally a skill. Uh, like any skill, uh, some people are more naturally, um, prone to it than others. (laughs) Uh, but I think it's something that

  2. 6:46–12:48

    Are We in Peak AI?

    1. BT

      can be learned.

    2. HS

      I do have to start though with some semblance of structure. You know, we've seen some mega rounds go down in the last few months from some of the biggest people in AI. Ilya recently raised a billion-dollar seed round. I just wanna start on the foundations of where we're at.

    3. BT

      Is that the first billion-dollar seed round, by the way? Of all time?

    4. HS

      Go on. Who was the first?

    5. BT

      No, that's what I was asking. I think it is the first, right?

    6. HS

      It must be.

    7. BT

      There's no

    8. HS

      Yeah, yeah, no, absolutely.

    9. BT

      There must be, right? Yeah.

    10. HS

      I, I, I, I wanted to think, like, if Elon did another company. I mean, sure.

    11. BT

      Yeah. Yeah, it might be a ten billion dollar seed round.

    12. HS

      This, this is it. Um, I'm, I'm here, Elon.

    13. BT

      (laughs)

    14. HS

      (laughs) Ready to write the check. Um, my question to you is, are we in peak AI, and is this the ultimate sign of a bubble?

    15. BT

      I think we are in a bubble, but I think, um, bubbles have different shapes. Um, and there's a Mark Twain quote that history doesn't repeat itself, but it rhymes. And I think the AI bubble will rhyme with the dot-com bubble. And I believe, with the benefit of hindsight, most of the excess of the dot-com bubble might have been justified. Um, if you look at the top market cap companies in the world, they include Amazon, they include Google. Um, you know, if you look at, uh, across segments, it's, uh, PayPal, eBay. If you look at the, you know, enterprise software companies like Salesforce, started in 1998, if I'm remembering correctly. All of these companies were started in the dot-com bubble, and I think people mentally and emotionally associate the dot-com bubble with Webvan and Pets.com. But actually, if you look at the most frothy statements about the dot-com bubble and the transformation of the economy, and then you fast forward almost 30 years from that point, maybe it was true? Uh, you know, when you look at how much Amazon disrupted commerce, how much, uh, you know, uh, consumer payments have, have tra- been transformed by digital technology, it took a few waves of, of technology like smartphones and NFC to really, you know, fully, um, realize that vision. And, you know, a huge percentage of, you know, the gains in the stock market over the past 30 years have more or less been these digital companies created in the dot-com bubble. And so, I haven't done the math on, you know, how much money was burned in that period, but I think that doesn't mean that the excitement around the impact of the internet on the economy was false. So, I think the same thing is likely to happen in AI. 
We will look back and laugh at some of the excess, but I'm confident we will have, you know, a, a, a brand defining, likely trillion dollar consumer company come out of this, 10 plus enterprise software companies that are enduring, you know, public companies coming out of this that are native to these new technologies. So, I think it is both a bubble. I think there are areas of excess, just sort of like there were areas of excess in 1997 and 1998. Um, but I think it would be dangerous to dismiss, uh, a bubble as, as strictly excess. Uh, and in fact, there'll probably be outsized returns within it.

    16. HS

      Can I ask, is it not different in the way that the risk was priced in? And what I mean by that is, Salesforce's first rounds were not done at billion dollar valuations.

    17. BT

      (laughs)

    18. HS

      Amazon's was not either.

    19. BT

      Yeah.

    20. HS

      Uh, w- you know, the companies of 1998 to 2002 were priced n- not insanely. When you have x.AI raising 18 billion, I mean these are potential trillion dollar companies where, with dilution, you'll get less than 100X.

    21. BT

      I think it's a reasonable point, and it's, as a venture capitalist, makes a ton of sense you're thinking about it that way. I, you know, I, I'm more thinking about it as the impact on the economy. So, you know, I think we, we're in a world where there's a lot more capital than there was, there's a lot more, um, I'd say structure around how people invest in technology companies, as y- you talked about the private equity surge over the past, you know, 20, 30 years. It doesn't, uh, surprise me that, you know, given the amount of capital available, valuations are sort of, you know, markedly different than they, they perhaps were. Um, though I think it seemed excessive back then, too, right? I don't think people could contemplate a trillion dollar company in 1998, rationally anyway. Um, so I, I think, you know, it's pr- y- what you're saying is reasonable. I also think that, from my vantage point, I'm not investing, I am creating. And, you know, my perspective is like where are consumer behaviors going? How will the automation implied by large language models and agents change productivity, change the structure of companies, change the economy, and how do you define a generational company based on those trends? It's up to you to figure out the, the nuances of whom to invest in and why. (laughs)

    22. HS

      (laughs)

    23. BT

      I'm happy to give my perspective, but I think, um, my, but it, you know, for what it's worth, you know, for companies that are pursuing artificial general intelligence, it's hard to figure out like what's the valuation of a company that creates that? It's, the numbers might be insane, so maybe it's completely rational. Um, and I, I'm not the one making, writing those checks, so, you know, um, but I, I, I also don't look at it dismissively. You know, I look at it and say, "There's probably a case to be made." Um, I'm not sure I would write all those checks. But it's also s- I wouldn't say it's entirely irrational either, just because I do think this technology in current form...... has a ton of value. But particularly, as you project forward towards things resembling super intelligence or general intelligence, uh, there's so much value in, in, uh, platforms like that. Um, it's a very c- unusual investment, but it might

  3. 12:48–18:14

    The Threat of AI Models Replacing Traditional Software

    1. BT

      not be irrational.

    2. HS

      Before we discuss, I, I love that also, a venture investor thinks multiples, an entrepreneur is like generational defining company impact. I feel like a schoolboy who's been told off, perhaps. (laughs)

    3. BT

      (laughs) Uh, continue to do it that way.

    4. HS

      I feel terrible. I feel really guilty for that. But, uh, anyway, uh, you mentioned kind of AGI and kind of the value that could come from that, absolutely. There is kind of a step before that, though, which is, you know, the models themselves are actually so good and so advanced that they bundle all verticalized or unbundled software products, really, and subsume them, so to speak. To what extent do you think that is a threat, that everything will really just be subsumed by very sophisticated models?

    5. BT

      I don't believe that will happen, personally. Um, I, going back to, uh, analogies are dangerous, but I, I think they might be illustrative in this case. I actually think the AI market commercially will play out like the cloud market did over the past 20 years. So if you look at the cloud market, I would say there's really big, three big categories of cloud software. The first is infrastructure as a service, so Amazon Web Services, Azure, Google Cloud, services like that. There's tool makers, so, you know, Snowflake, Databricks, Datadog, you know, basically what is the software that you need? Confluent. What is the software that you need when your company is moving to the cloud? And then there's software as a service, so Salesforce, ServiceNow, Adobe, um, and the extremely long tail of solutions there. And I would say, you know, when you're talking about the companies, the public companies in the stock market in that kind of $2 billion to $20 billion range, there's a huge number of really interesting and really valuable software-as-a-service solutions. Why did that play out that way? Um, you know, one could argue and, and, you know, certainly I heard, "Isn't, isn't Salesforce just a database in the cloud?" I'm like, "Come on." You know, like, it's a solution. You know, it's a solution for sales, service, and marketing teams and, and it has a ton of value, and the same, you know, um, reductive, backhanded comment could be made of any software as a service application. And I think it's borne out by companies, CIOs, CTOs, CEOs, knowing that actually they don't want to be the one building software. They just want a solution that works. Uh, you know, software is like a lawn. It needs to be tended to. It's not like you write software once and it just works forever. 
And the total cost of ownership of building and maintain- maintaining software is so great that I think almost every company that has chosen to build their own in a, um, area of their business where there is a software-as-a-service solution available has regretted it. And you've seen this, like, secular trend towards, you know, away from build your own, uh, towards software as a service. I think the same will be true of AI. I think there's a bit of a, uh, focus right now on both the data centers and the models because it, because the future is so unclear, it is by far the clearest way to sort of invest in AI right now, is to invest at the lowest layer of the stack because you know that whatever happens on top, that, those layers will sort of collect taxes (laughs) of everyone working on AI above it. But I don't really, um, see why companies would wanna take a bag of floating point numbers and morph it into a solution themselves, because I believe the same dynamic that played out in cloud will play out in AI. You know, so at Sierra, which is my company, we make a solution. We're not doing pre-training or, you know, we're fine-tuning other people's models to, to build this solution, and we're helping companies build customer-facing agents primarily for customer service, so for companies like Sonos or SiriusXM or Chubbies. There are other companies like Harvey who are making legal agents, uh, there's companies making coding agents that are, you know, uh, essentially, you know, building software. And I think that if you are a head of a legal department or you're the CTO of a company, why would you wanna take a model and try to, you know, build all the workflows for your engineering team or take a model and say, "Okay, like let's work with our IT department and see how our partners can use this instead of a paralegal?" What you want is a push button solution that solves a problem. 
And so I think this idea that somehow the way the world wants to buy software will change because these models are really smart doesn't resonate with me. And I think the area, actually, of AI that I am most excited about, obviously everyone's excited about AGI, it's why I, I chose to work with OpenAI, but I'm really excited about applications. I think it's early there, and I think there's a bunch of companies saying, you know, uh, "We're gonna actually build a product that solves a problem. It doesn't just, you know, help with productivity, it actually sol- solves a problem, and we're going to cater that solution to a department or a buyer that isn't technical, and it's going to be magical." There's a ton of value there, and I believe that's the way most companies wanna buy software.

    6. HS

      There's a couple of things I just have to unpack there. You said about kind of companies wanting to buy solutions and the ease that they require when implementing these solutions. I actually said before, uh, on Twitter that I think AI services companies over the next three to five years will actually be the biggest winners in AI. And you've seen a lot of these consulting firms post billions in, in profit. There was one that actually had more revenue than OpenAI.

  4. 18:14–29:05

    AI Services Companies & Their Role in Next-Gen Applications

    1. HS

      Do you agree that AI services companies will be a dominant strain of this community and that they will be needed, though, for the implementation of this next generation of application layer?

    2. BT

      In the early days of technology adoption, you tend to have...... very low level platform building blocks available and quite a bit of professional services spend because there is no option other than building it yourself, so you tend to get a short-term spike in professional services spend along with some low-level building blocks. And my guess is at least some of that revenue you're describing is companies not having an out-of-the-box software as a service solution available. They see these amazing models like GPT-4 available, and if they want to apply them to their business, you know, a year and a half ago, two years ago, their only option was essentially to pay, um, one of these firms to, to do the last mile themselves. Over time, I do think that that will diminish as solutions become available that, um, have shorter and simpler implementations. I think that's what companies like mine are doing, is essentially, you know, reducing the, the last mile to actually configure the software. However, the reason I think, you know, this is nuanced, and, and you may be right, and, and actually I think it can be, uh, a lot of value that professional services firms provide is around change management.

    3. HS

      Mm-hmm.

    4. BT

      So if you manage, uh, imagine you have a contact center, uh, in the Philippines, uh, managed, you know, as a, as a BPO with one of these customers and you're migrating, you know, a huge percentage of your cases to AI, it's not just a technology change, right? It is actually a huge change in the operations of your company. Um, and then similarly, if you imagine, you know, these technologies becoming even more advanced, whether it's re-skilling your workforce or, um, actually transforming even the way an entire department operates because there's an agent that comes to, you know, comes out, that actually means you can completely restructure the way a department is run. I think one thing that software companies have always been bad at, for good reason, I don't think it's necessarily what we do, is actually helping companies manage the adoption of this technology. Uh, you know, I think that most software companies, you know, try to be trusted advisors to, uh, their companies, but at the end of the day, they're, you know, they have a vested interest in the product that they're selling, and, you know, it often helps to have a, a third party there to help you actually manage that change. So I do think that my, if I had to, you know, answer s- more succinctly with all that nuance is I think there's probably some short-term professional services spend that reflects the lack of the maturity of the AI applications market right now. Uh, you know, and I think that when there are solutions like Sierra and others for specific domains available, you shouldn't have to spend as much to deploy those effectively in your business. However, I, I think that as AI changes and, and disrupts the way companies operate, you know, I would, uh, hope that the best professional services firms have consulting arms that can help companies with that change management, and it might be compensated for in a different way. So, I think if you itemize the receipts, the revenue might change over time.

    5. HS

      As AI changes, one thing that's really striking to me is the speed of commoditization among models. Is this not the fastest technology to commoditize? I mean, every week, Bret, I see like, you know, Mistral kills it and next week Gemini kills it, next week OpenAI has crushed it. And I'm sitting here going, "F- fuck, I'm getting dizzy." Like, which one should I use? Oh my god. Clo- and then Claude comes and it's like five things you can do with Claude that you can't do with anything else. And I'm like, "Christ, I have no idea what's going on." Are they the fastest technology to commoditize?

    6. BT

      I, well, I'll start with the high level. I really like Reid Hoffman's framing of this market as foundation models and frontier models. So, foundation models are any of these large language models that aren't necessarily the best of the best or the higher p- highest parameter count, but particularly now, uh, where you have relatively low parameter count models that, uh, meet or exceed the quality of, say, GPT-3.5, uh, that market of foundational models is quite important, um, and quite commoditized. You know, I think that in that market, uh, probably if you need a model like that, you should download LLaMA (laughs) . That's, that's the answer. It's like you don't need much of a cheat sheet on that, you know, and, um, or maybe Mistral, but pick one of the open source models that are adequate and fine-tune it. And the frontier model market is a little different when you talk about this, um, you know, the, the experience you've had being dizzy using these tools. My perspective is that, uh, we've seen real leaps there. So when ChatGPT came out, that was a meaningful step function change that lasted for a while, um, and the insight around instruction tuning and the quality of sort of the GPT models after GPT-3 was pretty remarkably different. Similarly, when GPT-4 came out, I, I haven't done the math on it, but it certainly had a meaningful lead for quite a while, um, and now you're seeing a lot of, a lot of models sort of catch up to that. My sense is we see a lot of incremental improvement followed by step changes, um, in quality. But going back to the market itself, I am inherently skeptical of companies doing pre-training, uh, unless you are a AGI research lab (laughs) , um, doing pre-training on a model I believe is just burning capital. Um, it's roughly the equivalent of an entrepreneur coming to you and saying, "You know, we're building this software solution, and the first thing we're gonna do is build our data center from ha- by hand." 
(laughs) And, you know, I think, you know, for 99% of software companies, they should lease their servers from an infrastructure as a service provider, not because it's the most vertically integrated and efficient, but because it's not what their company does. Um, and similarly, as you're exploring and finding product market fit, the last thing you want to do is have a big upfront investment to build a data center. And I think that there was a number of companies that were started by incredibly talented AI researchers and, you know, step one of their product plan was build, pre-train a model, and I think for, especially with the existence of these high quality models like, you know, GPT-4o, uh, Mini that you can fine tune or, um, the open source models like LLaMA 3.1, uh, to spend capital on pre-training now unless you're one of the, uh, behemoths, um, I think is, is nonsensical.
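Bret's claim that pre-training is "just burning capital" outside the AGI labs can be made concrete with a rough back-of-envelope sketch. Everything here is an illustrative assumption, not a figure from the episode: the common FLOPs ≈ 6 × parameters × tokens approximation for transformer training, a hypothetical blended price of $15 per exaFLOP of compute, and round-number token counts.

```python
# Rough cost comparison: pre-training from scratch vs. fine-tuning.
# All numbers are illustrative assumptions, not figures from the episode.

def training_cost_usd(params: float, tokens: float, usd_per_exaflop: float = 15.0) -> float:
    """Estimate training cost using the common FLOPs ~ 6 * N * D approximation."""
    flops = 6 * params * tokens
    return flops / 1e18 * usd_per_exaflop

# Pre-training a 70B-parameter model on ~15T tokens (a LLaMA-3-scale run):
pretrain = training_cost_usd(70e9, 15e12)   # roughly $94.5M of raw compute

# Fine-tuning the same model on ~50M tokens of domain data:
finetune = training_cost_usd(70e9, 50e6)    # roughly $315 of raw compute

print(f"pre-train ~ ${pretrain:,.0f}, fine-tune ~ ${finetune:,.0f}")
print(f"ratio ~ {pretrain / finetune:,.0f}x")
```

Under these assumptions the compute bill differs by the ratio of token counts, about five orders of magnitude, which is the economic intuition behind "download LLaMA and fine-tune it" rather than pre-training your own model.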

    7. HS

      Can- can I ask, can you continue to have step function changes with every model in terms of GPT-3 to GPT-4? Like, obviously, there's GPT-5 coming next. Uh, don't worry, that's not a spoiler. It would just be a natural guess. (laughs)

    8. BT

      (laughs)

    9. HS

      Unless they're going for a radical rebrand.

    10. BT

      This could bring forward, you know, go-

    11. HS

      Don't you know I'm- I'm not that smart, Bret. I'm a VC.

    12. BT

      Yeah. Yeah.

    13. HS

      But you know, GPT-4 left me with one thing as an option. Uh, but like-

    14. BT

      Do you remember in the early 90s, it went from like Windows NT to 95 to 98 to 2000, you know, or something like... I might be mixing it up. So, you know, we could pull that out to start changing things up.

    15. HS

      Right. I was- I was born in 96. (laughs)

    16. BT

      Oh. So yeah, you should change... Yeah.

    17. HS

      Sorry. (laughs) You, uh... but I- I- I, you know, remember reading about it.

    18. BT

      (laughs)

    19. HS

      Uh, so there- there we go. Uh, but- but we can't continuously have step changes, can we? Are we at a stage where you start to see slightly diminishing returns?

    20. BT

      Those questions are distinct to me. So starting with the step ch- step function, maybe, maybe not. You know, I- I don't think it's a foregone conclusion that we'll have step function changes. I do think that, you know... I believe the most responsible way to develop AGI is responsible iterative deployment and the reason for that is I believe that as you're thinking about things like the societal impact, access to this technology, and the safety, uh, side of AGI as well that the best way we can learn about how to, you know, en- ensure that these models benefit humanity is to consistently release them, learn, uh, from those experiences on the safety side, learn about the harm, learn about really specific vulnerabilities like jailbreaking and improve it at every turn. We could end up with a plateau of progress or, as you said, diminishing returns. I... The three inputs to, you know, progress in AI are number one, data, number two, compute, number three, algorithms and methodology. Um, so if you look at the history, short history of sort of this current wave of modern AI, you know, it started, I think, with the transformers model which, you know, was a, um, paper from Google called Attention Is All You Need which changed the scale, uh, of... with which you could, um, build these models which led to, you know, many of the sort of GPT breakthroughs that- that came next. Um, you ended up with instruction tuning, uh, which was how you turned one of these models into a chat interface, which was a breakthrough as well. So, you know, given even existing data and existing compute, we have all of the best minds in computer science thinking about different techniques. It's- it's similar. There's folks even looking beyond the transformers model and things like that, so I think that there's... That's one area where you could have a- a big breakthrough. Um, you have compute and, you know, there's... 
You- you just pick up a newspaper and read about the investment in GPUs and, you know, these clusters are getting even bigger and bigger. Um, and even with the same amount of data, you know, training and... Both pre-training and post-training can have a really big impact on quality. And then on the data side, um, the, uh... there's a lot of writing about sort of running out of some of the textual data, but there's a lot of really interesting companies working on simulation, there's a lot of interest in, um, explorations and synthetic data generation. There's multimodality, so, you know, what is true of text is, you know, there's lots of video, audio, image content as well. So, you know, in any one of those, you could probably make a very rational, intellectual case that we're gonna hit a wall, but then you have the two others, and I don't think you can make the case for all three that they're all, uh, coming up on a wall, and I think like any big scientific effort, it will probably be a mix of- of progress in all of those and, um, as a consequence, I am, uh, quite optimistic, you know, in the progress of- of these models towards something that resembles artificial general intelligence, um, and, uh, and I'm excited about it.

  5. 29:05 – 34:04

    Balancing AGI Pursuit and Product Development

    2. HS

      I- I have to, again, take things in turn. In terms of kind of the pursuit of AGI, and then also building useful applications for consumers, a company has to have a priority. Uh, I think we both agree on that. How does one hold dual priorities of chasing AGI and building a great consumer or enterprise product at the same time?

    3. BT

      What is the purpose of building AGI? It should be to benefit humanity.

    4. HS

      Mm.

    5. BT

      And so what does it mean to benefit humanity? Um, I think that, you know, the OpenAI mission is to ensure that artificial general intelligence benefits all of humanity, and that can mean a lot of things. Uh, I think it means a lot- different things to different people, which is why OpenAI's been sort of a- a honeypot for controversy, uh, you know, in a lot of ways because it's very important in this space, and that mission, uh, r- can be reflected through the lens of your own values to mean a lot of different things. Um, but it... One, it can mean access. So when you think about how do people access this amazing new technology, one could argue that ChatGPT has perhaps been, um, the biggest breakthrough in providing universal access to AI. Um, you know, it's... Uh, I'm not sure the idea of, you know, building this conversational agent that everyone can just use by visiting a URL, that was not a thing people conceived of probably before that, and it's why, at least my understanding, why it has sort of a goofy name is it was a research preview (laughs) that turned out to be the most important product of the past decade, you know? And, um, and I think that, you know, one of the things I think about is, wow, what an important mechanism to deliver the value of AI and AGI to the world. Um, and I think it's very aligned, you know, with that- that high-level mission. Um, and I think that, you know, to your point on-You know, there are things you would do building consumer products that are different than AGI, yes, and that's sort of the complexity that all of these, uh, research labs or mission-driven companies are dealing with. But to, uh, imply that sort of building a widely used consumer experience is somehow contrary to delivering value of AGI, I don't buy it because, how else are you gonna deliver it? And- and there- there could be different answers, by the way. 
Uh, but, you know, you really wanna ensure that, you know, once these technologies exist, that it's broadly available to everyone in the world, uh, obviously in a responsible and a safe way. And so, uh, I think it's really great that a lot of these research labs have found a form factor that resonates with so many people.

    6. HS

I get you, but like Aravind of Perplexity is like, "No, we're not- we're not doing that. We're building a Google killer. That's what we want to replace." And then, you know, OpenAI has like an enterprise product with an enterprise division, uh, and then like AGI and safety teams, and it's not as- it's- it's kind of cloudy. Do you see what I mean?

    7. BT

I think these issues are complex, to be honest with you, Harry. I mean, I don't think it's, uh... You know, look- y- you could describe it, you know, as an enterprise team. You could also say if you're trying to take the value of these models and ensure that they benefit humanity, do you want every product that benefits humanity to be built exclusively by OpenAI (laughs) you know? And so enabling developers to build on top of it is a meaningful part of distributing the value to the world. So, uh, and then similarly, you know, I think the- so I don't wanna, uh, minimize the- the- the complexity of all of these decisions, but I also think that, um, you know, if you- as you think about delivering the- the value of these models in a way that maximizes their benefit, uh, it doesn't seem that far off, you know? And- and it's, uh, and it's also, you know, what a lot of other research labs are doing for, I think, similar reasons with similar missions. And, uh, so I think it's, um... Uh, I'm excited about the impact that it's having. You know, so many of the, uh, entrepreneurs I know who are working in AI do so in large part because of how inspired they were by using these models as consumers, using the APIs, and I think it's having a super positive impact on the world right now.

    8. HS

      Do you think knowledge is proprietary to companies, given the incestuous nature of just some of the movements we've seen between people and teams?

    9. BT

      Certainly some knowledge is. Uh, I also think that there's a- right now, a lot of these companies are pursuing a mission that's bigger than any one organization. And so... And then similarly, uh, you know, a lot of these- the folks working on AGI are in or come from academia where the ethos is to publish, um, which has obviously, uh, shifted, uh, you know, a bit over the past few years. So, uh, it's a very complex question. Um, but I think that right now, I think the breakthrough ideas, um, you know, uh, sort of like- I do- I don't know the story actually, but, you know, the Wright brothers invented the plane. Apparently, there was another group of- you know, I actually don't know who it was, like came close as well and they were the ones who hit it. I think there's also this dynamic where these ideas are sort of in the air, you know, between different researchers

  6. 34:04 – 41:35

    Sustainable AI Business Models Amidst Commoditization & High Costs

    1. BT

      as well.

    2. HS

      We- we mentioned the commoditizations of foundation models as a technology. We've also seen price dumping and a- a race to the bottom in terms of price as well in a lot of cases. How do you think about AI business models that are sustainable given incredible training and inference costs?

    3. BT

So when I made the comment earlier about being skeptical of companies doing pre-training, it was really based on the premise that most companies should be applying AI to build solutions and most companies should have relatively modest training costs, and most of their costs should be correlated with inference, which should be correlated with revenue and usage of your product. Uh, and- and that's essentially because if you end up pre-training a very large model, you know, you end up with su- uh, such upfront capital requirements that you have to have a really valuable business model on the other side of that to justify that investment. So first, I think companies should really focus on how to find product-market fit, you know, prior (clears throat) to taking on meaningful training costs. Fine-tuning might be fine, uh, you know, but, you know, certainly not sort of pre-training models. On the inference side, um, I actually think, uh, the costs of AI are going down really, really rapidly. Um, I've seen a lot of people tracking sort of the cost of the GPT models over time and what's remarkable about the cost going down is the quality is also going up, you know? So it reminds me, you joked about- you- (laughs) when you were born, but, uh, but well, around the time you were born, every time I got a new computer in my house, it was twice as cheap and twice as good (laughs) . So I think, you know, on the inference side, I think that, uh, margins will probably improve for a lot of these use cases. Um, there's a lot of interesting technology trends like distillation, you know, taking a large-parameter-count model and making a smaller-parameter-count model from it that has similar levels of quality. And essentially what that means is while you trained a very large model, you can run inference on something that's much smaller, cheaper and faster.
Um, and then there's obviously a bunch of improvements on the hardware side as well, and, uh, I'm incredibly optimistic that just the cost of running AI will- um, could, uh, probably track something like Moore's Law. Like Moore's Law, I don't think it's a law (laughs) . I just think it's a trend, you know, that- um, and I think that's a really exciting thing for- for all companies.
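The distillation idea Bret mentions (training a small "student" model to mimic a large "teacher") can be sketched in a few lines. This toy example is purely illustrative, not any lab's actual recipe: the logits are made up, and real distillation minimizes a loss like this one inside a training framework such as PyTorch.

```python
# Core idea of knowledge distillation: push the student's output
# distribution toward the teacher's "softened" distribution.
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature flattens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.
    Minimizing this during training transfers the teacher's behavior."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [4.0, 1.0, 0.5]          # hypothetical large-model logits for one input
aligned_student = [3.9, 1.1, 0.4]  # a student that mimics the teacher
random_student = [0.2, 2.5, 1.0]   # a student that does not

# A well-aligned student scores a much lower loss than a mismatched one.
print(distillation_loss(teacher, aligned_student) <
      distillation_loss(teacher, random_student))  # True
```

Once trained this way, only the small student is needed at inference time, which is why distillation makes serving "much smaller, cheaper and faster."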

    4. HS

      If we think about that reducing cost, uh, over time and Moore's Law proving out, we're also just seeing, you know, Meta, we're seeing, uh, Amazon, we're seeing Google, let's say they are gonna invest ungodly amounts in the next three to five years. Does that go against Moore's Law and the reducing costs for them? And- and how do you think about those two seemingly kind of paradoxical things?

    5. BT

      I think the large hyper-scalers, uh, you know, are in a challenging position where there's a big difference between, you know, owning and operating one of the best frontier models and not. Um, so as a consequence, I think that, you know, I probably make very similar decisions to all of those firms because there's so much value in, you know, for consumer products, for, uh, infrastructure as a service providers, um, to have the- the- a differentiated frontier model available to their customers, um, that, you know, the betting on the future and then similarly betting on breakthroughs and AGI I think is- is really rational. Um, but the reason I was talking about the sort of Moore's Law part of it is I think that sort of like in the infrastructure as a service market, it really consolidated around a very small handful of- of companies. And much like AI, you know, building data centers like scale helps. So, you know, the more data centers you operate, the more you can afford the CapEx to expand your data center footprint. Um, it's just one of those things which I think, you know, it should be financed and built by the large hyper-scalers because of the CapEx requirements to do so. And I think as the training market sort of consolidates and people start, you know, uh, I think it will probably help because the revenue will sort of consolidate, you know, around those providers as well. Um, so it is a complex situation. I think these companies have an imperative because of the potential impact of AI to spend. Uh, and, you know, the CapEx numbers are mind-boggling, but I- I probably would do the same thing.

    6. HS

      Can I ask you, when you look at Google and Amazon, their cash cow to fund this is cloud. Zuck and Meta do not have a cloud business being their cash cow to fund this. What does that enable or mean that Zuck can do differently with the ca- with the cash cow not being cloud? Say that after 10 tequilas. (laughs) Uh, is there anything he can do differently? Is there any freedoms that he has?

    7. BT

Yeah, I thought it was... You know, one of the things that, uh, I have changed my mind on over the past year is how quickly open source foundation models would be impactful. So, I had a thesis when we started Sierra, uh, in- in March of- of last year that eventually we'd end up with a few frontier model, uh, frontier models essentially, uh, built and financed by some of the hyper-scalers, um, in partnership with the research labs and that we would eventually have a meaningful open source model or two, the equivalent of PostgreSQL and MySQL in the database market that would come out, eventually be adopted by one of the, uh, larger tech companies that- that wasn't one of the hyper-scalers just in the same way Google adopted Linux or Facebook adopted MySQL and Memcached, uh, you know, and contributed a lot of, uh, patches upstream to- to those projects. Um, and I would say Mark Zuckerberg sort of accelerated that by a meaningful amount, not only the timing of when that happened, but the quality. You know, LLaMA 3.1 is a really high-quality model. Um, so I think it comes from what you said, you know, without a cloud business to finance it, his incentives are different than, you know, the- the cloud providers. Uh, and I think he wrote, uh, no need for me to say, I mean, if you just read his post on why he believes that this is the right strategy, I thought it was a really well-articulated post. I think it's probably good for the AI market overall. You know, I think if you look at, um, just look at the cloud infrastructure market, you have a lot of proprietary solutions like, you know, DynamoDB to store data and you have a lot of open source, uh, things like Kubernetes to manage your infrastructure and then you have commercial companies commercializing those open source projects like Confluent with Kafka. So, I think that, you know, a healthy AI market probably needs all of the above. 
You know, you're gonna have the frontier models that are the best of the best that are licensed directly. The cloud providers will probably provide both, you know, both options. And, you know, I think it's, you know, if you're building these frontier models, you need to maintain a quality lead on- on the rest and I think it's really great for the ecosystem that there's a super high quality open source model available

  7. 41:35 – 44:36

    Is There Ever a Stop to the Escalating Costs in AI Development?

    1. BT

      right now.

    2. HS

      Is there ever a stop to the cash tap that's been turned on? Someone said the other day it's kind of like the Manhattan Project for them, which is just like you- you're in and you can't stop and the sunk cost is there and you're like, "Oh, fuck, another 20 billion." Is there any turning off of that cash tap requirement?

    3. BT

      Uh, you know, one of the big questions is, you know, what scale of supercomputer and what methodology and what data is required to create something that might resemble AGI or create that breakthrough in economic value that would justify the investment. No one really knows that. You have a lot of theories, you know, about it, but I think when you look at these companies investing this kind of CapEx in that future, I think it's absolutely great. You know, I think it's, uh, totally understandable investors would look, you know, at these- the CapEx and say, "Give me the spreadsheet that justifies the returns." I think, um, while that's completely rational and I'm sure there's folks doing that, I think the- the idea that we have this potential to create something that benefits humanity this much, to have this kind of impact on the economy, to create something that valuable, I'm very grateful that there's some bold CEOs investing in that future. I think at every stage you end up with that sort of increasing resolution about how it will be monetized, um, what the great products will be. You know, in the first wave it was lots of copilots. Now you have, as I said, agents. My sense is there could be a ton of value created here and I think, you know, you're in this position now where you, um, you don't want to be sort of penny wise pound foolish when you're sort of investing in this future. It doesn't mean that... As I said, that's why when I mentioned I'm skeptical of startups doing pre-training, like that's a risk that I find irrational because you don't have the capital structure to take on that risk. It's probably, you know, uh, essentially putting your- your- your company on a-... uh, running towards a cliff that you probably won't, you know, have wings to fly off of by the time you get there. 
If you're a, you know, one of the larger companies you're referring to, you know, and you think about how do you grow your revenue by a meaningful amount over the 10-year period, is that... tell me the better option than this, you know? And so, uh, I think there's a lot of, uh, understandable skepticism, but I also think it's a very exciting future.

    4. HS

      Everyone on the show has said that we will see consolidation in the market. We've had the founders of Adept on. We've had Character.AI. We've had Cohere. Uh, we've ha- uh, we've had Reid, obviously, from Inflection. Do you agree that if you are not one of those core, then we are in a cons- consolidation market?

    5. BT

      I think we'll see consolidation of companies, uh, pre-training their own models, uh, you know, and I think that the cost structure of, um, the tools and applications companies are different and perhaps more sustainable. So like any market, you'll see consolidation when there's winners, but I think it will, um, happen over a more measured

  8. 44:36 – 54:38

    AI Agents & the Future: The Decision to Build Sierra

    1. BT

      time period.

    2. HS

I do wanna... You mentioned agents there. I do wanna move into kind of agents and the future of agents. First off, with Sierra, why did... (sighs) You can literally do anything, Bret, if we're honest. Why did you decide to do Sierra?

    3. BT

So let me, uh, just describe what Sierra does, and then I'll, I'll tell you why I think that's very exciting. So, at Sierra, we help primarily consumer brands build branded, customer-facing AI agents. So if you buy a new Sonos speaker or you're having a problem with your speaker, you'll chat with the Sonos AI powered by our platform. Um, if you get a new car and it's got SiriusXM, you'll chat with Harmony, which is their AI agent. If you go to retail sites like OluKai or Chubbies, you'll chat with, um, I think, the Chubbies agent named Duncan Smothers or something. It's a really h- a great personality agent, um, that will help you with everything from finding your order, to order returns and exchanges. Um, so we're essentially helping companies build their branded AI agent, um, for all parts of their customer experience. The reason why I think this is a really exciting area for our customers, and, and for me personally, is that I think we're in the era of conversational software. So I, I remember when, uh, in 2007 when, when Steve Jobs announced this, and now I'm guessing you were 11 then based on our previous conversation.

    4. HS

      Mm-hmm.

    5. BT

      So, you may not remember it as vividly as I do, but-

    6. HS

I, I remember it well, Bret.

    7. BT

      Okay, good.

    8. HS

      I, I can remember there's, yeah-

    9. BT

      That's good. That's good.

    10. HS

      ... a thousand songs in your pocket, the iPod. It was mind-blowing.

    11. BT

      Yeah, it was mind-blowing. It, it really was. What's interesting though is in the corporate world, the dominant smartphone at the time was the Blackberry. And if you talked to anyone who had a Blackberry, they'd be like, "There is no way I'm gonna ever type on a touchscreen." Like, the Blackberry keyboard is and was beloved. People still talk about how efficient they were with it. But you fast-forward 10 years and 100% of those people had iPhones in their pocket. Why was that? I think the reason was is the multi-touch interface in the iPhone, plus all the benefits afforded by having a big touchscreen, from having a full-featured web browser to be able to watch media, um, we crossed a quality threshold where it was actually effective enough relative to the Blackberry keyboard that everyone said, "Yeah, this is just better. We're just gonna adopt it," and now we have more smartphones in the world than people. And I think if you measure what percentage of human-computer interactions are coming from smartphones' touchscreens today versus mice and keyboard, it's gotta be 95% plus. I think with GPT-4, we crossed, uh, that quality threshold of effectiveness with conversational AI, meaning you can now have a conversation with a computer and it actually works. It understands nuance. It understands sarcasm. Uh, you can actually, you know, have a really nuanced conversation and it will actually work. And as a consequence, you know, I think that, you know, if you fast-forward four or five years, when you're interacting with any of the consumer brands you work with, your insurance company, your phone company, um, a car rental company, you will probably be having a conversation with an AI more than you'll be clicking around on a website or clicking around on an app. 
And just like mobile apps didn't replace websites, they just sort of, uh, took a number of use cases away from them, if you think about when do you go to your bank- your bank's website versus the app on your phone, um, I don't think conversational AI agents will replace apps and websites, but I do think that every company will need one. We like to say like in 1995, the way you existed digitally as a business was to have a website. In 2025, the way you will exist digitally is to have an AI agent. Um, so in the context of Sierra, in the context of that word "agent," we're trying to enable companies to build their own, the one with their brand on it that does everything that their customers wanna do. And really in the fullness, fullness of that vision, if you think about everything you can do on a company's website, it's, it's pretty expansive. Um, so the reason I'm really excited about it is I think that it isn't just about automating, uh, something that exists and helping with customer service, though that's a meaningful part of it. Uh, we really think this is a new category of digital experience, and, uh, companies, uh, will and do want to be present in this world of conversational AI, but it's very hard to do, and that's why we're building a solution to facilitate it.

    12. HS

      Why is chat the right form factor? And is it multimodal? Is it like I can take a picture of the Domino's pizza and put it in my agent, and it's like, "Oh, that's the Mighty Meaty 17-inch." You can tell I'm not a vegan. Uh, to all vegans, I'm sorry. I just lost a big swathe of our audience. Uh, (laughs) uh, and like i- image-based is like me on a run being like, "Oh, you know, I want this." Uh, how do we think about multi-modalities and like why chat isn't

    13. NA

      (laughs)

    14. HS

      ... and maybe the dominant interface.

    15. BT

I think chat and voice and multimodality, I think, eh, the reason why I think conversational AI is a meaningful form factor is because it's low friction. So if you look at the use of WhatsApp around the world, this means that you can essentially exist as a business in WhatsApp and be a completely full-featured customer experience. If you think about CarPlay, which if you've, you know, tried to use apps while driving your car and using CarPlay, it's fairly limited, right? But now imagine that you can have a full productive experience on your commute, uh, into work. You look at, what was it, five or ten years ago when every- Alexa was exploding and everyone was putting smart speakers on their kitchen counters. We have them in our house as well. Right now, for our family, that's getting the weather, turning on music, right, that type of thing. Imagine that was a full-featured computer and you could, uh, order an Uber, you could, you know, check your calendar, you could, you know, follow up on an email while you're making your coffee. Uh, having these conversational experiences, both text, voice, um, while I'm not arguing that it is the perfect form factor for every experience, but just like the r- the same way, I don't know what percentage of your email you type on your phone versus your keyboard-

    16. HS

      Majority ni- 90%

    17. BT

      (clears throat) Probably 90+%. And you wouldn't say it's because typing on your phone is easier than typing on your keyboard. It's a convenience thing. And so my point on going through those different form factors of being in your car, being in your kitchen, um, you know, being in WhatsApp and not having to uninstall an app, those things, like, I think consumer experiences are driven by convenience and, and lower in friction. And my thesis is just like touchscreens have come to dominate our experience with computers, uh, because of convenience. You can have a conversation in so many different places. You don't need an instruction manual. I think it will be the main way we work with computers.

    18. HS

      Do you think we see the removal of the phone, though, as the primary interface? You've seen Zuck with the Ray-Ban glasses.

    19. BT

      Yeah.

    20. HS

      Why do you, why do you need the phone at all if I can just talk to myself? Which would look kind of weird, but normal because-

    21. BT

      (laughs)

    22. HS

      ... I do often, as the venture investor with too much free time clearly, uh, like, you know, I could ask myself, "Hey, like, get an Uber, uh, I- I'm here." Do we see the removal of the phone?

    23. BT

      It certainly seems feasible, but it's, I temper that with the, if you look at the past 15 years of consumer electronics innovation, how many companies, including the ones that make smartphones, have tried to make devices that, you know, replaced or augmented the phone unsuccessfully. You know, this, this device here, this phone, it's so good at so many things and everyone already has one. It's essentially completely removed the market for almost every other type of consumer device. Um, so in the short term, my intuition is that the combination of a smartphone with Ray-Ban glasses or AirPods or the like, um, probably, you know, meaning you might need to look at your screen less, uh, than you do today. But my intuition is because of the prevalence of smartphones around the world, it will still end up being, you know, the, the primary computer that mediates those conversations. But to the point that you made, you know, as conversational experiences start working more, the, I always get the big phone, you know, just because I like the big screen, you know, I, I think that a lot changes and, you know, I always go back to the early app store days and the early apps being such skeuomorphic apps like flashlights and then you have the mobile native experiences like WhatsApp, DoorDash, you know, Uber, um, Instacart. It took one generation for, for those things to, to really, um, exist. I have a sense that we will, it will take a little while to see agent native consumer experiences and agent native devices and the hard part about particularly consumer electronics is, y- you kind of need the consumer experiences to lead a little bit to have the market available, so it might take a while, but I, it certainly seems in the cards now in ways it wasn't before.

    24. HS

      Are WhatsApp not best placed in terms of installing an app store for every big brand in the world to implement their own channel, and then you have existing distribution to a billion, however many users it is, integrated already into functionality and apps that they use already?

    25. BT

I think WhatsApp is very well situated. Um, in particular, if you look at the usage of WhatsApp in places like Brazil and India, um, you know, it is, uh, approximating this already, but I think, you know, large language models and agents like the ones we built at Sierra open the door to sort of much more full-featured experiences. Um, but I also think the same is true of most mobile platforms. You know, I think that, you know, when you install an app on this, uh, it's probably going to be an app and an agent in the future. Like, when we work with our customers, you know, we want to enable them (clears throat) to take their AI agent in whatever form factor becomes a dominant consumer experience. You should be able to install, uh, your agent in that, in that

  9. 54:38 – 55:38

    Unanticipated Challenges in Building Sierra

    1. BT

      experience.

    2. HS

Bret, what was the hardest thing with Sierra that you did not anticipate being so hard?

    3. BT

I'll describe a technology problem and then I'll describe the human problem that was harder than I expected around it. So generative AI is very creative but inherently non-deterministic. Uh, you know, it's very hard to create determinism, the same inputs creating the same outputs, in particular because if you think about the breadth of human language, you know, it's just inherently less precise. And then similarly if you afford, um, AI the ability to reason, um, you know, sort of by definition you can't enumerate all the possible outcomes from there. So when you're building industrial-grade agents, you know, for businesses that have real business rules they need to follow, um, we like to say software is going from the age of rules to the age of goals and guardrails. And the hard challenge there is how do you enable businesses to express their goals and

  10. 55:38 – 1:04:37

    Transitioning from Software Rules to Guardrails

    1. BT

      guardrails.

    2. HS

      What's the difference between rules versus goals and guardrails? Are guardrails not rules?

    3. BT

When I think of, uh, rules, just imagine, um, you're a retail website. Uh, you probably have a menu at the top left and you click it and it has, you know, the ability to sort of filter down all the items that you sell. Men, women, shoes, socks, pants, that type of thing. You've probably experienced this. You've essentially enumerated, you know, the rules by which people engage with your site. Here's the categories, here's what you can click. You could probably have someone actually click through all possible pages on your site and verify that they look correct, if you wanted to. Now imagine you put an AI agent on your site. It's a freeform text box. People can type whatever they want. And, um, if you explicitly enumerate all the things the agent can say, it's gonna feel like a robot. And that's essentially what chatbots from like three or four years ago felt like. And actually, in fact, many of them had almost like multiple-choice options available to you because they couldn't figure out how to express that universe, um, nor did they have the natural language understanding to, to create a meaningful experience. So with an agent, you want to enable the AI to have agency and creativity to actually understand and really comprehend what the customer's problem is. But then once you go to, say... Let's just say you're a, um, streaming service and you want to use your AI agent to process cancellations. So when someone wants to cancel their account, probably the thing you should do is ask why. You might want to offer a discount. And if the person still wants to cancel, you might want to cancel their subscription. The goal might be to process the cancellation. 
And, you know, you probably want to afford the AI agent some creativity on how to present those discounts, to really do some discovery, like a good salesperson would, on what value you hoped to get from the streaming service. You know, things like that. And then eventually you want to cancel. Within that, there's lots of areas where you want to afford the AI agency and creativity, just like a really good, um, salesperson would, you know, have that conversation with you, um, in an empathetic, not pushy way. Just try to figure out if there's a way to retain you as a customer. And that's nuance, right? Empathetic, not pushy. That's where you need to give the AI a lot of agency. But you don't want the AI to go off script. You know, there was an article-

    4. HS

      Wouldn't it be quite funny if you were like, "I'd like to cancel." "Well, you're a dick." (laughs)

    5. BT

      Yeah.

    6. NA

      (laughs)

    7. BT

      Or even worse, there was an airline that had a chatbot that hallucinated a bereavement policy. Someone had a death in the family and the chatbot said, "The ticket's on us." I won't name the brand on your podcast, but it was a pretty bad thing. So you don't want the AI to have so much agency that in the extreme case it hallucinates. And in the case you mentioned, you don't want the AI to represent your brand poorly either. So essentially, when you're making an AI-mediated customer experience, like a conversational agent, you need to be able to declare both the goals of what the AI is supposed to do, and the guardrails, which could be around language and brand, around tone, how pushy or forceful you want to be, and similarly, here are the offers that are available, things like that. That's the technical problem we solve at Sierra, and I think in a fairly novel way.
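The goals-and-guardrails split Bret describes can be sketched as a simple agent configuration. This is a hypothetical illustration only, not Sierra's actual platform or API; names like `AgentConfig`, `Guardrails`, and `check_reply` are invented for the sketch.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the agent gets an explicit goal, plus declarative
# constraints on tone, brand language, and which offers it may make.
# None of these names come from Sierra's platform.

@dataclass
class Guardrails:
    tone: str = "empathetic, not pushy"
    banned_phrases: list[str] = field(default_factory=lambda: ["you're wrong"])
    allowed_offers: list[str] = field(default_factory=lambda: ["20% off for 3 months"])

@dataclass
class AgentConfig:
    goal: str
    guardrails: Guardrails

def check_reply(reply: str, config: AgentConfig) -> bool:
    """Reject any drafted reply that violates the declared guardrails."""
    return not any(p in reply.lower() for p in config.guardrails.banned_phrases)

config = AgentConfig(
    goal=(
        "Understand why the customer wants to cancel; offer a retention "
        "discount; process the cancellation if they still want it."
    ),
    guardrails=Guardrails(),
)
print(check_reply("I understand. Would 20% off for 3 months change your mind?", config))
```

The point of the sketch is the separation: the goal leaves room for creativity in how the conversation goes, while the guardrails are checked deterministically against whatever the model drafts.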

    8. HS

      Does that mean, sorry, that you only see agentic implementations for, bluntly, low-risk activities? Hey, I want money back on my Domino's. Listen, if you fuck it up, kind of who cares? But if it's, you know, my operating system for finances, whatever that may be, or your sales force, oh, I really don't want to fuck up pipeline for a billion-dollar business.

    9. BT

      I think that as AI improves, you'll see these agents adopted for increasingly mission-critical systems. The adoption curve rationally starts with relatively low-risk interactions and progresses from there. But our customers are already using it for revenue generation, sales, subscription churn management for subscription services, things like that. So I think as companies develop confidence in their agents, they can go to increasingly higher-risk areas. But this actually gets to the challenge where we started this conversation: it's a very different design problem than traditional consumer design problems.

    10. HS

      Mm.

    11. BT

      If you think about designing a website or designing a marketing campaign, you have quite a bit of control over it. You can enumerate all the different permutations your customers might see. With an AI agent, in addition to your consumers being able to say whatever they choose, the more you give the agent agency, the more it will have empathy and feel delightful, but the less control you'll have over it. So the really interesting discussion we have with our customers is: if you want your agent to have a ton of personality and a ton of empathy, you probably need to turn the knob up on agency. But with that comes risk. You can turn the knob all the way down to zero, which, by the way, our platform supports for the high-risk cases; there are some cases where you don't want a ton of creativity or non-determinism. But in that case, your agent might sound more robotic. You might regress back to the chatbots of a few years ago. So we don't come in with a prescriptive view on what's right for a particular customer workflow or a particular brand. But it's a really interesting discussion, and I think that just like the concept of a user experience designer was a new category of job as the web took off, when designing user interfaces was no longer just the domain of boxed software, we think there's a role of an agent engineer who builds these agents on our platform. We think there's a role of an AI architect, a customer experience leader whose job is to do the conversation design and shape the behavior of these agents. And we're essentially building products and tools for these different new types of jobs, which we think are just as meaningful as UI design or web development. 
And I think that's really exciting, but it's also creating this natural tension at our customers. I mean tension not in the personal sense, but actual intellectual tension: how much agency do we want to afford our AI? Making the guardrails more narrow makes the agent slightly less delightful, but making them more broad reduces control. And that's such an interesting discussion to have with brands.
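The agency knob Bret describes could be modeled, very loosely, as a single parameter that trades scripted determinism against generative freedom: at zero you fall back to canned, fully scripted responses like the old chatbots; above zero you let a model generate within the guardrails. A minimal hypothetical sketch, with all names invented:

```python
# Hypothetical "agency knob": 0.0 = fully scripted (deterministic canned
# replies), 1.0 = free generation within guardrails. One plausible
# implementation maps the knob onto LLM sampling temperature.

SCRIPTED_REPLIES = {
    "cancel": "I can help with that. May I ask why you're cancelling?",
}

def respond(intent: str, agency: float) -> str:
    if agency == 0.0:
        # No creativity: always return the canned reply for this intent.
        return SCRIPTED_REPLIES.get(intent, "Sorry, I can't help with that.")
    # With agency > 0 we'd sample from a language model; here we just
    # show the temperature the knob would map to.
    temperature = 0.2 + 0.8 * agency
    return f"[LLM reply sampled at temperature={temperature:.2f}]"

print(respond("cancel", 0.0))
print(respond("cancel", 0.75))
```

The design choice the sketch captures is that the zero setting is exactly the deterministic behavior brands already know how to audit, so the risk conversation becomes about how far above zero to go.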

    12. HS

      On the flip side, how much agency do you give a human who's been trained for a week and sits in your Detroit customer service department and could get high and then abuse a customer? I always think we forget this when we talk about AI hallucinations. Humans hallucinate a shitload, too. (laughs)

    13. BT

      This is the interesting thing about modern large language models and what the industry has come to call generative AI: it violates most of the rules we have in our heads about computers. Computers are designed to be reliable. You click this button, the same thing happens every time you click it. They're designed to be databases. They're not designed to be creative, right? They're designed to give you facts, to follow the rules we give them really, really fast. And just think about the craft of software engineering. There are entire methodologies now for getting increasingly reliable software, which involve using source control like GitHub and using immutable binaries so that you can roll back and have the same behavior you had yesterday if something goes wrong. We've essentially spent decades trying to make things deterministic, repeatable, reliable. And now you make this new piece of software that is slow, somewhat expensive, extremely creative, and fairly non-deterministic, and it blows people's minds. I think that as a consequence, people are modeling AI through the lens of: how do we make it as deterministic as software was two years ago? I'm not sure that's the right model. I actually think the thought exercise you did is the right one. Okay, let's assume that our salespeople or our call center agents occasionally go off script. How do we deal with that? There are probably operational mechanisms at your company to deal with those situations. Okay, why don't you just use the same mechanisms to deal with AI as well? Stop putting AI software in the bucket of computers, with that rule set where you try to get to five nines of repeatability, and say, "Okay, this is actually going to be a really creative, really impactful, much lower-cost solution." 
It will do some things that are incorrect some of the time. How do we deal with that eventuality, rather than try to fully prevent it, which right now is almost

  11. 1:04:37 – 1:08:55

    Content Verification & Trust: Concerns in a Misinformation-Driven Era

    1. BT

      impossible.

    2. HS

      Can I ask a slightly off-tangent one? It makes me think of moderation, when you said about how we determine whether someone went off script and what we do with it. My biggest concern, honestly, is that, like it or not, I look at most of the stuff on my Twitter timeline and I'm like, "Is that real or fake?" And I send it to my family and they're like, "Fake or real?" It's unbelievable, the switch in terms of how we question the verification of content. Someone said on the show very recently, Arvind Narayanan, who's at Princeton University. I now get to interview professors. Very, very intelligent. And my mother's like, "Really?" But he said, "You know, the thing is, Harry, it's not that you will believe stuff you see that's not true. It's that you won't believe anything at all." Do you agree with him? And what are your biggest worries about this next wave?

    3. BT

      I really do believe that for most of the problems in AI, there are AI solutions to those problems as well. For a lot of content you're looking at, it would be interesting to put it into something like ChatGPT and ask it, "Is this real? How should I determine if it's real?" And you might get some good advice. As we think about information veracity and authenticity, my hope is that you end up with the white hat and the black hat, and the white hat teams, just like in the world of cybersecurity, will give us all the Iron Man suits we need to successfully trust, or distrust, the information that we see. That's why I mentioned that outlook where, as these technologies get developed, we collectively learn about their ramifications, which is why I believe in responsible iterative deployment of AI: I think it's very hard to predict, from an ivory tower, all of the first- and second-order effects. But then it is an imperative for us, as an industry, to develop technologies and mitigations for these different downsides of the technology. And I feel confident we can. All the great AI minds are trying to think about how this benefits humanity, and for every problem, there's a great entrepreneur or technologist or researcher who I think will come up with ways of meeting that challenge.

    4. HS

      Can I ask you a weird one before we move into a quick fire?

    5. BT

      Yeah.

    6. HS

      You're Bret Taylor. When you go out to fundraise, it must be a little bit different now, Bret.

    7. BT

      (laughs)

    8. HS

      Like, how does that work? (laughs) Do you know what I mean? So just help me understand. You decide you're going to do Sierra and... yeah, I mean, I guess it's partly a question of why fundraise at all? (laughs) But then there's also a question of how you approached it, now that you could raise from anyone.

    9. BT

      Well, first, why did I fundraise? I really believe in the importance of boards and having stakeholders, the accountability of having a board and investors and employees. And I want the employees coming to Sierra to know that Clay and I aren't doing this as a side hustle. We want to build a generational company. And then similarly, I really value the advice; I've been a board member as well as an executive, and I really value the strategic advice I've gotten. So when we started the company, I just called Peter Fenton, who I've worked with twice before. He's the only person I talked to. And that was our first board member, and with our subsequent round, similarly went on-

    10. HS

      How long was that conversation?

    11. BT

      You know, I don't want to disclose private details, but Peter and I have worked together a lot before. It probably could've been even shorter than it was, but I wasn't there in a transactional capacity; I wanted to talk to Peter about what we're doing and why, and get his advice. So it was the right conversation. I think the best relationship between investors and entrepreneurs is one where they're your first phone call on a strategic issue, and thankfully I've known Peter for almost 20 years, so it was pretty clear to me that he was the first person I was going to call.

    12. HS

      I have a man crush on Peter's brain, so it's totally fine.

    13. BT

      (laughs)

    14. HS

      I, I remember when I had him on the show first, I was like, "Wow, he's the most articulate orator I think I've ever had on this show."

  12. 1:08:55 – 1:16:03

    Quick-Fire Round

    1. HS

      Listen, I wanna do a quick-fire round, Bret. So I say a short statement, you give me your immediate thoughts. Does that sound okay?

    2. BT

      Yeah.

    3. HS

      So what have you changed your mind on most in the last 12 months?

    4. BT

      How quickly the cost of AI will go down, largely thanks to the emergence of distillation and open source models like LLaMA.

    5. HS

      What is the biggest misconception of the next 10 years of AI?

    6. BT

      The focus on hardware and models and not enough focus on the applications of AI. Uh, I think many of the defining companies in AI will be delivering consumer and business solutions that happen to be powered by AI, not just the models themselves.

    7. HS

      Who's the best board member you've sat on a board with and why them?

    8. BT

      I'll skip Peter since we just spoke about him. I don't stack rank board members, but one board member I've worked with twice is Fidji Simo. She's the CEO of Instacart. I worked with her at Shopify, and she's also on the board of OpenAI; one of those folks who's an operator who also knows how to be an effective board member, and a remarkable intellect.

    9. HS

      Which VC is the single best picker do you think and why them? It can't be Peter.

    10. BT

      (laughs) I don't know. I don't follow it enough to know.

    11. HS

      This episode is brought to you by Peter Fenton. (laughs)

    12. BT

      Yeah. (laughs) I actually honestly don't follow them, not because I don't care, but I follow the companies more than the investors, so I just don't know.

    13. HS

      Can I ask your advice? You've sat on some of the best boards. I sit on boards now. I am a young board member. I wanna be the best board member that I can be. Is there any advice that you'd give me having seen many different types of boards and types of entrepreneurs?

    14. BT

      I think the art form as a board member is being involved enough without jumping into the operations of the company, and knowing how to give advice in a way that the CEO and the management team actually hear. Finding that balance, creating a cadence with the companies you work with to get the information you need so you know where you're going to add value, and knowing when to call the proverbial bat phone because something's wrong, is the biggest art form. So I would say board members who treat every engagement the same are probably not doing it right, because different executive teams and different CEOs hear advice in different ways, and the businesses are very different. So really treat each one uniquely and find an operating cadence so you can get the information you need to actually provide good advice.

    15. HS

      What yes that you got was the most important or significant yes?

    16. BT

      Probably the most impactful, unexpected point in my career was Mark Zuckerberg making me Chief Technology Officer of Facebook. I'm not sure I was qualified to do that job, but he saw something in me, and I obviously saw it in myself as well. That moment changed my own conception of myself, from being an engineer to being able to lead larger teams, and it was largely because of Mark's faith in me.

    17. HS

      What's your favorite story from Facebook?

    18. BT

      When the movie The Social Network came out, we rented a movie theater and all watched it together. It's a fine movie, but there's this funny scene where they order appletinis, which is kind of a lame drink, let's be honest. Like, no one orders an appletini and maintains their-

    19. HS

      Coolness.

    20. BT

      ... reputation on the other side of it. So we go out to a bar in Palo Alto afterwards, and I walk up to get a beer, and the bartender's like, "What the fuck is it with you guys and appletinis? People have been ordering them all night." After the movie, everyone just ordered appletinis, and the guy at the Old Pro was like, "I ran out of" whatever that toxic-looking green liquor is, "What is it with appletinis today?" So that's one of my favorite moments.

    21. HS

      Do you know how I first heard about venture? When I was 13, I was sitting in a cinema in London and I saw the scene with Peter Thiel and Clarium, where he invests in the young Zuck. And I was like, "Oh my God. I wanna be a venture investor." That was-

    22. BT

      Yeah. I mean, the ironic part of that movie is that, whatever the director was trying to achieve, I've met a lot of entrepreneurs who view it as a source of inspiration, which I'm not sure was the director's goal (laughs) when he-

    23. HS

      (laughs) I know. I, I think the exact same and also like I think everyone took like entrepreneurship away from it and I was like, "VC."

    24. BT

      (laughs)

    25. HS

      Bret, I literally don't think anyone has had the view of leaders that you've had: working alongside Zuck, Benioff, on the board of Shopify with Tobi, on the board of OpenAI with Sam. These are the greatest leaders of a generation. What do they have that is non-obvious that makes them great leaders?

    26. BT

      One of the things I've admired most about the leaders you mentioned, whether it's Larry and Sergey, Marc Benioff, Mark Zuckerberg, or Marissa, whom I worked for at Google, is this relentless drive. Every time you might get comfortable with a situation, they're always looking out toward the horizon. I always found Mark Zuckerberg particularly remarkable at this. Every time I thought I was thinking long term, whatever Mark was thinking was about 2x farther in the future than I was. (laughs) And it was so disconcerting and motivating for me when I was there. When I became Chief Technology Officer of Facebook, after they had acquired my social network, I was 29 if I'm remembering correctly, I think it was 2009, and seeing how his brain worked definitely changed my perspective on what bold leadership meant: taking bets that could be unpopular or complex in the short term to achieve a long-term goal. You really see with some of the great entrepreneurs this ability to think extremely long term and make decisions accordingly. Especially if you're a public company nowadays, it's such a challenging cadence to parade yourself out in front of investors every three months, and while investors claim to be long term, very few have the patience they extol on their websites. It really requires this relentless focus on the future. The other thing I would say they all have, in very different styles, is the ability to communicate that vision to employees and stakeholders. 
Employees are probably the most meaningful audience when you're running a company and motivating the team. You really have to tell people what the future will look like, why it will be important, why it's worth pursuing, and why people need to overcome the short-term challenges. You have to have the vision, and you have to bring the team along with you. And all of them have it, in very different styles, but in ways that are incredibly inspiring.

    27. HS

      Dude, rock and roll. You're a star. Thank you so much.

    28. BT

      Thank you. Really appreciate it.

    29. HS

      I'll see you man.

Episode duration: 1:16:09

Transcript of episode vRhPc0zt2IE
