The Twenty Minute VC

Christian Kleinerman: Do OpenAI and Anthropic Have a Sustaining Moat? Who Wins the AI Wars? | E1063

Christian Kleinerman is the SVP of Product @ Snowflake. Before Snowflake, Christian spent close to 5 years at Google as a Senior Director of Product Management @ YouTube, working on their infrastructure and data systems. Before YouTube, Christian spent over 13 years at Microsoft, serving as General Manager of the Data Warehousing product unit, where he was responsible for a broad portfolio of products.

----------------------------------------

Timestamps:

(0:00) Intro
(0:30) Introduction and Professional Background
(02:44) Professional Insights and Principles
(05:33) AI: Insights and Impact
(13:31) AI and Data: Ethics, Challenges and Legalities
(18:08) AI’s Future Developments and Business Strategies
(38:10) Reflections on Leadership
(42:32) Quick-Fire Round

----------------------------------------

In Today’s Episode with Christian Kleinerman We Discuss:

1. Lessons from the Greats: How did Christian first make his way into the world of product? What are 1-2 of his biggest lessons from working with Satya Nadella and Frank Slootman? What are 1-2 of his biggest product lessons from Google and Microsoft?

2. Generative AI: Real vs Fake: How does Christian analyze the current generative AI landscape? Which segments will be the fastest to adopt? Which will be the slowest? What aspects of the ecosystem are overblown? Which are under-appreciated? How does Christian respond to the many VCs who suggest that many startups are simply wrappers on GPT?

3. Models 101: What matters more, the size of the data or the size of the model? Will any of the models used today be used in a year? Does Christian believe Alex @ Nabla is right in saying that “the most successful companies will be those that are able to transition between models the easiest”? How is the evolution of model size impacting the accuracy of results and the size of data required?

4. Incumbents vs Startups & Open vs Closed: Who is best positioned to win: startups or incumbents? What are the nuances; which spaces are best served for startups to win vs incumbents? Will open or closed source be the dominant mode? What are the single biggest challenges preventing open source from being successful?

----------------------------------------

Subscribe on Spotify: https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465
Follow Harry Stebbings on Twitter: https://twitter.com/HarryStebbings
Follow 20VC on Instagram: https://www.instagram.com/20vc_reels
Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok
Visit our Website: https://www.20vc.com
Subscribe to our Newsletter: https://www.thetwentyminutevc.com/contact

----------------------------------------

#ChristianKleinerman #Snowflake #HarryStebbings

Christian Kleinerman (guest) · Harry Stebbings (host)
Sep 22, 2023 · 46m

EVERY SPOKEN WORD

  1. 0:00–0:30

    Intro

    1. CK

      I don't think that it's mass firings happening next week. It's more incremental productivity boosts happening over the next 6, 12, 24 months. And then over time, you decide whether you take those productivity gains and you turn them into fewer employees versus more productively deployed employees.

    2. HS

      Christian, I am so excited for this. I've heard so many good things. So, I would love to start with your entry into products. How did you come to be SVP of product at Snowflake? Let's start there, Christian.

  2. 0:30–2:44

    Introduction and Professional Background

    1. HS

    2. CK

      Uh, thank you for having me, Harry. Um, background: born and raised in Colombia in South America. I did a startup there; I learned what not to do. I did another startup in the US; I learned what else not to do. So at some point, I'm like, I need to learn from the guys that really know how to build software. This is 1999. I joined Microsoft, did a long stint in data the whole time: SQL Server, appliances when appliances were the thing to build, and then cloud services. And from there, I went over to YouTube at Google, where I was responsible for the infrastructure, including data systems. And I think all of that set me up for understanding data, being a data junkie. And when the opportunity opened up at Snowflake, I'm like, "I can appreciate the technology and the type of company." So I'm like, "I'm ready to be there."

    3. HS

      It was the charm and charisma of Frank. I don't blame you. I had the same feeling when he looked into my eyes. I do have to ask, you mentioned that you learned what not to do. If there were one or two things that you really learned what not to do, what would they be, Christian?

    4. CK

      I would say talent being the driver of truly great outcomes. I would say don't ever compromise on talent. Don't ever say, "Yeah, this person doesn't have the background, but maybe has the right intention. Just take a bet." No. I think that nothing substitutes talent. That's a very clear lesson learned. And the other thing that has been very clear is building a scalable business is difficult. In one of those startups, we built some scheduling software for airlines. And the thesis was, you build it once, it works, then you just resell it and you can be the next Microsoft. It was not the case. There was a lot of customization. It turned out to be more of a services business. So scalability and building platforms was another lesson learned.

    5. HS

      If those are the lessons learned from, um, respectfully, the startups that maybe didn't go to plan, we could say: when you think about Google and Microsoft, they're such symbolic institutions in our environment. What are one to two of your biggest takeaways from 13 years at Microsoft? I mean, shit, that's a long time (laughs).

    6. CK

      Yeah.

    7. HS

      And then, you know, the four to five years that you had at Google. What are one or two of those big takeaways?

  3. 2:44–5:33

    Professional Insights and Principles

    1. HS

    2. CK

      Yeah. From Microsoft, I think I got the ease of use and value of simplicity in products. At the time, SQL Server was coming from way behind, competing with market leaders that were Oracle and IBM. And the way in was not to have every single feature and capability that those technologies had. It was just: simplify things. If you offer something that is a subset of the capability but dramatically easier to use, that gets a following. And that was a very, very clear lesson learned. And I've seen it over and over. And by the way, the Snowflake story follows a big part of that journey, which is, if you simplify things to a point that it is delightful to use, people adopt. That's from the Microsoft time. Let me think. From the YouTube time, maybe the biggest lesson learned is that consumer products have many more elements beyond just technical difficulty. There's a lot of timing: what are the trends in consumer behavior? Sure, you need to have a good business model, you need to have good technology, but there are things that you can't control.

    3. HS

      Can I ask you a really hard one? Is simple always better in product?

    4. CK

      I would say yes. Well, you'd say all things being equal. There are points where you will oversimplify, but I do think: make things as simple as possible, and no more.

    5. HS

      If you could call yourself up the night before your first role (laughs) in product, before Microsoft, before Google, what would you advise yourself, knowing all that you know now?

    6. CK

      That the fundamentals of making something work as advertised make a very big difference. So simplicity is part of it. Quality is a big part of it. Even things like latency make a big difference. There's nothing like the magical experience of trying to use a product, whatever technology or device or appliance at home, and it just simply works and does what is expected. I think that is a magical thing. It's very hard to make happen in all situations, but that would be the big word of advice.

    7. HS

      Now, I, I do wanna focus the show on a, a joint passion for both of us. It's funny, we, we discussed a little bit in terms of topics to discuss today, and I was very excited when I got your suggestions. And I wanna start from the top, which is kind of ecosystem level, and then we'll move down. But as generative AI is kind of the biggest buzzword of today, uh, VCs and-

    8. CK

      Never heard of it.

    9. HS

      Never heard... What is this (laughs) ? So when we think about GenAI as a buzzword, like top down, how do you analyze the ecosystem today? Is the hype overblown

  4. 5:33–13:31

    AI: Insights and Impact

    1. HS

      as a starting point?

    2. CK

      Yeah. I would say that for sure there is hype. For sure there's FOMO on "what are you doing on GenAI?" And people are rushing to somehow stitch some GenAI onto their products. But if you look past that, there is fundamental innovation there. There is fundamental disruption. I do think that there is the opportunity to change pretty much every single interaction between humans and computers into something that is friendlier and simpler. So yes, there's fog and there's noise around it, but it will clear up, and at the end of the day, it's gonna be a different world.

    3. HS

      Do you think mobile is a good analogy to AI, or do you think it's actually more significant?

    4. CK

      I think it's comparable. I think it's on the scale of the internet; I think it's on the scale of mobile. One of those things where anything and everything will get better.

    5. HS

      Tell us, what do you think is most exciting? When you look at the different verticals, use cases, opportunities, what do you think is most exciting?

    6. CK

      I would say that this is a real shot in the arm to the creative businesses. And that's, I think, where we talked about generative AI, that it's generating or creating something. That's where we saw all the initial use cases, in Stable Diffusion or Midjourney and those things. In many ways, because what in other industries we would call a bug or a hallucination, in the creative world, they're features. It's a goodness to come up with something that has not been done before, or that's a mix and match of existing things. So creative industries are probably the sweetest spot. I love what Adobe has been doing with their products and how they've integrated gen AI. Massive kudos to them. Then there's the opportunity in every other industry, in every other vertical. It's only that that requires a little bit of understanding: correctness, data maturity, things like that. But every customer that I talk to these days is looking into doing something. They're all trying to figure out how to get started and how to get there. But I think it applies to all sorts of businesses.

    7. HS

      It's interesting you said there, about kind of, they're all looking to do something. You know what I'm finding though, 'cause I speak to these enterprises too. They've got no freaking clue how to do it, Christian. They're all like, "Yeah. So, it's exciting. How, how do I do that, Harry?" And I think the biggest businesses in AI will be built in the implementation services businesses, helping large enterprises implement AI in a meaningful way into their enterprise over the next decade. Do you agree with me in terms of this lack of enterprise education on implementation, and the potential opportunity for implementation services given that?

    8. CK

      Uh, I agree part of it is services, implementation, education, but I also think that the stack and the way to think about this is evolving. I don't think that we have a very clean "these are the three or four components that you use, and this is the type of use case that you apply." Companies are playing with, hey, the LLM is magical: you send it questions and magic answers come back. Some others have combined vector retrieval with LLMs. But within that, there's a lot of variability. Which model, which database or which vector database, how much do you prompt versus fine-tune? So I would say a lot of that is still being figured out. The stack continues to evolve and will continue to mature, and in parallel, companies need to get educated on when to use what for which use case.

    9. HS

      I get you. In terms of kind of, uh, like verticals that are fastest to adopt, I think this is the other thing that we all get a little bit overexcited about, which is like, speed of adoption always takes longer than people think. Many, if you can believe it, Christian, enterprises in Europe still have no idea what Slack is. And so my question to you is, when you think about the different verticals, who do you think is first to adopt and the fast movers? Who do you think is slow to adopt?

    10. CK

      I think it completely correlates with data maturity. You may have heard some of us at Snowflake talk about there's no AI or gen AI strategy without a data strategy. It's not just a line; it's a truth that we strongly believe in. And from that perspective, I would say financial services are at the forefront. Most of the financials have figured out for a long time how to organize data and leverage data for competitive advantage. Go look up the sophistication of hedge funds, as an example. Retail and CPG companies have also been very wise at using data. Maybe I would point to the public sector. Not that they're not data savvy, but they have a lot of regulation and constraints that may make it harder for them to adopt.

    11. HS

      How do you think about incentives being a problem? And my point here is, like, if you think about healthcare, healthcare is largely dictated by, you know, government budgets, by the NHS in the UK or, you know, the alternative in the US, which is run by politicians. You don't want to replace nurses with AI-driven robots and procedures, 'cause then you'd be firing nurses, and then the front page of the newspaper is, "Biden or Rishi Sunak fires nurses." So the incentive mechanism is misaligned with increasing efficiency in healthcare or in a lot of public services, because it likely leads to job loss. How do you think about the incentive function being the problem for adoption versus implementation speed?

    12. CK

      I think it may be true that once we're ready to wholly replace nurses and other functions, there's probably an incentive problem. But I don't think that we're there yet. I think most of the use cases right now are around productivity boosts or assistance, copilots, as opposed to replacement. So think of it as "I wanna help people be more productive," not wholesale replacement. At this point, I would say the incentives should be working; maybe, I don't know, a year or a couple of years from now, what you're saying becomes true.

    13. HS

      You said there about kind of, there's no gen AI without an incredible data strategy. You said to me before, "Generative AI is democratizing data access." You left me with that as a cliffhanger, Christian. So how so, and why do you believe this?

    14. CK

      Yeah, if you think of the role of traditional business intelligence technology, it was to sort of bridge the impedance mismatch between the business, business terminology, and business users, and a technology that, frankly, is just for a few people. Like, writing SQL statements is not something that most people in a company do. And that was the role of that technology: to do that translation. But it still requires some mapping and curation, and effectively, how do you inform that impedance mismatch? I think gen AI has the opportunity to turbocharge this type of translation, where the question is in natural language and the answers come in natural language, but along the way there are traditional database lookups, traditional retrieval. It has the opportunity to democratize data dramatically more than where we are today. BI has not done a really good job of that, but I think it's now gonna be data for everyone.

    15. HS

      When we think about kind of data as a competitive moat, you know, I've had a lot of people on the show say before, bluntly, that data is so freely accessible today, it's no longer this prized possession that incumbents can hail and use to their advantage given how freely accessible it is. To what extent do you still place a premium on data ownership to leverage

  5. 13:31–18:08

    AI and Data: Ethics, Challenges and Legalities

    1. HS

      versus the freedom to access data?

    2. CK

      Yeah. I would separate: there's both public data and private data, privately owned by enterprises. But the premise of your question is based on public data. And I would say, if things did not change, the language models, or the models in general, would all converge; they're all training on the same data, and at some point it's what mix of data you use, but you'll trend towards the same answer. The interesting trend there is the notion of many companies realizing that their data is being used and monetized by these models. So there are companies rethinking and changing their data policies. Are you allowed to crawl me? Are you allowed to train models with this? I think all of this will shift in the next six, 12 months. This is all starting already, because in the same way that search changed the rules of engagement with public data, gen AI is doing the same thing, and the companies that are behind that data are responding.

    3. HS

      You've left (laughs) me on, on a very big statement there which is like you, you mentioned there companies realizing that their data is being used, bluntly, and they're not being able to monetize it and they're losing eyeballs because of it on their sites. How do you think about the business model of the future to ensure that they don't lose this revenue despite their loss of data control?

    4. CK

      I don't know if they're gonna be able to keep all the existing revenue, but for sure they should be able to capture some amount of revenue. I don't know exactly what the replacement ratios would be, but clearly their data is valuable. Right now it's not being paid for, and that says there's a value gap there. And then you shared some of the license terms that are being floated around. I think it will shift, and then it changes the economics of all this.

    5. HS

      Can I ask, if you were to place a value on data versus model, what would you place a hundred as your total pie size? (laughs) Is it 80/20? Is it 50/50? How do you weigh the importance in terms of data and model?

    6. CK

      The vast majority goes to data. I don't know if it's 90-plus percent. Certainly there is a lot of IP and technology that has gone into how you build these models, but that is becoming a less differentiating aspect, and what's becoming bigger is data. If anything, you hear folks doing the math on: are we gonna run out of public data to improve these models significantly? So I would say-

    7. HS

      I'm sorry. I, I, I-

    8. CK

      ... data takes the edge.

    9. HS

      ... I would just love your advice. (laughs) Why are models as little as 10 percent? And bluntly, why is such value being placed on the likes of OpenAI, Bard, Anthropic if actually models are 10 percent and not a significant chunk more?

    10. CK

      Well, I would say if you look at where things are today, or where they were six months ago, maybe models would have taken a bigger edge, 'cause they paved the way on how you model the data. You could make the case data has been there all along. But I would stick to the 10% or a smaller number, because now you see how many foundation models are being created. We've seen companies with seven employees creating models that are comparable, for some use cases, to what OpenAI or Anthropic do. So I would say at the end of the day, it's a data problem. And models... those are strong words, but I think they're getting commoditized, until the next big innovation comes and you allocate some more value to the model.

    11. HS

      Can I ask, what's the next big innovation do you think? Sorry, I'm really freewheeling here. (laughs) I'm just learning in, in real time. (laughs)

    12. CK

      I'm hearing, and I'm largely repeating it: what has happened for language models is coming for computer vision, for images. The democratization, how do you simplify it. Then there's the intersection of those, true multimodal models; there are many of them out there, but how do you make it completely seamless to go and intersect images, speech, text, all of it, into a richer human-computer interface?

    13. HS

      Do you think it is the existing kind of model incumbents, your OpenAIs, your Anthropics of the world who chase down those innovations? Or do you think it's net new platforms?

    14. CK

      I have no idea. Um, I'm pretty sure all of them are chasing that. I, I don't know for a fact, but I'm,

  6. 18:08–38:10

    AI’s Future Developments and Business Strategies

    1. CK

      I'm willing to venture that they're definitely chasing those innovations. But I also... What we're seeing right now has generated such a big spur of creativity. Most new startups getting created, somehow they all wanna chase some aspect of the gen AI revolution. So maybe it's a hybrid of both. I don't think anyone knows how all of this plays out... even 12 months from now.

    2. HS

      You mentioned there are startups chasing it. So many VCs say, "Ah, you know what? They're just a wrapper on top of a, you know, GPT model." Like, pfff. And it's kind of brushed off in that way. "It's just a GPT wrapper." Do you think that's fair, and actually these models and the providers will create and kill off all the startups with their verticalized use cases? Or do you actually think that VCs are being short-sighted with the kind of handoff of, "It's just a wrapper"?

    3. CK

      I think there are many categories. There are some very shallow wrappers on top of GPT-4; I don't place much value on them. I usually ask, "Hey, how long did it take you to build this?" Oftentimes it's a week or two. I don't think there's a company there. I do think that there are some very deep wrappers on top of GPT-4 that apply domain-specific knowledge, legal or some other domain, where I think you end up with a true way to bring GPT-4 to a given market or industry. I think those are of value. There are actually many startups chasing the different aspects of innovation on the core technology. I was at a dinner a few weeks ago, and someone was saying, "We're trying to blend fine-tuning with prompting." Someone else was saying, "We're trying to address the limitations of the transformer model." Someone else was trying to look at how to do computer vision better. Once you start looking at the core tech, not just applications of existing models, I think there's innovation everywhere, which is why I would say there will be a number of new startups creating new things, and probably the OpenAIs and Anthropics are continuing their innovation. Those are effectively research companies.

    4. HS

      They absolutely are in many ways research companies. The thing that I wanna ask is, like, often the size of the model is quite hailed. How important do you think model size is today, Christian?

    5. CK

      I think it depends on the use case.

    6. HS

      Hm.

    7. CK

      For a generic consumer product like ChatGPT, where anything is fair game, the model is supposed to know about every possible topic, it speaks every language, et cetera, I would say that for those use cases, a model that is large and has lots of cumulative knowledge is very valuable. I would say that for specialized use cases, which is what I see more in the enterprise, model size matters less. If anything, model size will influence things like cost and latency. So smaller may be better. And now there are plenty of examples where a smaller model fine-tuned for a specific purpose or a specific dataset produces better results than a generic model. So I would say it's use-case dependent.

    8. HS

      Do you think we actually do that, we downsize model size to increase efficiency, you know, to reduce latency, to reduce cost? Or do you think we actually just increase efficiency of compute to be able to ingest and work with larger model sizes more efficiently?

    9. CK

      Both. Both. And, as I was mentioning startups, one of the companies that I talked to was working entirely on model compression, only to improve latency and cost. So I think there is research and development on both. Sure, compute is getting faster and cheaper, but smaller models are better. Like, if you wanna have one or more model calls in the serving path of a consumer product, you have a budget of a few milliseconds to go make things happen, and size will matter.

    10. HS

      It's interesting, that, kind of thinking about kind of the efficiency, the cost, the latency. I, I had, uh, Noam from Character on the show, and, and he said, "The biggest problem that we have is the cost of training." He was like, "Two million dollars to train one single model." How do we think about the cost of training changing over time? Will we see a democratization there which will allow startups to fine-tune and train themselves? Will it continue to be high? Help me understand that. (laughs)

    11. CK

      For sure, like, all of this, the cost of compute, is trending down. But the other piece is there's a lot of reinvention of the core training. If you think about how many of the datasets and data sources are common between all the foundation models out there, and they're going through the exact same processes, or very similar processes: are there ways to take a common subset and then fine-tune on top of it and avoid the cost? I think we've seen reasonably good results down that path. So I do think that both approach-wise and compute-cost-wise, the cost of training is gonna get lower.

    12. HS

      Okay, so cost of training gets lower. There's also another thing, which is, like, the longevity of models. I had one guest, Emad at Stability, and he said that no models we use today will be used in a year. Do you think that's true?

    13. CK

      Depends on how you define a model. If you say LLaMA and LLaMA 2 are the same model, or the same model family, then yes, all of them will continue to be used for a while. If you pin a specific version, then yeah, for sure there will be new versions of any of those models: Claude, Claude 2, et cetera. So depending on the definition of a model or a model hierarchy, I would say things are gonna change, or you could think it's gonna be stable. But the reality is there will be new models and new refinements on an ongoing basis.

    14. HS

      I agree with you there. Can I ask, when we think about kind of those refinements, companies that are able to transition between models obviously have the most flexibility. I think it was Alex at Nabla, another incredible founder that we had on the show, he said, "The best companies will be able to transition between models at ease, and those that can will win." Do you think that's right in terms of the importance of flexibility to transition between models being a core determinant of success?

    15. CK

      100% agree. I don't know if that's what determines who wins, but I do think that there is so much innovation in the landscape of models that anyone that builds too tightly coupled to a given model is sort of giving up optionality for the future. I just came back from Japan two days ago, and a big recommendation I was making in a large forum was: make sure that you build optionality into which model you're leveraging. Even though it's easier to learn the ins and outs of a single model, I think the optionality matters, especially when there's way too much change and innovation going on.

    16. HS

      Can I ask you, what does it mean to make sure you have optionality? What do you do differently if you're thinking with optionality in mind?

    17. CK

      Well, you'd wanna be able to plug in different models. But it's not just replace the model and use everything else the same. Certain models respond differently to a specific type of prompt. So at some point, you need to have... in the same way that you have a hardware abstraction layer or a cloud abstraction layer, you should have a model abstraction layer that knows how to translate a specific request that your application needs to make to a model, with its intricacies or specific characteristics.
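The model abstraction layer described here can be sketched minimally for illustration. Everything below (class names, prompt formats) is hypothetical, not any vendor's actual API; it only shows the shape of the idea: the application issues one kind of request, and per-model adapters translate it into each model's preferred prompt style.

```python
# Hypothetical sketch of a "model abstraction layer": the application makes
# one kind of request, and per-model adapters handle each model's intricacies
# (prompt format, role tags, etc.). No real vendor API is used here.
from abc import ABC, abstractmethod


class ModelAdapter(ABC):
    """Knows one model's specific characteristics."""

    @abstractmethod
    def build_prompt(self, instruction: str, context: str) -> str:
        ...


class ChatStyleAdapter(ModelAdapter):
    # Chat-tuned models often respond best to role-tagged prompts.
    def build_prompt(self, instruction: str, context: str) -> str:
        return (f"[SYSTEM] Answer using the context.\n"
                f"[CONTEXT] {context}\n[USER] {instruction}")


class CompletionStyleAdapter(ModelAdapter):
    # Plain completion models often prefer a single continuous passage.
    def build_prompt(self, instruction: str, context: str) -> str:
        return f"{context}\n\nQuestion: {instruction}\nAnswer:"


def ask(adapter: ModelAdapter, instruction: str, context: str) -> str:
    # The application layer stays the same when you swap models;
    # only the adapter changes. A real model call would go here.
    return adapter.build_prompt(instruction, context)


chat_prompt = ask(ChatStyleAdapter(), "Summarize Q3 revenue.", "Q3 revenue was $2M.")
completion_prompt = ask(CompletionStyleAdapter(), "Summarize Q3 revenue.", "Q3 revenue was $2M.")
```

Swapping providers then means writing one new adapter rather than rewriting every prompt in the application, which is the optionality discussed above.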

    18. HS

      I, I, I totally get you there. C- can I ask, in terms of, like, you know, the transition between models, implementation is not that easy. You're not just handing over data with ease. What are the biggest challenges to adoption, do you think, for startups and for companies moving forwards when they think about working with new models and this kind of data migration to them?

    19. CK

      There are a lot of issues. Probably the most obvious one is around the correctness and dependability of answers. If you ask folks, one of the key concerns is, "Oh, these models make up stuff." So that is the obvious one. But then there are second-order issues, important but maybe less obvious for people, which is the security and privacy of data, the data that is used for the question. And then there are even more complex questions on the rights to the answers. A Wall Street company asked me, "If we feed a number of portfolio trading strategies into a model and it makes a recommendation, and that recommendation makes money, could anyone have claims on that answer?" And it gets very complicated very quickly.

    20. HS

      Okay. Sorry. I'm writing down notes on my hand. (laughs) That was great. How do we solve the security side? I totally get you. You've got customer data, you've got transaction data. But you do want access to the LLMs. What do enterprises do to get the benefits of access to them without the security issues that come with just sending thousands of customers' data?

    21. CK

      I would say what you're seeing many of us do, Snowflake for sure on the list, is evolve their platforms so that it's easier to bring LLMs to the data, as opposed to sending large data volumes to where the LLMs are. That's why you see Amazon has new services to do this; Microsoft obviously has it with Azure OpenAI. The trend is: create private and secure endpoints that can run close to your data, and by implication, you not only don't have to move and copy a lot of data, but more importantly, there are some assurances on what is done with your data.

    22. HS

      Okay. So, we bring them closer in terms of the endpoints. In terms of the, you mentioned the hedge funds there and, like, the right to the answer, and whether it's proprietary or whether it's spread across 10 hedge funds that also wanted that information, how do you think that plays out? Does anyone have rights? Do you buy rights to answers? Talk to me about that.

    23. CK

      It is complex. I don't know how it plays out. I do think that some of these statements about, um, public web data owners changing licenses or potentially licensing the data gives one path forward. There's been interesting developments in the last week or so. Microsoft saying that they'll stand by customers from a copyright perspective. That was meaningful. There was another one from IBM saying-

    24. HS

      Can I ask, why, why is that meaningful? For, for people listening, why is that meaningful that Microsoft will stand by customers for copyright data?

    25. CK

      Because enterprises are worried about if they incorporate gen AI into any of their products or services in a way that they truly depend on them, and at some point lawsuits start to fly, they're exposed. And because of those concerns, it has held back enterprises. So, Microsoft's statement is, is I would say material in alleviating those concerns that are very real. I talk to people every week, and those are real concerns.

    26. HS

      Do you think they will set a precedent now which for the next year or two while there is ambiguity in confidence from enterprises that actually these large providers must provide a backstop to their enterprise customers to allow them to onboard with confidence?

    27. CK

I think it alleviates a fraction of customers on a category of concerns. I don't think it takes care of all of it. The, the, there are opinions out there on whether copyright law even applies to gen AI. And, and it's a fascinating debate. Like, I'm not deep enough l- legally to, to m- make that type of assertion, but I've heard the debate and it makes sense. So, I don't know how it plays out, but I think standing by folks from a copyright perspective is a step forward.

    28. HS

      Can I ask, are there any other big regulatory intricacies or challenges that you think not enough people are paying attention to?

    29. CK

The use case conversation is difficult. Much has been said, "Oh, don't worry about it. Most of the bad things that you can do with gen AI are already regulated and illegal, so nothing new." But I do think that there are entire categories of, um, of work products where we need to go think through what that means.

    30. HS

Do you not think there's, like, an inherent challenge in terms of the opacity of models? Like, when we think about the regulatory challenges, we can't continue to have such opacity to get to outcomes... Will there not need to be more transparency in models to show people the pathway to answers?

  7. 38:10 - 42:32

    Reflections on Leadership

    1. HS

      of AI?

    2. CK

I would say that for certain use cases, that's entirely true. Uh, like if I have a UI to, I don't know, configure a cluster, like, I don't need to know all the options. I can just specify what I want. There are many use cases where you may want a richer way to interact with data, or with what- whatever the problem space is, that I don't think GenAI would, would dramatically change. It does, it does continue to help with personalization, and we've been on the journey of personalization for a long time. So I would say, yes, for some use cases, uh, it shifts the, the value of UI, but for many others it has the opportunity to continue to enrich them.

    3. HS

      I, I totally get you. Can I ask, what do you think Snowflake's biggest challenge is in terms of embracing and getting your arms around this next wave of innovation around AI? I know it's a continuation given the data-first strategy and mindset, but if there was a challenge or an internal, "Hey, we're gonna solve this," or, "How are we gonna get around this?" What do you think that is?

    4. CK

      Probably perception. Uh, I, I talk to folks on a regular basis, and many of them still think of us as data warehousing. We moved on from, from those or- uh, origins, I don't know, six years ago. And move on is not, not the right word. We, we expanded from those origins six years ago, but I think we need to make sure that organizations across the world understand that they can do AI and GenAI close to the data within Snowflake without having to copy the data to a different platform.

    5. HS

      Why don't they already?

    6. CK

When you're very successful with some positioning, that comes and, and bites you later, that you were too successful with that positioning. And, uh, for many years, we said, "Snowflake is a data warehouse built for the cloud." And that still keeps getting repeated over and over, and it's a journey. If you think about it, most companies end up stuck with their original use case for a long time, and we see a little bit of that here.

    7. HS

Okay, one final one before we move into a quick-fire. I've so enjoyed this, but when we think about your leadership style as the product leader that you are today, how do you think your style of product leadership has changed over the years?

    8. CK

      I've been more willing to push opinions in a slightly more top-down way as more time has gone by. Earlier on, it was I need to be a great manager and listen to everyone and accommodate everyone's opinion, and, and I'm not trying to say that's not important. But in some instances where you want a product to come out with a consistent view as if it came from a single unified set of principles and individuals, sometimes you have to go and push for something like, "Hey, this is what we're doing." I think that confidence comes or evolves over time.

    9. HS

      A really hard one that's just a subsequent one from that. I had Gustav, who's the CPO at Spotify, on the show, and he says, "Talk is cheap, so we should do more of it." How do you think about the balance between internal debate on product and product ideas, iterations, versus just speed of execution and getting it done?

    10. CK

      I think it depends on the nature of the technology or the nature of the product. At Snowflake, we have both types of technology. So, the core subsystem that does, say, clustering of data on disk, I think you wanna design that thing really, really well, measure 100 times and cut once, because nobody wants their data to get corrupted or their results to be wrong if that thing is not built the right way. But if you want the, the UI for a query editor and you have 10 different ways on how you could do suggestions for customers, there's no right and wrong. Might as well go quickly, iterate, learn from, from users. So I would say both are the right tools and the right approaches depending on what you're trying to do.

    11. HS

      Listen, I can spend all day chatting to you. Uh, I wanna move into a quick fire round, so I say a short statement, you give me your immediate thoughts. Does that sound okay?

    12. CK

      Sounds good.

    13. HS

      Okay, so what do others not know that you know to be true?

    14. CK

      I don't know if others don't know, but I for sure know that it always comes down to people.

    15. HS

      In terms of hiring, in terms of customers, in terms of-

    16. CK

      Everything, the, the results, relationships, how things are going. Everything is just people.

    17. HS

      Can one succeed as a PM today without being deeply technical?

    18. CK

      For the most part, no. There may be

  8. 42:32 - 46:36

    Quick-Fire Round

    1. CK

      a few types of products that you might get by, but I, I like deep technical PMs.

    2. HS

      What's your biggest piece of advice to a PM starting a new role today?

    3. CK

      Learn the product that you're a PM of. Go be a user. Go as deep as you can, know the technology.

    4. HS

      Do all founders need to be in the Valley who are innovating in AI?

    5. CK

      Absolutely not.

    6. HS

      What makes you say that?

    7. CK

Well, there, there is amazing talent throughout the world. And e- even though the Valley has something special from the community and the ability to bounce ideas off w- one another, it's very clear by now there is a lot of innovation happening elsewhere.

    8. HS

      What's the best product decision you've made, and how did you learn from it?

    9. CK

      Focusing Snowflake on ease of use.

    10. HS

What did you learn from that focus on ease of use?

    11. CK

It, it, it is something a little bit counterintuitive there. You may put out a product faster to the market if you just say, "I don't know if it should be used this way or that way," and simply surface the choices to users. And sometimes it takes longer for us to make the choice automatic and simplify it for our customers. So it may be counterintuitive, but faster is not necessarily better if simpler is what is being traded off.

    12. HS

      What is the single biggest element that you'd most like to change about the AI community?

    13. CK

      I will double down on they need to know that Snowflake is a great platform for AI.

    14. HS

      (laughs) Listen, always be selling, baby. Uh, tell me, Quentin Clark said, "Were you right about Satya in the beginning?"

    15. CK

      I was super wrong. When Satya came into the enterprise business, this is before he was CEO, he came and in very short order made a lot of really difficult decisions, how the org was structured, uh, how contractors were hired, how we thought about the cloud versus the on-premises products. And at the time, my thinking was like, "I don't think that he understands all of this." And obviously with the benefit of hindsight, or actually shortly after, it was very clear he understood it better than all of us, and he's proven to be a brilliant leader.

    16. HS

      What do you think makes Satya such a brilliant leader?

    17. CK

      He is very clear on what the needed outcome or desired outcome is, and so he's a clear thinker would be the attribute. And then he can relentlessly drive towards it and not get i- i- encumbered by all the hundred reasons that we all make up for ourselves.

    18. HS

I love Frank Slootman. Okay? I love him 'cause he's no bullshit. He says how it is in a world where no leader says how it is. What have been your biggest lessons from working with Frank?

    19. CK

      He is also such a clear thinker. He, he has that commonality w- with, with Satya, that he becomes a clarifying, uh, force. Oftentimes, if you're just picturing yourself telling Frank about a problem and a couple of options, just in the formulation it becomes very obvious that you don't even need to get his opinion 'cause you know where he's gonna stand. So that, that type of clarity that he has simplifies decision-making, accelerates decision-making. Uh, he's also a- an amazing, amazing leader.

    20. HS

      I, I love that. And I, I always, you know, love his, um, you know, uh, emphasis on focus. I think it's the, the most important lesson I've learned from him. Final one for you, my friend. Next 10 years, what role does AI play in society then?

    21. CK

      Productivity boost on pretty much everything we do. Uh, everything is gonna be simpler, easier, faster.

    22. HS

      Wh- what impact does that have on GDP? Is it like 2%? Is it like 10%?

    23. CK

      Well, I don't, I don't know the, the, the magnitude, but for sure net positive. (laughs)

    24. HS

      Should we have a bet?

    25. CK

      Sounds good to me. (phone chimes)

    26. HS

      What do you wanna bet on? Would you do 10 or two?

    27. CK

      Two.

    28. HS

      Yeah, I would too. Yeah. Bugger. Well, listen, I'll buy you dinner in London next time you're here to thank you for this anyway.

    29. CK

      Sounds wonderful.

    30. HS

Great. Christian, this has been fantastic. You are a star. Thank you so much for joining me today.

Episode duration: 46:36

Transcript of episode wzNiEpCxx9g
