The Twenty Minute VC

Kevin Scott, CTO @ Microsoft: An Evaluation of Deepseek and How We Underestimate the Chinese

Kevin Scott is the CTO of Microsoft, where he leads the company’s AI and technology strategy at global scale and played a pivotal role in Microsoft’s partnership with OpenAI. Prior to Microsoft, Kevin spent six years at LinkedIn as SVP of Engineering. Kevin has also enjoyed advisory positions with Pinterest, Box, Code.org and more.

In Today’s Episode We Discuss:

00:00 Intro
01:08 Where is Enduring Value in a World of AI
08:13 Why Scaling Laws are BS
10:00 What is the Bottleneck Today: Data, Compute or Algorithms
13:21 In 10 Years Time: What % of Data Usage will be Synthetic
18:59 How Will AI Agents Evolve Over the Next Five Years
30:15 The Future of Software Development
35:05 The Thing That Most Excites Me in AI is Tech Debt
39:01 Quick-Fire Round
41:27 Leadership Lessons from Satya Nadella
42:36 DeepSeek Evolution: Do We Underestimate China

Subscribe on Spotify: https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465
Follow Harry Stebbings on X: https://twitter.com/HarryStebbings
Follow Kevin Scott on X: https://twitter.com/kevin_scott
Follow 20VC on Instagram: https://www.instagram.com/20vchq
Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok
Visit our Website: https://www.20vc.com
Subscribe to our Newsletter: https://www.thetwentyminutevc.com/contact

Kevin Scott (guest) · Harry Stebbings (host)
Mar 31, 2025 · 47m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00-1:08

    Intro

    1. KS

      This is the best time to be alive if you have an entrepreneurial spirit. I can very clearly see what we're doing now, and, like, what we're doing next, and I don't see the limit to the scaling laws. Don't believe in this, like, one agent for everything sort of theory. I think you'll have a lot of agents, and, uh, the reason I think you're gonna have a lot of agents is because your product managers are probably going to have to be domain experts. The agents, they will definitely be less transactional, less session oriented going forward.

    2. HS

      Ready to go? (instrumental music plays) Kevin, I am so excited for this. I was just telling you, I was listening to you and Shrep on my run. I, I don't think I've ever run quite as fast, which clearly means the conversation was brilliant, um, and I need to listen to all of the shows. But thank you so much for joining me.

    3. KS

      Well, either brilliant or awful, uh, in that (laughs) you're trying to end your run so you can be done with it. (laughs)

    4. HS

      I've never done a 10K so fast. Um, I, I wanted to start with a super easy question, which

  2. 1:08-8:13

    Where is Enduring Value in a World of AI

    1. HS

      is, my job as a venture investor is to try and determine where value lies in different given moments. And I look at the world today, and for the first time in quite a long time, Kevin, I don't know. (laughs) And my question to you is, in this next generation of AI, where does value lie sustainably, do you think?

    2. KS

      Yeah, so I, I think the thing that you just described, which is, like, all of a sudden things have gotten a little less clear than they had been, is exactly the thing that happens at the beginning of every big technological paradigm shift and every new cycle that's driven by it. So it was super confusing in the early days of the internet, and I think it was super confusing in the early days of mobile, where everybody, you know, had these ideas about what was gonna be valuable, and very few of those ideas were actually the durable ones that proved all the way through.

    3. HS

      In, in those moments of transition where there is this confusion, what have you learnt is the right action to do? Is it to be active, to iterate and learn, but you'll make so many mistakes that you regret, or should you sit on your hands and watch others make those mistakes?

    4. KS

      Oh, God, no. Like, you definitely shouldn't do the latter. Um, so, like, this is the best time to be alive if you have an entrepreneurial spirit. Um, and, like, the thing, the thing I think that you have to do in these moments is not forget the things that you've learned from the past moments about what works. And it's not like, okay, well, like, it's do this specific thing, but it's, like, how you go about doing, uh, you know, that exploration that you just described. Which is, you know, product matters. Uh, you know, I've, I've been saying this for the past couple of years, that models aren't products, uh, because everybody, like, was just so fascinated by the infrastructure itself, and, you know, like, "Oh, we..." A- and, and, like, this is also a characteristic of the beginning of these cycles, is you have technical people who get just swept up in the technical bits, and they kind of forget that the only thing that really matters is making good product. And, like, that's where we're at right now. Like, you have to make good product, and, um, you know, you have to have ideas and have conviction, and then you have to go get stuff done really fast, uh, so that you can see whether you're full of crap or not about the conviction that you have. And y- you're not... You have very few patterns at the beginning of a cycle to go snap to. Like, you're not looking at someone else's success and saying, "Okay, well, like, I'm gonna do that, but just a little bit better." Like, you're trying to figure out something completely new, and the only way to figure that out is, like, you gotta launch stuff, and gather data, and iterate, and, you know, be super, super brutal with your own self about what you're seeing. Like, you can't love your idea so much that you overlook what it is you're seeing about the data and the feedback that you're getting.

    5. HS

      You said that models aren't products. You know, I just had Andrew from Cerebras on the show, and I asked him this question. I said, "If we think about compute, uh, or kind of hardware, and then we think about models, and we think about apps, where does the value lie?" And naturally, he said, "Compute." Um, but when you think about that kind of three-pronged tier of value, and you said that models aren't products. If they're not products, does that mean they're not valuable?

    6. KS

      No, no, they're super valuable, but they're only valuable e- to the extent that you can connect them to things that users need via product. So, like, in the limit, I think product is the most important thing. Now, there, i- if, if you build good models, and you build good infrastructure around models, and you have good, efficient compute, and, like, you have all of these other things, uh, you're going to get lots of ability to monetize all of those things. Because as people build those products, like, they will need to consume your platform and your infrastructure, and, like, all of that's good. Uh, but most of the value has to be in the products. Like, you know, we don't build infrastructure just for the sake of infrastructure. Uh, like, we build infrastructure so people can make product.

    7. HS

      This is a leading question. Then again, if you think about... And, I think you might know, but if you think about those products, who benefits most? Is it startups who are able to integrate new technologies very easily from the bottoms up, starting from nothing? Or is it Microsoft integrating AI into incredible distribution already, Google doing the same? Who benefits most in that respect?

    8. KS

      Again, if you look to past cycles, like, you've got a pretty good mix of where value gets created across, uh, startups and new ventures and existing enterprises. And so, I think everybody's kinda doing the same thing. Like, you're trying to discover the new. If you are...If you're a big company like Microsoft, uh, with a long tradition and a bunch of successful things already in the market, like, the thing that you are trying to do is figure out what are the things that you already know super well, and, like, which of the customers that you're already serving super well can you do for them with this new set of capabilities that you, uh, you know, that you can provide. And, you know, hopefully, you know, like, I run, among other things, Microsoft Research, and so, like, I, uh, I also have a charter of, like, you know, hey, can we go sh- try to shine flashlights in places that no one else has shown them before, and, like, try to discover some, like, super disruptive brand new things? But, like, that's kind of the job of s- the startup ecosystem as well. And I, I, I also am an angel investor, and, like, I buy startups, and I've worked at startups. And, you know, so I, I think it's just really important that you've got lots of people hunting for those interesting new things. And, like, I, and I, I have super high conviction on that in the AI platform transition that we're going through right now, because, uh, it's impossible for any entity, like Microsoft or any other big company, to have enough imagination and enough perspective to know what every interesting thing is. Uh, so having just this vibrant ecosystem, lots and lots of people sort of exploring, you know, where value exists, I think, is incredibly exciting and necessary. And I also think that there has never been a moment where the tools, the infrastructure, and the platforms are as cheap and accessible and, uh, available and easy to use as they are right now. 
So, it's just super easy to, like, pick stuff up and just go get

  3. 8:13-10:00

    Why Scaling Laws are BS

    1. KS

      cracking.

    2. HS

      I was listening to your show, as I mentioned earlier, and y- you push back on the idea of, uh, reaching the limits of scaling laws and this kind of asymptote of, uh, efficiency or effectiveness. When many people suggest that we are hitting those limits soon, first, why do you think we're not, and that it's a ridiculous statement?

    3. KS

      I can very clearly see what we're doing now and, like, what we're doing next, and I don't see the limit to the scaling laws. Like, if you're just sort of thinking about the raw capability of the models and, like, how well you can condition them to reason over increasingly complicated things, like... Uh, uh, and I'm sure, like, there, there, at some point will be, maybe, a limit. Like, I, I intuitively feel like there mu- uh, there must be. There's some people who don't believe that there's a limit, that, you know, if you... The limit that human beings have on intelligence is, like, you've got so many neurons, like, packed into your skull, and, like, you've got about a 20-watt power envelope, and, like, that's the limit. And some people believe that, you know, if you're, you h- you have AIs, that, like, there, there is no such limit and, like, things, you know, will continue to scale into, like, you know, weird territory. I, I don't really... Like, that I don't necessarily believe. I believe we will get to some point where we'll hit a scaling asymptote and, you know, like, there'll just be diminishing marginal returns, and, like, it's so expensive that we will decide it's not worth spending that next dollar to, like, make this thing one unit smarter, because we haven't, you know, figured out how that translates into something that's useful for, uh, like, the, the people who are using the tool. Um, I think that point will come. I don't, I just don't see it yet. Like, it's not, it's not not in the, not in the viewfinder right

  4. 10:00-13:21

    What is the Bottleneck Today: Data, Compute or Algorithms

    1. KS

      now. (laughs)

    2. HS

      When we think about, like, the three core elements that, that make up kind of efficiency in this respect, it's kind of data, compute, and algorithms. When we d- d- drill into data, what are your biggest observations on data efficiency, the importance of quality of data versus quantity, synthetic versus human? How do you think about that today?

    3. KS

      Yeah, I mean, like, the, the mix of synthetic data is going up. High-quality data's becoming, uh, much more useful, uh, in, you know, especially in the post-training parts of the, um, model production pipeline than low-quality data. So, like, I, I, I think we're clearly at the point now where, you know, if you have, if you have the right infrastructure, and you have super high-quality data and super high-quality expert human feedback, you can amplify that into, like, the, the right set of tokens for, uh, training bigger and bigger models. And, like, that stuff is way more valuable than just, you know, sort of the undifferentiated tokens that are, you know, like, floating around on, on the web.

    4. HS

      What questions do we not know about data and its usage that we would like to know and would be most helpful to know?

    5. KS

      Uh, there's sort of a super interesting thing right now that, um, we don't have, um, around assessment. So, it's very hard to know, uh, like, quantitatively what the incremental value of a token of data is to the quality of a model that uses it in its training. So, in other words, like, if you're sort of asking, like, "Okay, well, I think my data is super valuable, um, um, and, like, if this gets used in a model, it's gonna make the model better." Uh, like most of those assertions that people make are, uh, well, like, most of the assertions that people make are just unfounded in any kind of science. Like, no, no one's got a... And, like, the measurements we do have show that there's a pretty big disconnect around what some people think valuable data is versus, like, how valuable it actually is to producing, uh, capabilities and models that are legitimately useful. And, uh, and, and most of what's legitimately useful is, like, models, you... People who think of models as repositories of factual information, and they're treating them like the world's worst and most expensive databases, like, it's not-... not super useful. Like, we've got search indices and, like, databases, and those things are, like, plenty good enough for retrieving, retrieving information. Uh, like, what you want models to be able to do is to be able to reason over information. So, like, if, if you give them access to information, like, how well are they able to reason over a set of information to go do something that's useful for you? Um, and so you just need different tokens for training a model to make them good at reasoning than you do at making them, you know, recallers of facts.
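The split Kevin draws here, search indices for recall and models for reasoning, is essentially a retrieve-then-reason pattern. A minimal sketch of that pattern, where `search_index` and `build_reasoning_prompt` are toy stand-ins rather than any real API:

```python
# Sketch of "retrieve, then reason": the index stores the facts; the model
# is asked only to reason over what was retrieved, not to recall facts.
# `search_index` and `build_reasoning_prompt` are hypothetical toy helpers.

def search_index(query: str, docs: dict[str, str]) -> list[str]:
    """Toy keyword retrieval standing in for a real search index."""
    terms = query.lower().split()
    return [text for text in docs.values()
            if any(t in text.lower() for t in terms)]

def build_reasoning_prompt(question: str, retrieved: list[str]) -> str:
    """The model reasons over supplied context instead of recalling facts."""
    context = "\n".join(f"- {fact}" for fact in retrieved)
    return (f"Using only the facts below, answer the question.\n"
            f"Facts:\n{context}\nQuestion: {question}")

docs = {
    "a": "Contoso's Q3 revenue was $12M, up 20% from Q2.",
    "b": "Contoso's Q2 revenue was $10M.",
}
prompt = build_reasoning_prompt("Did revenue grow quarter over quarter?",
                                search_index("revenue", docs))
# `prompt` would then be sent to a model, whose job is the reasoning step.
```

The point of the shape: the database-like recall lives outside the model, so the model's training data only needs to make it a good reasoner over whatever context it is handed.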

    6. HS

      It's so funny you said about reasoning there, because it just made me f- think about kind of inference. I get really annoyed by the word inference. I wish we could just, like, delete it and just call it usage. And it's like, it's usage. There's training and there's usage. And then-

    7. KS

      Yeah.

  5. 13:21-18:59

    In 10 Years Time: What % of Data Usage will be Synthetic

    2. HS

      And, and my, my question to you is, you've been very clear about the transition of emphasis, importance from training, that which we had over the last few years, to inference.

    3. KS

      Yep.

    4. HS

      What are we not talking or seeing in inference that we need to spend or think more about?

    5. KS

      You know, I think the thing that most people miss, uh, although the, um, DeepSeek R1 launch, uh, you know, a few weeks back kind of clued everybody into it, is that we just have an incredible track record over the past handful of years. And it's many years now of, like, just repeated year over year, like, mind-boggling progress in optimizing the performance of models so that performance of inferences is just better and better and better. So, like, o- over time, the models have gotten bigger and the API calls have gotten cheaper. Uh, and like a little bit of that is because you get, you know, maybe like a 2X benefit price/performance from hardware every generation, like, if you're lucky. Um, but you get a much bigger improvement, uh, price/performance-wise from all of the things that you're doing in the software stack. You know, a- again there, there's just a ton of work happening there. Um, you know, the DeepSeek R1 stuff, uh, which, which was good work, uh, is, you know, the way you should think about it is it's like a point on a line of, uh, price/performance improvement that maybe was invisible to everyone else but, like, not invisible to the people who are like, you know, neck deep in optimizing these systems. And it's not the last point. Like, it, you know, it, it just marches on. (laughs)
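As a back-of-envelope illustration of how those gains compound: the roughly 2x-per-generation hardware figure comes from the conversation above, while the 4x software-stack figure is an assumed number purely for illustration.

```python
# Back-of-envelope compounding of inference price/performance.
# The ~2x hardware gain per generation is mentioned above; the 4x
# software-stack gain is an assumed illustrative number, not a claim.

hardware_gain_per_gen = 2.0
software_gain_per_gen = 4.0

def cost_per_token(initial_cost: float, generations: int) -> float:
    """Cost after N generations if both gains compound multiplicatively."""
    total_gain = (hardware_gain_per_gen * software_gain_per_gen) ** generations
    return initial_cost / total_gain

# Starting at $10 per million tokens, after 3 generations:
print(cost_per_token(10.0, 3))  # 10 / 8**3 = 10 / 512 ≈ 0.0195
```

Under these assumed numbers, software improvements dominate: they contribute a 64x reduction over three generations versus 8x from hardware, which is the shape of the "point on a line" argument about DeepSeek R1.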

    6. HS

      What was the internal sentiment towards that when it came out? And to what extent do you think-

    7. KS

      I was s- I was surprised at h- what the public reaction was. We've had models more interesting than DeepSeek R1 that, like, we didn't even... we chose not to even launch them. It's like, I was surprised at how interesting people thought that it was. They did good work. So like, don't, don't get me wrong. Like, it was, it was good, solid technical work and, and like, it was super cool. They chose to, you know, release this thing and make it, you know, open source-ish. Uh, and, you know, and it's like really interesting like seeing, you know, how the, how the public reacted to what they did. Um...

    8. HS

      Is there anything that you learned from how the public reacted in the release that you take with you to your releases?

    9. KS

      Even when you ha- you've made it as easy and cheap as humanly possible for folks to go do something, uh, like, they still have super strong preferences about the how. We're, we're paying very close attention to that. You know, we have to give people more, more how than we have been, uh, doing it, because like, I, I think developers want lots and lots of choice.

    10. HS

      What did you believe that you now no longer believe? Or what have you changed your mind on in the last 12 to 24 months?

    11. KS

      When I was a graduate student, I was like a complete open source zealot. You know, and as I've gotten older, like, I, I sort of have become a lot more pragmatic and it's like, okay, well, like, it, it's probably more important for me to make a set of pragmatic decisions about how it is I'm going to go build these things rather than singly optimizing for my curiosity.

    12. HS

      When you look forward at the next three to five years, three to 10 years, how do you think about the pervasiveness of open versus closed and which will be more dominant than the other?

    13. KS

      I, like, I think there's gonna be lots of both. Um, and I, I think, you know, part of it is... I, like, let's just, like, forget about AI which is sort of the controversial thing at the moment. It's like a thing where, you know, indu- in- industry structure hasn't settled yet and like, we don't know exactly what it's gonna be. But you just sort of pick your previous, uh, things, like search for instance. Like, there's a whole bunch of open source, uh, search engine projects out there. Um, and people who want to do search to like, have a search feature in their application or who want to go build a search engine themselves have lots of options. Like, they can go grab something open source as a starting point. They can stand up a product. Like, they can go, you know, they can go load their data into something like Azure Cognitive Search, uh, which is a search as a service platform. Like, Google has one, Amazon has one. Like, they're, they're readily available. And then you still have search engines like, uh, Bing and Google, um, that are out there. And so like, they all exist. Um, all of the economics, uh, in search go to, like, somebody who stood up, uh, gigantic infrastructure and who's sort of running like a whole search business, uh, like with its own feedback loop. Um, you know, and so I think, you know, we're probably gonna have similar sorts of things happening here. Um, for the infrastructure layer, you're going to, you're gonna have lots of open source infrastructure products and people are going to use them in lots of different ways. But like, you're also gonna have a lot of people who don't want to have to... go stand up their own infrastructure from scratch or to, like, take an open-source project and go build it, you know, out where it's lacking for the things that they need it to do. And it... And, like, it's, it's good to live in a universe where you have both of those things.

  6. 18:59-30:15

    How Will AI Agents Evolve Over the Next Five Years

    2. HS

      You mentioned earlier the centrality of product. (laughs)

    3. KS

      Yep.

    4. HS

      Um, kind of taking that into account with the current conversation here, how do we think about whether chat is the right UI for the next kind of paradigm of this product realm? You know, OpenAI and ChatGPT has made it the default. To what extent do you think it is the right default and how we will see that change?

    5. KS

      Look, I think it's a, you know, reasonable step in the right direction. Like, the thing I've been s- saying for a few years now is, uh... Like, I think one of the most interesting things happening with AI is we- we've had one paradigm for using computing devices for effectively 200 years, since, uh, Ada Lovelace wrote the first program. So, if you want a computing device to go do something for you, you had to be a programmer yourself, which is a pretty, you know, high barrier to entry for a lot of people, or you have to rely on the fact that a programmer has anticipated some need that you might have and, like, packaged up, uh, like, a piece of software into an application that th- you are able to run. And those are the two ways you can get a computing device to do something for you, uh, until now. And so, like, the, the thing that changes with AI is it can understand a thing that you want your computing device to go do for you, uh, and it can figure out a way to go make that thing happen, and you don't have to be a programmer. And, you know, it's, it's kind of a profound change because, like, it basically means... And, and, like, I don't think this is next year, but it's probably not gonna be 10 years, um, this whole notion that the- that, that you have teams of people who are... Whose, whose job is to go anticipate a bunch of very granular user needs in some narrow space, and then they're gonna go write a bunch of code and then figure out how to hang that code onto some user experience. And they hope that they've done a good enough job, and they've anticipated the needs in the right way, and they've designed the user interface in the right way, and, you know, they've gotten all of the code right. You know, and, and, you know, they just sort of grind away on figuring out what that feedback loop is. That's gonna change. Like, you just aren't gonna need as much of that anymore. What you're gonna need instead... 
A- and, like, it doesn't mean that that completely goes away. Like, you, you will still need all of the capabilities that these applications provide, but you're probably going to want some kind of agent actuating those capabilities on your behalf rather than you having to, you know, do this weird impedance-matching that we've got right now. Between, like, how a user has a set of expectations and how, uh, you know, a, a product team has imagined what those expectations are.

    6. HS

      Is there a role in engineering or product teams that we have today which you're like, "In 20 years' time, people will look at and go, 'What? You had secretaries who typed out, you know, voice-recorded notes from a doctor? What?"'

    7. KS

      The role of engineers are... You're, you're still gonna have to have people who build capability infrastructure. So, you know, make this thing happen in the real world. Like, um, you know, provide access to, like, this, you know, weirdly situated repository of information. Like, you know, d- It's, like, just a bunch of capability things that people will need to build. But, like, the user interface that surfaces those capabilities will probably be agents. Uh, and, you know, product managers, like, I, I, I don't als- don't believe in this, like, one-agent-for-everything sort of theory. I think you'll have a lot of agents, and I... The reason I think you're gonna have a lot of agents is because, um, your product managers are probably going to have to be domain experts, like people who sort of deeply understand something like medicine or, you know, drug discovery or early-round venture investing, or, you know, like... You know, just sort of pick your thing. Uh, and, like, you know, they will have to deeply understand the idiosyncrasies of that, that, and they will have to, like, help set up the feedback loops that help agents that are, like, assisting people doing those tasks, like, better and better do their job. And so, like, a little bit like the combination of the product manager and the users of the agents teaching the agents how to be better and better at the things that you're trying to get them to assist you with.

    8. HS

      I often think that we overestimate adoption in, y- a year or the short term-

    9. KS

      Oh, I agree.

    10. HS

      ... and underestimate in the long term. When I look at the hype around agents, I share the excitement-

    11. KS

      Yeah.

    12. HS

      ... but I question the immediate adoption or the expectation that some of the world's largest companies will be using agents in the next year or three years even. To what extent do you think I'm right? Or, to what extent do you think actually this wave is different, given the distribution of someone like Microsoft?

    13. KS

      I think usage always follows utility. So, like, you make useful things, like, they get used a lot. And so, like, clearly, with software development agents, like, we're getting a lot of adoption right now. So, like, it... You know, we've gone very quickly from, you know, developers being skeptical about these tools to, like, "You will get this from my, you know, cold, dying fingers." Like a... This is, like... I, I think of this as, like, one of the most essential tools in my toolkit, and I'm... I will never give it up.

    14. HS

      (laughs)

    15. KS

      And, you know, the agents are, you know, becoming more and more powerful. And, like, I, I even see...

    16. HS

      Can I ask you-

    17. KS

      Uh-

    18. HS

      ... to what extent is there lock-in there? You know, when I look at them and when I speak to, to people about them, you're right, there's user love, but everyone says, "Oh, but there's no lock-in. I'd happily switch to the next person tomorrow." To what extent does that mean it's valuable?

    19. KS

      Well, there's no lock-in in search. Like, uh, like you can send... You can send your next query to a different search engine than the one you're using right now, um, and yet you don't.

    20. HS

      And so it's ground-

    21. KS

      So... And, and the re- and the, and the reason that's true is, like, uh, it is our job building these agents to, like, grind and grind and grind and, like, go every day, try to make the agent better and better and better, and to do more and more and more of value for our users. Uh, and, you know, they will... if you do that and you do it well, like, they will continue to choose you.

    22. HS

      Can I ask, when you think about a five-year time horizon, what will the interaction model look like between humans and agents?

    23. KS

      I think the thing that's missing right now with our agents is, like, they, they, they are conspicuously missing memory, which makes them awfully transactional. Um, and even in the places where agents have memory, it's, like, a pretty limited form of memory. And so, like, I think one of the things that's gonna happen, because I know lots of people are working on it right now, is memory is gonna get a lot better over the next, uh, year or so. Which means that as you're using an agent, like, and it remembers more and more about your past interactions with it, it will be able to conform itself more and more to your preferences. And, you know, it, it will be able to do things that we do very naturally, which is, like, you solve a problem once and you record the solution to a problem, and then you don't go and solve it from first principles over and over and over again. Um, you know, so memory even gives you the ability with these agents to have some kind of abstraction and compositionality, where, you know, that you can just sort of build up, uh, like more and more powerful ways of doing things inside of the agent over time, because it's, you know, remembering the past things that it's done and learned. So I, I think, I think the agents definitely... I mean, for sure, like, this is gonna be true, like, they will definitely be less transactional, less session-oriented, uh, going forward. I hope we get more asynchronous things happening over the next 12 months, which mean- Like, right now, you know, it's very interactive. Like, you go to your agent and, like, you send a prompt in and it goes and does something immediately and, like, gives you the response back, and it's like, "Yep, I've done it." And so, you know, I think there's gonna be more over the next year of you sort of dispatching your agent to go do something, and it, like, goes and works, uh, while you are not paying attention to it. 
'Cause the thing you want with agents, by the way, like this is, you know, you, we should just never lose the plot on where we're going. So, you know, the first generation of agents are good at five-second tasks, uh, and then the generation after that were good at five-minute task. Uh, and, you know, what we're going towards are things that you can delegate increasingly complicated tasks and increasingly, you know, beefy work to over time, uh, like the same way that you would to a coworker. And in order to do that... I mean, so this is how I kind of think about the future. Like, that's what everybody's going to want and that's where the capabilities are headed. And so, you know, how do you think about how to build product around where the future is almost certainly going to be? And, like, what do you need to go augment these systems with to allow them to do more of this thing that is, like, what ultimately I think we, we want? Like, you don't want the, uh, thing that's just a good email summarizer, right? Like, you know, you want, you want something that you can sort of tell it, you know, "I get up every morning at 5:00 AM. Like, please, uh, you know, uh, at 5:00 AM every morning, like, digest all of the email that came in overnight, uh, you know, draft, uh, responses to anything urgent and, uh, like, show 'em to me while I drink my coffee." Like, that ought to be an entirely possible, doable thing.
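The "solve a problem once, record the solution, reuse it" behavior described above is, in its simplest form, memoization keyed on the task. A toy sketch, where the solver and the in-memory store are illustrative stand-ins for a real agent framework:

```python
# Toy sketch of agent "memory": solutions to past tasks are recorded and
# reused instead of being re-derived from first principles each time.
# The solver and in-memory dict are illustrative, not a real framework.

class AgentMemory:
    def __init__(self):
        self._solutions: dict[str, str] = {}
        self.solve_calls = 0  # counts how often we solved from scratch

    def _solve_from_scratch(self, task: str) -> str:
        self.solve_calls += 1
        return f"plan for: {task}"  # stand-in for real reasoning work

    def handle(self, task: str) -> str:
        # Remembered tasks are answered from memory; new ones are solved
        # once and the solution recorded for next time.
        if task not in self._solutions:
            self._solutions[task] = self._solve_from_scratch(task)
        return self._solutions[task]

agent = AgentMemory()
agent.handle("summarize overnight email")
agent.handle("summarize overnight email")  # served from memory
print(agent.solve_calls)  # 1
```

Composing remembered solutions into bigger ones is what gives the abstraction and compositionality mentioned above; a real system would also need the memory to generalize across similar-but-not-identical tasks, which exact-match keying like this does not.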

    24. HS

      Is there anything that many people think about agents that you often hear that you think is wrong?

    25. KS

      I often think that skeptics, uh, about, like, "Oh, this is, like, hard or impossible," um, like, I, I think they're probably wrong. Uh, but, like, I'm not unique in thinking they're wrong. Like, you, you just... there are plenty of optimists out there who think that, you know, the technology is gonna get more capable. And, you know, it's, like, fine to have skeptics. Uh, you know, I don't know what skin in, in the game they have. I have a colleague who wrote this, uh, you know, book called There Is No Prize for Pessimism, and there really isn't. Like, I, I don't know what prize you win for being skeptical about something, uh, if you're not going to go do something about it. Uh-

    26. HS

      (laughs) Uh, well, I, I think it's John or Patrick Collison, I'm never quite sure which one, but one of the Collisons, they said, uh, "Pessimists are right and optimists make money."

    27. KS

      (laughs) Yeah, uh, th- they're, they are not wrong. Um-

    28. HS

      Uh,

  7. 30:1535:05

    The Future of Software Development

    1. HS

      w- we mentioned that, you know, software development being one of the most, um, widely adopted usage mechanisms that we're seeing today. Uh, when we look forward five years, what percent of net new code do you think will be AI-created versus human-created?

    2. KS

      95% is gonna be AI-generated. I think very little is gonna be line-by-line human-written code. Now, that doesn't mean that the AI is doing the software engineering job. And so I think the i- the more important and interesting part of authorship is still gonna be entirely human. Um, like humans are gonna-

    3. HS

      What does authorship mean in a world where you're not the input master?... you're a prompt master?

    4. KS

      Well, so, look, it's just ra- it's just raising the level of abstraction. So, like, i- i- i- are, are you a programmer?

    5. HS

      No.

    6. KS

      Well, so look, we, we've, we've just sort of accepted, uh, before AI, uh, over the past 35, 40 years, and like, it was mostly happening, like, uh, when I was a kid. Like, I, I wrote my first program when I was 12. I'm 52 years old, so like, I've been doing this for 41 years. Already by the time I was 12 years old, which was like in the '80s, uh, you were mostly writing your code in a high-level language. But like, the thing that runs on the machine is not high-level language. It's not even assembly language. It's like, you know, some machine encoding of assembly language instructions that, like, run on the wire. And like, nobody, like, nobody bemoans the fact that like, I'm not ... I mean, there, there was a period where this was true. Like, you know, there was i- in the transition from assembly language programming to high-level language programming, like there were some old farts who would say like, "You're not a real programmer if you don't know how to write, uh, in assembly language," and that's the only real coding. And, um, you know, like the, the way to do things the right way. Uh, nobody talks about that anymore. I think this is going to be a similar sort of distinction, uh, i- in the same way that like, GUI builders and things that have been around for 20 years, like, you know, like when you, you're designing an iPhone application in Xcode, like, you don't write all of the code. Like, you sort of drag a whole bunch of y- you know, user experience elements around on the screen. Like, you know, and the, the system is just emitting a crap ton of boilerplate code for you. Thi- this is, like, in my mind, just this- the same, the same trend. Like, we're raising the level of abstraction. Like, we are changing the interface that the programmers use to, uh, communicate to the machine that here's a problem that needs to be solved. 
Um, and I, I think probably also, like, that, you know, the, the, one of the things that's true, like, the extraordinarily good programmers right now, even when they're using tools that are at a very high level of abstraction, they kind- they understand like, all the way down. So, like, if something's broken, like, you can go into the machine code. Like, you can go look at the boilerplate that you're, you know, your dev environment is generating. You muck around and figure out what's going on. Like, the same will almost certainly be true when you've got mostly AI-generated code that, you know, the very best programmers are gonna be able to say, "Okay, well, you know, the thing emitted this but like, you know, something's off. Like, let me go spelunk down into the lower levels of abstraction."

    7. HS

      Is everyone a programmer in a world where you have a Bolt or a Lovable, (laughs) which allows you to create simple websites, but websites-

    8. KS

      Yeah.

    9. HS

      ... with pure prompts?

    10. KS

      Yeah. I think so. But, you know, it, it also doesn't mean everybody is, uh, solving the same sorts of programming challenges. So, uh, so again, you know, if like, if you think about this as sort of raising everyone's level, so it makes everybody a programmer in that like, you no longer, uh, you no longer have to go get someone to make a website for you. Uh, but if you are trying to solve the word- well, the world's hardest computational problems, like, you still, I think you're gonna need computer scientists. Uh, and like, they are going to use these tools, like, insanely well to go solve problems that were just harder than they could solve before.

    11. HS

      Will the structure of engineering teams be fundamentally different in the future?

    12. KS

      Yeah, I think so. Um, but, you know, maybe not in the ways that people think. Um, yeah, I'm, I'm guessing that, and I'm hoping, that it will get easier for small teams to go do big things. Like, the reason that's important is I think small teams are just faster than big teams are. You can do a lot with like, 10 really great, super motivated engineers, uh, you know, with

  8. 35:0539:01

    The Thing That Most Excites Me in AI is Tech Debt

    1. KS

      really powerful tools.

    2. HS

      What would you most like to do, but because of scale, decision-making, whatever it is, you're not able to do?

    3. KS

      Scale is usually tough for two reasons in a technology company, but it does mean that sometimes you are slower than you would like to be. Um, and sometimes slow is necessary, uh, but it's like, sometimes slow is like, a side effect of big.

    4. HS

      Where have you been slow where you would like to be fast?

    5. KS

      Oh, I, like, I've, like, I wanna be fast all the time. Like, I want more product happening. I want ... I mean, like, there, there are things, there are things that can't go faster than they go because, like, laws of physics are attached to them. Like, we, we have, uh, we have been running 1,000 miles an hour building infrastructure over the past, uh, you know, two and a half years since GPT-4, and like, we, we are literally going as fast as is possible to go. Um, and it's still like, you know, you just sort of wish you could change, you know, the rate at which concrete can be poured and like, you know, power grids can be augmented and, you know, all of this sort of stuff. Like, I wish it could go a little bit faster. Uh, but what I would love to be able to do in an ideal world at Microsoft and everywhere else is like, I, I don't want there to be any space between an engineer's ambition for, uh, what they wanna do and like, a good idea they wanna try, and their ability to go try it. Um, and so, like, a lot of our internal use of AI right now is to try to figure out like, how to go enable that for all of our people at Microsoft. And like, you know, there's another thing too, like, i- if you've ever managed, uh...... any size engineering team, like one of the nastiest problems that you have, uh, that's very zero sum traditionally is, uh, like accumulation of tech debt. Like you almost always, like, uh, y- at some point, you're gonna be confronted with a painful trade-off. It's like, "All right, I gotta get this thing out, which means I can't quite get the technical bits of it in exactly the state that I want them to be. And so like, I'm gonna launch now and I'll go fix this thing later." And at the minute that you've done that, you have minted technical debt. And technical debt is just sort of like, uh, financial debt. 
It carries interest and you have to pay the interest payments, uh, and if you don't pay the tech debt down plus interest, like you will be in trouble at some point because it will just sort of a- accumulate to this large extent and then, you know, things start failing in your infrastructure. And so like one of the things I am absolutely most excited about with AI is, um, like I think we can turn this very zero sum problem of tech debt accumulation into something non-zero sum, like where you don't have to make those trades the same way that you have in the past. Uh, and like there's a big research initiative, uh, we've started at Microsoft Research about a year ago where like the whole mission of the lab is like eliminate tech debt to scale using these new AI tools. Um, it's super exciting stuff. And like just, like un- and I, again, I've been leading engineering teams for 20 years now and like tech debt is like just my mortal enemy.

    6. HS

      What have you learnt from doing that program now for the last year?

    7. KS

      That the AI tools are more capable than people think they are. And like this, this is the thing in general. Like I, I, I think honestly right now, there's a bigger gap than there was even two years ago between what the most capable frontier models can do and what they're being used for.

  9. 39:0141:27

    Quick-Fire Round

    2. HS

      Um, listen, Kevin, I could talk to you all day. Uh, I would love to move into a quick fire if that's okay?

    3. KS

      Sure.

    4. HS

      But I give you a, a... So let's start with, um, it's a tough one. Which competitor do you most respect, Google, Anthropic, or Meta, and why?

    5. KS

      If I gotta pick one, maybe Anthropic.

    6. HS

      O- out of interest, why?

    7. KS

      I don't know. I think, I think Dario's doing a good job.

    8. HS

      What's the best advice you've ever received?

    9. KS

      Yeah. I had, I had a mentor one time who, uh, told me that you can sort of imagine, um, an individual or a team's competencies on a histogram where the bucket all the way on the left is idiot and, uh, the bucket all the way on the right is genius, and like m- middle bucket's, you know, mediocre or average. Um, and their assertion was that you could take everything that you do, uh, like everything that you're trying to do, and assign it to one of those buckets. With great effort, you can take something and move it up one, maybe two buckets to the right on the histogram. And the mistake that people always make is they focus on trying to improve at the things that they're worst at. And if you believe this theory, like the best you're ever gonna do if you're an idiot at something is to get mediocre at it. Um, and all of the time that you spend trying to get to mediocre, you are not spending doing the things that you're a genius or very good at. I think that's very good advice because like the, the thing about everything that's worth doing is you probably have to do it with a team, and it is super easy to construct a team where you complement people.

    10. HS

      What are you bad at that you've consciously decided not to get mediocre at then?

    11. KS

      Oh, dude. I'm bad at so many things. Like I'm super impatient with bureaucratic things, like I hate budgets and facilities and like all of the mechanical parts of, like, being an engineering leader. Um, like I'd, I just... like bureaucratic things just bug me. And like I could probably be a very mediocre bureaucrat if I wanted (laughs) to be. I'm just terrible at it.

    12. HS

      I love that. I, I'm the same mind- uh, delegation is the secret to life.

    13. KS

      (laughs)

    14. HS

      Uh, Satya's one of the most incredible leaders of our generation.

  10. 41:2742:36

    Leadership Lessons from Satya Nadella

    1. HS

      What have been your biggest lessons from working so closely with Satya and seeing him operate?

    2. KS

      Eh, you know, I think just his core leadership principle is that you have to simultaneously, for people, um, create energy and you have to produce clarity. So like y- you really do have to make sure... And, and he's very good at this. Like he's, you know, his job's hard, uh, but he is always trying to make sure that the energy of conversations is, uh, like positive and that we are... You know, that people walk out of reviews and conversations and like anything that we're doing where they are carrying energy with them that's going to help them go do the hard thing ahead of us. And like you also have to, at the same time, like you can't just like go produce a bunch of energy and rah, rah, rah, and like not, at the same time, clarify for folks like what the most important things are.

  11. 42:3647:03

    DeepSeek Evolution: Do We Underestimate China

    2. HS

      Uh, we mentioned DeepSeek earlier. Do we underestimate China's ability in AI?

    3. KS

      Uh, well, I don't think I have. Hopefully we haven't. Uh, so we, we should really, really, really respect the capability of Chinese entrepreneurs, scientists, and engineers. They are very good. Uh, like we shouldn't, you know, if, if you are underestimating it, uh, like you, you shouldn't. Um, I, uh, you know, I think maybe some people did. Like, that's another interesting thing about that DeepSeek reaction is, like, how surprised everyone seemed to be. Like, "Oh my God," like, "This is coming from China?" Uh, like, that shouldn't have been surprising.

    4. HS

      What's the crazy AI prediction that most people would call science fiction that you believe to be true?

    5. KS

      It is already the case that I, I think the frontier models are probably better, uh, like, uh, health diagnosticians than your average GP is. It's a good thing to sort of realize and act on, uh, as quickly as possible, because we have a whole world of people who have inadequate access to high quality healthcare, uh, including my own family in rural central Virginia where, yeah, it's just not, not good. Um, and so, yeah, there, there are just sort of a bunch of these things, uh, like this where, yeah, the models are already really good, and you've got, um, you, you basically need the whole world to wake up to the fact that they're good so that we can go deploy this stuff, and like, deploy it because the thing that we really care about is the good of the public, uh, not trying to, you know, sustain some status quo.

    6. HS

      Kevin, a lot of people ask you a lot of questions, um, team members, journalists, uh, you name it. Um, what question are you not often asked that you think is an important question that you should be asked?

    7. KS

      Um... I don't know. Are we going fast enough?

    8. HS

      Do you think we're going fast enough?

    9. KS

      No.

    10. HS

      Is it possible to go much faster?

    11. KS

      Yeah, I think so.

    12. HS

      How could we go faster?

    13. KS

      Uh, I, I think in a, a bunch of different ways. Like, uh, we could... The, the, the thing that I would want in my ideal world is we really invest super heavily in education. Uh, like, I, I would love to see every child, uh, feel as if these new tools that we're building right now are for them, accessible to them, uh, expressly built for them to go, uh, accomplish the things that they think are most important. Like, I want billions of human beings off, like, taking all of this creative energy that we all have in doing, like, the most amazing thing with the best tools that they possibly have. I don't want anybody feeling constrained by anything. And then, like, I would love to make sure that, you know, across the public and private sector, that we are, you know, creating every incentive that we possibly can to go deploy these tools to, like, produce good, whether it's, you know, you, you've got healthcare and climate change and education and, you know, and, and, and. Like, pick your thing where we don't think we've got enough of. Uh, like what everybody's thought ought to be is, like, if I had a piece of technology that could create abundance in this thing where we currently think there's scarcity, like, let us go invest in that.

    14. HS

      Kevin, listen, I've so enjoyed talking to you. I, I so appreciate your tolerance with the wide range of questions, the future pontifications. Uh, and you've been fantastic. So thank you so much. Thank you for keeping me company on my runs, and this-

    15. KS

      (laughs)

    16. HS

      ... has been awesome.

    17. KS

      You're very welcome. Thank you for having me.

Episode duration: 47:04

Transcript of episode KN7KYzpPfiU
