Lenny's Podcast

The role of AI in new product development | Ryan J. Salva (VP of Product at GitHub)

Ryan J. Salva is the VP of Product at GitHub, where he led the incubation and launch of Copilot, which uses OpenAI to suggest code and entire functions in real time, right from your editor, and is changing the way we build software. Ryan is an experienced developer and product manager, with over a decade of experience working for Microsoft before moving to lead the GitHub product team. In today’s episode, he shares how Copilot got its start, how it moved from prototype to live product, and how he structures R&D teams within larger companies. He also discusses the ethical questions surrounding AI use and how to build a successful product team, and shares the inside story of the development of Copilot.

Find the full transcript here: https://www.lennyspodcast.com/the-role-of-ai-in-new-product-development-ryan-j-salva-vp-of-product-at-github-copilot/#transcript

—

Where to find Ryan J. Salva:
• Twitter: https://twitter.com/ryanjsalva
• LinkedIn: https://www.linkedin.com/in/ryanjsalva/
• Website: http://www.ryanjsalva.com/

—

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• Twitter: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

—

Thank you to our wonderful sponsors for making this episode possible:
• Amplitude: https://amplitude.com/
• Athletic Greens: https://athleticgreens.com/lenny
• Modern Treasury: https://www.moderntreasury.com/

—

Referenced:
• GitHub Copilot: https://github.com/features/copilot
• Make It So: Interaction Design Lessons from Science Fiction: https://www.amazon.com/Make-So-Interaction-Lessons-Science/dp/1933820985
• Brief Interviews with Hideous Men: https://www.amazon.com/Brief-Interviews-Hideous-Foster-Wallace/dp/0316925195
• The Memory Palace podcast: https://thememorypalace.us/
• Arrival: https://www.hulu.com/movie/arrival-6ec67b11-b282-4383-85ac-38c4731b40e4
• Oege de Moor’s LinkedIn: https://www.linkedin.com/in/oegedemoor/

—

In this episode, we cover:
[00:00] Ryan’s background and how he became involved in development
[10:46] What is GitHub Copilot?
[14:44] How GitHub Copilot can be utilized for education
[17:46] How GitHub incorporated AI models with computer languages
[27:24] Project horizons: delegating tasks based on confidence levels
[30:39] How to put together a development team for “moonshots”
[35:22] When and how to transition your R&D team smoothly
[38:28] Dealing with ethical issues surrounding AI
[44:40] The future of AI in development
[48:48] Challenges with scaling Copilot
[54:23] Allocating your energy as products scale
[58:17] Lightning round

—

Production and marketing: https://penname.co/

Ryan J. Salva (guest) · Lenny Rachitsky (host)
Sep 4, 2022 · 1h 4m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00 – 10:46

    Ryan’s background and how he became involved in development

    1. RS

      We had actually created a snapshot of GitHub's public code for what we call the Arctic Code Vault, right? Essentially, way up north in Svalbard, Norway, there's a seed vault, and we were like, "You know what? Seed vaults are really there to preserve the diversity of the world's flora in seeds in case of some crazy either natural or man-made disaster." But another really important asset to the world is our code, our open source. This represents a lot of the collective, well, certainly software, if not the intelligence, of the modern world, right? And so we had put this snapshot of public repositories on this silver film that would be preserved for thousands of years in this Arctic Code Vault. Well, we took that same data snapshot and we brought it to our friends over at OpenAI to see, okay, what can we do with these large language models built on public code? Well, it turns out we can do some pretty cool things. (upbeat music)

    2. LR

      Ryan J. Salva is VP of Product at GitHub, where, amongst other projects, he incubated and launched GitHub Copilot, which in my opinion is one of the most magical products that you'll come across. If you haven't heard of it, it uses OpenAI's machine learning engine to auto-complete code for engineers in real time as they're coding. And I think it's one of the biggest advances in product development and productivity that we've seen in a while. I'm always really curious how a big product like this starts, gets buy-in, builds momentum, and then launches, especially at a big company like Microsoft, and especially a product like Copilot that has surprising ethics challenges, scaling challenges, business model questions. Also this came out of a small R&D team that GitHub has, and it's so interesting to hear what Ryan has learned about incubating big bets within a large company and then taking them from prototype to Microsoft's scale. Ryan is also just super interesting as a human. He's got a very non-traditional background, and so I am excited for you to hear this conversation. And so with that, I bring you Ryan J. Salva. (upbeat music) If you're setting up your analytics stack but you're not using Amplitude, what are you doing? Amplitude is the number one most popular analytics solution in the world, used by both big companies like Shopify, Instacart, and Atlassian, and also most tech startups. Amplitude has everything you need, including a powerful and fully self-service analytics product, an experimentation platform, and even an integrated customer data platform to help you understand your users like never before. Give your teams self-service product data to understand your users, drive conversions, and increase engagement, growth, and revenue. Ditch your vanity metrics, trust your data, work smarter, and grow your business. Try Amplitude for free. Just visit amplitude.com to get started. This episode is brought to you by Athletic Greens. 
I've been hearing about AG1 on basically every podcast that I listen to, like Tim Ferriss and Lex Fridman, and so I finally gave it a shot earlier this year. And it has quickly become a core part of my morning routine, especially on days that I need to go deep on writing or record a podcast like this. Here are three things that I love about AG1. One, with a small scoop dissolved in water, you're absorbing 75 vitamins, minerals, probiotics, and adaptogens. I kind of like to think of it as a little safety net for my nutrition in case I've missed something in my diet. Two, they treat AG1 like a software product. Apparently they're on their 52nd iteration, and they're constantly evolving it based on the latest science, research studies, and internal testing that they do. And three, it's just one easy thing that I can do every single day to take care of myself. Right now, it's time to reclaim your health and arm your immune system with convenient daily nutrition. It's just one scoop in a cup of water every day, and that's it. There's no need for a million different pills and supplements to look out for your health. Make it easy. Athletic Greens is gonna give you a free one-year supply of immune-supporting vitamin D and five free travel packs with your first purchase. All you have to do is visit athleticgreens.com/lenny. Again, that's athleticgreens.com/lenny to take ownership over your health and pick up the ultimate daily nutritional insurance. (upbeat music) Ryan, welcome to the podcast.

    3. RS

      Thank you, my friend. I am genuinely very excited to be here. Lovely to geek out with you for a little while.

    4. LR

      I'm excited as well. We were chatting briefly before we started recording, and you mentioned a little bit about your, your background, which is really unique for someone that is leading product at GitHub. And so could you just share, like, what you studied in school and then briefly just how that led to your career in product management?

    5. RS

      Oh wow, you're gonna, you're gonna make me re- remember all the way back to, to school. Okay.

    6. LR

      (laughs)

    7. RS

      Uh, yeah. So back in school, I was not a classic software engineering CS major. I studied, well, the esoteric answer is philosophy of aesthetics and 20th-century critical theory. The easier-access answer is philosophy and English. But primarily it was really about, like, how do we as people communicate with each other? How do we express ourselves through creativity? As humans, since the dawn of time, we have been painting on cave walls, and dancing around the fire, and writing stories, and novels, and singing to each other. And I was just really interested in how we convey our experience of the world to others. And so I got started in software development and product management because I wanted to be in the business of creativity. And we're at a really, really unique time in human history where we actually get to witness the advent of a brand-new medium. Software development and the worlds that it creates weren't possible, I don't know, maybe 50, 60 years ago. And, you know, if I'd been born in the 1700s, I probably would've been the guy making, I don't know, new colors of paint and paintbrushes. But I wasn't. I was born at the turn of the 21st century, and so I work in engineering. That's what I've been doing for a little more than 20 years now, working sometimes in startups, some of them other people's, some of them my own, about 10 years at Microsoft and now three years at GitHub.

    8. LR

      Amazing. I didn't know that was a job to make new paint colors for paintbrushes. Is there a color you would...

    9. RS

      (laughs)

    10. LR

      ... come up with (laughs) that you would...

    11. RS

      Oh, man.

    12. LR

      ... know is your job?

    13. RS

      You know, it so happens that... yellow. I think I would do a really vibrant gold sunshine yellow if I was, you know-

    14. LR

      (laughs)

    15. RS

      ... in that business.

    16. LR

      Very positive, happy. I love it. That could be a new-

    17. RS

      That's right, that's right.

    18. LR

      ... GitHub, uh, brand color. (laughs) So today you're VP of Product at GitHub.

    19. RS

      Yep.

    20. LR

      And before that you were a super senior product leader at Microsoft. And I'm always curious how that transition happens when you move from just, like, a long time senior product leader at a larger company to taking on something like this that was an acquisition. And so I'm curious-

    21. RS

      Yeah.

    22. LR

      ... what made you decide to kind of take this leap. And then just is there anything interesting about the machination that went into just making that transition and figuring that out?

    23. RS

      Yeah. Yeah, it's a good question. You know, so like I said, I was working on development tools and developer services when I was there at Microsoft. Specifically, I was leading product for what they call One Engineering System. It's essentially the shared developer infrastructure for all Microsoft products, like Windows and Office and Azure and things like that, as well as Microsoft's DevOps solution called Azure DevOps. And when the acquisition happened, it was clear that so much of the energy, so much of the focus and innovation around developer tools and services, was going to be happening around GitHub. I mean, that's where the community is creating. That's where people are learning. That's where so much of the mindshare of the development community is focused. And like I said, what I care about is helping people create. And it was very clear to me that there was no place where I could have a larger impact than working at GitHub. And so I really took that opportunity to make the transition out of a more enterprise-focused, internal role at Microsoft to a place where I could work on everything from an AI technology like Copilot, to cloud-hosted development environments like Codespaces, to Repos, which literally every single developer on the planet is, like, participating in on GitHub in a typical year. And so that was what I wanted to accomplish: just, how do I get more connected to the community, especially the community outside of what Microsoft could reach on its own? And the decision to move as well, I think, was really focused not just on what GitHub was, and maybe is today, but what GitHub also can be. 
I mean, GitHub has more than a decade, nearly a decade and a half, of history of bringing developers together to collaborate on code through repositories. But in the last few years, we've really expanded that portfolio to include so many different parts of the developer life cycle. Again, I talked about Codespaces and Copilot, but it's also Actions for CI/CD and Advanced Security. As developers, we are so much more than just where we put our code. There's a whole tool chain there. And to get an opportunity to work on so many V1 products, like, that is creation itself, right? To be able to build an entirely new product, get it out to market, test it, iterate on it, and really feed on the energy that's coming back from the community.

  2. 10:46 – 14:44

    What is GitHub Copilot?

    2. LR

      Awesome. There's definitely a lot of energy coming out of GitHub. And what I wanna spend most of our time chatting about is a product that your team helped launch and incubate, which is GitHub Copilot, which-

    3. RS

      Yeah.

    4. LR

      ... in my- just from my outsider perspective feels like one of the biggest advances in software development in, I don't know, a decade, maybe more. And it's definitely one of the most magical products out there. And your team, and you, kind of led the incubation and launch of Copilot. And so I'd love to spend most of our time chatting through that. And the first question-

    5. RS

      Sure thing, yeah.

    6. LR

      Okay, cool. So my first question, just for folks that don't know a lot about Copilot, is just like what is it? Can you just kind of briefly describe what Copilot is?

    7. RS

      Yeah, sure. So developers for the last 20 years or more have had essentially simple IntelliSense autocomplete. You hit the period and you get the next variable that might come up. It's helpful for moving a little bit faster through your code, helpful sometimes for remembering what the particular syntax might look like for a method or a function. Copilot is essentially that, magnified by many lines of code. It is multi-line autocomplete that is fundamentally powered by an AI model called Codex, which is a derivative of one that you might be familiar with, GPT-3. Now...

    8. LR

      Mm-hmm.

    9. RS

      ... when you are in the editor, could be VS Code, it could be IntelliJ, it could be others. Essentially, as you are typing, Copilot will provide suggestions, usually in this italicized gray text, and it's really, to your point, kind of magical what it's able to infer. Based upon the variables around it, the class names, the method names around it, your comments, Copilot kind of infers what you intend to create and then hopefully does a pretty good job at nailing it by providing scaffolding, code templates that you can then riff on. Now, what we tend to find is that developers love it. They really enjoy it, they kind of find themselves getting a little addicted to it, because it helps them stay in the flow, right? Like, as developers, we love to be in that place. I love to be in that place where I am creating things, where I'm focusing on some product, some piece of software that I'm gonna give to my customers, my users. The labor of remembering, you know, what's the order of parameters that need to come into a particular API, or, "Hey, what's the particular syntax of this thing I'm supposed to do?" Or, "Oh, I've gotta create a bunch of dummy data that is days of the week or months in the year." Like, that's just labor. It's not creating, it's just typing. Copilot helps developers stay in the flow by bringing all of that information into the editor, preventing them from having to go check out documentation or watch a tutorial or go to Stack Overflow where, you know, they either find an answer, or worse, have to ask a question and wait for an answer. It just brings all of that into the editor and often gives the developer multiple suggestions that they can choose from, so they can pick what is the right solution for the thing they're trying to create.
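      To make the "dummy data" example above concrete, here is a hypothetical illustration (mine, not from the episode) of the kind of multi-line scaffolding Copilot typically proposes in gray italicized text when a developer has typed only the comment and the function signature. The exact suggestion varies run to run.

      ```python
      # What the developer types:
      # Return the days of the week, starting from Monday.
      def days_of_week():
          # The body below is the kind of suggestion Copilot might propose
          # as ghost text, which the developer can accept with a keystroke.
          return [
              "Monday", "Tuesday", "Wednesday", "Thursday",
              "Friday", "Saturday", "Sunday",
          ]
      ```

      The point is that none of this is creative work; it is exactly the "just typing" labor Ryan describes, which is why handing it to the model keeps the developer in flow.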

    10. LR

      Awesome. What I'm most curious about, and we're gonna spend time on this, is just, like, how a product like this comes to be at a larger company. But before we get into that, what's, like, the craziest story of someone using Copilot to write code? And I'll share one real quick. I was watching some YouTube videos to prepare for this chat, and one guy, (laughs) maybe this is the Turing test of AI writing code, used Copilot to center divs. (laughs)

  3. 14:44 – 17:46

    How GitHub Copilot can be utilized for education

    2. RS

      Yeah, yeah.

    3. LR

      Really well. And he's like, "Wow, this did it right." And then, uh-

    4. RS

      Yeah.

    5. LR

      ... another guy, he's an instructor of code. He makes YouTube videos teaching people how to code, and he's like, "Copilot just gives you the answer immediately, (laughs) and so I can't make these videos as easily. I have to, like, turn it off so that it doesn't just give it away." And so I'm curious, yeah.

    6. RS

      Yeah.

    7. LR

      What have you seen?

    8. RS

      There are so many of those. I mean, I'll just give a couple of recent ones that I've heard. So I was talking to one developer who is actually an educator, and he's teaching kids how to code, usually high school age, right? So 15, 16, that kind of thing. And, you know, his experience matches my own, which is that many of us learn to code best not by arbitrary exercises, but by actually building something that's gonna be useful, right? Solving problems. And so what he does is he matches small and medium-sized businesses who need to build internal tools with essentially classes of students, like a group of maybe six or eight students, and then gives those students Copilot and says, "Here, small or medium-sized business, group of students, go build this internal tool for this business." And Copilot is essentially whispering in the student's ear, metaphorically speaking, "Hey, you know, here's how you solve this problem. Here's how you do this." And students build not only the tool, the software that the business needs, and then get to put that on their resume and their application for college and university, but they also get to learn by using the tools that likely are gonna be part of the core DNA of the developer tool chain two, three, four years from now as AI starts to permeate our entire stack. So that was a pretty cool recent one that I heard.

    9. LR

      That is very cool. I didn't think about just the education lever here, just like making it so much easier to learn to code, not even just building code.

    10. RS

      Yeah, well, I mean, and that's the thing. Like, Copilot is particularly good not just at taking away some of the effort, but often, you know, there's learning a new language, and then there's also just wading into a code base that you're not necessarily familiar with, right? I mean, heck, sometimes I don't recognize some of the code that I wrote six months ago or a year ago. It feels like I'm wading into new territory. But maybe you need to fix a bug in an app that you don't often touch. Wading into that code base is kind of like learning and creating a mental map for that code base. One of the really magical pieces of Copilot here is that the AI is collecting context of the application that you're going into, and so it can help you build that mental map and learn the code base even if it's a language that you're already familiar

  4. 17:46 – 27:24

    How GitHub incorporated AI models with computer languages

    1. RS

      in.

    2. LR

      Awesome. Okay, so, so going back to the beginning of CoPilot and how it started, I'm always curious how a project that ends up being a huge deal to a larger company begins, and especially-

    3. RS

      Yeah.

    4. LR

      ... how it builds momentum, how it gets buy-in, and then just kinda gets out the door. And so can you talk about just the original seed of this idea? Like, who'd it come from? Who had the original vision? How did this idea emerge and build momentum to where you put resources into it?

    5. RS

      Yeah, yeah. Oh, wow, what a long and, depending upon your point of view, sordid or exciting story that is.

    6. LR

      (laughs) .

    7. RS

      Um, so yes.

    8. LR

      Let's see.

    9. RS

      So Microsoft and OpenAI have been collaborating for quite a while now on large language models, making their way into all different experiments and different parts of Microsoft's software portfolio, as well as just helping OpenAI by providing, you know, the compute necessary. It takes massive amounts of compute to train these models. They were mostly large language models. And so, a couple of years ago now, it kind of dawned on us that, well, language models aren't just English and Spanish and German and Korean and Japanese, right? Python and JavaScript and Java and C# and Clojure, all of these are languages too. In fact, they're kind of nice from an AI perspective because they're relatively constrained in terms of their semantics, right? The number of "words", and I put that in scare quotes as it were, that can be expressed in Python, for example, is much smaller than the English language, which has all sorts of different grammar rules and nouns, verbs, adjectives, adverbs. And so we started to see what it would be like to actually bring code to these large language models. And the way that I actually got introduced to it is kind of funny. Microsoft and OpenAI had this idea, and at the time one of the teams that I was responsible for was GitHub's infrastructure team, you know, the team responsible for our data centers, our reliability, our uptime. And we noticed one day that we were getting hammered, I mean, absolutely hammered, with a tremendous amount of clone requests. And we're like, "Oh my gosh, is this a denial-of-service attack? How are we gonna respond to this? What's gonna happen?" And we figured out pretty quickly that it was actually OpenAI. They were cloning all of our repositories to harvest the data out of GitHub.
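      Ryan's point about programming languages being semantically constrained is easy to check for yourself: Python's entire reserved vocabulary is a short, fixed list, versus the hundreds of thousands of words in English. A quick sketch using the standard library:

      ```python
      import keyword

      # Python's whole reserved vocabulary fits in one short list
      # (35 keywords in recent Python 3 versions), which is part of
      # why code is a comparatively tractable "language" for an LLM.
      print(len(keyword.kwlist))
      print(sorted(keyword.kwlist)[:5])  # e.g. ['False', 'None', 'True', 'and', 'as']
      ```

      The real constraint is broader than keywords, of course (identifiers are open-ended), but the grammar itself is far smaller and more regular than natural language.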

    10. LR

      Wow.

    11. RS

      I mean, it's a totally legit practice (laughs), but, you know, it does have a real consequence, and we were able to step in and mitigate it very quickly. There was not a reliability or uptime incident there. But we're like, "Hey y'all, cool, love this thing. Let's see if we can get that data to you in a more responsible way (laughs), in a way that's packaged a little bit more to meet your needs." And so, just the year before that, we had actually created a snapshot of GitHub's public code for what we call the Arctic Code Vault, right? Essentially, way up north in Svalbard, Norway, there's a seed vault, and we were like, "You know what? Seed vaults are really there to preserve the diversity of the world's flora in seeds in case of some crazy either natural or man-made disaster. But another really important asset to the world is our code, our open source." This represents a lot of the collective, well, certainly software, if not the intelligence, of the modern world, right? And so we had put this snapshot of public repositories on this silver film that would be preserved for thousands of years in this Arctic Code Vault. Well, we took that same data snapshot and we brought it to our friends over at OpenAI to see, "Okay, what can we do with these large language models built on public code?" Well, it turns out we can do some pretty cool things. Just like a translation tool that goes from English to Spanish, or Spanish to German, you can also go from English to Python, or Python to C#. We're like, "Okay, this is cool." We can start to get not only translation, but a little bit of predictive text here as well. 
And we're all, I think, fairly familiar with predictive text already in our code editors as IntelliSense, but, I don't know, go to your favorite word processor and chances are that you've got some kind of predictive text happening there as well. And we started experimenting with different user experiences, right? Do we want it so that you, I don't know, right-click and get a little side panel that comes up with a bunch of different options for things that you might want here? That was nice because it would give you whole functions, but it was away from the cursor, right? And even if you weren't switching over to a different window, you still had to switch over to a different panel, which itself was a little bit distracting. And we eventually came to this idea of inline autocomplete, and we were able to, with the partnership of some of our friends over on the Microsoft side of things, partner with our friends in Visual Studio Code and be like, "Hey, there's not really an extensibility point yet in your editor for this multi-line autocomplete, but we've got an idea for how this might work." You know, we played around with the actual presentation of it. What should the keystrokes be? What should the presentation layer be? You know, the gray italicized text seemed to be a good way of indicating that it was ephemeral, as it were. And pretty early on we landed on this user experience that is Copilot, as most developers experience it today. That was, I wanna say, at least 14 to 16 months ago. Since then, we brought it to developers, we-

    12. LR

      Wait. Just, uh-

    13. RS

      Oh, yeah.

    14. LR

      ... just to double-click on that, so you're saying just, like, less than a year and a half ago, this kind of really started as a project, and now it's out to the world? Is that right?

    15. RS

      That is exactly right.

    16. LR

      Wow.

    17. RS

      That's-

    18. LR

      That's incredible.

    19. RS

      ... that's exactly right, yeah.

    20. LR

      How-

    21. RS

      So it's about a year and a half ago.

    22. LR

      Uh-huh. That's insane. What, uh, what was that period between OpenAI almost taking down GitHub to-

    23. RS

      (laughs)

    24. LR

      ... I guess that point?

    25. RS

      So the period in between kind of OpenAI, uh, almost taking down (laughs) GitHub-

    26. LR

      (laughs)

    27. RS

      ... and then us really arriving at the user experience. You know, part of that was, frankly, a lot of really smart researchers at OpenAI experimenting and doing what only world-class AI researchers can do. It was a lot of them experimenting, occasionally asking for updates to the dataset, tossing back to us a model that we might play with and tinker around with. These models have literally thousands of parameters that you can pass to them. So when you're really thinking about GPT-3 and Codex, and then the transition from that to something like Copilot, creating the model is one thing, but then you have to figure out how to use the model: what parameters do you want to adjust, what do you wanna optimize for. A great example of this is performance, right? When you're in a code editor, you don't necessarily want to type, type, type, and then have to wait one second, two seconds, three seconds to get a suggestion back when your entire goal is to stay in the flow. And so we would run experiments to see, like, how many milliseconds are the right amount such that a developer doesn't feel like they're being interrupted by Copilot and a suggestion-

    28. LR

      What's, what's the answer to that?

    29. RS

      ... but rather helped by it. It seems like right now it's around 200 milliseconds. So depending upon where you are in the world, your latency can go up or down a little bit from there. But it seems like the sweet spot is somewhere around 200 milliseconds.
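      The ~200 ms sweet spot Ryan describes maps naturally onto a debounce: each keystroke cancels the pending completion request and schedules a new one, so the model is only consulted once typing pauses. The sketch below is my own illustration of that idea (GitHub's actual client is not public in the episode); `request_fn` stands in for whatever sends the buffer to the model.

      ```python
      import threading

      class SuggestionDebouncer:
          """Fire a completion request only after ~200 ms of typing silence."""

          def __init__(self, request_fn, delay_s=0.2):
              self.request_fn = request_fn  # called with the current buffer text
              self.delay_s = delay_s        # the ~200 ms "sweet spot"
              self._timer = None

          def on_keystroke(self, buffer_text):
              # Each new keystroke cancels the pending request...
              if self._timer is not None:
                  self._timer.cancel()
              # ...and schedules a fresh one delay_s from now.
              self._timer = threading.Timer(
                  self.delay_s, self.request_fn, args=(buffer_text,)
              )
              self._timer.start()
      ```

      With this in place, typing "d", "de", "def " in quick succession produces a single request for "def " once the developer pauses, rather than three round trips to the model.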

    30. LR

      Good to know.

  5. 27:24 – 30:39

    Project horizons: delegating tasks based on confidence levels

    2. LR

      Was there, was there kind of a point at which it was clear to you or leadership in general, like, "We should double down on this thing and go big," where this smaller team was working on this idea and then they're like, "Oh, wow. This is gonna work"? Or is it always like, "We will bet on this thing. This is such a big and great idea. We're gonna invest resources for sure from the beginning"?

    3. RS

      Yeah. Yeah, so the, the original team that was working on Copilot at GitHub was, you know, the team that we called GitHub Next, and essentially their job is to work on second and third horizon projects, what some folks might call moonshots, right? Things that we never really expect to work in the next one or two years, but might three, five years down the line actually turn into something meaningful.

    4. LR

      Is there a concrete definition of horizon two and three? Is it, like, number of years out, like, like Amazon style?

    5. RS

      Not necessarily a concrete definition. Like, for me, I usually ballpark it as first horizon is the next year, second horizon, the next three years, third horizon, next five years. But we generally think of it more as, like, a measure of ambiguity and confidence level more than calendar dates.

    6. LR

      This episode is brought to you by Modern Treasury. Modern Treasury is a next generation operating system for moving and tracking money. They're modernizing the developer tools and financial processes for companies managing complex payment flows. Think digital wallets, fiat crypto on-ramps, ride-sharing marketplaces, instant lending, and more. They work with high growth companies like Gusto, Pipe, ClassPass, and Marqeta. Modern Treasury's robust APIs allow engineering to build payment flows right into your product, while finance can monitor and approve everything through a sleek and modern web dashboard, enabling real-time payments, automatic reconciliation, continuous accounting, and compliance solutions. Modern Treasury's platform is used to reconcile over $3 billion per month. They're one of the hottest young fintech startups on the market today, having raised funding from top firms like Benchmark, Altimeter, SVB Capital, Salesforce Ventures, and Y Combinator. Check them out at moderntreasury.com. I'd love to spend a little bit more time on this. It's so interesting. Is this, uh, like a Microsoft thing, just having these three horizons in a certain percentage of resources or bet on different horizons?

    7. RS

      Yeah. It is... I would say it is not necessarily a Microsoft thing, but it is definitely at GitHub how we have-

    8. LR

      Mm.

    9. RS

      ... really contextualized it. Eh, not to say that there aren't teams at Microsoft who might also kind of use kind of that methodology. But where we've been really maybe explicit or intentional about it... is at GitHub, where we've actually ring-fenced a team to think about that Horizon 2 and Horizon 3 work, and kept them separate from, you know, EPD. Uh, EPD here being engineering, product, and design, the folks who are working on building, you know, productized, operational, like, products that we bring to market and we either give away or monetize in some way.

  6. 30:39 – 35:22

    How to put together a development team for “moonshots”

    1. RS

    2. LR

      This is so interesting. There's a lot of companies that have these sorts of R&D groups. New product experience team at Facebook, and Google has one.

    3. RS

      Yeah.

    4. LR

      I'm not sure how many successes have come out of these teams, from what I've seen. And I'm curious.

    5. RS

      (laughs)

    6. LR

      What have you... And clearly you had a huge success, as far as I can tell so far.

    7. RS

      Yeah.

    8. LR

      Is there anything you've learned about how to do this, where you invest in these big moonshots within a larger company?

    9. RS

      Yeah. I mean, I think, like, the first step is to invest in it. Like, the first step is really, like, hire really smart people, attract smart people, and give them the opportunity to be creative. Don't expect any- anything out of them that is going to turn into a money-maker, or something that is going to be beholden to fundamentals around security, privacy, uptime, you know, accessibility, all that groovy kind of stuff upfront. They need space to create and experiment. And also, when you do get to, you know, a place where, like, that team has an idea that is clearly connected to, like, a representative set of customers who have a genuine problem, and there is signal, with at least medium confidence, that this solution, whatever it is, solves it in a novel way, that's the time to start thinking about, "Okay, like, let's actually put a little bit of, um..." I'm gonna call this market testing. It's nothing so formal as market testing. It's really just like, "Let's start to actually bring prototypes of this in front of more and more customers to kind of test it out and see, 'Hey, is this, is this actually solving a problem for you? Is this something that you would use?'" And this is where the transition between Next and EPD at GitHub really, you know, really started. And this is actually where, where my role in the, in the product lifecycle kind of really started to increase. You know, I had kind of been in tight connection and kind of been monitoring the work and, and kind of like consulting a little bit with the Next team prior to that. But it was that moment when we identified that, "Okay, this is actually something real. Customers are saying, developers are saying, 'This is magical. This does something extraordinary that I could not do on my own,'" that we started to think about, "Okay, how do we transition this over?" And so from there, what we really did was like, "Okay. We think we've got a hit here. 
We think we've got something that we can actually, uh, that we can actually bring, uh, to developers." And so we made an intentional decision to take some of the researchers who were in the Next team, and for a finite period of time, move them over to create a new EPD squad. Right? We want them to be researchers, but we need to do knowledge transfer, and we needed to actually kind of provide the seed for a team that could eventually operationalize and productize. And, and that kind of began the technical preview, where we started to invite tens of thousands, then hundreds of thousands to the technical preview. And in that technical preview, when we started to see, like, crazy, you know, mind-blown emoji tweets and, like, threads on Hacker News about people getting really, really excited about it, that's how we knew it was time to start scaling, and it was time to really start thinking about, "How do we do hiring so that we can build in some insulation around these researchers, so that they can eventually go back to GitHub Next to do what they do best, which is, you know, be innovative and creative and think about the next moonshot?" That process, that took... Well, we're actually still kind of at the tail end of it now. Here we are, you know, like I said, roughly a year and a half after kind of the initial kind of creation of the, of the product, having gone through technical preview, hav- uh, achieved general availability. We've now hired in a team around them, and the researchers, starting actually as early as last month, have started to gradually move back over to GitHub Next, and an EPD squad, multiple EPD squads, actually, are now taking the product forward and starting to respond to customer feedback, to think about, "Okay, how do we now, as a product team, carry this roadmap forward from an idea that originated in GitHub Next?"

  7. 35:22 – 38:28

    When and how to transition your R&D team smoothly

    1. RS

    2. LR

      I love that insight of bringing the people along and not just kind of like, "Cool, we'll take it from here." If you were to build a team like this again somewhere, this kind of R&D Horizon 3 or 2 team-

    3. RS

      Yeah.

    4. LR

      ... is there anything else you would do differently? Anything, any lessons you take away from this experience for maybe founders that are, or PMs working at larger companies that are like, "Hey, we should have something like this"? Is there anything else that you find is important for making something like this successful?

    5. RS

      The criteria for moving researchers back into the, you know, their R&D team, whatever that happens to be for your organization, that can't be based on a calendar. It needs to be based on a replacement in seat who's actually doing the job and has picked up all of the skills necessary. And only then can the researcher move back. So make sure that you've got continuity of expertise and skill sets and, and domain familiarity before you move over. I feel like we've, we've, we've managed that pretty well today. As well, it's critical that the team who is taking over from the R&D shop feels like they have control over their own future. Uh, you can't really delegate roadmap to an R&D team. The team who's responsible for maintaining the product, for building the product, who has the closest feedback loop with the end customer, they're the ones who really need to own and, and feel like, you know, they control the roadmap. And so, you know, making sure that you're not outsourcing innovation exclusively to an R&D team, but that is happening within the product team as they take ownership over the idea and over, kind of, the use case and the customer. Last, I would say here is really that engineering fundamentals, in a lot of ways, are the contracts that differentiate an R&D team from an operational product team. And bringing that fundamentals process into it is gonna feel, candidly, a little bit unnatural to the researchers. And that takes, therefore, a little bit of cultural change management for everyone to just kind of like adapt their way of working and understand that we're graduating from an experiment and a research project to a, you know, an operational product. And often because those researchers are, you know, they're the first wave that come over. They're the, they're the seed of the project. It's gonna feel a little bit unnatural to them, and they probably won't have all the right skillsets in order to make that transition.
And so making sure that you've got a good mix of engineers who are comfortable maintaining a service as well as, you know, engineers and researchers who are really thinking about what, what is the idea that we've created? What is the new thing that we brought to market? And can bring that vision to it.

    6. LR

      Yeah, I

  8. 38:28 – 44:40

    Dealing with ethical issues surrounding AI

    1. LR

      can totally see the, the, the challenge that comes from, "This was my thing. I've been working on this." Like, "What are you guys doing to this project? Where is this going?" Or, "I'm not sure."

    2. RS

      Yeah.

    3. LR

      "I'm feeling..." And then, and then there's all these new asks that are coming at you. Like, "Oh my god. I, this was so much fun."

    4. RS

      Oh, dude.

    5. LR

      "And now I have to scale this freaking thing."

    6. RS

      Well, and I mean, this is the, the best problem in the world to have, talk about kind of customer ask, like for Copilot in particular, the amount of, of, of chatter, the amount of customer feedback that was coming in, especially for us with AI. I mean, the world is still figuring out AI, candidly. Like, I mean, we're getting a lot, lot better at it, especially in the last couple of years, you know, with things like DALL-E and Copilot. But it brings with it not only engineering challenges, but also, frankly, ethical challenges, right? Like, you know, and, and legal challenges. Like, making sense of how we actually... What, what our expectations are of AI. And if AI produces something that is offensive, who's at fault? You know, our, our stance on it, what we en- ended up coming to is actually the, um, the framing of Copilot as an AI pair programmer I think is a useful one. Um, you know, pair programmer, I suspect most of your listeners will know, but like a pair programmer is usually two developers sitting side by side working on a problem together. One's at the keyboard, and the other one's kind of like, you know, helping them talk through it, talk through the ideas and, you know, make corrections, that kind of thing. Well, if Copilot is your AI pair programmer, and they're whispering crazy stuff into your ear and bringing politics into it or gender identity into it, or I don't know, whatever other... You know, they're spouting off slang and slander and all that kind of stuff, you're probably not gonna be able to focus on your work, right?

    7. LR

      Right. (laughs)

    8. RS

      (laughs) It's, it's gonna be really distracting.

    9. LR

      Yeah.

    10. RS

      And so really kind of coming down to some principles about what is the use case we're trying to solve? What is appropriate? I put this in scare quotes, "behavior" um, of the AI kind of bot sitting side by side with you helped us kind of create some principles or some, some guidelines for the developer experience that we wanted to create.

    11. LR

      Oh, I love that. Just kind of cre- creating like a persona of the thing to help you inform how to, how the behavior of the thing should work. How do you work through these challenges? Is it like discussions with you and the legal team? And I don't know, like these ethical things are really tricky, I imagine.

    12. RS

      Yeah.

    13. LR

      How does a... How do you approach something like that as a product team?

    14. RS

      It is conversations with a very, very wide cast of characters.

    15. LR

      Mm-hmm.

    16. RS

      It's, uh, th- this product in particular, I probably spent more time with legal than any other products that I've-

    17. LR

      Mm-hmm.

    18. RS

      ... ever kind of been responsible for. All wonderful, creative people. But it's not just legal. It is also, you know, privacy and security champions. It is, um, frankly, developers, like the people who are using it. Like, listening to them, like, "Hey, what works here? What doesn't work for you here? Why is this offensive? Why is it not offensive?" Now, when we, when we started out, we'll continue on the example of the crazy pair programmer whispering crazy things into your ear. When we first started out, we didn't really have any filter on Copilot whatsoever, in the very, very, very early days. And then eventually we're like, "Okay, it needs to be slightly more controlled experience. We need to edit out, you know, some of the most egregious so- stuff." And so we introduced a, a simple block list of, of words. And these block lists are always fraught with peril. Like, you know, which words are okay? Which words not okay? All of a sudden we become editors and that's just of, of like language, and that's kind of a scary place to be. I'm not comfortable with it, at least. But like at a certain level, it has to be done because otherwise you're gonna create a bad developer experience.
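The simple block list Ryan describes could look something like this minimal Python sketch (the words, names, and matching logic here are hypothetical, not GitHub's actual implementation). Its context-blindness is exactly the weakness he goes on to describe: a bare token match cannot tell an offensive use from a legitimate one.

```python
# Hypothetical sketch of a crude word block list for filtering AI code
# suggestions. Placeholder terms stand in for a real editorial list.
BLOCK_LIST = {"offensiveword1", "offensiveword2"}

def is_blocked(suggestion: str) -> bool:
    """Return True if the suggestion contains any blocked token.

    A token match like this is context-blind: it cannot distinguish an
    offensive use from a reasonable one (e.g. in medical software),
    which is why a model-based filter is discussed as the next step.
    """
    tokens = {t.lower() for t in suggestion.replace("_", " ").split()}
    return not BLOCK_LIST.isdisjoint(tokens)

def filter_suggestions(suggestions: list[str]) -> list[str]:
    """Drop every suggestion that trips the block list."""
    return [s for s in suggestions if not is_blocked(s)]
```

The same context-blindness is what produced the developer feedback Ryan mentions next: legitimate words get blocked, and the block itself becomes the bad experience.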

    19. LR

      Yeah.

    20. RS

      And, and so, you know, often we would get feedback from developers of like, "Hey, this particular word was blocked and, like, that it was blocked either was offensive to me, or prevented me from being able to kind of get good value out of the product."

    21. LR

      Oh, man.

    22. RS

      And so, always kind of like dancing the dance of, like, e- editorial content. We're actually at a place now where we're able to partner with the, um, Azure Department of, uh, Responsible AI, and they've created some really extraordinary models, that help detect, I'll call it sentiment, uh, for lack of a better word. Basically, when there is something that is patently offensive, because there are some words that, in some contexts may be offensive, and in some contexts may be totally reasonable, right? Um, especially, especially in, when you get into software for medical, um, kind of scenarios, right? And so, being able to start to shift a little bit, to... Or shift a little bit to focus or to rely on AI models that can also do a better job than we could with crude or, or simple block lists is an... maybe another proof point both of how AI as a solution for common development problems is getting way better at solving more parts of our stack, or filling in for more parts of our stack. And how, at least in our case, we were pretty fortunate to be able to deliver on or depend on a parent company's contributions to solve a real acute problem that GitHub probably could not have solved on our own.

    23. LR

      I never thought that (laughs) a co-pilot would be w- like, that you would have to worry about it saying things that are, uh, that are crazy.

    24. RS

      (laughs)

    25. LR

      So, that is wild that you guys have to deal with that. And wasn't it... Wasn't it Microsoft that had that bot that got... turned really negative and had to be shut down?

    26. RS

      (laughs) It was.

    27. LR

      Okay, so there's experience there.

    28. RS

      What was its name? Talia or something like that? I think.

    29. LR

      Yeah, something like that.

    30. RS

      Yeah, something like that, yeah. Yeah, we-

  9. 44:40 – 48:48

    The future of AI in development

    1. RS

    2. LR

      Wow. What that makes me think about is your, your team is kind of at the forefront of AI in this applied way, and I'm curious what your thinking is on just, like, where this goes for developers especially. Like, I, I saw a stat that maybe 40% of people's code is now written by Copilot. I don't know if that's right. But, like, is the vision in the future becomes something like 90? Where do you see this all going?

    3. RS

      Yeah, yeah. And so just to put a fine point on that stat-

    4. LR

      Oh, yeah.

    5. RS

      ... it is, uh, 40% is specifically for Python developers.

    6. LR

      Hmm. Mm-hmm.

    7. RS

      It candidly, it, it varies depending upon the language, because as you might imagine, some languages have better representation in kind of the public domain than others. And usually both the volume and the diversity of training data correlates with the quality of suggestions, which is then represented by either the number of lines written or the acceptance rate or, you know, any one of a number of other metrics, um-

    8. LR

      Awesome. Thanks for clarifying.

    9. RS

      ... in aggregate. Yeah, yeah, yeah, totally. We, we see it range anywhere from the upper twenties to the forties, um-

    10. LR

      Got it.

    11. RS

      ... across all the different languages.
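The two metrics Ryan name-checks here, acceptance rate and the share of lines written by the assistant, can be pinned down with illustrative definitions (these are a sketch of the general idea, not GitHub's exact telemetry):

```python
# Illustrative metric definitions for code-completion tools.
# Not GitHub's actual telemetry formulas.

def acceptance_rate(shown: int, accepted: int) -> float:
    """Fraction of displayed suggestions the developer accepted."""
    return accepted / shown if shown else 0.0

def share_of_code(ai_lines: int, total_lines: int) -> float:
    """Fraction of committed lines that originated as suggestions,
    i.e. the "40% of code written by Copilot for Python" style stat."""
    return ai_lines / total_lines if total_lines else 0.0
```

For example, `share_of_code(400, 1000)` gives the 0.4 (40%) figure cited for Python; the upper-twenties-to-forties range Ryan mentions corresponds to values between roughly 0.28 and 0.40 across languages.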

    12. LR

      I, uh, as a-

    13. RS

      Uh-

    14. LR

      Just to throw this out there, as a not great engineer-

    15. RS

      Yeah.

    16. LR

      ... I used to be an engineer for about 10 years, uh, I, I welcome our AI overlords writing all my code, and so-

    17. RS

      Oh, yeah.

    18. LR

      ... I'm excited for this to do more and more. And yes, I'm curious where you think, where you think this goes.

    19. RS

      It, it does. It, like, it enables even mediocre developers like myself to be able to do-

    20. LR

      (laughs)

    21. RS

      ... some pretty amazing things.

    22. LR

      All right.

    23. RS

      But where's it going? Like, so first, I think, I hope it's obvious to most developers that AI is going to infuse pretty much our entire development stack in the not-so-distant future. Copilot is really just kind of the very tip of the spear for a lot of innovations in, you know, better managing maybe our build queues or helping to... Here's a great one, you know, I don't know about you, but often the comments that I get with commit messages and PRs aren't super great. It kind of, you know, it puts a lot of effort onto the code reviewer to go figure out what the developer was actually trying to do. What if AI could summarize all of your changes with your pull request and you just have to, as the contributing developer, just review it to make sure it's accurate, send it on its way, and you don't have to put in extra effort for that. There are lots and lots of different opportunities for AI to essentially be able to take some of the drudgery out of our work so that we can focus on creative acts. Like, what I hear from developers and what I experience myself, is that Copilot kind of forces me to think a little bit more about, "What are the design patterns I'm trying to create? What is the end user experience or the outcomes that I'm trying to drive with my code?" And then I can kind of rely on Copilot to scaffold out a lot of that so that I can focus on more creative work. And that is really what, what I hope for our industry five, ten years from now, is that not only will we be inviting more developers or more people to become developers, right? By, uh, l- you know, essentially providing a s- a layer of abstraction a little bit, or at least a little bit of a, kind of a hand in development, but that also the really experienced developers are focusing on much larger problems and focusing on outcomes and creativity rather than really kind of low level, kind of difficult rote memorization of things like syntax or ordering of parameters and the like.
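The pull-request-summarization idea floated here can be sketched as a small pipeline: build a prompt from the diff, hand it to a model, and let the author review the draft. Everything below is hypothetical, `call_model` is a placeholder for whatever model backend you have, not a real API.

```python
# Hedged sketch of "AI summarizes your pull request for the reviewer".
# `call_model` is a stand-in for an actual model call.

def build_summary_prompt(diff: str, max_chars: int = 4000) -> str:
    """Assemble a summarization prompt from a (possibly truncated) diff."""
    return (
        "Summarize the intent of the following code changes for a "
        "reviewer, in 2-3 sentences:\n\n" + diff[:max_chars]
    )

def summarize_pr(diff: str, call_model) -> str:
    """Draft a PR description; the contributing developer reviews and
    edits it before posting, keeping a human in the loop."""
    return call_model(build_summary_prompt(diff))
```

The human-review step in `summarize_pr` mirrors the stance Ryan takes later: the AI drafts, a thinking human signs off.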

    24. LR

      Great. Yeah, and like, if nothing else, that'll keep people from just having a tab of StackOverflow copy and pasting every-

    25. RS

      (laughs)

    26. LR

      ... function that they're trying to figure out.

    27. RS

      If nothing, if... Yeah. Uh, no, like, I want StackOverflow to stay in business, but I wouldn't mind a little bit less context switching myself.

    28. LR

      (laughs)

  10. 48:48 – 54:23

    Challenges with scaling Copilot

    1. LR

      So in the... experience of scaling this thing, what would you say has been the biggest challenge, either technologically or even operationally, just kind of scaling it to a real product that people are paying for?

    2. RS

      Yeah. So there's, there's a few dimensions of that. One is a, a problem that's very much of, of our time in the world, namely that supply chains have been disrupted dramatically over the course of the last few years. And it turns out that Copilot, for both training and operating the models, requires some very rare and, uh, unique GPUs that there's not a lot of global supply of. And so part of it is just, like, can we get enough hardware in order to run these things? We've actually earmarked quite a bit of capacity, and we are greedy, greedy, greedy for more capacity globally.

    3. LR

      (laughs)

    4. RS

      As soon as we can produce those chips and, and get them in data centers, we, we do it. So that's, that's been one kind of unique challenge.

    5. LR

      Wow.

    6. RS

      I would also say here that, you know, operationally, uh, another challenge has been, how do we create a model that the community really feels, like, ownership over, right? And, like, the dialogue that's had to happen as we brought an AI tool to market, especially one that is trained on public code, right, has required a lot of dialogue between us and our community. And every good product manager should be spending, you know, as much of their time as possible with their customers, with their, you know, potential customers. Copilot in particular has been a more complicated kind of rollout, because we as a, kind of, as an industry, as, um, as a society, are still figuring out how to make sense of it. And so the amount of kind of give and take between developers and, and us as a product team has really required us to scale up more of the product team than it has the engineering team.

    7. LR

      Interesting. And that's... And why is that?

    8. RS

      So it's a, a couple of different reasons. I mean, one, like I said, we are trained on public code, and, like, not all of the community is really sure, like, when is it okay to train a model on public code? When is it not-

    9. LR

      Hmm.

    10. RS

      ... okay to train a model on public code? Is Copilot producing secure suggestions? Is Copilot producing buggy suggestions? Like, there's a lot of doubts. There's a lot of very healthy skepticism. And I actually... I mean that genuinely. I want people to be skeptical of, of Copilot. We, we owe it to ourselves as a community to be skeptical of any AI, because just like there's great potential for benefit, there's also great potential for harm. How, like, people keeping us accountable, like, how are you preventing things like model poisoning, right? Like, is there going to be a new attack vector that we just haven't really thought of yet around AI that might, you know, produce negative consequences? We think that we've done a, a really good and responsible job of that by making sure that, you know, first, we're very clear that Copilot is not a replacement for a developer. It will never be. Like, we do not want Copilot auto-generating code where a thinking, reasoning, breathing human being is not on the other side of that keyboard making reasoned decisions. We do not want Copilot to replace any other part of the stack, whether it is static analysis tools, or unit tests, or, you know, whatever kind of, like, measures you're putting in today to make sure that your humans produce good quality code. We want you to keep all of those same systems in place to make sure that humans who are leveraging tools like Copilot continue to produce that good quality code. But there's a lot of, uh, at the same time, like, anxiety of, like, where is AI stack? Is AI, you know, eventually going to be... This is back to your question about, where will we be five, ten years from now? Will it be writing 90% of the code? We don't want Copilot to be that r- w- we don't want it to replace anything. We want it to augment. The idea here is really that AI is an enabler for developers to focus on the creative work, to stay in the flow, to be able to move faster, right? 
And kind of working through those anxieties, working through that healthy skepticism takes conversation. It takes dialogue. And that takes us, you know, on the product side having kind of that, that guided conversation with the community.

    11. LR

      It feels like, uh, connects back to your, your education back in the day, philosophy and literature. And how convenient is that?

    12. RS

      (laughs) It often feels very connec- I mean, like, certainly, um, the education side of things taught me that the importance of dialogue, the importance of skepticism is, is valuable in so much more than esoteric, uh, armchair ponderings.

    13. LR

      Mm-hmm.

    14. RS

      It's actually applicable to the real world.

    15. LR

      Mm-hmm. Maybe a final question before we get to our very exciting lightning round. Um...

    16. RS

      Ooh.

  11. 54:23 – 58:17

    Allocating your energy as products scale

    1. LR

      (laughs) So just looking back at this whole experience of, one, just building, incubating, launching this big bold bet within a big company, you, you can go in either direction, either just like, any lessons on just like taking a bold bet versus incremental wins and how you think about investing in these two kind of categories, or just within a large company, a lesson of just how to build something like this, like a massive new product a- from just like, a seed of an idea to a large new business line potentially.

    2. RS

      Yeah. So as a... both a product manager and a portfolio manager of multiple products. You know, 'cause I'm responsible for multiple product lines at, at GitHub (instrumental music plays). Like the, the allocation of time, of focus, energy, and resources becomes a really challenging question, the answer to which isn't always the same, depending upon the time, kind of world circumstances, organizational circumstances, technology circumstances. As a general rule, as a general principle, I certainly try to make sure that we're always reserving some capacity for bold, audacious, experimental research projects. You can kind of think of those, like those really uncertain bets as being five to 10% of the team's capacity. About 25, maybe 30% of the team's capacity should generally be on just operations. Like, how do we keep our end market products meeting customer expectations? And then the remainder of it, what is that, about 60% or so, is really on incremental progress for our end market products. You know, how do we make iterative improvements and continue to actually realize payoff for the larger bets that we made, you know, one, two, three, four years back, right? And from a rough kind of like distribution, that's generally how I run kind of my larger teams. That works when you have larger teams though. At startups, you know, where we were pretty much only a big bet, uh, obviously your percentages get very different and it becomes a matter of you're all in for that one proverbial lottery ticket, right?
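The rough portfolio split described here (five to 10% moonshots, 25 to 30% operations, the remaining roughly 60% incremental product work) can be turned into a back-of-the-envelope headcount calculation. The midpoint percentages below are chosen for illustration only.

```python
# Sketch of the capacity split described in the episode, using
# illustrative midpoints: 10% moonshots, 30% operations, rest incremental.

def allocate(team_size: int,
             moonshot: float = 0.10,
             operations: float = 0.30) -> dict[str, int]:
    """Split headcount across the three buckets; whatever is left over
    goes to incremental improvements on in-market products."""
    m = round(team_size * moonshot)
    o = round(team_size * operations)
    return {"moonshots": m, "operations": o, "incremental": team_size - m - o}
```

For a 100-person org this yields 10 / 30 / 60, matching the rough distribution in the answer; as noted, at a startup the whole team may effectively sit in the moonshot bucket.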

    3. LR

      Awesome. Thanks for sharing that. I was gonna ask you the, the, the percentages that you recommend. And so thank you for getting to that. So with that we've gotten to our very exciting lightning round. I'm just gonna ask you five questions briefly and just whatever comes to mind, whatever answer you have, let's-

    4. RS

      Sure.

    5. LR

      ... let's do it. Sound good?

    6. RS

      Yeah.

    7. LR

      Okay. Uh, what are, what are two or three books that you recommend most to other people?

    8. RS

      Oh, good question. So one of 'em is a book on user experience called Make It So. Uh, it's a reference back to Star Trek. And the idea here is essentially that, uh, user experiences that are presented to us in sci-fi often make their way into our everyday, you know, products and tools, you know, 20, 30 years down the line. It is a great, eye-opening, illuminating, and just really fun book. So that's one. And then completely different take. I'll go outside of tech and I'll just do like entertainment value. There's a, uh, David Foster Wallace book called Brief Interviews with Hideous Men that I love.

    9. LR

      (laughs)

    10. RS

      It's a, it's a collection of short stories. And essentially, what it is, is it is like if you're watching a movie and like the villain gets their opportunity to like have their big speech which kind of explains why they are who they are,

  12. 58:17 – 1:04:59

    Lightning round

    1. RS

      right? And makes them maybe a little bit vulnerable in that moment. It's that speech like 10 times over, for different hideous people. Terrible, terrible people. So, an inter- interesting read. I recommend it.

    2. LR

      I love that. Reminds me of this book that is The Interior Design of Dictators and they show you-

    3. RS

      (laughs)

    4. LR

      ... their like homes of like Saddam Hussein, Hitler, and all these guys. (laughs) And it's entertaining.

    5. RS

      Dude. Oh my gosh, that's awesome. I got, I gotta find that one.

    6. LR

      Yeah. (laughs)

    7. RS

      You'll have to, uh, you'll have to send it to me.

    8. LR

      It's like a... Found one at a old bookstore, like used bookstore. So I don't know, I don't know whether they're around anymore, but we'll, I'll, I'll find it.

    9. RS

      That's awesome.

    10. LR

      Okay. Second question. What's a favorite other podcast that you like to listen to or recommend, if there's any?

    11. RS

      Oh God, there's so many. I am, uh, I, I consume hundreds of hours of podcasts every month. It is crazy-

    12. LR

      How are you?

    13. RS

      Uh, yeah, I, um, so I can choose many. I'll give you just one. So, um, The Memory Palace with Nate DiMeo is an excellent storytelling podcast. He does about 20-minute vignettes, usually selected from kind of American history. He also was the artist in residence at, uh, one of the museums in Washington, DC. And if you're ever at, I think it's the American History Museum or something like that, if you're ever there, you can go to like different rooms in the museum and he'll tell you stories about the, kind of the objects or the rooms that you see there. It's a magical experience. Recommend it to anyone.

    14. LR

      Wow. I love those. What's a recent movie or TV show that you've really enjoyed?

    15. RS

      I don't know how, if this counts as recent, but it's, it's one that I watched recently, which was-

    16. LR

      Counts.

    17. RS

      ... Arrival. Yeah, that counts.

    18. LR

      Mm.

    19. RS

      Uh, Arrival. So a movie about, ostensibly about aliens, but is really about language and memory. And I found that really, really compelling.

    20. LR

      Have you read Cixin's books and short stories?

    21. RS

      I have not. I have not.

    22. LR

      Oh, wow. Oh, you would love it. It's, uh, like that Arrival is from one of his story, I believe, is one of his stories and there's a whole book of many more short stories by the same guy, and they're amazing.

    23. RS

      Oh, brilliant. Okay.

    24. LR

      Yeah.

    25. RS

      I, I, I've got my, um, my, my weekend cut out for me then.

    26. LR

      There you go. Just leave work and get to reading.

    27. RS

      (laughs)

    28. LR

      Um, what's a favorite interview question that you like to ask in interviews?

    29. RS

      Let's see here. So I'll give you a fun one more than it is a challenging one. This is kind of my icebreaker interview question, particularly for like more early to mid-career product managers. So I ask them to teach me something new in one minute. And so-

    30. LR

      Oh, wow.

Episode duration: 1:04:59
