Lenny's Podcast
OpenAI's Sherwin Wu: How Codex reviews 100% of OpenAI's PRs
How OpenAI uses Codex on every code review, shrinking review time sharply; engineers now manage fleets of AI agents inside the company at scale.
EVERY SPOKEN WORD
90 min read · 17,990 words
- 0:00 – 3:10
Introduction to Sherwin Wu
- SWSherwin Wu
ninety-five percent of engineers use Codex. One hundred percent of our PRs are reviewed by Codex.
- LRLenny Rachitsky
For engineers, I don't know what job has changed more in the past couple years.
- SWSherwin Wu
Engineers are becoming tech leads. They're managing fleets and fleets of agents. It literally feels like we're wizards casting all these spells, and these spells are kinda like going out and doing things for you.
- LRLenny Rachitsky
What do you think people aren't pricing in yet?
- SWSherwin Wu
The second or third order effects of the one-person billion-dollar startup. To enable a one-person billion-dollar startup, there might be a hundred other small startups building bespoke software. So I think we might actually enter into a golden age of B2B SaaS.
- LRLenny Rachitsky
I've been hearing more and more there's this stress people feel when their agents aren't working.
- SWSherwin Wu
There's a team that's actually doing an experiment right now within OpenAI, where they are maintaining a one hundred percent Codex-written code base. They run into the exact problems that you're describing, and so usually you're like, "All right, I'll roll up my sleeves and figure it out." This team doesn't have that escape hatch.
- LRLenny Rachitsky
You've shared that listening to customers is not always the right strategy in AI.
- SWSherwin Wu
The field and the models themselves are just changing so, so quickly. They tend to, like, disrupt themselves. The models will eat your scaffolding for breakfast.
- LRLenny Rachitsky
What's your advice to folks that are like, "Okay, I don't wanna miss the boat?"
- SWSherwin Wu
Make sure you're building for where the models are going and not where they are today. There's a quote from Kevin Weil, our VP of Science here, and he likes saying: "This is the worst the models will ever be."
- LRLenny Rachitsky
[upbeat music] Today, my guest is Sherwin Wu, head of engineering for OpenAI's API and developer platform. Considering that essentially every AI startup integrates with OpenAI's APIs, Sherwin has an incredibly unique and broad view into what is going on and where things are heading. Let's get into it after a short word from our wonderful sponsors.

Today's episode is brought to you by DX, the developer intelligence platform designed by leading researchers. To thrive in the AI era, organizations need to adapt quickly, but many organization leaders struggle to answer pressing questions like: Which tools are working? How are they being used? What's actually driving value? DX provides the data and insights that leaders need to navigate this shift. With DX, companies like Dropbox, Booking.com, Adyen, and Intercom get a deep understanding of how AI is providing value to their developers and what impact AI is having on engineering productivity. To learn more, visit DX's website at getdx.com/lenny. That's getdx.com/lenny.

Applications break in all kinds of ways: crashes, slowdowns, regressions, and the stuff that you only see once real users show up. Sentry catches it all. See what happened, where, and why, down to the commit that introduced the error, the developer who shipped it, and the exact line of code, all in one connected view. I've definitely tried the five-tabs-and-Slack-thread approach to debugging. This is better. Sentry shows you how the request moved, what ran, what slowed down, and what users saw. Seer, Sentry's AI debugging agent, takes it from there. It uses all of that Sentry context to tell you the root cause, suggest a fix, and even opens a PR for you. It also reviews your PRs and flags any breaking changes with fixes ready to go. Try Sentry and Seer for free at sentry.io/lenny, and use code Lenny for one hundred dollars in Sentry credits. That's S-E-N-T-R-Y.io/lenny.
- 3:10 – 6:53
AI’s role in coding at OpenAI
- LRLenny Rachitsky
[upbeat music] Sherwin, thank you so much for being here, and welcome to the podcast.
- SWSherwin Wu
Thank you. Thank you for having me.
- LRLenny Rachitsky
I wanna start with what's feeling like a barometer of progress in AI, especially in engineering. What percentage of your code, if you even write code anymore, and your team's code is written by AI at this point?
- SWSherwin Wu
I do still write code occasionally. I'd actually say for managers like myself, it's way easier [chuckles] to use these AI tools than to manually code at this point. And so I know for myself and some of the other EMs, engineering managers at OpenAI, all of our code is written by Codex at this point. But more broadly, there's just so much energy, a tangible energy internally, around just how far these tools have gotten and how good Codex as a tool has gotten for us. It's a little hard for us to exactly measure how much of the code is written by AI, because the vast majority of it, I'd say close to a hundred percent, is usually generated by AI first. What we do track, though, is that at this point the vast majority of engineers use Codex on a daily basis, so ninety-five percent of engineers use Codex. One hundred percent of our PRs are reviewed by Codex daily as well, so basically any code that goes into production, that's merged in, Codex has its eyes on it and suggests improvements and changes in the PRs. That's what we're seeing internally, but by and large, the most exciting thing is just the energy. Another observation we've had is that engineers who tend to use Codex more open way more PRs. They're actually opening seventy percent more PRs than the engineers who aren't using Codex as much, and the gap is widening. The people who are opening more PRs are learning how to use the tool more and more and getting more efficient, and that seventy percent gap keeps growing over time, so it might have actually increased since I last looked at the number.
- LRLenny Rachitsky
Okay, just to make sure we hear what you're saying: you're saying all of the code of these ninety-five percent of engineers at OpenAI is written by AI. It's written, and then they review it.
- SWSherwin Wu
Yep, yep.
- LRLenny Rachitsky
It's, like, crazy that that's almost not crazy anymore, that we're just getting used to this.
- SWSherwin Wu
I think there's still some getting used to it, to be clear. There are also, I think, some engineers who trust Codex a little bit less, but basically every day I talk to someone who is blown away by something that it can do, and their bar of trust, how much they trust the model to do on its own, goes up over time. There's a quote from Kevin Weil, our VP of Science here, and he likes saying: "This is the worst the models will ever be." And so this is the worst that the models will ever be for software engineering as well, and so over time, we just see people trusting it more and more, and then we'll see the models get better and better as well.
- LRLenny Rachitsky
... Yeah, Kevin Weil, former podcast guest, he said exactly that line on this podcast-
- SWSherwin Wu
Yeah, yeah, yeah
- LRLenny Rachitsky
-a few times.
- SWSherwin Wu
Great one.
- LRLenny Rachitsky
Yeah. Peter, the Clawdbot slash Moltbot slash OpenClaw, which is what it's called now-
- SWSherwin Wu
Yeah
- LRLenny Rachitsky
... developer, recently shared that he uses Codex for his work, and he feels like anytime it does things, he just trusts that it has done the right job, and he's almost certain he could just commit it to master and it'll be great.
- SWSherwin Wu
Yeah, yeah, he's a great user of Codex. I know he's in close touch with the team and gives us great feedback. I'm not surprised that he uses it. I mean... sorry, it's called OpenClaw now.
- LRLenny Rachitsky
OpenClaw. Yeah, exactly.
- SWSherwin Wu
OpenClaw is a great product. And then I saw this morning-- I mean, this is very recent, but this morning, I think Moltbook-
- LRLenny Rachitsky
Yeah
- SWSherwin Wu
... was shared as well, and seeing all of the AI agents talk to each other is pretty surreal.
- LRLenny Rachitsky
It's basically Her happening in real life, is what I'm hearing.
- 6:53 – 12:26
The future of software engineering with AI
- LRLenny Rachitsky
[chuckles]
- SWSherwin Wu
Yeah. Yeah. [chuckles]
- LRLenny Rachitsky
So coming back to this crazy moment we're living through: for engineers in particular, we've gone from you write every line of code to now AI is writing all of your code. I don't know what job has changed more in the past couple of years, a job that we didn't expect to change this much, where the job of an engineer is so different. Across the entire lifespan of an engineer, in just the past couple of years, it's shifted to "I don't write code anymore." How do you imagine the role of an engineer and the job of a software engineer looks in the next couple of years? What is that job?
- SWSherwin Wu
Yeah, it's honestly been really cool to see. And it's part of where the excitement is, because the job is likely going to change pretty significantly over the next one to two years. It kind of feels like we're still figuring things out, though, and so there's this excitement, especially from some of the software engineers, that we're in this rare moment, maybe over the next twelve to twenty-four months, where we'll get to figure things out ourselves and set our standards for ourselves. In terms of where I see this moving, there's a common thing that everyone's saying, which is that engineers are becoming tech leads. They're basically like managers now. They're managing fleets and fleets of agents. I know many of the engineers on my team basically have ten to twenty threads being pulled on at the same time. Obviously not all actively running Codex jobs, but a lot of parallel threads. They're checking in on what they're doing, they're steering the agents in Codex and giving them feedback. And so their job has really changed from writing the code itself into being almost like a manager. In terms of where I think this will go one to two years from now, one metaphor that I always come back to here is actually from this programming textbook that I read back in college called SICP. I don't know if you've heard of it.
- LRLenny Rachitsky
Mm-hmm.
- SWSherwin Wu
Structure and Interpretation of Computer Programs, so S-I-C-P. At MIT it was really popular, and it was actually the textbook for the intro programming course for a very long time. It kind of has this cult following. It teaches you programming via a dialect of Lisp called Scheme, and so it introduces you to functional programming. It's very mind-opening that way. But the thing that was memorable for me about that book, which I read in college, is that the very beginning of it describes programming as a discipline and draws this metaphor to sorcery. It says software engineers are like wizards, programming languages are like incantations, and you're issuing these spells, and these spells go out and do things for you, and the challenge is: what incantation do you have to say to make the program do what you want? And this book was written back in the eighties, so this is [chuckles] a while ago, and I think that metaphor has persisted over time, and I think it's actually playing out as we move into this new era of vibe coding, or just whatever software engineering will look like, because programming languages were basically these incantations. They've changed over time, and the trend has been that it's become easier and easier to get the computer to do what you want via programming. And I think the current wave of AI is probably the next stage of that evolution. It is now literally incantations, because you can tell Codex, you can tell Cursor, exactly what you want to do, and then it'll go do it for you.
And I particularly like the wizard and sorcery analogy, because I think our current state is starting to move towards The Sorcerer's Apprentice, you know, from Fantasia, where Mickey Mouse finds the sorcerer's hat and tries to do all these things. I just think it's a really apt analogy, because, one, it's really powerful now. These incantations you can do are extremely high leverage, but you kind of have to know what you're doing, right? In The Sorcerer's Apprentice, the whole plot is that Mickey goes wild, the brooms go crazy, and everything's flooding. I think he literally sets the brooms off on a task and then goes to sleep. So it's vibe coding at its greatest, and then eventually the old sorcerer comes back and cleans everything up. And when I see engineers running these twenty different Codex threads at a time, there is some skill, and some seniority, and a lot of thought that needs to go into this, because you want to make sure that the models aren't going off the rails. You definitely don't want to just completely go away and ignore the thing. But it's also extremely high leverage. A very senior engineer who's really proficient with these tools can now just do way more things. And I think it's also what makes it fun. It literally feels like we're wizards now. It feels like we're closer to making it feel like this magical experience where you're casting all these spells and having software do all these things for you.
- LRLenny Rachitsky
I was thinking of The Sorcerer's Apprentice exactly as the metaphor as you were describing that, so I'm glad you went there. A previous podcast guest described it as you have a genie that grants you wishes, and it's a useful frame because you have to be very clear about the wish you want. [chuckles]
- SWSherwin Wu
Yes.
- LRLenny Rachitsky
Like, if you want to be big-
- SWSherwin Wu
Yes
- LRLenny Rachitsky
... like, how big exactly?
- SWSherwin Wu
Yeah, or it might be like the monkey's paw type thing, where, you know, [chuckles] you got what you wanted, but what are the side effects?... Yeah, I think that analogy is great, and the crazy thing for me is just the staying power of that book, SICP. People call it the Wizard Book because that is the metaphor that they weave throughout the book, and we've basically reached that point now, which
- 12:26 – 15:07
The stress of managing agents
- SWSherwin Wu
is really cool.
- LRLenny Rachitsky
There are two threads I want to follow here. One is I've been hearing more and more that there's this, like, stress that people feel when their agents aren't working. You fire off all these [chuckles] Codex agents, and then you have to stay on top of them. "Oh, shit, one's not working, I'm wasting time!" Do you feel that? Do you feel that across your team at all?
- SWSherwin Wu
Yeah, yeah. I mean, it happens all the time, and I actually think this is where the interesting part of all of this lies right now, because these models aren't perfect, these tools aren't perfect, and we're still trying to figure out how to best interact with Codex or with these AI agents to get work done. We see this come up all the time. There's a particularly interesting team that we have internally. There's a team that's actually doing an experiment right now within OpenAI, where they are basically maintaining a one hundred percent Codex-written code base. Normally you'll have the AI write code, but you'll obviously end up rewriting a lot of it, and you might need to double-check and change things. But this team is just fully Codex-pilled and leaning in entirely. And they run into the exact problems that you're describing, which is: "I wanna get this feature built, but I can't get the agent to do it." Usually there's an escape hatch where you're like, "All right, I'll roll up my sleeves and figure it out," and then instead of using Codex, I might use, like, tab-complete in Cursor and things like that. But for the experiment, this team doesn't have that escape hatch. And so then the challenge is: how do I get the agent to do this? I actually think we're gonna be publishing a blog post on some of our learnings here. But a lot of fascinating paradigms and best practices are falling out of this.
One interesting thing that we've noticed, and I don't know if this is what you feel, but we definitely feel it here, is that a lot of the time, when the coding agent is not doing what you want, it's usually a problem with context and the information that you've given it. It's either underspecified or there's just not enough information available to the agent, to Codex, about how to do something. And so the challenge is to add documentation and work around this limitation, and basically encode more of the tribal knowledge that's in your head into the code base: either via code comments or code structure itself, or via text files like .md files, skills, any type of additional resource within the repository, so that the model can better do its task. There's a whole bunch of other learnings from this group, which I think are fascinating to explore. But yeah, removing that escape hatch of no longer using the AI has allowed them to start piecing together a lot of the problems that we'll have to solve
- 15:07 – 19:29
Codex and code review automation
- SWSherwin Wu
if we really want to lean into agents.
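The tribal-knowledge idea Sherwin describes, moving context out of people's heads and into files the agent reads, can be pictured as a checked-in instructions file. This is a hypothetical sketch: AGENTS.md is a real convention that coding agents like Codex pick up, but every path, rule, and helper name below is invented for illustration and is not OpenAI's actual content.

```
# AGENTS.md (read by the coding agent before it works in this repo)

## Architecture notes (tribal knowledge that used to live only in heads)
- API handlers live in api/handlers/; business logic belongs in core/,
  never in the handler itself.
- All database access goes through the helpers in core/db.py; do not
  open raw connections.

## Conventions
- Run `make lint test` before proposing a change; CI rejects failures.
- Every new endpoint needs a matching entry in docs/endpoints.md.

## Known pitfalls
- core/cache.py is not thread-safe; wrap writes in cache_lock.
```

The point is less the specific format than making implicit decisions explicit where the agent will actually encounter them: alongside the code rather than in someone's memory.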
- LRLenny Rachitsky
Another issue people run into: you talked about how people are shipping a lot more PRs when they're working with AI. Obviously, code review is becoming a bigger challenge. Is there anything you've figured out on your team to help speed that up, to make that scale, and not just create this terrible job for people where they're just sitting there reviewing PRs all day?
- SWSherwin Wu
Yeah, I mean, one thing is Codex reviews one hundred percent of [chuckles] all of our PRs at this point. One really interesting thing that's happened is that the things we tend to hand to the models immediately tend to be the things that annoy us, the most boring parts of software engineering. It's also why it's more fun now, because we get to do more of the fun things. Speaking for myself, I really hated code reviews. It was, like, [chuckles] one of the worst things for me. I remember in my first job out of college, at Quora, I was working on the newsfeed, so I owned the code for the newsfeed and was a reviewer for it. It was the central piece of code that everyone would touch. Every morning I'd log in and there'd be, like, twenty to thirty code reviews, and I'd think: "Oh, my goodness, I gotta get through all of these." I would procrastinate, and then it grows to, like, fifty. So there were just a lot of code reviews. Codex is really good at reviewing code. One thing that we've noticed is that CLIVE 2 in particular has gotten extremely, extremely adept at reviewing code, especially when you steer it in the right direction. And so, yeah, we create a lot of PRs, but Codex reviews all of them, and it makes a code review go from a ten-to-fifteen-minute task to sometimes just a two-to-three-minute task, because you have a bunch of suggestions already baked in. A lot of the time, especially for small PRs, you actually don't even need people to review. We kind of trust Codex in this way. The original author just looks at Codex's review.
The benefit of code review is having a second pair of eyes to make sure that you're not doing anything dumb, and Codex is a pretty smart second pair of eyes at this point, so that's something that we've heavily leaned into. The general CI process, and the post-push and deployment processes, have also been heavily automated via Codex internally at this point. If you talk to a lot of engineers, the thing that annoys them the most is, after you've written your beautiful code, how do you get it into production? You gotta run through all these tests, you gotta fix lint errors, you gotta have all the code review. There's a lot of automated stuff you can do with Codex, and so we've built some tools internally that help automate that process, automate the lint. If there's a lint error, it's a very easy Codex fix; it can just patch it and then restart the CI process. We're trying to collapse all of that into as little work for an engineer as possible. And the by-product of that is they can now merge and push out a lot more PRs.
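The lint-fix-and-restart loop Sherwin describes could be wired up roughly like this in CI. This is a speculative sketch, not OpenAI's internal tooling: it assumes the Codex CLI's non-interactive `codex exec` mode is available and authenticated on the runner, and the workflow layout and `make lint` target are invented for illustration.

```
# .github/workflows/ci.yml (hypothetical)
name: ci
on: [pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run linter
        id: lint
        run: make lint
        continue-on-error: true
      - name: Let Codex patch lint errors
        if: steps.lint.outcome == 'failure'
        run: |
          # Ask the agent for a minimal fix, then verify it actually worked.
          codex exec "Fix the errors reported by 'make lint', changing as little code as possible."
          make lint
      - name: Push the fix to re-trigger CI
        if: steps.lint.outcome == 'failure'
        run: |
          git commit -am "chore: auto-fix lint errors via Codex"
          git push
```

Keeping the second `make lint` after the agent's patch is the safety check: if the model's fix doesn't pass, the job still fails rather than pushing a broken patch.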
- LRLenny Rachitsky
Codex writing the code, Codex reviewing its own code. I'm curious if you're open to using other models to review your model's work. Is that, is that a path, or is it just, "It's good enough, we don't need anything else?"
- SWSherwin Wu
So I will say there's definitely a circular thing here, and, going back to The Sorcerer's Apprentice, you wanna make sure you're not letting the brooms go crazy. And so we're very thoughtful, I'd say, around which PRs are completely Codex-reviewed. Most people still obviously take a look at their PRs, so it's not like it's going to zero; it's more like going from one hundred percent attention to, like, thirty percent attention, which helps things push through. In terms of multiple models, we obviously test a lot of models internally, and so we have a lot of those. We use external models less. We think it's important to dogfood our own models and get feedback there. But there are a lot of internal variants of models that you can use to give you a different perspective here as well, and we've found that to work quite well.
- LRLenny Rachitsky
Okay, just to make sure we get a barometer of today's world at OpenAI in terms of AI and code, just so I understand, and then I want to move on to a different topic: a hundred percent of code across OpenAI is written by Codex at this point? Is that the way to frame it?
- SWSherwin Wu
I wouldn't make the statement that a hundred percent of code running in production today was written by AI. It's kind of hard to do attribution there. But almost every engineer heavily uses Codex in all of their tasks at this point. And so if I were to guesstimate, the vast majority of code at this point was probably authored by AI.
- LRLenny Rachitsky
Incredible.
- 19:29 – 24:14
The changing role of engineering managers
- LRLenny Rachitsky
Okay, so there's a lot of talk, and we've been talking about, the IC role, the work of an IC engineer. There's less talk about the changing role of a manager, especially an engineering manager. How has your life as a manager changed with the rise of AI? What's the role of a manager in the future?
- SWSherwin Wu
It's definitely changed less than for an engineer. [chuckles] There's no Codex for managers just yet. However, I use Codex quite a bit for some of the more managery tasks that I do. I'd say a couple things are changing; there are some trends. I don't think it's changed that much yet, but I see trends, and if you play them out, you can see where a lot of this is going. One thing that's becoming increasingly clear is that Codex really empowers top performers to be a lot more productive. And I think this is maybe true for AI more broadly, across society: the people who really lean in, the people who have high agency or who really get good at these tools, will supercharge themselves. I'm noticing this now as well: the top performers end up being a lot more productive, and so you see a broader spread in team productivity. One thing that I've always done as a management philosophy is to spend the majority of my time with top performers: make sure they're unblocked, make sure they're happy, make sure they feel productive and feel heard. I think this is even more true in an AI world, where your top performers are gonna really be shooting ahead using these tools. One example is the team that's maintaining a one hundred percent Codex-generated code base. Just letting them rip and seeing what happens there is something that's paid dividends.
So that's one trend that I'm seeing: for managers, spending even more time with top performers is likely gonna continue. The other thing is more of an observation, but a lot of these AI tools are available to managers too, so less writing code, but things like ChatGPT with organizational knowledge: being able to do research and understand organizational context a lot better. Another good example is that we're doing performance reviews right now, and it's actually really easy to use ChatGPT with internal knowledge hooked up to GitHub and our Notion docs and Google Docs to get a really good sense of what a person has done over the last twelve months, and to write a little deep research report on it. My sense is managers will be able to manage much larger teams in this world. Kind of like how software engineers are managing twenty to thirty Codexes, my sense is these tools will allow people managers to be higher leverage and will allow them to manage teams of way more than the current best practice, which I think is six to eight for software engineering. You see this applied to non-engineering domains, like support or operations, where previously the size of a support team might be limited, but as you can pass off more things to agents, you can actually do more work and also manage more people. I think the same thing might happen for people management as well, especially in tech companies. And we're already seeing this.
There are some teams where EMs are managing quite a few people, and they're doing it pretty adeptly because of some of these tools, where they can get higher leverage, understand what their team's doing, understand organizational context a little bit better, and operate in that way.
- LRLenny Rachitsky
I love this advice. The way you described it is that you've always leaned into top performers and spent more time with them: unblocking them and making sure they're happy. The way Marc Andreessen, who was just on the podcast, phrased it is: "AI makes good people better, and it makes great people exceptional."
- SWSherwin Wu
Yeah, yeah.
- LRLenny Rachitsky
And what you're saying here is that doing this more and more is probably the right move: spending more time with the best people on your team to unblock them and make sure they have everything they need.
- SWSherwin Wu
Yeah, a very good example right now is a group of engineers internally who are really Codex-pilled and are thinking through what the best practices are for interacting with this model. That is just an extremely high-leverage thing for them to do. And so as a manager, I'm just like: "Yeah, go explore this," you know? Whatever best practices come out of this, we have to share with the org. We do all these knowledge-sharing sessions; we share documents and best practices everywhere. Things like that elevate everyone. And so I view that as another example of this trend that we're seeing, where the top performers really get exceptional.
- 24:14 – 31:40
The one-person billion-dollar startup
- LRLenny Rachitsky
People just, like, have a sense: this is big. AI is changing so much. The world is changing. It's gonna be a huge deal. What do you think people aren't pricing in yet into what will change and where things are heading? What's an example of something where you're like, "Okay, we're not realizing this yet"?
- SWSherwin Wu
So one of my favorite phrases to come out of this whole AI wave is the idea of the one-person billion-dollar startup. I actually think Sam may have coined it, or may have been the first one to say it, but it's fascinating to think about, right? If people are so high-leverage, at some point there will likely be a one-person billion-dollar startup. And while I think that's really, really cool, I think people aren't pricing in the second- or third-order effects of this. What the one-person billion-dollar startup implies is that one person can have so much more agency and so much more leverage using one of these tools that it's super easy for them to get everything done that they need to for their business, to ultimately create something that's worth a billion dollars. But there are a couple other implications. One of them is: if it's possible for a person to create a one-person billion-dollar startup, it also means it's way easier for people to create startups in general. One second-order effect is I think there's gonna be a huge startup boom, and a small, SMB-style boom, where anyone can build software for anything. You're starting to see this play out in the AI startup scene, where software's becoming a lot more vertical-oriented. Creating some AI tool for a particular vertical tends to work quite well, because you really lean into that domain and really understand the use case.
And so if you play out AI, there's no reason why you can't have a hundred X more of these startups. One world we might see is that, in order to enable a one-person billion-dollar startup, there might be a hundred other small startups building bespoke software that works extremely well to support those one-person billion-dollar startups. So I think we might actually enter into a golden age of B2B SaaS, and of software and startups in general. That's a really interesting trend to watch, because as it gets easier and easier to build software, and easier and easier to run a company, you might just end up seeing way more of these startups. The way I've been thinking about it is: yeah, there might be one one-person billion-dollar startup, but there might also be a hundred hundred-million-dollar startups, and tens of thousands of ten-million-dollar startups. And as an individual, it's actually pretty great to have a ten-million-dollar [chuckles] business. That's enough that you're set for life at that point. So we might really see an explosion in that way, and I feel like people aren't pricing that in. There's another kind of third-order effect, and again, as you get to further and further out predictions, there's a lot of uncertainty. If we move to a world where you have these micro companies building software that works for the one or two people who own the company and are working there, I think the startup ecosystem will change.
I think the VC ecosystem will change. We might end up in a world where there's just a handful of big players offering platforms and supporting all of these startups. But the types of venture-scale startups that can really hundred-X or thousand-X your investment might actually end up shrinking if you end up with a bunch of these smaller ten-to-fifty-million-dollar companies, which are not great for venture-scale returns, but are great for the high-agency individuals who are now really leaning into AI to build these businesses for themselves.
- LRLenny Rachitsky
I love how many order effects we've been through. [chuckles]
- SWSherwin Wu
Yeah.
- LRLenny Rachitsky
Why don't we hear the fourth-order effect now, Sherwin? I'm just joking. [chuckles]
- SWSherwin Wu
I can't. Fourth order is too giga-brain for me. I can't think that far ahead.
- LRLenny Rachitsky
It's like Inception, where everything gets slower every time you go deeper into-
- SWSherwin Wu
Yeah, yeah
- LRLenny Rachitsky
... every layer. Okay, so the billion-dollar startup. I think about this a lot, because I'm not gonna be a billion-dollar startup; what I'm doing is not venture scale in any way and not [chuckles] super high-leverage. But seeing how many support tickets I get for just the most ridiculous things, it's hard for me to imagine one person... I'm bearish on this billion-dollar startup idea, I just wanna share this thought, simply because of the support costs. Even if AI is helping you, at a billion dollars, unless your ACVs are very high and you have very few customers, just dealing with support... People could solve their own problems, but they're like, "I'll email support, ask about this thing." Dealing with that is hard to scale, in my experience. So unless you have a bunch of contractors, and I don't know, does that count as a single-person company? I feel like it's very difficult to scale a billion-dollar startup and not have someone helping you with at least the support work, and AI, I think, will only take you so far.
- SWSherwin Wu
I think that's true, and actually my view on it is slightly different, which is that Lenny's Podcast might end up becoming a billion-dollar startup. But instead of you being the one person who has to dispatch an AI to solve and fix those support tickets, what might happen is there's a whole smattering of other startups building software super tailored toward what you might need. There might be ten or twenty startups that build support software for podcasts and newsletters, and each might be a one-person startup. It doesn't need to be a big one. They might be able to code up this product very easily and build their own thing, and because it's so tailored and unique and hopefully useful for you, it might be something that you purchase as the one-person billion-dollar startup. So-
- LRLenny Rachitsky
I would buy that. I would buy that.
- SWSherwin Wu
Yeah, there's a question of what you in-house and what you outsource. What I think might happen is, because the cost of writing software and building products is collapsing so much, you might end up outsourcing a lot of this, and in doing so, reducing the size of your company. That's kinda the world I think might end up happening. Again, there's high uncertainty in what plays out here, but the end result still might be one person driving this massively leveraged company that might actually reach a billion dollars.
- LRLenny Rachitsky
I could see that. I also think about Peter of Clawdbot slash Moltbot slash OpenClaw, and just how barraged he is right now by all these asks, and emails, and pings, and DMs, and PRs. I'm curious to-- and he's not even making any money off this thing. Um...
- SWSherwin Wu
Yeah, I can't imagine what it's like to be him right now.
- LRLenny Rachitsky
[chuckles]
- SWSherwin Wu
It must be absolutely insane. It's probably like the months after we launched ChatGPT, the craziness that was-
- LRLenny Rachitsky
Yeah.
- SWSherwin Wu
Uh-
- LRLenny Rachitsky
As one man.
- SWSherwin Wu
Yeah. Yeah.
- LRLenny Rachitsky
He's coming on the pod, by the way, in a week.
- SWSherwin Wu
Oh, that's exciting!
- LRLenny Rachitsky
Um-
- SWSherwin Wu
Yeah.
- LRLenny Rachitsky
Maybe the fourth-order effect is distribution becomes increasingly important, because there are so many freaking things trying to get your attention. So people with an audience and a platform, I think, become more and more valuable, which is good-
- SWSherwin Wu
Yep
- LRLenny Rachitsky
... good stuff.
- 31:40 – 37:28
Management lessons
- LRLenny Rachitsky
Okay, I wanted to come back to your management stuff. I really loved your insight that spending more time with top performers has been really successful for you. Thinking about you as a manager of a team that's building the platform that powers basically the entire AI economy, every AI startup is building on your API, so clearly you're doing a great job. What other core management lessons have you learned? What do you find is really important and key to your success as a manager of engineers and just people?
- SWSherwin Wu
Yeah. I think a lot of the lessons I've learned here, I don't know how specific they are to the OpenAI API or some of our enterprise products in particular. My management philosophy has obviously changed over time, but it's probably stayed the same more than it's changed. One of these principles is what I talked about before, which is spending a lot of time with top performers. To be very concrete: more than fifty percent of your time with your top performers, maybe your top ten percent, really trying your best to empower them. The way I think about it comes back to this analogy of the software engineer as a surgeon, which comes from The Mythical Man-Month. It's funny, I pull it from the book, but in the book they actually describe this world where, I think they were predicting the future, because the book was written in the '70s. They said software engineering might end up moving into a world where software engineers are like surgeons: in an operating room, there's one person doing the work, one person cutting and doing all the surgery, and everyone else in the room is there to support them, right? The nurse, the resident, the fellow. The surgeon's like, "I need a scalpel," and they hand them a scalpel. They're like, "I need this tool and this machine," and they'll bring it over. Everyone's there to support the one surgeon.
And so The Mythical Man-Month actually predicted that that's the direction software engineering was gonna go. I don't think that's exactly played out; it's [chuckles] much more collaborative, and it's not only one person doing the work. But I've always really liked that analogy, and it's what I strive to emulate in my own management philosophy. Software engineering isn't really like surgery, where it's just one person doing the work, but the way I like treating the people on my team, and the way I act as a manager, is to empower them, to make them feel like they're a surgeon, insofar as making sure I'm supporting them and they have everything they need to do their work. It should feel like they have an army of people supporting them, looking around corners, and giving them everything they need, when it's really just me as the manager. The example I give is that looking around corners and unblocking people, especially from an organizational perspective, is extremely useful. And going back to the AI conversation, it's even more important nowadays, right? If people are just cranking PR after PR, the main thing bottlenecking progress and shipping tends to be organizational or process-oriented. If you, as a manager, can look around corners and unblock the team, if the surgeon needs a scalpel but the manager already has a scalpel ready for them, that's the best-case scenario. That's the way I approach management, and especially engineering management. It's something that's really stuck with me over time.
And even though software engineers aren't exactly surgeons, that metaphor has always stayed in my mind as I've progressed in my career.
- LRLenny Rachitsky
I love that, and I wonder if that's something AI can help with: looking around corners and predicting, "This engineer is gonna be blocked by this decision. We need to figure this out. We need to get them-"
- SWSherwin Wu
Yeah, that's actually a really good point. I haven't tried this yet, but I wonder what would happen if I asked ChatGPT, hooked up to company knowledge: "What are the active blockers? Look through all the Notion docs," and maybe Slack messages, it's probably in Slack somewhere, "What are the active blockers on my team, and is there something I can do to help?" That's very interesting. I have not thought about that, but you're right.
- LRLenny Rachitsky
We just had an insight right here.
- SWSherwin Wu
Yeah. Yeah, yeah.
- LRLenny Rachitsky
And I think even more interesting: what do you anticipate will be a blocker for this engineer or this team in the coming months, or-
- SWSherwin Wu
Yeah, you ask the model, you ask the AI to do the second- and third-order- [chuckles]
- LRLenny Rachitsky
There we go. [chuckles]
- SWSherwin Wu
... things. Anticipate that, and anticipate what the blockers will be next month, too, all of that.
- LRLenny Rachitsky
I think-
- SWSherwin Wu
Yeah
- LRLenny Rachitsky
... we've got a good idea right here.
- SWSherwin Wu
Yeah, yeah.
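The blocker-surfacing idea in this exchange could be sketched roughly like this. This is a toy illustration, not OpenAI's internal tooling: the `BLOCKER_HINTS` keywords, the message list, and the `gpt-5` model name are all hypothetical placeholders, and the payload is only assembled, not sent.

```python
# Illustrative sketch: pre-filter recent team messages for blocker-ish
# language, then assemble a chat-style request payload asking a model to
# summarize active blockers. All names and data here are made up.

BLOCKER_HINTS = ("blocked", "waiting on", "stuck", "need approval")

def looks_like_blocker(message: str) -> bool:
    """Cheap keyword pre-filter so the model sees only likely-relevant text."""
    lowered = message.lower()
    return any(hint in lowered for hint in BLOCKER_HINTS)

def build_blocker_request(messages: list[str], model: str = "gpt-5") -> dict:
    """Assemble a request dict asking the model to surface active blockers."""
    candidates = [m for m in messages if looks_like_blocker(m)]
    context = "\n".join(f"- {m}" for m in candidates)
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are an EM's assistant. From the excerpts, list "
                        "active blockers and one unblocking action for each."},
            {"role": "user", "content": f"Recent team messages:\n{context}"},
        ],
    }

if __name__ == "__main__":
    msgs = [
        "Shipped the retry logic, all green.",
        "Still blocked on infra review for the new queue.",
        "Waiting on legal approval before enabling the EU rollout.",
    ]
    payload = build_blocker_request(msgs)
    print(payload["model"], len(payload["messages"]))
```

In a real setup, the message list would come from whatever Slack/Notion connectors the company uses, and the payload would go to an actual chat API; the pre-filter just keeps the context small.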
- LRLenny Rachitsky
[music] This episode is brought to you by Datadog, now home to Eppo, the leading experimentation and feature flagging platform. Product managers at the world's best companies use Datadog, the same platform their engineers rely on every day, to connect product insights to product issues like bugs, UX friction, and business impact. It starts with product analytics, where PMs can watch replays, review funnels, dive into retention, and explore their growth metrics. Where other tools stop, Datadog goes even further. It helps you actually diagnose the impact of funnel drop-offs, bugs, and UX friction. Once you know where to focus, experiments prove what works. I saw this firsthand when I was at Airbnb, where our experimentation platform was critical for analyzing what worked and where things went wrong, and the same team that built experimentation at Airbnb built Eppo. Datadog then lets you go beyond the numbers with session replay. Watch exactly how users interact with heat maps and scroll maps to truly understand their behavior. And all of this is powered by feature flags that are tied to real-time data, so that you can roll out safely, target precisely, and learn continuously. Datadog is more than engineering metrics. It's where great product teams learn faster, fix smarter, and ship with confidence. Request a demo at datadoghq.com/lenny. That's datadoghq.com/lenny.
- 37:28 – 43:56
Challenges and best practices in AI deployment
- LRLenny Rachitsky
Okay, I'm gonna shift to talking about the API and the platform that you all build. You work with a lot of companies implementing your API and building on your platform and tools. You told me that you find a lot of companies actually have negative ROI on their AI deployments, which I think is what a lot of people read about, feel, and think, and it's interesting that you're actually seeing it. What's going on there? What are they doing wrong? What's happening in the world of AI deployments and ROI?
- SWSherwin Wu
Yeah. So to be clear, I don't explicitly see quantitative numbers around this. It's actually really hard to measure these things. But from observing some companies trying to do AI, I would not be surprised if a lot of AI deployments are actually negative ROI. Part of this, too, is I think there's a general sentiment from folks around the country, basically outside of tech, that AI is being forced onto them, and I think that's probably a symptom of some negative-ROI AI deployments. A couple of things I've observed: one, and I come back to this again and again, I think we in Silicon Valley forget that we live in a bubble. Twitter is a bubble, sorry, X is a bubble. Silicon Valley is a bubble. Software engineering is a bubble. Most people in the world, most people in the US, are not software engineers, are not very AI-pilled, are not following every single model release, and so are highly out of the loop on how to use this technology. We always talk about all these best practices for Codex, all these Codex-pilled people within OpenAI. I'm sure everyone who posts on X is a crazy power user of these AI tools. They lean into skills, they lean into agents.md-
- LRLenny Rachitsky
MCPs.
- SWSherwin Wu
Yes, all of that. And when I talk to some of these companies, and I talk to the actual employees using these tools, it's the most basic things they're trying to do, and they have very little understanding of exactly how this technology works. So that's one big observation for me: they're asking very simple questions of these things. They're really not pushing them yet. And that ties into what I think more companies could do, or what a more ideal AI deployment setup looks like, which is also how we've run things within OpenAI. The companies where it's started to work really well have a combination of both top-down buy-in, so the C-suite says, "We wanna become an AI-first company," there's buy-in, they buy the tools, they have exec support, and bottoms-up adoption and buy-in. What I mean by that is the actual employees doing the work are really excited about this technology and are willing to learn, evangelize, build best practices, and knowledge-share within the organization. We've seen this a lot internally. Obviously OpenAI has always wanted to be a very AI-centric company, but where it really started taking off was with the introduction of Codex and these tools, where actual employees themselves could start applying it to their work. I think you really need this because, at the end of the day, everyone's work is very different, very unique. Software engineering is different than finance, is different than operations, is different than go-to-market and sales.
There are a lot of last-mile intricacies of work that really need to be handled in a bottoms-up fashion. My sense is a lot of these AI deployments don't have bottoms-up adoption. It was an exec mandate, it's extremely top-down, and it's very divorced from what the actual work looks like. As an end result, you end up with a giant workforce that doesn't really understand the technology and is like, "I know I'm supposed to use this, and maybe it's on my performance review, too, but I'm not sure what to do." They look around, no one else is doing it, there's no one else to learn from. So my recommendation for companies pushing this is: find, or maybe even staff, a full-time internal tiger team that can explore the full extent of the capabilities, apply them to specific workflows, do the knowledge sharing, and create excitement among folks who might wanna use this technology. Because in the absence of that, it's actually very difficult for it to pick up.
- LRLenny Rachitsky
And who would you put on this tiger team? Do you find in your experience that it's engineer-led? Is it a cross-functional team?
- SWSherwin Wu
Yeah, it's interesting, because a lot of companies don't have software engineers. The pattern I've seen is it tends to be these software-engineering-adjacent, basically technical people who are not software engineers. Those are the ones who tend to get most excited around this. It's maybe the support-team operations lead, who doesn't code but loves using these tools and is an Excel wizard or something. So it's technical-adjacent, coding-adjacent, pretty technical people. Those are the kinds of people I've seen in these companies who really light up and get excited around this, and you can usually build a team around that. But it's oftentimes not software engineers. Software engineers, I think, will understand this, but not every company has software engineers. They're actually kind of a rarity; they're hard to find, they're expensive. And so it's these other types of folks.
- LRLenny Rachitsky
What I'm hearing is the anti-pattern is top-down: the CEO and exec team just declare, "We are gonna go AI-first, we're gonna lean into AI. Everyone's gonna be judged on their performance using AI tools, on how much your productivity is increasing thanks to AI." And with that being just top-down, without creating a team that is bottom-up spreading the gospel, you find that it doesn't work.
- SWSherwin Wu
Yeah. Yeah, exactly. Exactly.
- LRLenny Rachitsky
And the advice is: find the people that are most excited, and instead of having them spread out through the organization, what you find works is creating a little AI evangelist team that-
- SWSherwin Wu
Yeah
- LRLenny Rachitsky
... finds ways to use it and kinda spreads it across the org.
- SWSherwin Wu
Yeah. Hearing you play it back to me, another way to think about it, tying back to [chuckles] my own management philosophies, is: find the high performers in AI adoption and empower them. Let them run hackathons, let them hold seminars, do knowledge sharing, create the seeds of excitement
- 43:56 – 48:57
Hot takes on AI and customer feedback
- SWSherwin Wu
internally.
- LRLenny Rachitsky
Okay, amazing. There are a couple of hot takes I wanna hear from you, something I've seen you talk about and share. One is, you've shared that talking to customers and listening to customers is not always the right strategy in AI, and it might often lead you astray.
- SWSherwin Wu
I don't know if it's that hot of a take. The main thing here is, obviously you should talk to your customers. You still talk to customers. I just think the AI field, especially what I've seen over the last three years working on the API and watching all of that evolve, the field and the models themselves are just changing so, so quickly. They tend to disrupt themselves, especially around the tooling and the scaffolding space. There's this quote I read earlier this week from an X article by this guy named Nicholas, the founder of a startup called Fintool, where he was sharing a lot of the best practices he's learned building AI agents for financial services at Fintool. He had this phrase that I thought was really good: "The models will eat your scaffolding for breakfast." If you rewind back to 2022, right when ChatGPT launched, these models were pretty raw, and there was all this product scaffolding, especially in the developer space, to try and steer the model, to build scaffolding around it to get it to do what you want. Agent frameworks; vector stores, which I think were really popular back then; a whole smattering of tools. And as you've seen the field play out, the models have changed so much, and gotten so much better, that they ended up literally eating some of that scaffolding. I think this is even true today. The current scaffolding, which is fashionable, is skills, file-based context management.
I could see a world where at some point that's no longer useful, [chuckles] where the model can manage all of that itself, or we might move on to some new paradigm where you no longer need this file-based skills type of thing. It's hard to predict. You've literally seen this play out, right? The agent frameworks, I think, are a little less useful now. There was a period of time in twenty twenty-three where we thought vector stores were gonna be the main way for you to bring organizational context into the models: you'd need to vectorize and embed every bit of your corpus, then do all this work to figure out the vector search, to optimize it, to pull out the right information at the right time. All of that is scaffolding because the model was not good enough. And it turns out, as the models get better, a better approach is actually to take out a lot of that logic, trust the model, and give it a set of tools for search. It doesn't need to be a vector store. You could hook it up to any type of search. It could literally be files on a file system, like skills and agents.md, to steer it as well. Obviously there's still a place for vector stores, I know a lot of companies are still using them, but the entire scaffolding around that, building an entire ecosystem around it and assuming that's the only scaffolding you need, has really changed.
So tying this back: you don't always have to listen to your customers, because the field is changing so much. At any point in time, a lot of people are in a local maximum, and if you just blindly listen to your customers, they'll be like: "Yeah, I want a better vector store. I want a better agent framework for this." If you had only chased down that path, it would have led you to build something that, again, is a local maximum. Whereas as the models get better, we've had to reinvent and rethink the right abstractions and the right tools and frameworks to build around these models. The cool slash exciting slash kind of crazy, annoying part is that it's a moving target. The current smattering of tools and frameworks will likely need to evolve and change pretty significantly over time as the models get smarter and better. But that is just the nature of building in this space. I think that's what makes it exciting. It also means that when you talk to customers, you need to balance the exact feedback they want with where you think the models are going and where you think things will trend over the next one to two years.
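The "trust the model, give it a search tool" pattern Sherwin describes can be sketched in a few lines: instead of a pre-retrieval embedding pipeline, you expose a plain search function the model can call when it decides it needs context. This is a toy illustration, the file contents and tool name are invented, and the tool definition just follows the common JSON function-calling shape rather than any specific product's API.

```python
# Sketch: replace vector-store scaffolding with a simple callable search
# tool. The model, not your pipeline, decides what to query and when.
# FILES stands in for docs on a file system; data here is made up.

FILES = {
    "agents.md": "Run `make test` before opening a PR. Prefer small diffs.",
    "oncall.md": "Escalate pages older than 15 minutes to the secondary.",
}

def search_files(query: str, files: dict[str, str] = FILES) -> list[dict]:
    """Naive case-insensitive substring search over the doc set."""
    q = query.lower()
    return [{"path": path, "snippet": text}
            for path, text in files.items() if q in text.lower()]

# Tool definition you'd hand the model instead of pre-retrieving chunks,
# in the common JSON function-calling shape.
SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "search_files",
        "description": "Keyword search over the team's docs.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

if __name__ == "__main__":
    print(search_files("escalate")[0]["path"])
```

The design point is that the search backend is swappable, grep, full-text, or a vector store, without changing what the model sees: it only knows it has a `search_files` tool.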
- LRLenny Rachitsky
It's interesting how this echoes the bitter lesson, this big lesson that AI and ML folks learned, which is: the less you overcomplicate things, the less logic you add to machine learning, to AI, the more it'll be able to scale and grow. Take it all the way and just let it compute, basically. Just give it more power to get-
- SWSherwin Wu
Yeah
- LRLenny Rachitsky
... smarter on its own.
- SWSherwin Wu
Yeah, there's literally a version of the bitter lesson applied to building with AI, where we were trying to architect all this stuff around the models, and it turns out the models just eat it all away. And honestly, the OpenAI API team has been guilty of this, where we took some left and right turns when we shouldn't have. But the models get better, and we're all learning the bitter lesson
- 48:57 – 50:16
Building for future AI capabilities
- SWSherwin Wu
day in and day out.
- LRLenny Rachitsky
So what would be the key takeaway for folks building on, say, the API, or just building agents, and having to build a little bit of this scaffolding for now? What would be the advice?
- SWSherwin Wu
My general advice, and I've been giving this to people for a while, and I think it's still true today, is: make sure you're building for where the models are going and not where they are today. It's clearly a moving target, and a lot of the startups I've seen really do well build a product for an ideal capability that's maybe eighty percent of the way there today. They end up with a product that kind of works, but it's just almost there. Then as the models get better, suddenly it clicks, and their product is incredible because it works. Maybe with o3 at some point it suddenly works; with five-point-one, five-point-two, suddenly it unlocks. They're building these products with the model capability improvements in mind, and with that, you end up creating an experience that's way better than if you had assumed the models were static in the first place. So that would be my general advice: build for where the models are going, not where they are today. You end up building a better product. You may need to wait a little bit, but the models are getting so much better so quickly, you often don't need to wait
- 50:16 – 53:35
Where models are headed in the next 18 months
- SWSherwin Wu
that long.
- LRLenny Rachitsky
So to follow that thread: in the next six to twelve months, where is the API heading? Where's the platform heading? Where are the models heading? As much as you can share, I know there are a lot of secrets here. What are you most excited about, or what should people start to prepare for?
- SWSherwin Wu
I mean, the obvious one is how long of a task these models can do coherently. There's the METR benchmark, which tracks software engineering tasks and how long of a task these models can do fifty percent of the time, or eighty percent of the time. I think we're at something like multi-hour software engineering tasks being done by frontier models fifty percent of the time, and the eighty percent figure is something like just under an hour. The sobering thing about that chart is they plot all the previous models [chuckles] on it as well, so you can really see the trend. That's something I'm really excited about. I actually think products today really optimize for tasks the model can do for minutes at a time. Even Codex and the coding tools, it's in the CLI, you're seeing it be interactive; it's really optimized for maybe at most ten-minute tasks. I have seen people push Codex to the [chuckles] limit into multi-hour tasks, but I think that's more the exception. If you follow this trend, in the next twelve to eighteen months we could see models that can do multi-hour tasks very coherently. At some point it might reach six hours, or a day-long task, where you dispatch it and have it do things on its own for a while. The types of products you build around that will look very different. You want to give the model feedback. You obviously don't want it to completely run wild for a day. Maybe you do, but you probably don't. And then the universe of things you can have the model do really expands.
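The trend he describes is exponential: the task length a model can handle at a given success rate keeps doubling. As a rough sketch, METR has reported a doubling time on the order of seven months for the 50%-success horizon; the function below extrapolates under that assumption, and all the specific numbers are illustrative, not official figures.

```python
def projected_horizon_minutes(horizon_now_min: float,
                              months_ahead: float,
                              doubling_months: float = 7.0) -> float:
    """Extrapolate a task-length horizon that doubles every
    `doubling_months` months (illustrative assumption)."""
    return horizon_now_min * 2 ** (months_ahead / doubling_months)

# E.g., a 2-hour horizon today, projected 14 months out:
# projected_horizon_minutes(120, 14) -> 480.0 (an 8-hour, day-long task)
```

This is exactly why "build for where the models are going" compounds: a product tuned for ten-minute tasks today is addressing a horizon that, on this trend, quadruples within about fourteen months.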
So that's something I'm really excited about seeing. Another thing over the next twelve to eighteen months, which I think will be really cool, is improvements in the multimodal models. And by multimodality, I'm mostly thinking about audio here. The models are pretty good at audio, and I think they're going to get a lot better over the next six to twelve months, especially the native multimodal models, the speech-to-speech ones. There's also interesting work being done on new types of models and architectures on the multimodal audio side. But audio, especially in the enterprise and in business settings, I think is still a hugely underrated domain. Everyone talks about coding, which is all text, but we're talking [chuckles] in audio. A lot of the world's business is done via audio; a lot of services and operations are done via talking. So I think that area is going to look very exciting in the next twelve to eighteen months, and there will be even more unlock for what we can do with audio models there as well.
- LRLenny Rachitsky
Amazing. So quick summary: expect agents and AI tools to run longer, that trajectory to continue to increase-
- SWSherwin Wu
Yep
- LRLenny Rachitsky
... and then audio and speech becoming a bigger deal, more first-party and native and better, and core to the experience.
- SWSherwin Wu
Yeah.
- 53:35 – 57:22
Business process automation
- LRLenny Rachitsky
Extremely cool. Okay, I want to go back to one of your hot takes. Another hot take I've seen you discuss: you're very bullish on business process automation as an opportunity in the world of AI. Talk about that.
- SWSherwin Wu
Yeah, this goes back to the thing I said previously, which is that we live in a bubble in Silicon Valley, and a lot of the work we're used to, software engineering, product management, building products, is very differently shaped from the work that runs our entire economy. I see this day in and day out when I talk to customers. If you talk to any company that's not a tech company, [chuckles] there are a lot of business processes. What I mean by this is, I generally delineate it like this: software engineering is open-ended knowledge work. This is why I think tools like Codex tend to be quite good, because they're exploring, and you're giving them open-ended things. But software engineering is fundamentally pretty open-ended, and it's not very repeatable. You build a feature, and you're not trying to build the exact same feature over and over again. A lot of tech jobs are in this space. I think data science is in this space as well, and even some of the strategic finance stuff. But as you move further and further away from software engineering and what is core in tech, a lot of jobs are just business processes. They're repeatable operations that some manager at a company has iterated on. There's usually a standard operating procedure that people want to follow, and you don't want to deviate from it that much. In software engineering, the ingenuity is in deviating, but a lot of the work being done in the world is actually just running through these procedures and operations.
If I call a support line, they're running through one of these. If I call my utility company, there's a bunch of processes and things they can and cannot do for me. So I'm extremely bullish on this general category, and I think it's underrated because it's so different from what we think about in Silicon Valley; people tend not to think about it. But how can we apply AI, and some of the tools and frameworks we have, toward business process automation: automating and making easier these repeatable business processes with high determinism, fully integrated with business data, business decisions, and the different systems within an enterprise? How can we actually make that process better? I think there's a lot of opportunity and a lot of work to be done in that area, and we just don't talk about it because it's a little bit less in our wheelhouse.
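The key property he names is high determinism: the procedure itself stays fixed, and an AI model would only handle the fuzzy steps (like classifying what the caller wants) while the SOP constrains what can happen next. A minimal sketch of that idea, with an entirely hypothetical support-line SOP encoded as a state machine:

```python
# Hypothetical support-line SOP as a state machine: each state maps a
# classified event to the next state, so deviating from the procedure
# is impossible by construction. A model might supply the event labels,
# but the transitions themselves are deterministic.
SOP = {
    "start":          {"billing": "verify_account", "outage": "check_status"},
    "verify_account": {"verified": "resolve_billing", "failed": "escalate"},
    "check_status":   {"known_outage": "notify_eta", "unknown": "escalate"},
}

TERMINAL = {"resolve_billing", "escalate", "notify_eta"}

def run_process(events):
    """Walk the SOP with a sequence of classified events and
    return the terminal state reached."""
    state = "start"
    for event in events:
        if state in TERMINAL:
            break
        state = SOP[state][event]
    return state
```

For example, `run_process(["billing", "verified"])` lands in `resolve_billing`, while an unrecognized outage escalates; the AI's free-form judgment is boxed into choosing among allowed transitions.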
- LRLenny Rachitsky
So your take here, just to make sure I fully understand it, is that there's a much bigger opportunity outside of engineering for AI to impact the productivity of companies, and also the jobs of the folks doing these kinds of repetitive, easily automated tasks?
- SWSherwin Wu
Impact jobs, and also just impact how work is done. So much of work is done in this way. I talk to customers all the time, big enterprises, asking, "How will AI transform my company? How will it run in a world with AI in twenty years?" Software engineering is part of the story, but there's so much more on the business process side, and I actually think it might look even more different there, and the work there is pretty substantial. It's actually interesting: on an absolute basis, I don't know if it's bigger or smaller than software engineering. Software engineering is pretty huge and pretty extensive as well. But business process work is massive, and it's definitely bigger than you would think based off of how people talk about it, or don't
- 57:22 – 1:00:50
OpenAI’s ecosystem and platform strategy
- SWSherwin Wu
talk about it on X or Twitter.
- LRLenny Rachitsky
Okay. And going in a slightly different direction: having built the platform and the API, with people building on the API, the biggest question on people's minds is always: How do I not have OpenAI squash my idea, build their own thing, and destroy this market I created? What's the general philosophy of how startups should think about where OpenAI is unlikely to go?
- SWSherwin Wu
My general answer here is that the market is so big and so massive, I actually think startups should just not overly think about where OpenAI or these labs are going. I've talked to a lot of startups that have not worked out, and startups that are doing really well. Every startup I've seen that has fizzled out did so not because OpenAI or a big lab or Google came to squash them; it's because they built something that really didn't resonate with customers. Whereas the ones that take off, even in very competitive spaces like coding, like Cursor, which is huge at this point, it's because they built something that people really love. So my general advice is: don't overly stress about this. Just build something that people like, and you will have a space in this. I can't overstate how big of an opportunity there is right now. The opportunity space in building with AI is so big. A good example is that the Overton window of what is acceptable for VCs to do has completely changed. [chuckles] VCs are investing in competitive companies left and right, just because the opportunity is unlike anything we've seen before. And while that affects how VCs operate, from a startup perspective it's the most empowering thing in the world, because even if you just build something that some people really, really love, you will end up with a massively valuable business. That's why I tell people: "Don't overly think about it."
The other thing I think is important to remember, at least from an OpenAI perspective, one thing we've always held very near and dear, which both Sam and Greg help reinforce from the top, is that we view ourselves fundamentally as an ecosystem platform company. The API was our first product. We think it's really important for us to foster this ecosystem and continue to support it, and not squash it. If you look at the decisions we make, this is weaved through all of them. Every single model we've released in one of our products gets released in the API. Even the Codex models we've released now, which are a little more optimized for the Codex harness, always find their way into the API, and all of our customers end up using them. We don't hold back on any of that. We think it's really important to keep our platform neutral, so we don't block competitors. We allow people to have access to our models. We've also recently been testing more of the sign-in with ChatGPT product. We want to foster this ecosystem, and we think it's really important that we do so. The general thinking is that a rising tide lifts all boats. We might be an aircraft [chuckles] carrier, we're pretty big at this point, but we think it's important to raise the tide, because everyone benefits, and I think we'll benefit as well. Our API itself has grown pretty significantly because we act this way. So I'd really encourage people not to view OpenAI as this thing that will shove people out of the way, but instead focus on building something valuable.
And we remain committed to providing an open ecosystem.
- 1:00:50 – 1:05:21
OpenAI’s mission and global impact
- LRLenny Rachitsky
... why is that important to OpenAI, this focus on building a platform, creating a way for people to build businesses? Has that been the vision from the beginning, that you want this to be a platform?
- SWSherwin Wu
It's been the vision from the beginning. It goes back to our charter, actually, our mission. OpenAI's mission has always been, one, to build AGI, so that's where I was getting that. But the second thing is to spread the benefits of it to all of humanity, and the main part there is all of humanity. Obviously, ChatGPT is trying to do this; we're trying to reach the whole world. But very early on, and this is why we launched the API back in, I think, twenty twenty, really early, we realized we don't think we, as a company, will be able to reach all of humanity. Every corner of the world is pretty deep. So we actually feel that in order to fulfill our mission, we need some platform-style offering, where we can empower other people to build, say, the customer support bot for podcasters and newsletter hosts, because we're not going to be able to do it ourselves. We've largely seen this play out with the API. This is why we talk to so many of our customers and really love seeing the diversity of things built on it. But yeah, it's been there since day one, because we view it as an expression of our mission.
- LRLenny Rachitsky
And you haven't even mentioned the app store that you guys are launching, the ChatGPT App Store.
- SWSherwin Wu
Yeah.
- LRLenny Rachitsky
Is that under your umbrella, by the way, or is that a different org and team?
- SWSherwin Wu
It's a, it's a different team.
- LRLenny Rachitsky
Okay.
- SWSherwin Wu
So it's under ChatGPT. We obviously collaborate very closely with them, and they built an apps SDK in close collaboration with our team. But that is more within the ChatGPT umbrella. That's another example of this, though, right? ChatGPT has these eight hundred million weekly active users who are coming back over and over again. It's a great asset to have as a business, but man, would it be better if we could somehow allow other companies to come in and take advantage of this as well, and build for this audience. And ultimately, we think it will help us expand that group too. So it all comes back to the mission, and we find that being a platform, being open, tends to help here.
- LRLenny Rachitsky
Just that number, eight hundred million. I think it's MAUs, just-
- SWSherwin Wu
No, no, no. It's weekly, weekly-
- LRLenny Rachitsky
Weekly ac-
- SWSherwin Wu
Yeah, it's crazy.
- LRLenny Rachitsky
Nearly a billion people using it weekly. It's absurd how we're just used to these numbers now, but that's insane, unprecedented.
- SWSherwin Wu
Yeah, it's mind-boggling for me to think about from a scale perspective, honestly. And the way I think about it is: ten percent of the world-
- LRLenny Rachitsky
Yeah
- SWSherwin Wu
... and growing, by the way. It's just shooting up. They come to ChatGPT and use it every day, or sorry, every week.
- LRLenny Rachitsky
And on this point, I just want to double down on the point you're making. OpenAI's mission was to make AI available to all of humanity, and I think some people diss that. They're like, "Oh, you know, it costs money." But there's a free version of ChatGPT that anybody can use that is not so different from the most powerful AI model that exists in the world. It's free, it's not gated, anyone can use it. If you're a billionaire, there's only so much more you can get out of AI than what someone in a village in Africa can get. And I know that's always been really important to OpenAI.
- SWSherwin Wu
Yeah, yeah. That's why I think we've leaned into the health work, and education is going to be very interesting here. The other insane trend is that the free model has gotten so smart over time. The free model back in twenty twenty-two was [chuckles] good at the time, but it's nothing compared to what you get today, because you get GPT-5 today. So raising the floor across the world is something we're really trying to do, and we view it as part of our mission. The other flip side of this, by the way, kind of talking about the billionaires: I know people love saying you're using the same iPhone that Mark Zuckerberg is probably using, that the billionaires are using. For twenty dollars a month, you're basically using the same AI the billionaires are using. For two hundred dollars a month, you get the same Pro model that all the billionaires are using, though they're probably not using Pro for everything; they're probably just using the Plus-tier models day in and day out. So this democratization, this spreading of the benefit across all of the world, seems really meaningful to us, and it drives
- 1:05:21 – 1:08:16
Building on OpenAI’s API and tools
- SWSherwin Wu
a lot of what we do.
- LRLenny Rachitsky
One last question, for folks who are thinking about building on the API, or just thinking, "Oh wait, I could do cool stuff with OpenAI's models and APIs": what does your API and platform allow people to do? I know you can build agents on top of the platform. Just talk about what you allow.
- SWSherwin Wu
So fundamentally, the API offers a bunch of developer endpoints, and these endpoints basically let you sample from our models. The most popular one right now is called the Responses API. It's an endpoint optimized for building long-running agents, agents that will work for a while. At a very low level, you're basically just giving the model text. The model will work for a while; you can poll it to see what it's doing, and then you get the model's response back at some point. That's the lowest-level primitive we have, and it's actually what a lot of people use; it's the most popular way of building on top of our API. It's super unopinionated, and you can do basically whatever you want with it. We've also started building more layers of abstraction on top to help people. The next layer up, we have the Agents SDK, which has also gotten extremely popular. It allows you to use the Responses API, or some of our other endpoints, to build what you might more traditionally think of as an agent: an AI working in a loop. It might have sub-agents that it delegates to. It starts building all this framework, all this scaffolding, actually, [chuckles] we'll see where this all goes. But it makes it a lot easier to build these agents: giving them guardrails, allowing them to farm out subtasks to other agents, orchestrating a swarm of agents. The Agents SDK allows you to do that. And then above that, we've now started building tools to help with the meta level of deploying an agent.
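The "give it text, let it work, poll it" pattern he describes can be sketched as a small loop. To keep this runnable offline, the client below is a hypothetical stub standing in for a real SDK; the method names and response shape are illustrative, not the actual OpenAI client interface.

```python
import time

class StubResponsesClient:
    """Hypothetical stand-in for an API client so the polling
    pattern is runnable offline; everything here is illustrative."""
    def __init__(self):
        self._polls = 0

    def create(self, model, input):
        # Kick off a long-running task; it starts out in progress.
        return {"id": "resp_123", "status": "in_progress"}

    def retrieve(self, response_id):
        # Pretend the model finishes after a few polls.
        self._polls += 1
        done = self._polls >= 3
        return {"id": response_id,
                "status": "completed" if done else "in_progress",
                "output_text": "hello" if done else None}

def run_to_completion(client, prompt, interval=0.0):
    """Give the model text, then poll until the task finishes."""
    resp = client.create(model="gpt-5", input=prompt)
    while resp["status"] != "completed":
        time.sleep(interval)          # back off between polls
        resp = client.retrieve(resp["id"])
    return resp["output_text"]
```

The point of the sketch is the shape of the primitive: one unopinionated create-then-poll loop, on top of which the higher layers (Agents SDK, AgentKit) add structure.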
So we have this product called AgentKit, and Widgets, which are basically a bunch of UI components you can use to very easily build a beautiful UI on top of either our API or the Agents SDK, because a lot of the time these agents look very similar from a UI perspective. We also have a smattering of evals products, like an evals API: if you want to test whether your model or your agent or your workflow is working, you can test it in a very quantitative way using our evals product. So I view it as these various layers. They're all helping you build what you want with our models, with increasing levels of abstraction and opinionation. You can use the whole stack, which very quickly lets you build an agent, or you can go as far down the stack as you want, to basically the Responses API, and build whatever you want,
- 1:08:16 – 1:19:39
Lightning round and final thoughts
- SWSherwin Wu
because of how low-level it is.
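The middle layer he describes, a lead agent with guardrails farming out subtasks to sub-agents, can be hand-rolled in a few lines. This is a toy illustration of the pattern, not the actual Agents SDK; every name and rule here is hypothetical.

```python
# Toy orchestration sketch: a lead agent screens each subtask with a
# guardrail, then delegates it to the matching sub-agent.

def guardrail(task: str) -> bool:
    # Hypothetical rule: block anything that would touch deployment.
    return "deploy" not in task

# Sub-agents are just callables here; in a real system each would be
# its own model loop.
SUB_AGENTS = {
    "research": lambda task: f"notes on {task}",
    "write":    lambda task: f"draft for {task}",
}

def lead_agent(tasks):
    """Delegate (kind, task) pairs to sub-agents, enforcing the
    guardrail before any work is farmed out."""
    results = []
    for kind, task in tasks:
        if not guardrail(task):
            results.append((task, "blocked"))
            continue
        results.append((task, SUB_AGENTS[kind](task)))
    return results
```

The real SDK adds the surrounding machinery (model calls, handoffs, tracing), but the control flow, guardrail check then delegation, is the part that stays recognizable across the layers of abstraction he lists.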
- LRLenny Rachitsky
Sherwin, is there anything else you want to share, anything you want to leave listeners with, anything we haven't touched on that you think might be helpful before we get to our very exciting lightning round?
- SWSherwin Wu
The only thing I'd leave folks with is: I think the next two to three years are going to be some of the most fun in tech and in the startup world that we'll have in a very long time, and I would encourage people not to take it for granted. I entered the workforce in 2014. It was great for a couple of years, then I felt like there was a period of five to six years where tech wasn't very exciting. And the last three years have just been the most insanely exciting, [chuckles] energizing period of my career. I think the next two to three years are going to be a continuation of that. So I'd encourage people not to take it for granted. I'm trying not to take it for granted. At some point this wave is going to play out, and it's going to be a lot more incremental. But in the meantime, we're going to get to explore a lot of really cool things, invent a lot of new things, and change the world and change how we work. That's the main thing I'd leave folks with.
- LRLenny Rachitsky
I love this message, and I want to spend a little more time on it. When you say, "Don't miss it," what do you recommend people do? Is it just build, lean in, learn, join a company building really interesting things? What's your advice to folks who are like, "Okay, I don't want to miss the boat"?
- SWSherwin Wu
Yeah, I would just say engage with it. It's basically what you said: lean in. Building tools on top of this is part of the story, but just using the tools matters too; you don't need to be a software engineer to lean into this. I think a lot of jobs are going to change here. So use the tools and understand the limitations of what they can and cannot do, so you can watch the trend of what they start to be able to do as the models improve. It's basically getting used to the technology and getting familiar with it, instead of laying back and letting it pass you by.
- LRLenny Rachitsky
On the flip side of that, there's a lot of stress and anxiety around, like, "There's so much happening. How do I keep up? I've got to learn about Clawdbot this week. Oh, God!"
- SWSherwin Wu
Yeah.
- LRLenny Rachitsky
Is there something you've learned about this? You're at the center of it. [chuckles] How do you not get overly stressed and worried about missing things that are going on? How do you stay on top of the news? What are some things you've done and learned?
- SWSherwin Wu
Yeah, so I think I'm personally a bad example of this, because I'm basically chronically online, on X and on our company Slack, so I end up absorbing a lot of it. What I will say, though, from observing other folks who are less addicted to this stuff than [chuckles] I am: a lot of it is noise. You don't need to have a hundred and ten percent of this pass through your mind. Honestly, just leaning into one or two different tools, starting small, is already more than you need. The combination of the frenetic pace of the industry and X as a product creates this insane pace of news, which is honestly very overwhelming. The main thing is, you don't need to know all of that to really engage with what's happening right now. Even something as simple as installing the Codex client and playing around with it, or installing ChatGPT, connecting it to a couple of your internal data sources, Notion, Slack, GitHub, and seeing what it can and cannot do: all of that is part of it.
- LRLenny Rachitsky
Amazing. Sherwin, with that, we've reached our very exciting lightning round. I've got five questions for you. Are you ready?
- SWSherwin Wu
Yeah. Yeah, absolutely.
- LRLenny Rachitsky
First question: What are two or three books that you find yourself recommending most to other people?
- SWSherwin Wu
I'll talk about one nonfiction and one fiction book. The fiction book, I just finished reading it, and I really recommend it: There Is No Antimemetics Division by qntm. I think he's an online author; I saw it being shared on X. It's a science-fiction book, and I basically devoured it in two days. It's super well written, super fascinating. It's about a government agency that's fighting things that make you forget them. It's a very smart, creative book, and fresh, honestly, in terms of source material, so I'd recommend that one. The book is also unintentionally hilarious. [chuckles] It's meant to be this sci-fi, almost horror-style book, but it made me laugh a couple of times. So that's the fiction book. For nonfiction, I'm going to cheat and recommend two. In the last year, I've been reading a lot more about China and US-China relations, and there are two books that came out in the last year that have been really eye-opening for me in that regard. The first is the Dan Wang book, Breakneck. That one is really, really good. I really liked his analogy of the US as the lawyerly society and China as the engineering society, with pros and cons to each. I read it and thought: hmm, it does seem like we're run by [chuckles] lawyers in the US. The other one is the Patrick McGee book, Apple in China, which was super interesting. I'm a huge Apple fanboy. If you could see my desk right now, it's all Apple stuff.
But one, it was just super fascinating learning about Apple's relationship with China, and two, it had a lot of inside information about Apple as a company that I found fascinating. So it was quite a page-turner, and a very timely book as well.
- LRLenny Rachitsky
The Antimemetics book sounds amazing. I'm buying it right now as you're talking. [chuckles]
- SWSherwin Wu
Yeah, yeah. I think it's only a couple hundred pages.
- LRLenny Rachitsky
Perfect.
- SWSherwin Wu
I literally finished it in two days.
- LRLenny Rachitsky
A dream.
- SWSherwin Wu
It was just, like, so, so good.
- LRLenny Rachitsky
Okay, great tip. Okay, uh, favorite recent movie or TV show you have really enjoyed?
- SWSherwin Wu
Yeah, that one's tough because I have two kids and a busy job, so I really haven't had much time to watch TV shows. I will say, in the last couple of weeks... I'm actually a big anime guy, and there's a new season of this anime called Jujutsu Kaisen out. I watched a couple episodes, and Season 3 of JJK was really good. In general, I'm a huge fan of Japanese anime. I think they create the most novel and unique plots and universes, ones Western media has shied away from. So I'm generally a big fan, but yeah, I haven't watched much, just a couple episodes of JJK recently.
- LRLenny Rachitsky
Extremely understandable in your role.
- SWSherwin Wu
Yeah.
- LRLenny Rachitsky
Favorite product you recently discovered that you really love?
- SWSherwin Wu
Yeah, okay. So I recently had to set up Wi-Fi and home networking, and I went all in on Ubiquiti routers and security cameras. I'd never heard of it before I had to do this; I'd always just had a very simple setup. It's just such a well-built product. I don't know if you've used it before, but it's basically the Apple of home networking: beautiful products. But the thing that actually makes it extremely good is that its software is good. They have a really great mobile app to help manage all of the home networking. You can use Ubiquiti to buy wireless routers; you need Ethernet wiring throughout your house to use it. But I actually think what makes it really good is its security cameras. If you have security cameras plugged into the Ubiquiti ecosystem, they have an incredible mobile app, Apple TV app, and iPad app to see the live feed of your cameras. They're a little pricey, but not that pricey, and it's been just an incredible product experience.
- LRLenny Rachitsky
All right. I went Eero, so I made a mistake. Good tip.
- SWSherwin Wu
Eero is pretty good too, but I-
- LRLenny Rachitsky
Not Ubiquiti
- SWSherwin Wu
... fully converted to Ubiquiti at this point.
- LRLenny Rachitsky
Okay.
Episode duration: 1:19:39
Transcript of episode B26CwKm5C1k