Lenny's Podcast: Inside OpenAI | Logan Kilpatrick (head of developer relations)
EVERY SPOKEN WORD
150 min read · 29,755 words
- 0:00 – 3:49
Logan’s background
- Logan Kilpatrick
Finding people who are high agency and work with urgency. If I was hiring five people today, like those are, like some of the top two characteristics that I would look for in people. Because you can take on the world if you have people who have high agency. And like, not needing to get 50 people's different consensus, they hear something from our customers about a challenge that they're having and like, they're already pushing on what the solution for them is and not waiting for all the other things to happen. That like, people just go and do it and solve the problem and I love that. It's so fun to be able to, to be a part of those situations.
- Lenny Rachitsky
(instrumental music) Today my guest is Logan Kilpatrick. Logan is head of developer relations at OpenAI where he supports developers building on OpenAI's APIs and ChatGPT. Before OpenAI, Logan was a machine learning engineer at Apple and advised NASA on their open source policy. If you can believe it, ChatGPT launched just over a year ago and transformed the way that we think about AI and what it means for our products and our lives. Logan has been at the front lines of this change and every day is helping developers and companies figure out how to leverage these new AI superpowers. In our conversation, we dig into examples of how people are using ChatGPT and the new GPTs and other OpenAI APIs in their work and their life. Logan shares some really interesting advice on how to get better at prompt engineering. We also get into how OpenAI operates internally, how they ship so quickly, and the two key attributes they look for in the people that they hire, plus where Logan sees the biggest opportunities for new products and new startups building on their APIs. We also get a little bit into the very dramatic weekend that OpenAI had with the board and Sam Altman and all of that, and so much more. A huge thank you to Dan Shipper and Dennis Yang for some great question suggestions. With that, I bring you Logan Kilpatrick after a short word from our sponsors. This episode is brought to you by Hex. If you're a data person, you probably have to jump between different tools to run queries, build visualizations, write Python, and send around a lot of screenshots and CSV files. Hex brings everything together. Its powerful notebook UI lets you analyze data in SQL, Python, or no code, in any combination, and work together with live multiplayer and version control. And now, Hex's AI tools can generate queries and code, create visualizations, and even kickstart a whole analysis for you, all from natural language prompts. 
It's like having an analytics copilot built right into where you're already doing your work. Then when you're ready to share, you can use Hex's drag and drop app builder to configure beautiful reports or dashboards that anyone can use. Join the hundreds of data teams like Notion, AllTrails, Loom, Mixpanel, and Algolia using Hex every day to make their work more impactful. Sign up today at hex.tech/lenny to get a 60-day free trial of the Hex team plan. That's hex.tech/lenny. This episode is brought to you by Whimsical, the iterative product workspace. Whimsical helps product managers build clarity and shared understanding faster with tools designed for solving product challenges. With Whimsical, you can easily explore new concepts using drag and drop wireframe and diagram components, create rich product briefs that show and sell your thinking, and keep your team aligned with one source of truth for all of your build requirements. Whimsical also has a library of easy-to-use templates from product leaders like myself, including a project proposal one-pager and a go-to-market worksheet. Give them a try and see how fast and easy it is to build clarity with Whimsical. Sign up at whimsical.com/lenny for 20% off a Whimsical pro plan. That's whimsical.com/lenny.
- 3:49 – 8:20
The impact of recent events on OpenAI’s team and culture
- Lenny Rachitsky
Logan, thank you so much for being here and welcome to the podcast.
- Logan Kilpatrick
Thanks for having me, Lenny. I'm super excited.
- Lenny Rachitsky
I want to start with the elephant in the room, which I think the elephant is actually leaving the room because I think this is months ago at this point, but I'm still just really curious. What was it like on the inside of OpenAI during the, the very dramatic weekend with the board and Sam and all those things? What was it like and is there a story maybe you could share that maybe people haven't heard about what it was like on the inside, what was going on?
- Logan Kilpatrick
Yeah. It was, it was definitely a very stressful, stressful Thanksgiving week. I think like the, in broad context, like, you know, OpenAI had been pushing for a really long time since ChatGPT came out and that was supposed to be like the first, one of the first weeks that like the whole company had like taken time away to like actually reset and have a break. So like very selfishly, I was super excited, spent time with my family, all that stuff. Um, and then yeah, Friday afternoon we, we got the message that all of the changes were happening and I think it was super shocking because I think, and, and this is a perspective a lot of folks share, like there's, everybody has, uh, and cont- had and continues to have like such deep trust in, in Sam and Greg and our leadership team that it was like just very surprising and we're also like a very, as far as company cultures go, like very transparent and very open. So like, you know, when there's problems or there's things going on, like we tend to hear about them and again, it was the, the first time that a lot of us had, had heard, um, some of the things that were happening between the board and, and the leadership team. So very, very surprising. I think my, my sort of, being someone who's not based in San Francisco, I was like, again, very selfishly like kind of happy that it happened over the Thanksgiving break because a lot of folks actually had like gone home to different places. So it felt, it felt like I had a little bit of comfort knowing like I wasn't the only one not in San Francisco because like everybody was meeting up in person to do a bunch of stuff and, uh, and be together during that time. So it was, it was nice to, to know that there was a few other folks who were, who were sort of out of the loop with me. I think the thing that surprised me the most was like just how quickly everybody got back to business. 
Like I flew to San Francisco the next week after Thanksgiving, which I wasn't planning to do to be with the team in person and like seeing-... literally Monday morning I was kind of walking to the office being, like, expecting, I don't know, something, like, weird to be going on or happening or like a d- and, and really it was, like, people laser-focused and, like, back to work. And I think that, that, like, speaks to, like, the caliber of, of our team and, like, everybody who's just so excited about building towards the mission that we're building towards. So I think that was, like, the most... Yeah, that was the most surprising thing of the whole, the whole incident. I think a lot of companies, like, would have had the potential to, like, truly be, like, derailed for some non-trivial amount of time by this and, like, everybody was just right back to it, which I love.
- Lenny Rachitsky
I feel like it also maybe brought the team closer together. Feels like it was a kind of (laughs) traumatic experience that may, uh, bring folks together 'cause it was something they all shared. Is there anything along those lines that's like, wow, things are a little different now?
- Logan Kilpatrick
W- one of my takeaways was I'm actually very grateful that this happened when it happened. I think, like, today the stakes are, you know, they're, they're still relatively high. Like, people have built their businesses on top of OpenAI, like, we have tons of customers who love ChatGPT so if something bad happens to us, like, we definitely impact our customers. But sort of on the world scale, like, you know, somebody else will build a, a, a model if OpenAI disappeared and, and continue towards this, this progress of, of general intelligence. I think, you know, fast-forward, like, five or 10 years if something like this would've, would've happened, um, and we sort of hadn't gone through the, the hopeful upcoming, like, org transformation and, and sort of all those changes that are going to happen, I think it would've been a little bit... or potentially much worse of, of an outcome. So I'm glad that things happened when, when the stakes are a little bit lower. And I totally agree with you. It's like, the team has been growing so rapidly over the last, like, year since I joined that it's been, it's been crazy to, to think about, like, how many new folks there are. And I really think that this, like, br- really brought people together 'cause most folks, like, historically many of the folks when I joined, what kind of banded us all together was, like, the launch of ChatGPT, the launch of GPT-4. And, like, for folks who, like, weren't around for some of those launches, it was perhaps DevDay. Uh, for folks who weren't around for DevDay, like, it was probably this event. So I think we've had these events that have really brought the, the company together cross-functionally. So hopefully all the future ones will be, like, really exciting things like, you know, GPT-5 whenever that comes and stuff like that.
- 8:20 – 9:52
Exciting developments in AI interfaces
- Lenny Rachitsky
Awesome. We're gonna talk about GPT-5. Going in a totally different direction, what is the most mind-blowing or surprising thing that you've seen AI do recently?
- Logan Kilpatrick
The things that are getting me most excited are these, like, new interfaces around AI, like the, the Rabbit R1, I don't know if you've seen that-
- Lenny Rachitsky
Mm-hmm.
- Logan Kilpatrick
... but it's a consumer hardware device. This company called TLDraw, I don't know if you've seen TLDraw.
- Lenny Rachitsky
I think you sketch something and then it makes it as a, as a website.
- Logan Kilpatrick
Yeah. And that, and that's, like, only, like, a small piece of what TLDraw is actually working on. But, like, there's all of these, like, new interfaces to interact with AI. And I think, like, I was having a conversation with the TLDraw folks a couple of days ago, like, really blows my mind to think about how chat is the predominant way that folks are using AI today. And, like, I actually think like, and, and this is my, you know, my bold case for the folks at TLDraw, I'm super excited for them to build what they're building, but they're sort of building this infinite canvas experience. And you could imagine how as you're interacting with an AI on a daily basis, like, you know, you might want to jump over to your, like, infinite canvas, which the AI has sort of filled in all the details and you might see, like, a reference to a file and to a video. And, like, all of these different things. And it's such a cool way, like, it actually makes a lot more sense from us as humans to, like, see stuff in that type of format than I think, like, just listing out a bunch of stuff in chat. So I'm really, really excited to see more people. I think, like, 2024 is the year of multimodal AI, but it's also the year that people really push the boundaries of, um, some of these, like, new UX paradigms around AI.
- 9:52 – 13:04
Using OpenAI tools to make companies more efficient
- Lenny Rachitsky
It's funny, I feel like chatbots, like, as a, as a PM for many years, it feels like every brainstorming session we had about new features, it's like, "Hey, we should build a chatbot to solve this problem." It's like the perennial, like, oh, chatbot, of course someone's gonna suggest we do a chatbot. And now they're actually useful and working and everyone's building chatbots, a lot of them based on OpenAI APIs. There's not really a question there, but maybe the question I was gonna get to this later is just when people are thinking about building a product like say TLDraw, what should they think about where OpenAI is not gonna go versus, like, here's what OpenAI is gonna do for us, we shouldn't worry about them building a version of TLDraw in the future? What's the, kind of the, the way to think about where you won't be disrupted essentially by OpenAI knowing also they may change their mind?
- Logan Kilpatrick
That's a great question. I think, like, we're, we're deeply focused on these, like, very, very general use cases, like, the general reasoning capabilities, the general coding, the general writing abilities. I think where you start to get into some of these, like, very critical applications, and I think a great example of this is, um, is actually, like, Harvey. I don't know if you've seen Harvey, but it's this legal AI use case where they're, they're building custom models and tools to help lawyers and people at, at legal firms and stuff like that. And that's a great example of, like, our models are probably never going to be as capable as, as some of the things that, that Harvey is doing because, like, our, our goal and our mission is really to solve this, like, very general use case and then people can do things like fine-tuning and build all their own custom, you know, UI and product features on top of that. And I think that's the... You know, I, I have a lot of empathy and, like, a lot of excitement for people who are, like, building these, like, very general products today. Like, I talk to a lot of developers who are building, like, you know, just general-purpose assistants and, like, general-purpose agents and stuff like that. And I think it's cool and it's a good idea. I think, like, the challenge for them is, like, they're, they are going to end up directly competing against us in those spaces. And I think there's, there's enough room for a lot of people to be successful. But, like, it... To me, like, you shouldn't be surprised when, you know, we end up launching some, like, general-purpose agent product because, like, again, we're sort of building that with GPTs today and versus, like, we're not going to launch, like, some of these, like, very verticalized products. Like, we're not going to launch, like, an AI sales agent. Like, that's just not what we're building towards. 
And companies who are and have some domain-specific knowledge and they're really excited about that problem space, like...... they can go and do that and leverage our models and, like, end up continuing to be on the cutting edge without having to, like, do all that R&D effort themselves.
- Lenny Rachitsky
Got it. So the advice I'm hearing is get specific about use cases. And that could be either models that are tuned to be especially useful for a use case like sales, or make an interface or experience solving a very, a more specific problem.
- Logan Kilpatrick
And, and I think if you're going to try and solve this, like, very general, like if you're tr- going to try to build, like, the next general assistant to compete with something like ChatGPT, like, it has to be so radically different. Like people have to really, like, be like, "Wow, this is solving, like, these 10 problems that I have with ChatGPT and therefore I'm gonna go and try your new thing." Otherwise, like, you know, we're just putting a ton of engineering effort and, and research effort into making that, like, an incredible product and it's just gonna be, like, the normal challenges of building companies. Like, it's just hard to compete against something like that.
- Lenny Rachitsky
Awesome. Okay, that's great. I was gonna get to that later, but I'm glad we touched on that. I imagine that's on the minds of many developers
- 13:04 – 18:35
Examples of using AI effectively
- Lenny Rachitsky
and founders. Kind of along the same lines, there's a lot of talk about how ChatGPT and GPTs and many of the tools you guys offer are gonna make a company much more efficient. They don't need as many engineers, data scientists, PMs, things like that. But I think it's also hard for companies to think about what should we actually, like what can we actually do to make our company more efficient? I'm curious if there's any examples that you can share of how companies have taken, built to say a GPT internally to do something so that they don't have to spend engineering hours on it. Or generally just used OpenAI tooling to make their business internally more efficient.
- Logan Kilpatrick
Yeah, that's a great question. And I wonder if, if you can put this in, like, the show notes or something like that, but there's a really great, uh, Harvard Business School study about, and I forgot which consulting firm they did it with. Maybe it was like Boston Consulting or, or something like that, but it, it might have been one of the other ones. And they talk about, like, the order of magnitude of efficiency gained for those folks who were using AI tools, and I, I think it was ChatGPT specifically in those use cases that they were using, comparatively against like folks who aren't using AI. I'm really excited also just, like, as this more time passes between the release of this technology for us to get more, like, empirical studies, 'cause like I feel this for myself, like as somebody who's an engineer today. Like I use ChatGPT and like I can ship things way faster than I would be able to. I don't have any, like, good metrics for myself to put a, to put like a, a specific number on it, but I'm guessing, like, people are working on those studies right now. I think engineering is actually like one of the highest leveraged things that you could be using AI to do today, and like really unlocking like probably on the order of at least a 50% improvement, especially for some of the like lower hanging fruit software engineering tasks. Like the models are just so capable at doing that work, and it's crazy to think, and I'm guessing actually GitHub probably has a bunch of really great studies they published around like co-pilots and I'm ... You could use those as an analogy for what people are getting from ChatGPT as well. But those are probably like the highest leverage things. And I think now with GPTs people are able to, like, go in and, and solve some of these more tactical problems. Or I think one of the general challenges with ChatGPT is like, it gives like a decent answer for like a, a lot of different use cases. 
But oftentimes it's not like particular enough to like the voice of your company or like the nuance of the work that you're doing. And I think now with GPTs like and people who are using the teams in ChatGPT and enterprise in ChatGPT, they can actually build those things, incorporate the nuance of their own company and make, make solving those tasks like much, much more domain specific. So we, we literally just launched GPTs a couple of months ago, so I don't think there's been any like good public success stories. But I'm, I'm guessing that th- that success is happening right now at companies and, and hopefully we'll hear more about that in the, in the months to come as folks like get super excited about sharing those case studies.
- Lenny Rachitsky
I'll share an example. Um, so I have this good friend, uh, his name's Dennis Yang. He works at Chime, and he told me about two things that they're doing at Chime that seem to be providing value. One is he built a GPT that helps write ads for Facebook and Google. Just b- gives you ideas for ads to run. And so that takes a little load off the marketing team or the growth team. And then he built another GPT that delivers experiment results, kinda like a data scientist with like here's the results of this experiment. And then you could talk to it and ask for like, "Hey, how, uh, how much longer do you think we should run this for?" Or, "What might this imply about our product?" And things like that. And I think it's really, really-
- Logan Kilpatrick
Love that.
- Lenny Rachitsky
... like you said. Is there anything else that comes to mind, just like things you've heard people do, just like, "Wow, that was a really smart way of..." So I get there's like engineering, co-piloting type tooling. Is there anything else that comes to mind? Just to give people a little inspiration of like, "Wow, that's an interesting way I should be thinking about using some of these tools."
- Logan Kilpatrick
I, I've seen some interesting GPTs around like, uh, the planning use cases, like you wanna do like OKR planning for your team or something like that. There's... I, I just actually saw somebody tweet it like literally yesterday. I've seen some cool like venture capital ones of like doing diligence on like a deal flow, which is kind of interesting and like getting some different perspectives. I think all of those like horizontal use cases where like you can bring in a different personality and like get perspective on different things I think is really cool. Like I've, I've personally used in, uh, GPT, the private GPT that I use myself that like helps with some of the like planning stuff for, for different quarters and like just making sure that I'm being consistent in how I'm framing things, like driving back to like individual metrics. Stuff that like when people do planning, like they often miss and like are bad at. And it's been super helpful for me to like have a GPT to like force me to think about some of those things.
- Lenny Rachitsky
Wait, can you talk more about this? What does this GPT do for you and how do you... What do you feed it?
- Logan Kilpatrick
Yeah. There's... I, I forgot what article I found online, but it was like some article that was talking about like what are the best ways to, like, set yourself up for success in planning. And I took a bunch of the like... I'll, I'll see if I can make it public after this and send you a link, but took a bunch of the examples from that and went in and put some of those suggestions into the GPT, and then when now when I do any of my planning of like I wanna build this thing, I put it through and, and have it like generate a timeline, generate all the specifics of like what are the metrics and success that I'm working for, like who, who might be some important cross-functional stakeholders to like include in the planning process, all that stuff, and um, it's been, it's been helpful.
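Logan's planning GPT isn't public, but based on what he describes (a timeline, success metrics, stakeholders, consistent framing), the Instructions field of such a GPT might look something like the following. This is a hypothetical sketch, not his actual prompt:

```text
You are a quarterly planning assistant. Whenever I describe something I
want to build, do the following:
1. Restate the goal in one sentence and ask about anything ambiguous.
2. Generate a timeline with concrete milestones.
3. Propose the success metrics this work should drive, and tie each
   part of the plan back to an individual metric.
4. List the cross-functional stakeholders who should be included in
   the planning process.
5. Flag anywhere my framing is inconsistent with earlier plans in this
   conversation.
```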
- Lenny Rachitsky
Wow, that is very cool. That would be awesome if you made it public. And if we do, we'll link to it and we'll make it the number one most popular GPT in, in the store.
- 18:35 – 22:12
Prompt engineering
- Logan Kilpatrick
I love it.
- Lenny Rachitsky
Going in a slightly different direction, there's this whole genre of prompt engineering. It feels like it's one of these really emerging skills. I actually saw a startup hiring a prompt engineer, one of my... the startups I've invested in. And I think that's gonna blow a lot of people's minds, that there's this new (laughs) job that's emerging. And I know the idea is this won't last forever, that in theory AI will be so smart, you don't need to really think about how to be smart about asking it for things you need it to do. But can you just describe this idea of what is prompt engineering, this term that people might be hearing? And then even more interesting than just like... What advice do you have for people to get better at writing prompts for, say, ChatGPT or th- through the API in general?
- Logan Kilpatrick
Yeah. This, this is such an interesting space and I think it's like another space where I'm excited for people to do, like, more, like, scientific, empirical studies about because there's, like, so much, like, gut feeling best practices that, like-
- Lenny Rachitsky
(laughs)
- Logan Kilpatrick
... maybe aren't actually true in, in a certain ways. I think the re- the reason that prompt engineering exists and comes up at all is because the models are so inclined because of the way that they're trained to give you just an answer to the question that you ask. Crap in, crap out. If you ask, like, a, a pretty, like, basic question, you're gonna get a pretty basic response. And actually, the same thing is true for humans. And you can think of a great example of this when I go to another human and I ask, like, "How's your day going?" They say, "Ah, it's going pretty good," like (laughs) literally absolutely zero detail, no nuance, like, not very interesting at all. Versus, again, if you have some context with the person, if you have a personal relationship with them and I ask you, "Hey, Lenny, you know, how's your day going? Like, how did the last podcast go?" Et cetera, et cetera. Like, you just have a little bit more context and, and agency to go and answer my question. And I think this is, like, prompt engineering, my whole, my whole position on this is, like, prompt engineering is a very human thing. Like, when we want to get some value out of a human, we do this prompt engineering. We, we try to effectively communicate with that human in order to get the best output. And the same thing is true of models. And I think it's like, again, because we're using a system that appears to be really smart, we assume that it has all this context. But it's really, like, you know, imagine a human, hu- human level intelligence, but like, literally no context. Like, it has no idea what you're going to ask it. It's never met you before. It has no idea who you are, what you do, what your goals are. And like, it's the reason that you get super generic responses sometimes, is because people forget they need to put that context in the model. So I think this thing that is going to help solve this problem, and we already kind of do this in the context of DALLE. 
So when you go to the image generation model that we have, DALLE, and you say, "I want a picture of a turtle." What it does is it actually takes that description. It says, "I want a picture of a turtle." And it changes it into this high fidelity, like, you know, generate a picture of a turtle with a shell, with a green background, and, you know, lily pads in the water, and all this other... It adds all this fidelity because that's the way that the model is trained. It's trained on examples with super high fidelity. This will happen with text models. You can imagine a world where you go into ChatGPT and you say, "Write me a blog post about AI." It automatically will go and be like, "Let me generate a much higher fidelity description of what this person really wants, which is, you know, generate me a blog post about AI that talks about the trade-offs between these different techniques and some example use cases and references, some of the latest papers." And it does all that for you. And then you, as the user, will hopefully be able to be like, "Yep, this is kind of what I wanted. Let me edit this-"
- Lenny Rachitsky
Mm-hmm.
- Logan Kilpatrick
"... let me edit this here." And again, the inherent problem is, like, we're lazy as humans. We don't want to type all... (laughs) We don't really wanna type what we mean. And, um, I think AI systems are actually going to help solve some of that problem.
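Logan's two-step idea, rewrite a terse request into a high-fidelity one and then answer the expanded version, can be sketched as two chat-completions payloads. The model name, the rewriting instruction, and the function names here are illustrative assumptions, not OpenAI's actual DALLE pipeline:

```python
# Step 1 rewrites the user's terse request into a detailed prompt;
# step 2 answers the expanded prompt. Only the payloads are built here,
# so the sketch runs without an API key.

REWRITE_INSTRUCTION = (
    "Rewrite the user's request into a detailed, specific prompt. "
    "Add the audience, desired structure, concrete examples to include, "
    "and any constraints the user probably intends but did not state."
)

def build_expansion_request(terse_prompt: str, model: str = "gpt-4") -> dict:
    """Payload for the rewriting pass (step 1 of 2)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": REWRITE_INSTRUCTION},
            {"role": "user", "content": terse_prompt},
        ],
    }

def build_answer_request(expanded_prompt: str, model: str = "gpt-4") -> dict:
    """Payload for the real answer (step 2 of 2)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": expanded_prompt}],
    }

# With the openai SDK you would send each payload via
# client.chat.completions.create(**payload) and feed step 1's output
# into build_answer_request.
req = build_expansion_request("Write me a blog post about AI.")
print(req["messages"][0]["content"])
```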
- 22:12 – 26:05
How to write better prompts
- Lenny Rachitsky
So s- until that day, what did... What can people do better when they're prompting, say, ChatGPT? And I'll give you an example. Tim Ferriss suggested this really good idea that I've been stealing, which is when you're preparing for an interview, go to ChatGPT and I'm... and so I did this for you, and I was like, "I'm interviewing Logan Kilpatrick. He's, uh, Head of Developer Relations at OpenAI, on my podcast. Give me 10 questions to ask him in the style of Tyler Cowen," who I think is the best interviewer. He's so good at just, like, very pointed, uh, original questions. So what advice would you have for me to improve on that prompt to have better results? 'Cause the questions were, like, fine. They're great. They're, like, interesting enough, but they weren't like, "Holy shit, these are incredible." So I guess, what advice would you give me in that example?
- Logan Kilpatrick
Yeah. That, that's a great example where, like, thinking in context of, like, who it is that you're asking questions about. Like, I'm probably not somebody who has enough information about me on the internet, where, like, the model actually has been trained and, like, knows the nuances of my background. I think there's, like, probably, like, much more famous guests where, like, it might be-
- Lenny Rachitsky
Mm-hmm.
- Logan Kilpatrick
... that there's enough context on the internet to answer the questions. Like, you actually have to do some of that work. You need to say, like, if you're using, uh, browse with Bing, for example, you could say like, "Here's a link to Logan's blog and, like, some of the things that he's talked about. Like, here's a link to his Twitter. Like, go through some of his tweets, go through some of his blogs and, like, see what his interesting perspectives are that we might want to surface on the, on the blog or something like that." And again, giving the model enough context to answer, to answer the question. I think, again, that, that prompt actually might work really well for somebody who, like, has a... Like, if you were interviewing, like, Tom Cruise or something like that, somebody who has a lot of information about them on the internet, it probably works a little bit better.
- Lenny Rachitsky
So the advice there is just give more context. It doesn't tell you, "Hey, I don't actually know that much about (laughs) Logan, so give me some more information." It's just like, "Here we go. Here's a bunch of good questions."
- Logan Kilpatrick
Exactly. Like, it wants to, like... It so deeply wants to answer your question.
- Lenny Rachitsky
Mm-hmm.
- Logan Kilpatrick
Like, it doesn't care that it doesn't have enough context. It's like the most eager person in the world you could imagine to answer the question. And without that context, it's just hard to do, to give of any- anything of value. If, if we got T-shirts printed, they should say, like, "Context is, is all you need. Context is the only thing that matters." Like, it's, it's such an important piece of getting a language model to do anything for you.
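The "context is all you need" point, applied to Lenny's interview-prep example, amounts to pasting the background material the model cannot know into the request itself. A minimal sketch of the message shape, with illustrative placeholder text (the background excerpts would be pasted content, not URLs the model fetches on its own):

```python
# The same question asked bare vs. with background folded into a
# system message. Uses the standard chat-completions message shape:
# a list of {"role": ..., "content": ...} dicts.

def with_context(question: str, background: list[str]) -> list[dict]:
    """Prepend background material as a system message so the model
    answers from the user's facts instead of generic priors."""
    context = "\n\n".join(background)
    return [
        {
            "role": "system",
            "content": "Use the background below when it is relevant.\n\n"
                       "Background:\n" + context,
        },
        {"role": "user", "content": question},
    ]

question = ("Give me 10 interview questions for Logan Kilpatrick, "
            "in the style of Tyler Cowen.")

bare = [{"role": "user", "content": question}]  # tends to get generic answers
rich = with_context(question, [
    "Excerpts from Logan's blog posts: ...",   # pasted text goes here
    "Logan's recent tweets: ...",
])
print(len(rich))  # 2 messages: context, then the question
```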
- Lenny Rachitsky
Any other tips, just as people are sitting there? Maybe they're, they have ChatGPT open right now as they're crafting a prompt. Is there anything else that you'd say would help them have better results?
- Logan Kilpatrick
We actually have a prompt engineering guide, um, which folks should go and, and check out, and it has some of these examples. It, it depends on sort of the order of magnitude of, like, how much performance increase you can get. There's a lot of, like, really small, silly things, like adding a smiley face increases the (laughs) performance of the model. Like telling the... You know, you've seen... I, I'm sure folks have seen, like, a lot of these, like, silly examples, but, like, telling the model to, like, take a break and then answer the question. All these kinds of things. And again, if you think about it, it's because the corpus of information that's, that's trained these models is the same things that... is that humans have sent back and forth to each other. So like, you telling a human, like, "When I go take a break and then I come back to work, like, I'm fresher and I'm able to answer questions better and, like, do work better." Um, so very similar things are true for these models. And again, when I see a smiley face at the end of someone's message, like, I feel empowered that, like, this is gonna be a positive interaction and I should, like, be more inclined to give them a, a great answer and spend more effort on the thing that they asked me for.
- LRLenny Rachitsky
Wow. Wait, so that's a real thing? If you add a smiley face, it might give you better results?
- LKLogan Kilpatrick
Again, it's like the, the challenge with all this stuff is, is like, it's very nuanced and, and it's also, like, it's a small jump in performance. You could imagine, like, on the order of, like, 1 or 2%, which for a few sentence answer is, like, might not even be a discernible difference. Again, if you're generating, like, an entire saga of texts, like, the smiley face, like, could actually make a material difference for you. But for, like, something small in text, well, it, it might not.
- LRLenny Rachitsky
Okay. (laughs) Good tip. Amazing. Okay.
- 26:05 – 32:10
The launch of GPTs and the OpenAI Store
- LRLenny Rachitsky
We've talked about GPTs. I think maybe it might be helpful to describe what is, what is this new thing that you guys launched, GPTs? And I'm curious just how it's going, this... 'Cause this is a really big change and element of OpenAI now, with this idea that you could build your own, like, kind of mini... And I'm almost explaining it, your mini OpenChat- ChatGPT, and then people can... I think you can pay for it, right? Like, you can charge for your own GPT or is it all free right now?
- LKLogan Kilpatrick
It's all free right now.
- LRLenny Rachitsky
Okay, it's all free.
- LKLogan Kilpatrick
Today it's all free.
- LRLenny Rachitsky
Okay. In the future, I imagine people will be able to charge. So there's this whole store now. Basically it's a whole app store that you guys have launched. How's it going? What's happening? What surprised you there? What should people know?
- LKLogan Kilpatrick
Yeah. It's, it's going great. And again, historically the, the thing that you would have to do... Let's say, for example, you have, like, a really cool ChatGPT use case. What you would have to do to share it with somebody else is, like, actually go in and, like, start the conversation with the model, like prompt it to do the things that you want it to, and then you would share that link with somebody else before the action has actually happened and be like, "Here. Now you can, like, essentially finish this conversation with ChatGPT that we started." Um, so GPTs kind of changes this, where you take all that important context, you put it into the model to begin with, and then people can go and, like, chat with essentially a custom version of ChatGPT. And the thing that's really interesting is, you know, you can upload files, you can give it custom instructions, you can add all these different tools. Like, a code interpreter is built in, which allows you to, like, do, like, math essentially. You have browsing built in, image generation built in. And you can also, like, for more advanced use cases if you're a developer, you can, like, connect it to external APIs. So you can connect it to the Notion API or Gmail or all these different things, and, like, have it actually take actions on your behalf. So there's, there's so many cool things that people are unlocking. And what's been most exciting to me actually, is, like, the non-developer persona is now empowered to, like, go and solve these, like, really, really, really more challenging problems by giving the model enough context on what that problem is, um, to be able to solve it. Going back to, like, context is all you need. Like, this is very true in the context of GPTs, and if you give it enough context, like, you can solve much more interesting problems. Um, there's so many things that I'm excited about with this. 
Like, I think monetization when it comes to the store, uh, later this quarter I think is gonna be extremely exciting. Like, when people can get paid based on who's using their GPTs. That's gonna be a huge unlock and, like, open a lot of people's eyes to the, to the opportunity here. I also think, like, continuing to push on making more capabilities accessible to GPTs for people who can't code is really exciting. Like, having to... Even for me as, like, a, someone who is a software engineer, like, it's not super easy to, like, connect the Notion API or the Gmail API to my GPT. And like, really, I'd love to just give a, like, one-click sign in with Gmail. Then all of a sudden it's like my Gmail is accessible or, like, s- someone else can sign in with their Gmail and make it accessible. So I think over time, like, all those types of things will come. But today it's really, like, custom prompts is essentially, like, one of the biggest value adds with GPTs.
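For the developer angle Logan mentions, a GPT is connected to an external API by giving it an OpenAPI schema describing the "action" it can call. A minimal hypothetical schema follows; the server URL, path, and operation are invented for illustration and are not the real Notion or Gmail APIs:

```python
import json

# Hypothetical OpenAPI schema for a GPT action that searches a
# notes app by keyword. In the GPT builder, this schema is what
# tells the model which operations it may call on your behalf.
action_schema = {
    "openapi": "3.1.0",
    "info": {"title": "Notes lookup", "version": "1.0.0"},
    "servers": [{"url": "https://notes.example.com"}],
    "paths": {
        "/pages/search": {
            "get": {
                "operationId": "searchPages",
                "summary": "Search pages by keyword",
                "parameters": [{
                    "name": "q",
                    "in": "query",
                    "required": True,
                    "schema": {"type": "string"},
                }],
            }
        }
    },
}

# The builder consumes this as JSON; the model decides when to
# invoke searchPages based on the summary and parameters.
print(json.dumps(action_schema, indent=2))
```

The one-click Gmail sign-in Logan wishes for would hide exactly this kind of schema plus the OAuth plumbing behind a button, which is why actions remain a developer-leaning feature today.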
- LRLenny Rachitsky
Awesome. Um, I have it pulled up here on the... on different monitor and Canva has the top GPT currently and I'm, I was trying to play with it as you were chatting just to see if... I was gonna make a big banner that said "It's the context, stupid." And it doesn't-
- LKLogan Kilpatrick
(laughs)
- LRLenny Rachitsky
I, I'm not doing something right, but I'm not paying that much attention to it 'cause w- we're talking. But, uh-
- LKLogan Kilpatrick
Yeah.
- LRLenny Rachitsky
... this is very cool. Just maybe a final question there. Is there a GPT that you saw someone built that was like, "Wow, that's amazing. That's so cool." Something that surprised you? And I'll share, I'll share one that was really cool, but is there anything that comes to mind when I ask that?
- LKLogan Kilpatrick
I think my, my instinct is the Zapier... All of the stuff that Zapier has done with GPTs is, like, the most useful stuff that you could imagine. Like, you can go so far with what... And I, and I don't know how it's, like, packaged for Zapier as GPT right now, but, like, you can actually, as a third party developer, integrate Zapier without knowing how to code into your GPT. So, like, they're, they're pushing a lot of this stuff. And then basically, like, all 5,000 connections that are possible with Zapier today, you can bring into your GPT and, like, essentially enable it to do anything. So I'm, I'm incredibly excited for Zapier and for people who are building with them 'cause, like, there's so many things that you can unlock, uh, using that platform. So I think that's probably, like, the most, the most exciting thing to me for people who aren't, who aren't developers.
- LRLenny Rachitsky
Awesome. Zapier's always in there getting there, connecting things.
- LKLogan Kilpatrick
Yeah. They're great.
- LRLenny Rachitsky
Uh, so the one that I had in mind... So I had, uh, a buddy of mine, Siki, who's the CEO of a company called Runway, built this thing called Universal Primer, which helps you learn- It's described as "learn everything about anything." And it basically, I think, is kind of this Socratic method of helping you learn stuff. So it's like, "Explain how transformers work in LLMs." And then it just kind of goes through stuff and then asks you questions, I think, and kind of helps you learn new concepts. And I think it's the number two-
- LKLogan Kilpatrick
Nice.
- LRLenny Rachitsky
... education GPT.
- LKLogan Kilpatrick
I love that. Siki is incredible, so.
- LRLenny Rachitsky
Yes. It's true. Let me tell you about a product called Arcade. Arcade is an interactive demo platform that enables teams to create polished on-brand demos in minutes. Telling the story of your product is hard. And customers want you to show them your product, not just talk about it or gate it. That's why product teams such as Atlassian, Carta, and Retool use Arcade to tell better stories within their homepages, product change logs, emails, and documentation. But don't just take my word for it. Quantum Metric, the leading digital analytics platform, created an interactive product tour library to drive more prospects. With Arcade, they achieved a 2X higher conversion rate for demos and saw five times more engagement than videos. On top of that, they built the demo 10 times faster than before. Creating a product demo has never been easier. With browser-based recording, Arcade is the no-code solution for building personalized demos at scale. Arcade offers product customization options, designer-approved editing tools, and rich insights about how your viewers engage every step of the way. Ready to tell more engaging product stories that drive results? Head to arcade.software/lenny and get 50% off your first three months. That's arcade.software/lenny.
- 32:10 – 34:35
The importance of high agency and urgency
- LRLenny Rachitsky
I want to talk about just what it's like to work at OpenAI and how the product team operates and how the company operates. So you worked at... your two previous companies were Apple and NASA, which are not known for moving fast. And now you're at OpenAI, which is known for moving very fast, maybe too fast for some people's taste, as we saw with the whole board thing. And so what I'm curious about is just what is it that OpenAI does so well that allows them to build and ship so quickly and at such a high bar? Like, is there a process or a way of working that you've seen that you think other companies should try, to move more quickly and ship better stuff?
- LKLogan Kilpatrick
You know, there's so many interesting trade-offs in all of this like tension around like how- how quickly companies can move. I think for us, like, again if you think about Apple as an example, if you think about NASA as an example, just like older institutions, like lots of like, you know, over time the tendency is things slow down. Uh, there's like additional checks and balances that are put in place, which sort of drag things down a little bit. So we're- we're young and like a new company, so like we don't have a lot of that like institutional, um, legacy barriers that have been put in place. I think the biggest thing, and I... There's a good Sam tweet somewhere, uh, in the ether about this from, I think, 2022 or something like that. But like finding people who are high agency and work with urgency is like one of the most... You know, if I was hiring five people today, like those are like some of the top two characteristics that I would look for in people because it's... You- you can- you can take on the world if you have people who have high agency. And like not needing to either like, you know, get 50 people's different consensus because like you have people who you trust with high agency and they can just go and do the thing, I think is like one of the most... It is the most important thing, I- I'm pretty sure, if you- if you were to distill it down. And like I- I see this in folks that I work with. Like folks who are so high agency, like they see a problem and they go and tackle it. They hear something from our customers about a challenge that they're having and like they're already pushing on what the solution for them is and not like waiting for all the other things to happen that like, I think traditional companies are- are sort of stuck behind because they're like, "Oh, let's check with all these like seven different departments to like, you know, try to get feedback on this." Like people just go and do it and solve the problem and I love that. 
It's so fun to be able to- to be a part of those situations.
- 34:35 – 35:56
OpenAI’s ability to move fast and ship high-quality products
- LRLenny Rachitsky
That is so cool. I really like these two characteristics 'cause I haven't heard this before. Those are the two, maybe the two most important things you guys look for, high agency, high urgency. To give people a clear sense of what these actually look like when you're hiring, you shared maybe this example of customer service, someone hearing a bug and then going to fix it. Is there anything else that can illustrate what that looks like, high agency? And then similar question on urgency other than just like move, move, move, ship, ship, ship.
- LKLogan Kilpatrick
I think like the Assistants API that we released for dev day, like we continued to get this feedback from developers that people wanted these higher levels of abstraction on top of our existing APIs. And like a bunch of folks on the team just like came together and were like, "Hey, let's- let's put together what the plan would look like to build something like this." And then very quickly came together and actually built the actual API that now powers so many people's assistant applications that are out there. And I think that's a great example of like, you know, it wasn't like this like top down like, oh, someone's sitting there being like, "Oh, let's do these five things," and then like, "Okay, team, go and do that." It's like people really seeing these problems that are coming up and like knowing that they can come together as a team and like solve these problems really quickly. And I think the Assistants API, and there's like 1,001 other examples of- of teams taking agency and doing this, but I think that's a- that's a great one, um, off the top of my head.
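As a rough sketch of what that higher level of abstraction looks like in practice: an assistant bundles instructions, tools, and files once, instead of re-sending them every turn. The assistant name and instructions below are invented, and the commented-out calls follow the shape of the OpenAI Python client's beta Assistants endpoints and need a real API key to run:

```python
# Configuration for an assistant; hypothetical name and
# instructions, with the built-in code interpreter tool enabled.
assistant_config = {
    "name": "Data helper",
    "model": "gpt-4-turbo-preview",
    "instructions": "Answer questions about the uploaded CSV.",
    "tools": [{"type": "code_interpreter"}],
}

# from openai import OpenAI  # real calls need an API key
# client = OpenAI()
# assistant = client.beta.assistants.create(**assistant_config)
# thread = client.beta.threads.create()
# client.beta.threads.messages.create(
#     thread_id=thread.id, role="user",
#     content="What's the mean of column A?")
# run = client.beta.threads.runs.create(
#     thread_id=thread.id, assistant_id=assistant.id)
```

The thread holds the conversation state server-side, which is the abstraction developers were asking for: before this, you had to manage the full message history and tool plumbing yourself on every request.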
- LRLenny Rachitsky
That
- 35:56 – 40:22
OpenAI’s planning process and decision-making criteria
- LRLenny Rachitsky
makes me want to ask, just how- how does planning work at OpenAI? So in this example, it was just like, "Hey, we think we need to build this. Let's just go and build it." I imagine there's still a roadmap and priorities and goals and things that that team had. How does- how does road mapping and prioritization and all of that generally work to allow for something like that?
- LKLogan Kilpatrick
I think this is one of the more challenging pieces at OpenAI. Like, there's- there's so many... Like everyone wants everything from us. And like today especially, in the world of ChatGPT and- and how large and- and well used our- our API is, like people will just come to us and say like, "Hey, we want all of these things." I think there's like a bunch of like core guiding principles that we look at. Like one-... going back to the mission, like is this actually, like, going to help us get to AGI? So there's a huge focus on, like, you know, there's this, like, potential shiny reward right in front of us which is like, you know, like, optimize user engagement or whatever it is, and like, is that really the thing? Like, maybe the answer is yes, like maybe that is what is going to help us get to AGI sooner, but like, looking at it through that lens I think is, like, always the first step of deciding any, any of these problems. I think on the developer side there's also these, like, core tenets of, like, reliability. Like, hey, you know, it would be awesome if we had additional APIs that did all these cool things, like new, new endpoints, new modalities, new abstractions, but like, are we giving customers a robust and reliable experience on our API? And like, that's often, like, the first question. And I think there have been times where we've fallen short on that and, like, you know, there was a bunch of other things that we've been thinking about doing and, like, really bringing the focus and priority back to that reliability piece 'cause at the end of the day nobody cares if you have something great if they can't use it robust and reliably. So there's, like, these core tenets and I think, like, again, we have, like, very... other than all the principles about how we're making the decision, I think, like, the actual planning process is, like, pretty standard. Like, we come together. There's, like, H1, Q1 goals. We all sprint on those. 
I think the real interesting thing is, like, how stuff changes over time. Like, you'd think we're gonna do these, like, very high level things and, like, you know, new models, new modalities, whatever it is, and then, like, as time goes on there's, like, all of this turmoil and change. And it's interesting to have, like, mechanisms to be like, "Hey. How do we, how do we update our understanding of the world and our goals as everything sort of, the ground changes underneath of us," as is happening in the, in the craziness of the AI space today.
- LRLenny Rachitsky
It's interesting that it sounds a lot like most other companies. There is H1 planning, there is Q1 planning. Are there metrics and goals like that? Do you guys have OKRs or anything like that or is just, "Here, we're gonna launch these products"?
- LKLogan Kilpatrick
I think it's, like, much higher level. I, I actually don't think OpenAI is, like, a big OKR company. Like, I don't think teams do OKRs today. And I, I don't have a good understanding of, like, why that's the case, whether or not... I don't even know if OKRs are, like, still the industry... You're, you're probably talking to a lot more folks about, like, yeah, who are making those decisions so I'm curious, is that something that you're seeing from folks? Like, is it still common for people to do OKRs?
- LRLenny Rachitsky
Yeah. Absolutely. Many companies use OKRs, love OKRs. Many companies hate OKRs. (laughs) I am not surprised that OpenAI is not an OKR driven company. Along those lines, I don't know how much you can share about all this stuff, but how do you measure success for things that you launch? I know there's this ultimate goal, AGI. Is there some way to track if we're getting closer? What else do you guys look at when you launch, say, GPTStore or Assistance or anything that's like, "Cool, that was, uh, exactly what we were hoping for"? Is it just adoption?
- LKLogan Kilpatrick
Yeah. A- adoption is a great one. I think there's, like, a bunch of metrics around, like, you know, revenue, number of developers that are building on our platform. All those things. And a lot of these... and, and I don't wanna, to dive... I'll, I'll let Sam or, or someone else on our, our leadership team, like, go, go more into details, but I think, like, a lot of these are, like, actual abstractions towards something else. Like, even if revenue is a goal, it's like, revenue is not actually the goal. Revenue is a proxy for getting more compute which is then, like, actually what helps us get towards getting more GPUs so that we can, you know, train better models and, like, actually get to the goal. So there's all these, like, intermediate layers where, like, even if we say something is the goal and, like, you hear that in a vacuum and you're like, "Oh. Well, OpenAI just wants to make money," and it's like, well, really money is the mechanism to get better models so that we can achieve our mission, and I think there's, there's a bunch of interesting, interesting angles like that as well.
- LRLenny Rachitsky
I don't know if I've heard of a more, uh, ambitious vision for a company, to build artificial general intelligence. I love that. I imagine many companies are like, "What's our version of that?"
- 40:22 – 42:33
The importance of real-time communication
- LRLenny Rachitsky
Before we leave this topic, is there... is there anything else that you've seen OpenAI do really well that allows it to move this fast and be this successful? You talked about hiring people with higher agency and high urgency. Is there anything else that's just like, "Oh, wow, that's a really good way of operating"? I imagine part of it's just hiring incredibly smart people. Like, I think that's probably an onset thing, but yeah. Anything else?
- LKLogan Kilpatrick
I think there's a non-trivial benefit to using Slack, and I think, like, may- (laughs) maybe, maybe that's controversial and maybe some people don't like Slack, but OpenAI has such a Slack heavy culture and, like, it really... the, like, instantaneous, real time communication on Slack is so crucial and, like, I, I, I just love being able to, like, tag in different people from different teams and, like, get everybody coalesced so, like, everybody is always on Slack. So it's like even if you're remote or you're on a different team or in a different office, like, so much of the company culture is, like, ingrained in Slack and it allows us to, like, really quickly coordinate where, like, it's actually faster to send someone a Slack message sometimes than it would be to, like, walk over to their desk because they're on Slack and they're going to, they're going to be using it. And I saw, uh, if you saw the recent Sam and, and Bill Gates interview, but Sam was talking about how Slack is his number one most used app on his phone and, like, I don't even look at the time thing on my phone anymore 'cause I'm like, "I don't wanna know how long I'm using Slack," but I'm sure the Salesforce people are looking at the numbers and they're like, "This is exactly what we wanted." So... (laughs)
- LRLenny Rachitsky
I also love Slack. I'm a big promoter of Slack. I think there's a lot of Slack hate, but such a good product. I've tried so many alternatives and nothing compares. I think what's interesting about Slack for you guys is one of the... like, you don't know if someone in there is just an AGI that is, uh, not actually a person that's just there working at the company.
- LKLogan Kilpatrick
I, I know they're real people. (laughs)
- LRLenny Rachitsky
(laughs)
- LKLogan Kilpatrick
There is no, no AGIs yet, but, um-
- LRLenny Rachitsky
Okay.
- LKLogan Kilpatrick
... I think, like, yeah. Even, even Slack is building a bunch of, like, really cool AI tools which, like, I'm excited to... and that's why, like, th- there's so much cool AI progress and, like, at the end of the day it's so exciting from being, like, a consumer of all these new AI products. Like, Google's a great example. Like, I'm so happy that Google's doing really cool AI stuff 'cause, like, I'm a Google Docs customer and, like, I love using Google Docs and, like, a bunch of their other products and, like, it's awesome that people are building such, such useful things around these models.
- 42:33 – 44:47
OpenAI’s team and growth
- LRLenny Rachitsky
How big is the OpenAI team at this point? Whatever you can share. Just to give people a sense of the scale.
- LKLogan Kilpatrick
Yeah, I think the last public number was something around like 750, um, near the, near the end of last year. 780 or something like that near the end of last year. And we're growing, we're still growing so quickly. So I don't wanna, I won't be the messenger to share the specific updated numbers, but like, the team is growing like crazy and we're also hiring like across all of our engineering teams and, and PM teams. So if folks are interested we'd love to, we'd love to hear from folks who are, um, who are curious about joining.
- LRLenny Rachitsky
Maybe one last question here. So you're growing, maybe getting to 1,000 people. Clearly still very innovative and moving incredibly fast. Is there anything you've seen about what OpenAI does well to enable innovation and not kind of slow down new big ideas?
- LKLogan Kilpatrick
Yeah. There's, there's a couple of things. One of which is, um, the actual research team who, who like, you know, sort of seed most of the innovation that happens at OpenAI is intentionally small. They're not like, you know, most of the growth that OpenAI has seen is around like our customer facing roles, our, our engineering roles to like provide the infrastructure to, for ChatGPT and things like that. The research team is like again, intentionally kept small and there's all this talk and it's really interesting, I just saw this thread from one of our, one of our research folks who was talking about how in a world where you're constrained by the amount of GPU capacity that you have as a, as a researcher, which is the case for OpenAI researchers but also researchers everywhere else. Like each new researcher that you add is actually like a net productivity loss for the research group unless that person is like up-leveling everyone else in like such a profound way that like it increases the efficiency. Like if you just add somebody who's gonna go and like tackle some completely different research direction, you now have to share your GPUs with that person, and everyone else is now slower on their experiments. So it's a really interesting like trade-off that the, that research folks have that I don't think like product folks like, if I add another engineer to like our API team or to our, some of the ChatGPT teams, like you can actually write more code and do more and like that's actually like a net beneficial improvement for everybody. And that's not always the case in the case of researchers, which is interesting, in a GPU constrained world, which hopefully we won't always be in.
- 44:47 – 47:42
Future developments at OpenAI
- LRLenny Rachitsky
I want to zo- zoom out a bit and then there's gonna be a couple follow-up questions here. Where are things heading with OpenAI? What's, what's kind of in the near future of what people should expect from the tools that you guys are gonna have and launch?
- LKLogan Kilpatrick
Yeah. New, new modalities, I think ChatGPT like continuing to push all of the different experiences that are going to be possible. Like today, like ChatGPT is really just like text in, text out, or I guess like three months ago it was just text in, text out. We started to change that with now you can do the voice mode and now you can generate images and now you can take pictures. So I think like continuing to expand like the way in which you interface with AI through ChatGPT is coming. I think GPTs is our first step towards the agent future. Like again, today when you use a GPT, it's really you send a message, you get an answer back almost, almost right away. Um, and that's kind of the end of your interaction. I think as GPTs continue to get more robust, like you'll actually be able to say, "Hey, go and do this thing and like just let me know when you're done." Like it might take, I, I don't need the answer right now. I want you to like really spend time and be thoughtful about this. And like again, that's, i- if you think back to all these human analogies, like that's what we do as humans. Like I don't expect somebody when I ask them to do something meaningful for me to like do it right away and like give me the answer back right away. So I think pushing more towards those experiences is what is going to unlock like so much more value for people. And I think the last thing is GPTs as this mechanism to get like the next, you know, few hundred million people into ChatGPT and into AI. Because I think like if you've had conversations with people who aren't close to the AI space, oftentimes you talk about even if they've heard of ChatGPT, a lot of people haven't heard of ChatGPT, but if they have they're like they show up in ChatGPT and they're like, "You know, I don't really know what I'm supposed to do with this. It's this blank slate. I can kind of do anything. It's like not super clear how this solves like my specific problem."
But I think the cool thing about GPTs is you can package down like here's this one very specific problem that AI can solve for you and, and do it really well. And like, I can share that experience with you and now you can go and try that GPT. Have it actually solve the problem and be like, "Wow, like, it did this thing for me. I should probably spend the time to investigate like these five other problems that I have to see if AI can also be a solution to those." So I think so many more people are gonna come online and start using these tools because very like narrow vertical tools are what's going to be like a huge unlock for them.
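The "go do this and let me know when you're done" interaction Logan describes boils down to a long-running job you poll, rather than a synchronous reply. Below is a minimal sketch of that polling pattern, with a stub class standing in for a real run object (no actual API involved; a real implementation would check a run's status over the network):

```python
import time

class FakeRun:
    """Stub for a long-running agent task: reports 'in_progress'
    for a few polls, then 'completed'."""
    def __init__(self, steps):
        self._remaining = steps

    def status(self):
        self._remaining -= 1
        return "completed" if self._remaining <= 0 else "in_progress"

def wait_for(run, poll_seconds=0.01):
    """Block until the run reaches a terminal state, then return it."""
    while True:
        state = run.status()
        if state in ("completed", "failed"):
            return state
        time.sleep(poll_seconds)

result = wait_for(FakeRun(steps=3))
print(result)  # → completed
```

In an agent-style product, the caller would fire off the task, go do something else, and only consume the result once the poll (or a callback) reports completion, which is exactly the shift away from the immediate request/response loop of today's chat interface.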
- LRLenny Rachitsky
So in that last case, a classic horizontal product problem where it does so many things and people don't know what exactly it should do for them. So that makes a ton of sense, just be- being a lot more template oriented, use case specific, helping people onboard makes tons of sense. A common problem for so many SaaS products out there. Uh, the other ones you mentioned, which was really interesting, basically more interfaces to more easily interact with OpenAI Voice you mentioned, audio and things like that. That makes tons of sense. And then this agents piece where the idea is instead of just it's a chat, it's like, "Hey, go do this thing
- 47:42 – 50:38
GPT-5 and building toward the future
- LRLenny Rachitsky
for me." Kind of along those lines, GPT-5 we touched on this a bit. There's a lot of speculation about the much better version. People are, just have these wild expectations, I think, for where GPT is going. GPT-5 is gonna solve all the world's problems. I know you're not gonna tell me when it's launching and what it's gonna do, but I heard from a friend that there's kind of this tip that when you're building products today, you should build towards a GPT-5 future, not based on limitations of GPT-4 today. So, to help people do that, what should people think about that might be better in a world of GPT-5? Is it just like it's faster? It's just smarter? Is there anything else that might be like, "Oh wow, I should really rethink how I'm approaching my product."
- LKLogan Kilpatrick
If, if folks have looked through the GPT-4 technical report that we released back in March when GPT-4 came out, GPT-4 was the first model that we trained where we could reliably predict the capabilities of that model, uh, beforehand based on the amount of compute that we were going to put into it. And you could actually, we, we did like a scientific study to show like, "Hey, this is what we predicted and here is what the actual outcome was." So it'll be one, I think, uh... just as somebody who's interested in technology. But interesting to see, like does that continue to hold for GPT-5? And hopefully we'll, we'll share some of that information when, whenever that model comes out. I also think you can probably draw a, a few observations. One of them, which is GPT-4 came out. The, the consensus from the world is everything is different. Like, all of a sudden everything is different. This changes the world, this changes everything. And then slowly but surely, we come back to reality of like, this is a really effective tool and it's going to help solve my problems more effectively. And I think that is like the, undoubtedly the lens in which people should look at all of these model advancements. Like, GPT-5 is like surely going to be extremely useful and like solve some whole new echelon of problems. Hopefully it'll be faster, hopefully it'll be better on all these ways. But like fundamentally, the same problems that exist in the world are still going (laughs) to be the same problems. You now just have a better tool to solve those problems. And I think like going back to, like, vertical use cases, like I think people who are solving very specific use cases are just now going to be able to do that much more effectively. Like, I don't think that's, like, going to...
People have these unrealistic expectations that like GPT-5's gonna be like doing back flips in the background in my bedroom while it also, like, writes all my code for me and, like, talks on the phone with my, with my mom or something like that. And like that's not the case. Like, it is just going to be this, like, very effective tool, very similar to GPT-4. And it's also going to become, like, very normal very quickly. And I think, like, that is actually a really interesting piece if you can plan for the world where people become very, very used to these tools very quickly. I actually think that's like an edge and like assuming that this thing is going to, like, absolutely change everything in, in many ways I think is actually like a, um, a downside, it's like the wrong mental framing to have of these tools as they
- 50:38 – 52:30
OpenAI’s enterprise offering and the value of sharing custom applications
- LKLogan Kilpatrick
come out.
- LRLenny Rachitsky
Kind of along these lines, you guys are investing a lot into B2B offerings. I think half the revenue, last I heard, was B2B and then half is B2C. I don't know if that's true, but that's something I heard. What is it that you get if you work with OpenAI as a company, as a business? What is the, what does, what does it unlock? It's, is it just called OpenAI Enterprise? What's it called and what do you get as a part of that?
- LKLogan Kilpatrick
Yeah. So I think a lot of our B2B customers are using the API to, like, build stuff. So I think that's one angle of it. On the ChatGPT side, we sell Team, which is the ability to get multiple subscriptions of ChatGPT packaged together. We also have an enterprise version of ChatGPT, which has a bunch of, like, enterprise-y things that enterprise companies want, around SSO and stuff like that-
- LRLenny Rachitsky
Mm-hmm.
- LKLogan Kilpatrick
... related to ChatGPT Enterprise. I think the coolest thing is actually being able to share some of these prompt templates and GPTs internally. So again, you can make custom things that work really well for your company, with all of the information that's relevant to solving problems at your company, and share those internally. And to me, you know, you wanna be able to collaborate with your teammates on the cool things you create using AI. So that's a huge unlock for companies. I think those are the two biggest value adds. There's, like, higher limits and stuff like that on some of those models. But I think being able to share your very domain-specific applications is the most useful thing.
- LRLenny Rachitsky
And I think if you're a company listening and you think a lot of employees are using ChatGPT, basically the simplest thing you could do is just roll it up into a business account with single sign-on. And that probably saves you money and makes it easier to, um, coordinate and administer.
- LKLogan Kilpatrick
Yeah. There's also a bunch of security stuff too. Like, say you don't want people to use certain GPTs from the ChatGPT store because you're worried about security or privacy, and you don't want your private data going places; it makes a lot of sense to sign up for that so that you have a little bit more control over what's happening.
- LRLenny Rachitsky
Okay. Got it.
- 52:30 – 55:09
New updates and features from OpenAI
- LRLenny Rachitsky
Yeah, there's a launch happening tomorrow, I think, after we're recording this. Can you talk about what is new, what's coming out? I think this is gonna come out a couple weeks after recording, but just, what should people know that's new that's coming out from OpenAI tomorrow in our time, (laughs) in our world?
- LKLogan Kilpatrick
Yeah. So there's a few different things. A couple of quick ones: an updated GPT-4 Turbo model, um, an update to the preview model that we released at Dev Day. It fixes, if folks have seen people talking about it online, this sort of laziness phenomenon in the model. We improved on that and it fixes a lot of the cases where that was happening. So hopefully the model will be a little bit less lazy. The big thing is the third-generation embeddings model. So we were talking off camera before recording about all of the cool use cases for embeddings. If folks have used embeddings before, it's essentially the technology that powers, like, many of these question-answering experiences with your own documentation or your own corpus of knowledge. And, uh, Lenny, you were saying you actually have a website where people can ask questions about recordings of the podcast. Um-
- LRLenny Rachitsky
LennyBot, lennybot.com. Check it out.
- LKLogan Kilpatrick
Yeah, lennybot.com. And my assumption was that lennybot.com is actually powered by embeddings. So you take all of the corpus of knowledge, you take all the recordings, your blog posts, you embed them, and then when people ask questions, you can actually go in and see the similarity between the question and the corpus of knowledge, and then provide an answer to somebody's question that references, like, an empirical fact, something that's true from your knowledge base. And this is super useful, and people are doing a ton of this: trying to ground these models in reality, in what they know to be true. Like, we know all the things from your podcast to be at least something that you've said before, and to be true in that sense, and we can bring them into the answer that the model is actually generating in response to a question. So that'll be super cool. And these new V3 embeddings models, again, you know, state-of-the-art performance. The cool thing is actually the non-English performance has increased super significantly. I think historically people really were only using embeddings for English; it only worked really well for English. And I think now you can, (clears throat) you can use it across so many new languages because it's just so much more performant across those languages. And it's, like, five times cheaper as well, which is wonderful. There's no (laughs) better feeling than making things cheaper for people. I love it. I think now, I'm pretty sure it was, like, 62,000 pages of text for one dollar, um, (laughs) which is-
- LRLenny Rachitsky
(laughs)
- LKLogan Kilpatrick
... which is very, very cheap. So lots of really cool things that you can do with embeddings. I'm excited to see people embed more stuff.
- LRLenny Rachitsky
What a deal.
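The embeddings-powered Q&A flow Logan describes (embed a corpus, then rank snippets by similarity to an embedded question) can be sketched in a few lines of Python. The snippets and vectors below are toy stand-ins, and `retrieve` is an illustrative helper, not OpenAI's API; in practice each text would be embedded by an embeddings model such as the third-generation one discussed here.

```python
# Minimal sketch of embeddings-based question answering over a corpus.
# Vectors are hand-picked toys; a real system would get them from an
# embeddings model rather than writing them by hand.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend corpus: each podcast snippet paired with its (toy) embedding.
corpus = [
    ("Hiring advice: look for high agency and urgency.", [0.9, 0.1, 0.0]),
    ("Embeddings power question answering over your own docs.", [0.1, 0.9, 0.2]),
    ("GPT-4 Turbo got an update to reduce laziness.", [0.0, 0.2, 0.9]),
]

def retrieve(question_vec, top_k=1):
    """Rank corpus snippets by similarity to the question embedding."""
    ranked = sorted(
        corpus,
        key=lambda item: cosine_similarity(question_vec, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:top_k]]

# A question "about embeddings" (toy vector close to the second snippet):
print(retrieve([0.2, 0.8, 0.1]))  # the embeddings snippet ranks first
```

The top-ranked snippets would then be handed to the chat model as grounding context, which is the "reference an empirical fact from your knowledge base" step.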
- 55:09 – 58:26
How to leverage OpenAI’s technology in products
- LRLenny Rachitsky
Final question before we get to a very exciting lightning round. Say you're a product manager at a big company or even a founder. What do you think are the biggest opportunities for them to leverage the tech that you guys are building, GPT-4, all the other APIs. How should people be thinking about, "Here's how we should really think about leveraging this power in our existing product or new product?" Whichever direction you want to go.
- LKLogan Kilpatrick
Yeah. I think going back to this theme of, like, new experiences is really exciting to me. I think you're going to have an edge on other people if you're providing AI that's not just accessible through a chatbot. Like, people are using a ton of chat, and it's a really valuable surface area. It's clearly valuable because people are using it. But I think products that move beyond this chat interface really are going to have such an advantage. And also thinking about how to take your use case to the next level. Like, I've tried a ton of chat examples that are very, very basic and providing a little bit of value to me. But I'm like, really, this should go much further; actually build your core experience from the ground up. I've used this product that allows you to essentially manage, or, like, view the conversations that are happening online around certain topics and stuff like that. So I can go and look online: "What are people saying about GPT-4?" And what I just said out loud, "What are people saying about GPT-4," is, like, the actual question that I have. And in a normal product experience today, I have to go into a bunch of dashboards and change a bunch of filters and stuff like that. And what I really want is just to ask my question, "What are people saying about GPT-4," and get an answer to that question in, like, a very data-grounded way. And I've seen people solve part of this problem where they're like, "Oh, here's a few examples of what people are saying." I'm like, "Well, that's not really what I want." Like, I want this summary of what's happening. And I think it just takes a little bit more engineering effort to make that happen.
But I think that is the magical unlock of, "Wow, this is an incredible product that I'm going to continue to use," instead of, "Yeah, this is kind of useful, but I really want more."
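The "summary instead of samples" experience Logan asks for can be sketched under some assumptions: `build_summary_prompt` is a hypothetical helper, and the model call itself is left out; only the step of packing retrieved comments into one grounded summarization prompt is shown.

```python
# Hypothetical sketch: instead of showing the user a few raw comments,
# gather everything matching their question and hand it to a model as a
# single summarization prompt. The model call is stubbed out.

def build_summary_prompt(question, comments):
    """Pack retrieved comments into one grounded summarization prompt."""
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(comments))
    return (
        f"Question: {question}\n"
        f"Here are {len(comments)} relevant comments:\n{numbered}\n"
        "Summarize the overall sentiment and key themes, citing comment numbers."
    )

comments = [
    "GPT-4 is great at code review.",
    "GPT-4 feels slower than 3.5 but much smarter.",
]
prompt = build_summary_prompt("What are people saying about GPT-4?", comments)
print(prompt.splitlines()[0])  # → Question: What are people saying about GPT-4?
```

Asking the model to cite comment numbers is one way to keep the summary "data-grounded" rather than free-floating, which is the distinction Logan draws between sample answers and a real summary.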
- LRLenny Rachitsky
Awesome. I'll give a shout-out to a product, I'm not an investor but I know the founder, called Visualelectric.com, which I think is doing exactly this. It's basically a tool specifically built for creatives, I think specifically graphic design, to help them create imagery. So, you know, there's, like, DALL-E obviously, but this takes it to a whole new level where it's kind of this canvas, infinite canvas, that you could just generate images, edit, tweak them, and continue to iterate until you have the thing that you need. Visualelectric-
- LKLogan Kilpatrick
I'm gonna try this out. Is, is it similar to Canva?
- LRLenny Rachitsky
It's spec- it's even more niche, I think, for more sophisticated graphic design, I think is the use case.
- LKLogan Kilpatrick
Oh.
- LRLenny Rachitsky
But I'm not a designer so, uh, I'm not, I'm not the target customer. But I will say my wife is a graphic designer. She had never used AI tools. I showed her this and she got hooked on it. She paid for it without even telling me that she was gonna become a paid customer. And she just started, she created imagery of our dog, all, in all this art. And now it's, like, on our TV. She, the art she created is now sitting... It's, like, we have a framed TV and that's the, um, image on our TV. So anyway.
- LKLogan Kilpatrick
I love that. What was it called again?
- LRLenny Rachitsky
Visualelectric.com.
- 58:26 – 59:30
Encouragement for building with AI
- LRLenny Rachitsky
Anyway, anything else you wanted to touch on or share before we get to our very exciting lightning round?
- LKLogan Kilpatrick
I've made this statement a few times online and other places, but for people who have cool ideas that they should build with AI: this is the moment. There are so many cool things that need to be built for the world using AI. And, like, again, if I or other folks on the team at OpenAI can be helpful in getting you over the hump of starting that journey of building something really cool, please reach out. The world needs more cool solutions using these tools, and, uh, I would love to hear about the awesome stuff that people are building.
- LRLenny Rachitsky
I would have asked you this at the end, but how would people reach out? What's the best way to actually do that?
- LKLogan Kilpatrick
Twitter, LinkedIn. Uh, my email should be findable somewhere. I don't want to say it (laughs) and then get it spammed with a bunch of emails. Like, you should be able to find my email online somewhere if you need it. Um, but yeah, Twitter and LinkedIn are usually the easiest places.
- LRLenny Rachitsky
And, uh, how do they find you on Twitter?
- LKLogan Kilpatrick
Uh, it's just Logan Kilpatrick or I think my name shows up as Logan.GPT or-
- LRLenny Rachitsky
Logan GPT.
- LKLogan Kilpatrick
... @OfficialLoganK. Yeah.
- LRLenny Rachitsky
Awesome. Okay. And we'll link it in the show notes. Amazing.
- 59:30 – 1:08:06
Lightning round
- LRLenny Rachitsky
Well, Logan, with that we've reached our very exciting lightning round. Are you ready?
- LKLogan Kilpatrick
I'm ready.
- LRLenny Rachitsky
First question, what are two or three books that you've recommended most to other people?
- LKLogan Kilpatrick
I think the first one, it's one that I read a long time ago and came back to recently, is The One World Schoolhouse by Sal Khan. Incredible... Yeah, it's a lightning round so I won't say too much, but, like, incredible story, and AI is what is going to enable Sal Khan's vision of a teacher per student to actually happen. So I'm really excited about that. And the other one that I always come back to is Why We Sleep. Um, yeah, sleep and sleep science are so cool. If you don't already care about your sleep, it's one of the biggest up-levels that you can do for yourself.
- LRLenny Rachitsky
What is a favorite recent movie or TV show that you really enjoyed?
- LKLogan Kilpatrick
I'm a sucker for, like, a good inspirational human story. Um, so I watched with my family recently over the holidays this Gran Turismo movie. And it's a story about a kid from London who grew up doing, like, sim racing, which is virtual race car driving, and did this competition and ended up becoming a real professional race car driver. And it's just really cool to see someone go from driving a virtual car to driving a real car and, like, competing in the 24 Hours of Le Mans and all that stuff.
- LRLenny Rachitsky
I used to play that game and it was a lot of fun.
- LKLogan Kilpatrick
Yeah.
- LRLenny Rachitsky
But I don't think I have any clue how to drive a real race car. Uh, so that's inspiring. Do you have a favorite interview question that you like to ask candidates that you're interviewing?
- LKLogan Kilpatrick
Yeah. I'm always curious to hear the thing that somebody so strongly believes that, uh, most people disagree with them on.
- LRLenny Rachitsky
What do you look for in an answer that seems like, wow, that's a really good signal?
- LKLogan Kilpatrick
It's, uh, just an entertaining question to ask in some sense, but it's also interesting to see what somebody's deeply held strong belief is. And, you know, not to judge whether or not I believe in it, but just curious to see why people feel that way.
- LRLenny Rachitsky
What is a favorite product that you've recently discovered that you really like?
- LKLogan Kilpatrick
Uh, on the narrative of sleep, I have this really nice sleep mask from a company called, and I have to say I'm not being paid for this, (laughs) Manta Sleep or something like that. It's a weighted sleep mask and it feels incredible. I don't know, maybe I just have a heavy head or something like that, but it feels (laughs) good to wear a weighted sleep mask, um, at night. I really appreciate it.
- LRLenny Rachitsky
I have a competing sleep mask that I highly recommend. I'm trying to find it. It's in... I've emailed people about it a couple times (laughs) in my newsletter for gift guides.
- LKLogan Kilpatrick
Yeah.
- LRLenny Rachitsky
Okay. My favorite is called the Waoaw Sleep Mask. Uh, W-A-O-A-W.
- LKLogan Kilpatrick
What do you like about it?
- LRLenny Rachitsky
W-A-O-A-W. Uh, I'll link to it in the show notes. It makes a lot of room. It's, like, very large, and there's space for your eyes, so your eyelashes and eyes aren't pressed on, and it just fits really nicely around the head. (laughs) And my wife, we both wear our-
- LKLogan Kilpatrick
Yeah.
- LRLenny Rachitsky
... masks at night. It just, speaking of sleep, really helps us sleep. Uh, it's not like-
- LKLogan Kilpatrick
Yeah. Same here. I love it.
- LRLenny Rachitsky
Yeah. It doesn't have the weighted piece, so it might be worth trying, but, uh, everyone I've recommended this to is like, "That changed my life. Thank you for helping me sleep better." And so we'll link to it.
- LKLogan Kilpatrick
I love it.
- LRLenny Rachitsky
Look at that, a sleep mask. Look at us, adults. Uh, two more questions. Do you have a favorite life motto that you often come back to or share with friends or family, either in work or in life?
- LKLogan Kilpatrick
Yeah. I've got it on a Post-it note right behind my camera, and it's "measure in hundreds." I love this idea of measuring things in hundreds, and it's for folks who are at the beginning of some journey. I talk to people all the time who are like, "Yeah, I've tried this thing and it hasn't worked." And if your mental model is to measure in hundreds, then the five times you've failed at something round to zero attempts. And I love that. It's such a great reminder that everything in life is built on compounding and multiple attempts at stuff. And if you don't try enough times, you're never going to be successful at it.
- LRLenny Rachitsky
I love that. I can see why you are successful at OpenAI and why you're a good fit there. Final question. So I asked ChatGPT, uh, a very silly request: "Give me a bunch of silly questions to ask Logan Kilpatrick, head of developer relations at OpenAI." And I went through a bunch. I have three here, but I'm gonna pick one. If an AI started doing standup comedy, what do you think would be its go-to joke or funny observation about humans?
- LKLogan Kilpatrick
I think today, if you were to do this, the go-to joke would be something like, "So an AI walks into a bar..." and likely because, again, it's trained on some distribution of training data and that's, like, the most common (laughs) joke that comes up. I wonder, if it came up with a joke right now, whether or not that would show up in one of the examples.
- LRLenny Rachitsky
I love it. What would be the joke though? We need the joke. We need the punchline. I'm just joking. I know you can't come up with amazing-
- LKLogan Kilpatrick
That's what we have ChatGPT for.
Episode duration: 1:08:06
Transcript of episode XkMbkWG2ca4