Lenny's Podcast

Michael Truell: Why coding becomes logic design at $300M ARR

How Cursor's IDE bet and the custom models behind every magic moment point to a future where coders specify intent in near-English while AI handles the low-level code.

Michael Truell (guest) · Lenny Rachitsky (host)
May 1, 2025 · 1h 11m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–4:20

    Introduction to Michael Truell and Cursor

    1. MT

      (instrumental music) Our goal with Cursor is to invent a new type of programming, a very different way to build software. So a world kind of after code, I think that more and more, being an engineer will start to feel like being a logic designer. And really it will be about specifying your intent for how exactly you want everything to work.

    2. LR

      What is the most counterintuitive thing you've learned so far about building Cursor?

    3. MT

      We definitely didn't expect to be doing any of our own model development. And at this point, every magic moment in Cursor involves a custom model in some way.

    4. LR

      What's something that you wish you knew before you got into this role?

    5. MT

      Many people, you hear, hire too fast. I think we actually hired too slow to begin with.

    6. LR

      You guys went from $0 to $100 million ARR in a year and a half, which is historic. Was there an inflection point where things just started to really take off?

    7. MT

      The growth has been fairly just consistent on an exponential. An exponential to begin with feels fairly slow when the numbers are really low, and it didn't really feel off to the races to begin with.

    8. LR

      What do you think is the secret to your success?

    9. MT

      I think it's been...

    10. LR

      Today my guest is Michael Truell. Michael is co-founder and CEO of Anysphere, the company behind Cursor. If you've been living under a rock and haven't heard of Cursor, it is the leading AI code editor and is at the very forefront of changing how engineers and product teams build software. It's also one of the fastest growing products of all time, hitting $100 million ARR just 20 months after launching, and then $300 million ARR just two years since launch. Michael's been working on AI for 10 years. He studied computer science and math at MIT, did AI research at MIT and Google, and is a student of tech and business history. As you'll soon see, Michael thinks deeply about where things are heading and what the future of building software looks like. We chat about the origin story of Cursor, his prediction of what happens after code, his biggest counterintuitive lessons from building Cursor, where he sees things going for software engineers, and so much more. Michael does not do many podcasts. The only other podcast he's ever done is Lex Fridman, so it was a true honor to have Michael on. If you enjoy this podcast, don't forget to subscribe and follow it in your favorite podcasting app or YouTube. Also, if you become an annual subscriber of my newsletter, you get a year free of Perplexity, Linear, Superhuman, Notion, and Granola. Check it out at lennysnewsletter.com and click "bundle." With that, I bring you Michael Truell.
      This episode is brought to you by Eppo. Eppo is a next generation A/B testing and feature management platform built by alums of Airbnb and Snowflake for modern growth teams. Companies like Twitch, Miro, ClickUp, and DraftKings rely on Eppo to power their experiments. Experimentation is increasingly essential for driving growth and for understanding the performance of new features, and Eppo helps you increase experimentation velocity while unlocking rigorous deep analysis in a way that no other commercial tool does.
When I was at Airbnb, one of the things that I loved most was our experimentation platform where I could set up experiments easily, troubleshoot issues, and analyze performance all on my own. Eppo does all that and more with advanced statistical methods that can help you shave weeks off experiment time, an accessible UI for diving deeper into performance, and out of the box reporting that helps you avoid annoying prolonged analytic cycles. Eppo also makes it easy for you to share experiment insights with your team, sparking new ideas for the A/B testing flywheel. Eppo powers experimentation across every use case, including product, growth, machine learning, monetization, and email marketing. Check out Eppo at geteppo.com/lenny and 10X your experiment velocity. That's geteppo.com/lenny. This episode is brought to you by Vanta. When it comes to ensuring your company has top-notch security practices, things get complicated fast. Now you can assess risk, secure the trust of your customers, and automate compliance for SOC 2, ISO 27001, HIPAA, and more with a single platform, Vanta. Vanta's market leading trust management platform helps you continuously monitor compliance alongside reporting and tracking risk. Plus you can save hours by completing security questionnaires with Vanta AI. Join thousands of global companies that use Vanta to automate evidence collection, unify risk management, and streamline security reviews. Get $1,000 off Vanta when you go to vanta.com/lenny. That's

  2. 4:20–8:32

    What comes after code

    1. LR

      vanta.com/lenny. Michael, thank you so much for being here, and welcome to the podcast.

    2. MT

      Thank you. Uh, it's great to be here. Thank you for having me.

    3. LR

      When we were chatting earlier, you had this really interesting phrase, this idea of what comes after code. Talk about that, just like the vision you have of where you think things are going in terms of moving from code to maybe something else.

    4. MT

      Our, our goal with Cursor is to invent a sort of a new type of programming, um, a very different way to build software that's kind of just distilled down into you describing the intent to the computer for what you want in the most concise way possible, uh, and really distilled down to just defining how you think the software should work and how you think it should look. And yeah, with, with, you know, the technology that we, we have today and as it matures, uh, we think you can get to a place where you can invent a method of building software that's legions higher level and more productive, uh, in some cases more, more accessible too. And, um, that, that process will be, will be a gradual moving away from, you know, what building software looks like today. Um, and, um, I, you know, I want to contrast it with maybe like the vision of, you know, what software looks like in the future, um, that, you know, I think, you know, a couple visions that are in the popular conscious-

    5. LR

      Mm-hmm.

    6. MT

      ... that we at least, um, have some disagreement with. Um, one is, you know, there's a group of people who think that, um, you know, software building in the future is going to look very much like it, it does today, which mostly means text editing, formal programming languages like TypeScript and Go and C and Rust. Uh, and then there's another group that kind of thinks like, you know, you're just gonna type into a bot and you're gonna ask it to build you something and then you're gonna ask it to, to change something about what you're building. And it's kind of like this, you know, chatbot, Slack bot style, where you're talking to your engineering department. And we think that there are problems with, with both of those visions, and we think it's gonna look weirder than both. Um, the problem with the, the chatbot style end of things is, um, that it lacks a lot of precision. If you want humans to have completely, you know, complete control over what the software looks like and how it works, you need to let them, you know, gesture at what they want to be changed, um, you know, in a form factor that's more precise than just, you know, change this about my app, you know, kind of in a text box, removed from the whole thing. And then, um, you know, the, the version of the world where kind of nothing changes we think, we think is, is wrong 'cause we think that the, the technology is gonna get much, much, much better. Uh, and so a world, you know, kind of after, you know, after code, um, I think it looks like a world where you have a representation of the logic of your software that does look more like English, right? 
You have kind of written down, you can imagine it in document form, you can imagine in kind of an evolution of programming language towards pseudocode, you have written down, you know, the logic of the software and you can, you can edit that at a high level and you can point at that. And it won't be kind of the, the impenetrable millions of lines of code. Um, it'll instead be something that's, like, much terser and easier to understand and easier to navigate. But that world where, yeah, the, the kind of crazy hard to understand symbols start to evolve towards something that's a little bit more, uh, human readable, uh, and human editable, uh, is one, is one that we're working toward.
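[Editor's note] As a concrete (and entirely hypothetical) sketch of the near-English logic layer Truell describes: the engineer would read and edit a terse intent spec, while the generated low-level code stays out of view. The spec wording, the `sign_up` function, and its behavior below are all invented for illustration; this is not Cursor's actual design.

```python
# Hypothetical sketch of "after code": the human edits a terse, near-English
# logic spec; tooling generates and maintains the low-level implementation.

# What the "logic designer" reads, points at, and edits:
SPEC = """
When a user signs up:
  - reject emails without an "@"
  - store the email lowercased
  - new accounts start on the free plan
"""


# The generated implementation the human rarely needs to open:
def sign_up(email: str) -> dict:
    """Imagined as machine-generated from SPEC (illustrative only)."""
    if "@" not in email:
        raise ValueError("invalid email")
    return {"email": email.lower(), "plan": "free"}


if __name__ == "__main__":
    print(sign_up("Ada@Example.com"))
```

The point of the sketch: changes happen at the spec level ("also require a dot after the @"), and the precision problem Truell raises with chatbots is addressed by letting you point at a specific line of the spec rather than describing a change from outside in a detached text box.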

    7. LR

      This is a profound point. And I think I, I, I wanna make sure people don't miss what you're saying here, which is that where you're envisioning in the next year essentially, uh, is kind of when things start to shift is, uh, people move away from even seeing code, think, having to think in code in, like, JavaScript and Python, and there's this abstraction that will appear, uh, essentially pseudocode describing what the code should be doing more in English sentences.

    8. MT

      Yep. We, we think it ends up, ends up looking like that. Um, and we're very opinionated that that path goes through kind of existing professional engineers, and it looks like this, this evolution away from code. Uh, and it definitely looks like the human still being in the driver's seat, right? And the human having both a ton of control over all aspects of the software and not giving that up. And then also, uh, the human having the, the ability to, um, make changes very quickly, like having a fast iteration loop and not just like, you know, having something in the background that's, that's super slow and takes like weeks, uh, go

  3. 8:32–12:39

    The importance of taste

    1. MT

      do all your work for you.

    2. LR

      This, uh, begs the question for people that are getting... are currently engineers or thinking about becoming engineers or designers or product managers, like what skills do you think will be more, more and more valuable in this world of the what comes after code?

    3. MT

      I think taste will be increasingly more valuable. And I think often when people think about taste in the realm of software, they think about, you know, visuals or taste over smooth animations and, uh, you know, coloring things, UI, UX, et cetera, on kind of the visual design of things. And I think more and more... And, you know, the visual side of things is an important part of defining, you know, a piece of software but then, you know, as mentioned before, I think that the other half of defining a piece of software is the, is the logic of it and how the thing works. And, uh, we have amazing tools for speccing out the visuals of things. And then when you get into the, the logic of how a piece of software works, really the best representation we have of that is code right now. You can kind of gesture at it with Figma and you can gesture at it with writing down notes. Um, but it's, you know, when you have an actual working prototype. And so I think that more and more, being, being an engineer will start to feel like being a logic designer, and really will be about specifying your intent for how exactly you want everything to work. And it will less be about... uh, it'd be more, more about the, the what and a little bit less about the how, um, exactly you're gonna do things under the hood. Uh, and so yeah, I think, I think taste will be increasingly important. I think one aspect of software engineering, and we're very far from this right now and there are lots of, you know, uh, funny, funny memes going around the internet about, you know, the kind of the, some of the trials and tribulations people can run into if they trust AI for too many things when it comes to engineering, um, around, you know, uh, building, building apps that, uh, you know, have, have glaring, glaring deficiencies and, and problems and, uh, functionality issues. 
But, um, I think we will get to a place where, um, you will be able to, uh, be less careful as a software engineer, which right now is an incredibly, incredibly important, uh, skill. Um, and yeah, we'll move a little bit from carefulness and a little bit more towards taste.

    4. LR

      This, uh, makes me think of vibe coding. Is that kind of what you're describing when you talk about not having to think about the details as much and just kinda going with the flow?

    5. MT

      I, I think it's re- I think it's related. I think that vibe coding right now describes, um, exactly kind of this, this state of creation that, uh, uh, is pretty controversial where you're generating a lot of code and you aren't really understanding the details. That is, that is like a, a state of creation that then has, has lots of problems. Like you don't really... By, by not understanding the details under the hood right now, you then very quickly get to a place where you're kind of limited at a certain point where you create something that's big enough that, that you can't change. And so I think some of the, some of the, you know, ideas that we're interested around, you know, how do you give people, uh, continued control over all the details, um, you know, when they don't really understand the code. Like I think that, um, solutions there, um, are very relevant to, to the people who are vibe coding right now. I, you know, I think that, uh, right now, we, we kind of, we lack the ability to, you know, let the, the tastemakers actually have complete control over the software. And so, um, one of the, one of the issues also with, you know, with vibe coding and, and letting, letting taste really shine through from people is you can create stuff but a lot of it is the AI making decisions that are unwieldy and you don't have control over.

    6. LR

      One more question along these lines. You, you throw out this word taste. When you say taste, what are you thinking?

    7. MT

      I'm thinking having the right idea for, for what should be built. And then just it, it will become more and more about kind of effortless translation of here's exactly what you want built. Here's how you want everything to work. Here's how you want it to look. And then you'll be able to make that.... um, on a computer and it will less be about this kind of translation there of, like, you and your team have a picture of what you want to build and then you have to really painstakingly, labor intensive, like, lay out that into a format that a computer can then execute and interpret. And so, yeah, I think, you know, less is, less is on the UI side of things. Maybe taste is a little bit of a misnomer, um, but just about having the right idea for, for what should be built.

  4. 12:39–18:31

    Cursor’s origin story

    2. LR

      Awesome. Okay. I'm gonna come back to these topics, but I want to actually zoom us back out to the beginnings of Cursor. Uh, I have never heard the origin story. I don't think many people know how this whole thing started. Basically, you guys are building one of the fastest-growing products in the history of the world. It's changing the way people build products. It's changing careers, professions. It's ch- it's changing so much. How did it all begin? Any memorable moments along the journey of the early days?

    3. MT

      Cursor kind of started as a solution in search of a problem. Um, and, uh, a little, a little bit where it very much came from reflecting on, um, how AI was gonna get better, um, over the course of the next 10 years. And, um, there were, there were kind of two defining moments. One was, uh, being really excited by using the, the, the first beta version of the GitHub Copilot actually. This was the first time we had used an AI product that was really, really, really useful and, um, was, you know, actually just useful at all. Uh, and wasn't just a vaporware kind of demo thing. And in addition to being an A- you know, the first AI product that we had used that was useful, GitHub Copilot was also one of the most useful, if not the most useful dev tool we'd ever adopted. Um, and that got us really excited. You know, another moment that got us really excited was the series of scaling laws papers coming out of OpenAI and other places that showed that even if we had no new ideas, AI was gonna get better and better just by pulling on simple levers like scaling up the models and also scaling up the, the data that was going into the models. And so at the end of 2021, beginning of 2022, this got us excited about how, you know, AI products were now possible. This technology was gonna mature, uh, into the future. And it felt like when we looked around, there were lots of people talking about making models and there... It felt like people weren't really picking an area of knowledge work and thinking about what it was gonna look like as AI got better and better. And, um, you know, that set us on the, the path to like an, you know, kind of an idea generation exercise. It was like, you know, how are each th- these areas of knowledge work, uh, gonna change in the future as this tech gets more mature? Like what is the, you know, end state of the work gonna look like? Um, how are the, the tools that we use to do that work gonna change? 
      Um, how are the models gonna get, you know, need to get better to support, uh, changes in the work? And, you know, once scaling and pre-training ran out, like how are we gonna keep pushing for technological capabilities? And the misstep at, at the beginning of Cursor is we actually worked on, you know, we sort of did this whole grand exercise, uh, and we decided to work on, you know, uh, an area, uh, of knowledge work that we thought would be relatively uncompetitive and sleepy and, and boring. Uh, and you know, no one, no one would be looking at it 'cause, you know, we thought, "Oh, coding's great. You know, coding's totally interchangeable to AI," but you know, people are already doing that. And, uh, so there was a period of, you know, four months to begin with where we were actually working on a very different idea, which was helping to automate and augment mechanical engineering, uh, and building tools for mechanical engineers. You know, there were problems from the get-go in that. We had, uh, you know... Me and my co-founders, we, we weren't mechanical engineers. Um, you know, we had friends who were mechanical engineers, but, uh, we were, we were very much unfamiliar with the field. So there's a little bit of the blind men and the elephant problem from the get-go. Uh, you know, there were problems around, uh, you know, how would you actually take, take the models that exist today and make them useful for mechanical engineering? The way we netted out is you need to actually develop your own models from the get-go. And, you know, the way we did that was, uh, was tricky. And, you know, there's not a lot of, uh, data on the internet of, of, um, you know, 3D models of, uh, of different tools and parts and the steps that it took to build, build up to those 3D models. Uh, and then getting them from the sources that, that have them is, like, also a tricky process too. But, um, eventually what happened was, you know, we, we came to our senses. 
We realized we're not super excited about mechanical engineering. It's not the thing we wanna dedicate our lives to. Uh, and we looked around and, uh, in the area of programming, it felt like, you know, despite a, you know, a decent amount of time ensuing, uh, not much had changed. And it felt like the people that were working on the space maybe had a, had a disconnect with us and it felt like they weren't being sufficiently ambitious about, um, where everything was gonna go in the future and how kind of all of software creation was gonna flow through these models. Uh, and that's what set us off on the, the path to, to building Cursor.

    4. LR

      Okay. So interesting. Okay, so first of all, I love that there's this... This is advice that you often hear of go after a boring industry 'cause no one's gonna be there and there's opportunity and, you know, sometimes it works, but I love that (laughs) in this journey it's like, no, actually go after the hottest, most, uh, popular space, AI coding, app building, (laughs) and it worked out. And the way you phrased it just now is you didn't see as- enough ambition potentially, that you thought there was more to be done. So it feels like that's an interesting lesson if... Even if something looks like, okay, it's too late, there's GitHub Copilots out there, some other products. If you notice that they're just not as ambitious as they could be or as you are or you see almost a flaw in their approach, that there's still a big opportunity. Does that resonate?

    5. MT

      Uh, that totally resonates and I think it's, um... A part of it is you need there to, to be, like, leapfrogs that can happen, you need there to be things that you can do. And I think the exciting thing about, uh, about AI is, in a, in a bunch of places, and I think this is ac- you know, very much still true of our space, and you can talk about how we think about that and how we deal with that. But, um, you know, I think that the, just the ceiling is really high. And, um, yes, if you're, if you look around, uh, you know, probably even if you, you take the best tool in kind of, like, any of these, any of these fields, um, there should be a lot more that needs to be done over the next few years, and so... That, that space, having that space, having that, you know, high ceiling I think is, is unique, um, amongst areas of software, at least the degree to which it is high with AI.

    6. LR

      Let's come

  5. 18:31–22:39

    Why they chose to build an IDE

    1. LR

      back to the IDE question. So there's kind of a few routes you could have taken and other companies are doing different routes, so there's building an IDE for engineers to work within and adding AI magic to it. There's another route of just a full AI agentic, Devin sort of product. And then there's just, like, a model that is very good at coding and focusing on building the best possible coding model. What made you decide and see that the IDE path was the best route?

    2. MT

      The folks who were, from the get-go, working on just a, uh, a model or working on end-to-end, uh, automation f- of, of programming, I think, uh, they were trying to build something very different from us, which is we care about giving humans control over all of the decisions, um, in kind of the end tool that they're building. And I think th- those folks were very much thinking of a, of a future where kind of, you know, end-to-end the whole thing is done by AI, and maybe, like, the AI is making all the decisions too. And so one, there is kind of, like, a personal interest component. Two, I think that, uh, always we try to be, uh, intense realists about where the technology is today. You know, very, very, very excited about how AI is going to mature over the course of many decades. But, uh, you know, I think that sometimes, uh, people, you know, there's a, there's an instinct to, to see AI do magical things in one area and then kind of anthropomorphize these models and think, you know, it's better than a smart person here and so it must be better than a smart person there. But these things have massive issues. And, um, we, uh, from the, from the very start our, our product development process was really about dogfooding and using the tool intensely every day. And we, we never wanted to ship anything that wasn't, wasn't useful to us. And, you know, we had the benefit of doing that because we were the end users for our, of our product. And I think that that instills a realism in you around where the, where the tech is right now. And so, uh, that definitely made us think that we need the humans to be in the driver's seat, the AI cannot do everything. We're also interested in giving humans that control too for, for personal reasons. And so that, that gets you away from just you're a model company that also gets you away from just kind of this end-to-end stuff with- without the human having control. 
      And then the way you get to an IDE versus maybe a plug-in to an existing coding environment is, uh, the belief that, you know, programming is gonna flow through these models, and the act of programming is gonna change a lot over the course of the next few years. And the extensibility that existing coding environments have is so, so, so limited. So if you think that the UIs may change a lot, if you think that the form factor of programming is gonna change a lot, necessarily you need to have control over the entire application.

    3. LR

      I know that you guys today have an IDE and, uh, and that's probably the bias you have of this is maybe where the future is heading. But I'm just curious, do you think a big part of the future is also going to be AI engineers that are just sitting in Slack and just doing things for you? Is that something that fits into Cursor one day?

    4. MT

      I think you'll want the ability to move between all these things fairly effortlessly. And sometimes I think you will want to have the thing kind of go spin off on its own for a while. And then I think you'll want the ability to pull in the AI's work and then work with it very, very, very quickly, right? And then maybe have it go spin off again. And so these, like, kind of background versus foreground form factors, I think you want that all to work well in one place. And, uh, I think the background stuff, there's, like, a segment of programming that it's especially useful for, which is the type of programming tasks where it's very easy to specify exactly what you want, um, you know, without much description and exactly what correctness looks like without much description. And often that's, uh, bug fixes are kind of like the, are, are a great example of that. But it's definitely not all of programming. So I think that, w- you know, what the IDE is, uh, will totally change over time, and kind of our approach to, you know, having our own editor, uh, was premised on: it's gonna have to evolve over time. And I think that that will both include you can spin off things from different surface areas like Slack or your issue tracker or whatever it is. And I think that will also include, like, you know, the pane of glass that you're staring at is gonna change a lot. Um, and, you know, we just mostly think of an IDE as the place where you are building software.

    5. LR

      I think something

  6. 22:39–24:31

    Will everyone become engineering managers?

    1. LR

      people don't talk enough about w- with talking about agents and all these, uh, AI engineers that are gonna be doing all this stuff for you is basically we're all becoming, uh, engineering managers with a lot of reports that are just, like, not that, not that smart, and you have to do a lot of reviewing and approving and specifying. I guess thoughts on that, and is there anything you could do to make that easier? 'Cause that sounds really hard. Like, anyone that has a large team, has had a large team, being like, "Oh my god, all these, uh, junior people just checking in with me, doing not-high-quality work over and over." It's just like, ugh, that life, it kinda sucks.

    2. MT

      Yeah. Yeah. Maybe eventually one-on-ones with, uh, with AI agents. Um, some-

    3. LR

      So many one-on-ones.

    4. MT

      Um, uh, yeah. So the, the customers we've seen have, uh, most success with AI I think are still fairly conservative about some of the ways in which, in which they, they use this stuff. And so I do think today that the most successful customers really lean on things like, um, you know, our next, uh, edit prediction, where, you know, you're coding as normal and we're predicting the next sequence of actions you're gonna do. And then they also really lean on, like, scoping down the stuff that you're gonna hand off, uh, to the bot. And, you know, for a fixed percent of your time spent reviewing code from an agent, um, or from an AI overall, there's kind of two patterns. One is you could, you know, spend a bunch of time specifying things up front, the AI goes and works, and then you then go and review the AI's work and then you're done. That's the whole task. Or you could really chop things up, right? So you can, you know, specify a little bit, AI writes something, review. Specify a little bit, AI writes something, review. And that's kind of, you know, autocomplete's all the way on that end of the spectrum. And, um, still we see, uh, often the most, uh, successful people, um, using these tools are, are, are chopping things up right now, uh, and keeping things fairly small.

    5. LR

      That sounds less, less terrible. I'm gr- I'm glad there's a solution here.

  7. 24:31–26:45

    How they decided it was time to ship

    1. LR

      I wanna go back to you guys building Cursor for the first time. What was the point where you realized this is ready? What was kind of a moment of like, "Okay, I think this is time to put it out there and see what happens?"

    2. MT

      So when we started building Cursor, um, we were, uh, fairly paranoid about spinning for a while without releasing to the world. And so to, to begin with too, we actually... the, the first version of Cursor was, was hand-rolled. We, um... now we, we use, uh, VS Code kind of as a base, like many browsers use Chromium as a base, um, and have forked off of that. Uh, to begin with, we, we didn't, and built a prototype of Cursor from scratch, and that involved a lot of work. We had to build our own... uh, you know, there are a lot of things that go into, you know, a modern code editor, uh, including, um, you know, support for many different languages and, um, navigation support for moving amongst the language, you know, error checking support for things. There's, you know, things like, you know, an integrated command line, and, you know, the ability to connect to remote servers to, to view and run code. And so we kind of just like went on this blitz of building things in- incredibly quickly, building kind of our own, uh, editor from scratch and then also the AI components. And, um, it was after maybe five weeks that we were living on the editor full-time and, you know, had thrown away our previous editor, uh, and were, were using a new one. And then once it got to a point where we found it a bit useful, then we put it in other people's hands and had this like very short beta period, and then we launched it out to the world within, uh, a couple of months from the first line of code. I, I think it was probably, probably three months. And it was definitely a like, you know, let's, let's just get this out to people and build in public quickly. The thing that took us by surprise is we thought we would be building for a couple of hundred people for a long time. 
And, you know, for- from the get-go there, there was kind of an immediate crush of interest and a lot of feedback too. Uh, and, you know, that was super helpful. We learned from that, and that's actually, you know, why we switched to being based off of VS Code instead of just, you know, this hand-rolled thing. Uh, a lot of that was motivated by kind of the initial user feedback and, uh, you know, and then had been iterating in, in public, uh, from there.

    3. LR

      I like

  8. 26:4532:03

    Reflecting on Cursor's success

    1. LR

      how you understated the traction that you got. I think you guys went from $0 to $100 million ARR in like a year, year and a half, which is historic. What do you think was the key to the success of something like this? You talked about dogfooding being a big part of it. You built it in three months. That's insane. (laughs) What do you think is the secret to your success?

    2. MT

      The first version was not... the three-month version wasn't very good. So I think it's been a sustained paranoia that there are all of these ways in which this thing could get better. The end goal is really to invent a very new form of programming that involves automating a lot of coding as we know it today. And no matter where we are with Cursor, it feels like we're very, very far away from that end goal. So there's always a lot to do, but a lot of it hasn't been over-rotating on that initial push. Instead it's been the continued evolution of the tool, just making the tool consistently better.

    3. LR

      Was there an inflection point after those three months where things just started to really take off?

    4. MT

      To be honest, it felt fairly slow to begin with. (laughs) Maybe that comes from some impatience on our part. There's the overall speed of the growth, which continues to take us by surprise. But one of the things that has been most surprising is that the growth has been fairly consistent on an exponential, just consistent month-over-month growth, accelerated at times by launches on our part and other things. An exponential to begin with feels fairly slow when the numbers are really low, and so it didn't really feel off to the races to begin with.
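To make that concrete with made-up numbers (Cursor hasn't published its month-by-month figures, so the starting revenue and growth rate below are purely illustrative): a steady month-over-month multiplier produces tiny absolute gains early and enormous ones later, which is why the same exponential can feel slow at first.

```python
# Illustrative only: hypothetical figures showing why steady exponential
# growth "feels slow" early on. Assume $50k of monthly revenue growing
# 40% month-over-month for 18 months.
mrr = [round(50_000 * 1.4**m) for m in range(19)]

print(mrr[1] - mrr[0])    # month 1 adds only $20k in absolute terms
print(mrr[18] - mrr[17])  # the same 40% rate adds millions by month 18
```

The rate never changes; only the base does, so the curve looks flat for months before it looks like a rocket.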

    5. LR

      To me, this sounds like build it and they will come actually working. You guys just built an awesome product that you loved, yourselves as engineers. You put it out and people just loved it, told everyone about it.

    6. MT

      It's been essentially all just that: the team working on the product and making the product good, in lieu of other things one could spend one's time on. We definitely spent time on tons of other things. Building the team was incredibly important, for instance, and things like support rotations are very important. But some of the normal things that people would maybe reach for in building the company early on, we really let those fires burn for a long time, especially when it came to sales and marketing. Just working on the product, building a product that you and your team like, and then adjusting it for some set of users, can sound simple, but it's hard to do well. There are a bunch of different product directions one could have run in, and I think focus, strategically picking the right things to build and prioritizing effectively, is tricky. Another thing that's tricky about this domain is that it's a new form of product building that's very interdisciplinary: we're something in between a normal software company and a foundation model company. We're developing a product for millions of people, and that side of things has to be excellent. But one important dimension of product quality is doing more and more on the science and the model side of things, in the places where it makes sense. Doing that well too has been tricky. 
But yeah, the overall thing I would note is that some of these things sound simple to specify, but doing them well is hard, and there are a bunch of different ways you can run at it.

    7. LR

      I'm excited to have Andrew Luo joining us today. Andrew is CEO of OneSchema, one of our longtime podcast sponsors. Welcome, Andrew.

    8. AL

      Thanks for having me, Lenny. Great to be here.

    9. LR

      So what is new with OneSchema? I know that you work with some of my favorite companies like Ramp and Vanta and Watershed. I heard you guys launched a new data intake product that automates the hours of manual work that teams spend importing, mapping, and integrating CSV and Excel files.

    10. AL

      Yes. So we just launched the 2.0 of OneSchema FileFeeds. We have rebuilt it from the ground up with AI. We saw so many customers coming to us with teams of data engineers that struggled with the manual work required to clean messy spreadsheets. FileFeeds 2.0 allows non-technical teams to automate the process of transforming CSV and Excel files with just a simple prompt. We support all of the trickiest file integrations, SFTP, S3, and even email.

    11. LR

      I can tell you that if my team had to build integrations like this, how nice would it be to take this off our roadmap and instead use something like OneSchema?

    12. AL

      Absolutely, Lenny. We've heard so many horror stories of outages from even just a single bad record in transactions, employee files, purchase orders, you name it. Debugging these issues is often like finding a needle in a haystack. OneSchema stops any bad data from entering your system and automatically validates your files, generating error reports with the exact issues in all bad files.

    13. LR

      I know that importing incorrect data can cause all kinds of pain for your customers and quickly lose their trust. Andrew, thank you so much for joining me. If you wanna learn more, head on over to OneSchema.co. That's OneSchema.co.

  9. 32:0334:02

    Counterintuitive lessons on building AI products

    1. LR

      What is the most counterintuitive thing you've learned so far about building Cursor, building AI products?

    2. MT

      I think one thing that's been counterintuitive for us, and I hinted at it a little bit before: we definitely didn't expect to be doing any of our own model development when we started. As mentioned, when we got into this, there were companies that were immediately, from the get-go, focusing on-

    3. LR

      Mm-hmm.

    4. MT

      ... kind of training a model from scratch. We had done the calculation for what it would take to train GPT-4 and just knew that was not going to be something we were going to be able to do. It also felt a little bit like focusing one's attention in the wrong area, because there are lots of amazing models out there, and why do all this work to replicate what other players have done, especially on the pre-training side of things: taking a neural network that knows nothing and teaching it the whole internet. So we thought we weren't going to be doing that at all. And it seemed clear to us from the start that there were lots of things the existing models could be doing for us that they weren't, because the right tool hadn't been built for them. In fact, though, we do a ton of model development. Internally it's a big focus for us on the hiring front, and we've assembled a fantastic team there. It's also been a big win on the product quality side of things for us. At this point, every magic moment in Cursor involves a custom model in some way. So that was definitely counterintuitive and surprising. It's been a gradual thing: there was an initial use case for training our own model where it really didn't make sense to use any of the biggest foundation models. That was incredibly successful. We moved to another use case that worked really well, and we've been going from there. One of the helpful things in doing this sort of model development is picking your spots carefully: not trying to reinvent the wheel, not trying to focus on places where the best foundation models are excellent, but

  10. 34:0238:42

    Inside Cursor's stack

    1. MT

      instead kind of focusing on their weaknesses and how you can complement them.

    2. LR

      I think this is gonna be surprising to a lot of people, hearing that you have your own models. When people talk about Cursor and all the folks in the space, they would kind of call them GPT wrappers, just sitting on top of ChatGPT or Sonnet, and what you're saying is that you have your own models. Talk about the stack behind the scenes.

    3. MT

      Yeah, of course. So we definitely use the biggest foundation models in a bunch of different ways. They're really important components of bringing the Cursor experience to people. As for the places where we use our own models: sometimes it's to serve a use case that a foundation model wouldn't be able to serve at all, for cost or speed reasons. One example of that is the autocomplete side of things. This can be a little bit tricky for people who don't code to understand, but code is this weird form of work where sometimes the next 5, 10, 20, 30 minutes of your work is entirely predictable just from looking over your shoulder. I would contrast this with writing. Lots of people are familiar with Gmail's autocomplete and the different forms of autocomplete that show up when you're composing text messages or emails. Those can only be so helpful, because often it's just not clear what you're going to write next from looking at what you've written before. But in code, sometimes when you edit one part of a codebase, you're going to need to change things in other parts of the codebase, and it's entirely clear how you're going to need to change them. So one core part of Cursor is this really souped-up autocomplete experience where we predict the next set of things you're going to do, across multiple files and across multiple places within a file. Making models good at that use case: one, there's a speed component, those models need to be really fast, they need to give you a completion within 300 milliseconds. There's also a cost component: we're running tons and tons and tons of models. Every keystroke, we need to be updating our prediction of what you're going to do next. 
And then it's also this really specialty use case: you need models that are really good not at completing the next token of a generic text sequence, but at autocompleting a series of diffs, looking at what's changed within a codebase and then predicting the next set of things that are going to change, both deletions and additions and all of that. We've found a ton of success in training models specifically for that task. So that's a place where no foundation models are involved; it's our own thing. We don't have a lot of labeling or branding about this in the app, but it powers a very core part of Cursor. And then another set of places where we're using our models is to help things like Sonnet or Gemini or GPT. Those sit both on the input of those big models and on the output. On the input side, our models are searching throughout a codebase, trying to figure out which parts of the codebase to show to one of these big models. You can think of it as a mini Google search that's specifically built for finding the relevant parts of a codebase to show one of these big models. And then on the output side, we take the sketches of the changes that these models are suggesting you make to that codebase, and we have models that fill in the details. The high-level thinking is done by the smartest models, they spend a few tokens doing that, and then these smaller, incredibly fast specialty models, coupled with some inference tricks, take those high-level changes and turn them into full code diffs. 
And so it's been super helpful for pushing on quality in places where you need a specialty task, and it's been super helpful for pushing on speed, which is such an important dimension of product quality for us too.
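Cursor's internals aren't public, so the following is a rough illustration only. The ensemble pattern described above (a retrieval model picks codebase context, a frontier model sketches a high-level change, and a fast specialty model expands it into a concrete diff) might be wired together like this minimal Python sketch, where every function is a hypothetical stub standing in for a trained model:

```python
import re

# Rough sketch of the "ensemble of models" pipeline described above.
# Every function is a hypothetical stub; in a real system each stage
# would be a model: retrieval, frontier planner, fast diff-expander.

def retrieve_context(codebase: dict, query: str, k: int = 1) -> list:
    """Input side: a mini 'Google search' over the codebase, here faked
    by ranking files on naive keyword overlap with the user's request."""
    terms = set(re.findall(r"\w+", query.lower()))
    def score(path):
        return len(terms & set(re.findall(r"\w+", codebase[path].lower())))
    return sorted(codebase, key=score, reverse=True)[:k]

def sketch_change(query: str, files: list) -> dict:
    """Frontier-model stand-in: spend a few 'tokens' on a high-level
    plan rather than emitting full code."""
    return {"file": files[0], "plan": f"apply request: {query}"}

def expand_to_diff(plan: dict, codebase: dict) -> str:
    """Output side: fast specialty-model stand-in that turns the
    high-level plan into a concrete diff against the current file."""
    old = codebase[plan["file"]]
    return f"--- {plan['file']}\n- {old}\n+ {old}  # {plan['plan']}"

# Toy two-file codebase (names invented for the example).
codebase = {
    "billing.py": "def charge(user, amount): ...",
    "auth.py": "def login(user, password): ...",
}
query = "add logging to the charge function in billing"
files = retrieve_context(codebase, query)
diff = expand_to_diff(sketch_change(query, files), codebase)
print(files)  # the billing file ranks first for a billing-related query
print(diff)
```

The design point the sketch tries to capture is the cost split Michael describes: the expensive model emits only a few high-level tokens, while cheap, fast models handle context selection on the way in and full-diff generation on the way out.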

    4. LR

      This is so interesting. I just had Kevin Weil on the podcast, CPO of OpenAI, and he calls this the ensemble of models. That's-

    5. MT

      Yes, yeah.

    6. LR

      ... the same way they work: use the best feature of each one and, to your point, the cost advantages of using cheaper models. These other models, are they based on things like LLaMA, open source models that you guys plug into and build on?

    7. MT

      Yeah. Again, we try to be very pragmatic about the ways we do this work, and we don't want to reinvent the wheel. So we start from the very best pretrained models that exist out there, often open source ones, and sometimes in collaboration with big model providers that don't share their weights out into the world. The thing we care about isn't the ability to read, line by line, the matrix of weights that goes into giving you a certain output. We just care about the ability to train these things, to post-train them. So by and large, yes, open source models, and sometimes working with closed source providers too, to tune

  11. 38:4246:13

    Defensibility and market dynamics in AI

    1. MT

      things.

    2. LR

      This leads to a discussion that a lot of AI founders, and investors, always think about: moats and defensibility in AI. It feels like custom models are one moat in the space. How do you think about long-term defensibility in this space, knowing there are other folks, as you said, launching constantly, trying to eat your lunch?

    3. MT

      I think there are ways to build in inertia and traditional moats. But by and large, we're in a space where it is incumbent on us, and everyone in this industry, to continue to try to build the best thing. I truly just think the ceiling is so high that no matter what entrenchment you build, you can be leapfrogged. And I think this resembles markets that are a little different from normal software markets, from normal enterprise markets of the past. One that comes to mind is the market for search engines at the end of the '90s and beginning of the 2000s. Another market that resembles this one in many ways is the development of the personal computer and minicomputers in the '70s, '80s, and '90s. In each of those markets, the ceiling was incredibly high. It was possible to switch. You could keep getting value from the incremental hour of a smart person's time, the incremental R&D dollar, for a really long time. You wouldn't run out of useful things to build. And in search in particular, not in the computer case, having distribution was helpful for making the product better too, in that you could tune the algorithms and the learning based off the data and feedback you were getting from users. I think all of those dynamics exist in our market too. So maybe the sad truth for people like us, but the amazing truth for the world, is that there are many leapfrogs possible. There are many more useful things to build. 
We're a long way away from where we can be in five, ten years, and it's incumbent on us to keep that engine going.

    4. LR

      So what I'm hearing is, this sounds a lot more like a consumer sort of moat, where you just have to be the best thing consistently so that people stick with you, versus creating lock-in, like Salesforce, where there's a contract with the entire company and you have to use the product.

    5. MT

      Yeah, and I think the important thing to note is, if you're in a space where you run out of useful things to do very quickly, that's not a great situation to be in. But if you're in a place where big investments, where having more and more great people working on the right path, can keep giving you value, then you can get these economies of scale of R&D. You can work deeply on the technology in the right direction and get to a place where that is defensible. But yes, I think there's a consumer-like tendency to it, and I really think it's about building the best thing possible.

    6. LR

      Do you think in the future there's one winner in this space or do you think it's gonna be a world of a number of products like this?

    7. MT

      I think the market we're in is just so very big. This is also one thing, you asked about the IDE thing early on, that I think tripped up some people thinking about this space: they looked at the IDE market of the past 10 years and said, "Who's making money off of editors?" It's this super fragmented space where everyone kind of has their own thing with their own configuration, and there's one company that commercially actually makes money off of making great editors, but that company is only so big. And the conclusion was that it was going to look like that in the future. I think what people missed was that there was only so much you could do building an editor for coders in the 2010s. The company that made money off of editors was doing things like making it easy to navigate around a codebase, doing some error checking and type checking, having good debugging tools, which were all very useful. But I think the set of things you can build for programmers, and for knowledge workers in many different areas, just goes very far and very deep. The problem in front of all of us is the automation of a lot of busywork in knowledge work, and really changing all the areas of knowledge work in front of us to be much higher level and more productive. So that was all a long-winded way to say I think the market we're in is really, really big. I think it's much bigger than people have realized from building tools for developers in the past. And I think there will be a bunch of different solutions. 
I think there will be one company, and to be determined if it's going to be us, but I do think there will be one company that builds the general tool that builds almost all the world's software, and that will be a very, very generationally big business. But there will also be niches you can occupy, doing something for a particular segment of the market or for a very particular part of the software development life cycle. For the general shift, where programming moves from just writing formal programming languages to something way higher level, and this is the application you purchase and use to do that, I think there will generally be one winner there, and it will be a very big business.

    8. LR

      Juicy. On those lines, it's interesting that Microsoft was actually right at the center of this first, with an amazing product and amazing distribution. Copilot, you said, was the thing that got you over the hump of, wow, there could be something really big here. And it doesn't feel like they're winning. It feels like they're falling behind. What do you think happened there?

    9. MT

      I think there are specific historical reasons why Copilot might not so far have lived up to the expectations some people had for it, and then there are structural reasons. And to be clear, Microsoft, in the Copilot case, was obviously a big inspiration for our work. In general, I think they do lots of awesome things, and we are users of many Microsoft products. But I think this is a market that's not super friendly to incumbents. A market that's friendly to incumbents might be one where there's only so much to do, it gets commoditized fairly quickly, you can bundle it in with other products, and the ROI difference between products is quite small. In that case, perhaps it doesn't make sense to buy the innovative solution; it makes sense to just buy the thing that's bundled in with other stuff. Another market that might be particularly helpful for incumbents is one where, from the get-go, you have your stuff in one place and it's really, really excruciatingly hard to switch. For better or for worse, I think in our case you can try out different tools and decide which product you think is better. That's not super friendly to incumbents; it's more friendly to whoever you think is going to have the most innovative product. And then the specific historical reasons, as I understand them, are that the group of people who worked on the first version of Copilot have by and large gone on to do other things at other places. I think it's been a little hard to coordinate among all the different departments and parties that might be involved in making

  12. 46:1351:25

    Tips for using Cursor

    1. MT

      something like this.

    2. LR

      I wanna come back to Cursor. A question I like to ask everyone building a tool like this: if you could sit next to every new user using Cursor for the first time and just whisper a couple tips in their ear to be most successful with Cursor, what would be one or two tips?

    3. MT

      I think right now, and we want to fix this at a product level, a lot of being successful with Cursor is having a taste for what the models can do: what complexity of task they can handle, how much you need to specify to the model, where its gaps exist, what it can do and what it can't. Right now we don't do a good job on the product of educating people around that, of giving people some swim lanes, some guidelines. So to develop that taste, I would give two tips. One, as mentioned before, I would bias away from trying, in one go, to tell the model, "Here's exactly what I want you to do" for an entire big task, then seeing the output and either being disappointed or accepting the entire thing. Instead, I would chop things up into bits. You can spend basically the same amount of time specifying things overall, but chopped up more: you specify a little bit, you get a little bit of work, you specify a little bit, you get a little bit of work. Writing one giant spec telling the model exactly what to do is a little bit of a recipe for disaster right now. 
So bias toward chopping things up. At the same time, and it might make sense to do this on a side project and not on your professional work, I would encourage people, especially developers who are used to existing workflows for building software, to explicitly try to fall on their face and discover the limits of what these models can do, by being ambitious in a safe environment, like a side project, and trying to use AI to the fullest. A lot of the time we run into people who haven't yet given the AI a fair shake and are underestimating its abilities. So generally bias toward chopping things up and making things smaller, but to discover the limits of what you can do, explicitly go for broke in a safe environment and get a taste for it. You might be surprised by some of the places where the model doesn't break.

    4. LR

      What I'm essentially hearing is, kind of build a gut feeling of what the model can do and how far it can take an idea, versus just kind of guiding it along. And I bet that you need to rebuild this gut every time there's a new model launch. Like when 4.0 comes out, you have to kind of do this again. Is that generally right?

    5. MT

      Yes. For the past few years, it hasn't been as big a shift as people's first experience with some of these big models. This is also a problem we would hope to solve much better for users, to take the burden off of them. But yeah, each of these models has slightly different quirks and a different personality.

    6. LR

      Kind of along these lines, something that people are always debating. Tools like Cursor, are they more helpful to junior engineers or are they more helpful to senior engineers? Do they make senior engineers 10X better? Do they make junior engineers more like senior engineers? Where do you think most of... Who do you think benefits most today from Cursor?

    7. MT

      I think both of these cohorts benefit in big ways across the board. It's a little hard to say on the relative ranking. I will say they fall into different anti-patterns. The junior engineers we see going a little too wholesale, relying on AI for everything, and we're not yet in a place where you can do that end-to-end on professional work, you know, working with tens or hundreds of other people in a large codebase. And then the senior engineers, for many folks... It's not true for all.

    8. LR

      Mm-hmm.

    9. MT

      Often, one of the ways these tools are adopted is through developer experience teams within companies. Those are often staffed by incredibly senior people, because they're building tools to make the rest of the engineers within an organization more productive. And we've seen some very boundary-pushing people there, people on the front lines of really trying to adopt the technology as much as possible. But by and large, I would say on average as a group, the senior engineers underrate what AI can do for them and stick to their existing workflows. So the relative ranking's a little hard. They fall into different anti-patterns, but they both by and large get big benefits from these tools.

    10. LR

      That makes absolute sense. I love that it's two ends of the spectrum: expect too much, don't expect enough. It's like the three bears. (laughs) Is that the allegory?

    11. MT

      (laughs) Yeah.

    12. LR

      Yeah. Okay.

    13. MT

      Yeah. Maybe the sort of senior-but-not-staff engineer-

    14. LR

      Yeah.

    15. MT

      ... is right in the middle.

    16. LR

      Hmm. Interesting.

  13. 51:2559:10

    Hiring and building a strong team

    1. LR

      Okay. Just a couple more questions. What's something that you wish you knew before you got into this role? If you could go back to Michael at the beginning of Cursor, which was not that long ago, and give him some advice, what's something that you would tell him?

    2. MT

      The tough thing with this is that it feels like so much of the hard-won knowledge is tacit and a bit hard to communicate verbally. The sad fact of life, for some areas of human endeavor, is that you either need to fall on your face to learn the correct thing, or you need to be around someone who's a great example of excellence in that thing. One area where we have felt this is hiring. We tried to be incredibly patient on the hiring front. It was really important to us, both for personal reasons and, I think, for the company's strategy, to have a world-class group of engineers and researchers working on Cursor with us. Also, getting people who fit a certain mix of intellectual curiosity and experimentation, because there were going to be so many new things we'd need to build, and also an intellectual honesty, maybe a micro-pessimism and bluntness, because with all the noise, especially as the company and the business have grown, keeping a level head is incredibly important too. Getting the right group of people into the company was the thing that, maybe more than anything else apart from building the product, we really, really fussed over. We actually waited a long time to grow the team because of that. Many people, you hear, hired too fast. I think we actually hired too slow to begin with. I think it could have been remedied; I think we could have been better at it. 
And, um, you know, the- the method of, uh, uh, of recruiting that we ended up, uh, eventually falling into and working really well for us, which- which isn't that novel of like going after people that we think are really world class and like recruiting them over the course of, in some cases m- many years, uh, ended up working for us in the end. But I- I don't think we were very good at it to begin with. And so I think that there were hard won lessons around both who was the right profile, like who actually made sense on that team, like what did- what did greatness look like? Um, and then how to, you know, um, talk with someone, um, about- about the opportunity and, you know, get them excited if they really weren't looking for anything. Um, there- there were lots of kind of, uh, learnings there about how to do that well, um, and that took us a bit of time.

    3. LR

      What are some of those learnings, for folks that are hiring right now, about something you missed or learned?

    4. MT

      To start with, we biased a little bit too much towards looking for people who fit the archetype: well-known school, very young, had done the high-credential things in those well-known-school environments. And actually, we were lucky early on to find fantastic people who were willing to do this with us who were later-career. So I think we spent a bunch of time on maybe a little bit the wrong profile to begin with. Part of that was the seniority thing; part of that was an interest-and-experience thing too. We have hired people who are excellent, excellent, excellent and very young, but in some cases they look slightly different from being straight out of central casting. Another lesson is that we very much evolved our interview loop. So now we have a hand-rolled set of interview questions, and core to how we interview is that we actually have people on site for two days to do a work-test project with us. That has worked really well, and we're increasingly refining it. And then learning how to find out what people are interested in, put our best foot forward, and let them know about the opportunity when they're really not looking for anything, and have those conversations: we've definitely gotten better at that over time.

    5. LR

      Do you have a favorite interview question that you like to ask?

    6. MT

      I think this two-day work test, which we thought would not scale past a few people, has had surprising staying power. The great thing about it is that it lets someone go end to end on a real project. It's not work that we use; it's kind of a canned set of projects. But it gives you two days of seeing a real work product. And it doesn't have to be incredibly time-intensive for the team: you can take the time you would spend on a half-day or one-day on-site, spread it out over those two days, and give someone a lot of time to work on their project. So that can actually help it scale. It also helps you enforce the "do you want to be around this person" type of test, because you are around this person for two days and have a bunch of meals with them. We didn't expect that one to stick around, but it has been really, really important to our evaluation process. It's also been important to getting people excited, especially at the very early stages of the company, because before people are using the product and know about it, and when the product is comparatively not very good, really the only thing you have going for you is a team of people that some people find special and want to be around. The two days would give us a chance to have this person meet us and, in some cases, hopefully get convinced that they want to throw in with us. So yeah, that one was unexpected. Not exactly an interview question, but kind of a form of interview question.

    7. LR

      The ultimate interview question. So just to be very clear about what you're describing: you give them an assignment, like build this feature in our actual code base, work with the team to code it and ship it. Is that roughly right?

    8. MT

      Yes, although we don't use the IP; it's not shipped end-to-end.

    9. LR

      Mm-hmm.

    10. MT

      But yeah, it's like a mock project, very often in our code base: here's a real mini two-day project, you're going to do it end-to-end, largely being left alone, though there's collaboration too. And we're a pretty in-person company, so in almost all cases it's actually just sitting in the office with us too.

    11. LR

      And you've been saying that this has scaled even to today. How big are you guys at this point?

    12. MT

      So we are going on 60 people.

    13. LR

      So small for the scale and impact.

    14. MT

      Yeah.

    15. LR

      That's... I was thinking it'd be a lot larger than that.

    16. MT

      Yeah. Uh-

    17. LR

      And I imagine the largest percentage is engineers.

    18. MT

      Yeah. And to be clear, a big part of the work ahead of us is building a group of people that is bigger and can continue to make the product, and the service we give to customers, better, so we don't plan to stay that small for long. But part of the reason that number is small is that the percentage of engineering, research, and design is very high within the company. Many software companies with roughly 40 engineers would be over 100 people, because there's lots of operational work, and often they're very, very sales-led from the get-go, which is just quite labor-intensive. We started from a place of being incredibly lean and product-led. We now serve lots of upmarket customers and have built that out, but there's much more to do there.

  14. 59:101:02:31

    Staying focused amid rapid AI advancements

    1. MT

    2. LR

      A question I wanted to ask you: there's so much happening in AI, things launching every day; there are many newsletters whose entire function is to tell you what is happening in AI every single day. Running a company that's at the white-hot center of this space, how do you stay focused, and how do you help your team stay focused and heads-down, just building, not getting distracted by all these shiny things?

    3. MT

      I think hiring is a big part of it, getting people with the right attitude. And all of this should be asterisked: I think we're doing well there, but we could probably be doing better, and it's something we should probably talk even more about as a company. But hiring people with the right disposition, people who are less focused on external validation, more focused on building something really great and doing really high-quality work, and who are generally level-headed, where the highs aren't very high and the lows aren't very low, can get you through a lot here. That's actually a learning throughout the company: you need process, you need hierarchy, you need lots of things, but for any organizational tool you're introducing into a company, you can go pretty far by hiring people with the behaviors you want to result from that organizational thing. The specific example that comes to mind is that we've been able to get away with not a ton of process yet on the engineering front, for our size, by hiring people who I think are really excellent, though I think we need a little bit more process now. So one is hiring people who are level-headed. Two is just talking about it a lot. Three is, hopefully, leading by example. And for us personally, we've been professionally working on this, working on AI, since 2021, 2022.
And we've seen a sea change in the comings and goings of various technologies and ideas. If you transport yourself back to the end of 2021, the beginning of 2022: this is GPT-3. InstructGPT doesn't exist. There's no DALL·E. There's no Stable Diffusion. And then we've gone through all of those image technologies existing, ChatGPT and that rise, GPT-4, all of these new models, all these different modalities, all the video stuff. And only a very small number of these things really affect the business. So I think we've built up a little bit of an immune system and know when an event comes around that actually is really going to matter for us. This dynamic, of there being lots and lots of chatter but maybe only a few things that really matter, has been mirrored in AI over the last decade, where there have been so many papers on deep learning and AI in academia, but a lot of the progress of AI can be attributed to some very simple, elegant ideas that have stayed around. The vast majority of ideas that have been put out there haven't had staying power and haven't mattered a ton. And so the dynamic is a little bit mirrored in

  15. 1:02:311:10:28

    Final thoughts and advice for aspiring AI innovators

    1. MT

      the evolution of deep learning as a field overall.

    2. LR

      Last question. What do you think people still most misunderstand, or maybe don't fully grasp, about where things are heading with AI, in building, in the way the world will change?

    3. MT

      People are still a little bit too occupied with either end of a spectrum: either it's all going to happen very fast, or this is all bluster and hype and snake oil. I think we're in the middle of a technology shift that's going to be incredibly consequential. I think it's going to be more consequential than the internet, more consequential than any shift in tech we've seen since the advent of computers. And I think it's going to take a while; it's going to be a multi-decade thing, and many different groups will be consequential in pushing it forward. To get to a world where computers can do more and more and more for us, there are all of these independent problems that need to be knocked down, and progress needs to be made on them. Some of those are on the science side: getting these models to understand different types of data, be faster, cheaper, smarter, conform to the modalities that we care about, take actions in the real world. And some of it's on the side of how we're going to work with them: what's the experience a human should actually be seeing and controlling on a computer when working with these things? But it's going to take decades, and there's going to be lots of amazing work to do. A pattern of a group that I think will be especially important here, not to talk our own book, is the company that works on automating and augmenting a particular area of knowledge work and builds both the technology under the surface for that,
integrating the best parts from providers, sometimes doing it in-house, and also builds the product experience for that. People who do that, and we're trying to do it in software while people do it in other areas, will be really, really consequential, not just for the end value that users see. As they get to scale, they'll be really important for pushing forward the technology, because the most successful of them will be able to build very, very big businesses. So I'm excited to see the rise of other companies like that in other areas.

    4. LR

      I know you guys are hiring. For folks that are interested in, "Hey, I want to go work here and build this sort of stuff," what kind of roles are you looking for right now? Any roles you're most excited about filling ASAP? What should people know if they're curious?

    5. MT

      There are so many things that this group of people needs to do that we are not yet equipped to do. So, kind of generic across the board, first of all. If you don't think we have a role for something, maybe you should reach out; that might not actually be the case, and maybe we can learn from you and decide that we need something we weren't yet aware of. But by and large, two of the most important things for us to do this year are to have the best product in the space and then grow it. We're in this land-grab mode where almost everyone in the world is either using no tool like ours or using one that's maybe developing less quickly. So growing Cursor, too, is a big goal. We're especially always on the hunt for excellent engineers, designers, and researchers, but we're focusing across the business side too.

    6. LR

      I can't help but ask this question now that you mention engineers. There's this question of, you know, AI is going to write all our code, but everyone's still hiring engineers like crazy, all the foundational models-

    7. MT

      Yeah, we're not-

    8. LR

      ... so many open roles.

    9. MT

      ... out there tooting the horn of "people don't need to learn to code," so yeah.

    10. LR

      Do you think there's going to be an inflection point where engineering roles start to slow down? I know this is a big question, but do you see engineers being more and more needed across all these companies, or do you think at some point there are all these Cursor agents running, building for us?

    11. MT

      Again, we kind of have the view that there's this long, messy middle. It's not jumping straight to a world where you just step back, ask for all your stuff to be done, and that's your engineering department. Very much, we want to evolve from programming as it exists today; we want humans to be in the driver's seat. Even in the end state, we think giving folks control over everything is really important, and you will need professionals to do that and to decide what the software looks like. So yes, engineers are definitely needed. I think engineers will be able to do much more, and I think the demand for software is very lasting. That's not the most novel thing, but it's kind of crazy to think about how expensive and labor-intensive it is to build things that are pretty simple and easy to specify, or would look that way to the outside observer, and just how hard those things are to do right now. If you could bring down by orders of magnitude the cost of all the stuff that's justified by the cost and demand we have now, you would have tons and tons more stuff that we could do on our computers, tons more tools. And I've felt this: one of my early jobs was actually working for a biotechnology company, building internal tools for them. The off-the-shelf tools that existed were horrible and did not fit their use case at all, and for the internal tools I was building, there was definitely a ton of demand, far outstripping the things I could build in the time I was with them.
The physics of working on computers are so great that you should be able to basically move everything around and do everything you want to do, but there's still so much friction. I think there's much more demand for software than what we can build today, with simple productivity software costing something like a blockbuster movie to make. And so long into the future, yes, I think there will actually be more demand for engineers.

    12. LR

      Is there anything that we didn't cover that you wanted to mention? Any last nugget of wisdom you wanted to leave listeners with? You could also say no, because we've done a lot.

    13. MT

      We think a lot about how you set up a team to be able to make new stuff, in addition to continuing to improve the stuff you have right now. And I think if we're going to be successful, the IDE is going to have to change a ton; what it looks like is going to have to change a ton going into the future. If you look around at the companies we respect, there are definitely examples of companies that have continued to ride the wave of many leapfrogs and kept actually pushing the frontier. But they're rare too; it's a hard thing to do. Part of that is just thinking about the thing and trying to reflect on it in our day-to-days, the first-principles side of things. Part of it is also trying to get in and study past examples of greatness here. That's something we think about a lot too.

    14. LR

      Yeah, what you just told me: before we started recording, you had all these books behind you and I asked, "What's that over there?" It was a history of some old computer company that was influential in a lot of ways that I'd never heard of. I think that says a lot about you: a lot of this innovation comes from studying the past, studying history, and what's worked and what hasn't. Okay, where can folks find you online if they want to reach out and maybe apply? You said there may be roles they may not even be aware of. Where do they go to find that? And how can listeners be useful to you?

Episode duration: 1:11:13


Transcript of episode En5cSXgGvZM
