Lenny's Podcast
Jason Droege: Why AI still needs someone to dig up the road
From Meta's $14B stake to expert networks of doctors, engineers, and PhDs: why production AI takes 6 to 12 months to get right, and how the Uber Eats lessons still apply.
EVERY SPOKEN WORD
155 min read · 30,625 words
- 0:00 – 6:01
Introduction to Jason Droege
- Lenny Rachitsky
There's been a lot of talk these days about AI not delivering on the promise that we hear, especially at enterprises.
- Jason Droege
These things take 6 to 12 months to get them truly robust enough where an important process can be automated. Like with any of these major tech revolutions, headlines tell one story, and then on the ground... Laying broadband means you need to dig up every single road in America to lay it. Someone's got to dig up the road or someone's got to run the undersea cable.
- Lenny Rachitsky
Is there anything you think people don't truly grasp or understand about where AI models are going to be in the next two, three years?
- Jason Droege
The general trend right now is going from models knowing things to models doing things. The next question becomes, what can it do for me? How does the agent make decisions for you?
- Lenny Rachitsky
Let's talk about scale and this whole world of AI that you're in. You essentially pioneered data labeling, training data, creating evals for labs.
- Jason Droege
18 months ago, you would get a short story and I would say, "Is this short story better than this short story?" And now you're at a point where one task is building an entire website by one of the world's best web developers, or it is explaining some very nuanced topic on cancer to a model. These tasks now take hours of time, and they require PhDs and professionals.
- Lenny Rachitsky
I've talked to a bunch of people that have worked with you over the years, and I heard a lot about just how high of a bar you set for new businesses.
- Jason Droege
From an entrepreneurship standpoint, it truly is about, what insight do I have? Why am I so lucky to have this insight? Why, in a world of a million entrepreneurs who are thinking, who are smart, who are trying everything, why am I in the position where I likely have an insight that others do not?
- Lenny Rachitsky
Today my guest is Jason Droege. Jason is the new CEO of Scale AI. This is the first interview that he's done since taking over for Alex Wang after the Meta deal. Alex now leads the superintelligence team at Meta. Prior to Scale, Jason co-founded a company with Travis Kalanick before he started Uber, worked at a couple startups. Most famously, Jason launched and led Uber Eats, which went from an idea that he and his team had to what is now a multi-billion dollar run rate business, and one that basically saved Uber during the pandemic when nobody was taking rides. This interview follows a theme that I've been tracking through a bunch of interviews, which is the evolution of how AI models actually get smarter. Along with scaling compute and improving the actual model code, many of the improvements we're seeing in ChatGPT and Claude and every frontier AI model come from these labs hiring experts to fill in gaps in their knowledge, correcting their understanding of how things work, and basically showing them what good looks like in every domain where consumers are using models. Scale was the pioneer in this space. They created the category. And in our conversation, we talk about what is happening at Scale and just how this deal with Meta worked, what experts like doctors and software engineers are specifically doing to help models get smarter, how the whole market of data labeling and evals and data training has changed from when Scale entered the market to today, and also just how long we will need humans to keep helping AI get smarter. We also get into where Jason sees models going in the next few years, because they have such a unique glimpse into the future. We also talk about a ton of really unique and really important product lessons from the course of Jason's career, including a bunch of advice on how to start a new business, both startups and within existing companies, and also a bunch of advice on hiring and leadership and so much more. 
A huge thank you to Allen Pen and Stephen Chow for suggesting topics for this conversation. If you enjoy this podcast, don't forget to subscribe and follow it in your favorite podcasting app or YouTube. It helps tremendously. And if you become an annual subscriber of my newsletter, you get a year free of 15 incredible products, including Lovable, Replit, Bolt, n8n, Linear, Superhuman, Descript, WhisperFlow, Gamma, Perplexity, Warp, Granola, Magic Patterns, Raycast, ChatPRD and Mobbin. Head on over to lennysnewsletter.com and click Product Pass. With that, I bring you Jason Droege. This episode is brought to you by Merge. Product leaders hate building integrations. They're messy, they're slow to build, they're a huge drain on your roadmap. And they're definitely not why you got into product in the first place. Lucky for you, Merge is obsessed with integrations. With a single API, B2B SaaS companies embed Merge into their product and ship 220 plus customer-facing integrations in weeks, not quarters. Think of Merge like Plaid, but for everything B2B SaaS. Companies like Mistral AI, Ramp and Drata use Merge to connect their customers' accounting, HR, ticketing, CRM and file storage systems to power everything from automatic onboarding to AI-ready data pipelines. Even better, Merge now supports the secure deployment of connectors to AI agents with a new product so that you can safely power AI workflows with real customer data. If your product needs customer data from dozens of systems, Merge is the fastest, safest way to get it. Book and attend a meeting at merge.dev/lenny and they'll send you a $50 Amazon gift card. That's merge.dev/lenny. This episode is brought to you by Figma, makers of Figma Make. When I was a PM at Airbnb, I still remember when Figma came out and how much it improved how we operated as a team. 
Suddenly, I could involve my whole team in the design process, give feedback on design concepts really quickly, and it just made the whole product development process so much more fun. But Figma never felt like it was for me. It was great for giving feedback on designs, but as a builder, I wanted to make stuff. That's why Figma built Figma Make. With just a few prompts, you can make any idea or design into a fully functional prototype or app that anyone can iterate on and validate with customers. Figma Make is a different kind of vibe coding tool. Because it's all in Figma, you can use your team's existing design building blocks, making it easy to create outputs that look good and feel real and are connected to how your team builds. Stop spending so much time telling people about your product vision, and instead show it to them. Make code-backed prototypes and apps fast with Figma Make. Check it out at figma.com/lenny.
- 6:01 – 10:27
Jason’s early career and lessons learned
- Lenny Rachitsky
Jason, thank you so much for being here and welcome to the podcast.
- Jason Droege
Yeah. Thanks for having me. Excited to be here.
- Lenny Rachitsky
As I was researching your background and prepping for this podcast, I learned a really interesting fun fact about you that I don't think a lot of people know. So, uh, Travis Kalanick, he had a startup before Uber. It was called Scour. It was a peer-to-peer file sharing app, and I think it got shut down. You were his co-founder. This was the early part of your career. I'm guessing there are hours of stories we could talk about during this experience, so let me just ask you this one question. What's just a lesson that has stuck with you from that experience that you've taken with you to future places you've worked and built product at?
- Jason Droege
I mean, there's so many lessons, how do I pick one? I think that, like, the main lesson, uh, is that in business and in startups, uh, everything's negotiable. (laughs) I think that's, like, the main thing because we were 19 at the time, 19, 20, um, at the time. We built this search engine in a dorm room, uh, and we were running it out of the dorm room and we... Our first URL was scour.cs.ucla.edu. And, you know, these things were, like, not necessarily infractions at the time, but we were just being practical. It was, it was basically a project that we had started. And so we built this search engine and, you know, people started using it and we thought we would get in trouble, but it turned out the computer science department was excited about it even though we had, like, basically parked a domain on their, you know, on their servers, um, and we were using our own computers in the dorms to serve up this website, uh, and product. And, and then, and then when we got into financing, uh, the, like, the financing process was fascinating. Our, um... And this is where the sort of everything is negotiable lesson came from, which is, uh, it was Ron Burkle and Mike Ovitz who were the initial investors in the business. We were in LA, so we were at UCLA, so we were not quite wired into the entire Sand Hill Road scene, and as we were doing the deal, the terms kept changing on us. You know, we thought you went and raised money, you know, and, and it was like, "Oh, we'll get a few million dollars at a $5 million valuation." This is back when, like, that was actually a series A, like, valuation. Um, and then over the course of the deal, it was like, "We're doing the deal. We're not doing the deal. Oh, you should give us 50% of the company. Oh, you should give us 75% of the company. Oh, if you wanna sign the document today, um, this person's gonna show up for breakfast, um, and if you don't sign today and give us 80% of the company, the person's not gonna show up." 
It was just, like, completely wild, uh, the things that we saw from day one of what can happen in business and we thought there was, like, a way to do things and at a very young age, we realized there is no way to do things. There is just the way that you can negotiate your way through the world, um, which I actually think influenced Travis heavily and then me later heavily at Uber in terms of if you can imagine it and it makes sense and you can align incentives, uh, then it can happen, but there is no, like, way. And to learn that at 19 or 20 years old, I think was highly imprinting.
- Lenny Rachitsky
That is an amazing lesson. What, what happened to Scour? It got shut down, I think. What happened there?
- Jason Droege
Well, yeah, so at the time... So basically what Scour was, was it was a multimedia search engine and then peer-to-peer file sharing network, but what it was used for was finding free content. (laughs) And, uh, at the time, it... Like, the laws around this were pretty ambiguous, uh, because we weren't... You know, like, mixtapes were legal, but this was like a hyper version of that. We were, we were eventually sued for a quarter of a trillion dollars. So I guess if you're gonna experience something that's potentially as life devastating as that, you know, doing it when you're... I think we were 21 or 22 at the time, is the time to do it, but it was just like this, like, very, um, uh, you know, cold splash of water about how the real world really works. Because the MPAA and the RIAA were the ones who sued us, the entertainment industry sued us, um, or the associations that represent the entertainment industry, and then they settled it for a million dollars. So we're like, "Wait, you wanted a quarter of a trillion dollars and then you settled for a million dollars?" And of course they were just trying to drive us into bankruptcy, drive us out of the market, like, and these are established companies, so we're like, "If these guys don't have a playbook to follow, they just kind of make up numbers, then wow, like, how should we navigate, like, the rest of our lives?"
- 10:27 – 12:37
The current state of Scale AI
- Lenny Rachitsky
Let's talk about Scale and this whole world of AI that you're in. This is the first interview that you're doing since taking over as CEO at Scale. I'm honored to have you here to talk through this stuff.
- Jason Droege
Yeah.
- Lenny Rachitsky
Uh, this is also the first interview you're doing since the whole Meta deal, which was very complicated, confused a lot of people. So, uh, I'm just curious to hear the current state of Scale, what people should know. For example, what's your relationship with Meta? What's your relationship with Alex? Uh, what is the current state of Scale?
- Jason Droege
Yeah. So Scale is a fully independent company. The transaction was, uh, uh... Meta invested a little bit over $14 billion to get 49% of the company in non-voting stock. Um, didn't, didn't take a new board seat. Alex fills the board seat. Um, so the board is the same. The governance is, um, largely the same. There's no preferential access to anything that Meta has. There's no preferential relationship. I mean, we've had a longstanding relationship with Meta, um, uh, on the data side of the business for a long time and even on some business development related things too. You know, maybe working on things in government together, et cetera, and so those might get bigger just as we're closer now, but there's nothing that prevents us from doing things with other parties, and they have no access to anything that they wouldn't have had otherwise. Like, all the privacy is still in place, all the data security is still in place, um, that, that was there before. And in fact, uh, only about 15 people went over in the transaction. So Scale has about 1,100 employees or so now, and we have two major businesses. Um, each of those businesses has hundreds of millions in revenue, so we kind of have two unicorns inside the company today. The business has grown every month since... the deal happened, which, I've read, the reporting is not consistently reported. We haven't talked about it, right? So this is part of getting the word out. And, you know, we're excited to continue to build, deliver data, and do what we did before.
- Lenny Rachitsky
Okay, so the company today, independent, its own company. Alex, just to be clear, he works at Meta now. He's no longer at Scale.
- Jason Droege
Yeah, yeah, yeah, yeah. That's right, excuse me. I, I should have re- I should have talked about that more, yeah.
- Lenny Rachitsky
Uh, I think that's really interesting. So basically it was an investment. Uh, some people left to join Meta. The company continues. You're running the ship.
- 12:37 – 17:02
The shift to expert data labeling
- Lenny Rachitsky
Let's talk about this whole space that you guys essentially pioneered. I don't know the best way to describe it: data labeling, training data, creating evals for labs. You guys were at this before anyone even knew this was a thing. I know even Scale pivoted into this market from other things. I think there was, like, a bunch of stuff they tried with self-driving cars and all these things, and then it's like, "Oh, shit. AI labs need this data." One of the main stories I've been hearing, and I've had a bunch of CEOs from this space on the podcast, is that there's been this big shift from what Scale had pioneered and had been doing for a long time, which is generalist, low-cost labor training, to now, where labs mostly need experts, lawyers, doctors, engineers, doing training, writing evals, things like that. I'm curious just what you're seeing, how that's impacting you guys, where you think things are heading, what people should know about this whole market of data training, data, uh, labeling.
- Jason Droege
Yeah. Yeah, yeah, totally. I think, I think the, the, the current, um, positioning out there th- from competitors is just bogus. Um, uh, you know, so I'll start with that, uh, and then maybe sit, talk a little bit about... Uh, I'll explain w- what I mean by that in a second, but I think it's important to just give 30 seconds on, like, what the history of Scale is and what's the thread going back to 2016. So Alex had this insight, um, in very early days that the important thing to models was data. And, um, I think he was 19 or 20 years old at the time as well, and so he's like, "Okay, well, what business would I create around this?" And the business that he created around it was, okay, let's do labeling for autonomous vehicles. Um, because if you label the data that, um, they have, the v- the cars do better, um, you know? And then that wave turned into the computer vision wave, which, uh, we have a relationship with the Department of Defense where we do labeling for them, and that was in 2020. And, and then you move forward, and the, and, and the models are- have gotten better over this period of time. And so the model- as models get better, they need different types of data, so we've constantly been adapting to the type of data that models need to be successful. And so then the gen AI wave hit, and this went through the moon, uh, you know, or to the moon. Um, and so as part of that, that industry is changing constantly, too. So we... I mean, it is correct that when the models came out two or three years ago, the mo- I mean, we remember using them. They would hallucinate all the time. They would get basic answers wrong. Um, they didn't know which poem was better, you know, this poem or that poem. And that was the state of labeling a couple years ago, and things have changed quickly, and we've changed with it. And now the state, um, for everyone, and we've been at the forefront of all of this, uh, is expert data labeling, um, more sophisticated, uh, uh, tasks. 
So to give you a sense of what the task was 18 months ago, um, I'd been here about 13 months. So I was interviewing, and I remember seeing it. Um, you would get a short story, and it would say, "Is this short story better than this short story?" And then you would edit it and be like, "Yeah, it'd be better if it was this." And you would give some preference ranking to it, right? It was pretty basic 18 months ago. And then you had s- and you had the rise of some experts, but the models were so far behind that they needed just even the basic stuff they needed. And now you're at a point where a task is... one task is building an entire website by one of the world's best web developers, or it is explaining, uh, some very nuanced topic on cancer to a model, and these tasks now take hours of time, and they require PhDs and professionals. So to give you a stat to back this up, 80% of the people, uh, that we have in our expert network have a bachelor's degree or greater, which is very contrary to some of the positioning that's out there and some of the understanding of this industry. Um, about 15% have a PhD or greater, and we have PhDs on the network earning, like, significant amounts of money doing labeling, contributing their expertise to these models. Um, so we've been doing expert data labeling ever since the models needed it. I mean, like, like, this game is keeping in touch with the researchers, knowing what they need, coming up with ideas internally. In some, in some ways, we drove this, because, um, we were seeing that the models were not sufficient in more expert ways, and so we would go to the model builders and say, "Hey, like, we noticed that this is a problem. If you would like to fix it, like, this cadre of experts can do that for you." So yeah, I... Like, the counter positioning is out there, but I think that's just what pe- what competitors say sometimes. Um, it... like, it has nothing to do with reality.
- 17:02 – 18:48
Challenges and strategies in finding experts
- Lenny Rachitsky
Okay, that was extremely interesting. So what I'm hearing is, yes, there's been a big shift to labs need more, uh, expert tr- uh, folks involved in training, labeling, writing evals. You guys are very aware of that and have evolved with that. One of the, I don't know, allegations, I guess, in the market is that it's hard to find these experts. Uh, so all these companies have their proprietary network of experts and how they find them. Is there anything you could share about just how you guys go about that? 'Cause that feels like the hardest part is finding these experts and keeping them from other companies.
- Jason Droege
They are hard to find. Um, you have to have many, many tactics, right? So we get... You know, as you would expect, there's not one way you do it. The largest way is that they refer each other. Um, because when you are enjoying what you're doing, um, and you are using your expertise to contribute to AI, which is pretty cool. Like, if you're a PhD on this, like, pretty specific topic and you're using a model and you're frustrated that, oh, it doesn't, it doesn't interact with me in the way that I want, this is a paid way to have an outlet for that and to make... hundreds or thousands of dollars doing that, and so a lot of times they refer each other. Um, we also have campus programs where we will literally go onto the campus and talk to the professors, talk to the students, you know, ask about, you know, who would like to do this type of work. Um, and then of course there's the more traditional scaled ways of, like, LinkedIn and, you know, places like that. But the best ones come from these sort of grassroots and referral networks, and the only way you get that is providing a great experience to these people because these people, like, they're doing it partly for money, but they're also doing it because they think that their contribution to the AI models, uh, is important and interesting and, and, and many times it, it solves a problem for them.
- 18:48 – 28:18
Reinforcement learning and AI environments
- Lenny Rachitsky
So, something that I've been seeing on Twitter (laughs) just this week as I was preparing for this is there's this, uh, The Information headline. This came out, and this mirrored something that Brendan from Workhorse said: that over time the entire economy is gonna move towards just reinforcement learning and everyone's just training AIs, basically the jobs that will be left. Thoughts on that? Is that where you think things are going? Is there another perspective?
- Jason Droege
Reinforcement learning is very important, um, and, and I think this is a broader, uh, comment about the move to environments. Um, there's these things called RL environments that effectively are sandboxes for AI agents to play in to accomplish a goal so that they can learn how to accomplish that goal. Um, we've been doing this for over a year. So for example, you have a Salesforce instance. How does an AI agent navigate that instance? That instance has data that it needs to recognize. It has configurations. You know, Salesforce is a highly configurable product, like, it has configurations it needs to understand how to navigate. You're, you're, you're asking the agent to do a business process that needs very high reliability and then the agent needs to know, hey, if, if, uh, I can't accomplish what I'm going to accomplish or I think if there's a low, low, low, uh, accuracy of what I'm about to accomplish, how do I, uh, pop it up to a human being for feedback so I can get guidance? All of those things need to be trained and there's no, uh, there's no alchemy to it. You just have to put the AI agent in an environment that, that represents what, uh, a human being would be doing. And you can imagine the number of environments in the world and the number of, uh, uh, goals within each environment is enormous. So the question is, and the, uh, research that we have done, um, over the past year to try to be a good partner to our model builders, our model builder customers, is, uh, how generalizable is each individual task or each individual environment? So if you imagine the world of, like, environments of, like, software systems, um, configurations, data types, sizes, user counts, complexities, it's like the permutations are endless. 
So what you need is you need a strategy that allows a lab to collect data, um, that is generalizable enough across a broad spectrum of use cases so that they don't have to collect 45 trillion combinations of what should the agent do in this particular situation. So sometimes the work and the data is highly generalizable and by, by generalizable I mean you have it accomplished, you know, in a simple way. Uh, the task might be, uh, uh, uh, find the most, uh, you know, find the meeting on my calendar, uh, for my, um, interview with Lenny. And the agent goes and it looks through all of my calendar and then it pops it out, right? A very simple example like that needs to be generalizable to any calendar search potentially or prote- or potentially any calendar action. And the more generalizable it is, the more valuable the data is. So our job is to provide the, the most valuable data to model builders that, uh, accomplishes, uh, the goal of making agents as useful as possible for their end users.
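The calendar example above can be made concrete with a minimal reset/step sketch of an RL environment. Everything here is purely illustrative, not Scale's actual tooling: the `CalendarEnv` class, the action names, and the confidence threshold are invented to show the shape of the idea, a sandbox with a goal, a reward, and an escalation path to a human when the agent is unsure.

```python
class CalendarEnv:
    """Toy sandbox: the agent must find a named meeting on a calendar."""

    def __init__(self, events, goal_title):
        self.events = events          # list of (title, time) tuples
        self.goal_title = goal_title  # the goal the agent is rewarded for
        self.cursor = 0

    def reset(self):
        # Start each episode at the top of the calendar.
        self.cursor = 0
        return self.events[self.cursor]

    def step(self, action):
        # action: "next" scans forward; "answer" commits to the event
        # under the cursor. Reward 1.0 only for the correct event.
        if action == "next":
            self.cursor = min(self.cursor + 1, len(self.events) - 1)
            return self.events[self.cursor], 0.0, False
        title, _ = self.events[self.cursor]
        reward = 1.0 if title == self.goal_title else 0.0
        return self.events[self.cursor], reward, True


def run_agent(env, confidence_threshold=0.8):
    """Scan the calendar; answer on a confident match, else escalate."""
    obs = env.reset()
    while True:
        title, _ = obs
        # Trivial stand-in for a learned policy's confidence estimate.
        confidence = 1.0 if env.goal_title in title else 0.0
        if confidence >= confidence_threshold:
            obs, reward, done = env.step("answer")
            return "answered", reward
        if env.cursor == len(env.events) - 1:
            # Low confidence and nothing left to scan: hand off to a
            # human for guidance -- the behavior that also has to be trained.
            return "escalated", 0.0
        obs, reward, done = env.step("next")


events = [("1:1 with PM", "9:00"), ("Interview with Lenny", "11:00")]
env = CalendarEnv(events, goal_title="Interview with Lenny")
print(run_agent(env))  # → ('answered', 1.0)
```

The point of the sketch is the reward signal: an agent run inside many such sandboxes, across many calendar configurations, is what lets the behavior generalize from "this one calendar search" to "any calendar action."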
- Lenny Rachitsky
I love that you've been sharing these examples of what this stuff is specifically that these people are doing, the data you're providing to labs. So just to mirror back a few of the examples you've shared, one is an engineer building a website, sharing the code essentially with the model and here's how I would do it. And is, in that example, is it just like here's the code or is it like s- a recording of them wa- building it?
- Jason Droege
Yeah. It's-
- Lenny Rachitsky
What is the data?
- Jason Droege
It could be, um, it could be both.
- Lenny Rachitsky
Mm-hmm.
- Jason Droege
Uh, so in some cases it's just the website and here's like an example and then they design it and what... Um, in some cases um, it needs to be annotated in such a way that's like I made this decision for this reason or this decision for that reason, um, or here's how I would think about it. So it depends on what the model builders are trying to accomplish and so it can get quite nuanced in terms of, uh, what they're trying to train on.
- Lenny Rachitsky
Got it.
- Jason Droege
So the, you know, it's not like here's a website and then it's great at doing websites. It's like here's a website, here's why I made this decision, here's why I didn't make this decision, or here's a broken website and here's why it's broken if they're trying to accomplish, I don't know, a debugging tool for a website builder or something like that.
- Lenny Rachitsky
And another example you shared is a short story where it's like here's one short story, here's another I imagine generated by a model and then it's like which is better and then how would you make it better? The other example you just shared is a Salesforce agent where it's like hey, uh, book a meeting with a prospect and then, uh, teach it how that happens. I love just how concrete these are 'cause it's like okay, I get it. This is the stuff that these companies do. Is there another, maybe one or two examples just to give people a sense of what, what this data looks like?
- Jason Droege
Absolutely. I can actually give you an example from, um... So, so we have two sides of our business. One, we supply data to model builders. We sell the data. And then the other is, is we actually do, um, solutions. We sell applications and services to healthcare systems, insurance systems, et cetera. Um, I actually think it would paint a, uh, more colorful picture if I gave you an example of one of those, because it involves data, but, but it involves the use of data, the manipulation of data for a very, very specific goal. And so one example there is, uh, we work with a healthcare system and they, uh, uh... Health systems have lots of problems. Um, this particular healthcare system has experts that, um, see very rare cases, uh, on a regular basis, so you go there only if no one else can figure out your problem, and there's a huge backlog. So, um, so there's a productivity element to this implementation too. So there's a huge backlog. Um, they wanna be able to see more patients, they wanna be able to provide better care, and they wanna reduce the number of revisits, because they wanna give the accurate diagnosis day one and what the treatment should be. Well, to do this today, um, without the help of AI, uh, the doctor really needs to read 200 to 300 pages of documentation, and it's, and it's rolled into one document but in different formats. And so if you're a doctor, how are you gonna read 200 or 300 pages of everything? So what they do is they do the best they can, right? They scan it, they ask a nurse to look at it, they ask, uh, maybe a more junior doctor to take a look at this case, 'cause they wanna treat the patient very well, like, like obviously this is, you know, this is why they became a doctor, and then they go into the room and they talk to the person and then they make a diagnosis. 
Well, we basically built a tool that will read that document for them and point out the top five to 10 things that they should take into consideration. Allergies that might not be obvious is, is, you know, is one example, where we actually picked up on an allergy that a patient had that would not have been obvious from reading the document, and, and that allergy actually would've had a conflict with the medication that they were gonna be prescribed, and so the AI tool basically pulled out this correlation that would've even been hard for a human being to do. To make this tool better and better, you get to a certain limit with the models off the shelf, and actually the people inside of this healthcare system have to do their own labeling. So we talk about labeling for model builders, but, um, we're actually starting to see the labeling move into enterprises, um, and into governments, because you can only get so far with off-the-shelf plus RAG plus some fine-tuning based on recorded data. One thing people often miss about these systems is, is we assume, because you hear these numbers of, like, oh, this bank ingests 200 petabytes of data a year or, you know, whatever fantastical number, um, what we miss is, is it the right data? Which of that data is useful to the models? And most of it is not useful. Uh, some of it is, but a lot of what we do when we're talking about knowledge work, when we're talking about making judgments, is human judgment based on synthesizing, like, how would this doctor in this case or how would this banker in this case make this decision, and, and how would they make the decision in the context of their overall enterprise? And that might be different bank to bank, healthcare system to healthcare system, because of the culture, the objectives, the incentives, et cetera. 
And so we're getting to the point now where we see that digitizing judgment, human judgment, true subject matter deep expertise is becoming a bottleneck that we're unblocking for our customers.
- Lenny Rachitsky
That's really interesting. It's, uh, it's like the spectrum went from just low-skill, generalist labor, to experts, to now, like, the specific expert at this one company who needs to do this work, this labeling.
- Jason Droege
Absolutely. I mean, I mean, there's this broad narrative, y- you know, we kind of have two narratives. We have, like, the AGI narrative, like everything's just gonna become AGI, and then there's the skeptics, which is like, you know, hey, this is all bunk. Like, this is a bubble, et cetera. And of course, my view is most things are kind of like, there's truth in between, and some parts of the extremes are probably correct. But, you know, the reality is, is that, uh, it's very hard to get, uh, mission critical use cases in agentic systems, where agents are talking to agents, to the level of accuracy that is necessary to accomplish a goal, and one of the main issues is, like, think about the problem of even understanding a document: a document that reads the exact same words in company A will have a different meaning and importance in company B. So how do you have a system that knows that? This has all gotta be built, um, uh, if you're gonna make good decisions.
- 28:18 – 31:21
The future of AI and human involvement
- LRLenny Rachitsky
This is a good segue to a question that is always on people's minds when they look at companies like yours and the other folks in the space: just how long do we need people to be doing this? At what point will AI be smart enough to do it themselves? I know your incentive is to say we'll never run out of people, 'cause that's aligned with your growth, but how should we think about why we need people in, I don't know, 10 years? How long do we need these experts telling AI things it doesn't know?
- JDJason Droege
First off, the history of data labeling is a history of new beginnings, right? Autonomous vehicles do not need as much data labeling as they did in the past. Scale is a company that believes that data will always be important. At the point at which you don't need external, human data in models, I think we've gotten to a level of advancement in the world that is almost unfathomable, because you're effectively saying that no new human skill and no new human knowledge is important enough to put into these models. That feels pretty far out there. So for a business like ours, we're constantly looking at how you build operations that can find the new needs and then work with the contributor network, we call the experts contributors, to unearth that data, to unearth that information. Sometimes it's new people. Sometimes within our existing base we find that existing people have expertise we didn't know about, that maybe wasn't useful to a model a year ago but now is. This is a constant progression of getting more and more data into these models. Yes, we are financially incentivized to believe that humans will always be in the loop, but that's not just a business belief. It's a personal belief: these systems need to work for us, and if these systems work for us, then we will need to be on the loop or in the loop on any of the decisions that these systems make.
As to the broader point around labor, which comes up around the white-collar apocalypse and these things, I'm definitely on the more practical side of this, possibly just 'cause of my nature, possibly because I see what's going on on the ground in these customers where supposedly this transformation is gonna happen in the next one to two years. I just think that it might happen; the space is moving super fast. But it's definitely not gonna happen in the next year. The idea that it happens in the next two years I think is very far-fetched, though nothing's impossible here. And long term, if you go back through, I don't know, Pessimist's Archive or these accounts that post how the radio was invented and then all of this will be eliminated: there will be change, but humans are very good at adapting. So I think what we're missing in all of the doom and gloom is human adaptability. We as people are highly adaptable, and I think the history of technology has shown that people are adaptable.
- LRLenny Rachitsky
I really like that takeaway. I'm an optimist as well, so I'm always looking for reasons to be optimistic. I wanna follow that
- 31:21 – 35:25
The role of evals
- LRLenny Rachitsky
thread, but before we get there, something very tactical I wanna ask about: evals seem to be coming up a lot, especially with companies in your space. I'm still learning a lot about just what this all is, especially in your market. How much of what your experts are providing is evals versus other types of data?
- JDJason Droege
Yeah, a lot of it's evals. And within enterprise customers and government customers, it's mostly evals, because somebody has got to establish the benchmark for what good looks like. That's the simple way to think about evals: what does good look like? And you have a comprehensive set of evals so that the system knows what good looks like. It's as simple as that.
- LRLenny Rachitsky
So in the case maybe of the healthcare example you shared, essentially this doctor would be sitting there looking at all these reports, creating evals that are like, this is what should be discovered in this report, in this record. Is that a way to think about it?
- JDJason Droege
Yeah, yeah. That's a very big part of it, which is: what does good look like?
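Jason's framing, "what does good look like," can be sketched as a tiny eval harness: golden cases pair an input with the points a domain expert says a good answer must cover, and a grader scores a model answer against that rubric. This is a minimal illustration only; the case, rubric contents, and function names are hypothetical, not Scale's actual format.

```python
# Hypothetical eval set: each case encodes "what good looks like" as a rubric.
EVALS = [
    {
        "input": "Summarize this patient record for the attending physician.",
        # Points an expert says any good answer must cover (illustrative).
        "must_mention": ["penicillin allergy", "current medication"],
    },
]

def grade(answer: str, case: dict) -> float:
    """Score a model answer as the fraction of required points it covers."""
    text = answer.lower()
    hits = sum(1 for point in case["must_mention"] if point in text)
    return hits / len(case["must_mention"])

score = grade("Note the penicillin allergy before reviewing current medication.", EVALS[0])
print(score)  # 1.0
```

A real eval suite would use many such cases and a far more careful grader (often an expert or a model judging free text), but the shape is the same: a benchmark of expert-defined "good," applied repeatedly as the system changes.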
- LRLenny Rachitsky
Awesome. Okay.
- JDJason Droege
I have to reduce things down to simple terms. (laughs)
- LRLenny Rachitsky
Yeah. It's interesting you say good versus correct. Is that a specific term you like to use, good versus just the correct answer?
- JDJason Droege
Well, I didn't intentionally use that word, but these are probabilistic systems. I can get into some nuance here about the right types of problems that AI is good at solving. If you have a human process that is like 10 or 20% accurate, or 10 or 20% liked, AI is awesome, because if it gets you to 50, 60, 70, 80% accurate, you're in the money, you're in the green, everybody's happy. Now the system then has to know: hey, for the remainder, how do I make sure that humans are involved in the decision-making? But from a net value-add standpoint, the humans are pumped in that scenario. If you have a human process, a workflow, that is 98% accurate and you expect an AI system to get you the remaining 2%, we're not totally there yet. So when I say "what does good look like": a lot of the things people are asking these systems to do, and asking us to build systems for, are making judgments on their behalf. Just like we would ask a human being, "Hey, what do you think we should do in this scenario?" What you're looking for is the best recommendation or course of action given the current information.
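The "humans handle the remainder" idea is often implemented as confidence-based routing: automate the cases the model is sure about and queue the rest for a person. A minimal sketch, with hypothetical names and a made-up threshold:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    recommendation: str
    confidence: float  # model's estimated confidence in [0, 1]

def route(decisions, threshold=0.8):
    """Split decisions into ones safe to automate and ones needing human review."""
    automated, needs_human = [], []
    for d in decisions:
        (automated if d.confidence >= threshold else needs_human).append(d)
    return automated, needs_human

batch = [
    Decision("a", "approve claim", 0.95),
    Decision("b", "deny claim", 0.55),   # low confidence: a person decides
    Decision("c", "approve claim", 0.88),
]
auto, human = route(batch)
print(len(auto), len(human))  # 2 1
```

The value-add Jason describes comes from the split itself: even if only the high-confidence slice is automated, the humans now spend their time only on the genuinely hard remainder.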
- LRLenny Rachitsky
To you and to people in your market this is so obvious, but I think a lot of people think about AI being trained on just: here's a bunch of data, check it out, learn everything you can from all of human history and the written record. What's wild is that basically people are sitting around teaching AI things it doesn't know, filling gaps. That's how AI is getting smarter now. There's no more real data for it to feed on. It's just: here's what I don't know, or here's what an expert found, you're wrong, I'm gonna teach you this. And the fact that it scales, and that's what's keeping models improving, is so mind-boggling.
- JDJason Droege
Yes, I agree. Like with any of these major tech revolutions, the headlines tell one story, and then on the ground, laying broadband means you need to dig up every single road in America to lay it. It's as simple as that. Someone's gotta dig up the road, or someone's gotta run the undersea cable. There's always some operational chiseling going on in all of these industries. And think about how magical these models are. If you've been in technology long enough, it blows my mind even today that they get the punctuation right consistently. That sounds almost daft to say at this point in the market, but if you were to go back three years and think about that from a technological standpoint, a lot of things that we think are trivial now are very sophisticated. The real answer is it's a combination of computational power, model improvement, and data, and all three are getting better at once.
- 35:25 – 41:43
What AI models will look like in the next few years
- LRLenny Rachitsky
Let's follow that thread. You've been at Scale for a long time, CEO for, you said, 13 months. I feel like you see a lot more about where things are heading, because you work with labs on things they haven't even announced yet. You see more than most people. And I know there's only so much you can share about what models and companies are doing, but is there anything you think people don't truly grasp or understand about where AI models are gonna be in the next two, three years?
- JDJason Droege
Look, there's so much talk. I think it depends on how much X or news you consume.
- LRLenny Rachitsky
(laughs)
- JDJason Droege
So I'll give you our perspective. The general trend right now is going from models knowing things to models doing things. We're pushing the boundaries of knowledge; the benchmarks that we put out, and that others put out, are showing that the knowledge these models have is getting quite robust. And then the next question becomes, well, what can it do for me? As soon as you get into that world, that's where the environments we were talking about start to come into play. How do you navigate a Salesforce instance? How do you navigate a healthcare system? How do you navigate even a weather app on your phone? How does the agent make decisions for you? We're just getting into the beginning of that. It'll be very interesting to see how quickly that happens, and I think that's where a lot of the speculation has a wide variance: because we're at the beginning of it, people take different trajectories on how that's gonna improve. If you take the most aggressive trajectory, which is, oh, it's actually gonna be quite easy to train on these things, then it's just a change-management exercise in the economy. Which, by the way, change-management exercises are not to be underestimated. (laughs) There are still people in the world without an email address. So the adoption curve then becomes a human and policy issue, not a technological issue. We're not there from the technology standpoint, but I do think in the next two to three years, if I take the bait and have to make a guess, the technology will get to a point where it will push the change management and policy makers to say, "Ooh, what do we do with this? 'Cause it's getting pretty close." 
That's probably two or three years away.
- LRLenny Rachitsky
There's been a lot of talk these days about AI not delivering on the promise that we hear, especially at enterprises. There's this MIT study that just showed there are all these pilots that people are excited about, and then they don't work and companies aren't adopting these tools. There's data showing engineers are not actually as productive with these tools; it actually slows them down sometimes. You work with a ton of companies implementing all kinds of AI. What are you seeing on the ground? What kind of gains are you seeing? Do you feel like it's overhyped, underhyped?
- JDJason Droege
There's a lot of hype out there, and our job is to actually build products that work, that deliver value for our customers, and figure out where the rubber hits the road. The healthcare example is one sophisticated workflow; we do others. Claims management for insurance companies, right? This is a financial decision that's happening, but it's an automatable process. Basically what happens is the POCs get to like 60 or 70% of the way there, and the human mind goes, "Oh, the rest is no big deal." But it's kind of like uptime in data centers, where every nine is an order of magnitude of investment in terms of reliability, backups, et cetera. One nine is basically a web server in a dorm room like we had at UCLA, and five nines is this crazy high bar, but it just seems like a very small movement. So you have a similar dynamic going on here. One of the reasons why the POCs have failed is that there's a denominator effect, because it's so easy to do: "Hey, I spun up a project. I spun up a project. I spun up a project." It's really easy for people to try. And the 95% number I think is a bit of clickbait, in a way. It tells the right story, but it is a little bit hyperbolic, because take the efforts that happen in a company where they actually get a quality partner like we are, or where they do it themselves with engineers who've worked with models before and who put in the time...
And I'm talking about months, not minutes like you see in these videos, to actually get legal approval, policy approval, regulatory approval, change management, an accuracy that everybody's comfortable with. If you actually do that, these things take 6 to 12 months to get them truly robust enough that an important process can be automated. So I think that's where the hype is right: when you do it, the impact is like, whoa, I never would have figured that out myself, and I'm one of the most educated doctors in the world, as an example. But the time to get there is just longer than what people are selling.
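The uptime analogy is easy to make concrete: each added nine of reliability cuts the allowed downtime by a factor of ten, so the jump from one nine to five nines spans four orders of magnitude even though it reads like a small change. A quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope: downtime budget per year at each level of "nines".
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(nines: int) -> float:
    """Minutes of downtime per year permitted at N nines (1 nine = 90% uptime)."""
    return MINUTES_PER_YEAR * 10 ** -nines

for n in range(1, 6):
    print(f"{n} nine(s): {allowed_downtime_minutes(n):,.2f} min/year")
# 1 nine  -> ~52,560 min (roughly 36.5 days)
# 5 nines -> ~5.26 min
```

The same shape applies to the AI accuracy discussion above: moving a workflow from 70% to 99%+ reliable is not a linear 30% more work, it's several multiplicative steps of engineering, evals, and review.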
- LRLenny Rachitsky
It's such a good point. Not only is it easier to try these things, it's that everyone's doing it, so everyone's feeling FOMO, like, "I gotta try these things. I gotta try all these prototyping tools, Cursor, all these things," just 'cause everyone's doing it, and then you rush into it and it doesn't actually work out.
- JDJason Droege
Yeah. Easy to learn, hard to master. That's- that's-
- LRLenny Rachitsky
Mm-hmm.
- JDJason Droege
Like, that's my summary.
- LRLenny Rachitsky
Yeah. This episode is brought to you by Mercury. I've been banking with Mercury for years, and honestly, I can't imagine banking any other way at this point. I switched from Chase and holy moly, what a difference. Sending wires, tracking spend, giving people on my team access to move money around, so freaking easy. Where most traditional banking websites and apps are clunky and hard to use, Mercury is meticulously designed to be an intuitive and simple experience, and Mercury brings all the ways that you use money into a single product, including credit cards, invoicing, bill pay, reimbursements for your teammates, and capital. Whether you're a funded tech startup looking for ways to pay contractors and earn yield on your idle cash or an agency that needs to invoice customers and keep them current or an e-commerce brand that needs to stay on top of cash flow and access capital, Mercury can be tailored to help your business perform at its highest level. See what over 200,000 entrepreneurs love about Mercury. Visit mercury.com to apply online in 10 minutes. Mercury is a fintech, not a bank. Banking services provided through Mercury's FDIC-insured partner banks. For more details, check out the show notes.
- 41:43 – 48:19
Building Uber Eats and understanding customer needs
- LRLenny Rachitsky
Okay, let's move on from AI. This could be an endless discussion about AI, but you've got a lot more lessons to teach us. You helped build Uber Eats, and you've had a couple startups in the past. We talked about Scour for a bit. I've talked to a bunch of people that have worked with you over the years, and I got a lot of really interesting insights into the stuff that you're extremely good at, so I'm just gonna go through a bunch of these. One is your obsession with being close to customers, talking to customers. I love this topic because it's something everybody thinks they are great at, and they feel like they completely understand how and why this is important. They all feel like, "I'm doing this. Don't worry about me. Everyone else is not doing this, but I am."
- JDJason Droege
(laughs)
- LRLenny Rachitsky
Talk about just what you think maybe people miss about how this looks when you're doing it well and just why this is so important.
- JDJason Droege
I probably fall in the category of what you just described, which is maybe, (laughs) you know, part of the hubris you need to start anything new. But I don't think it's a clean process. My process is I'm constantly questioning every single thing that I'm hearing at the beginning of anything. I don't take what a customer says literally, and there's been a lot talked about on this topic from a product management standpoint, in terms of, "Oh, don't do what they say. Do what they mean and look at the real problems," and all those things. The way that I look at it that might be additive to the discussion is I look at the underlying incentives of the customer, and the underlying incentives of customers are not always financial. Sometimes it's ego. Sometimes it's career growth, right? If you're selling enterprise software to someone that is an executive sponsor, as an example, that person needs to trust that you're gonna do a good job for them. How do you get them to jump with you on this big project? Well, that's part of the journey: not just the product, but what do they need to hear from us? What do we need to supply them? What do we need to do to actually unlock the opportunity to implement the product? So I think there's an incentives-alignment baseline. It's cliche, but show me the incentive and I'll show you the outcome. I think that's absolutely true, even when customers will tell you things. I'll give you an example; I've been out of the game for a while, so I can be open about it: Uber Eats. When we launched Uber Eats, in terms of being close to the customer, we actually couldn't get a restaurateur to help. I knew nothing about this industry.
So at Uber, my job was to figure out what other businesses we should get into, and we looked at a billion businesses, and food delivery was the one that we thought was most interesting, which turned out to be right, so good for us.
- LRLenny Rachitsky
Very right.
- JDJason Droege
And we couldn't get a restaurateur to help us understand their unit economics. They'd say, "Oh, it'd be this percentage or that percentage," or, "Why do you want to know?" Then we'd go to a different restaurateur and they would kind of explain it, but they were a little suspicious of, why are these Uber guys talking to me about how much my ham costs? So what we did is we ordered a bunch of food from these places, and we got a restaurant supplier to give us a base catalog, and we just matched up: how much does the ham weigh? How much does the cheese weigh? How much does the bread weigh? How many pieces of lettuce are on there? We tried to compose our own independent view of what's the ingredients cost versus what's the labor cost. Then we triangulated: what was our ground truth, what were we being told by restaurateurs, and what was the zeitgeist telling us about restaurant economics? If those things all overlapped, then we're like, okay, we have an insight about what to do here. And how does this relate to Uber Eats? Well, what we found is that a restaurant pays roughly 20% to 30% of every meal for ingredients, roughly 20% to 30% for labor, and roughly 10% for real estate, and so on down the chain. But the important part is: what's the value of incrementality? So we came in and we said, "We're gonna charge you 30% of the bill." And they were like, "Oh, my God. Is this Groupon all over again? This is way too high." We explained the economics to them, and they were like, "Okay, we'll give it a try, but this is way too high." And they were right. The real clearing prices are around 25%, but we weren't that far off.
So when you go to find product-market fit, or to be close to the customers, it's a combination of things. What's the most valuable thing? In a restaurateur's case: give me incremental demand. Because if you were to take a restaurant location and triple demand with the same labor, just scaling ingredients, you've got a 70% to 80% incremental gross margin product. Restaurateurs would hate it when we would say this, because it doesn't work out exactly like that in reality, but because we had that insight, we had confidence that we could go to market with: "We need to charge you this so that the delivery fee can be that, and if the delivery fee is that and we charge you this, then we think that consumers will adopt, and that's what you need to get your incremental demand, and then we can pay the driver this." So you fit this whole puzzle together without totally satisfying any individual's needs 100%. In the case of a marketplace, what you're satisfying is a clearing rate for them to participate in the market. So that's one example.
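The incrementality argument can be checked with quick arithmetic using the rough percentages Jason cites (ingredients 20-30%, labor 20-30%, real estate roughly 10%): on an incremental delivery order, only ingredients scale, so the marginal gross margin lands around 70-80%. A sketch with illustrative midpoint numbers (the exact figures are not from the conversation, just midpoints of the stated ranges):

```python
# Illustrative midpoints of the cost ranges from the conversation.
INGREDIENTS = 0.25  # ~20-30% of each meal; the only cost that scales per order
LABOR = 0.25        # ~20-30%; roughly fixed at existing staffing
REAL_ESTATE = 0.10  # ~10%; fixed

# Ordinary order: every cost bucket applies.
baseline_margin = 1 - (INGREDIENTS + LABOR + REAL_ESTATE)

# Incremental delivery order at the same location and staffing:
# only ingredients scale, so the marginal margin is much higher.
incremental_margin = 1 - INGREDIENTS

print(f"baseline ~{baseline_margin:.0%}, incremental ~{incremental_margin:.0%}")
# baseline ~40%, incremental ~75%
```

That gap between baseline and incremental margin is the room that made a 25-30% take rate plausible: the fee comes out of margin the restaurant would not have earned at all without the extra demand.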
- LRLenny Rachitsky
Yeah, I love this example. You figure out how to help them with something they don't even fully know themselves yet. You think through their goals for them, as if you were them, break down the economics, and then, here's the solution, versus, "Hey, what can we do for you guys?"
- JDJason Droege
Yeah. If you walked into a restaurant, they would tell you a bunch of things. They would say, "Oh, labor scheduling is an issue." They would say, "My rent is an issue." They would say, "My ingredients prices are an issue. That's 20% or 30%. If you could shave off 3% of that, that would be huge." You might then take that and go, "I'm gonna go build a business that's gonna save you 10% on your ingredients cost." But that doesn't actually get into their head on what's truly important day to day. That might be important for them on an annual basis, but on a daily basis, what are they doing? They're looking at their numbers. Did people show up? Did I make money yesterday? Am I gonna make money tomorrow? So, the urgency. I think the biggest thing people miss when they're building new products is the urgency of the buyer. You can build something that provides a lot of value, but if it's not the top thing that the customer is thinking about in their busy days, then you're just gonna have a long road to a small town.
- LRLenny Rachitsky
Hmm.
- 48:19 – 50:45
The importance of independent thinking
- LRLenny Rachitsky
This touches on a theme I heard a lot about you, this idea of independent thinking and how much you value it, and this feels like a really good example of that. Is there anything else along those lines, just why this way of thinking is so critical?
- JDJason Droege
I think a founder's job, and I stretch that term because at Uber we had all of the benefits of Uber, so I wasn't really a founder, I just sort of started the business there, but there are some elements of founding there, is to look for alpha in the market. When we started our first company in '97, it wasn't that cool. It might've been cool in Silicon Valley, but it was definitely not cool in LA. Now it's super cool to start a business, so as a result everyone's kind of trying everything. So how do you get alpha in that market? If your research is highly influenced by what the world is saying around you, you're not gonna have an independent insight. You kind of have to go off and do your own thing. This is why, from an entrepreneurship standpoint, I have very strong feelings about what the approach to founding a company should be, and it's probably very particular to me. But it truly is about: what insight do I have? Why am I so lucky to have this insight? Why, in a world of a million entrepreneurs who are thinking, who are smart, who are trying everything, why am I in the position where I likely have an insight that others do not? And then, why am I the one to do it? The answer might be, I'm in this narrow, far-flung place. The other answer might be, I am inherently a contrarian personality type, so I'm constantly looking for the thing that's true that people don't believe is true, which sometimes works. But then the second part is super important, which is: why do I want to work on this problem for five to 10 years? People get this wrong all the time. They go and talk to a customer and they go, "They have a problem. I'm gonna go solve it." And that's just not a great way to start a business.
You really have to have this burning desire to constantly be questioning yourself. The other thing about independent thinking is that you can't fall in love with your ideas. And I do not proclaim to be the world's greatest independent thinker, for what it's worth; this is just what you've been told. But part of it is basically throwing away who you are, who you've been, all your ideas, for the mission that you're on, which is trying to accomplish something for a customer.
- LRLenny Rachitsky
This is great. I'm glad you went
- 50:45 – 53:03
Setting high standards for new businesses
- LRLenny Rachitsky
here. This touches on the other theme I heard often about you, which is just how high of a bar you set for new businesses. I think this advice is useful both for founders, as you said, and also for people starting companies within companies, new business lines. You've talked about this a bit already, but is there anything more there, just how high that bar needs to be for it to likely work out when you're starting something new?
- JDJason Droege
Look, if you want to give yourself the best chance, and this isn't always how it works, but if you're in my position, 25-plus years into a career, I think there are two ways that companies end up working out. The first way, which is probably the most important, quite frankly, is that the founder is just a force of nature over a long duration of time, because you're gonna have to pivot. You have to have that energy to pivot. You have to go years and years and years with it being hard. That's probably the most important thing. But the second most important thing is that you can easily educate yourself on what are good business models, what are bad business models, what are good markets, what are bad markets. Even if you're this force of nature, if you're gonna go into a bad market with all your energy, you should at least know. Maybe ignorance is bliss, because you just throw yourself into it and it kind of works out with time, but that's not how I would operate. Marketplaces are good businesses. SaaS, at least historically, we'll see how this changes, but SaaS historically: great businesses. Recurring-revenue businesses, sticky businesses, network-effect businesses. And if you look at what the top VCs invest in, yes, there is a lot of portfolio building, but there are similarities in the types of business models that they believe could be worth tens of billions of dollars. They have network effects. They have lock-in. They are more valuable at big scale than at low scale. So if you just have a filtering mechanism on a new business, and this is what I did at Uber, it doesn't take that long to eliminate the bad ideas.
Of what's left, you can pick: oh, I'm very passionate about this, even though it might have more problems than this other thing that on paper looks better. And then you have to have passion about it. But I think people just miss a basic understanding of what businesses even have a chance of being worth-
- 53:03 – 57:07
Exploring and selecting business ideas
- LRLenny Rachitsky
Mm-hmm.
- JDJason Droege
... $100 billion?
- LRLenny Rachitsky
So you launched Uber Eats. You figured out this is the place to go and bet. As an outsider, it feels obvious. Of course this was gonna be a massive success. Of course food delivery, such a good idea. I know you looked at a ton of ideas in that process. Can you just talk about what you explored and why you ended up picking Uber Eats?
- JDJason Droege
I am definitely not the smartest person in the room when it comes to figuring these things out, so I keep a very, very wide aperture on ideas for as long as I can, until I'm like, "Okay, everything is coalescing." I think there's a bunch of reasons why you have to keep an open aperture on ideas that might seem bad at the start; you just keep digging and see if you're right that they're bad, or you're wrong. So just as a general philosophical principle, I'll start there. We did some crazy stuff. I went walking around San Francisco one day, and I looked down Market Street, and there was a CVS, a 7-Eleven, a CVS, a Walgreens, a 7-Eleven, and I'm like, "How many SKUs could possibly be inside one of these things that people want?" Couldn't you just put that into a van, and you hit the button and the van comes around and you get whatever convenience items you need? They're convenience items, so why would that be a problem? And we launched that in DC. We put like 10 of these trucks on the road. We put like 250 SKUs in them. And "crickets" is an understatement of how bad it was.
- LRLenny Rachitsky
(laughs)
- JDJason Droege
I mean, we couldn't get an order to save our lives. What we realized was that we hadn't really done the research on what convenience stores really were. If you didn't have cigarettes, beer, Slurpees, you didn't have the things that bring people in to sell all the other things. We didn't know anything about retail. We were clueless. So that's one idea. We looked at grocery, but honestly the unit economics just terrified me, all the pick-packing and everything like that. I think Instacart did a remarkably good job at getting the unit economics to a good spot on probably the hardest operational problem you could tackle. We did generalized delivery, point-to-point delivery, what's now... I forgot what Uber's product is called, but I think it's Uber Direct, where you have something that needs to go point to point in a city. That was kind of a flop from the beginning, because the truth is consumers don't really have this need, businesses sort of have this need, and in 2014, when we were doing this, no one had this need. We tried like 15 versions of all these things before we eventually just said, "Okay, the food delivery thing is just popping off on all signals. We can make the unit economics work. People seem to want it." It's a super cool problem, because we can enable independent restaurants with all these tools and allow them to compete with the big guys. We can take the real estate out of the equation, so you can have a real estate location that's non-prime, but if you have prime food, then you get to compete. So we're like, "Oh, this is a very interesting problem, and we can really help local economies."
- LRLenny Rachitsky
And if I remember correctly, this basically saved Uber during COVID.
- JDJason Droege
(laughs)
- LRLenny Rachitsky
Lyft didn't have something like this, and, um-
- JDJason Droege
Yeah.
- LRLenny Rachitsky
... how big is this business at this point? Anything you can share about how important this turned out to be for Uber?
- JDJason Droege
Yeah, of course. Well, we launched it in December of 2015 in Toronto, and within like two hours, we had done like $20,000 worth of sales. It was crazy how quickly we saw that it was the right idea. And the unit economics were good. I was at Uber for about six years, and it took us about a year and a half to figure this out. Four and a half years later, it was about $20 billion. So it was zero to $20 billion in four and a half years, which is pretty good. Uber was very good at scaling things. But it was a competitive market, you know? Others did well. We beat a lot of people. Some people beat us. And now I think it's pushing $80 billion, and that's been another four and a half years since I left. I think COVID turned it from 20... I left right before COVID, total coincidence... from 20 to 50 in like a year. So yeah, ride sharing went like this, and food delivery just went to Pluto.
- LRLenny Rachitsky
What luck. Well done.
- JDJason Droege
(laughs) Luck is part of the game.
- LRLenny Rachitsky
When a-
- JDJason Droege
That's the other thing that's important to realize. Luck is part of the game, so do not begrudge people for luck. This industry is hard. All these things we're doing are really, really hard. Luck is just part of the game.
- 57:07 – 1:00:13
The McDonald’s story
- LRLenny Rachitsky
Maybe speaking to that, maybe not: one of your colleagues, Stephen Chow, whose new company I'm an investor in, worked with you at Uber Eats for a long time-
- JDJason Droege
Yeah, yeah, love Stephen.
- LRLenny Rachitsky
... he told me to ask you about the McDonald's story. Uh-
- JDJason Droege
(laughs)
- LRLenny Rachitsky
... I imagine that was just a big milestone, a big moment for you guys. So why'd you decide to put McDonald's in Uber Eats? And there's apparently a story of how you won that deal.
- JDJason Droege
Yeah. So it was kinda interesting, and this just goes to how sometimes ignorance leads you accidentally to the right answer. We had launched Uber Eats, and Uber had a global footprint, and we were the only food delivery network with a global footprint, excluding China. Everything at Uber needed to be launched globally; that was a very big part of the culture. Which is a lot of work, and you can spread yourself too thin and cause other problems, but in this way it was good. And so my vision was, okay, let's help the little guy compete with all these chains. They have these systematized food systems, and food is what makes a city amazing. No one talks about the chain restaurant they visited in Paris. They talk about the local place that they found, and let's be part of that. That's who we wanna be. And so McDonald's actually approached us, and they said, "Hey, we'd love to do food delivery with you." And I said, "No." (laughs) And they're like, "Hold on a second. We have like 80 million consumers a day. You don't wanna do this together?" I'm like, "It's not really our vibe right now." And so I pushed them off for like four or five months, until my team is like, "You're insane. These people are gonna put marketing behind it. They really wanna do this. They wanna lean in." Because of that, I think, and it's hard to correlate these things, we ended up with this exclusive relationship with them and got an insane number of customers. Chains at this point actually weren't really on food delivery networks, 'cause everybody was so worried about the unit economics, 'cause they're so sensitive to the basket size. And my approach was like, "Eh, figure it out," right? Which is a very Uber culture thing. Okay, the basket's $17.
It's our job to make that work. Reduce the radius on the delivery, figure out the economics, maybe mark up some of the food in some places. There's always a way to figure it out. So we did it, and then three months later, the business just started hockey-sticking again at a different level. And my team is just like, "Dude, you are so stubborn on this point." But I think it (laughs) actually ended up being to our benefit, because we got a great deal with them.
- LRLenny Rachitsky
So the fact that you pushed them off helped you get a better deal, is what you're saying.
- JDJason Droege
Yeah.
- LRLenny Rachitsky
That's amazing.
- JDJason Droege
I think that's the story he would be referencing. And then the onboarding of it was crazy, because we basically went global with them in like six months, and at this point the business was less than two years old. So activating this, I don't even know, 80-year-old company that expects processes to be in place, and we have like two of our ops managers in New York managing it? (laughs) It was just mayhem.
- LRLenny Rachitsky
I'm still sad In-N-Out is still not on any of these apps.
- JDJason Droege
I'm... Yeah, me too. (laughs)
- LRLenny Rachitsky
I remember someone was hacking it. There's all these ways people found a way around, and they're like, "No, no, okay, you're Postmates. We know. We're not gonna give you any food."
- JDJason Droege
Yes. Love In-N-Out.
- 1:00:13 – 1:04:49
The role of gross margins in business feasibility
- LRLenny Rachitsky
You've touched on this idea of gross margins, and how obsessed you are with it. I wanted to spend a little time here. I've heard you're obsessed with understanding gross margins before going in on anything, and most founders have no idea what they're doing here. What have you learned about what people should be paying attention to, and what they might be forgetting, when they think about the feasibility of a business?
- JDJason Droege
Yeah. Look, it's one filter, like many filters. There are certainly businesses that have low gross margins that are great businesses: Costco, Walmart, et cetera. Amazon talks about this all the time, that there are companies that increase prices and companies that lower prices. But I would say that by and large, high gross margins combined with healthy churn curves are a very healthy sign for a business. I mean, think about it: if I were to sell you something and I can't mark it up a lot, how much value am I adding beyond what's in my hand? And if I'm not adding that much value, then what am I in the business of doing? I'm in the business of adding value. And it's not quite that simple, right? This is just a litmus test. Someone comes to me, especially in a new business, and we deal with this, I dealt with this at Uber, I've dealt with it everywhere, someone comes up with an idea and they go, "We can get into this business, and I think we can charge this and it will get us to a 40% gross margin." And then my next question is, "Start at a 60% gross margin. Why does that not work?" And they go, "Oh, well, the customer..." And immediately, you short-circuit to what the real problem is. "Oh, the customer has an alternative." "Okay. Well, who's the alternative?" "Oh, it's some offshoring company." "Well, what's their gross margin?" "Oh, we don't know." You go find out, it's like 20%, and they've been around for a long time and they have scaled operations. And you're like, okay, so your gross margin is gonna go from 40 to 20 quicker than you think, and you're gonna be in a world of hurt unless you do something to differentiate. So I take gross margin as just a very coarse instrument, not a perfect instrument, to think about: am I adding enough value? Am I differentiated?
It's not perfect, but it's a very quick short-circuit filter, even when someone's pitching you an idea, to see if they've thought through this dynamic. Because if the response is, "Yeah, gross margin's super low right now, but here's the dynamic I'm going after," then you're like, "Oh, okay." And sometimes it's, "We'll just make it up with volume, and the gross margin will go negative for a while," and you're like, "Wait, wait, wait, this doesn't work."
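The arithmetic behind this filter can be made concrete. A minimal sketch, with all numbers invented for illustration (the $100 price, $60 cost, and 20% incumbent margin are hypothetical, not figures from the conversation):

```python
def gross_margin(price, cost):
    """Gross margin as a fraction of price: (price - cost) / price."""
    return (price - cost) / price

# A startup plans to charge $100 for something that costs $60 to deliver.
planned = gross_margin(price=100, cost=60)  # 0.40, i.e. a 40% planned margin

# The customer's alternative: an incumbent running at roughly 20% gross margin.
incumbent = 0.20

# The filter: absent differentiation, margins compress toward the incumbent's,
# so the spread over the alternative is what your differentiation must defend.
spread = planned - incumbent

print(f"planned margin:   {planned:.0%}")
print(f"incumbent margin: {incumbent:.0%}")
print(f"margin at risk:   {spread:.0%}")
```

On these made-up numbers, half the planned margin is exposed to compression, which is exactly the "40 to 20 quicker than you think" scenario described above.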
- LRLenny Rachitsky
So what I love about this is it's just a lens into: is my idea good enough? Can I keep a high gross margin? Is there a reason why people in this space haven't been able to have a higher margin?
- JDJason Droege
Yeah, exactly. And like I said, it's meant to disqualify. You're doing this at larger companies, and everybody has ideas. And so you just-
- LRLenny Rachitsky
(laughs)
- JDJason Droege
... it's a way to cut through to: do you understand the machine that is going to need to be in place in two or three years? You might have a 70% gross margin now, but the next question is, why can't someone else do this? And if your answer is, well, they can now, but they can't in two years if we run really fast? Okay, we might have something. If they can now and they will be able to in two years, you're gonna have margin compression.
- LRLenny Rachitsky
Along these lines, I was just listening to, I think it was the a16z podcast. Alex Rampell, I think, was sharing this story about Costco, how, as you said, their strategy is actually to keep margins very, very low, because all their revenue comes from their membership. They have something like 50 million members paying 100 bucks a year, and that's their entire business. And so they don't plan to, and don't want to, make money off the products.
- JDJason Droege
Yeah, that's right. I mean, they're playing a slightly different game. I'm not an expert on Costco, though I have spent some time with the company, but they use price as a way to get to scale, right? They're basically saying, if we discount, same with Walmart, we will get so much volume that we will just take the air out of the room for all of our competition. And so then the question is: okay, if you have a low gross margin today, in two or three years, once you land one of these centers in a market, why won't your margins get eroded? The answer is, 'cause we will have already absorbed all of the demand. You try to go to 8% against their 10% gross margin, which is roughly what I think their gross margin is, and that's gonna be a really hard business if they already have a habit with the customer, customers have already built their weekly trips around them, they already have relationships with suppliers, they already have general managers that know how to stock inventory. That's not a straightforward exercise. So they are first to scale, and then good luck competing with them.
- LRLenny Rachitsky
Hmm.
- 1:04:49 – 1:09:12
Why Jason says, “Not losing is a precursor to winning”
- LRLenny Rachitsky
Okay, just a couple more questions. One is, there's this phrase I've heard you often say and believe in: this idea that not losing is a precursor to winning.
- JDJason Droege
Yes, yes. (laughs)
- LRLenny Rachitsky
Talk about that.
- JDJason Droege
Tech is a culture where portfolios are built by investors, and a lot of the narrative is controlled by investors, frankly. Founders obviously participate, but this idea that you should just go for it is consensus. Just go for it! Who cares? Well, I don't know. If it's my life and I only have one moment to take a shot, I might not wanna just go for it. I might wanna think for a little bit. And I think the best entrepreneurs, I have no data to back this up, this is just my friend group, I think the best entrepreneurs and the best business owners look at the risk profile of the decisions that they're making, and they try to make asymmetrically positive decisions all along the way. Oftentimes, I feel like we forget about the risk of a decision. And there is more to unpack there, because I actually think taking highly risky decisions and then having it work out is a weird cultural thing too, because then how do you train people to do that? It's a very hard thing to take high-risk decisions and be right enough, 'cause it creates a lot of volatility. But it goes back to my comment about the most important thing in founders, which is just this ability to persevere through it, right? Survival is just part of the game. And most people just give up before they get their timing right, before they get the right insight with the customer, before they get the right product in the market. Life can change quickly in tech. You can go from being a dog to being a hero in a very short period of time, but you're on this very, very long journey, and you have to survive for that condition to be met.
And so then the question is: when you're in a hype cycle, and I would argue that we are right now, everyone wants to go for it, and then go for it more, and more, and more, and you don't realize: guys, all of our customers are gonna be around in five years. They just want us to solve their problems. We have to be around to solve their problems for them. And so survival is a precursor to that. So let's not put ourselves in a position that could potentially compromise the enterprise along the way. It doesn't mean don't take risks, but think about how you calculate them.
- LRLenny Rachitsky
I love how clear it is that this lesson, and many of the lessons along these lines, have come from failure and things not working out and things breaking, which is the best-
- JDJason Droege
Yeah. You ever get on the other side of a high-reward, high-risk decision?
- LRLenny Rachitsky
Mm-hmm.
- JDJason Droege
It is so painful, because you are just cooked.
- LRLenny Rachitsky
Hmm.
- JDJason Droege
You are done. And often there's no way out.
- LRLenny Rachitsky
Is there a, is there a story along those lines that comes to mind, or an example of that?
- JDJason Droege
Well, this is where it knits together, why I try to be so deliberate. I think you can spend a little bit of time thinking up front to save yourself a lot of pain downstream. I had this business, not worth detailing, but after the bubble burst in 2001, I'm like, "I'm gonna self-fund a business. I'm gonna build a profitable business. I wanna prove that I can do this." We had started Scour, which had all the things we talked about. And so what I did is, I was a golfer, and frankly, there was nothing to do in tech. So I started selling used golf clubs on the internet-
- LRLenny Rachitsky
(laughs)
- JDJason Droege
... and I was making real money. And I might have learned more from this business than any other, because I started on eBay. I was 22, and I didn't really understand that my margins would come down, because anyone can do this, but I was one of the first ones to do it. So I was making a ton of money, and then I built this business, and then I just failed to recognize... I had a lot of hubris. I was like, "Oh, if I could just buy all the used golf clubs in America, I can be the market maker for prices." And don't people do that?
- LRLenny Rachitsky
I love this ambition. That's great.
- JDJason Droege
Yeah, and it's madness to actually think about the practicality of that. I just didn't spend the time thinking, and then I ended up in this business. The business was profitable. It got to a couple million in revenue, paid me a dividend for a while, but it was painful the entire way.
- LRLenny Rachitsky
I love the spectrum of experiences you've had. You've sold golf clubs, you're helping achieve AGI, you could say. There's also a whole part of your career we haven't talked about where you built tasers and body cams and drones and all these things. Also peer-to-peer file sharing (laughs) before anyone else.
- 1:09:12 – 1:12:11
Hiring and building teams
- LRLenny Rachitsky
Final topic I just wanna spend a little time on, based on this experience, is hiring and building teams. Something that I know you have a really strong take on, and that I've been hearing a lot on this podcast recently, is this idea that it's more important to build the right team than to find the most optimal top talent. Talk about that, why that's so interesting and important.
- JDJason Droege
As of late, I've developed a more nuanced view of this, which is: for certain roles, you absolutely need the right experience in this current market. You see this with researchers, right? Because the market's moving so fast, you don't have time to train people up, so you actually have to go find people who either have the right relationships with the customers that you want to get, or who might not check the classic boxes that I think you're referencing, like they're a problem solver, they can grow with the company, they have a high trajectory, et cetera, but are awesome at that one thing. I would say that that's like 5% of the roles in the company, but very important whenever speed to market is important. And then for interviewing, I just interview for three things, and I have to interview across all kinds of expertise, which is hard. I can't be an expert in everything. So I reduce it down to just three things: are you a curious problem-solver, and can you articulate that verbally? Can you work across people; are you humble enough to work across? And are you a good leader? If you just do those three things, I think you have a pretty high chance of success, at least in an organization that I'm running. Because the world's changing, right? So you do need people that are adaptable, and all the experience is not necessarily one-to-one relevant. And then, on the working-across-your-team point, this actually came up at Uber Eats. When I was building the Uber Eats management team, I'm not sure if this was mentioned to you from that group, but whenever I would hire people, I was trying to compose almost like an organism of strengths, and then minimize the conflicts.
That management team, for the most part, outside of some of the operations side, was the same management team from day one, when we had nothing, to $20 billion. And I just believed that the team knowing each other's strengths and weaknesses and being able to compensate for each other was more important than the classic advice you get, like, "Well, that person hasn't seen this much scale." And you're like, "Well, yeah, but can they learn it like I learned it?" So you do have to kind of believe in people a little bit, which is my job, not necessarily their job. And, you know, these are people systems. They're not straightforward rules-based things you can apply.
Episode duration: 1:24:01
Transcript of episode W99jdYZOlN0