The Twenty Minute VC

Surge CEO & Co-Founder, Edwin Chen: Scaling to $1BN+ in Revenue with NO Funding

Edwin Chen is the Founder and CEO of Surge. Founded in 2020, Surge has scaled to $1BN+ in revenue with zero external funding. At the same time, their competitor, Scale AI, raised over $1.3BN to reach $850M ARR. Today, Surge counts the world's largest model providers as customers and has just 120 employees.

In Today's Episode We Discuss:

00:00 Intro
01:05 Why 90% of Big Tech Is Wasting Time on Useless Problems
05:58 How Surge Kills Meetings and Still Moves 10x Faster
08:05 100x Engineers Are Real
13:51 Founding Surge AI
26:29 "No Sales Team, No PR, No BS"
38:54 The Real Reason AGI Might Take Until 2040
43:58 Why the Real Bottleneck in AI Isn't Compute or Models
49:58 Will Synthetic Data Kill Human Labelling?
56:15 The Price of a $10B Company?
58:43 Quick-Fire Round

Subscribe on Spotify: https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465
Follow Harry Stebbings on X: https://twitter.com/HarryStebbings
Follow 20VC on Instagram: https://www.instagram.com/20vchq
Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok
Visit our Website: https://www.20vc.com
Subscribe to our Newsletter: https://www.thetwentyminutevc.com/contact

Edwin Chen (guest) · Harry Stebbings (host)
Jul 21, 2025 · 1h 7m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00 – 1:05

    Intro

    1. EC

      I think a lot of the other companies in our space, they're just not technology companies at the end of the day. They are either body shops or they are body shops masquerading as technology companies. One of the things that we simply tell everybody when they first join: quality is the most important thing. Yeah, it's more important than anything else. (mouse clicks) I definitely wanna sell for 30 billion or even 100 billion. (bell dings) I mean, if you think about us as a company, I already have everything I want. Yeah, we're profitable. (bell dings) I have complete control of our destiny. And so I'm really lucky to already have all the resources I want to do anything that I want. (mouse clicks)

    2. HS

      Ready to go? (upbeat music) Edwin, dude, I'm so looking forward to this. I am, like, the biggest fan of your business from afar, which makes me feel incredibly weird because we haven't met before, which means I'm basically a stalker. But thank you for joining me.

    3. EC

      Yeah, thanks for having me. It's wonderful being here today.

    4. HS

      Now, the... I wanted to break the show into two different parts. The first part being kind of the story of this incredible rise, and then the second part really being assessing the future of data, data labeling, and taking a kind of more analytical approach.

  2. 1:05 – 5:58

    Why 90% of Big Tech Is Wasting Time on Useless Problems

    1. HS

      If we start on the story itself, pre the founding of Surge actually, you said to me that 90% of the people, while you were working at your Google, your Facebook, your Twitter, were working on useless problems. I thought that was-

    2. EC

      Yeah.

    3. HS

      ... a very interesting place to start. Why were they working on useless problems, and what did it teach you about efficiency seeing that?

    4. EC

      Yeah. So I think the biggest lesson for me was that you can build a completely different kind of company with 10% of the resources and 10% of the people, but you're still moving ten times faster and building a ten times better product. Like, imagine if you could just magically remove the 90% of people who aren't working on interesting problems. What would happen then? Well, if you have a company that's one-tenth the size, you don't need to hire as many people, so you spend less time interviewing, you spend less time in meetings, you spend less time giving people updates for the sake of updates. And if it's one-tenth the size, that means everybody has a better view of what's going on around the company because there isn't all this clutter masking the important stuff. And because the talent density is higher and the teams are smaller, that means the communication is a lot higher and the iteration speed is a lot higher, and better ideas just percolate around more quickly.

    5. HS

      Can I ask, like, prioritization is slightly ambiguous according to different people.

    6. EC

      Yeah.

    7. HS

      Everyone feels that their project is important and more-

    8. EC

      Yeah.

    9. HS

      ... important than someone else's. How do you determine priorities within a company and determine what matters versus what doesn't?

    10. EC

      Yeah, I mean, I think a big thing about being small is that when you're smaller, I and other people around the company just have a much better view into the customer problems themselves and what everybody's working on. At these bigger companies, a lot of your priorities, or the things that you're building, you're simply building to impress someone, like, "Hey, I need to impress my VP. I need to impress my manager. I need to impress my director so that I can get promoted." And you're not really building things or prioritizing things because they're good for the end customer or good for the end product. It's more like, "Okay, I have this priority to improve an internal tool." "Okay, why are you improving the internal tool?" "Well, it'll make people 5% more productive." "Why do we want them to be 5% more productive?" "Because they're spending 10, 20% of their time interviewing." "Why are they interviewing?" "Because they're growing for the sake of growing." And it just leads to this perpetual cycle where a lot of your priorities are just divorced from the end customer and the end product, and they're almost like priorities just for the sake of internal company machinery. So yeah, I think being small is very nice.

    11. HS

      What do you think no one knows about working within these big, incredibly hailed companies that they should know?

    12. EC

      I think one of the things that people don't realize from the outside is how much of what you're building is for this internal company machinery. And how much of the internal company machinery exists simply because a lot of people within these organizations, their goal isn't to build a product. Their goal is to tell their friends they're a VP of a 1,000-person org, and that sounds impressive. And so their goal is to think about, "Okay, how do I grow my org even faster? How do I find more teams that I can hire? How do I have these monthly performance reviews where, now that I've built this 1,000-person org, I need to prove to my VP, my CEO, that the 1,000-person org I'm building is efficient and useful?" And so basically a lot of the work that goes on in these large companies is simply to perpetuate and grow even further a lot of this very, very big company machinery that exists purely for internal reasons.

    13. HS

      When you're hiring, how do you distinguish between managers who like to brainstorm and tell their friends that they have 1,000-person orgs and are very powerful and very important, versus doers, those who execute work and complete tasks? How do you tell the two apart, and are there very clear differences?

    14. EC

      Yeah. I mean, I think a big part of it actually just boils down to the kinds of questions they ask me. Like, some people, when I interview them, will ask really interesting questions about our product. They will brainstorm about ideas to make our product even better. They'll be like, "Okay, yeah, I went to your webpage. Why don't you improve these things? I tried signing up as a worker. Why did these things happen in the flow? I tried working on this project. What if you guys did this instead?" And other people are like, "If I join, in a year, will I be able to be a manager at the company?" (laughs) "If I join, will I be able to hire 20 more people to support me?" And so it kind of just boils down, a lot of times, to the kinds of questions that people have at the forefront of their minds.

  3. 5:58 – 8:05

    How Surge Kills Meetings and Still Moves 10x Faster

    2. HS

      Can I ask you, in terms of meeting cadence, I'm sorry for being granular, and I told you we'd go off schedule, but I've had Tobi from Shopify on the show in the past, who's obviously advocated for no meetings.

    3. EC

      Yeah.

    4. HS

      Given the ability to spend lifetimes in meetings that are quite pointless, how do you approach meeting policy and what does and doesn't belong in the org?

    5. EC

      Yeah. So I'm a big fan of that. So, for example, I personally have no one-on-one meetings. And it's kind of funny, because oftentimes people will ask me, "Well, how often do you meet with your reports? How much time do you set aside for these meetings?" And I just don't have them at all. Oftentimes I will just give people my calendar, my Calendly, and they're surprised at how blank it is, because I try to avoid filling my day with meetings. Sometimes when people join they'll be like, "Okay, I need to go and have one-on-one meetings with these 10 other people that I'm collaborating with on a weekly basis," just because that's what you're used to when you come from Google or Facebook. And I tell them, "Why are you having these standing one-on-one weekly meetings? Did you not talk to them every day on Slack? Are you just unaware of what they're doing?" (laughs) It's almost like a negative sign if you're having a weekly one-on-one meeting, because it means that you just don't know what's going on with these people. You're almost waiting for your weekly meeting to raise interesting questions and raise interesting problems. And so I think we're pretty ruthless internally about killing meetings when they're unnecessary.

    6. HS

      We mentioned the efficiency of small teams. Before we dive into Surge, one of the hot topics of the day is a future where billion-dollar companies will be built by single people. Do you agree with that vision of the future, or do you think it's slightly over-dramatized?

    7. EC

      Yeah, I mean, I absolutely believe that that company will exist one day. Like, think about it: I've always believed in 10X engineers, even 100X engineers, and already you have a lot of these single-person startups that are doing 10 million in revenue. And so if AI is adding all this efficiency, then yeah, I can definitely see this multiplying 100X to get to this $1 billion single-person company.

  4. 8:05 – 13:51

    100x Engineers Are Real

    2. HS

      You can't drop 100X engineer without me diving in on it. We've been so focused for so many years on 10X engineers. What have been your biggest lessons on 100X engineers? Do they actually exist in reality? What are the signs? Talk to me about that.

    3. EC

      I mean, even today you see how we are honestly so much more efficient than some of our peer companies, right? And so even for that reason alone, you can already see that 10X engineers, or 100X engineers, exist. If you just break it down: some people are simply two to three times better, two to three times faster, than anybody else, right? They just code faster. There are some people who simply have two to three times better ideas. There are people who simply work two to three times as hard. There are people who have two to three times fewer meetings. There are people who simply have ideas that other people can't think of. And so you just multiply all these things together, right? And 100X is often actually an underestimate. Like, I know people who literally are five times more productive coders than anybody else. And now add in, you know, all the AI efficiencies that you get, and you just multiply all those things out, and yeah, you get to 100.

    4. HS

      Do you think AI turns 10X engineers into 100X engineers or average 1X engineers into 10X engineers?

    5. EC

      Maybe both today, but definitely even more so in the future. I tend to think of it like this: good people have so many ideas that they just don't have time to implement. And if you think of AI today as something that isn't necessarily coming up with the greatest ideas, although it can, it often just removes a lot of the drudgery of your day-to-day work, a lot of your day-to-day coding. And so if you don't have to spend that time on the drudgery, but you just have these endless ideas bouncing around your head and AI just helps you put them to paper, then I do think it disproportionately favors people who are already the 10X engineers.

    6. HS

      You mentioned the comparative efficiency in the landscape. Without naming names, a lot of people around you, so it's not naming names, but a lot have raised a lot of money to get to a smaller stage than you are. If I were to push you into a camp, is that a result of you being phenomenally efficient, where you deserve credit, or of them bluntly being incredibly mismanaged, where resource allocation has not been done well?

    7. EC

      I mean, I think it's both. I mean, I think a lot of the other companies in our space, they're just not technology companies at the end of the day. They are either body shops or they are body shops masquerading as technology companies. So I, I just don't think-

    8. HS

      What do you mean by body shops and body shops masquerading as technology companies? I get it, but a lot of people criticize the space with this and say, "Oh, it's just labor camps," or... So what do you mean by body shops, or body shops masquerading?

    9. EC

      A lot of companies in this space just don't have any technology. And when I think about technology, it's this: they don't have any way of measuring the quality of the data that they're producing, and they don't have any way of improving the quality of the data that they're producing. They are literally just body shops, in the sense that they sometimes literally have no technology at all. They don't have a platform where workers are doing work. And so what they're doing is simply finding people, recruiting warm bodies, looking at resumes; anybody with a PhD, they'll just instantly hire them, and then they're just passing them along to the AI companies, to the frontier labs. And so again, they have no technology, they have no way of measuring what any of these workers are doing, they have no way of knowing if they're doing a good job or not. So they have no way of doing things like, "Hey, what if I A/B tested this algorithm for improving quality? What if I changed this method of letting workers through? What if I tweaked the tools to change these questions around? Would it make the workers more efficient? Would it improve their quality, or would it actually make it worse?" They just have no way of doing these things, because at the end of the day, what they're passing to their customers is just the body itself, the person, as opposed to the data. And so what that means is they just have no technology to measure or improve anything.

    10. HS

      Do you think you have a fundamentally different business then? Because you're all lumped in the same category, but if they're passing along a warm body and you're passing along data... it's a phenomenally different product and it's monetized differently, no?

    11. EC

      Yep. Yep. Yeah, if I think about the way we think about it, it's maybe the following. We have always started out with quality of the data as our number one principle, and as a result, we need to build technology in order to measure that and improve that. If I think about what goes wrong, it's that people often just don't realize how difficult quality control is. People often think that humans are smart, and so if you just throw a bunch of humans at a problem, you'll get good data. And what we found is that that is completely untrue. Like, for example, I went to MIT, but I think half of the people who graduate with a CS degree can't even code. So it's a really challenging problem to detect high quality. And secondly, if you actually take the folks from MIT who can code, they're just going to try to cheat you. They're gonna sell their accounts to somebody in a third-world country. They're going to try to use LLMs to generate their data for you. They're gonna come up with all these crazy methods to cheat the system. So it's also this really, really challenging problem to detect low quality. It's actually really hard. And so what we found is that when you want to get the highest-quality data to train LLMs that are already, you know, super intelligent, you actually need to be able to build algorithms. You can't just take warm bodies or, you know, try to improve your methods for resume filtering, then throw people at the problem, and get good data and results out of it. The teams I know who tried this actually ended up moving ten times slower than anybody else without realizing it. So again, at the end of the day, I think it's all about the technology that we build to extract the highest-quality data possible, as opposed to just throwing warm bodies at a problem.
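
[Editor's note] The quality measurement Edwin describes can be made concrete with a small sketch. This is not Surge's actual system; it is a hypothetical illustration of one common technique, scoring annotators against "gold" tasks with known answers, with the function name and threshold invented for the example:

```python
from collections import defaultdict

def score_workers(labels, gold, min_accuracy=0.8):
    """Score each annotator by accuracy on gold (known-answer) tasks.

    labels: iterable of (worker_id, task_id, answer) tuples
    gold:   dict mapping task_id -> known correct answer
    Returns (scores, flagged): scores maps worker -> accuracy on gold
    tasks; flagged is the set of workers below min_accuracy.
    """
    hits, total = defaultdict(int), defaultdict(int)
    for worker, task, answer in labels:
        if task in gold:                      # only gold tasks are scored
            total[worker] += 1
            hits[worker] += (answer == gold[task])
    scores = {w: hits[w] / total[w] for w in total}
    flagged = {w for w, s in scores.items() if s < min_accuracy}
    return scores, flagged
```

Once each worker has a score, you can A/B test exactly the things he mentions, e.g. compare how many workers get flagged under two different onboarding flows or tool designs.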

  5. 13:51 – 26:29

    Founding Surge AI

    2. HS

      Okay. So we mentioned before that in your background you have, pre-Surge obviously, the hailed companies: the Googles, Facebooks, Twitters. And then you said there about the focus on data quality. Can you take me to the founding moment, when you left your last company and decided that you were gonna go all in on Surge?

    3. EC

      So I used to work as an ML engineer at a bunch of different companies, and the problem I just kept running into was that it kept being impossible to get the data we needed to train our models. I can give an example. I used to work on our search and ads systems at Twitter, and one of the first things I wanted to do was build a sentiment classifier. It's a super simple problem. All you need is 10,000 tweets labeled as positive or negative to train your models. But our human data system at the time was literally just two people we'd hired off of Craigslist working nine-to-five. So even just in order to get started, we had to wait a month. Then we had to wait another month for them to label the tweets inside a spreadsheet, because the tools that we had were just terrible. And when we finally got the data back, it was actually just complete junk. They didn't understand slang like, "She's such a bad bitch." They were actually labeling this negative when, you know, it's actually really positive. And they didn't understand hashtags and all these other aspects of the tweets. And so I actually ended up just spending a week labeling tweets myself, because that was so much faster and better. And at the same time, this was actually really simple stuff, but the bigger problem we wanted to solve was how do we optimize our ML systems for the right objectives, and how do we build feeds that are engaging in a positive way for users? Again, think about Twitter. This was the old days, when it was a purely chronological timeline, and so one of the things we wanted to do was just make it easier for our users to discover the tweets that they really cared about. And so the question was how do we train our recommendation algorithms. And the obvious choice was clicks and retweets.
      Like, you just train your algorithms to produce as many clicks and retweets as possible. But the problem is we tried doing these things, and it turns out to create this incredibly negative feedback loop. Once you optimize for clicks, the most clickbaity content starts rising up to the top. You get lots of racy content, lots of girls in bikinis, lots of listicles about ten horrifying skin diseases, and so on. And so we wanted to train all of our models on all these deeper principles instead, where we'd ask our human raters to label tweets and recommendations with product principles, like whether this was a top voice connecting somebody with their interests, or whether somebody just had this really interesting insight about their particular topic. But if we couldn't even get simple sentiment analysis right, labeling whether a tweet was positive or negative, we definitely couldn't get this more complex data at the quality and scale that we needed. So we basically started Surge in 2020, right after the launch of GPT-3, and I think it really is because you could see how much more the industry was moving towards, and if we really wanted to progress it in all these really, really big ways, we just needed a different kind of data to help the industry.
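
[Editor's note] The sentiment task Edwin describes, tweets labeled positive or negative, is classically solved with something like Naive Bayes. As a hedged sketch (not what Twitter actually ran; a toy bag-of-words model in pure Python), this shows why the model is the easy part and the labels are the bottleneck:

```python
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (text, label) pairs, e.g. labeled tweets."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return {"words": word_counts, "labels": label_counts, "vocab": vocab}

def predict(model, text):
    """Pick the label with the highest log-posterior (add-one smoothing)."""
    best, best_lp = None, float("-inf")
    n = sum(model["labels"].values())
    for label, count in model["labels"].items():
        lp = math.log(count / n)          # log prior
        total = sum(model["words"][label].values())
        for w in text.lower().split():
            lp += math.log((model["words"][label][w] + 1)
                           / (total + len(model["vocab"])))
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

The failure mode he mentions survives the sketch: if raters label slang like "bad bitch" as negative, those counts go straight into the model, and no amount of algorithmic tuning recovers the lost signal.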

    4. HS

      Okay. So you realized this data problem in 2020. You leave Twitter. What happens then? You go heads down into product build for several months. You go about recruiting the first team members. Can you just take me to the build? Is... I mean, 2020, dude. It's not that long ago. A billion in revenue and you started in 2020.

    5. EC

      Yes. So the way it worked was... I've always been a really big fan of MVPs, and so I just built the V1 myself in a couple of weeks. I think the really nice thing was, again, I had worked in this space for a really long time, so I already had a very clear vision of what I wanted to build. So as opposed to feeling like I needed to go out and hire ten engineers in order to build a product, or feeling like I needed to go out and fundraise, you know, 10, 20, $30 million in order to hire more people, I just wanted to build it myself, and I wanted to talk to customers myself, and so that's what I did. So I built the V1 in a couple of weeks. I posted about it on my blog. I told people about it that I met. And yeah, there actually was this giant demand for the data already, so I think we were very lucky early on.

    6. HS

      Okay. So you post it on a blog. You get some demand. You said there about the MVP and deciding, you know, you'd build that first and not raise money. The traditional thinking in the Valley is, "I need money 'cause I need money to build." Why do you think that's maybe wrong and how would you change or advise founders differently?

    7. EC

      I think one of the things that's always driven me crazy about Silicon Valley is that it really is just a status game for most people. People are just raising for the sake of raising. Their goal isn't to build some great product that solves a problem they fundamentally believe in. Their goal really is to tell all their friends that they raised $10 million and get a headline on TechCrunch. Like, I have a lot of friends who've worked at Google for 10 years. When they think about starting a company, they actually often tell me they don't even have a problem that they want to solve. They're kind of just bored and they want to try something new. And at the same time, they can definitely pay their own salaries for a couple of months, but the first thing they tell me is, yeah, they're going to go out and raise some money. And so they might try talking to some users and they might try building an MVP, but the only reason they do that is just to check off some checkbox on a YC application. And then what happens is they will just constantly pivot around random ideas until they land on something that gets a little bit of traction and sounds impressive to VCs. And so they spend all their time tweeting hot takes and networking and going to all these VC dinners, and it's all just so they can get this headline about raising $10 million. And I really think that people's first instinct should instead be to find some big idea that they fundamentally believe in, that could change the world. I don't really care why they believe in it. It could be because they have a lot of experience in the space. It could be because they've talked to a bunch of users. But it really has to be something that they believe in, that they'd double down on for the next few years. The thing about startups is, startups are all about big risks, right?
      Like, you have to believe in something enough that you're gonna take a risk building it. If all you're doing is jumping around from idea to idea every week until you land on something that gets you 1,000 retweets, you're not taking any risks. You're just somebody looking to make a quick buck.

    8. HS

      I have so many questions off the back of that. You mentioned loving the MVP and the ease of doing so. Given the tooling that we have today, the ease of building MVPs has never been greater. Do you think there's any excuse for going out to raise now without an MVP, given the Lovables and the Replits of the world, meaning it's just so much easier?

    9. EC

      Yeah. For 90% of companies, no. (laughs) Sure, there are some companies where you actually do need a lot of capital in order to build hardware, or whatever it is, for a couple of years. Like, you really need a lot of investment before you can get to your actual MVP. But for probably 90 to 95% of the products and startups that people are building, no. Just go out and build your MVP, and see if it gets any traction.

    10. HS

      You said about the inherent risks that you take on when you start a company, obviously. Do you believe in the advice that you should only pursue ideas that only you can do? In other words, the idea is specifically tailored to you and not everyone could solve that problem? Or do you think that's bullshit and it's actually about execution?

    11. EC

      I actually do believe in it. Again, if you think about a startup as a place where you can take big risks, where you can build something that nobody else can, and you're willing to just go all out to create something that literally nobody else could, it does have to be something unique to you. Sure, you can get to a decent medium-sized company with a commodity idea. But if you really wanna go big, if you really want to build a generational, foundational company, I think it really should be about an idea that is almost unique to you.

    12. HS

      You said about people maybe gaining value or self-worth in raising big amounts, going to conferences. That is how most people do gain self-worth. When you think about where you derive your own self-worth from, sorry to be personal, but given yours is clearly not that, how do you think about where you get self-worth, self-value from?

    13. EC

      I think it's kind of funny. If I think about the things that have made me happiest (laughs) in the past few years, I can think of two things off the top of my head. One is, with our customers, whenever they launch their next big model, one of the first things that they'll do is reach out to me and be like, "Hey, just wanted to send you a note that we couldn't have done this without you." And I think that's just so amazing to hear. How often do you get to play a role in building some of the most important technology of our time, and then right after they launch, these very, very top people who are very, very busy, one of their first thoughts is to thank you because of how critical you were to the operation? I just think that's so cool. (laughs) So that is one of the things I often think about. And then the other thing I often think about is, in many ways, Surge is an embodiment of me and my interests, and what I've always loved doing is analyzing data and figuring out how to use that data to make models better or to make products better. And so every now and then, when I get the chance to write an analysis myself of the latest frontier model, or I get to read someone's analysis that our internal employees are creating based off of the data that we're providing, I just think it's so cool that a lot of the data we're providing is so insightful, and it helps people build models in ways that they just wouldn't know how to do otherwise. So I think it's just really cool to help these insights emerge into the world.

    14. HS

      Can I ask, going back to that story then, so you built out the MVP, you post it, and then you said, "Luckily," you said it very nonchalantly, Edwin, which is very sweet, but, like, people came and people liked it. What did that look like? How did the initial demand come to you?

    15. EC

      Sorry, I think I say it nonchalantly because it felt very nonchalant. (laughs) What would end up happening is, I would find all these people who really were desperate (laughs) for a lot of really high-quality data. And the way it'd work is they would just email me with their request, or we would jump on a live meeting, and we would just get started. It might take a week or a couple of weeks to negotiate some sort of SOW or contract, just because, you know, a lot of this does have to live within the confines of their company. But, yeah, I think we were really lucky in that, again, I had a lot of experience in this space, and so I had a lot of experience working with ML engineers and research scientists in the ways that they wanted to get data and the ways they wanted to look at it. And so I think things just moved very, very, very quickly.

    16. HS

      And so, in the early days, everyone else is acquiring the supply side of talent, correct? All the other people that compete in the space. And you're not acquiring that talent supply, you're building product, correct?

    17. EC

      Um, I mean, it was both. Because, I mean, obviously we need a su- talent supply in order to make our product work. But, again, it was less about... So, so there are some companies in this space who will simply think of it as a pure supply problem, and they don't give any consideration to the technology, like what's the technology, the underlying technology. Like, how do you identify these people? How do you make sure that they're doing good work? How do you remove the bad quality work? Like, they're just literally not thinking about any of the technology aspects at all, and they're also not thinking about the product at all. Like, how do you present the data to the customers? Like, one of our principles, like one of the principles that I've always had, even prior to Surge when I was just an ML engineer or a data scientist, one of the things that I've always tried to encourage is what we call this visceral understanding of the data. Like, I really just want you to go in and get your hands dirty and look at the data. Like, historically, a lot of ML engineers, they kind of just don't take the time to look at the data, and maybe that's because the data just isn't all that interesting. Like, when all you're doing is drawing bounding boxes on cars, sure, (laughs) I don't need to look at 1,000 bounding boxes. But when what you're doing is, yeah, creating poetry, creating mathematical equations, creating new research, like, you want to get your hands dirty with the data to see what it is that you're producing when you're teaching your models. And so I, I think it actually really is important, this, this aspect of viscerally understanding the data that you're getting.

    18. HS

      Okay. And so we're there doing both, building product and acquiring the talent supply in unison. Fantastic. What did we end the first year at? Like, did we have immediate product market fit?

    19. EC

      Yeah. I mean, I, I think it was very, very obvious that there was just huge demand for this product, and, um, there, there, there was just so much more that we, that we could be doing.

  6. 26:2938:54

    “No Sales Team, No PR, No BS”

    1. EC

    2. HS

      So Edwin, when there's huge demand for your product, this is even more so the time when everyone goes, "Now raise money. Hire CS teams, hire sales teams, hire b-" Why did you not raise money then? I get it at the start when, hey, you didn't want to do what everyone else did. Why not raise money when it was a hair on fire problem and you had so many people calling you?

    3. EC

      I mean, I would say th- there was nothing that raising would help us with. Again, we were, again, we were very lucky to be profitable from month one, and so we didn't need the money. We didn't need a sales team. Like, I didn't, actually didn't want a sales team going out and selling our product. Like, I wanted people to buy us precisely because they understood the value of high quality data. They saw all the gains that our data was producing. I didn't want to buy... I didn't want them to buy us simply because they heard about us in some tech article, because that would almost put them at odds with the kind of product that we were building. Like, one of the things that I think is actually really important is that you want customers who, especially early on, you want customers who believe in your product, and not people who are simply giving you a little bit of money. Like, because your early customers will shape the kind of product that you're building. 'Cause yeah, you're building for them, you're, you're building for their needs. Like, they're giving a lot of really, really great feedback. And so you almost want customers who share the same overall vision, and so th- that was actually very important for us. Like, I, I didn't want (laughs) sales teams who would email 10,000 people and be like, "Hey, any, any thoughts on, on getting good data?" And i- it was just very, very counter to the kind of, kind of product that we wanted to build.

    4. HS

      How do you think about what you just said there in terms of building with your customers, being so close to them, letting you shape your product, but then also not doing the Henry Ford of building a faster horse, and then also not building a product that bluntly isn't relevant for a wider audience base, and you really just kind of tie yourself into a few small cl- well, few clients?

    5. EC

      So, I think this is where we actually have a really very strong vision of what our product should be. So again, like going back to what I said earlier about how most companies in this space actually don't have any conception of... I mean, m- both within our space, but maybe also at large, they don't have product principles that they try to adhere to. Again, like we had very strong product principles from the start. We wanted to focus on quality above all else. Like, if we ever thought that we couldn't give the quality that we, that we wanted, we would just say no. As opposed to these other companies where they're almost like desperate and racing around just trying to get any traction that they can to try to prove to their VCs that their numbers are always going up. They're almost like focused on getting $10, $100, $1,000, wherever they can. And so as soon as some customer comes to them, even if that customer is counter to the kind of product that they're building, if they're offering money, they'll just say, "Sure, I'll, I'll, I'll do it (laughs) just because that will give me another logo for my website. That will give me another case study to show another customer. That will give me another talking point with my VCs." Like, we just, uh, I think we're very lucky to not have to worry about that because we could build for the long term vision we had as opposed to, again, like as opposed to pivoting every few months. Like, we, we just wanted to double dow- double down on the idea that we actually believed in.

    6. HS

      Is there a time when you let quality slip in any area of the company? And with hindsight, what did you learn from that?

    7. EC

      I, I think we've, we've never let quality slip. I mean, it's so... it's such a principle ingrained to, into everybody at the company. Like, one of the things that we simply tell everybody when we first join, quality is the most important thing. It's more important than, uh... yeah, it's more important than anything else. If you have to make a deadline slip because for whatever reason you don't think the quality is there, if we have to say no to a project because we just can't handle it right now... I mean, uh, we can generally handle a lot of things, but for, like, we, we just want to ingrain this principle that it's okay to say no, it is okay to, um, kind of like let other things maybe slip, just because we, we care about quality at the end of the day.

    8. HS

      Most founders have a challenge where they need to hire now... but they haven't found the perfect person and so they hire a seven out of 10. They let the quality bar slip because they need someone in the role. How do you think about that and what would you advise them?

    9. EC

      Yeah, I think the funny thing is, like again, I've been at all of these other companies. Oftentimes, when people are saying like, "Yeah, I, my hair's on fire and I really need this engineer, so I know they don't meet the bar. I'm gonna lower the bar to hire them." Like, actually, the, the engineer they're ... Like, what are they doing? (laughs) They're building probably a feature that nobody cares about. They're building an internal tool to improve the productivity of everybody around the company by 2%, while at the same time, having so many meetings with them that they, like, take up 5% or 10% of their time just talking about the feature. Like, a lot of the things that people hire for just actually aren't all that important. And so again, like, when you don't feel like you have to hire for the sake of hiring, like when you have the mentality that, okay, if your company only grows by 10%, or even 0%, that's actually positive. Like, li- uh, I think people right now, they have this view that if someone were to tell you, "Oh, yeah, my, my engineering org only grew by 2% this year," your initial reaction is going to be, "Okay. You guy- you guys must not be doing well." Right? (laughs) And so there's, like, this negative incentive where people feel like they need to hire just in order to prove to other people that, that their business

    10. HS

      Do you think now we're in an opposite world to that, though, where you see the reduction in force from, say, a Microsoft, and you see better performance now from them on a revenue per head? Do you think now we're seeing the counterbalance of that, which is the desire to be the smallest team, the fastest team to X ARR, and the smallest team to it, and now revenue per head is the most important metric?

    11. EC

      I honestly don't pay enough attention to, (laughs) to, like, uh, these kinds of Silicon Valley Twitter discussions for me to have a sense of whether this mentality is, is becoming more p- pervasive. I can believe in it. I, I can hope for it. Uh, I don't know for sure right now.

    12. HS

      Do you worry that by not being so ingrained in social, you miss out on certain elements that are important to be in, or do you think that purity of mind that you get is really so valuable?

    13. EC

      It's kinda funny, because again, I used to work at Twitter (laughs), and I, I loved Twitter back in its heyday. But I actually really am glad that I'm not surrounded by default ways of Silicon Valley thinking. So every now and then, if something is important enough, like maybe there is some big new product that's actually r- really cool, or there's some really, really interesting new research paper, it'll be, like, big enough that even though I'm not monitoring Twitter every day, it will just reach me in some other way. Like, yeah, one of our employees will tweet, or, like, post it in our Slack channel, or somebody will email it to me. So, like, the really important stuff will manage to percolate itself to me, um, in, in other ways. But I actually am really glad that I'm not worrying about what people are saying about us on Twitter or-

    14. HS

      I love that, especially given the irony of being at Twitter for, for a number of years. Um, I, I do have to ask, so the first year ends, what do you end revenue at in the first year?

    15. EC

      Yeah, let's just say we've been, (laughs) we've been doing really, really well from the start, um.

    16. HS

      (laughs) Totally get it, and again, please, you, you said publicly about being at a billion in revenue now. Did it look like relatively even growth? Were there elements where it was much more accelerated than others? I'm just intrigued, and say whatever you feel comfortable doing-

    17. EC

      Yeah.

    18. HS

      ... terms of...

    19. EC

      Yeah. So we've always been very, very successful from, from literally month one. Things definitely hit an inflection point with ChatGPT because I think people just saw how incredibly valuable human data and RLHF was, so definitely ChatGPT was an inflection point for us. But even before that, we, we had very strong growth.

    20. HS

      Okay. Love that. So post-ChatGPT, you really see the inflection point. Another one that I guess is probably quite an important one is Scale obviously selling, and the movement of customers away. How did the world change for you with the Scale acquisition?

    21. EC

      So it's interesting, because I think it was an open secret where a lot of top researchers already knew who we were. Like, they already knew that we were the biggest and the best in the space, even though we've been pretty under the radar, and so most people were already working with us. And so, yeah, there were, there were a lot of teams who were using Scale for legacy reasons, or they just didn't happen to know about us, so we've been getting a lot of new interest from them too. I th- I think the more interesting thing has been, it's kinda been really fun seeing how we've opened their eyes to what really amazing, really high quality data can actually look like. Like, a lot of them have tried getting human data from other teams, and they tell us it's been this slog. They'll spend months trying to improve the data quality for really basic stuff, and it will look like it's better for a month, but then it will just quickly regress. And so, we have this concept where we just wanna get started immediately. We wanna show them really, really high quality data immediately, but then we also want to ... Like, one, one of the big concepts for us as a company is we always want to be producing data that is ... li- that you simply couldn't get anywhere else. Like, there's so much richness and complexity in the types of things that we do, that we just want to open up new avenues of research and open up new avenues of, uh, like, n- new types of products. And so, uh, I, I think a lot of these, th- these new companies, or these new teams who, who've been coming to us, they, they tell us it's just been a breath, breath of fresh air for them.

    22. HS

      W- I spoke to, uh, Garrett at Handshake, uh, r- right after the acquisition, and he said, like, "I'm just staying up all night. There is just a tidal wave of Scale customers moving to us." Did you have the same though, in terms of that tidal shift in customer demand shifting to you, as well as the realization that you mentioned there?

    23. EC

      Yep. I mean, so I would say I'm pretty sure that a lot of these other companies, they are, um ... Like, at the end of the day, people want high quality data, and they don't want to be working with body shops. And so... I think we've seen, like, a massive wave of interest, beca- because, like, yeah, like, th- this space is really large and there are a lot of teams who are still using Scale for legacy reasons. It's like, uh, at the end of the day, we're already the biggest and best in this space, and so even when there were teams at some of these larger companies who weren't working with us already, they, like, they kind of, like, knew who to turn to.

    24. HS

      Do you think everything has a price, Edwin?

    25. EC

      Uh, I mean, I think for some people, then, like, they have a price, but I- I think we don't. (laughs)

    26. HS

      You said you wouldn't sell to Zuck for $30 billion. Would you sell for $50 billion?

    27. EC

      No. I- I mean, I definitely wouldn't sell for 30 billion or even 100 billion. I mean, if you, if you think about us as a company, I- I already have everything I want. (laughs) We are... We're profitable. I have complete control of our destiny. And so I'm really lucky to have all the resources I want to already do anything that I want. And there aren't many companies who can say that.

    28. HS

      What are you doing this for? You're such a... D- dude, I interview... I've interviewed a thousand founders. In the nicest way, I've- I've almost never met a founder like you. And so I... In- in a nice way, it's really special. But with a pure mindset like you have, what are you doing it for then? To build a- a business that you can pass on to the next generations? To build a legacy? What is it for you?

    29. EC

      I mean, I- I think it really is to help achieve AGI. Like if you think about every, every... Like wha- what do, what do kids dream of? Like, yeah, y- when you're a kid, you literally dream of building AI that can do all these amazing things, and now we have the chance to do it. Like I- I really do think we are such a critical aspect of what all these companies are building. Like, again, a lot of our customers at these frontiers labs, they would just often tell me they wouldn't be able to build what they're building without us, and they're just amazed at what we do. And so being able to be this critical part of what is literally the- the greatest technology of both our time now, but also maybe one of the most important things we can ever build, that's- that's amazing. (laughs) And so why would you... why would you get acquired and stop doing that? Because, yeah, getting acquired would be really limiting. It would be this admission of failure and jumping ship because you can't make it on your own anymore, when we're the opposite, we're incredibly successful, and there's literally nothing else that I'd

  7. 38:5443:58

    The Real Reason AGI Might Take Until 2040

    1. EC

      want to do instead.

    2. HS

      It is 2040 and we still do not have AGI. What is the primary reason why that would be the case?

    3. EC

      So I think there are two reasons. One is that there will always need to be more breakthroughs, whether it's breakthroughs in, you know, how you leverage all this data, or breakthroughs in diff- different types of algorithms that you're, that you're, that you're building. And then another one is just how you gather that data. Like, at the end of the day, I think a lot of data will be... It's like in order to cure cancer, how will you gather the data that's needed to make those breakthroughs? Maybe you're going to have to run real-world experiments, real-world studies, and those studies will simply take time. And so is there... Will- will- will there be a way to speed up those experiments through various kinds of simulations or just other forms of gathering data? Um, I- I- I don't know. But there is... the- there's still the question of how, how do you get the data even faster? Which I, which I think will be very, very important.

    4. HS

      Speaking of kind of evolutions with AGI there, I do just want to ask on, like, the changing nature of data. How will the data needed evolve as AI gets smarter and smarter and smarter with each evolution?

    5. EC

      So, a lot of people talk about this shift to PhD-level data. And yeah, I- I think it's important. Like, yeah, it's actually really interesting how we... Like, we basically have the biggest group of the smartest people in the world working on a platform. Like, we actually have Harvard professors and Stanford PhD students and Princeton computer science theorists working on all these really interesting problems with us. It's kind of crazy, if you think of all the PhDs even at Google or Meta or Microsoft, we have way more than all of them combined doing work for us in a single day. And it's also true that they're not just writing random JavaScript code to improve ads, they're actually pushing the frontiers of science when they're collaborating with these models. But I think what people underestimate is that having a PhD isn't enough. Like, a lot of PhDs, they just aren't good at this type of work. Like, again, like I said before, there are a lot of body shops and recruiting shops in our space that basically just look whether you wrote down that you have a PhD on your resume, and they'll just instantly give you work if so. But a lot of PhDs just aren't very good. Like I- I think 80% of the computer science PhDs I know, they write shitty code because they're only good at math and algorithms. And then you think about people like Ernest Hemingway, he didn't have a PhD, I don't- I don't think he even went to college. And so I think there are two things that are important. Like again, there is this underestimated aspect of our space where you actually need a lot of technology in order to make sure that you're delivering really high-quality data. Like, I think it's a lot like how Vimeo has a lot of so-called high-quality videos, but yet they don't have any algorithms, and so YouTube's videos are way higher quality and more engaging in the end. And then the second is that it's just that a PhD isn't enough.
Just because you have a PhD doesn't mean that you can make some breakthrough in physics. What you also need is street smarts. Like, you need that creativity and the mental fortitude to think of really interesting problems, and find these problems and probe LLMs and see whether they can solve them today, and then teach them in really interesting ways. Because otherwise, if all you're kind of doing is throwing PhDs at this problem, all you're doing is teaching models how to hack silly benchmarks and get good at basically the equivalent of SAT problems.

    6. HS

      If that's the landscape though today, which is PhDs aren't enough, because a lot of PhDs aren't great quality, how does that change over time? Will you have a dramatically larger supply side? How will the tooling of the supply side change? How will our ability to turn around work change?

    7. EC

      Okay, yeah, so, again, I think this boils down to technolo- technology that we build. Like, over time, you're s- it's simply true that people are going to be trying to solve more and more problems. And so when you have, like us, like, we have hundreds of thousands, millions of people working on our platform. When you do that and you have a thousand projects, like 10,000 projects that are literally running in any given week... how do you make sure that you are building technology to identify who are the, the top 1%, top 2% of people who can really push the boundaries of physics problems with these models? Or how do you identify the top, again, like two or 3% of people who are writing the most amazing poetry? How do you find those people and then also how do you remove the, the worst of the worst, the people who will inevitably try to cheat you and spam you and they will basically regress the models if you allow their data through? Like it, it actually is a really, really profound problem and you just need a lot of technology to build this. And then at the same time, like these are researchers who wanna move really fast. Like researchers at all these frontier labs, they're, a- a- again, like all the algorithms are changing every day and so they want to learn... They want to try out new projects every single week. And so if you're not moving fast enough, like if you're unable to create a new template, or you're unable to find the expertise that you needed, like literally within the next day or the next week, it's just gonna be too slow for these researchers. And again, if you don't have the technology to manage these 10,000 projects and automatically create them and automatically identify the, the really high-quality data, it's, it's just gonna

  8. 43:5849:58

    Why the Real Bottleneck in AI Isn’t Compute or Models

    1. EC

      be too slow for them.
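The gatekeeping Chen describes, surfacing the top few percent of contributors and screening out spammers across thousands of concurrent projects, can be sketched in miniature. This is an illustrative toy, not Surge's actual system; the hidden "gold task" scoring approach and every name below are assumptions for the sake of the example.

```python
from collections import defaultdict

def score_contributors(responses, gold_answers, top_frac=0.02, spam_threshold=0.5):
    """Rank contributors by accuracy on hidden gold-standard tasks.

    responses    : iterable of (contributor_id, task_id, answer) tuples
    gold_answers : dict mapping task_id -> known-correct answer (a hidden
                   subset of the tasks, mixed in with real work)
    Returns (top contributors, flagged likely spammers).
    """
    correct = defaultdict(int)
    attempted = defaultdict(int)
    for contributor, task, answer in responses:
        if task in gold_answers:  # only the hidden gold tasks are scored
            attempted[contributor] += 1
            correct[contributor] += int(answer == gold_answers[task])

    accuracy = {c: correct[c] / attempted[c] for c in attempted}
    ranked = sorted(accuracy, key=accuracy.get, reverse=True)
    n_top = max(1, int(len(ranked) * top_frac))
    return ranked[:n_top], [c for c in ranked if accuracy[c] < spam_threshold]
```

In practice a real pipeline would also weigh inter-rater agreement, response timing, and per-domain skill, but the shape of the problem, score everyone continuously and route work to the proven top slice, is the same.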

    2. HS

      Speaking of slowness of data and quality of data, I'm sorry, I, I would love to push you on this. When you think about, like, bottlenecks to progress today, if I were to rank them one through three, one being the most pressing bottleneck and three being the least pressing, you've got access to compute, you've got algorithms, and you've got data quality. If you were to rank them one through three, how would you rank them?

    3. EC

      Yeah. So I would definitely rank data quality first, followed by compute, followed by the algorithms.

    4. HS

      If compute continues to prove to be the unlock, where throwing more compute at it unlocks more and more performance, does that denigrate da- data quality in the prioritization stack?

    5. EC

      I mean, I actually just fundamentally don't believe that you can throw more compute at a problem. Because if you're not getting the data that the compute is essentially trained on, or if you don't have the right objectives and evaluation metrics that, again, your, your compute is optimizing towards, you're just going to fall into this trap of seeing progress that actually isn't there. Like I, I, I can g- I can give you some examples. So l- l- let me talk about why I think data quality is, is such a problem. So I think data quality issues have already been a huge setback for a lot of frontier labs. Like one of the things that we often hear from teams over and over is that before they used us, they tried getting data in other ways. And so they'd train their models, they'd evaluate their models, and their metrics kept going up. But after six months or even a year, they realized that their training data was shit, their evaluation data was shit. And so all the progress that they thought they were seeing was actually completely misleading, and they either made no progress or their models after six months were even worse than when they started. Like for, for example, we see this a lot with LM Arena. So LM Arena is this popular leaderboard of LLMs and it's basically the equivalent of clickbait. What happens is that you have people going on to what's called a chatbot arena. They'll enter a prompt, they'll see two model responses, and then they'll vote on which one's better. But they're not taking the time to really read or evaluate the model responses at all. Like one of the models could have made... completely made everything up and these participants, they'll vote on it because it has emojis and nice formatting. We've literally seen this in the data ourselves. Like one response will just be a complete hallucination, but because it has an emoji and because there's a couple of words bolded, people were just like, "Okay, yeah, like that looks good. 
That looks much better than this other thing then... that I didn't take the time to fact check at all." And so one of the... I mean, one of the things that we've learned is that the easiest way to improve in this arena is simply to make your model responses a lot longer. Like one of the funny things is that if you actually take the top model on this leaderboard, the number one model, and you ask it, "When did the Pope die?" It will give you a really long response that seems impressive, but it gets the answer completely wrong. It tells you that Pope Francis is still alive. It will even tell you that there are search results that indicate that Pope Francis died in April, but actually these were just rumors and misinformation and he's still alive. Like it's wild that this model will say this. And so, yeah. So again, like what happens is that there are a lot of companies who are trying to improve their leaderboa- board rank. And so they'll see progress for six months because all they're doing is unwittingly making their model responses longer. They're adding more and more emojis. They're adding more and more formatting. And so they see their models climbing on this leaderboard and so they think they're making progress when all they're doing is training their models to produce better clickbait. And they may finally realize six months or a year later, but it means they've basically spent the past six months making zero progress. So again, like this is what happens when you kind of, like throw compute at the problem without understanding the underlying training data that you're, like, again, like throwing the compute towards. It actually just sets your models back.
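The leaderboard dynamic Chen describes, a model climbing purely because voters favor longer, nicer-formatted answers, can be illustrated with a toy Elo simulation (arena-style leaderboards are built on Elo-like pairwise ratings). This is not the arena's actual pipeline, and the 70% length-preference rate is an invented assumption purely for illustration.

```python
import random

def elo_update(r_a, r_b, a_wins, k=32):
    """One standard Elo update from a single pairwise vote."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    delta = k * ((1.0 if a_wins else 0.0) - expected_a)
    return r_a + delta, r_b - delta

# Invented assumption: voters pick the longer, emoji-heavy answer 70% of
# the time, regardless of which answer is factually correct.
random.seed(0)
verbose, concise = 1000.0, 1000.0
for _ in range(500):
    verbose, concise = elo_update(verbose, concise, a_wins=random.random() < 0.7)

# After 500 votes, the verbose model's rating has climbed well above the
# concise one's, even though nothing about its accuracy was ever measured.
```

The point of the sketch is that the rating system faithfully aggregates votes; if the votes themselves reward length and formatting, the leaderboard rewards length and formatting.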

    6. HS

      When you look at like Grok obviously announcing their recent developments and how they performed in the latest benchmarks and came out as number one, are those benchmarks bullshit then? Like how much weight should be placed on the importance of those benchmarks and how reflective are they truly of model quality?

    7. EC

      If you watched the Grok-4 launch, the, the Grok-4 livestream, I think you would even have heard Elon himself saying, like, "Yeah, these models are really good at..." I forget the word he used, but like they're really good at homework problems. They're really good at these academic, very narrowly scoped problems. It's, it's basically the equivalent of making them really good on SAT problems, but not making them good at problems that, that people are actually facing.

    8. HS

      Totally get you. Were you surprised by how far Elon has been able to get with Grok as fast as he has done or not?

    9. EC

      Like again, I think Elon has this... it's kind of funny. So before we worked with the, the team, I didn't really have a conception of what an Elon company was like. But yeah, I mean we, we work really closely with the, the xAI team and it's actually just incredibly refreshing to see how they operate. Like they are all very, very mission oriented and they're all incredibly smart and they work incredibly hard. Like it will be 11:00 PM at night and I'll, I'll DM them and someone will want to jump on a meeting and yeah, I jump on a meeting with them and I see them. They're in the office and they're... there's a ton of people behind them. So like they're all (laughs) they're just like crazy hacking together on, on all these problems. And so I actually think it's incredible and it's this...... kind of embodiment of what a startup can do when you really believe in something and are kind of, like, willing to do whatever it takes to achieve it, as opposed to living within the confines of this giant bureaucracy. So, I- I- I think it's actually really

    10. HS

      Is there anything that you think Elon does specifically to inspire his team to have that form of culture when they're not a small company?

    11. EC

      I think it's almost that you know what you're getting into when you, when you, when you work at Grok, or when you work at xAI, or any of these other companies. Like, you know when you interview that these people are incredibly mission-oriented. You know when you interview that everybody works super hard. You know that if you want to work there, you're gonna have to be the kind of person who has the same values. Otherwise, you just shouldn't, you just shouldn't join because you'll be miserable. And so, um, it's this, this fact that it has such a strong culture and such a strong belief in what they're doing, it just attracts people of sim- similar talent.

  9. 49:5856:15

    Will Synthetic Data Kill Human Labelling?

    1. EC

    2. HS

      B- before we- I do want to touch on kind of the working hours that you mentioned there, but everyone poses synthetic data as a, as a big threat. And what happens to your business when we have synthetic data that is obviously created, uh, automatically and labeled automatically? How do you think about the role of human-labeled data in a world of predominantly synthetic data? What's your thoughts there?

    3. EC

      So, I think synthetic data is actually really useful in some places, but I think people overestimate what it can do. So, I'll, I'll, I'll give a couple examples. So right now, there are a bunch of models that have been trained really heavily on synthetic data. But like I mentioned earlier, it means that they're only good at very academic, homework-style, benchmark-style problems. They're actually terrible at real-world use cases. So yeah, synthetic data, it's made models good at synthetic problems, not- not real ones. And we actually hear from a lot of companies who tell us they spent the past year training their models on synthetic data, but they've only now just realized all the problems that's caused. And so they've spent actually months throwing a lot of it out. Like, a lot of them tell us that even 1,000 or a couple thousand pieces of really high-quality human data that we generated for them has actually been worth more than 10 million pieces of synthetic data. And, uh, so a lot of the work that we do is simply cleaning up all this synthetic data. If you think about, like, why this happens, it essentially is because the models collapse onto this very, very narrow scope of, um, of, like, similarity that the synthetic data creates, and so it just doesn't give the models the kind of ge- the diversity and generalizability that they need. And then, then one other point is that there's also this interesting phenomenon where models simply make a lot of mistakes and have certain misunderstandings that humans never will. Like, I was actually just playing with one of the frontier models recently, and it kept on just randomly outputting Russian characters and Hindi characters in the middle of its responses. Like, this is a mistake that would be obvious to any human, to any second-grader, but the model just didn't know. And it's, like, shocking that a model in 2025, a frontier model in 2025, would do this. 
And so it's almost like you always need this external value system as a kind of safeguard to make sure that the models are working properly, just because the- the models themselves have such a different way of thinking.
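The collapse Edwin describes has a simple mechanical core: a generator retrained on its own most-typical outputs loses the tails of its distribution. Here is a toy sketch of that dynamic (my own illustration with made-up numbers, not Surge's pipeline or any real model) in which the diversity of a one-dimensional "dataset" shrinks generation after generation:

```python
import random
import statistics

random.seed(0)

# Generation 0: diverse "human" data.
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]
stds = [statistics.stdev(data)]

for _ in range(6):
    # The "model" fits the previous generation's data...
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    samples = [random.gauss(mu, sigma) for _ in range(10_000)]
    # ...but favours its own high-probability outputs: keep only the half
    # of its samples closest to the mean (the mode-seeking step), then
    # treat those as the next generation's training data.
    samples.sort(key=lambda x: abs(x - mu))
    data = samples[:5_000]
    stds.append(statistics.stdev(data))

print(stds[0], stds[-1])  # measured diversity shrinks every generation
```

Each pass discards the tails, so the spread falls sharply per generation; real model collapse is subtler, but the direction is the same, and it is why a few thousand genuinely diverse human data points can outweigh millions of synthetic ones.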

    4. HS

      I'm an investor in Poolside, which, uh, if you don't know, obviously is kind of in the, um, same space as, say, a Cursor or Windsurf. But they seemingly are much further behind because they've built their own models, and they believe very much in the power of verticalization of models and specific models, uh, or specialized models, so to speak. How do you think about the future in terms of monolithic, generalized, very large-scale models versus the requirement to have very narrow, very specialized models for things like code creation and development?

    5. EC

      So, I think there's an opportunity for both. And the reason I think that is, it's because on the one hand, you have these giant, all-powerful models. And sure, they can be really, really good and really, really powerful. And in like a raw capability sense, at least right now, I think they will be able to do what they'll be- they'll be able to encompass all of these different use cases. But in the same way that a company... So take a company like Google or Facebook. There are simply some products that they can't build, because building those products would be counter to, like, culture or the business goals of, like, the overall parent company. And so in the same way, sometimes you need to be able to move faster and to take big bets on certain kinds of products, and, like the, the all-powerful model just can't kind of let that happen, because if you let it happen within just, like, one small domain, it will kind of almost, like, pervade the entire model. So, sometimes we do need, like, the smaller models to, to break through if they have, like, a really unique view on how, on how they are operating.

    6. HS

      Can I ask you, Edwin, you are very composed as a leader, as a CEO. Now, really, it translates incredibly. Where are you not meeting the bar? Where are you not great and you are aware of it?

    7. EC

      So, I think one area where I'm not great, which is kind of funny, but, uh, one area where I'm not great is I'm really bad at understanding financials. (laughs) So, sometimes people around a company w- they'll try to tell me, like, "Hey, do you... Have you been paying attention to our revenue numbers? Have you been paying attention to our costs? Have you been paying attention to our margins? Li- do you even know what they are?" And I don't. (laughs) Like, uh, there are just, like, these financial metrics that, like, I- I- I could not tell you what EBITDA is. I mean, uh, I know the, I know what the acronym stands for, but the difference between that and revenue and profit and net margin and oppor- like, I actually just don't know any of these terms. And it's just, like, this blind spot. Like, no matter how much I try to try to understand these things, I just can never remember. Um, and so ...

    8. HS

      What single metric defines the health of the business to you? What metric, if I showed it to you every morning, you'd be like, "Okay, I know the state of my business."

    9. EC

      If I could paint my perfect north star, and this is something that I think we, we want to work towards, like, it's something that we actually, we actually want to build for an industry. But it's like... are models progressing in fundamental ways? Like, are they actually getting more intelligent? Like, are their capabilities improving, again, as opposed to simply climbing up a meaningless clickbait leaderboard? So are these models progressing, and then how much of that is kinda, like, due to us? Whether it's due to our training data, or whether it's due to the evaluations we provide, or whether it's we- due to the insights that we provide, um, provide all these researchers for, for ways that they can improve their models. Like, if there were a way to measure that, I would love it. (laughs) Um, I think the, the closest proxy we have for it today is just, like, the, the variety of projects that we're creating, because I think that one of the things I, again, one of the things I really, really believe in is we want to make it easy for all of these researchers to come up with new ideas and to not be blocked by data. So the more complex, the more diverse, the more creative projects that we can provide, like that, that, that is like almost a proxy for, for, for that overall north star.

  10. 56:15 – 58:43

    The Price of a $10B Company?

    1. EC

    2. HS

      Final one, and then we'll do a quick fire. But you mentioned Elon and X and the hard work in that culture being so ingrained. Um, I recently said that, bluntly, Silicon Valley and China have increased the intensity required to win in terms of work ethic. You must work seven days a week if you want to build a $10 billion plus company, and the ability to put your phone on the side and not check an email does not exist anymore if you want to build a $10 billion plus company. You've built a $10 billion plus company. Do you agree with me?

    3. EC

      So I would say I think you have to be willing to work hard. Like, you have to be willing to jump on a call at 2:00 AM with a customer. Like, I think one of the things that I love is that sometimes customers do call me. Well, they'll call me at 2:00 AM, 3:00 AM and they're like, "Hey, our models are freaking out. I need a bunch of data to fix it by 6:00 AM. Can you do it?" And, uh, maybe going back to the, the question of things that make me happy, like, nothing makes me happier than knowing (laughs) that, that yeah, we can, we can deliver this. Like yeah, we can deliver 10,000 data points to you in the next few hours even if you call us at 3:00 AM to fix some critical, critical bug, critical fire that you're facing. Like, that is actually something that, that, that makes me incredibly happy. And so I, I think you have to be willing to wi- work hard. I think a lot of people do confuse working hard with creating value. Like a- like again, it ha- it's a, uh, uh, I mean, it's, it's maybe a trope to say, but you have to work smart and not just hard. Like, if I think about a lot of what I'm doing, like oftentimes the, the best ideas come to me when I'm just walking around, not necessarily when I'm l- at my computer. And so I think we, I mean, I think we all work really hard, but I, I, I, I, I wouldn't confuse the number of hours y- we spend with actual progress.

    4. HS

      What trait of yourself do you love most or is your favorite trait, Edwin?

    5. EC

      So at least the thing that I really enjoy is I really enjoy it when there's, like this unique insight, and like I've, I've always really enjoyed writing down insights in written form, and I think I'm pretty good at it. And so this ability to, like deliver some novel insight about a model or deliver some novel insight about an algorithm or deliver some novel insight about, about a dataset and communicating that to, to our customers, uh, I think I'm pretty good at it, and, uh, yeah, it, it, it is something I really, really enjoy.

  11. 58:43 – 1:07:31

    Quick-Fire Round

    1. EC

    2. HS

      Uh, dude, I want to do a quick fire. So I say a short statement, you give me your immediate thoughts. Does that sound okay?

    3. EC

      Yeah, that sounds great.

    4. HS

      So what one widely held belief about AI do you think is completely wrong?

    5. EC

      So I think a lot of people think AI safety i- is overblown, but I think they ignore the paperclip maximizer problem, where you have AI models that are accidentally trained towards the wrong objectives. And this is a big problem that all the models face today with, with all the issues around LM Arena and benchmark hacking. So I actually think it's a really important problem that people should be thinking more about.

    6. HS

      So you think AI is much more dangerous than we let on?

    7. EC

      Both dangerous, but that it can be accidentally maximized towards the wrong objectives. Like today, uh, okay sure, if you maximize towards these LM Arena objectives or benchmark hacking, the worst that will happen is that your models re- regress a little bit. But then like the more fundamental problem is that people don't realize this, and so in the future when the models are more powerful, and yeah, you're basically accidentally maximizing AI models towards the wrong objectives and you just have no idea what will happen. Almost like a similar, similar phenomenon to what's happening today, but because the AI models are so much more powerful... Like yeah, they're literally building the code for an insurance company or they're literally building the code for, you know, some trillion dollar, uh, some trillion dollar company. It's just that the consequences can be much worse.
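The benchmark-hacking failure Edwin describes can be made concrete with a toy hill-climber (the scoring functions below are hypothetical and mine, not LM Arena's or any real leaderboard's): greedily optimizing a gameable proxy steadily improves the leaderboard number while the true objective degrades.

```python
import random

random.seed(1)

def proxy_score(answer):
    # The gameable leaderboard metric: it just counts one buzzword.
    return answer.count("benchmark")

def true_quality(answer):
    # The real objective: lexical variety, penalising repetition.
    return len(set(answer)) - 0.5 * (len(answer) - len(set(answer)))

WORDS = ["insightful", "benchmark", "benchmark", "clear", "correct"]
answer = ["clear", "correct", "insightful"]
initial_quality = true_quality(answer)

# Greedy hill-climbing on the proxy: accept any edit the benchmark rewards.
for _ in range(50):
    candidate = answer + [random.choice(WORDS)]
    if proxy_score(candidate) > proxy_score(answer):
        answer = candidate

# The proxy climbed while the answer degenerated into repetition.
print(proxy_score(answer), true_quality(answer))
```

The optimizer never sees `true_quality`, only the proxy, so every accepted step pads the answer with the rewarded buzzword; the danger Edwin points at is the same loop running with far more capable models and far higher stakes.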

    8. HS

      You mentioned about gaining true passion, love for building towards AGI. I hate myself for asking this question. It's a shit question. I hate it. I'm so embarrassed. But if you had to put a number, 2028 or 2038, which brand bra- bracket would it be in and why?

    9. EC

      So I think it would be 2028 if you're talking about automating the job of the average engineer, and then 2038 if you're talking about curing cancer.

    10. HS

      Sorry, 2028, automating the job of the average engineer? I had Vlad on the show from Robinhood, uh, today it went out, and he said 50% of code created by Robinhood is now by AI. Benioff said the same on the show, 50%. Are we not at that stage already? How much code from Surge is created with AI?

    11. EC

      I don't think we're at that stage yet. At least if you're, again, (laughs) if you're working on deeper problems that aren't just random features. Like again, if you're concentrating your company on the 10% of problems that are most important, I don't think models today can write 50% of the code... um, and, you know, come up with 50% of the ideas that- that are actually going to be meaningful to your company. Sure, if 90% of your company is writing little features that nobody cares about, or improving the- the efficiency of your code base by 1%, then yeah. But, uh, I- I- I don't think we're at that point if you're, if you're really working on meaningful problems.

    12. HS

      What question should every AI company be asking themselves but isn't?

    13. EC

      So if you're a frontier lab, the question is: are you actually improving your models and their overall intelligence, or are you just hacking benchmarks? If you're a product company, yeah, the question is: why won't the frontier labs be able to instantly replace you?

    14. HS

      Do you think they will? I don't ever worry about that in terms of application layer being absorbed by model layer, just because I think there is infinite product breadth that they could go after. They can't go after everything.

    15. EC

      I think they can't go after everything, but there are so many things where, yeah, you literally just want to chat with the model in its very simplistic universal interface that... Again, like think about Google Search. Like I actually do feel, I mean, I have felt that maybe 50% of the things I used to do with Google Search, they are replaced by ChatGPT, or they're even better with ChatGPT. And so, um, like there- there's a s- very pleasing aspect of a universal all-intelligent interface that I- I think people will just gravitate to.

    16. HS

      What would you do if you were Sundar today? Would you kill your golden goose with the ads engine?

    17. EC

      So the difficult problem, I think, for Google is they have to be willing to take a short-term hit to all their advertising revenue in order to build something better, and that- that's just really hard.

    18. HS

      Incredibly hard. Uh, final one for you, uh, actually penultimate one. What did you believe about the future of AI that you now no longer believe or have changed your mind on?

    19. EC

      Now I see a world where there actually will be multiple frontier AI companies, multiple frontier AGIs, just because every one of them will be able to go in a different direction. Like you see it already, you see it today playing out already with the differences and the- the strengths and weaknesses of OpenAI and Anthropic, and I- I- I just think that that trend will- will continue.

    20. HS

      What does that mean? Um, sorry, if you just play that out, what does that landscape look like then? Because OpenAI and Anthropic are- are so unique in their properties and characteristics. It means there will be 10 more of them? What does that look like?

    21. EC

      I don't know if there'll be 10 more of them, but I can certainly see even like three more of them, and I just think each one will have different trade-offs that they're willing to make, different focuses that they'll have. And, like t- even today, like Claude is really, really good at coding. Claude is really, really good I think at enterprise and like instruction following. Whereas ChatGPT is, yeah, it's like more optimized for consumer use cases, like I- I think it actually has a really, really great and fun personality right now. And then Grok, like, yeah, Grok is willing to (laughs) maybe answer certain questions that maybe- maybe it should, maybe it shouldn't, but it's willing to, uh, be a little bit transgressive in ways that I actually think are very, very interesting. And so I actually think that, um, just like this willingness to have different personalities and different boundaries and different focuses on your models, that- that just leads, um, that leads the models to be good at different use cases. Just in the same way that like, yeah, there's... L- like I think an analogy is, there isn't a single poet, there isn't a single mathematician that is like the- the- the greatest mathematician of all time. They all have different focuses. They all have different ways of approaching these problems, and I- I think that richness, uh, like what we often call like richness of human intelligence, that will apply to- to models as well.

    22. HS

      Have the bigger- biggest model providers been founded today?

    23. EC

      Don't think so yet. I- I can actually see big, new, even more powerful model developers appearing in the next few years.

    24. HS

      How s- h- how does that look? Because, like, when you think about like funding them, I- the capital intensity or capital requirements are so large, I don't know anyone who's willing to... All the big players in the financing world, bluntly, have already got their horses in terms of this race. How does that even work?

    25. EC

      I think it's because it depends on what you view the long-term vision for AGI to be. Like if you believe that we are still only... Like despite all of the immense progress that we've made, if you believe that there's only, we're only like f- I don't know, 1%, 5% of the way towards AGI, because yeah, we literally want AGI systems that can, in the future, cure cancer, and send rocket sh- rocket ships to Mars, and like design entirely new philosophical systems. Like these are big, massive problems, and... A- a- as opposed to simply automating away the job of the average L3 or L4 software engineer. And so if you believe that there's like... we're only, again, like only 2% or 5% of the way there, there's so much more headroom, it's like almost like asking, do you believe, you know, 10 years ago that Google was going to be the final search engine in the world? Like sure, if you (laughs) , if you're like only looking forward to the, like the next five years, moving towards the- the span in some sense of... Like if you just think of the immensity of what AGI could do, just like s- there's so much more ahead of us than behind us that there could be these serendipitous, um, like very, very creative breakthroughs that just nobody's expecting, in part because like maybe it's going to be created by- by some of the AIs themselves or AI in concert with humans. There's just so much- so much opportunity ahead of us that it would be almost amiss to- to think that we've already solved it.

    26. HS

      Do you believe AI will be able to create 10% increases in GDP gain or in productivity increases in the next 10 years? That's often kind of touted as the number which would create $10 trillion of value.

    27. EC

      Yeah. I- I absolutely believe it.

    28. HS

      Final one, Edwin. You can give yourself one piece of advice going back to day one starting the company. Going back to starting the MVP, what do you know now that you could tell yourself then?

    29. EC

      So I think it would be to focus always on the 10X improvements that you can make, as opposed to worrying about 10% realities.

    30. HS

      Edwin, listen, I so appreciate the time. As I said at the beginning, I've been such a fan of the incredible journey. You've been fantastic. It's been very atypical in most ways, bluntly, having this discussion, which has been so great for me. So thank you so much for joining me.

Episode duration: 1:07:41

Transcript of episode ziqsNe1sLHw