Lenny's Podcast
Garrett Lord: How Handshake feeds every frontier AI lab now
How expert trajectories from chemists, coders, and teachers feed frontier labs; Lord on post-training, audience as the only moat, and a new Handshake unit.
EVERY SPOKEN WORD
140 min read · 28,341 words
- 0:00 – 5:00
Introduction to Garrett Lord
- GLGarrett Lord
There will never be a time like this. I've never seen anything like it. I doubt I'll ever feel anything like this in business again, where there's unlimited demand. How do you make sure that three months from now, six months from now, you have, like, no regrets? Get on a plane to go talk to a customer. Make the late-night push. Check the data six times over again.
- LRLenny Rachitsky
Your company creates new data to continue advancing the intelligence of models. This is a business that you built on top of a business you've already had.
- GLGarrett Lord
We're the largest expert network in the world. We have this massive strategic advantage, which is like no customer acquisition costs. The only moat in human data is access to an audience.
- LRLenny Rachitsky
You guys come in after the model's trained to tweak the weights based on additional data that you create.
- GLGarrett Lord
The models have gotten so good that the generalists are no longer needed. What they really need is experts.
- LRLenny Rachitsky
There's this tension between all these students training models to become smarter, and then there's the, they will have a harder time potentially finding jobs.
- GLGarrett Lord
That's not what we're hearing from our employers. This is just enabling human beings to be even more productive. You used to put, like, Google Search as a skill on your resume 'cause you, like, grew up with Google. Being, like, AI-native, young people are at a huge advantage.
- LRLenny Rachitsky
Today my guest is Garrett Lord. Garrett is the co-founder and CEO of Handshake, which is one of the most interesting and incredible AI success stories that you probably haven't heard of. Handshake has been around for over 10 years. They're essentially LinkedIn for college students. It's a place for students to connect with companies to find a job. They are the platform of choice for every single Fortune 500 company, over 1,500 colleges, over 20 million students and alumni, and over one million companies use them to hire graduates. At the start of this year, Garrett and his team realized that their huge proprietary network of students, including tens of thousands of PhDs and master's students, is extremely valuable to AI labs to help them create and label high-quality training data. So, they launched a new business from zero to one in January. Four months later, they hit 50 million ARR. They're now on pace to blow past 100 million ARR within just 12 months. They'll exceed the revenue that they're making with their decade-old business in under two years. This is a truly incredible and rare story, and one that I think a lot of teams can learn from, because AI is creating a lot of opportunity, but also a lot of potential disruption. And this is an amazing story where the company basically disrupted themselves. This episode is packed with insights, including a primer on what the heck are people actually doing when they're labeling and creating data to train models? A huge thank you to Garrett for making time for this. His wife just had a baby this week. He's also in the middle of scaling this insane new business. So thank you, Garrett. If you enjoy this podcast, don't forget to subscribe and follow it in your favorite podcasting app or YouTube. Also, become an annual subscriber of my newsletter. 
You get a year free of a bunch of incredible products, including Lovable, Replit, Bolt, N8n, Linear, Superhuman, Descript, Whisperflow, Gamma, Perplexity, Warp, Granola, Magic Patterns, Raycast, ChatPRD, and Mobbing. Check it out at lennysnewsletter.com and click bundle. With that, I bring you Garrett Lord. This episode is brought to you by CodeRabbit, the AI code review platform transforming how engineering teams ship faster with AI without sacrificing code quality. Code reviews are critical, but time-consuming. CodeRabbit acts as your AI co-pilot, providing instant code review comments and potential impacts of every pull request. Beyond just flagging issues, CodeRabbit provides one-click fix suggestions and lets you define custom code quality rules using AST grep patterns, catching subtle issues that traditional static analysis tools might miss. CodeRabbit also provides free AI code reviews directly in the IDE. It's available in VSCode, Cursor, and Windsurf. CodeRabbit has so far reviewed more than 10 million PRs, installed on one million repositories, and is used by over 70,000 open source projects. Get CodeRabbit for free for an entire year at coderabbit.ai using code Lenny. That's coderabbit.ai. This episode is brought to you by Orkes, the company behind open source Conductor, the orchestration platform powering modern enterprise apps and agentic workflows. Legacy automation tools can't keep pace. Siloed low-code platforms, outdated process management, and disconnected API tooling fall short in today's event-driven, AI-powered agentic landscape. Orkes changes this. With Orkes Conductor, you gain an agentic orchestration layer that seamlessly connects humans, AI agents, APIs, microservices, and data pipelines in real time at enterprise scale. Visual and code-first development, built-in compliance, observability, and rock-solid reliability ensure workflows evolve dynamically with your needs. It's not just about automating tasks. 
It's orchestrating autonomous agents and complex workflows to deliver smarter outcomes faster. Whether modernizing legacy systems or scaling next-gen AI-driven apps, Orkes accelerates your journey from idea to production. Learn more and start building at orkes.io/lenny. That's O-R-K-E-S.io/lenny.
- 5:00 – 13:08
Understanding data labeling and its importance
- LRLenny Rachitsky
Garrett, thank you so much for being here. Welcome to the podcast.
- GLGarrett Lord
Yeah. Thanks for having me. Longtime subscriber.
- LRLenny Rachitsky
I appreciate that. Okay, so before we get into the insane trajectory that your data labeling business is on, which is just an amazing story that I think a lot of founders and product teams that are trying to navigate this AI disruption that's happening will have a lot to learn from, I want to first help people understand what the hell data labeling actually is, just like what are people actually doing? Why is this so valuable? Some of the most, uh, I don't know, fastest growing companies in the world today, including you guys, are just, are, are... th- this is what you do. Clearly there's something really important here. I sort of understand it, probably not really. I think a lot of listeners feel the same way. So let me just ask you this. What is data labeling actually like? What are people actually doing, and then just why is this so valuable to frontier AI labs?
- GLGarrett Lord
Yeah. So I, I think it's helpful to take like a step back of like what, what does training a model look like? So, there's really two primary functions. There's a pre-training and a post-training process in training a model. And for a long time, these AI providers or LLMs or frontier labs were focused on basically sucking up more and more information on the pre-training side of the house. And that's basically the entire corpus of, like, written human knowledge. So, that's not just written, but like, every YouTube video, every book. It basically ... You know, the pursuit of sucking up everything that was on the internet. And that was the pre-training side. And there was a lot of gains from pre-training. Like, models continue to get better. And about 18 months ago, 24 months ago, we started to really see, like, an asymptoting of gains coming from ... Because they had essentially, like, sucked up all of the knowledge on the internet. And so, labs really shifted towards most of the gains now coming from the post-training side of the house. And what post-training is, is it's augmenting the ... and improving the data they have across every discipline or capability area that they care about. So, take coding or mathematics or law or finance. You know, they are focused on collecting high quality data that really improves the state-of-the-art capabilities of their models. And you can see a lot of these popular benchmarks, um, on what are called model cards. You know, when LLaMA 4 is released, you'll see, like, the benchmarks across various domains. And each one of the research teams inside of the labs are ... have different use cases. Basically, they're running experiments, almost think of it like the scientific process. They have like a hypothesis around how to improve a model. They're trying to collect small pieces of data to see if that hypothesis works out. If that hypothesis is proving true, then they expand their overall collection of the data in that effort. 
Um, and it could en- ... It could look like reinforcement learning environments. It could look like trajectories. It could be audio and multimodal. It can be text-based, like prompt response pairs. Um, it can also be like reinforcement learning with human feedback, which is like, you know, preference ranking data. Um, and so that's the, that's the state of the art of models. And most of the gains that are happening for models right now are, are coming from the post-training side of the house. And there's just an in- ... uh, an incredible amount of demand to stay at the absolute frontier of where models are going.
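Concretely, the SFT (prompt-response) and preference-ranking formats Garrett lists can be sketched as records like these. The field names below are illustrative only, not any lab's actual schema:

```python
# Illustrative shapes of two common post-training data records.
# Field names are hypothetical; real lab schemas differ.

# SFT (supervised fine-tuning): a prompt paired with an expert-written response.
sft_record = {
    "prompt": "Explain why the reaction is exothermic.",
    "response": "The bonds formed in the products release more energy than ...",
}

# Preference ranking (used for RLHF): two candidate responses,
# with the expert's preferred one marked.
preference_record = {
    "prompt": "Explain why the reaction is exothermic.",
    "response_a": "...",
    "response_b": "...",
    "preferred": "a",  # the expert liked response A more
}

def validate_sft(record):
    """Basic quality gate: both fields present and non-empty."""
    return bool(record.get("prompt")) and bool(record.get("response"))
```

A real pipeline would layer many more quality checks on top of a gate like `validate_sft`; this only shows the shape of the data.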
- LRLenny Rachitsky
So, training, pre-training is feeding it, say, the entire internet. Here's like all the data that the humans have ever created. Uh, figure out knowledge and facts and how to reason and all these things. Post-training, is it correct to say there's essentially two buckets of things to do? There's reinforcement learning from human feedback, RLHF, and then there's kind of this bucket of fine-tuning?
- GLGarrett Lord
I mean, yeah. Yes and no. Because like, like take for example, like trajectories or like you want to be able to do ... People use flight search or like an accounting end-to-end process, or you want to be able to like conduct biological like experiments. Like, you need actual trajectory data. Like, you, you need to ... They're, they're still very much ... A lot of the labs are still ... They have points of view on what data to collect. It's evolving very quickly. But I think, you know, reinforcement learning is really like preference ranking, right? Like, which response do you like more, response A or response B? SFT data is like a prompt and a response, and obviously the labs are very focused on these like thinking or reasoning models. So, in order to improve a reasoning model, you'd actually have like the step-by-step instructions of which ... when you interact with a lot of these frontier models, they're ... the, you know, they struggle in very advanced domains. And so, you know, I, I think there's a variety of data types that, that they're working, you know, working with to improve capabilities in their models.
- LRLenny Rachitsky
What I'm hearing is there's other ways to, uh, post-train. Which of these are you guys focused on? Of these three-ish buckets, where do you help models most?
- GLGarrett Lord
Our like real unique proposition as a business is the fact that we, like, have an engaged audience. We have 18 million professionals, uh, across ... You know, we have 500,000 PhDs. We have three million master's students. We're a global platform. And so, you know, de- depending on kind of what you're looking for across any area of academic knowledge, you know, what is the definition of a PhD? It's essentially to like be at the ... How do you get your, how do you get your PhD? You're defending your thesis. Defending your thesis means, generally speaking, like you have proven that you have extended the, the world's knowledge in a particular domain. And so the ability to like hyper target this audience into chemistry, math, physics, biology, coding, and really touch parts of human knowledge that have never before made it to the internet is really where we, we excel. And I would say that when you talk about the labeling market, something to, to make it more abstract is like, it used to be generalists work. Like, a lot of the market, before the models started to get better, was leveraging talented international lower cost labor to do basic generalist tasks. But really what's happened is the models have gotten so good that the generalists are no longer needed. Like, what they really need is experts, experts across every area that the models are focused on. And, and really you could think about these model, model builders as they're focused on like the most economically valuable capability areas in the economy, right? And so that, generally speaking right now, is focused on, you know, advanced STEM domains, advanced science and math domains, and then the kind of derivative functions of like accounting, law, medicine, finance, uh, where they want to make the models more capable. Um, and then the work that we're doing, I think to come full circle to your question, like we're doing work across, uh, across so many domains. 
I mean, we have, we have millions of bachelor students that are being used for work in like audio, work in customizing a model depending on the voice and tone, where you are geographically in the country, uh, what do women versus men prefer, uh, all the way to the most advanced PhD STEM domains out there.
- LRLenny Rachitsky
Okay. So is it fair to say, essentially, all the data that is available has been trained on, and your company pro- creates new data, new knowledge to continue advancing the intelligence of models?
- GLGarrett Lord
Yeah.
- LRLenny Rachitsky
Okay.
- GLGarrett Lord
And I'll also say, we help point out where the models are weak.
- LRLenny Rachitsky
Mm-hmm.
- GLGarrett Lord
So in order to break a model, you know, it's pretty tough for the average person to break a model and get an incorrect response.
- LRLenny Rachitsky
Mm-hmm.
- GLGarrett Lord
But if you're a PhD in physics, like you can go in, in multiple kind of subdomains of physics and prove where the model is actually breaking, either breaking in its reasoning steps or in its ground-truth answer, or when we start throwing tools in there, or when it needs to, you know, follow some step-by-step process. And it's, it's, it's, uh, I wouldn't say it's easy for them, but the average person cannot break the models. Uh, and that's where we really come in.
- LRLenny Rachitsky
So essentially, it's just like catching mistakes that the model has made. Um, okay.
- 13:08 – 15:35
The role of experts in AI model training
- LRLenny Rachitsky
So, what are these people actually doing? What does it... I know there's all kinds of different types. You described all the ways that data is generated, what kind of data is useful. So maybe just like the most common examples, like what... say a PhD person is sitting there doing stuff, what are they actually doing?
- GLGarrett Lord
A great example is a public paper called like GPQA.
- LRLenny Rachitsky
Okay.
- GLGarrett Lord
So for the engineers out there that want to read about it, like essentially the, the crux of the paper is you break the model, you provide a ground truth, the right answer to the question, you provide the step-by-step reasoning steps. So, you know, you might imagine like because models are non-deterministic, like the model can get the answer right once, but it might not get the answer right, you know, three out of five times. So you actually prove where the model is failing. You'll actually break down into like, where is it failing? You know, maybe it knows the question and can get the right answer, but the actual steps to get there are wrong, and they're really focused on like the steps to get there. Say there's like ten steps in a math problem, right, like step six through ten is wrong. And so like how do you fix the actual steps? Um, and, uh, what are they doing? So they're going in, we put them, you know, we really focused on, uh, branding the experience and treating people like experts. Like PhD students expect to be treated different than lower cost international labor with a different work expectation. And so these PhDs come into a community. We have an instructional design team and an assessments team that's going through and basically iteratively helping them understand how to use the tools that we built and how to interact with the latest models. Then they go in and start actually creating data. And that, you know, that process is, on our side, the model builders, they want to know that the data we're producing is high quality. So we have our own research team, our own post-training team. I hired a, a gentleman from Meta that worked on, uh, the post-training over there named-
- LRLenny Rachitsky
I hope you paid him well.
- GLGarrett Lord
Yeah. So war for AI talent is-
- LRLenny Rachitsky
(laughs)
- GLGarrett Lord
... uh, very expensive.
- LRLenny Rachitsky
Right.
- GLGarrett Lord
I'm super, super privileged a- and proud to be working with him. And so, you know, each unit of data, you know, we have to build an environment for them to actually create the data, then we have to understand at a, at a, you know, unit level, we're trying to approximate the actual gain from that piece of data and whether it can improve in a particular capability area. Uh, and then we're also focused on, you know, e- evolving the use cases to also follow what the model builders want, which is, they want more, th- they, they want more real world tool use and trajectory-based data as well.
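The "three out of five times" reliability idea from the GPQA discussion can be sketched as a pass-rate measurement. The `flaky_model` stub below is hypothetical and stands in for a real model API call:

```python
import random

def pass_rate(model, question, ground_truth, n_samples=5):
    """Sample a non-deterministic model several times and measure how
    often it reaches the ground-truth answer."""
    correct = sum(model(question) == ground_truth for _ in range(n_samples))
    return correct / n_samples

# Stub standing in for a real (non-deterministic) model call.
def flaky_model(question):
    return "42" if random.random() < 0.6 else "41"

random.seed(0)
rate = pass_rate(flaky_model, "What is 6 * 7?", "42")
# A question is a useful "model breaker" when the rate falls below 1.0:
breaks_model = rate < 1.0
```

In practice labs also grade the intermediate reasoning steps, not just the final answer; this sketch only covers the final-answer pass rate.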
- LRLenny Rachitsky
Okay. There's so much here.
- 15:35 – 24:17
The future of AI and human collaboration
- LRLenny Rachitsky
And like we could go infinitely down here, but I think that this is really interesting because just like people hear so much about all of this and they barely understand what the hell it actually is. So this is, for me, really interesting, and I think it's going to help a lot of people. So essentially, a PhD, say a biologist, biology PhD is just, their job is find flaws in what, say ChatGPT is producing, and then come up with here's the correct answer, and that is used to fine tune the model. Here's like, here's something you're doing c- incorrectly, here's the correct answer, and that improves the model. Is that a-
- GLGarrett Lord
Yeah.
- LRLenny Rachitsky
... simple way of thinking about it? And please correct anything I'm saying that is incorrect because I don't want people to misunderstand anything.
- GLGarrett Lord
Yeah. I mean, like, um, a great example, let's take like a non-verifiable domain like-
- LRLenny Rachitsky
Mm-hmm.
- GLGarrett Lord
... education. So there's like a PhD student, uh, Rachel, on the network. She got her PhD from the University of Miami, spent two decades as a teacher teaching students in the eighth grade, and she was an adjunct professor at a local community college, uh, in the field of education. And so she is interacting with the state-of-the-art models in educational design, so actually trying to understand what is the best way to teach people. And like, how do you frame the... how do you, how do you spot incorrect issues in a model in the way that they're like training people, and help the models understand the forefront of educational design with the hands-on experience of being an eighth grade teacher for 10 plus years and having a PhD in education? So that's an example of like, you know, you can have that all the way down to like a verifiable engineering problem that you're seeing the latest, you know, you know, seeing the latest models fail on. So you have... Yeah, I, I think that gives you a, a, you know, the, the gamut. You also have, you know, we talk about professional domains like these reinforcement learning environments, like, you know, there's a bunch of papers out there that's basically speak to like people narrating over their step-by-step tool use. So as they go to solve a problem from start to finish, interact with multiple different service areas, interact with multiple different tools, you know, they're like, you know, there's papers that talk about this, but like, you know, talking over what they're doing, actually following and screen recording where their mouse is going, how they're problem solving. When they run into a roadblock, what do they do? They really want to understand how humans think.
- LRLenny Rachitsky
You mentioned this term trajectory. Can you just explain what that actually means? Because it feels like you've mentioned that a few times and that feels important to tell this.
- GLGarrett Lord
Yeah. Well, trajectory is basically just like the entire environment, uh, that is collecting what you're doing. Um, so it's your screen, it's your mouse-
- LRLenny Rachitsky
Oh, I see.
- GLGarrett Lord
It's, it's-
- LRLenny Rachitsky
Oh, wow.
- GLGarrett Lord
Yeah. Yep.
- LRLenny Rachitsky
Including this voiceover. Okay. And then I... this might be too technical, but what is the output of all this work, this, say teacher, is it just like a JSON file, an XML file, like a text file?
- GLGarrett Lord
Yeah. Yeah. Think about it as JSON data.
- LRLenny Rachitsky
JSON data, okay.
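As a rough sketch, one captured trajectory might serialize to JSON along these lines. The schema here is invented for illustration; real labs' formats differ:

```python
import json

# Hypothetical trajectory record: a step-by-step capture of an expert
# working through a task (screen events plus spoken narration).
trajectory = {
    "task": "Reconcile a month of transactions in a spreadsheet",
    "steps": [
        {"t": 0.0, "event": "click", "target": "cell B2"},
        {"t": 2.4, "event": "type", "text": "=SUM(A2:A31)"},
        {"t": 5.1, "event": "narration",
         "text": "Now I check the total against the bank statement."},
    ],
    "outcome": "success",
}

payload = json.dumps(trajectory, indent=2)
```

The screen recording and audio themselves would be stored separately (multimodal files), with the JSON acting as the structured index of what happened and when.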
- GLGarrett Lord
And, uh, and then we also have, like, multimodal work, like audio, like classifying music and understanding. We're engaging, like, thousands, or not thousands, like probably hundreds of, uh, top music students at, you know, the lead music schools in the country who are improving models' understanding of music. And you also have this thing called rubrics, which we haven't talked about here. And a rubric, like, you can put a model in as a judge. Like, you, you could, if you... What is a good, what is a good educational design? Or wha- what's a good MRI result? In some of these domains you actually don't have a, a guaranteed correct right answer. And so models can sit in the middle as a judge and actually understand, you know, what is... You know, kind of like think back on your school days. Like, what it... How do you get an A on your 5,000-word paper? Well, there's, like, a great introductory statement and there's scientific proof. You know, like so you can build a rubric that allows a model to sit in the middle and actually, like, auto-evaluate responses. We're seeing a lot of rubrics work as well.
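The rubric idea can be sketched as a weighted checklist. In practice an LLM judge decides which criteria a response satisfies; here the flags are supplied directly, and the criteria and weights are made up for illustration:

```python
# Hypothetical rubric for grading an essay in a non-verifiable domain.
# (criterion key, human-readable description, weight)
rubric = [
    ("has_thesis", "States a clear introductory thesis", 2),
    ("cites_evidence", "Supports claims with evidence", 3),
    ("counterargument", "Engages an opposing view", 1),
]

def score(flags):
    """flags: criteria the judge found satisfied, e.g. {'has_thesis': True}.
    Returns the fraction of weighted rubric points earned."""
    earned = sum(w for key, _desc, w in rubric if flags.get(key))
    total = sum(w for _key, _desc, w in rubric)
    return earned / total

grade = score({"has_thesis": True, "cites_evidence": True})  # 5 of 6 points
```

Because the rubric reduces a subjective judgment to checkable criteria, a model-as-judge can apply it at scale to auto-evaluate responses, which is the pattern Garrett describes.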
- LRLenny Rachitsky
And you would think, like, why would you trust this one teacher's opinion that this is the right way to do it? But what's cool is the market speaks for itself. If these models are being used more and more and people love them and value them, I imagine there's steps in between to verify this is good and other people think this is a good idea. There's, it feels like the market dynamics will tell you if the data you're providing is correct and what people want. Is there something more there?
- GLGarrett Lord
You know, I didn't get a PhD in (laughs) in AI or math or physics and I haven't trained myself with the frontier models, but you know, there is a lot to each unit of data, whether it's improving. If it... You know, there's a ton of science and research out right now around, like, how do you make sure that the data that you're producing is improving the model? And it's very hard for model builders to understand... You know, they, they really care about... To, to zoom out, they care about three things. They care about, like, quality first and foremost. Like, you have to have high-quality data. And if you, you imagine you're training a model, like teaching a student and you're giving it the wrong data, it's extremely, you know, challenging to overcome that. So quality is first and foremost. And then the other huge problems you have is, like, volume. Like, how, how do you generate thousands of pieces of data in the most advanced domains of chemistry and mathematics and physics? And how do you ensure that it's high quality? Well, for us we... Say in physics we just reach out to students at Stanford and Berkeley and MIT and, like, they're at the top GPA, they're at the best physics schools in the country. And so our ability to get to scale or volumes of data with that and to produce very high-quality data is, is something they care deeply about. And then the other thing I would say model builders care about is speed 'cause they have all these hypotheses and they're constantly testing different pipelines. And so you might have, like, three or four bets going at once and then as soon as one is actually showing a, a gain, imagine you're a researcher or, you know, you're a scientist, as soon as one's showing a gain then you're trying to grow that pipeline and grow that piece of data that's actually improving it. And you're maybe ditching two or three other projects you had that weren't showing improvement. 
So your ability to quickly turn around for them in, in a period of days and then get to high volumes of data that are high quality is the number one thing they care about. And so there's quite a bit of technology we built on our side to assess each unit of data. We have our own post-training teams, we're renting our own GPUs, and we're trying to make sure that we can sit directly with these researchers and help share, like, what we're seeing with the data that we're creating and how it, how it could improve their model, how they could best train with it. Um, so hopefully that helps.
- LRLenny Rachitsky
Going back to the types of post-training, just 'cause I think this might be helpful, at least for me, the mental model of there's pre-training, there's post-training, within post-training there's, uh, reinforcement learning, human feedback, there's kind of this concept of fine-tuning. Uh, there's also evals and stuff like-
- GLGarrett Lord
There is SFT, yeah.
- LRLenny Rachitsky
SFTs, which is supervised fine-tuning?
- GLGarrett Lord
Yeah.
- LRLenny Rachitsky
Is that... Okay. So the stuff you've been describing, is that... Would you mostly describe that as, uh, supervised fine-tuning?
- GLGarrett Lord
Uh, yes.
And we're doing preference ranking and we're kind of doing all of the above. Uh, we don't do the auto eval. We, we produce rubrics which are used in auto evals. Um, yeah.
- LRLenny Rachitsky
Okay, awesome. So essentially there's a model, it's trained on all this amazing data, uh, you guys come in after the model is trained to tweak the weights based on additional data that you create. What's interesting is that this is a scalable system. I n- well, I want to talk about just, like, the supply of amazing people that you have producing this but it's amazing that humans can do this. Like, you would think it needs to be this infinitely scalable thing but, like, humans sitting there adding, creating data is working in improving the intelligence of models significantly.
- GLGarrett Lord
Oh, yeah. I mean, I think, like, maybe a funny joke is, like, all the MBAs think this is all just, like, gonna go away. It's, like... And I think for as long as models are improving, humans will be needed in this process. And when you talk to the lead scientists and researchers at these labs it's, like, the data types will evolve in what they're trying to capture and collect but, you know, th- there will be y- there will be humans needed in this space for the next decade until we reach, like, full ASI. So yeah, it's... I mean, you think about, like, you... You know, a lot of them will struggle to do basic trajectories right now. So, you know, right now people are very focused on academic domains and I think they'll continue to be focused on academic domains but there'll also be, you know, far, far more demand for professional domains as well across basically every, every trajectory or step-by-step kind of problem that a knowledge worker solves in the workplace. You know, it's the pursuit of these labs to make sure that they're trying to collect the data to help add as much value in that process for humans as possible.
- 24:17 – 27:58
Why AI won’t eliminate entry-level jobs
- LRLenny Rachitsky
So let me ask about this. There's this tension I imagine people might feel between, uh, all these students... training models to become smarter and smarter and smarter, and then there's the, they will have a harder time potentially finding jobs if models are so smart that people at the entry level, uh, aren't being hired as much. How do you think about just that tension? Do you think this is a real problem or not? Where do you think this goes?
- GLGarrett Lord
I mean, like, I'm probably in the camp of, like, GDP growth over, like, universal basic income. Like, I, I, like, very much, like, believe that this is going to improve and accelerate every human's ability to, like, create an impact in the economy and the world. And that, you know, we're hearing from ... There's like a million companies that use Handshake. Like, we have 100 ... well, 100% of the Fortune 500 uses Handshake. So we, we basically power the vast majority of how young people find jobs. And a lot of people are kind of hyperbolic in saying that all young people won't have jobs, and like, that's not what we're hearing from our employers. What we're hearing is like, take like social media marketing. Like, before you needed, like, somebody that could do Photoshop and take pictures and create videos. They needed somebody that understood, like, marketing analytics platforms to track, you know, what you were posting on different social media forums. And it's like, now one person, one, like, young, talented, AI native, Iron Man suit-enabled young person can get on, and like, they can build their own videos, produce their own creative assets, post across multiple social media platforms, run all of their own analytics. They don't need a data science degree to be able to do that. And that an ex- ... Or, or like take an intern at, in our, in our company. Like, he had his first PR op, like, I think, like, the afternoon he started, right? Like, if you were a PM, like, you realize how, how challenging that would have been historically to get your dev environment set up and, like, figure out where to add value. He just took a bug and, and squashed it. And so I'm really a believer that this is just, like, enabling human beings to be even more productive and create more impact. And yeah, like, of course, like, like m- ... hundreds of millions of jobs will become, you know ... The jobs will evolve. 
Like, people will become displaced, they'll have to upskill and reskill, and I think Handshake has a huge role to play in, in, in helping, uh, knowledge workers evolve.
- LRLenny Rachitsky
This has come up a couple of times, this point that I think is really good, that, uh, younger people coming out of school are actually gonna be much more likely to be successful because they're kind of, uh, growing up with these tools and are much more native to all these advanced tools. And so they just come in as beasts, just doing so much more.
- GLGarrett Lord
Well, do you remember, do you remember when-
- LRLenny Rachitsky
Yeah.
- GLGarrett Lord
... like ... I mean, I, I (laughs)
- LRLenny Rachitsky
(laughs)
- GLGarrett Lord
It's a little ... it predates me, but, like, you used to put, like, Google Search as, like, a skill on your resume, right? Like, you were a young person, you were, like, good at Googling, right?
- LRLenny Rachitsky
Yeah, yeah.
- GLGarrett Lord
'Cause you, like, grew up with Google. It's like, I think being, like, AI native and having your Iron Man suit on and understanding how to leverage these tools is like, uh ... Young people are at a huge advantage.
- LRLenny Rachitsky
Yeah. Uh, especially if they're involved in training these models, I imagine there's some other cool advantage there.
- GLGarrett Lord
Yeah. Well, I mean-
- LRLenny Rachitsky
Uh, yeah.
- GLGarrett Lord
... just to hit on that, like, what we're getting from, like, our thousands of fellows is, like, they're in the classroom. They're actually producing research. Like, we're talking about, you know, PhDs at the top institutions of the country, and like, they, th- th- they can make, like, 100, 150, $200 an hour in their area, in their field of expertise. It's pretty sweet. Like, you can make like 25 bucks an hour being a teacher's assistant, or you can actually make $150 an hour breaking the latest models. And like you're learning that ... What we're hearing from our, our fellows is like, they're bringing a lot of those insights into the classroom to help them be more effective at teaching. More importantly, they're, they're starting to learn how to leverage these tools to actually advance their area of research. So they believe that these tools can help them advance their area of research by helping them be more effective with their time. And so, uh, it is quite cool to get kind of paid to learn
- 27:58 – 33:05
The continuous improvement of AI models
- GLGarrett Lord
a skill.
- LRLenny Rachitsky
Before I get to the story of how this all emerged, 'cause that is an incredible story, is there anything else about this whole field of labeling, of reinforcement learning, uh, that you think people just kind of don't fully understand or you think that's really important? There's just like so much happening. Like I said, some of the fastest growing companies in the world are in this space. Scale was just, like, acquired for, you know, 30 ... like sort of acquired for $30 billion. Uh, just like what else is there, if there's anything, that you think people need to understand?
- GLGarrett Lord
Generally speaking, like, any time that you're interacting with a model and you're asking it to do really advanced things and it's not performing to your expectations, like, somewhere there's probably an expert that is, you know, th- the top mind in that domain working directly for the best researchers in the world at the frontier labs, trying to understand and go through the scientific iteration process of how to make that better. And the assumption there is that, like, they already have the entirety of human knowledge that's written and recorded. And so, you know, for as long as there are problems in solving any problem with AI, you know, th- any human problem, there will need to be humans in the loop helping advance that. And like models don't generalize. I mean, they're ... Obviously the field will advance a lot and the type of data they collect will, will evolve a lot, but it's, it's pretty exciting at the frontier.
- LRLenny Rachitsky
Kevin Weil was on the podcast, the CPO at, uh, OpenAI.
- GLGarrett Lord
Yeah.
- LRLenny Rachitsky
And he, uh, h- he made this point that really stuck with me, that the model of today is the worst model you will ever use.
- GLGarrett Lord
I love that line.
- LRLenny Rachitsky
Will only get better.
- GLGarrett Lord
Yeah.
- LRLenny Rachitsky
It just, just boggles the mind. And now we know why things are getting better, 'cause of all the work you guys are doing. Uh, just one quick question on this whole Scale thing, I guess. They were like, I don't know, the main company doing this. Now they're swallowed up and Alex is running Superintelligence at Meta. Are they still, like, a big player in this labeling space, or are they kind of out of it, and, and that's ... you think that's a big opportunity?
- GLGarrett Lord
Yeah, I mean, kudos to the whole Scale team, uh, out of respect and for what, what they've built, um, there's many great companies operating in this space. I think to the core of your question, it's like, uh, I think if you were building the most ... If you viewed your research team and your model building team and the experiments they're running to be, you know, really the cornerstone of how you're improving, you probably wouldn't want the latest research of what you're trying to work on being, you know, being invested in by a, uh, by a peer. I mean, that's just generally what we hear in this space. Uh, and so we have seen a, uh, incredible surge in demand, um, and are, I think, extraordinarily well-positioned. We, we like to say, like, the only, the only moat in human data is access to an audience. Basically, there are, you know, many, many small players in this space, uh, some mid-sized players in this space, and they're basically, you know, running TikTok ads, running Instagram ads, paying money for Google Search display ads, YouTube ads, and, uh, they will be like, "Can you get me 200 physics PhDs?" They... What do they do? They only can do one thing. They, like, you know, they have 100 recruiters on staff. They all get on LinkedIn. They all send messages. They spend a couple million bucks on performance advertising campaigns. Somebody's scrolling their Instagram feed that's a physics PhD, who you can't target that well, and they, like, see, you know, "come train a model." It's like, "I've never heard of this brand before." The huge advantage that we've had and why we've resonated (laughs) so fast in the marketplace is, like, we built a decade of trust with, you know, 18 million people, and they trust us, and, and we built up a ton of brand affinity, and they use Handshake, and they have an active profile, and we have a ton of information around their academic performance and what they've done in school. 
And so, we're able to really target people really effectively and get to scale and volume of high-quality data faster than anyone else. And I think that competitive advantage of access to an audience is really resonating in the marketplace.
- LRLenny Rachitsky
Today's episode is brought to you by Anthropic, the team behind Claude. I use Claude at least 10 times a day. I use it for researching my podcast guests, for brainstorming title ideas for both my podcast and my newsletter, for getting feedback on my writing, and all kinds of stuff. Just last week, I was preparing for an interview with a very fancy guest, and I had Claude tell me what are all the questions that other podcast hosts have asked this guest so that I don't ask them these questions. How much time do you spend every week trying to synthesize all of your user research insights, support tickets, sales calls, experiment results, and competitive intel? Claude can handle incredibly complex multi-step work. You can throw a 100-page strategy document at it and ask it for insights, or you can dump all your user research and ask it to find patterns. With Claude 4 and the new integrations, including Claude Opus 4, the world's best coding model, you get voice conversations, advanced research capabilities, direct Google Workspace integration, and now MCP connections to your custom tools and data sources. Claude just becomes part of your workflow. If you wanna try it out, get started at claude.ai/lenny, and using this link, you get an incredible 50% off your first three months of the Pro plan. That's claude.ai/lenny.
- 33:05 – 37:07
The emergence of Handshake’s new business model
- LRLenny Rachitsky
Okay, this is an awesome segue to where I wanted to go, which is just how, how this business emerged. This is a business that you built on top of a business you've already had. From what I understand, you were at, like, $150 million in revenue. You've been at this for a long time. You found this opportunity. And now that I... You know, looking back, it's like, obviously this is an amazing idea. Labs need data. You guys have the supply of incredible experts. What an opportunity. (laughs) Talk about just how you first realized this was something that you could be doing and should be doing and then how you started to kind of execute down this path.
- GLGarrett Lord
Yeah. I, I think it's been a pretty natural extension from, like, helping people jumpstart or restart or start their career. Like, you know, monetizing your skills in, in this new employment ecosystem is gonna look very different in the future, and we've wanted... You know, to, to, to, to zoom into, like, how we discovered it, it's like we... Because we have such a large access to this audience and as the world shifted from generalists to experts, we're the largest expert network in the world. We have, you know, more PhDs than any other platform, 500,000 of them use Handshake. We have three million master's students who are, you know, in-school or alumni. And so, we started to see all the, what I would call, like, middleman companies reaching out to us saying, "Can we recruit your PhDs and master's students?" And like any great marketplace, you know, we started sending them to these different platforms and started to really realize, you know, from hearing from our users, that, like, the experience was really frustrating, like, training was very transactional. The payments were, you know... It was very amorphous how you could get paid. Like, there was an immense amount of drop-off in the process to actual project, like, completion on these other platforms. So, we started to, we started to think the company was, you know, making tens of millions of dollars from, uh, helping these other platforms, and we started to realize, like, w-... You know, what really kicked it off was, like, hearing also from the frontier labs, like, they started to reach out to us and started to go direct and tr- trying to, like, almost kind of cut out the middleman. And we started to realize, well, you know, we could really serve our fellows, our PhDs, our experts. We could treat them... We, we just believe there's, like a... There will need to be a platform, an experts-first platform, in the pursuit of ASI and advancing AI. 
And there will need to be a place that everyone in the world could go to to monetize their skills and their knowledge as these labs are focused on improving in these, you know, in all of these multidisciplinary outcomes. And, yeah, we, we entered the business in... Really, like, I started doing it over, like, Christmas and New Year's. Like, that's when I started, like, flying around. My family kind of thought it was a little wild that I was, like, on, (laughs) on planes trying to chase different leaders. But we, we built an incredible team of people that came from the human data world and really started building out our platform in January, and then started really monetizing the relationships about five months ago. Uh, fast forward to today, we're working with seven of the frontier labs, basically e- every lab that's doing, that's do- that's doing work (laughs) in building the best large language models. And the team has exploded, and revenue's exploded, and it's been, it's been really an incredible ride kind of, like, running back a new company inside of a company for the second time over again.
- LRLenny Rachitsky
And just to share some numbers, tell me if this is correct or if you're sharing these, but, uh, I heard that you hit $50 million in revenue just four months into this. Today, we're at eight months in, and you're on track to hit $100 million in revenue, uh, in the first year.
- GLGarrett Lord
I think we'll blow through that number, but yeah.
- LRLenny Rachitsky
(laughs) Okay.
- GLGarrett Lord
Right.
- LRLenny Rachitsky
Incredible. Uh, and I didn't even know there were seven frontier labs. (laughs) That's, uh-
- GLGarrett Lord
Yeah. Zero to 50 is pretty good in four months, I think. (laughs)
- LRLenny Rachitsky
Uh, zero to 50 million in four months. That's something. It's like the bar has been (laughs) shifting constantly.
- GLGarrett Lord
Yeah.
- LRLenny Rachitsky
Like, you know, a year ago, that'd be legendary. Now it's like, all right, well, another one of these.
- GLGarrett Lord
(laughs)
- LRLenny Rachitsky
It's 50 million in four months. No big deal. Uh,
- 37:07 – 40:42
Incubating new ideas in established companies
- LRLenny Rachitsky
it's truly insane. Just to zoom out one second for people that don't know a ton about Handshake, the original business, what was that like? What was actually this network that you had that you sat on top of?
- GLGarrett Lord
Yeah. Tha- that network does about 200 million. This one will do about 200 million.
- LRLenny Rachitsky
200 million.
- GLGarrett Lord
Yeah.
- LRLenny Rachitsky
Okay.
- GLGarrett Lord
So that's, we have like 600-ish like super passionate teammates that work on, on the core business, which is, you know ... I would simply say, like, these aren't two businesses. I think it's like, it's one business, but, uh, that, what is that business? Um, it's the ... If you're a young person in America that's graduated in the last five, six, seven, eight years, you probably have Handshake on your phone. You like definitely know what Handshake is. It's like a, it's a verb with young people in America. It's a verb with people that, like, are in college, in their PhD or master's, you know, program. And it is, uh, I call it like an unconnected graph. Uh, meaning like you don't need to, you know... LinkedIn is very focused on like who you know and like what your experience is. The first question on LinkedIn is like, "What's your job?" And a lot of young people start off like they've never had a job before, right? They don't, they don't have like 500 connections to add to their, to their, to their graph. Whereas on Handshake, you start off like trying to discover and explore and figure out how to navigate through school and figure out, "Oh, I'm an engineer. Maybe I wanna be a PM. Maybe I wanna work at a startup. Maybe I wanna work at a larger company." Like, what are the pros and cons? You wanna learn from near peers and young alumni. And so Handshake's this, like I call like a very like social platform with like groups and messaging and profiles and short form video and feed, all focused on your interests and helping p- really like build your confidence in your early career to find your first job, your second job, and to manage, you know, kind of 18 to 30, I would say.
- LRLenny Rachitsky
And how long have you, has that business been around?
- GLGarrett Lord
It's been around 10 years.
- LRLenny Rachitsky
10 years.
- GLGarrett Lord
Yeah. We-
- LRLenny Rachitsky
Okay. So it's just like, again, it just feels like such a holy shit, you guys are in the right place in the right time with the right network that is extremely valuable now. Uh, what an interesting story. I feel like, I feel like it's just another interesting example of you've been doing something for a long time and then all of a sudden AI is just like, opens up a whole new way of leveraging something that you have been doing for a long time. It makes me think a little bit about, uh, Bolt and StackBlitz, which was building for seven years this like browser-based, uh, OS, where you could run an OS in the browser, and they're like, "I don't know. No one needs this. Why are we, what are we doing?" And then all of a sudden, AI, and they're like, "Oh, what if we build AI apps in the browser and just generate products for you with, with AI?" And now it's, uh, I don't know, one of the fastest growing companies in the world.
- GLGarrett Lord
Yeah.
- LRLenny Rachitsky
So interesting. And so, I think this is just an interesting time for people to, like, think about what have we done that may give us a new opportunity to build something huge based on this unfair advantage that we have.
- GLGarrett Lord
I think also, like, as your company grows in size and headcount and maturity, it's also like hard to like incubate something new inside of a business. Like, it's hard to, you know, it's hard in so many ways, right? Like, the way that you build zero to one and find product market fit and scale the team very quickly is very different than the way that you run a, a more mature business that has been around for 10 years with hundreds and hundreds and hundreds of people. Um, so I've really had a ton of fun and, and a ton of passion in, like, running it back again for the second time inside the business. And then, yeah, we have this massive strategic advantage, which is like no customer acquisition costs, and we have like much higher conversion rates and retention than like any of the other platforms by a large margin because we have such consumer
- 40:42 – 45:43
Handshake's competitive advantage
- GLGarrett Lord
affinity.
- LRLenny Rachitsky
There's actually two threads here I wanna follow. I'm gonna follow the second one first. Uh, this idea of where this data labeling work can come from. There's a really clear, simple, understandable one, which is just experts sitting there creating data. Another one that I know a lot of other, uh, companies in the space use, Scale I know especially, is just, like, low-cost labor internationally. Um, w- are there other methods for doing this that aren't one of those two? How are other companies doing this?
- GLGarrett Lord
I think if you, like, care about building a really high quality business and having, like, good gross margin and, like, high quality growth, like, you know, the, the, the ecosystem here is like, one of the leading players, like, they have like 200 recruiters.
- LRLenny Rachitsky
Hmm.
- GLGarrett Lord
It's like unsustainable. They're like 200 people on LinkedIn sending individual messages to acquire these people 'cause there's no brand, there's no trust. They spend, you know, they're spending tens of millions of dollars a month on performance advertising, Google ads, right?
- LRLenny Rachitsky
To find experts and to find folks.
- GLGarrett Lord
To find experts.
- LRLenny Rachitsky
And it- and it's experts mostly at this point.
- GLGarrett Lord
Yeah. And then they put them onto an experience that like is treating them like they're drawing like boundary boxes around stop signs in the Philippines. Like, you know, the, the Frontier tax accountants don't wanna be treated like low cost international labor, right?
- LRLenny Rachitsky
Mm-hmm.
- GLGarrett Lord
And I, I don't, I mean, I don't think anyone enjoys that process. And so, you know, the ability to build an experience that's rooted in community, that's rooted in, like, high quality training, like if you are getting your PhD at MIT, chances are you're just not being taught well enough on how to use the tools. It's not that you can't break the models. It's just that, you know, the other platforms, you know, they're spending thousands of dollars to acquire an individual user, and then they're put right into a project with no training. So, we just started from day one at building, like, this expert experience. We believe there'd be a deep network effect here that's very connected to our core business of starting, jumpstarting, or restarting your career. And like, you know, you come in, you build a profile, you see the community. There's, you know, groups and a feed of here's how people are learning. Like, you come into an actual individual cohort with like peers that, that look like you and- and have your similar background. You're being taught on how to interact, and there's like a trial and error, an ins-... we have an instructional design team teaching you how to do it. Then you're put onto projects where we're building, like... You know, there's certain swim lanes where we're actually pre-building data and selling that data to all the labs. So, we can do this thing where, you know, we produce one unit of data ourselves. We pay for it. Almost like a movie production. We pay for a unit of data, and then we, you know, we make sure it's very high quality. We, we run o- our own post-training on it, and then we produce a bunch of specifications of the data, and we actually sell that individual package of data to, like, many different labs. And so, they... you get put on a project like that. 
Once you're doing a really, really good job on our projects, oftentimes then we'll put you on customer projects where, you know, we, they only want the best of the best people in, you know, machine learning, right? And then they go from our projects to their projects. Uh, and so, you know, there's a huge customer acquisition advantage. I mean, it's basic... You know, you love going deep on your podcast, so just to talk about it, it's like, you know, you, you really have a couple things that matter. You have a cost, cost of customer acquisition, right, your CAC, and then you have your LTV, like the lifetime value of a user. And an LTV is calculated pretty simply in this business. Like, it is based on the retention of a person and how many projects they can participate in. So, if you treat people really well, you train them really well, right? Like, well, A, we have no customer acquisition cost 'cause we partner with 1,600 universities, power 92% of the top 500 schools in the country. We power almost every institution and community college in the country. We have no customer acquisition cost to acquire the people. We have a ton of brand and trust with them built up, so they convert at ins-... you know, really, really high rates. And then, if you treat them really well, and... 'Cause that's what they expect from us. Like, they know Handshake. Their school ties Handshake. Like, we, we need to treat... We, we care about treating these people well, but, like, the universities would not tolerate our partnership with these, with these fellows unless we treated them well. So, you, you put them into this process where our LTVs and repeat engagement rate and retention rate on different projects is, is really high. And so these structural advantages are quite significant when you contrast, like, a leading provider that has, like, 200 individual contributing recruiters and are spending tens of millions of dollars a month on performance marketing, you know? It... 
So, that's, I think, why we've seen so much success.
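The CAC-and-LTV logic Lord walks through, near-zero acquisition cost plus retention-driven lifetime value, can be sketched as a toy calculation. A minimal Python illustration, where every figure (repeat rates, dollar amounts) is invented for the example and none comes from Handshake or the episode:

```python
# Toy model of the CAC-vs-LTV comparison described in the conversation.
# All numbers below are hypothetical illustrations, not Handshake figures.

def expected_projects(repeat_rate: float) -> float:
    """Expected lifetime projects if an expert does one project, then
    returns for another with probability `repeat_rate` each time
    (geometric series: 1 + r + r^2 + ... = 1 / (1 - r))."""
    return 1.0 / (1.0 - repeat_rate)

def ltv(revenue_per_project: float, repeat_rate: float) -> float:
    """Lifetime value = revenue per project x expected project count."""
    return revenue_per_project * expected_projects(repeat_rate)

# Provider that buys experts via ads and recruiter outreach:
ads_cac = 3000.0                    # hypothetical spend per acquired expert
ads_ltv = ltv(500.0, 0.5)           # 2 expected projects -> 1000.0

# Provider with an existing audience: near-zero CAC, better retention:
audience_cac = 50.0                 # hypothetical onboarding cost only
audience_ltv = ltv(500.0, 0.8)      # ~5 expected projects -> ~2500

print(ads_ltv / ads_cac)            # ~0.33: underwater on each expert
print(audience_ltv / audience_cac)  # ~50: healthy LTV-to-CAC ratio
```

The geometric-series retention model is just one common simplification, but it captures the point Lord is making: the ratio improves on both axes at once, cheaper acquisition and longer retention.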
- LRLenny Rachitsky
That's extremely interesting. And it feels like, as you said, there used to be a big focus on generalists, which is people anywhere in the world, for low cost, can do the work, like draw bounding boxes around things. And it... And essentially the market has shifted from low-cost generalists to experts, and a lot of these companies like Scale were optimizing for general model training data, and you guys are set up to be extremely good at expert-based data, and so you're in the right place at the right time with the right supply. Uh, what a business.
- 45:43 – 48:38
Scaling up and meeting market demand
- GLGarrett Lord
Yeah.
- LRLenny Rachitsky
Nice work.
- GLGarrett Lord
Well, not... I would say it's not been easy building business two inside of business one, but-
- LRLenny Rachitsky
Yeah. So, let me follow that thread. That's where I wanted to go. What was just that like? So, you started noticing that model companies were coming to your people, that people were having hard times with some of these other companies in this space, and you're like, "Oh, maybe we should be doing this sort of thing." How did that just, like, initial inception start, and how did you start to explore that idea and to see if it was a real thing?
- GLGarrett Lord
Tactically, um, you know, we were working with many of the middleman companies doing work. We started to see the demand, as I t- talked about earlier. We, we started to see direct outreach from the frontier labs reaching out to us, trying to cut out the middleman c-... in their pursuit of getting higher-quality data. We started to put together the dots that we, we could build a way better experience for our fellows. We could serve them directly to the labs and build a direct cost relationship with the labs and basically cut out the middleman and provide a better experience to the labs, provide a better experience to our fellows, and provide a better experience long term to our, like, a million companies in the network, and, uh, you know, and the... And, and you might, you might think about just, like, upskilling and reskilling, what's gonna happen there. So, we went into the space. We started in, you know, really December exploring and learning more about it on, like, expert calls and hammering down... You know, I hired, like, three expert firms, Alpha in s-... AlphaSights and, like, GLG, and started doing a bunch of calls with the latest researchers 'cause we had resources. Like, one of the cool things about being a larger company is, like, we, we have financial... You know, our core business is $200 million ARR, so it's like, you know, we, we, we had resources to be able to, like, accelerate the learning curve here. Uh, and then we started working with the... arguably, like, the number one lab about five months ago.
- LRLenny Rachitsky
I wonder who that is.
- GLGarrett Lord
Yeah. Um, (laughs) yeah, wonder who it is. Depending on which benchmark, you get different answers.
- LRLenny Rachitsky
Mm-hmm.
- GLGarrett Lord
Uh, working with the number one lab, and, uh, and, and have just... You know, now we're working with seven other frontier labs, and, uh, the number one thing we're trying to do is just focus on, like, scaling up. I mean, we've gone from four or five people working on this to 75 plus people working on it. Uh, we're trying to... I think we had, like, 12 people start last Monday. It's like, we are... You know, we are so bottlenecked on just meeting this opportunity, because in this market, there's, there's unl-... essentially, like, unlimited demand. Like, if you can produce high-quality volumes of data, uh, you most likely will be able to sell whatever you produce. Um, and so, on our side, it's like we're really focused on making sure that we pick the right longer-term strategy, uh, making sure that we don't grow too fast as to erode the trust that we built up with these frontier labs. Yeah, but it, uh... You know, it's- it's been, it's been fun.
- 48:38 – 53:08
Overcoming challenges and adapting
- LRLenny Rachitsky
You said it's also been really hard to start this business within an existing business. What has been... what's been hard? What's been hardest? You touched on a couple of these elements already, but what else?
- GLGarrett Lord
I think I just kind of followed a lot more of my intuition around this, doing this. I mean, the story of Handshake was we had to sign up 1,600 universities. So I had to learn how to be, like, the best... We are the fastest growing higher education company in, like, history. So we signed up 1,600 schools. And then we had to build an employer business where, you know, we had to figure out how to sell the 100%... (laughs) You know, 70... You know, all these Fortune 500 companies use it and, like, 70% of them pay for it. So I had to learn about, like, upmarket sales to, like, Goldman Sachs and General Motors and Google and the biggest companies in the world, which is totally different than selling universities. And then we had to learn how to build, like, an incredible student, like, kind of social network. Like, what does a, what does the best feed look like? What does group messaging look like? You know? So we had... I felt a little bit of familiarity in this, like, kind of zero to ones. Oftentimes, like, marketplaces are, like, many zero to ones. Sometimes I dream that we just, like... I actually don't dream, but I make a joke that, like, I just wish we were, like, a cybersecurity company and we had, like, one buyer and just, like, one product. And it was just, like... You know, we had to... In, in a marketplace, you have to serve three different sides. You know from your time at Airbnb. And so, one of my learnings in spinning up these three different businesses and starting Handshake was, like, you know, you... I was pretty hands-on. So, like, you know, everyone reported directly to me. I really did not try to be, like... I, I really said in a lot of meetings, like, "I'm not trying to be the boss. I'm just trying to be, like, another smart guy in the room." Like, I hired... I was just... We've hired an incredible team of people that have, have spent a lot of time in this space and have been big leaders at a lot of the human data companies in this space. 
And so, everyone saw very clearly the structural advantages that we had. And a lot of the focus was on making sure that we could deliver high quality data to one customer before we expand to anyone else. Like, we just... You had to say no to a lot of things.
- LRLenny Rachitsky
Mm-hmm.
- GLGarrett Lord
And then you also had a lot of people in the core part of the business that, rightfully so, but, like, there's just checks and balances that... There's a lot of people that will, like, try to get involved, right? Like, everyone wants to say... Not everyone. This is a stretch. But, you know, it's easy to say no, right? It's easy to be like, "I, I can't prioritize that this week or this month. I have an existing set of priorities." So, you know, I essentially, with the exception of a few things, like, everyone just came straight into this new org that I built. Everyone did not have any responsibilities in the existing part of the business. It was extremely clear who was, like, the directly responsible individual across each area of the NewCo. And now we've got deeper coupling and integration points across the rest of the business. But, like, we sat in a separate part of the office. You know, we were... We... You know, everyone's in the office five days a week, a lot of weekends. There's a totally different expectation in hiring talent too, where it's like, "Hey, this is a, this is a 24/7 job." Right? Like, this is an early stage company. We... The compensation was also different too, and based on, like, hurdles in this new business. So people felt like owners creating the NewCo. And yeah. It's like, it's still extremely nimble. Very, very flat. You know, just because you, you know, run one function doesn't mean you're the directly responsible individual on a project. We pick the best person who's most capable of driving an initiative forward, regardless of the function, to be the DRI. We're a lot more metrics oriented. You know, when I, when I built Handshake, we, we, we resisted this, like, operating cadence for a long time. Like, this weekly, monthly, quarterly operating cadence. With Handshake AI, we've, we've been way more focused on, like, operating with data and metrics and rigor from an early stage. 
There's a gentleman named Sahil on our team who's been doing an incredible job with that. Shout out, Sahil. Shout out, Yang. Shout out, Paco. Um, yeah.
- LRLenny Rachitsky
Okay. This is incredible. So, a few kind of elements of what allowed this to succeed within a decade-old company. And by the way, so you're at 200 million a year in revenue with the traditional business. You're gonna, as you said, blow past 100 million in the first year of this new business. So it's wild that in the first couple years, if things continue to go this way, you'll exceed, uh, the size, the run rate of a business that took you 10 years to build.
- 53:08 – 57:26
The importance of separate teams and ownership
- LRLenny Rachitsky
Incredible. To make this successful, a few of the things I noted as you were talking. One is, clearly, you are just like in founder mode. You're the CEO of this comp-... You're, like, the lead of this new business. You were taking... You weren't delegating it to someone, "Hey, go start this thing." You dedicated people. "Here, we're going to pick people. You have nothing else going on. This is your new job. You're gonna work on this, this stuff." You worked in a different part of the office. There's a different... There's a metrics-based cadence. So it's just like, "Let's stay really diligent about, here's how it's going, here's where we're going, here's our track, here's our KPIs," things like that. Anything else there that you felt really important to making this work? Because a lot of companies are gonna try to do this, I imagine, and so I'm curious what else you found important to make this work.
- GLGarrett Lord
Yeah. I mean, I just really believe in separating everything. Like, separate engineering team, separate design team, separate accounts and operations team, separate finance team. Like, early on, everything was separate. People only had one job and one job only, and that was making Handshake AI successful. We had a couple more integration points. And I have an incredible executive team in the core part of the business, and now there's becoming more and more involvement. But, like, you know... Our executives that have built Handshake for a long time ran the core business. And I focused 80-plus percent of my time and attention on just this. And, you know, we hired an incredible engineering leader, like Avery. We focused on hiring a lot of entrepreneurs. We have a lot of entrepreneurs, people that have started companies before. Like, that was huge. Um, a lot of familiarity with hiring talent that have, like, only worked at early stage companies before. They feel super comfortable with ambiguity. Um, we were also, like, way more upfront around, "This is gonna be chaotic." Like, just owning that narrative in front of all-hands at the core company. Owning it directly with the team. We have a separate all-hands. We have separate onboarding. We have a separate recruiting team. Like, you know, everyone was essentially... I had some connection points, but mostly separate, and I think that was, like, absolutely critical. We took some of the top people, and we have great people in the core business. We took some great people from the core business in, and basically said, "Sorry, like, I know you love your old team. I know you love what you're doing. Like, will you join us in Handshake AI?" And they, like, completely forwent their historical responsibilities and came over.
That became really critical with engineering when things started to scale and topple, and like, you know, we're growing so quickly. We took some of our top senior engineers who are very entrepreneurial, and principal engineers, staff-level engineers, and, like, parachuted them in. It's been awesome to be able to, like, ask some of the most talented people in the core business, like, "Hey, do you wanna come over here and do this?" And sometimes they say no. Like, they're like, "I don't wanna work, you know, most of the weekends." The number of 2:00 AM, 3:00 AM nights we've done in this business... I mean, it's quite regular. Like, people sometimes don't wanna commit to that. But we've been up front, like, "Here are the expectations for this team." It's an insane pace. If you wanna be a part of one of the fastest growing, you know, businesses in Silicon Valley, you can join it. Um, the ownership too has been huge, like, owning this outcome. We have this motto, like, leave nothing to chance. For a while there, we, like, drew the number of days in the year on the whiteboard, and it was like, there will never be a time like this. I've never seen anything like it. I doubt I'll ever feel anything like this in business again, where there's unlimited demand and it's just our ability to execute against it. And so we had this motto, like, leave nothing to chance. Like, how do you make sure that three months from now, six months from now, you have, like, no regrets? Like, get on the plane to go talk to a customer. Like, make the late-night push. Check the data six times over again. Like, ship the extra feature that helps. And really a huge celebratory culture too, like calling people out across... It's very flat, right?
So there really isn't this hierarchy. There are so many people putting up points, and directly calling out the people that are putting up points, and creating a really fun environment around impact, I think, has been awesome.
- LRLenny Rachitsky
The "leave nothing to chance" piece, I imagine, speaks partly to the value of trust in what you're doing. People are gonna... Like, you win if they can trust that your data's awesome and great and consistent, and I could see why that ends up being such an important part
- 57:26 – 1:00:30
The future of job matching with AI
- LRLenny Rachitsky
of what you're building. And, like, just listening to you describe this, I understand... It's obviously a massive opportunity, obviously a massive advantage you guys have, and just, like, the stress that comes with that burden also, I imagine, is very high. It's just, like, we can't screw this up.
- GLGarrett Lord
No (laughs). Cannot, cannot... You know, Handshake should be a business that does billions of dollars of revenue as a public company. We should be able to continue to... I mean, it also helps our core business. Like, the longer-term opportunity that we see is... It's building the best job matching marketplace on the internet. It's probably one of the largest problems in the world, like, labor supply matching. It's where people spend most of their time and energy, just hours of their life — they spend it at work. The process of, like, searching for a job, applying to a job is gonna be completely reinvented with AI. We've been leading the charge there, like, you know, an AI interviewer that's collecting skills and actually asking about your experiences, doing work simulation, experiences that, like, help employers find the best candidates. I mean, I don't know the last time you've done this, but, like, the hiring manager process, like, reviewing 200 resumes. Like, are you kidding me? Like, I'm gonna sit there and review 200 resumes? Like, not a chance five years from now, right? Like-
- LRLenny Rachitsky
Mm-hmm.
- GLGarrett Lord
... uh, students manually making cover letters? Like, not a chance, right? So there will need to be a marketplace that wins in connecting, you know, supply and demand, talent with opportunity, and we get psyched about, like, the opportunity for impact here. Like, uh, that's my story. Like, I went to community college. I went to a no-name school in the Upper Peninsula of Michigan. I worked at Palantir as an intern, and it totally changed my life. And, like, I started Handshake 'cause I wanted to make it easier for, like, anyone, regardless of who you knew, what your parents did, what school you went to, to find a great opportunity. And I think AI will be a total step-function improvement in matching. And I think that our human data business is really serving as, like, the foundation for improving matching. Like, a lot of things that we're doing in the human data business are being integrated into our core business. I think that's gonna improve outcomes for employers, save them, in the aggregate, like, billions of dollars over time. Uh, and I think it makes the experience way better for students. So it's just like, we have to meet the moment. We still have the stamina and the excitement and the passion internally, in our core and in the new business, to, like, go charge after this. Uh, and that's a lot of the message we've been sharing internally: it's time to amp it up. This is a once-in-a-lifetime opportunity to be positioned this well, and, like, we are gonna meet the moment as a team.
- LRLenny Rachitsky
It really is. This very much feels like a once-in-a-lifetime opportunity. Let me ask a few other questions along these lines, something I've been thinking about, something that a lot of people think about, just while I have
- 1:00:30 – 1:02:37
The biggest bottlenecks to advancing models further
- LRLenny Rachitsky
you. There's always this question of: will we run out of data? Will models stop advancing? Are we gonna hit some plateau where there's not actually gonna be some AGI moment, ASI moment? So, first of all, do you think we'll run out of data? Is there a point at which we just can't produce more knowledge and data to feed these models? And kind of along those lines, what do you think is the biggest bottleneck to advancing models faster and further?
- GLGarrett Lord
Yeah, I mean, like, it's just the type of data we're gonna need is gonna evolve. It's gonna be CAD files. It's gonna be scientific tool-use data as they try to automate scientific discovery and drug discovery. It's gonna be esoteric... you know, operating systems that exist on scientific tools. So I love this, like, trajectory and, like, stitching together step-by-step instruction following. The type of data we're going to need is going to evolve a lot. And we haven't even talked about, like, multimodal and video and-
- LRLenny Rachitsky
Mm-hmm.
- GLGarrett Lord
... text and audio. Like, audio is... There's a huge demand for audio data right now. So the type of data's gonna evolve.
- LRLenny Rachitsky
Yeah. I use Voice Mode all the time. That's my default ChatGPT experience, just talking to-
- GLGarrett Lord
Yeah. It's amazing. It's amazing. I just had a baby on... Or, my wife had a baby on Sunday, and Voice Mode's been incredible. I mean, I'm up every night, you know... Every two hours it's (laughs) like, I have more questions. Voice Mode's been huge. So, uh, shout out Voice Mode. And yeah, so the type of data's gonna change a lot. Um, I think synthetic data has a role to play in, like, verifiable domains, but what we consistently hear from companies is, like, synthetic data's not gonna dominate. There's billions and billions and billions of dollars of value to extract as a company over the next decade in following the frontier of AI development.
- LRLenny Rachitsky
Let me first say just huge kudos to you. Your wife just had a kid a few days ago, you're building this business that is growing bananas, and you're doing this podcast conversation. I really appreciate you making time for this.
- GLGarrett Lord
Of course.
- 1:02:37 – 1:09:50
Lightning round and final thoughts
- LRLenny Rachitsky
Is there anything else that we haven't covered that you think might be helpful for folks to hear, or a part of your story that you think might be helpful for us to learn from? Or something you may wanna just double down on that we've talked about before we get to our very exciting lightning round?
- GLGarrett Lord
I mean, the thing that I always love talking about... I'm really passionate about, like, people starting companies and helping them do so. And, like, I just think in this moment right now with AI, like, for young entrepreneurs that listen to or read this podcast... 'Cause I've been a reader since 2020. We looked.
- LRLenny Rachitsky
I ... Yeah. We did check, and it's incredible.
- GLGarrett Lord
Yeah. Been a long-term reader. I'm just, like, so curious and love-
- LRLenny Rachitsky
Appreciate it.
- GLGarrett Lord
... soaking up your interviews. But it's like, can you just focus on doing something, like, of meaning, that really helps people? And I think with AI there's, like, gonna be so many opportunities to improve the way people learn. I'm just really passionate about trying to make Handshake a platform that is not only, you know, an incredible business, but also something that, like, really helps solve a societal problem that matters. And, uh, yeah, that'd be my one shout-out here. If anyone wants advice on how to do that or wants to reach out, I'm, like, happy to chat.
- LRLenny Rachitsky
Hmm. Okay. So this is, uh, an offer to share advice on starting companies within AI. Is that, is that the offer here, just for folks?
- GLGarrett Lord
Yeah. That'd be great. Happy-
- LRLenny Rachitsky
Okay. I don't know how much time you'll have for the hundreds of thousands of people coming your way, but-
- GLGarrett Lord
(laughs) .
- LRLenny Rachitsky
... but I appreciate the offer. That's very cool. Um, anything else before we get to our very exciting lightning round?
- GLGarrett Lord
No.
- LRLenny Rachitsky
Well, with that, Garrett, we've reached our very exciting lightning round. I've got five questions for you. Are you ready?
- GLGarrett Lord
Ready.
- LRLenny Rachitsky
What are two or three books that you find yourself recommending most to other people?
- GLGarrett Lord
I'm, uh, a sucker for Peter Thiel's Zero to One. I read it when I started the company, and watched Peter Thiel's, like, startup school class at Stanford. He taught back in the day when there wasn't everything written on the internet about how to start companies, and, like, I just think he was the coolest. Um, love Shoe Dog. Like, I think it's the epitome of, like, starting a company. Hard Things About Hard Things, obviously, but these are all quite common books.
- LRLenny Rachitsky
But, uh, also classics. Uh, Ben Horowitz is coming on the podcast to talk about Hard Things About Hard Things.
- GLGarrett Lord
Super cool.
- LRLenny Rachitsky
The Hard Thing About Hard Things. Yeah. Okay. Uh, what... Have you seen a recent movie or TV show you really enjoyed? I imagine you don't have much time for this, but-
- GLGarrett Lord
I'm gonna get blasted for this, but I, I did start Game of Thrones with my wife, and I, uh, cannot wait.
- LRLenny Rachitsky
For the first time?
- GLGarrett Lord
Yeah.
- LRLenny Rachitsky
Okay. Cool.
- GLGarrett Lord
So I got a lot of catching up to do.
- LRLenny Rachitsky
Why would you get blasted? No. This is great. It's like-
- GLGarrett Lord
Yeah. I've been loving it so far.
- LRLenny Rachitsky
People that have watched it love it. And you've loved it so far. Okay.
- GLGarrett Lord
Yeah.
- LRLenny Rachitsky
It's quite, quite gruesome, and that's the only downside of that show. (laughs)
Episode duration: 1:09:50
Transcript of episode 0qdR-XwHJ9o