No Priors

No Priors Ep. 110 | With Mercor CEO and Co-Founder Brendan Foody

On this episode of No Priors, Sarah and Elad sit down with Brendan Foody, CEO and Co-Founder of Mercor, to discuss the company’s rapid growth and their vision for the future of the labor market. They dive into how AI is reshaping the workforce in real, tangible ways and what skills are worth investing in today. Brendan shares insights on evaluating talent in an AI-driven world, including how models might identify outlier or 10x candidates and even assess “taste.” The conversation also touches on the evolving role of human data, the future of hiring in fast-scaling startups, and whether AI will act as an individual contributor or a data-centric manager.

Show Notes:
0:00 Introduction
0:16 Building Mercor
3:00 Identifying outlier talent with AI
9:07 How AI is reshaping the workforce: job displacement & evolution
11:18 What skills should we invest in now?
12:18 Verifiability
13:36 Evaluating models
16:07 What should kids learn today?
17:05 Evaluating taste in talent assessments
18:45 Future of data collection
26:07 Humans’ role in the AI economy
28:53 AI as a contributor vs. a manager
33:03 Mercor’s goals
34:50 Evolution of labor markets
36:00 Hiring advice

Sarah Guo (host) · Brendan Foody (guest) · Elad Gil (host)
Apr 10, 2025 · 41m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–0:16

    Introduction

    1. SG

      (instrumental music) Hi, listeners, and welcome to No Priors. Today, we're chatting with Brendan Foody, co-founder and CEO of Mercor, the company that recruits people to train AI models. Mercor was founded in 2023

  2. 0:16–3:00

    Building Mercor

    1. SG

      by three college dropouts and Thiel fellows. Since then, they've raised $100 million, surpassed 100 million in revenue run rate, and are working with the top AI labs. Today, we're talking about where the data for foundation model training will come from next, evaluations for state-of-the-art models, and the future of labor markets. Brendan, welcome to No Priors. Brendan, thanks so much for doing this.

    2. BF

      Yeah, thanks for having me. Excited to be here.

    3. SG

      So you guys have had a, like a wild last six months or so.

    4. BF

      Mm-hmm.

    5. SG

      Um, there's huge traction in the company. Can you just talk a little bit about, uh, what Mercor does?

    6. BF

      Yeah, so at a high level, we train models that predict how well someone will perform on a job better than a human can. So similar to how a human would review a resume, conduct an interview, and decide who to hire, we automate all those processes with LLMs. And it's so effective, it's used by all of the top AI labs to hire thousands of people that train the next generation of models.

    7. SG

      What are the skills and like job descriptions that the labs are looking for right now?

    8. BF

      It, it's really everything that's economically valuable because reinforcement learning is becoming so effective that once you create evals, the models can learn them and how to, uh, you know, improve capabilities. And so for everything that we want LLMs to be good at, we need evals for those things. Um, and it ranges from consulting to software engineers, all the way to hobbyists and video games and, and everything that you can imagine under the sun. And it's really whatever capabilities you're seeing the foundation model cap- companies invest in, or even application layer companies invest in, uh, the evals are upstream of all that.

    9. EG

      Uh, and are you also helping companies outside of the core foundation models with this similar type of hiring? Or is it mainly just focused on AI models right now?

    10. BF

      Yeah, so actually when we started the business, it was totally unrelated to human data. It was just that we saw that there were phenomenally talented people all around the world that weren't getting opportunities and we could apply LLMs to make that process of finding them jobs more efficient. And then we realized after, uh, you know, meeting a couple of customers in the market that there was just this huge vacuum because of the transition in the human data market. And that the human data market used to be this crowdsourcing problem of how do you get a bunch of low- and medium-skilled people that are writing barely grammatically correct sentences for the early versions of ChatGPT. And it was transitioning towards this vetting problem of how do you find some of the most capable people in the world that can work directly with researchers to push the frontier of model capabilities. But we've still kept that core DNA of hiring people for roles, human data and otherwise. Um, and a lot of our customers hire for both.

    11. EG

      Do you think all of hiring eventually moves to these AI systems assessing people, or at least all sort of knowledge work?

    12. BF

      I think certainly 'cause we're already seeing on

  3. 3:00–9:07

    Identifying outlier talent with AI

    1. BF

      most of our evals that models are better than human hiring managers at assessing talent, a- and it's still like very early innings.

    2. EG

      Mm-hmm.

    3. BF

      And so I think we'll get to a point where it'll almost be irrational to not listen to the model, right? Where people trust the model's recommendation a- and like maybe for legal reasons, we'll still have the human pressing the button and making the final sign off. Um, but where, where we just trust the model's recommendations on who should be doing a given task or job more than we trust the humans.

    4. EG

      I guess in any field people, um, say that there's 10X people. There's 10X coders who are way more productive than the average coder. There's 10X physicians or investors or you name it. Do you see that in terms of the output of your models? In other words, are you able to identify people who are outliers?

    5. BF

      Totally. This is one of the most fascinating things is that the power law nature of knowledge work frames the importance of performance prediction. And that imagine if you can understand like the kinds of engineers on an engineering team that are going to perform in the 90th percentile, right? Or even if you could say, "I know that this person that costs half as much is going to perform in the top quartile," right? It frames like how you think about the value that we create for customers and how you think about like the long-term economics of the business. And it all ties back to like how do you measure the customer outcomes and, and really go on them?

    6. EG

      And is it a power law? What sort of distribution is it? 'Cause people always talk about human performance-

    7. BF

      Yeah.

    8. EG

      ... as a bell curve. Do you think that's actually true? Or do you think that's the wrong way to interpret human performance relative to knowledge work?

    9. BF

      It's very industry by industry, right? Like, uh, for you in, in investing, right? It's like the most power law thing imaginable and where it's just like the top handful of companies each decade are the, are the ones that matter such a disproportionate amount. And it's the investors that win in those versus if you're hiring like factory workers, right, it's a much more commoditized skillset. There is a lot less of a difference. And I, I think like software engineering is somewhere in between. Um, uh, it's definitely very power law, but I don't think it's as power law as say like the handful of best investors in the world.
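The contrast Brendan draws between bell-curve and power-law performance can be put in rough numbers: under a bell curve the best performers are only modestly above the median, while under a heavy tail a small fraction produces most of the total output. A minimal sketch (the distributions and parameters here are illustrative, not from the episode):

```python
import random

random.seed(0)
N = 10_000

# Bell-curve "performance": roughly normal, clipped at zero.
normal = [max(0.0, random.gauss(100, 15)) for _ in range(N)]

# Heavy-tailed "performance": Pareto-distributed with shape alpha = 1.5.
pareto = [random.paretovariate(1.5) for _ in range(N)]

def top_share(xs, frac=0.01):
    """Fraction of total output produced by the top `frac` of people."""
    xs = sorted(xs, reverse=True)
    k = max(1, int(len(xs) * frac))
    return sum(xs[:k]) / sum(xs)

print(f"top 1% share, bell curve:  {top_share(normal):.1%}")
print(f"top 1% share, heavy tail:  {top_share(pareto):.1%}")
```

Under the bell curve the top 1% produce only slightly more than 1% of the output; under the heavy tail they produce a large multiple of that, which is why identifying outliers matters so much more in power-law fields like investing than in commoditized ones.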

    10. SG

      Do you have a prediction for, um, either because of the distribution of like, um, skill level or the, uh, measurability like where you should expect that models are better at evaluation or identification of talent beyond ev- you know, human data first?

    11. BF

      Yeah, so it's really everything that you can measure with text, the models are really good at. Like if you can ask questions in an interview and read through the transcript, the models are superhuman at that, uh, across... many more domains than one would think. Like, it's not, it- it's more domain agnostic than I would have initially anticipated. I think the things where models are going to be slower is on the multimodal signals and understanding, like, how passionate is this person about what they're working on, right? Like, how persuasive are they or good at sales? And those capabilities will come but they'll just take a little bit more time. Um, so that's my mental model for thinking about it right now.

    12. SG

      Right, so like if I'm interviewing a candidate for one of our companies-

    13. BF

      Mm-hmm.

    14. SG

      ... and they are saying the right words about, you know, motivation level but I don't believe it, like, that might be a, uh, next level signal if I, if I have any predictive power here, right?

    15. BF

      Totally, totally, exactly. The other thing is that the models are way better at high volume processes.

    16. SG

      Mm-hmm.

    17. BF

      An example is, like, say you're assessing 20 people for the same job or... and you hire those people, you see how they perform. It's very easy to attribute features of each person's background to how they perform, right? It's sort of the stack ranking where you can understand, like, this person had this nuance in their interview, or this person had this nuance in their resume, and that was the thing that explained how well they performed on the job. Versus if those 20 people are performing 20 different jobs, then it's just this, like, mess of figuring out, like, what is causing what things to happen. It's way more difficult to understand, like, what features are actually driving signal. And so I think it will be those higher volume processes that also get automated first.
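The attribution advantage of high-volume, same-role hiring can be sketched directly: with 20 comparable hires, even a crude per-feature correlation picks out which candidate signals predicted on-the-job performance. Everything here (the feature names, the hidden weighting) is invented for illustration:

```python
import random

random.seed(1)

def make_candidate():
    # Hypothetical resume/interview features for one candidate.
    return {
        "years_experience": random.uniform(0, 10),
        "interview_score": random.uniform(1, 5),
        "side_projects": random.randint(0, 5),
    }

def observed_performance(c):
    # Hidden "ground truth": interview score matters most (assumed).
    return (0.5 * c["years_experience"] + 3.0 * c["interview_score"]
            + 0.2 * c["side_projects"] + random.gauss(0, 0.5))

# 20 people assessed for the SAME job, with comparable outcomes.
cohort = [make_candidate() for _ in range(20)]
outcomes = [observed_performance(c) for c in cohort]

def corr(xs, ys):
    """Pearson correlation between a feature column and outcomes."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

rs = {feat: corr([c[feat] for c in cohort], outcomes) for feat in cohort[0]}
for feat, r in rs.items():
    print(f"{feat}: r = {r:+.2f}")
```

With 20 different jobs instead of one, each outcome would live on its own scale and this simple stack-ranking of features would no longer be meaningful, which is the point about high-volume processes getting automated first.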

    18. SG

      Is there anything that, um, surprises you about, like, basically the discovered features, uh, in terms of, uh, I don't know, any domain that you are working on today that identifies amazing talent?

    19. BF

      That- that's a very good question.

    20. SG

      Or maybe in engineering because it's relevant for many of our listeners.

    21. BF

      Yeah, I think that one of the really interesting things for engineering is that there's so much signal about a lot of the best engineers online that I don't think people properly tap into, right? It's everything ranging from their GitHubs to the personal projects on their website, the blog post that they wrote during college. It's just because there's, like... it's bottlenecked by manual processes. The hiring managers don't have time to read through all this stuff, right? Uh, they don't have time to... Or with designers, they don't have time to consider every proposal or- or images from someone's Dribbble profile before doing their top-of-funnel interviews. And so I think one of the things where people are- are under-indexing on signal the most is the things that can be found online. Um, but then a lot of the things that can be indexed on during an interview like, how passionate is this person, does this person have the skills that you- it would require for the job, I think humans are relatively good at. At- at least they're, uh, they're a little bit, uh, more adopted right now.

    22. EG

      Are there hidden signals for other types of domains where there's less online work? An example of that would be physicians-

    23. BF

      Mm-hmm.

    24. EG

      ... lawyers. You know, there's a lot of other professions where-

    25. BF

      Totally. Yeah, there- there's all sorts of these hidden signals like, uh, one interesting one we've seen in the past is that people who are based internationally but study abroad in a Western country-

    26. EG

      Mm.

    27. BF

      ... tend to, like, work much more collaboratively or communicate better with, uh, people. And it's like, they're the kinds of signals that make sense when you look backwards and evaluate them, but are hard for, like, a human without having full context of, like, everything happening i- in the market to really understand and appreciate. And there's often... Like, one of the most important things, as you can imagine is just how intrinsically motivated and passionate are people about a domain. And so looking for signals of not just, like, on their resume and in their interviews, uh,

  4. 9:07–11:18

    How AI is reshaping the workforce: job displacement & evolution

    1. BF

      as well as online of, like, what indicates this thing, right?

    2. EG

      Mm-hmm.

    3. BF

      Like, how do we... And- and it pertains not just to who you hire, but also what those people should be working on, right? Imagine the nuance between hiring a biology PhD to work on, like, biology problems versus hiring the person who wrote their thesis on drug discovery-

    4. EG

      Mm.

    5. BF

      ... to write, like, problems, uh, and, like, come up with innovative solutions contextual to their thesis. And there's just so much inefficiency with the way that we do matching, the way we use all of those signals right now.

    6. EG

      So you're eval-ing people. Are you also doing evaluations of the models relative to the people?

    7. BF

      Yeah. Yeah, of course.

    8. EG

      And then, um, when or what is your view in terms of the proportion of people who are eventually get displaced by these models? In other words, if you can tell the relative performance-

    9. BF

      Mm-hmm.

    10. EG

      ... and you can look at relative output, how do you start thinking about either displacement or augmentation or other aspects like that?

    11. BF

      I think displacement in a lot of roles is going to happen very quickly and it's going to be very painful, uh, and a large political problem. Like, I think we're gonna have a big populist movement around this and all the displacement that's gonna happen. But one of the most important problems in the economy is figuring out how to respond to that, right? Like, how do we figure out what everyone who's working in customer support or recruiting should be doing in a few years? How do we reallocate wealth, uh, once we have- once we approach super intelligence, um, for... e- especially if the value and gains of that are more of a power law distribution? Um, and so I spend a lot of time thinking about, like, how that's gonna play out, um, and I think it's really at the heart of

    12. EG

      What do you think happens eventually? X percent of people get displaced from, like, other work.

    13. BF

      Mm-hmm.

    14. EG

      What do you think they do?

    15. BF

      I think there's gonna be a lot more in the physical world. I think that there's also gonna be a lot that... of, like, niche skills-

    16. EG

      What does the physical world mean?

    17. BF

      Well, it could be everything ranging from people that are creating robotics data to people that are waiters at restaurants or, um, or are just, like, therapists because people want, like, human interaction.

  5. 11:18–12:18

    What skills should we invest in now?

    1. BF

      Uh, w- like, whatever that looks like. I think all of... I- I think that automation in the physical world is going to happen a lot slower than what's happening in the digital world just because of so many of the, like, self-reinforcing-

    2. EG

      Mm-hmm.

    3. BF

      ... uh, gains and, uh, a lot of, yeah, self-improvement that can, that can happen in- in the virtual world but not physical one.

    4. EG

      Mm-hmm.

    5. SG

      Do you have a point of view on like what types of, of skills, knowledge, uh, reasoning are worth investing in now as a human expecting to stay economically valuable?

    6. BF

      So Sam Altman said this thing, uh, when someone asked him this, about how people should optimize for just being very versatile and like able to learn quickly and change what they do, and I think that resonates a lot because there's so many things that one would think the models aren't good at that they get very good at very fast that I almost think you just need to be able to like navigate that quickly.

    7. EG

      What are the characteristics of those things that you think

  6. 12:18–13:36

    Verifiability

    1. EG

      models will learn the fastest? Like if you were to say, "Here's a heuristic-"

    2. BF

      Yeah.

    3. EG

      ... what, what do you think are the components of that?

    4. BF

      If it's verifiable. For things like math, uh, or soon code that are verifiable, they will get solved very quickly.

    5. EG

      So you want a feedback loop or utility function that you're optimizing against as a model?

    6. BF

      Exactly. For things that aren't verifiable, like maybe it's, uh, your taste in a founder, right? That's much harder to automate, uh, a- and it's also a very sparse signal because, uh, yeah, there's just not that much data on it.
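The verifiable/non-verifiable split above can be made concrete with a toy grader: for math, a checker can score an answer automatically and endlessly, which is exactly the dense reward signal reinforcement learning needs, while "taste in a founder" has no such function. The task format below is made up for illustration:

```python
def verify_math(problem: str, claimed_answer: str) -> float:
    """Return 1.0 if the claimed answer matches the expression's value."""
    expr, _ = problem.split("=")   # e.g. "17 * 23 = ?"
    expected = eval(expr)          # the problem source is trusted (assumed)
    try:
        return 1.0 if float(claimed_answer) == expected else 0.0
    except ValueError:
        return 0.0                 # malformed answers earn no reward

# A model's outputs can be graded at near-zero marginal cost:
print(verify_math("17 * 23 = ?", "391"))  # → 1.0
print(verify_math("17 * 23 = ?", "390"))  # → 0.0
```

The asymmetry in the conversation is exactly this: a grader like the one above runs millions of times for free, while a judgment call like founder taste yields only sparse, expensive expert labels.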

    7. SG

      This is a pretty fundamental research question right now, but like what do you think are the most interesting ideas about verifiability beyond code and math?

    8. BF

      Well, I think that there's ways that you can have certain auto-graders or like criteria that humans can apply, um, and I'm very interested i- or that mo- models can apply those criteria, and I'm very interested in how that will play out over time. A- and there's obviously a lot of other domains where models will take unstructured data, they'll structure it, they'll figure out how to verify it, and it's very like industry by industry. I think it's going to be hard for one lab to do everything there, um, and there's going to be, you know, more specialization as we progress further and further and, uh, marginal gains in each industry become more challenging.

    9. SG

      How much do you believe in, uh, generalization from the code and math type reasoning in intelligence? Like

  7. 13:36–16:07

    Evaluating models

    1. SG

      if I'm this much better at proof math, does it make me funny eventually? M- me being the intelligence.

    2. BF

      I, yeah, I, I generally believe in it, um, but to a certain extent. Like you still need a reasonable amount of, of data for the new domain and to kickstart it, um, but there's going to be a lot of transfer learning.

    3. EG

      I think it's very funny when Sarah does proofs.

    4. BF

      (laughs)

    5. SG

      (laughs)

    6. EG

      So I think it all fits.

    7. BF

      She good at-

    8. SG

      We're all bad at kind of...

    9. EG

      ... at proofs? Yeah. (laughs)

    10. BF

      (laughs)

    11. EG

      (laughs)

    12. SG

      I actually think being bad at proofs is funnier. (laughs)

    13. BF

      (laughs)

    14. EG

      (laughs)

    15. SG

      Okay, let's, uh, let's talk about evals because you're, you know, working on the, uh, bleeding edge of model capability. Uh, there has been, uh, this whole sense of what people call evaluation crisis around like the models are so good, uh, and they're somewhat indistinguishable at the fringe of-

    16. BF

      Yeah.

    17. SG

      ... of capability today that we, we don't know how to test them, you know, ignoring all the issues with, uh, uh, people, um, gaming, gaming the benchmarks, right? Um, what do you think, how, like what bright ideas are there about, uh, evaluating models, especially as they become superhuman?

    18. BF

      Well, I think one of the most important things is that a lot of the evals historically have been for like zero shot of a model, uh, or like a test question, right, that might be academic, when the thing that we actually need to eval is like what's economically valuable work. Right? When a software engineer goes to their job, it's so much more than writing a PR. It's like coordinating with all of the relevant parties, uh, to like understand what does like the product manager want and how does that fit into, you know, the priorities of each team a- and how does that all translate to like the end output of work? And so I think we're going to see an immense amount of eval creation for like agents, uh, a- and that is the largest barrier to automating most knowledge work in the economy.

    19. SG

      Where should people start? Like that feels not, um, terribly generalizable, so-

    20. BF

      Yeah.

    21. SG

      ... Sierra has something called τ-bench-

    22. BF

      Mm-hmm.

    23. SG

      ... that I think people are trying and there are other efforts here, but it is perhaps like more specific to a certain function.

    24. BF

      Yeah. I, I think that people will need to have these by industry and they should probably start with tasks that are more homogenous, right? Like it's going to be... For customer support tickets I think that's a great example because there's like one interface that the customer support agent interacts with. Maybe they call a couple of tools, like, uh, accessing the database or, or reading through the documentation, but it's a relatively like homogenous uniform task. I think the things that are going to be more challenging but also, uh, in

  8. 16:07–17:05

    What should kids learn today?

    1. BF

      many cases more valuable are creating evals for these like very, very diverse tasks, right? Uh, all the things that go into making a good software engineer. That's going to be really hard to do. Like I think it's going to be a years-long buildout for e- even some of the verifiable domains because there's so much that goes into a good software engineer of like how do they have taste for like, you know, what is the right way to approach a problem or what are the products that people really enjoy using? Um, and I'm really excited for that.

    2. EG

      So if you were to counsel people with young kids. Say your child is, I don't know, five to ten.

    3. BF

      Yeah.

    4. EG

      Should their kids learn computer science?

    5. BF

      I would probably not push them towards teaching their kids computer science, but I'm not totally against it. I think that the key thing is-

    6. EG

      What would you teach them?

    7. BF

      I would encourage them to just like find something that's intellectually stimulating they're really passionate about where they can learn general reasoning capabilities, um, and those like reasoning capabilities will probably be very like valuable and cross applicable. I like always

  9. 17:05–18:45

    Evaluating taste in talent assessments

    1. BF

      loved co- building companies growing up and like hustling and doing small things like that, and I think that is something that could be helpful. But I am skeptical that like the really valuable thing is just people who can code in five years. I think it's much more likely like the people that have these contrarian ideas around what's missing in markets, um, a- and have the taste of what like features and nuances need to go into solving that problem.

    2. SG

      You said taste a few times. Are there signals of taste that you feel like you can discover in any domain?

    3. BF

      Yeah, absolutely. I, I mean, I think that oftentimes you, you just want to see the softer signals of how people think about certain problems, um, and certain people have intuitions, um, whether it be like the way they approach a problem or if they're looking at different like products, h- how they notice nuances. Yeah, it's very industry... It's very contextual to the industry, but it's important to measure.

    4. SG

      How can you score it? Like, what's the, what's the positive feedback loop here?

    5. BF

      We've done a variety of things but oftentimes we will give people, like, uh, a problem that as closely as possible mirrors what they would solve on the job, uh, and then we would see how they compare to other people. Um, and so that helps with scoring it.

    6. EG

      You ask them for their thought processes as part of that?

    7. BF

      Totally.

    8. EG

      I know, for example, it's almost like looking at, like, code reviews or other, other sort of intermediate work along the way-

    9. BF

      Yeah.

    10. EG

      ... relative to something.

    11. BF

      We definitely do. One thing I've realized about talent assessment is that a lot of people focus too much on the proxy for what they care about rather than the thing they actually care about. And so ideally you want to measure the thing that you actually care about so that's that person building an MVP of the product.

  10. 18:45–26:07

    Future of data collection

    1. BF

      Ideally you have an interview. That's like a scoped-down version of doing that. The place where you need to use proxies is when it's, like, a longer horizon task where you just want to structure the proxy to get as much signal as possible. And so that's sort of how I think about talent assessment, yeah.

    2. SG

      Can I ask a scale of impact question?

    3. BF

      Mm-hmm.

    4. SG

      So if I think about the very largest employers today, like, let's call it, like, low single-digit millions of employees.

    5. BF

      Yeah.

    6. SG

      Right? Or I don't know. You can think about contractors and Amazon workers and such but, um, how many people do you think, like, will end up doing, um, data collection?

    7. BF

      I think it's a huge volume. I think the reason is that it all comes down to, like, creating evals for everything in the economy. I think part of that will be current employees of businesses that are creating evals for that business so that those agents can learn what good looks like. Part of that will be, you know, hiring out, uh, contractors through a marketplace to help build out those evals but it would not surprise me if that becomes the most common knowledge work job in the world.

    8. EG

      How long does that last? So effectively, people are being brought on to displace themselves.

    9. BF

      Th- this is true, um-

    10. EG

      Is that a six-month cycle? Is it a two-year cycle? Like, what is the length of time, uh, at which people have relevancy relative to some of these tasks?

    11. BF

      There's always, like, a frontier. So I think the-

    12. EG

      Unless they become superhuman, right?

    13. BF

      Yeah, unless they become superhuman.

    14. EG

      Where it's like, yeah, yeah. It's almost like time to superhuman.

    15. BF

      But I had an interesting conversation which is that, like, you don't even know that you have super intelligence without having evals for everything.

    16. EG

      Mm.

    17. BF

      'Cause it's, like, you sort of need to understand what is the human baseline and, like, what is good. And it's, like, grounded in this, like, understanding of human behavior.

    18. EG

      Yeah, a friend of mine basically, um, believes that, you know, Nyquist's theorem-

    19. BF

      Mm-hmm.

    20. EG

      ... which is that, uh, basically if you're sampling a signal, like, you need to be able to sample it at twice the frequency in order to be able to actually extrapolate what it is. Otherwise, you're not sampling richly enough to know.

    21. BF

      Mm-hmm.

    22. EG

      And so he views that, that there's some version of that for intelligence. Like, you can tell if somebody's smarter than you but you don't know how much smarter because you aren't capable of sampling rapidly enough to understand it. And so I always wonder about that in the context of super intelligence or, um, superhuman capabilities in terms of how smart can you actually be since it's hard to bootstrap into the eval?
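The sampling idea Elad is citing is the Nyquist-Shannon theorem: sampled below twice its frequency, a signal becomes indistinguishable from a slower one (aliasing). A minimal numeric check (the frequencies are chosen purely for illustration):

```python
import math

# A 5 Hz cosine sampled at only 6 Hz (below the required 2 x 5 = 10 Hz)
# produces exactly the same samples as a 1 Hz cosine: the alias at
# |fs - f| = |6 - 5| = 1 Hz. From the samples alone, the two signals
# cannot be told apart.
f_true, f_alias, fs = 5.0, 1.0, 6.0  # Hz

fast = [math.cos(2 * math.pi * f_true * n / fs) for n in range(12)]
slow = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(12)]

worst = max(abs(a - b) for a, b in zip(fast, slow))
print(f"max sample difference: {worst:.1e}")  # effectively zero
```

By analogy, an evaluator can only resolve intelligence up to the resolution of its own sampling, which is the bootstrapping worry about evaluating superhuman systems.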

    23. BF

      Well, well so, I think, like, when you take it to the limit and you have super intelligence, th- what you're saying makes a lot of sense, but another way I think about it is that if we classify knowledge work in two categories, one is, like, solving an end task where it's sort of a variable cost of, like, you need to do that repeatedly, and the other is creating an eval to teach a model how to solve that task, which is like a fixed cost that you do one time, it does seem structurally more efficient for work to trend away from the variable cost of, like, doing it repeatedly towards this fixed cost of how do we build out the evals and the processes for models to do this themselves. That said, it- it all comes down to, like, how fast are we approaching super intelligence, right? Like, if we, if the models are just, like, getting that good that fast, then sure, I don't think we would need humans creating evals very much, but I also then don't think we would need humans in many other parts of the economy. Um, and so you sort of need to be thoughtful about the ratio of that.
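Brendan's fixed-versus-variable-cost framing reduces to simple breakeven arithmetic. All dollar figures below are invented for illustration:

```python
# Paying humans to do a task recurs per task (variable cost); building
# an eval so a model can learn the task is paid once (fixed cost).
human_cost_per_task = 40.0    # assumed cost of a human doing the task once
model_cost_per_task = 0.50    # assumed marginal cost once a model can do it
eval_build_cost = 200_000.0   # assumed one-time cost of building the eval

def breakeven_tasks() -> float:
    """Task volume beyond which building the eval is the cheaper path."""
    return eval_build_cost / (human_cost_per_task - model_cost_per_task)

print(f"breakeven after ~{breakeven_tasks():,.0f} tasks")
```

Past that volume the one-time eval build dominates, which is why he expects work to migrate from the recurring column toward the one-time column wherever models can absorb the task.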

    24. EG

      Does that create an asymptote in terms of how good these things get or do they start creating their own evals over time?

    25. BF

      I think that they'll play a role in creating their own evals, where they, like-

    26. EG

      Some kind of bootstraps.

    27. BF

      Yeah, where, where they might come up with, uh, certain criteria for what a good response looks like and humans validate that criteria. Um, however, I- I think you often need to ground this in, like, the experts, uh, in that particular domain.

    28. EG

      Sure.

    29. BF

      Um-

    30. EG

      But I'm just thinking, like, MedPaLM or something, right? Where-

  11. 26:07–28:53

    Humans’ role in the AI economy

    1. BF

      one of the challenges is that a lot of this happens at economic contractions when people get more efficient, get more focused on bottom line. And so I think that a l- yeah, a lot of it hasn't happened yet, but it's going to happen imminently. And then in terms of things that, like, maybe no one even in, like, San Francisco is thinking about, uh, which is another interesting part of that problem, is that these agentic evals for non-verifiable domains is underindexed on significantly. Another thing is that people in San Francisco have a tendency to, like, not think critically about the role humans will play in the economy 'cause they're so focused on, like, automating humans. Um, and so I think that it's important to, like, think more about that problem. Um, like, one, one thing that I- I've thought about it is that ideally models should help us to figure that out over time, right? Like, what are the things that people are passionate about? What motivates them? And maybe it doesn't need to be an economically valuable thing. Maybe it's just, like, a certain kind of project that they like working on and I think that people aren't, um, indexed enough on how humans will fit into the economy in 10 years.

    2. EG

      You know, one thing that I feel that I've really, I really misunderstood or, um, didn't quite understand the scope of was the degree to which we effectively had different forms of UBI, or universal basic income, in different sectors of the economy. Government is a clear example-

    3. BF

      Totally.

    4. EG

      ... where there's enormous waste, fraud, grift, et cetera happening. (laughs)

    5. BF

      Yeah.

    6. EG

      Um, parts of academia, if you just look at the growth of the, uh, bureaucracy relative to the actual student body or-

    7. BF

      Mm-hmm.

    8. EG

      ... faculty, big tech-

    9. BF

      Mm-hmm.

    10. EG

      If you look at some of the-

    11. BF

      (laughs)

    12. EG

      ... size of, you know, you're, you're basically-

    13. BF

      Surely, yeah.

    14. EG

      ... it's just that a lot of these things are effectively UBI. And so to some extent one could argue that parts of our economy are already experiencing what you're saying in terms of there's, um, high-paying jobs that may or may not be super productive-

    15. BF

      Yeah.

    16. EG

      ... on a relative basis and so the question is, is that something that we actually embrace as a society given some of these changes and displacement? And if so, where does that economic surplus come from?

    17. BF

      Yeah, it, it's interesting. I think that as we have better analytics around the value of employees, it seems intuitive that these companies will become, uh, you know, start doing more layoffs, more cuts, et cetera.

    18. EG

      Do you think those evals, evals become illegal at some point? Because it feels like that happened a little bit with certain aspects of merit or merit-based testing for different disciplines or fields. That happened with the government in the '70s where they removed it as a criteria. I'm just wondering if that becomes something that more generally people may not want to adopt because it exposes things or do you think it's something that is inevitable economically?

    19. BF

There's definitely gonna be pushba- pushback but I think it's inevitable economically 'cause it's hard to regulate and just, like, so strongly, uh, valuable to companies that they'll move towards it.

    20. EG

Should companies adopt that now?

    21. SG

      I think it depends on what segments of the economy because some of these are not economically driven already. They're just not efficient as sectors, but if you look at healthcare or education, everybody's seen this chart

  12. 28:5333:03

    AI as a contributor vs. a manager

    1. SG

      that shows a bunch of industries that have some measure of output per dollar spent and you have increasing spend on healthcare and education and no improved output.

    2. EG

      Yeah.

    3. SG

      And, and, like, that's happened for a long time when there's increase in productivity in many other sectors-

    4. EG

      Yeah.

    5. SG

      ... and the answer is there's no e- economic pressure, actually.

    6. EG

      Sure, it's regulated versus unregulated-

    7. SG

      Yes.

    8. EG

      ... sectors, effectively, and the regulation is what causes the divorce from economics.

    9. BF

      Yeah. Al- also, one thing that I think is very interesting is that a lot of people are in the mindset of AI being really good as an ame- independent contributor when actually it may soon become much better at being a manager, right? In, like, taking a large problem, breaking it down, figuring out how to performance manage people for how they should be doing. And this ties into your point around, like, what should we do with all of those unproductive employees? Because if we have, like, a ruthlessly rational agent that is making the decision there, it is probably gonna be very different than a lot of the decisions that have been made historically.

    10. SG

      One of our companies asked, um, recently what I would expect an assistant to do that it doesn't do today.

    11. BF

      Yeah.

    12. SG

      Right? And I think the biggest thing is, like, you know, if I give it enough context and some objectives that I'm trying to achieve, I'm not, like, a particularly organized person. I have a lot of output-

    13. EG

      (laughs)

    14. SG

      ... I think. All things relative. But, uh, you know, is it, like, perfectly prioritized and tasked out and sequenced so I'm not bottlenecked on a particular thing? No, right? And I would absolutely expect that the assistant can do that for me.

    15. BF

      Totally. Well, and it goes to the point earlier, right, which is that we have-

    16. SG

      Just tell me. Tell me what to do for the next three minutes. (laughs)

    17. BF

      (laughs) We have these models that are, like...... incredibly good at math, right?

    18. SG

      (laughs)

    19. BF

      Like you give them a test and they can ace the test-

    20. SG

      Mm-hmm.

    21. BF

      ... but they still can't do, like, basic personal assistant work, right? And I think it goes to show that there's still a lot of, like, research and product to be built out and, like, how do we actually bridge the gap with what's economically valuable to complete that end-to-end job that, like, you're willing to pay a human salary for.

    22. EG

      Do you think the models are good enough for that, I mea- there's just incremental engineering work to make it better?

    23. BF

      They do.

    24. EG

      Or do you think it's... Okay, so w- we actually have model capabilities that you think would allow us to build certain types of true agentic systems versus we need, like-

    25. SG

      That are proactive too.

    26. EG

      ... yeah.

    27. BF

      Or a- actually, maybe, let me put it this way. I think with a small amount of evals for agents in various categories, the base model has, like, all the reasoning capabilities. And the reason you still need those, like, evals is the models need to understand, like, when they should be using tools in certain ways, they need to understand, like, how to synthesize information from those tools, but it's not a reasoning problem. It's, like, much more this problem of, like, learning each company's knowledge base and, like, what good looks like in that role. And so there's going to be some, like, p- post-training and I'm very bullish on RFT, uh, and everything that's going to mean. Uh, it'll be-

    28. EG

      Can you say more about RFT and explain it for our audience?

    29. BF

      Yeah, so basically everyone used to talk about fine-tuning the co- in the context of SFT, supervised fine-tuning, where you would have inputs and outputs for a model and the model would, um, learn from those input-output pairs. But the main issue, i- and supervised fine-tuning customization never really took off because it wasn't very data efficient. Like, companies would create a few hundred and, and eventually try to scale it up to tens of thousands or hundreds of thousands of SFT pairs, but oftentimes wouldn't be able to get a lot of the capabilities that they were looking for. Whereas in reinforcement fine-tuning, you instead define the outcome that you care about. So in Sierra's case, uh, like I was talking with them about how they define what, like, a good customer support response would look like. In our case, we define, like, what are the key things that you should identify as a characteristic of this candidate? Um, whether it be that they're passionate during their interview, they demonstrate XYZ domain knowledge, or they worked on this side project that demonstrated that skill, a- and then you reward the model for identifying that. So you set the solution and then the model can learn in that environment how to get really good at it, and the reason I'm so optimistic about it taking off is that it's, like, profoundly data efficient, right?

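Brendan's contrast between SFT and RFT can be sketched roughly as follows. This is an editor's illustrative toy, not Mercor's or any lab's actual setup: the data shapes, the trait list, and the `grade_assessment` grader are all hypothetical, chosen to mirror his candidate-evaluation example.

```python
# SFT: the model imitates fixed input -> output pairs.
sft_pairs = [
    {
        "input": "Assess this candidate's interview transcript...",
        "output": "Strong hire: deep domain knowledge, high energy.",
    },
]

# RFT: instead of target outputs, you define the outcome you care
# about as a grader that scores whatever the model generates.
def grade_assessment(model_output: str) -> float:
    """Reward the model for identifying the characteristics that
    matter, e.g. passion, domain knowledge, relevant side projects."""
    traits = ["passion", "domain knowledge", "side project"]
    found = sum(t in model_output.lower() for t in traits)
    return found / len(traits)  # scalar reward in [0, 1]

# During RFT, sampled completions are scored by the grader and the
# policy is updated to increase expected reward, which is why a small
# number of well-designed graders can go a long way (the data
# efficiency Brendan describes).
sample = "Candidate shows real passion and strong domain knowledge."
reward = grade_assessment(sample)
```

The key difference the sketch shows: SFT needs many labeled output examples, while RFT only needs a definition of "good," which the model then learns to satisfy through trial and reward.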
    30. SG

      Mm-hmm.

  13. 33:0334:50

    Mercor’s goals

    1. SG

    2. BF

      Yeah, exactly. And so it'll be, it'll be very cool. I think we're going to have these agents that, uh, fill all roles that employees currently fill, uh, working alongside employees, uh, human employees will help create the evals. I also think that, like, contractors in our marketplace will play a large role in that. It will just be this, like, huge build out of evals to create custom agents and, uh, across every enterprise.

    3. SG

      What is most important for Mercor to get done in the next, like, year or so?

    4. BF

      So there's two things that we focus on as a business, um, and I think those will be most important for this year as well as for the next five years. The first is how do we get all of the smartest people in the world on our platform? Uh, and that ties into the supply side of our marketplace, the marketplace network effects around, uh, similar to, like, an Uber or Airbnb, uh, because if we have the best candidates then we're able to give them job opportunities and understand what they're looking for. The second thing is predicting job performance.

    5. SG

      Are you trying to offer anything that isn't comp?

    6. BF

      Yeah, we are. So one of the things that we realized is that the average labor marketplace has a 50:1 ratio of supply side relative to demand side, which means the average person that applies talks to their friend who also applied and neither of them got jobs. A- and it's almost just this, like, structural part of building labor marketplaces. The way to actually scale up the labor marketplace to have hundreds of millions of the smartest people in the world on the platform is to build all of these free tools, such as AI mock interviews, AI career advice, uh, you know, shareable profiles for people. All of the things that just create the most magical experience possible for consumers and give that away for free because it's powered by this monetization engine on the other side of the business. And so that's a very significant focus for us.

    7. SG

      I interrupted you.

    8. BF

      Yeah.

    9. SG

      You were going to talk about what else was important.

    10. BF

      Part two. (laughs)

    11. SG

      Yeah.

    12. BF

      It's performance predictions. So we get all of the data back from our

  14. 34:5036:00

    Evolution of labor markets

    1. BF

      customers of who's doing well, for what reasons, uh, and, you know, how can we learn from all of those insights to make better predictions around who we should be hiring in the future? And that's the data flywheel that you would find in, you know, many of the most prominent companies in the world, and I think that the marketplace network effect is the more obvious one when you look at the business, but I actually believe that the data flywheel will become more important over time based on a lot of the initial results that we're seeing.

    2. EG

      How do you view the labor markets evolving over the very long term?

    3. BF

      Well, I think that the largest inefficiency in the labor market is fragmentation, and that a candidate, wherever they are in the world, will apply to a dozen jobs and a company in San Francisco will consider a fraction of a percent of people in the world, because it's all constrained by these manual processes for matching, right? Where they need to manually review every resume, conduct every interview, and decide who to hire. When you're able to solve this matching problem at the cost of software, it makes way for a global unified labor market that every candidate applies to and every company hires from. And I believe that that's not only the largest economic opportunity in the world, but also the most impactful

  15. 36:0041:52

    Hiring advice

    1. BF

one, insofar as you can find everyone the job that they're going to be passionate about and successful in.

    2. EG

      Would that include AI agents? In other words-

    3. BF

      I think-

    4. EG

      ... the, the marketplace would be a hybrid of people and agents all competing for labor globally?

    5. BF

      I think so 'cause customers ultimately come with, like, a problem to be solved, right?

    6. EG

      Mm-hmm.

    7. BF

      A- and ideally it's some coordination of how those two fit together.

    8. SG

      Given you spend all your time thinking about how to attract high-skilled candidates and, um, determine their effectiveness, like, what advice would you have for, uh, people who are hiring in startups and scaling companies?

    9. BF

Early on... It's hard to overstate the importance of talent density and just like, there's always a trade-off between hiring speed and hiring quality and you should just, f- for those early employees, like always index on quality. Like you need to be patient, and you need to make sure that people are extremely high caliber. When you're scaling up an org, uh, you obviously don't want to drop those standards but people need to be a lot more data-driven around what are the characteristics of people that actually drive the outcomes they care about and it feels like where a lot of the problems happen is when that slips, when it's sort of like this vibes-based assessment that doesn't scale very well, where each hiring manager is doing it in a fragmented way and i- it's hard to enforce those standards across the board and so just being very disciplined around like what are your hiring goals, what are the characteristics of people that you know are actually going to achieve the business outcomes you care about, uh, and how do you measure those things is really important.

    10. EG

      I find that almost every great company either hires well, like what you're talking about, or fires well, which is sort of your phase two.

    11. BF

      Yeah.

    12. EG

      But I think often they do that, one of those things really well early.

    13. BF

      Mm-hmm.

    14. EG

      For some reason most people don't seem to get both right early on. I don't know why it is. I think it's almost like a founder virus or something like that and then I feel like over time hopefully they pivot into both.

    15. BF

      Yeah.

    16. EG

      Google was a good example of a, um, organization that would always hire well but couldn't fire well.

    17. BF

      Mm-hmm.

    18. EG

      It took them a really long time to clean people out, years, like literally years. (laughs)

    19. BF

      Interesting.

    20. EG

      Facebook on the other hand was kind of known for a more mixed early talent pool but they're very good at removing, um, early people who weren't performing so I- I always thought that was kind of an interesting dichotomy between the two r- and now, you know, th- those were the rumors in the valley when each company was, you know-

    21. BF

      Yeah.

    22. EG

      ... tens or low hundreds of people. I don't, you know, now obviously they're all very professionalized in terms of how they do both.

    23. BF

      They have their UBI, yeah. (laughs)

    24. EG

      Yeah, exactly, yeah. (laughs) So I just thought that was kind of interesting.

    25. SG

      Yeah, I- I think it's like a, just because I mostly think about like engineering hiring and go to market hiring and investor hiring, they're all professions that have like some time scale of outcomes that isn't like an hour, right, and so I- I think you're always looking for proxy of outcomes-

    26. BF

      Yeah.

    27. SG

      ... for these like longer outcome jobs and I- and I- I think there's like a really interesting question very related to evals and assessment of like, "Well, what are the proxies we're gonna discover for each of these roles?" Because I think it's a huge shortcut in hiring, hiring well, not necessarily firing well if i- like if you can do references, if you can do work trials with engineers. Like you actually know a lot in the first five days, 30 days, um, of whether or not something's gonna work out-

    28. BF

      Totally.

    29. SG

      ... uh, and like, you know, I- I think we're always, I'm always looking for proxies for that.

    30. BF

      Yeah, and I think one of the, like crazy things about the market is that any candidate that you do a work trial with has probably done work trials with like a lot of other top companies in San Francisco but you don't have any of the data on that.

Episode duration: 41:52

Transcript of episode vnkVYLhGd_s
