No Priors

No Priors Ep. 83 | With Rippling COO Matt MacInnis

In this episode of No Priors, Sarah and Elad sit down with Matt MacInnis, COO of Rippling, to discuss the company’s unique product strategy and the advantages of being a compound startup. Matt introduces Talent Signal, Rippling’s AI-powered employee performance tool, and explains how early adopters are using it to gain a competitive edge. They explore Rippling’s approach to choosing which AI products to build and how they plan to leverage their rich data sources. The conversation also delves into how AI shapes real-world decision-making and how to realistically integrate these tools into organizational workflows.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Stanine

Show Notes:
0:00 Introduction
0:32 Rippling’s mission and product offerings
2:13 Compound startups
3:53 Evaluating human performance with Talent Signal
13:19 Incorporating AI evaluations into decision-making at Rippling
14:56 Leveraging work outputs as inputs for models
18:23 How Rippling chose which AI product to build first
20:53 Building out bundled products
23:26 Merging and scaling diverse data sources
25:16 Early adopters and integrating AI into decision-making processes

Sarah Guo (host) · Matt MacInnis (guest) · Elad Gil (host)
Sep 25, 2024 · 31m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–0:32

    Introduction

    1. SG

      Hi, listeners. Welcome back to No Priors. Today, Elad and I have a spicy one. We're here with Matt MacInnis, the COO of Rippling, the juggernaut workforce management platform that unifies HR, IT, finance, and more. They're launching a new AI product that looks at the work output of employees and generates performance management signals. Sound terrifying? Let's discuss. It's so good to have you.

    2. MM

      Thank you for having me.

    3. SG

      So

  2. 0:32–2:13

    Rippling’s mission and product offerings

    1. SG

      I think a lot of our audience will know Rippling or use Rippling-

    2. MM

      Yep.

    3. SG

      ... but for anybody who's missing it, what does the company do?

    4. MM

      Yeah, it's an all-in-one platform for HR, IT, and finance. We do all the boring stuff, but the important stuff to help you run your company. So we wanna, like, adminis- uh, eliminate the administrative burden of running a company. That's all the, like, official language, but most people come to us and say they need payroll, and we have payroll. They need a device management solution, we have one of those too. So we do all that stuff.

    5. SG

      The rumor is, you know, many hundreds of millions of dollars in revenue, growing fast. Anything else you can say about scale?

    6. MM

      Uh, it's going well. We got about 3,500 employees. We've got tens of thousands of customers using the platform. So I'd say we're doing something right.

    7. EG

      I guess also one of the things that you all have really pioneered is this, uh, notion of reintroducing compound startups or bundled products across a suite of different things. How many different products do you offer now, and how-

    8. MM

      Oh, man.

    9. EG

      ... what's the velocity in terms of adding new ones?

    10. MM

      We have on the order of like 25 unique SKUs that, uh, a customer can buy from us. Products come in different shapes and sizes. And so like w- we ship small new things every quarter, and then we, we definitely do, like, big things every couple of quarters or so. We're about to ship, like, scheduling, which, you know, again, sounds unsexy but is actually really cool. We shipped an applicant tracking system for recruiting, you know, tacking these sorts of things under our HCM suite. We d- we do a lot of this partially because we have so many founders in the business. We have over 150 people who have started companies that now work at Rippling. It's like an explicit strategy to go out and try to either give talented entrepreneurs whose business ideas didn't quite work out, like, "Hey, hand raised, I've been there," uh, a safe place to land and continue either pursuing what they were interested in or do something new at Rippling, and so that's worked out really well for us on the velocity front for shipping new products. The compound startup thing obviously has

  3. 2:13–3:53

    Compound startups

    1. MM

      been in the zeitgeist a little bit in the Valley. It's just obviously a huge tailwind for us that businesses generally wanna consolidate as much of their software onto a single platform as they can, so we're gonna keep pursuing this and keep recruiting awesome, talented entrepreneurs and ship new stuff all the time.

    2. EG

      Makes sense. And I guess one way to think about your business is almost like instead of one company growing at a certain rate, you're like 25 startups all compounding from a smaller base, which I think is very exciting in terms of the potential upside

    3. EG

      of what you're doing.

    4. MM

      It makes, like, if, if you're... One of the things that I wished I had learned way earlier in my career as an entrepreneur was just, like, basic corporate finance.

    5. EG

      Mm.

    6. MM

      Like, understanding an income statement and a balance sheet and how those things play together and what, you know, investors look at in that context. And I, you know, I get it now, for the record. I think I've mostly figured that stuff out. Uh, but the, the, when you look at the income statement for Rippling and you think about the 25 businesses or just like the major product suites like IT and finance and, and spend, they, they're sort of subscale businesses at some level on their own. And then in aggregate, you have this beautiful top line picture, but from an efficiency standpoint, like, it's okay today, but there's clearly just gonna be this, like, blossoming of efficiency over time for us as these different suites all play to one another. In SaaS software, most people don't totally understand that, like, for scaled businesses, the unit economics of your business converge at the cross-sell motion. Like, your new logo sales motion is super important, but as you sell, uh, your products into your ever-growing customer base, like, your economics start to look like that more than the new logo sales motion because you obviously have a lot more in the cross-sell bucket. And so for us, the compound startup thing also has these beautiful financial dynamics that we have a lot of things we can sell to our existing customer base over time, and that's helped us a lot economically.

    7. EG

      And one of

  4. 3:53–13:19

    Evaluating human performance with Talent Signal

    1. EG

      the things that you're here to talk about actually is a new product that kind of ties together a lot of the other ones in some ways. Do you wanna talk a bit more about what that is and how you all are starting to move into AI?

    2. MM

      Yeah, I mean, the pendulum swinging toward consolidation has these obvious surface level benefits, right, of better sales efficiency and customers being able to save money by not paying multiple sales teams to acquire them, but that's, like, super basic. Like, it's really surface level. Where the magic really comes is where there's something common underneath all of these different applications that you're building that provides you with either a scale advantage or what I like to call kind of like your vibranium advantage. So (clears throat) you have some sort of superpower at the core of your platform that lets you do things that other companies just sort of look at and think like, "How the hell did they do that?" Like, "Why are they able to do that and we can't?" And for us, it's like our deep understanding of the employee graph and emlo- about employee data. So everything that we build runs on these common rails of, like, a deep understanding of data about the employees in your business. And so the question is like, what happens when you start to marry all of this data in a single platform? Like, what's some cool stuff you could do with it? And then you toss in the question of AI and, like, what could a large language model accomplish with this data and this, like, real understanding of its structure and its history? And that was one of the big questions we started asking ourselves a few years ago and started investing in this new thing. So there's a new product that, um, we're just releasing called Talent Signal. It's the ability of this system to read the work product of employees and, like, marry the data that we have on whom you've hired into your company, at what job level, with what you know. It's all the basic data about their job history married together with the actual work product that they produce to yield, like, an insight into how those employees are doing and, you know, this is obviously gonna be super powerful and useful. 
And it's sort of this thing that I think everyone knows is coming at some level that, like, AI is going to contribute in some way to evaluating human performance, and so we knew there was an opportunity here and, and that's what Talent Signal is gonna, is gonna deliver.

    3. SG

      It actually feels like a pretty big break to, uh, be looking at what you describe as work product because traditional HR and IT systems, they don't necessarily have that work product data in them.

    4. MM

      Whether you've had a job as an individual contributor, you know, for some period of time reporting to a middle manager and, like, w- Um, I did that for a while early in my career at Apple. And the real sort of crunch point in your relationship with a manager comes around performance review time, where you have some opinion on how you've done and your peers and others around you have an opinion on how you've done, and your manager has an opinion on how you've done. And everybody gets in a room, and after they've written the feedback, you know, they do this thing called calibration, where managers try to hold themselves to a common standard, and they all try to hold one another accountable to a standard way of evaluating against the rubric. But the truth is, like, the manager has never really, hasn't really sat there and, like, looked at everything you've done. Particularly if this is over, like, a, you know, six-month time horizon or a 12-month time horizon. They just don't have enough time to do that. And so if you, there was a bunch of, like, really interesting articles written about this in many different sources. I recently read one in HBR where they talk about the manager vibe. So, like, if the manager has a good vibe about an employee, and there's ambiguity about their performance in the review process, then that opens up this massive gaping hole for the vibe to be the basis of the performance review. And likewise, like, if you have a negative vibe on an employee, and there's some ambiguity about their performance, like, then they're gonna drive that negativity through that crack like a Mack truck. And the question is, like, how do you get around this tendency in these fundamentally human processes? And the answer's like, we go to the source. Like, you bring, you bring the facts to the discussion. And so Talent Signal, by reasoning from the work product only, like it doesn't have access to demographic data. 
It doesn't know your race, ethnicity, your age, your work location. It just knows this is the source code you wrote, or these are the customer interactions that you had as a support agent. And then it generates this thing called a signal that is a stamp, basically, that says that this person is high potential, this person is typical, or this person is in need of attention. We call it pay attention, but they're effectively at risk. And directs the manager to go and spend time with them, but it surfaces all of these concrete work product examples that the manager can use to go and have, like, a good coaching conversation with the employee.
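The input contract Matt describes — the model reasons over work product and job level, never demographic attributes — can be sketched as a simple field filter. This is a hypothetical illustration; the field names and three-label scheme follow the conversation, but none of this is Rippling's actual schema or code.

```python
# Hypothetical sketch of Talent Signal's input contract as described:
# the model sees work product and job level, never demographic fields.
# Field names are illustrative assumptions, not Rippling's real schema.
ALLOWED_FIELDS = {"job_level", "work_samples"}  # e.g. source code, support transcripts

def build_model_input(employee_record):
    """Keep only the fields the signal model is allowed to reason over.

    Demographic attributes (race, ethnicity, age, work location, ...)
    are dropped simply by not being in the allow-list.
    """
    return {k: v for k, v in employee_record.items() if k in ALLOWED_FIELDS}
```

The allow-list approach (rather than a deny-list of known demographic fields) means any new field is excluded by default, which matches the spirit of "it just knows this is the source code you wrote."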

    5. SG

      Do ICs get to see it?

    6. MM

      The ICs can see it when the managers let them, and this is actually a thing that we've debated, like, quite a bit. Talent Signal is not making employment decisions. It's just giving this independent signal to the manager about how the employee is doing.

    7. SG

      A calibrated signal.

    8. MM

      Yeah. Well, it is calibrate, it's, and that's actually really important because one of the pieces of data that we do feed the model is someone's job level, and then we try to calibrate that actually across all the companies that the data is trained on.

    9. EG

      Does it end up showing calibration relative to both the individual company and overall pool?

    10. MM

      We don't separate it out.

    11. EG

      Or just the overall pool?

    12. MM

      We give you only the localized version. And so you tend to see, like, a, a, a pseudo-normalized distribution. So you see, like, in a population of, like, 50 engineers, you'll always see some people who are flagged as being high potential, and you'll always see some that need attention, even if, in the global model, you know, they were all skewed-

    13. SG

      This is a really good company, this is a really bad company.

    14. MM

      Yeah. You know, exactly. You know, 'cause it's, it's not particularly useful otherwise. And, and this is all stuff that, like, is part of this early access program that we're doing. Like, a couple of things that I should share about this, 'cause I think your, your listeners are, are, like, gonna clue in now, like, "Wow, the stakes on this are pretty high. Like, getting this right is awesome, but getting it wrong sounds kinda dangerous." We are doing this as part of an early access program, and the way that the product works is that it generates one signal one time for one employee at their 90-day mark. Even people who have been at your company for, like, three years, we can generate a signal, but we're only gonna base it on the first 90 days of their work product. And the reason that we're doing it this way is because companies that look at this can see, okay, this thing actually made, like, a pretty darn good assessment at day 90, and it took us, like, 12 months to figure out that this person was not a fit for our company or that this person was gonna be an exceptional member of the team. That builds trust in the model over time, and we think, like, I don't know if you guys have ever heard the Overton window concept, right? Like, this idea that people are only ready for a certain amount of change in how they think about a certain problem. And for us, it was actually really important to contemplate in the design of the product that we not stretch the Overton window too far. Like, and also, like, by limiting it to the first 90 days, we get to build trust with the employees, with the managers, and have them kind of understand the implications of this thing and whether it's accurate for their particular circumstances. And over time, we can, like, expand how it's applied. You know, these are all the different issues that we've contemplated as we've, as we've gone along.
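The "localized" calibration Matt describes — a pseudo-normalized distribution within each company, so a population of 50 engineers always surfaces some high-potential and some needs-attention people regardless of where the company sits globally — amounts to within-company percentile bucketing. A minimal sketch, where the thresholds and label strings are assumptions for illustration:

```python
# Sketch of within-company ("localized") calibration: raw model scores
# are bucketed against the company's own population, not the global pool.
# The 20% thresholds and label names are illustrative assumptions.
def localize_signals(scores, top_pct=0.2, bottom_pct=0.2):
    """Map {employee: raw_score} to labels using within-company rank."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    n = len(ranked)
    n_top = max(1, int(n * top_pct))        # always flag at least one
    n_bottom = max(1, int(n * bottom_pct))  # always flag at least one
    labels = {}
    for i, (employee, _score) in enumerate(ranked):
        if i < n_top:
            labels[employee] = "high potential"
        elif i >= n - n_bottom:
            labels[employee] = "pay attention"
        else:
            labels[employee] = "typical"
    return labels
```

The `max(1, ...)` floors are what produce the behavior described in the conversation: even a uniformly strong (or weak) team yields a spread of labels, which is why the localized view is "not particularly useful otherwise" without that normalization.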

    15. EG

      But if somebody's been around them for three years, and we have a 90-day signal, is that still relevant to that person who's been around for three years?

    16. MM

      Nope, highly not likely to be, like, incrementally useful information at that point. The idea there is, like, hey, here's what we would, we would have said at day 90 for this person. Um-

    17. SG

      It's back testing.

    18. MM

      Yeah. It's back testing and, and establishing some level of credibility for the model 'cause we've obviously done a bunch of testing with this and thought it was, it was quite accurate, and certainly instilled confidence in us that, like, it's a useful signal for the

    19. EG

      ... so a lot of the use of it is actually for new employees versus people who have been around at a company for a long time.

    20. MM

      This version of the product, V1, like, as we step into this baby steps, is to do the 90-day signal for new hires. And so the more people you hire, the more useful it is. So high-growth companies, obviously you're gonna get more value out of this initially, but the sky is the limit obviously as, as this thing, uh, evolves, um, and, uh, we all gain more trust in, in the model.

    21. SG

      I wanna talk about risks, too, but what is the aspiration for how this changes performance management?

    22. MM

      For me, the, the sort of motivating factor here, honestly, it's the, it's the bad manager. If you're an employee, and you are working in the bowels of the organization on hard problems. Your manager, a little lazy, doesn't sort of recognize the quality of your contributions, shows up at that calibration meeting with a better vibe on somebody else, and they get the promotion. Talent Signal walks into that environment and slams your work product down on the table, and says, like, "What about this?" I can give you a concrete example of an underrepresented profile at Rippling when we were building this product. She was an engineer in India who was working on one of our toughest problems, and she was singled out as a high-potential employee, and she was, in fact, pretty early in her tenure at the company. And we paid attention to that, and we talked to the manager about it, and it was sort of an eyebrow-raising moment where she was kinda lifted from obscurity by the model that was like, "I don't know what your vibe is on this person, but, like, man, they seem to be contributing at a high level, and here are concrete examples of how they've done so." So the lazy manager who doesn't, like, represent things the way they ought to is held accountable by their manager when they look at the total organization through this tool, and it does a better job of representing the employee. And obviously, I can talk all day about, like, lifting people from obscurity, but it also has the team performance impact of signaling that someone needs support. If they're not performing well, if they haven't ramped well, uh-Giving that signal to the manager and the manager's manager knowing about it is hugely valuable to overall team performance too. So, the vision here is like, to have an independent ... 
When I say independent, it's independent of the biases of the manager, it's independent of, you know, all the noise that sits in the company and just cuts at the heart of one vector on this employee, which is their work product, and gives them a chance to shine. You just imagine, this is the first time in kind of th- the recent history of the concept of performance management in companies where there is an orthogonal input that can really upset, with facts, how people are

  5. 13:19–14:56

    Incorporating AI evaluations into decision-making at Rippling

    1. MM

      doing this.

    2. SG

      What did you learn from dogfooding at, uh, Rippling?

    3. MM

      We started talking about this internally quite some time ago, and as the product has gotten more mature and as we've talked about it more and more with employees, the feedback from employees has been super useful to informing the policies that we set up. You know, I'll give you a couple of examples, like, no one's allowed to make any significant decision using the model alone. So anytime you talk about employment decisions, promotions, that kind of thing, you're not allowed to just point at Talent Signal and say, "Well, it, you know, it said X." You've got to have your own independent assessment of, um, the inputs.

    4. SG

      So it's really, it's like manager reasoning about work product Talent Signal pointed them to?

    5. MM

      Talent Signal is like a cheat sheet, but the manager has to do what is fundamentally a human process, which is to evaluate the whole person.

    6. SG

      Mm-hmm.

    7. MM

      The policies that we've set up internally prohibit blind following of Talent Signal and require the manager to express judgment around what they saw in the study. And like, look, we dogfood the heck out of everything at Rippling. Um, Parker, our CEO, he runs payroll for the company. E- like, every pay run goes through him. He also approves every expense above 10 bucks.

    8. SG

      It's a lot.

    9. MM

      We can talk ...

    10. SG

      (laughs)

    11. MM

      We wanna talk all day about that. Uh, sometimes I just eat the 10 bucks. You know, like, what's the point? Um-

    12. SG

      Of fighting Parker on the-

    13. MM

      Yeah, n- yeah, exactly.

    14. SG

      ... expense policy? Amazing, yeah.

    15. MM

      Yeah, it was, it was Uber Comfort and not UberX, but anyway. The AI stuff, you know, he's obvi- obviously been very close to the development of this, and the employees have been, I would say, really thoughtfully engaged in balancing being good sports as dogfooders, but also sort of making sure that their own rights are represented in, in the development of this technology.

  6. 14:56–18:23

    Leveraging work outputs as inputs for models

    1. MM

    2. SG

      One of the biggest objections I can imagine, especially as you get to evaluating people whose job might be, like, you know, classic middle manager, "I make other people successful," it's about, like, collaboration or focusing people on the, the right tasks, is that is not captured-

    3. MM

      Yeah.

    4. SG

      ... in concrete work product.

    5. MM

      Yeah.

    6. SG

      What's your response to that?

    7. MM

      I mean, so first of all, Talent Signal focuses on individual contributors in terms of developing signals. So for salespeople and support agents and individual contributor engineers, it, it like, it only does a signal for them. We haven't gotten into the game of managers yet. That's gonna be interesting for us or for someone to, to dig into. But there is this question of like, what's it looking at and is it sort of like, you know, this overlord looking at everything that I'm doing? What we needed to do in the development of the product was find the highest correlate, like find the best R-squared. Like, what is the input you can give the model that is most predictive of the output, which is, were they promoted? Were they terminated for performance? You know, did they stay at the same level for a long period of time? Just, in general, what was their career outcome in the period studied? When we did these sort of preliminary studies, the, the, the screaming signal-

    8. SG

      Was work product.

    9. MM

      ... was work product. You know, like, d- if you wanna know if someone's a good engineer, look at their contributions. Like, look at their source code. And don't just look at like, you know, definitely don't just look at how much they do.

    10. SG

      Mm-hmm.

    11. MM

      But like, really reason about the quality of the code contributions, think about, uh, security issues, look at pull requests, look at comments on pull requests. These foundational models do an excellent job of thinking about source code and writing source code, and so they're actually really excellent engines for assessing the quality.

    12. SG

      That was, uh, one of the coolest things I thought about seeing the demo. Like, when you, it was looking at assessment of, for example, like, maintainability, extensibility.

    13. MM

      Yeah.

    14. SG

      Right? Because that requires, like, code reasoning.

    15. MM

      Yeah, it has an opinion on this and it's able to express it really eloquently, and then the manager has to go in and use their own judgment. I'll give you another example. This is an example from a customer who's been using the product. So, the CTO of one of the, like, alpha test companies went in and saw that somebody he didn't think was a very strong engineer was flagged as high potential.

    16. SG

      Hmm.

    17. MM

      And he was like, "Okay, like, that does not jibe with my priors." Uh, he had pri-

    18. SG

      With my vibe.

    19. MM

      He had priors. It doesn't jibe with his vibe, right? Like, "It's not my vibe about this employee." So, he goes in and looks at the source code and he goes, "Oh, I see what's happening." He's like, "I wrote all this source code." And we were like, "Huh? Like, tell us more." And he's like, "Well, this employee has been struggling and so I've been spending time with them shoulder to shoulder, like, writing code and coaching them through this stuff, and like, what the model has picked up on is this really high-quality contribution that only happens when I'm sitting next to this person."

    20. EG

      Yeah.

    21. MM

      And it was like, "Aha. Okay, cool." So it's sort of like an unknowable misattribution.

    22. EG

      How do you think more generally about managers? You mentioned that you don't currently assess them. Um, Andy Grove used to always talk about how the output of a manager is the output of their team, and that's how you're supposed to assess them.

    23. MM

      Yeah.

    24. EG

      So to some extent, you could argue you have some signal you can aggregate up.

    25. MM

      So when you look in the product, it does aggregate at the manager level to show you the sort of distribution of high potential, typical, and needs-attention employees. That part, we're sort of saying to customers, like, "You use it informationally to sort of spot where there might be hotspots, but don't, you know-"

    26. EG

      Mm-hmm.

    27. MM

      "... don't totally judge the manager on the basis like-"

    28. EG

      So the question, is that a reflection of hiring or is that a reflection of execution?

    29. MM

      Yep.

    30. EG

      Or is that ... You know, so I guess it's hard to sometimes tease those things out.
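The manager-level roll-up described above — showing each manager's distribution of high-potential, typical, and needs-attention reports, for informational hotspot-spotting rather than judging the manager — is essentially a grouped count. A minimal sketch, assuming a simple list-of-dicts shape for the employee records:

```python
# Sketch of the informational manager-level aggregation described:
# count each manager's distribution of signals. The record shape
# (dicts with 'manager' and 'signal' keys) is an assumption.
from collections import Counter

def signals_by_manager(employees):
    """Group signal labels by manager: {manager: Counter(label -> count)}."""
    rollup = {}
    for e in employees:
        rollup.setdefault(e["manager"], Counter())[e["signal"]] += 1
    return rollup
```

As the conversation notes, the ambiguity lives outside the code: a skewed distribution under one manager could reflect hiring, execution, or coaching, so the roll-up is a prompt to look closer, not a verdict.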

  7. 18:23–20:53

    How Rippling chose which AI product to build first

    1. EG

      on this as a thing that you're gonna do for AI? Was it a big exploration? Was it more like, "Hey, we actually have something here that we've aggregated data. This AI seems to be good at interpreting certain types of data"?

    2. MM

      Yeah.

    3. EG

      I'm just kind of curious to how you landed here of all the things that you could do with, with foundation models.

    4. MM

      We thought about a lot of the obvious AI use cases. I'm gonna zoom out for a sec and, and maybe toot the company's horn a bit. You have a dollar. And there's a bunch of things you can do with it. Back to this corporate finance topic, like, you just... There's a bunch of things you can do with that dollar. If you invest that dollar in the front of the machine and out the back comes two, like, don't take the two, put the two in the front. Get four out the back. Put four in the front, get eight out the back. This is why SaaS software businesses run at such a deep cash deficit over the course of their early years. Now, if you can't do that because you don't know what technology you're gonna build next, if you don't know how to invest it in sales and marketing to go and acquire the next customer, if you don't know how to invest it in R&D to go and build the next product that's gonna generate incremental revenue, then you might do something like, you know, stock buyback. And, um, that means that the most creative idea that you could come up with, with this cash that your business is generating, is to, uh, is to just like juice the share price. And like even worse is a dividend, 'cause like now I can't even do that, I'm just gonna like literally just gonna give it. I don't know what to do with this money, I'm just gonna give it back to you. Like, what would I do with this money? This is like such a bad signal on a company if, you know, if the best thing they can think of is, is a dividend. Now, by contrast, companies like Rippling and many companies in Silicon Valley, uh, not only know what to do or think they know what to do with the next incremental dollar, but they want even more dollars than they have access to and so they use equity capital to go out and get a bunch more cash that they can use to pump in the front of that machine and get even more dollars out the back. 
You look at some of the highest performing companies in Silicon Valley and they reach profitability or they, you know, (laughs) some of them do, at least, it's, it's still centered around one idea or one product that they have done a really good job of scaling up. And one of the, like, super unique things about Rippling, and I, and it's like so easy for us as a team to take this for granted, is that we have this massive list of projects that if we were to go build them, they would turn into revenue. Like, we know the next product we wanna build, and the one after that, and the one after that. And the only challenge is like, can we hire enough engineers and not run out of money, you know? 'Cause we know that in the long run, this is all gonna work.

    5. SG

      Well, wait. I think a common objection in like classic not Rippling, Silicon Valley, do one thing well type companies is, it's really hard to focus-

    6. MM

      Mm-hmm.

    7. SG

      ... on that many things. It's really hard to do that many things well, it's hard to keep it cohesive. How do you teach the sales team that?

    8. MM

      Yeah.

    9. SG

      How do you think about cohesion?

    10. MM

      Uh, you just work harder.

  8. 20:53–23:26

    Building out bundled products

    1. MM

      You know, like, you just, you just get the right people into the right jobs and get enough leaders into the business who can deal with sort of the fractal of complexity.

    2. EG

      I mean, this is also the traditional enterprise sales playbook from the '90s, right? And I think it's almost like we had an era of 10 years in the 2000s where we forgot about this and everybody became single point products. And then there's you guys, there's HubSpot, there's Datadog. Like, a variety of people have built out these sort of bundled products and the cross-sell motion around a single sort of core either, um, system of record or type of identity or something else. So, I mean, I, there's the old saying from, um, Netscape, from the Netscape days where, um, all of innovation is either bundling or unbundling, or some variation of that.

    3. SG

      Yeah.

    4. EG

      So now we're in an era of bundling again.

    5. MM

      Yeah. History repeats itself. History doesn't repeat itself, but it rhymes, and like we're definitely in the rhyming phase of, like, the big platform stories from the '90s. If I'm being creative-

    6. SG

      Um, but Talent Signal doesn't look like bundling. It looks like something like pretty different, so how do you-

    7. MM

      Well, this is why, like, it doesn't, this is why it doesn't repeat itself but it rhymes, because the, the technology that emerges in these new situations offers new opportunity.

    8. EG

      Mm-hmm.

    9. MM

      And so for us, we have all of these things we wanna build, but the guiding principle is always, what can we alone do?

    10. EG

      Mm-hmm.

    11. MM

      What can we uniquely do with this new tool?

    12. SG

      Vibranium.

    13. MM

      Yeah, (laughs) vibranium. We have vibranium in this underlying platform. What is AI plus vibranium equals what? You know, like, what does it yield? And what I would say about other companies that are doing AI products is that for the longest time, their road map sucked. They didn't know what their next proximal feature was gonna be that was gonna generate revenue. They didn't have another SKU idea with 100% chance of generating incremental business, and they kept filling in additional features that made existing customers happy and may have given them sort of marginal cross-sell opportunities, but they didn't have the next big thing that they could tack on. AI comes storming into the scene and now all of a sudden everybody's a freakin' AI company because it's offered them this opportunity to at least masquerade as a company that knows what to do with the next proximal R&D dollar. We've never had that problem. And so, guess what we didn't do? We didn't build a chatbot. We didn't build a co-pilot. We didn't build any of these surface level, obvious capa- We're gonna build them, they'll be in there at some point. Who cares? It's not gonna sell a single extra subscription of software. We said we're gonna skip that, we're gonna fast-forward. We're gonna take these super expensive AI engineers, who are really hard to recruit, easy to retain, because it's such a great place to work, but hard to recruit.

    14. SG

      (laughs)

    15. MM

      Have them build something that has the chance to be, the opportunity cost of which is, like, for sure worth it. Because the opportunity cost of putting them on the chatbot thing ain't there, relative to what we could otherwise be going and building as a company.

    16. EG

      Are there any

  9. 23:2625:16

    Merging and scaling diverse data sources

    1. EG

      other types of new AI products that are in the pipeline for y'all?

    2. MM

      We've got a bunch of stuff we're working on in the AI world, but getting this one right is like, you know, we don't, we're not like peeling people off of this project to go work on project number two. Like, we really wanna get this one right out the gates. There is some new stuff coming from the company that's not directly AI related, but is about really scaled data. Like, super high scaled data. We've already built this, like, really beautiful data platform underneath Rippling. Um, it's kind of like our AWS. Like, we're gonna have our AWS moment at some point in the next, you know, quarter or so. Uh, but we're here to talk about Talent Signal.

      So, it sits on this data platform, and when you want to install Talent Signal for your engineering team, what you do is, you just, you install the GitHub app on Rippling, and it replicates your source code repository into this secure, you know, well-guarded environment, but there it's gonna do the analysis on the, on the source code. You know, when you plug in Salesforce, we're replicating a lot of the data out of your Salesforce instance, and that's heavy duty. I mean, like, the, you know, size of our Salesforce instance is massive. And so really it was about, how do we marry the HRIS data with this scale data platform where everything is really beautifully structured? And in particular, all the employee data is dereferenced elegantly. In other words, we always know who's who in all of these other systems, and then say, "Okay, now what business problems can we solve with that?"

      And, like, it was so obvious that this was the opportunity because we were seeing inside of these workflow products. And, and, like, GitHub can't do this because GitHub doesn't know who you've promoted, they don't know who did well, they don't know who you've had to let go of for performance reasons. Salesforce doesn't know that either. And so, like, I'm sure there's gonna be really cool, like, code quality evaluation tools built into many of these workflow systems, but ain't none of them gonna know what happened from a human perspective in the way that we do, and that's why this is kind of our magic talent.
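
      The "who's who" dereferencing Matt describes — mapping activity replicated from workflow tools back to canonical HRIS records — can be sketched roughly like this. Every name, field, and email below is hypothetical, not Rippling's actual schema; this just illustrates the join he's describing:

```python
# Hypothetical sketch: HRIS records are the source of truth, and
# activity from a workflow system (here, fake commit metadata) is
# "dereferenced" back to a canonical employee ID by matching known
# emails. All identifiers are illustrative.

# Canonical HRIS records; one employee may have several known emails.
hris = [
    {"employee_id": "E001", "emails": {"ada@corp.com", "ada@users.noreply.example.com"}},
    {"employee_id": "E002", "emails": {"grace@corp.com"}},
]

# Activity replicated from a workflow tool, e.g. commit metadata.
commits = [
    {"sha": "a1b2", "author_email": "ada@users.noreply.example.com"},
    {"sha": "c3d4", "author_email": "grace@corp.com"},
    {"sha": "e5f6", "author_email": "contractor@other.com"},
]

def dereference(commits, hris):
    """Attach a canonical employee ID to each record, or None if unknown."""
    email_index = {
        email: person["employee_id"]
        for person in hris
        for email in person["emails"]
    }
    return [
        {**c, "employee_id": email_index.get(c["author_email"])}
        for c in commits
    ]

resolved = dereference(commits, hris)
# resolved[0]["employee_id"] -> "E001"; the contractor commit maps to None.
```

      The point of the sketch is the asymmetry Matt names: the workflow tool alone sees only `author_email`, while the HRIS side can tie that activity to promotions, performance history, and departures.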

    3. SG

      Do you

  10. 25:1631:28

    Early adopters and integrating AI into decision-making processes

    1. SG

      think you need a particular type of culture or leadership to be an early adopter of Talent Signal?

    2. MM

      Talent Signal-

    3. SG

      Or maybe is- is there already signal on that in terms of your alpha partners?

    4. MM

      Oh, for sure. I mean, look, there are companies that we've engaged on this who have looked at it and said, like, "Uh, we're- we're gonna not be an early adopter on this one," and, of course, we- we totally respect that. I have this sense that, like, AI in the conversation about human performance is 0.1% of the way there, you know? Like, there's a lot more to come on this. You know, I would... I'm mostly comfortable saying that it's, like, an inevitability that LLMs are going to be involved in assessing human performance in many different contexts.

    5. EG

      Yeah. You know, it's interesting. A- a friend of mine who's a CEO of a public company told me that he sometimes uses some of the chat-related products to talk about employee issues, where he'll chat and say, "Hi, I'm trying to work through this thing with an employee. What are some of the things that I should be doing? How should I think about it?" And so you already start to see sort of glimmers of that future emerging. What do you think are some of the principles that people should be building against in order-

    6. MM

      Oh, man.

    7. EG

      ... to make sure that they're approaching it in sort of a thoughtful way, or, to your point, they're not just sort of deferring the decision to AI?

    8. MM

      I do think job number one is to understa-... Like, you can become numb to this, the impact that this kind of stuff can have on people's li-... I think if you're, like, if you're not in the AI world and you hear people like me talking about, like, risks, the risks or AI safety or the ethics, it sounds weird. You're like, "Why are they talking about ethics and risk?" Like, it answered my question about whether, you know, how to convert a half of a cup of, you know, oil to ounces, you know? 'Cause that's most normal people's ex- you know, experience of AI is a very benign, friendly, approachable thing. But it- it doesn't take you too long when you contemplate it in this context to think about, okay, so, like, you know, if- if a manager were to run off and make decisions purely on this, that any hallucinations or misattributions could actually be really consequential to people's lives. And this is why the way that Rippling is approaching this, right, we're doing this as an early access program, we're constraining it to the first 90 days. The signal is awesome. Like, it looks like it's gonna be super useful for people, and also we're very conscious of the risk of bias that might be amplified or introduced through, um, the whole thing. And so when you ask about people who wanna start using these tools in these kinds of contexts, what advice you might give them, it's like, number one, you gotta understand the stakes. Number two is, like, even if the system is arguably bulletproof, like, you have to go to ground and still do your job as a manager. You've gotta go inspect the context. Don't let the misattribution that I described earlier around somebody who got a lot of coaching from their manager influence your thinking about them. Just see it for what it is.

    9. SG

      Mm-hmm.

    10. MM

      You know, not just the Talent Signal thing, but just AI more broadly. Just see it for what it is and under- if... You have to have some, like, understanding of the underpinnings of these systems in order to be able to judge the quality of their output, and I think it's probably too high a bar to say that everybody out there, you know, who's potentially a user of these kinds of tools is ready for that.

    11. SG

      So who is, in terms of CEOs or HR leaders or whoever else is choosing to do it now?

    12. MM

      It's pretty clear that the companies that have chosen to partner with us already on this are either reasonably, um, performance oriented, like, very interested in finding new tools to compete. Like, I- I think, I, you know, could... It's easy to go back to a sports analogy where if I could tell you that you're a coach and you've got a team and you're going for Olympic gold and, um, it's a beta version of something that a- assesses your, you know, your- your form on the court or, you know, kind of depends on what sport we're talking about, that, like, you're pretty keen to give it a shot and see if it can help you juice team performance. And if you're careful and you mitigate the downside risk, like, it could give you a leg up in what is a very competitive environment. There are a lot of business people, um, CTOs, like the engineering side of this, the sales leaders who are interested in this. I mean, sales is hyper, hyper competitive, and so if they can get a leg up, this is just, like, part of the arms race for- for sales. And then support teams, um, are so coaching oriented already. Like, a- a support team is so- generally so focused on rubric adherence and, you know, weekly air checks with their employees to make sure that they're communicating the right way about their new product or using the right tone, they already have these cultures. And so really, I suppose one of the things we've gotten right about this is that when we selected sales engineering and support as the areas to build the first version, those are already organizations that have a culture of competitiveness and a culture of looking to find the next incremental advantage for themselves. And then it- I think it rolls up to, uh, the company culture, where companies have said they're gonna wait this round out, and some of the more hard-edged or, you know, competitive type environments, those- those guys have- have said they wanna play ball.

    13. EG

      I think those are also, um, three disciplines where there's also a lot of coaching. And so there's products like Gong, where I've seen people, like, share calls so that people can learn off-

    14. MM

      Yeah.

    15. EG

      ... of each other.

    16. MM

      For sure.

    17. EG

      You know, customer support, obviously, there's a lot of training.

    18. MM

      Yep.

    19. EG

      You know, code, sometimes people pair program. So it- it does also feel like the places where the coaching aspect of what you talked about can become really valuable.

    20. MM

      Yes, 100%.

    21. SG

      It seems like an obvious, um, trigger for the AI pitchfork crowd to-

    22. MM

      (laughs) Yeah.

    23. SG

      ... come at you.

    24. MM

      Yeah. I will say that, like, I'm thankful for the pitchforkers. Like, I'm thankful for the people who are going to hold us accountable and- and criticize and- or critique, you know, the- the quality of work, uh, that we're putting out with this product, because it's really easy to sort of inhale your own exhaust and get excited about the potential without necessarily understanding the full picture. And so when someone comes at us and- and asks hard questions about bias or asks hard questions about, you know, the- the unintended consequences of involving AI in decisions this important, we're- we're just gonna listen and we're gonna learn. Feedback is a gift, like it's a real thing. And so I know that there will be some people who raise an eyebrow at, you know, what we're doing, and, um, and then, uh, all I can say is, like, we're really committed to learning from them and making sure that we make this a tool that works for everybody.

    25. EG

      Great. Thanks so much for joining us today.

    26. MM

      I am really glad you guys let me do it.

    27. SG

      Thanks, Matt. Find us on Twitter @nopriorspod. Subscribe to our YouTube channel if you wanna see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.

Episode duration: 31:28

Transcript of episode hoShhKmK8so
