The Twenty Minute VC

Matt Fitzpatrick: Who Wins the Data Labelling Race & Why AI Needs Forward-Deployed Engineers

Matt Fitzpatrick is the CEO of Invisible Technologies, leading the company's mission to make AI work. Since joining as CEO in January 2025, he has raised $100M and accelerated AI adoption across industries from sports to consumer and government. Previously, Matt was a Senior Partner at McKinsey, where he led QuantumBlack Labs, the firm's AI R&D and software development arm.

-----------------------------------------------

Timestamps:
00:00 Intro
01:21 Career Journey and Leadership
08:36 The Single Biggest Barriers to Enterprises Adopting AI
10:41 It is BS That Enterprises Can Adopt AI Without Forward-Deployed Engineers
27:13 Are AI Talent Marketplaces Dead? What is the best model?
36:39 How Does the Data Labelling Market Shake Out: Who Wins / Who Loses
50:01 Are Revenue Numbers for Data Labelling Real Revenue? Or GMV?
52:56 Best Capital Allocation Decision? What did Matt Learn from it?
55:22 How Important is Brand for AI Companies Selling Into Enterprise?
01:09:24 Remote Work vs. In-Person Collaboration
01:21:47 What Does No-One Know About the Future of AI That Everyone Should Know

-----------------------------------------------

Subscribe on Spotify: https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465
Follow Harry Stebbings on X: https://twitter.com/HarryStebbings
Follow Invisible Technologies on X: https://twitter.com/InvTechInc
Follow 20VC on Instagram: https://www.instagram.com/20vchq
Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok
Visit our Website: https://www.20vc.com
Subscribe to our Newsletter: https://www.thetwentyminutevc.com/contact

-----------------------------------------------

#20vc #harrystebbings #mattfitzpatrick #invisibletechnologies #datalabelling #ai #engineers #saas

Matt Fitzpatrick (guest) · Harry Stebbings (host)
Dec 31, 2025 · 1h 25m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–1:21

    Intro

    1. MF

      MIT just released this report that 5% of gen AI deployments are working in any form. You've seen Gartner saying 40% of enterprise projects will likely be canceled by 2027. And I think the reason for that is, externally driven builds are 2X as effective as internal team builds. I don't think that that discipline exists in the same way in internal builds. They spent 25 million bucks building this agent. And what ended up happening was a couple months later, they shut it down and moved back to a deterministic flow. We don't actually sell anything. When we meet a customer, we say, "We will do it for free for eight weeks and prove to you the tech works." The minute you had to bring in FDEs in a SaaS context, your economics broke im- instantly, right?

    2. HS

      Are there any other big misnomers that you think are pronounced in the industry?

    3. MF

      Look, I, I think the biggest one is just the view that synthetic data will take over and you just will not need human feedback. It's interesting, from first principles, that actually doesn't make very much sense if you think through it. In the AI world at least, strategy is a somewhat overrated concept. And what I mean by that is, it-

    4. HS

      Ready to go? Matt, I am so excited for this, dude. I think Invisible is one of the most incredible, but also, I'm sorry to say this, like under-discussed businesses when I look at the incredible achievements that you've had over the last few years. So thank you so much for joining me.

    5. MF

      Thank you for having me. I really enjoy the show.

  2. 1:21–8:36

    Career Journey and Leadership

    2. HS

      Can you just talk to me about how does like a 10-year McKinsey, uh, stalwart warrior-

    3. MF

      (laughs)

    4. HS

      ... become CEO of like one of the fastest growing data companies in tech? How does that transition happen?

    5. MF

      Yeah, so, um, I would say my McKinsey journey was non-traditional. Um, I spent 12 years there, I was a senior partner and I led a group called QuantumBlack Labs, which is the firm's global tech development group. So about 10 years ago, McKinsey actually started hiring engineers, like, and I was a, a big part of this and a pretty big quantum. And I think we went from, I, when I started, we had about 100 engineers total in the firm. By the time I left, we had 7,000. Uh, I oversaw about a fifth of that group, uh, and all the application development, all of the data warehouse infrastructure and all of the, uh, gen AI builds globally. And so, uh, that journey was, was really interesting and it, and it, you know, over the course of it, spent a variety of my time competing with other large enterprise, um, AI businesses. And I got to know the found- the founder, Francis, really well, um, about three years or four years ago now. Uh, we actually met totally not work-related in a, uh, kind of social context where we were discussing... It was a, it was basically a forum called Dialog. I don't know if you've heard it, but you-

    6. HS

      Yeah.

    7. MF

      ... basically talk about different ideas. We bonded over-

    8. HS

      I keep getting invited-

    9. MF

      ... history.

    10. HS

      ... to this. It's in like Hawaii, though.

    11. MF

      (laughs) It's in many different locations.

    12. HS

      It's, it's far. (laughs)

    13. MF

      It's actually a great... It's a g- I really enjoy it because you actually don't talk about work at all. You're not allowed to talk about your job. You spend time talking about history, politics, technology. So you have-

    14. HS

      What does everyone from San Francisco do? (laughs)

    15. MF

      (laughs) But they don't talk about it for, for two days, which is very-

    16. HS

      It's a silent retreat. (laughs)

    17. MF

      (laughs) Yeah, exactly, exactly. Uh, but, but I actually think it's one of the few, uh, events I've been to where people are not talking their own book, they're not trying to convince you of anything and you just really actually... I've made a bunch of really good adult friendships out of that. And so Francis and I got to know each other from that four years ago. Um, and there had been another, uh, CEO kind of in the two years befo- before I joined who was actually based in Australia, interestingly. And so when the business got to a certain scale, it was just time to have a US-based CEO that could help take the business to the next level. And, uh, you know, it was actually Francis just approached me and kind of pretty directly said, "Do you wanna be our next CEO?" And that was kind of, that was kind of what happened. (laughs)

    18. HS

      Was it a no-brainer?

    19. MF

      Look, I, I think when you walk away from a really stable, uh, job that you really enjoy, that's always difficult. Uh, and I think that, um, you know, McKinsey actually, the, the sliver of McKinsey that I was doing I found to be one of the most intellectual day-to-day jobs ever. I was working with all the Fortune 1000 on every different AI topic daily. And particularly in the early machine learning days, kind of 10 years ago, I think we built some really interesting stuff. But, um, but yeah, it was, I think it was kind of a no-brainer in some ways, 'cause I think when you, when you think about it, I think this is the most interesting time to run a company on a topic that has probably existed in our lifetime. May- maybe the 2000s. But to run a company in AI right now is r- is fascinating, the, the rate at which you can build, the people at which you can recruit, the, the interest of customers in this topic. And so I felt like I'd spent 10 years learning one topic, and now I had a chance to run a business and build it the way I wanted to build it on that topic, and that's just something you can't pass up. And even though, you know, walked away from, um, a, a fair amount, but I think that, uh, I'm, I'm much more excited about building something for the next two decades out of this.

    20. HS

      When we think about like decision-making frameworks, I always have one which is like, find someone who you respect and admire. So for me, it's Pat Grady, who's the head of Sequoia. I've known him for like 10 years. He's a great father, investor and husband. Three things that I care about. And whenever I have a tough decision, I'm like, "What would Pat do?" And most of the time, I get to the answer by asking that question in that framework. If I were to ask you, what do you ask yourself, how do you find direction when struggling with a decision?

    21. MF

      I'm not a particularly materialistic person. You know, I think it, when I was coming out of college, for example, everyone was focused on going into large finance jobs, which at that time were pre- pre-financial crisis obviously where, where a lot of that was, and I don't... Uh, I think a lot of what I think about is doing work day-to-day that I really enjoy with people I really enjoy, and then building something. And I, I do think I really enjoyed the decade I spent building at McKinsey. I think that was an incredibly interesting experience to stand up something of that scale within, within a, an existing institution. Um, and then I d- I do think about, you know, I read a ton, I read a ton about everything from military history to current entrepreneurs to, um, enterprise executives I really admire. And then I have a group of, kind of a small group of people whose opinions I ask pretty regularly, and you know, I think, uh, probably the most telling piece of advice, um, my girlfriend and my main mentor, both of them, when I asked, within two minutes were like, "Absolutely, do this." My main mentor is a guy named Somesh Khanna who, um, had been a senior partner at McKinsey for a long time, is, is on the board of a whole f- variety of different companies today, and um, I remember we got lunch. I walked him through the opportunity, I said, "Listen, it's a big risk." And he goes, "The only risk is if you don't take this and the amount of regret you'll have not giving it a go."

    22. HS

      Mm-hmm. I, I, I totally agree with that one. I, I was once given advice that whatever you think you should do, hold that close, and then let your girlfriend tell you what you should do. (laughs)

    23. MF

      (laughs)

    24. HS

      And that's why you still have a relationship. It's a great-

    25. MF

      It's-

    26. HS

      ... piece of advice.

    27. MF

      Exactly.

    28. HS

      That was from someone who's been married for 40 years.

    29. MF

      (laughs)

    30. HS

      And so it's worked well for him. Um, we were chatting before and I said, "Listen, where do we have to go?" And I just, I always think that the best conversations are led by passion. The first one that you said was, "There's a gap or a chasm-"

  3. 8:36–10:41

    The Single Biggest Barriers to Enterprises Adopting AI

    2. HS

      So I was speaking at one of the largest banks in the world. It's an absolute joke that they get a university dropout like me to speak at their, like, largest retreat. (laughs)

    3. MF

      (laughs)

    4. HS

      Uh, I find it very fun. Um, but I left and I messaged the team and I just said, "Oh my God, they're toast." And they're toast because I said about the amazing tool they should implement internally, and the CTO laughed at me. He was like, "Dude, there's no way that we can ever adopt your off-the-shelf, you know, search engine optimization for, you know, LLM tool because of data, because of security, because of permissions." And I was like, "Wow, everything that you just said there, I listened to."

    5. MF

      Yeah.

    6. HS

      But that was once you got in the door. Are enterprises even open for business? You see Goldman Sachs developing a huge amount of their own tools. Are they open for AI business?

    7. MF

      Yeah, it's a great question. Um, I think it depends a bit on the sector. I think for, there are sectors like banking that are very focused on building this internally. I think that is a reality. Um, I think-

    8. HS

      Do you think that will work, the internal build for them?

    9. MF

      So it's interesting. If you look at the MIT report, um, when I, which, which is the one I mentioned that says 5% of models are making it to production right now, they actually cite a stat that intern, that externally driven builds are 2X as effective as internal team builds. I actually think there's an interesting kind of 10-year pattern on this, which is 10 years ago everyone bought software, right? Like, that was your tech team did not try and build anything and you started to buy and you bought, you know, often you bought way too many apps, but you bought 15 different apps, and that was what the technology team did. And then I think with the advent of cloud, you started to have a world where the technology function started to start to think about building things, like maybe they started to have more, some custom applications that wrapped around that. I think gen AI has 5X'd that, where now an internal team is given this enormous budget and said kind of, "Go, go have at it." And I think that's complicated because I think when you hire somebody to build, any vendor of any kind, you're pretty disciplined about, what are you delivering on what timeline? What's the ROI of it? What are the milestones? How does that... And I don't think that that discipline exists in the same way in internal builds. I also think that the talent levels often the internal teams have are challenging. And so, um-

    10. HS

      When you say

  4. 10:41–27:13

    It is BS That Enterprises Can Adopt AI Without Forward-Deployed Engineers

    1. HS

      the internal team builds are challenging, there are some things that you can't say but I can, that the perception from external or from general kind of tech crowds is the internal teams for, uh, you name your boring, large enterprise-

    2. MF

      Mm-hmm.

    3. HS

      ... is just really low quality. You're not getting the top-tier AI engineers. You're not getting top-tier devs. Is that true?

    4. MF

      Look, I think the, the amount of talent that knows how to do this well is not large, right? And so that, that finite group mostly works in AI startups of various forms, right, and, and large tech companies. And so I do think there's real risk to the process of figuring this out from first principles and enterprises, right? And I think that's part of the cycle that we're going through right now is, uh, a l- a lot of internal groups have gone through the process of saying, "We must do this all internally." But the reality is, if you think about that this is an open architecture ecosystem and you're gonna adopt things like MCP or, you know, all the new tech and the new voice agent that comes out, you actually want a modular open architecture where you can use all the best tech available and figure out how to link it together. And I think the, the desire to shape that all internally has been challenged. Like, I'll give you, I'll give you one of the more interesting examples I can discuss. I was talking with an e-commerce retailer that had built an agent to handle their returns process, and they spent 25 million bucks building this agent. And at the end of it I said, "Well, how did you, how did you define..." This was aft- I met them after they built it, and I said at, at the end of it, "How did you define if this agent worked or not?" And like, "Well, we built, we built our own eval tool." This is not a joke. Uh, and we basically analyzed a mix of speed of call resolution and sentiment. The problem with that is what if the agent hallucinates and says, "Here's $2 million." That actually gets resolved quickly and the person's happy. And so-

    5. HS

      (laughs)

    6. MF

      ... they built this entire system from first principles, and what ended up happening was a couple months later, they shut it down and moved back to a deterministic flow. And that's not surprising to me at all. And so I do think that's a little bit of the adoption curve we're in, is I think over the next two years, you're gonna see the CFO function put different guardrails on how this stuff is built and say, "What is the ROI? What are you investing in? What's the metric? What's the return?" And that will change the adoption curve. But right now, there have been a lot of science projects. I think that is a reality.
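
      To make the blind spot concrete, here is a minimal sketch of an eval in the shape MF describes, scoring only resolution speed and sentiment. All names and numbers are hypothetical, not the retailer's actual tool.

      ```python
      # Hypothetical sketch of a naive returns-agent eval: score a call only on
      # speed of resolution and customer sentiment, as in the story above.

      def naive_eval(transcript: list[str], resolution_seconds: float) -> float:
          """Higher is 'better': half sentiment, half speed."""
          text = " ".join(transcript).lower()
          sentiment = 1.0 if any(w in text for w in ("thanks", "great", "perfect")) else 0.0
          speed = 1.0 / (1.0 + resolution_seconds / 60.0)  # faster call -> higher score
          return 0.5 * sentiment + 0.5 * speed

      # The failure mode: a hallucinated "$2 million" payout resolves instantly
      # and the customer is thrilled, so this metric rates it a near-perfect call.
      call = ["Agent: Here's $2,000,000.", "Customer: Thanks, perfect!"]
      print(naive_eval(call, resolution_seconds=30))  # ~0.83
      ```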

    7. HS

      Okay. I'm... You know, we have, uh, hundreds of thousands of listeners and many of them are CEOs. If you are a CEO-

    8. MF

      Yeah.

    9. HS

      ... thinking about your CFO being equipped to buy and to manage in this new environment, what should they be thinking about? And do we have the right CFO talent pool to manage this new environment?

    10. MF

      Yeah, so I think one misconception is that that t- that leader has to be highly technical to make that decision. And I would actually argue they don't at all. They just need the same muscle memory they've looked at in the past, which would be to go through it, what do you need to get it, to get a Gen AI initiative working? You need good data that you can work off of for that specific initiative, clear milestones and outputs, clear line ownership of the initiative, and then probably most importantly, you wanna actually anchor it in milestones and outcomes where you pay as it works. So, I think the other, the other interesting context for a lot of this is what I would call the Accenture Paradigm of the last 20 years, right, which is a lot of times the way that... if you think about the wrapper that's been around software for the last 20 years, you know, the- Francis, our founder, Francis Pedraza, his founding principle of Invisible was, if there's an app for everything, how come nothing works?

    11. HS

      Huh.

    12. MF

      And it's an interesting concept, right? Because what ha- what ended up happening is you built, you, you hire, you bought 50 apps. You had Accenture come in and you paid them $200 million over two years to try and layer them all together, and often, you ended up a couple years in with no working data, no linkages between them, and that, that, that kind of layers of sediment has been how the tech paradigm worked for the last, uh, in the enterprise for the last five years. And I, I think what's different now is if you're thinking about a specific Gen AI initiative, like a contact center let's say, you don't need to operate that way. You can think about what are the operational metrics you want in your contact center. You want, uh, you wanna think about call resolution, call performance, cost per call, uh, routing logic. And, uh, you know, you can then look at both internal and a set of vendors who will deliver those metrics and make an evaluation. And if the vendor doesn't work, you fire them. And I actually think there's a very clear way to get ROI in this, which is figure out the list of three to four things that move the needle for your business, focus on those three to four, don't, don't spend money on a thousand sp- sci- science projects. Take your best four operational leaders and put them on those four things. Don't locate it in the tech function. That's the main ad- advice I give people is your Gen AI initiative should be led by the business and figure out, that could be your head of call center, that could be your head of operations, but each of those people with clear operational KPIs will get the stuff working. And there are a bunch of companies that have, but it's just a very different approach than, "I'm building Gen AI," as an example.

    13. HS

      It's really interesting, you said don- don't invest in a bunch of science projects, do three to four initiatives. Okay, let's do three to four initiatives again.

    14. MF

      Yeah.

    15. HS

      Let's put on that CEO hat. Contact center, it's just a big one that is homogeneous across everything.

    16. MF

      Yeah.

    17. HS

      Man, there's so many players in the contact center space.

    18. MF

      Yeah.

    19. HS

      I'm a CEO. I'm not a, I'm not a Silicon Valley guy. How am I meant to understand whether we go for Sierra or Decagon or Zendesk of old or Intercom or, or any of the other players that we've seen in this space? How do you advise the bigger CEOs-

    20. MF

      Yeah.

    21. HS

      ... on buying in a wave of new innovation?

    22. MF

      I think this is the other big challenge of Gen AI adoption is you have... you're an average CTO, COO, you've got 250 vendors a week pitching you. All of them sound pretty similar. In fact, I was with a customer yesterday who literally started the meeting by saying, "How are you different than the other 250 people that have pitched me this week?" So this is, this is the dynamic of we have an over-saturation of companies that all sound relatively similar, relative to agents. I think the, the- to, uh, to make your question even more, um, pointed, a lot of them don't work. You know, I think you've got, you've got a fair number of the enterprise agent companies that, you know, like Salesforce, um, AI Research released this report that if you take, um... if you test a lot of the out- out-of-the-box agents on single-turn and multi-turn workflows, they're about 58% accurate on single turn and 33% accurate on multi-turn workflows, which means they don't really work. And so you've got this challenge of 250 agents, 250 companies a week pitching you, um, you don't really know how to select it, and you, you, you're worried you're gonna pick a charlatan and it won't work. And, and the more you have a, a, a market where there's a lot of excitement, the more you do have that risk, right? Um, so I think the simplest advice I give, and by the way, this is how we "sell," quote-unquote, is start with proof of concepts, start with, um, what we call solution sprints. Don't pay a dollar until you prove the tech works. So, like, we don't actually sell anything. We meet a customer, we say, "We will, we will do it for free for eight weeks and prove to you the tech works." And that's a very simple way. If your tech works, you'll show it. Uh-
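
      A rough way to see why the multi-turn number collapses from the single-turn one: if each turn succeeded independently, accuracy would compound multiplicatively. Independence is a simplifying assumption here, not the report's methodology.

      ```python
      # Back-of-envelope: per-turn accuracy compounds across a workflow.
      # Assumes independent turns, which is a simplification.
      p_single = 0.58  # cited single-turn accuracy
      for turns in (1, 2, 3):
          print(turns, round(p_single ** turns, 2))
      # 1 -> 0.58, 2 -> 0.34 (close to the cited ~33% multi-turn), 3 -> 0.2
      ```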

    23. HS

      It's an expensive way to do business.

    24. MF

      It is and it's not, but l- so let me give you an example, like how-

    25. HS

      Yeah.

    26. MF

      ... one of our deployments works, 'cause I think it, it, um... fair enough if the answer is that, you know, it takes you two years to build anything, but like I'll, I'll give you an example. So we're serving, um... we have... our AI software platform is effectively five modular, modular components. So Neuron, which is our data platform, brings together structured, unstructured data. Um, Axon, which is our AI agent builder, Atomic, which is effectively a process builder, we can build any custom software workflow. And then we have a Meridial expert marketplace, which is... we, we have 1.3 million experts a year on any, any topic you can imagine that we bring into those workflows. And then Synapse, which is our evaluation platform in all of it. Now, we can take those five things and configure them to almost any different enterprise context. So just an example, we serve food and beverage, public sector, asset management, um-... uh, agriculture, sports, uh, and, uh, you know, a whole host of other diff- oil and gas, a whole host of different sectors using that same modular architecture. Uh, I think we, we end up scaling pretty materially once we show that the tech works. We're working with a company called, um, uh, Lifespan MD, which is a concierge medicine business, uh, and across the US and, and, and internationally. And, you know, w- what we're doing for them is we're building them an entire t- uh, tech backbone where they have an enormous amount of fragmented data across, um, EHRs, CRM, uh, ERP systems, uh, notes, everything else. All of their data sits in a pretty fragmented format. And so, we're using Neuron to bring all that data together. Uh, we do that very, very fast. So if Accenture would take two years, we can usually do it in two to three months. Um, we're then, on the back of that, uh, building a lot of different intelligence and reporting so they can look at things like patient journeys over time, labs, genomics data. Um, (smacks lips) uh, I don't know how much you use, like, the Oura Ring or anything else like that, but they wanna look at wearables, how all that content is looking, so they have a lot of detail on what any patient is doing at one time. And then on top of that, we layer things like, uh, we have conversational... uh, the ability to interrogate it and ask lots of different questions, like, "Let me look at who's used peptides, if it's a male between 36 and 50, and what have been the results." So we're using Axon to build all that. And then we, we build, um, (smacks lips) oh, and, and to fine-tune the model to do that. And then we actually do also on top of that build lots of specific custom agents for things like scheduling. So what you get at the end of that is a transformed tech-enabled business with all of those different components. Now that does take us a little while to st- to stand up. But once that is there, it's, it's effectively hyper-personalized software. And that is my view on where s- where this whole industry goes is you move from SaaS, out-of-the-box SaaS, to much more hyper-personalization using the specific data of an individual customer. And that is what we do.

    27. HS

      Do you think you can work with enterprise today with gen AI and with AI implementation without an intense forward-deployed engineer mechanism?

    28. MF

      I don't think you can. So we are, we've doubled down, uh, w- a huge part of what we do is forward-deployed engineers, so we now have eight offices in many different cit- e- kind of eight cities, um, 450 people. We're fully focused on forward-deployed engineering. And I can tell you from a decade of my prior life, you just cannot do this with out-of-the-box SaaS. It does not work.

    29. HS

      What do the economics of FDEs look like? Obviously, Palantir has made it the most sexy thing ever.

    30. MF

      Yeah.

  5. 27:13–36:39

    Are AI Talent Marketplaces Dead? What is the best model?

    1. MF

      ML did.

    2. HS

      Totally get that. And so when we look at the different products that we have today, uh, the expert platform is one I think that gets a lot of attention.

    3. MF

      Yeah.

    4. HS

      How much of the business today is the expert platform? But I find companies are lumped into categories. It's easier. And you have your Mercors, your Surges, your Invisibles, and you're all kind of put in the same bucket, as all just talent marketplaces. And no one wants to be a talent marketplace, it seems. And I'm like, "How much of your revenue is the talent marketplace, and why does no one want to be a talent marketplace?"

    5. MF

      Yeah. So, so, let me think about that in a couple different ways. So, I actually think th- the AI training space has many different players that do m- have many different business models within it. There's four to five, but actually they're all quite different. Um, I think of us much more as an AI training platform than just a talent marketplace, meaning we have 1.3 million experts that come through the marketplace, but a lot of the expertise we've built over the last 10 years is the ability to... Uh, here's the simplistic question I think that AI training asks. You have to be able to source any expert in the world on 24 hours' notice. You have to be able to source a PhD in astrophysics from Oxford, put them into a digital assembly line, and four days later generate perfect statistically validated data that will be compared head-to-head with somebody else's data and make sure that, that is perfect at the end. That is an incredibly difficult thing to do. And so actually, a lot of what I saw when I, when I took over Invisible was, that motion was incredibly applicable to actually the next phase of the enterprise as well, which is, um, the fine-tuning motions, the training, the, the ability to statistically validate for an enterprise use case like claims processing. It's the same motion. Like, I actually think AI training will be used next in banking and, and healthcare, and then after that in, in many other different enterprise contexts. And so the, the historical business I took over in 2024 was pretty materially weighted to the, um, AI training side of the house. But I, I came in with the thesis that, uh, enterprise would be a huge source of growth, and I think as you see next year evolve, like, y- you know, I think we've cons- confirmed 12 enterprise deals in the last 45 days, so we, we see pretty good momentum on that side of the business, and I think that's where we will evolve, is to doing both. I think the five core platforms we have... allow us to serve a whole host of different end markets. And I do think that's very different than the other AI training players you mentioned. I think we're the only player that spans that broad-based view in the same way.
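
      One hedged sketch of what "statistically validated" head-to-head data can look like: blind reviewers pick the better of two providers' labels per item, and a binomial test checks the win rate against chance. Purely illustrative, not Invisible's actual pipeline.

      ```python
      from math import comb

      # Illustrative head-to-head data validation: on each item a blind reviewer
      # picks the better of two providers' labels; test wins against a coin flip.
      def p_value_one_sided(wins: int, n: int) -> float:
          """P(X >= wins) when each item is a 50/50 toss (null: equal quality)."""
          return sum(comb(n, k) for k in range(wins, n + 1)) / 2 ** n

      wins, n = 68, 100  # hypothetical: our label preferred on 68 of 100 items
      print(p_value_one_sided(wins, n))  # ~2e-4: the gap is unlikely to be noise
      ```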

    6. HS

      Can I push on the talent marketplace side-

    7. MF

      Yeah, go ahead. Yeah.

    8. HS

      How much of the business is that today, then?

    9. MF

      Uh, I won't say an exact number, but it was a pretty material percentage of 2024.

    10. HS

      Okay, got you. So it's a pretty material percentage. Th- the one thing that's also striking is the concentration of revenue to a couple of core players when you look at other providers.

    11. MF

      Yeah.

    12. HS

      It's like two players that make up more than 50% of revenues for pretty much every provider. Is that the same for you, and how do you think about what that revenue makeup will be given the enterprise diversification that you're talking about?

    13. MF

      Yeah. Um, I do think for... This is a space where there are not that many players that are, that are actually building LLMs, so by definition the whole space has concentration. I, I think... I would not, uh, disagree with that. I do think that's one of the really interesting things for us on the enterprise side is we have materially more diversification now in the number of customers we serve on a whole different range of topics. Um, I also think you're, you're seeing more, uh, kind of early stage model builders as well that are building hyper-specific topics. Um, and so that's, that's the other part of where we see expansion in the total customer base.

    14. HS

      When you come to negotiations with a client, given the revenue concentration, how do you play that staring contest? 'Cause essentially they go, "We know that you, we are one of your core customers and we will squeeze you on price." And you go, "I know I'm one of your core data providers. I will stand firm." How do you d- handle that negotiation? 'Cause it is a staring contest of sorts.

    15. MF

      I think people are willing to pay for good data. I, that's my simple frame... if you think about the importance of these models, if you think about the cost of compute, that is actually a huge chunk of the cost base. If you think about one week of bad data burns a lot of compute. Um, I, I think what we've seen, the reason the same players in this mar- it's been the same four to five players in this market for a couple of years now, is it's really hard to do well. And so people are willing to pay for good data. And so I think we, we have a very collaborative dynamic with all of our customers on that front. And, um, you know, I, I, I think that, uh, when you provide a service that's helpful, people are willing to pay for it. And if you provide a service that doesn't work, people don't pay for it. And so the interesting thing I would say on that front is, a lot of the time in these, they're not, um, t- the discussion topics anchor around, again, proven value. So we'll get a topic that'll come in, like a multimodal audio model, for example, and we'll go head-to-head with somebody on that that week. And at the end of it, we win or we lose. And so if you win and your data's way better, people are willing to pay for that.

    16. HS

      Hmm. Totally get you. I had a chat last night with a board member of, of another of the companies in the space, and he said, two, well, two things that really stood out to me. He said, "I'm just drastically shocked at the lack of price sensitivity for the core customers. Like, they're willing to pay pretty much anything." Is that the case or is that a bit of an exaggeration?

    17. MF

      I think it's an exaggeration. I think it's a nice- exaggeration. I mean, look, I, I think that there is a fair price, I think in any, if you think about like classic economics, people are willing to pay a fair price for good data. And so I don't think we, we, um, operate in a model of trying to give anything unreasonable. I think there's actually fairly standard price bounds across all the players here.

    18. HS

      Is data commoditized when I think about, like, pricing power? I'm a massive fan of Hamilton Helmer's 7 Powers.

    19. MF

      Yeah.

    20. HS

      It's this amazing book.

    21. MF

      Great book. Yeah, great book.

    22. HS

      When, when you think about like pricing premiums, you get that through not being a commodity, through owning supply of a rare asset.

    23. MF

      Yeah.

    24. HS

      Is, is there commoditization of data and we're kind of in a race to the bottom on the pricing of that data? Or do you own the supply of vet workflow data for surgeons in Oklahoma? That's very fun. (laughs)

    25. MF

      Yeah. So, so let me take that... I'll actually start with the market context and then I'll actually u- use 7 Powers as it is a great book, and I'll use one of his frameworks for that. Like, I think the market context that is somewhat misunderstood here is the way that human data becomes more and more important over the next decade. And I think the reason for that is if you thought of, um, the different types of things you could train off of. So synthetic data gets mentioned a lot, but like most of the time synthetic data is used for things like let's say base truth information, like math, where there is a clear output that is right or wrong. Now let's take all of the different reasoning tasks, like a multi-step reasoning task, like, I mean even a simple one, like what movie would I select based on, you know, these five preferences?

    26. HS

      Legally Blonde.

    27. MF

      Exactly. Um, well, and, and then let's take that question and add into it audio, video, multimodal language, the ability to do it in 45 language, language context. So the ability to think about computational biology in Hindi versus French versus English versus English in, with a southern accent. Like that, that, that paradigm is actually incredibly hard to train on, and we're still in the first inning of a lot of those permutations of complexity, is what I would say. And so for a multi-stage reasoning test that requires a PhD in multi different languages and f- like m- human feedback is gonna be important in that for the next decade. I have a strong belief on that and that was actually one of, when I to- chose to take this job, that was actually one of my core convictions is the enterprise is gonna need that too. 'Cause actually a lot of, you take legal services for example, a lot of the way you're gonna need to validate that is with legal expertise. There's no corpus of information you can train from. So I would start with the idea that I think the market tailwind for the next 10 years, we're actually in the first inning because there's the LMs, then there's the more sophisticated enterprises and then there's everyone else that needs to train, validate and move to fine-tuning. So again, contrasting, there's like the pre-training and LM work, but then to fine-tune a model to a specific context, uh, m- most companies don't even know what that is in the enterprise yet. And that whole process we're in the first inning of. So I think the market demand is gonna continue to grow pretty materially for a decade or more. Um, I think that the, the Hamilton Helmer framework is an interesting one 'cause he, my favorite example is, uh, he talks a little about what he calls institutional memory. So, uh, he mentions the Toyota production system as an example, right? Where Toyota would literally say to people, "This is exactly how my, f- how our factories are set up." And nobody could replicate it, right? I think the interesting thing about this space and why you've had a, a consistent set of folks doing it for a while is to go through the process of every week having to spin up, we have 26 thou- so we have 1.3 million active agents, or kind of, uh, experts that come into the pool. At any given week, we have 26,000 of those that we've selected that have to start in 24 hours and produce perfect data. Think about the, the challenge of scaling an organization that for five years can do that at really high quality and consistently turn and, and evolve to the different permutations of the market and new, new ideas of training. It's really hard to do. And I think that was what I, what I, what got me most excited when I took the invisible job was the question of can you make AI work in a really complicated context? Very few companies know how to do that on the enterprise side or on the training side of that m- for that matter. And so I thought that was a really unique institutional memory context.... it is a digital assembly line, no different than a, uh, than, uh, an auto factory. And I think that is a hard

  6. 36:39–50:01

    How Does the Data Labelling Market Shake Out: Who Wins/ Who Loses

    1. MF

      thing to replicate.

    2. HS

      The other really interesting area that this board member said to me was, eh, v- he very much agreed with you, he said exactly the same words as you in terms of first innings of data, in terms of just-

    3. MF

      Yeah.

    4. HS

      ... how much market size will increase. He said, "The other thing that I really didn't under- or didn't understand when I made the investment was the specialization of data and how we are moving into the acquisition of this kind of insanely niche data supply pools," where it's not, like, cat, hedge, zebra crossing. Zebra crossings are, are, what, what do you guys call it? A pedestrian pathway or something?

    5. MF

      (laughs) Yeah.

    6. HS

      He said, "I did not see the specialization, the unbundling." Is that something that you see too, in terms of these very micro, niche, specialized data requirements?

    7. MF

      Uh, absolutely. I think, you know, five years ago, this space was what I would call cat-dog, cat-dog commodity labeling. I don't think anyone... And, and I think there was a lot of Google Sheets in that era, and you've seen some comments on that. Like, this, this, this era, this sector has evolved the same way most technology sectors do, where it started with Google Sheets and cat-dog labeling, and it's evolved to real digital assembly lines, huge velocity of expertise, and incredibly specific expertise. So like, you know, we have to, uh, give a funny example, we have to be able to validate, um, uh, an architectural expert on 17th century French architecture who speaks French. I mean, that is a, that is a complex thing to do on 24 hours' notice, right? And so the ability to source, assess, validate, a- a- and I think one of the advantages for us is because we have five years of data on who's been good at what task, there's real institutional data memory in how you do that selection and assessment. I think that's one of the core advantages we have over them.

    8. HS

      How important is pay? You know, I think a couple of other providers, you kind of have said that, bluntly, it's about how much you pay. You pay more than the others, you'll get a good talent.

    9. MF

      You know, uh, look, so a, a weird analogy, I think of our business like Uber. It's, and what I mean by that is, um, we source talent at the price at which it, people will do, will do the work that is asked of them, right? So, the, the same way I do that, if you're standing at, standing on a street corner, your question is, "Can I find a ride that will pick you up at this moment within three minutes?" And that matter, that's a different price if it's raining, that's a different price if you're in, you know, Rio de, Rio de Janeiro versus London, right? The, the price depends on the market context and the specific place you are. I think expert pay is the same dynamic, really. A lot of what we're doing is what I call price discovery. And so the nuance I would add to what you're saying is, you can overpay a really bad expert, and that is a total waste of everyone's time. And so what I think our customers appreciate is we can tell you between a $150 expert and a $130 expert, the difference in expertise you get.
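
      One way to read the $150-versus-$130 point: compare cost per accepted piece of work rather than the hourly rate. A toy sketch with hypothetical numbers:

      ```python
      # Hypothetical price-discovery comparison: the 'cheaper' expert can cost
      # more once rejected (quality-failing) work is thrown away.
      experts = {
          "expert_at_150": {"rate": 150.0, "tasks_per_hour": 4, "acceptance": 0.95},
          "expert_at_130": {"rate": 130.0, "tasks_per_hour": 4, "acceptance": 0.70},
      }
      for name, e in experts.items():
          cost_per_accepted = e["rate"] / (e["tasks_per_hour"] * e["acceptance"])
          print(name, round(cost_per_accepted, 2))
      # expert_at_150 -> 39.47 per accepted task; expert_at_130 -> 46.43
      ```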

    10. HS

      Do you think you have control of a finite supply of data, uh, providers? If you look at the seven powers in Hamilton Helmer-

    11. MF

      Yeah.

    12. HS

      ... one of them is, like, acquiring finite supply.

    13. MF

      Tsk. Uh, um, I don't, I, so I actually don't think finite supply matters. Uh, and what I mean by that is, uh, I think the expertise needed varies so much month to month, that if you tried to do a world where you bottled up whatever supply it is, it would change in three months. And we actually r- uh, relish that concept. I actually think the dynamic, again, why I would use Uber and Lyft, you could use, um, you could use Airbnb and VRBO as the same context, is I don't think people, I don't think experts go on five platforms, right? I think actually what you wanna be is, eh, this is a two-way marketplace, where you need enough demand for people to be interested, and you need enough e- expertise that many experts... And I think the reason we get 1.3 mi- 1.3 million inbounds is because of that kind of supply-demand balance. So I don't think this moves to a world... And I actually, I, I would never say it would move to a world where there is one player coming out of this. I think there is benefits to everyone to having numerous players that do AI training, and so it's a question of being one of the players that has that balance.

    14. HS

      You, you said there about kind of the switching of preference, of like, "Oh, three months ago, it was this that you want, now it's something-"

    15. MF

      Yeah.

    16. HS

      ... "... completely different." Switching cost is another. When you have data providers in this way, are there inherent barriers to switching? Is there any loyalty?

    17. MF

      Yeah, I, no, I, I think that if you've learned how to do a certain data task really well, there's incredible value in that. And that's what I'd like, the, the way... And, and let's take the enterprise context again, 'cause I do think it's a good one. So, um, you know, we are, um, I'll give you an example, we're doing a lot of fine-tuning on some pretty interesting topics. So we're, um, one example, we worked with, um, SAIC, Ventor, and the US Navy on fine-tuning a model for underwater drone swarms, just to give you an example. And so the question on that, if you think about-

    18. HS

      Niche. (laughs)

    19. MF

      ... very niche. (laughs) That, that, that, uh, that's why I used it as an example to answer your question. So if you thought of it in that context, you've got a bunch of underwater, uh, underwater unmanned vehicles, and they're getting in all the drone and sensor data from the, the interaction patterns of those vehicles. And what they wanna know is, you know, an object is in the water near them, what do they do? Do they react? Do they pull back? Do they-

    20. HS

      Mm-hmm.

    21. MF

      ... alert another drone? Do they engage? What are the topics of that? So fine-tuning a model to take in all that complex sensor data, fine-tune it, train it, and build a decision-making framework for those drones, there's a lot of logic built into that. And I think that's why it's been a great partnership with SAIC and Ventor, 'cause we built logic on how to do that. And it's, uh, you know, I think that there is real, um, sustainability and expertise you, you, you build up. And so the way I think about, like, our, our enterprise motion, for example, is every sector is led by somebody with deep, deep sector expertise, and we do build real logic on those topics. And I think the same is true for multimodal video and audio, it's true for legal. Um, I actually think a lot of the training work, even on the model builder side now, one, one interesting view I have is people talk a lot about the public benchmarks, that tends to be one question you get a lot, is like, "Are we reaching a point where models are not improving?" I actually think it, I think about it very differently, which is the models are now all moving down hyper-specific things where there's not a public benchmark for them, by definition, right? Like, they're moving to more very specific tasks that are, you know, very different and not something you can publicly benchmark in the same way. And that's why we do see more and more model improvement every day, but both in model builders and enterprises on these specific tasks.

    22. HS

      You said about, kind of, the benchmarks, so I'm just so interested, yeah. Gemini-3-... killed it, it's the best ever. And then, you know, yesterday, Opus 4.5, killed it, it's the best ever. Next week, Sam's gonna release one. Does it matter? Like, are we in a world of such transient and flux, where really we should detach ourselves from these bun-

    23. MF

      (inhales)

    24. HS

      ... monthly updates that last for days?

    25. MF

      Look, I, I think the benchmarks are a useful framework for society to gauge progress on this topic. And it's a very, it's a very often-discussed topic, so people want a way to say, to answer the question about, "Are the models improving?" And I can tell you, like, unequivocally, the answer is yes. I mean, I think by every measure you look at, um, they are. And, you know, they're not only including on the ben- improving on the benchmarks, but even on specific tasks, like research for investments, for example. You can see the models are much better at doing certain tasks. And I think what you're seeing start to happen is people, and we're doing this as well, are building very specific work-based ca- benchmarks to calibrate certain things, like, how well does the model do on building an LBO model, for example? And you're gonna see more and more benchmarks cited. Now, the complexity then becomes if you move from five main benchmarks, like, SWE-bench and others, to 600 benchmarks, then you kind of lose track of what's doing, who, who's doing well on which things. Um, but I think my, my, my interesting view on that would be, I'm not sure the benchmark progress is what determines enterprise adoption. And what I mean by that is, if you take the fact that the models have improved m- exponentially over the last couple years, and you say consumer adoption has been massive, right? Like, um, uh, KPMG had this report that 60% of consumers use this on a, on a weekly basis. The adoption curve on enterprise is not going to be a question of generalizability. It's gonna be a question of hyper-specific performance on a specific task, right? And so there isn't actually a benchmark for that. Like, if I, uh, you know, take, um, uh, l- uh, let's take an investment summary document for a private equity firm, right? There's no benchmark to say, "Firm One, this is how you write invest- investment committee memos." Does this generate something that looks, with 99% precision, like something you would, would roll out? There's no benchmark to do that. And so that's where what I see as the adoption curve is actually the fine-tuning and inference layer of actually testing that, getting into a place where that firm can say, like, "This looks good. I'm okay with this." I've, you've tested it. Like, machine learning has a, um, context. I don't know if you've heard. They, the, the banks do this thing called model risk management, where they actually do a, a whole host of validation and testing on things like redlining before they ma- roll a model out. That's what the enterprise is gonna have to do. And so it's not that the model improvement doesn't matter. I actually think the, the benchmarks are a good way to get some, some, uh, sense of model improvement. But it, they're almost orthogonal to enterprise uptake. I think enterprise uptake depends on trust and precision on specific tasks at 99% accuracy, not generalizability.
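
      The model-risk-management analogy suggests a simple shape for enterprise acceptance: no public benchmark, just the firm's own labeled cases and a hard precision gate before rollout. A minimal sketch with hypothetical names and thresholds:

      ```python
      # Hypothetical rollout gate in the spirit of model risk management:
      # validate on the firm's own labeled examples and require ~99% precision.
      def precision(predicted_good: list[bool], actually_good: list[bool]) -> float:
          tp = sum(p and a for p, a in zip(predicted_good, actually_good))
          fp = sum(p and not a for p, a in zip(predicted_good, actually_good))
          return tp / (tp + fp) if (tp + fp) else 0.0

      def approve_rollout(predicted_good, actually_good, threshold=0.99) -> bool:
          """Ship only if outputs the model marks as good are almost never bad."""
          return precision(predicted_good, actually_good) >= threshold
      ```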

    26. HS

      If those specific tasks are removed in the way that you said, like summary docs for-

    27. MF

      Yeah.

    28. HS

      ... investments, often it's done by more junior people in the earlier stages of their career-

    29. MF

      Yeah.

    30. HS

      ... when they are building and kind of scaling those skills. Do you think we will have a talent pipeline problem if we do remove a lot of those junior roles, which we are seeing in certain cases already, and I think we'll continue to see, where we won't actually have the graduation pathways that lead to the leaders that we have today because we've removed those junior roles?

  7. 50:01–52:56

    Are Revenue Numbers for Data Labelling Real Revenue? Or GMV?

    2. HS

      There are large revenue numbers thrown out.

    3. MF

      Yeah.

    4. HS

      Are they revenue? 'Cause I've done shows before with them and I got battered, (laughs) bluntly, when people are like, "Oh, it's not revenue, Harry, and you can't categorize it as revenue." Is it GMV, not revenue? Are we playing fast and loose with the truth on revenue versus kind of bookings?

    5. MF

      (smacks lips) I think it is revenue. I think that, um, your, the rate you get on every project is different. The margin you make on every project is different, so I do think it is revenue and I think that the, um-

    6. HS

      Can you h- help me understand? Sorry, and I'm very naive. If I have, if I'm acquiring, uh, a- amazing talent-

    7. MF

      Yep.

    8. HS

      ... yeah, and I get paid for that, yeah, how, and then I have to pay them and then I get my take at the end of that, how is that different than booking on Airbnb where I get my take from a location, but I have to pay out to the owner?

    9. MF

      Oh, good question. Well, I think Airbnb has one consistent fee. That's the difference. There's actually a fair amount of variation of, uh, uh, based on the skill set or the expert. Like, you don't have a consistent rate relative to the booking amount. That's the biggest difference. So, there's huge variety depending on the project, the expertise type, the expert type of what you book on that.
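
      The Airbnb contrast in numbers, with purely hypothetical figures: one consistent fee means the take is the revenue and the booking is GMV, while project work with variable rates and margins books the full billed amount as revenue.

      ```python
      # Hypothetical GMV-vs-revenue contrast from the Airbnb comparison above.
      booking = 1_000.0
      take_rate = 0.15              # one consistent fee across bookings
      print(booking * take_rate)    # marketplace revenue: 150.0 (GMV is 1000)

      # Project work: the spread between billed amount and expert cost varies
      # per project, so the billed amount is the revenue and margin floats.
      projects = [{"billed": 500_000, "expert_cost": 350_000},   # 30% margin
                  {"billed": 500_000, "expert_cost": 200_000}]   # 60% margin
      for p in projects:
          margin = (p["billed"] - p["expert_cost"]) / p["billed"]
          print(p["billed"], round(margin, 2))
      ```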

    10. HS

      Are there any other big misnomers that you think are pronounced in the industry where you consistently are like, "I wish people would change the way they think about it"?

    11. MF

      Look, I, I think the biggest one is just the view that, uh, I, I think when I, when I first started, uh, first started this job, the main pushback I always got was that synthetic data will take over and you just will not need human feedback two, three years from now. And I, I, it's interesting. I don't, um, from first principles, that actually doesn't make very much sense if you think through it, right? If you think about the diversity of tasks that exist in the world and then how long it would take you to f- get comfortable with the accuracy, it doesn't make any sense, right? Like, I'll take legal services 'cause it's a really interesting one, right? A lot of the legal data in the world exists with big law firms. It doesn't even exist in the public inter- So if you take, like, the corpus of publicly available information, that's, that's been commoditized for years at this point, right? And so most of the logic is incredibly contextual to language, culture, multimodal context, and the information stored in individual companies, as an example. And so the only way to actually do the fine-tuning process consistently and to get it accurate for any specific context is RLHF. And I actually think in my, in my decade, in my McKinsey days, in m- my McKinsey QuantumBlack days, um, that was the, the thing I realized was different about traditional ML models versus gen AI. In machine learning, you can back test, you can get to a really clearly statistically validated outcome without any human intervention. I think on the gen AI side, you are gonna need humans in the loop for decades to come, and I think that is something that most people are starting to realize. I think it's always confusing to people when they hear like, "Oh, that's how models are t- are trained on the backend. I didn't realize that's how the statistical validation works." And so I think that's been an interesting evolution.

  8. 52:56–55:22

    Best Capital Allocation Decision? What did Matt Learn from it?

    2. HS

      You're profitable, correct?

    3. MF

      This year, we had started to invest a lot more, so I think one of the big differences, uh, historically Invisible had only raised $7 million of primary capital-

    4. HS

      Oh.

    5. MF

      ... in its entire nine-year journey. Um, we've now w- we initially announced 100, we've actually right now raised $130 million and so I'm investing very heavily in technology, so we will not be profitable this year, no.

    6. HS

      Can you just take me to that decision? 'Cause I, this was gonna be my question-

    7. MF

      Yeah.

    8. HS

      ... which is like, that's a very clear decision to be profitable and profitability comes often at the expense of growth.

    9. MF

      Yeah.

    10. HS

      Naturally. Can you just take me to that decision-making for you and how you thought about it?

    11. MF

      Yeah, look, I mean, to me, i- it was a simple one, which is if you think about the dynamics of return on capital, uh, you can either harvest capital or invest capital, and your decision to invest depends on the growth you see as a result of that investment, and I think we're in the greatest environment for growth that has ever existed. I think Invisible's really uniquely positioned to capitalize on that growth too, and so I think of our four c- five core platforms, I think of the growth vectors across both AI training and enterprise, and there were just way too many different things I thought were interesting to invest in. It was the clear best use of capital and I, look, I'm trying to build this for the next 10 to 20 years, and I think if you want to build enterprise value for 10 to 20 years, now is the time to invest and build, um, and y- I hope we never get to the harvest stage, but I d- definitely not now.

    12. HS

      Where are you not investing that you want to be investing?

    13. MF

      I, I think the simplest answer is actual physical world interactions. So, what I mean by that is I think a lot of the most interesting data that we don't even really have access to yet is, is things that exist in the physical world that are more complicated to acquire and organize. So I'll give you an example. We, um, we're serving one of the largest, uh, agricultural conglomerates in the US on, um, herd safety, so actually, um, like, monitoring risk factors, when should you send a vet for their herd of, of cows, basically. And that whole process relies on us actually sending forward-deployed engineers to farms, uh, dropping Starlink terminals into those farms and building out custom computer vision models in those contexts. And I think there are so many different physical world contexts that become really, really interesting, but it does take cost and capital to build those out, like, you know, I think oil and gas, uh, oil rigs are an interesting one as an example. And so I think physical world interaction patterns are, uh-... the m- some of the most interesting growth factors for this, but they do take time and money to invest in, robotics being another big part of that.

  9. 55:22 – 1:09:24

    How Important is Brand for AI Companies Selling Into Enterprise?

    1. MF

    2. HS

      One area of investment that I think is interesting is brand. How do you think about Invisible's brand today?

    3. MF

      It was interesting. When I took over, if you looked at the entire public internet, I think there was one article available. So we've definitely spent a lot more time this year thinking about-

    4. HS

      Was that a deliberate decision?

    5. MF

      I think so, to some degree. Invisible has a culture of believing in doing great work for customers, and we weren't really focused on telling the whole world about that.

    6. HS

      Does that become detrimental to the business at some point though?

    7. MF

      Yeah, look, I do think branding matters a lot. My view now is that it's been very helpful for us to spend time on it. I spend about 70% of my time on the road, I go to a lot of conferences, things like that, and I think building a brand is really important for trust, for awareness, for engagement. And how you tell that story is really important, so I'm very much a believer. One of my favorite quotes: Marc Andreessen has this idea that when private and public narratives diverge, that is the risk or the opportunity. Meaning, if you say things you don't believe to be true, or if everyone's saying things they don't believe to be true, then what is the actual private narrative? So I think it's been very important to me to make-

    8. HS

      Sorry, can you just help me understand that?

    9. MF

      Yeah.

    10. HS

      What do you mean by that?

    11. MF

      So hypothetically, if I was going around saying, "We have an out-of-the-box agent that does everything," and then that wasn't actually true, that's what either creates opportunity for others or risk for us. That's how I think about it. And so I think what's been very important for me and how-

    12. HS

      Is that not our industry? I'm sorry, I mean...

    13. MF

      (laughs)

    14. HS

      I don't mean to pick a fight with Marc Andreessen, but like, "Hello, Marc." Like-

    15. MF

      (laughs)

    16. HS

      ... our job is to sell and then deliver later. (laughs) Like, I'm looking at them thinking, "Well, I'm fucked." (laughs)

    17. MF

      Well, you know, I guess it's all a question of degrees. In my mind, I want to say things where the narratives are the same to the public, to what our team thinks, and to what our customers experience. That's part of why I have focused on talking about the nuances of what's not working and not claiming everything works out of the box. I think that is a different approach, but it's been core to how we've thought about building the brand: we are building this around trust, where I want a company we work with to know that if I say this will work, it will work. And I think you only get one chance to do that right.

    18. HS

      Do you agree with "Fake it till you make it"?

    19. MF

      That's such an interesting question. I think it depends on what faking it means. One of the things that's really complicated about gen AI is that it's non-deterministic. If you've never built a machine learning model to do pricing in industrial manufacturing, you can still understand what data's available, understand how the price is being set today, and get pretty comfortable that what you say you will build will actually work, and I think that is okay. The challenge of non-deterministic systems is that there's more risk to faking it till you make it. You can go out and say your agent will do anything, but then you actually have to deliver an agent that works. And this goes back to your earlier question about accounting dynamics: a lot of the contracts people sign right now are like, "I'll sign up for 50 agents to be delivered," but then the question is, do you deliver the agents? Do they work? That's a different thing than SaaS. If I deliver a SaaS box, I know it will work. If I deliver an agent... In the current world, there was actually a report AWS came out with today, which is interesting: something like 70% of agents are actually not even AI agents as you'd think of them. Most agentic processes today are actually traditional script writing and traditional automation. And that's why I don't self-identify as an agent company at all. We do AI agents, and AI workflows are a core part of what we do, but we do data, we do training and fine-tuning, and agents are one tool in the toolkit, because too much emphasis on them a lot of the time won't work.
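
      [A minimal Python sketch of that script-versus-agent distinction; invoice processing is a hypothetical example and `call_llm` stands in for any model API.]

          import json

          def scripted_flow(invoice):
              # Traditional automation: same input, same output, trivially testable.
              total = sum(line["qty"] * line["unit_price"] for line in invoice["lines"])
              return {"id": invoice["id"], "total": round(total, 2)}

          def agentic_flow(invoice, call_llm, max_retries=2):
              # Non-deterministic: the model can return malformed output, so the
              # workflow must validate and retry rather than trust the response.
              prompt = f"Return the id and total of this invoice as JSON: {invoice}"
              for _ in range(max_retries + 1):
                  raw = call_llm(prompt)
                  try:
                      out = json.loads(raw)
                      if {"id", "total"} <= out.keys():
                          return out
                  except json.JSONDecodeError:
                      pass
              # When the agent's output cannot be validated, fall back to the
              # deterministic flow (echoing the anecdote in the intro).
              return scripted_flow(invoice)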

    20. HS

      (laughs) Did you see the video of the robot going around the house recently? It was like the worst thing ever. It took 11 minutes to take a glass out of the dishwasher.

    21. MF

      Yeah. (laughs) I didn't see that. I got this-

    22. HS

      And then at the end of it, it's like, "And this was controlled by Simon in the back room." And you're like, "The shittest robot ever was then controlled by some weird dude in your back bedroom."

    23. MF

      (laughs)

    24. HS

      Like, this is so shit. (laughs)

    25. MF

      I do think that it... yeah.

    26. HS

      (laughs)

    27. MF

      I did see that. And look, I think robotics is another one that will take longer but will be really interesting when it works. By the way, I think even in that case, you'll need more task-specific robotics, not just broad-based.

    28. HS

      Have you ever faked it till you make it and been caught out? And did you learn anything from it?

    29. MF

      When I first started working in this... it wasn't even called AI back then; it was actually called data analytics. This was probably 12 or 13 years ago, in my McKinsey days. The firm gave me a pretty interesting purview to explore where I could build out AI offerings across different sectors and customer bases. And I don't think I knew what I was gonna build, candidly. The interesting dynamic is that I had a lot of conviction, partly because of some of the things I'd done before, that AI could be really useful on a whole host of things, from inventory forecasting, to pricing, to credit underwriting, if you just thought intuitively about the sources of data. 70% of the software in America is over 20 years old, most of that data is massively fragmented and not clean, and so a lot of the decisioning that happens in the enterprise is done in a really fragmented way.

    30. HS

      Mm-hmm.
