No Priors

No Priors Ep. 97 | With Decagon CEO and Co-Founder Jesse Zhang

Today on No Priors, co-founder and CEO of Decagon, Jesse Zhang, joins Elad to discuss the future of agentic customer support. Decagon provides AI-powered customer interactions for companies like Rippling, Notion, Duolingo, ClassPass, Substack, Vanta, Eventbrite, and more. Jesse shares the thesis behind starting Decagon, why he sees customer support as the ideal entry point for agentic technology, and what areas of AI excite him most. They also discuss voice-based interfaces, issues with latency in current capabilities, and the connection between young math olympiad communities and today's AI startups.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @TheJesseZhang

Show Notes:
0:00 Introduction
0:30 Starting Decagon
3:15 Business impact of adopting agents for customer support and customer ops
8:00 AI infrastructure and models for customer success agents
12:05 Voice-based capabilities and text-to-speech engines
15:00 Combatting latency
16:25 Crossover of math and AI communities
21:12 Exciting areas of AI
25:29 Strengths and weaknesses of agents

Elad Gil (host), Jesse Zhang (guest)
Jan 16, 2025 · 30m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–0:30

    Introduction

    1. EG

      (Robot sound) Hello, and welcome to No Priors. Today I'm talking with Jesse Zhang, co-founder of Decagon. Decagon is an early-stage company building enterprise-grade generative AI for customer support. Founded in August of 2023, their platform is already being used by large enterprises and fast-growing startups like Rippling, Notion, Duolingo, ClassPass, Eventbrite, Vanta, and more. Jesse, welcome to No Priors.

    2. JZ

      Of course. Thanks for having me, Elad.

  2. 0:30–3:15

    Starting Decagon

    1. JZ

    2. EG

      Absolutely. Maybe we can start with a little bit of your background and what Decagon does. You're a serial founder. You started another company before this, Syntactic Bot, and now you and Ashwin have started Decagon. You've been working on it for a while and have seen some really interesting adoption from companies like Rippling, Notion, Eventbrite, Vanta, Substack, and many others, right? So you've really started to carve out a real space for the company. Could you tell us a little bit more about what Decagon does, how it works, and what the focus of the company is?

    3. JZ

      Of course, yeah. So, quick background on me: I grew up in Boulder, did a lot of math contests and stuff like that growing up, and studied CS at Harvard. As you mentioned, I started a company right out of school. That company was eventually bought by Niantic, and then I left to start this company. Ashwin and I met through mutual friends, officially at a VC offsite, and when we got together we were like, okay, the biggest learning from the first company is that you can't really overthink things too much. We started by just being interested in AI agents. It's very exciting technology, arguably the coolest thing from this generation. And we just talked to a bunch of customers, like the ones you listed. I think over the years we've gotten a lot better at figuring out how to talk to folks and what questions to ask, and through that process we arrived at our current use case as maybe what we think is the golden use case for these AI agents, which is customer interactions, customer service. The use case is very tailor-made for what LLMs are good at. And so we started building from there, right? We still weren't thinking too much about the vision or anything yet. It was just: all right, we had a lot of customers in front of us. How can we make it so that they're happy and really like what we're building? And that led to where we're at now. I would say right now, as a company, Decagon ships these AI agents for folks to use on the customer service, customer experience side.
The thing that's made us special so far is a huge focus on transparency. When people use us, especially these larger companies, it's very important for them that the AI agent is not a black box; that even though LLMs are cool and there's a lot you can do with them, they can see how decisions are being made, what data is being used, and how you come up with answers, and that if they want to give feedback, they can. So currently we're in production with a bunch of these large folks that have large support teams. Pretty much any company that has a sizable support operation is a good fit for us.

  3. 3:15–8:00

    Business impact of adopting agents for customer support and customer ops

    1. JZ

    2. EG

      That makes a lot of sense. It's interesting because one of the things that's been really striking over, say, the last year in the AI world is that the CEO of Klarna posted on X about the impact AI has had on their customer support and service team. Klarna is a buy now, pay later service out of Europe. His post basically said that in the first four weeks, AI handled 2.3 million customer service chats. Customer satisfaction was on par with humans. There was a 25% reduction in repeat inquiries relative to people. It resolved customer errands or issues in two minutes versus 11 minutes for a human agent. And instantly they were live 24/7 in 23 markets and 35 languages, because AI supports so many things. So it had a huge impact on that company, and I think they shifted 700 full-time agents to do other work, in terms of the impact on Klarna itself as an organization. What sort of impact have you been seeing with your customers as they adopt this sort of technology, and how do you think through the lens of what you're really bringing to these customers and the sort of satisfaction their own end users have?

    3. JZ

      It's an interesting way to think about it. All these people are shipping this use case, right? There are a lot of evangelists out there, which is nice. And the Klarna article is awesome; it gave a lot of tailwinds to the industry. One interesting thing we've seen is that the benefits people get are all roughly in the same vein, but different people prioritize different things. At this point, it's not really even that much of a hot take to say that in a couple of years these agents are going to be super pervasive. People can use them for all these customer interactions; they're going to be everywhere. And so, to your point, what is the benefit? For our customers, it's always the same. One: what fraction of total work, in this case conversations, can the AI agent do? How much work is this saving us? And two: how much happier are our customers? What's the customer satisfaction score, the NPS score? Those two are often just the leaders by far. As I said before, different people value each one slightly differently. And then there are other things like, okay, we want to make sure there's accuracy; if we're in a regulated industry, this has to be very accurate for us. So those are where the benefits lie. We're saving a bunch of money, saving time and resources, but also making the customers happier, and that can lead to higher retention and more conversions, so there's a lot more upside there. It's like you're giving every customer a personal concierge, basically, in their pocket that they can chat with anytime, in any language, 24/7, and that can be pretty transformational for a lot of businesses.

    4. EG

      Is there any, um, example customer that you can talk about as a case study in terms of the impact this has had, how it's lifted their metrics, the success they've seen using Decagon?

    5. JZ

      Of course, yeah. We just did a big case study with a company called Bilt Rewards, a great use case for us. They have a very large user base, growing very quickly; you use it to either earn points or make payments. A lot of my friends use the product. And because they have a large customer base, people will have questions, people will have things they need help on, so the number of support inquiries basically grows linearly with the number of users. And because they're growing so fast, basically exponentially, the number of support queries is also growing exponentially. So when they first started using us, that was the main goal: "Holy crap, we're getting overwhelmed by all this volume. Can AI help here?" And the thing that ended up happening was, within basically a month of starting to use us, they were able to stop scaling their team, and the AI took over a lot of the automation. It just makes everything very smooth. Now we're almost a year in at this point, and they've been able to really restructure their customer support team. Again, we published a case study on this where they were able to quantify the savings: so far, it's around 65 agents' worth of headcount saved. So a very tangible difference, and for us it's also great because we're able to provide them the value there; it's a very easy ROI. But the customer experience is also a lot snappier, and they get a lot of social media posts like, "Holy crap, I just tried the Bilt Rewards support thing and it doesn't feel like any AI or chatbot system we've ever used before." So that makes us happy.

  4. 8:00–12:05

    AI infrastructure and models for customer success agents

    1. JZ

    2. EG

      Could you tell me a little bit more about what you've built from a technology and infrastructure perspective? I guess there are the core models that anybody can access, the GPT-4os of the world, or the Claude Sonnets, et cetera. And then there's all the stuff you've built on top of them to actually make this work well for your specific use case and for customer support agents. Could you tell us a bit more about what you all have had to build over time?

    3. JZ

      Of course. Like you said, everyone has access to the same models. We see ourselves very much as a software company, and we're obviously doing a lot of work around AI and using the AI models a lot. But I would argue that most applications nowadays are real software companies, and AI models are tools that everyone can use. So most of the alpha, most of the special stuff that you build, is on top of the models. It's either the orchestration layer or the software around it. For us, there's been a big focus on both. The orchestration layer is how you use all these different models together. You probably have evals set up that measure how good each model is at certain things. You put them together, and the whole goal of putting them together is to mold it around the business logic of the customer. So that's part one. The other thing you build is just very classic software. You have this AI agent there, and it's all the things I was saying before: transparency is a big piece. You really don't want this to feel like a black box that's just there answering questions. So how can you build all the tooling to see what data the agent is using and what steps it's taking? Can I analyze all these conversations that are coming in? If you have a million conversations, no one's reading all those, so how can you make it so that the LLM can read every single conversation, tell you how things are going, find gaps in its knowledge, and give you a breakdown of the big categories you should care about, like a trend that's been interesting? So that's all the software around it that we're building. That's typically how it's structured, and the orchestration layer, I think, is going to be different for every agent.
Our agent versus, say, a coding agent: that orchestration is going to look pretty different. But at the end of the day, you're kind of just building a structure on top of the LLMs.
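The pattern described here, offline evals per model per task plus customer business logic wrapped around the model calls, can be sketched minimally. Everything below (the `Orchestrator` class, task names, scores, the escalation policy) is an illustrative assumption, not Decagon's actual system:

```python
# Minimal sketch of an orchestration layer: route each task type to the
# model that scored best on offline evals, with a business-logic guardrail
# applied before any model is called. All names and numbers are invented.

from dataclasses import dataclass, field


@dataclass
class Orchestrator:
    # eval_scores[task][model] = accuracy measured on an offline eval set
    eval_scores: dict = field(default_factory=dict)

    def best_model(self, task: str) -> str:
        # Pick the model with the highest eval score for this task type.
        scores = self.eval_scores[task]
        return max(scores, key=scores.get)

    def run(self, task: str, query: str, policy: dict) -> str:
        # Customer business logic first: some tasks must go to a human.
        if task in policy.get("escalate_tasks", []):
            return "escalate_to_human"
        model = self.best_model(task)
        return f"{model} -> {query}"  # stand-in for a real model call


orch = Orchestrator(eval_scores={
    "classify_intent": {"small-model": 0.91, "large-model": 0.89},
    "draft_reply": {"small-model": 0.74, "large-model": 0.93},
})
print(orch.run("classify_intent", "where is my order?", {}))
print(orch.run("draft_reply", "refund request", {"escalate_tasks": ["draft_reply"]}))
```

The point of the sketch is the shape, not the details: the eval table is what gets updated as new models ship, while the policy hook is where each customer's business logic lives.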

    4. EG

      Yeah. It seems like we're very early in the days of true agentic stuff, and that includes the ability to sequence chains of events and certain forms of reasoning. Obviously there are things like o1 and others that have been coming out to start to try to address this, but we seem quite early in the scaling curves. What do you think are the main pieces of technology that are missing to really take you, or your sort of vision, to the next level in terms of how these agentic systems should work?

    5. JZ

      Yeah, so one thing we were talking about the other day is that there are actually different types of intelligence with the AI models, and a lot of the recent developments with o1 or Sonnet and stuff like that have been around, I guess, quantitative reasoning intelligence. They've gotten better at coding. They've gotten better at math. For us, those things help, but they're actually not the biggest difference-maker. In our use case, the type of intelligence that matters most we would probably describe as instruction following: you have a bunch of instructions; can you follow them to a T? I'm sure there are other types as well, but for us, we're excited to see developments in the other areas too. Everyone's saying, "Oh, there's a plateau happening with the core models and the intelligence." I think when most people say intelligence like that, they're probably talking about the reasoning capabilities. For us and the agentic flows that we use, instruction following is a huge piece, because just think about a customer service SOP or a workflow or something like that: you have to be very accurate about it. I know there's research going on about this in the major labs, and I think that's one

  5. 12:05–15:00

    Voice-based capabilities and text-to-speech engines

    1. JZ

      thing we're looking forward to next year.

    2. EG

      One other area that really touches on customer success, customer support, and sort of user experience is voice-based support. I think one of the things that's a little under-discussed in the AI world, because we keep talking about large language models and understanding of text, and obviously that stuff is crucial to everything else, is text-to-speech engines and the ability to understand the spoken word and then respond with audio. There are companies like Cartesia, ElevenLabs, OpenAI, Google, et cetera, who are starting to provide some of these services and APIs. How much of an impact does that have on what you're doing? Or is that a separate type of product? How do you think about the voice component of these things?

    3. JZ

      Great question. A huge impact. We have customers now trying our voice agents, and if you just think about our space, the overall problem is the same: you have a bunch of customers, they have questions or issues or things they need to talk about, and the channel really doesn't matter to them. Some people prefer voice, some prefer chat, some prefer email, some prefer SMS. So our job is to handle all of those, and obviously you start with text because that's the easier one, and it's easy for the customer to evaluate as well. I think just now you're getting to the point where big companies are very interested in voice. They've actually seen the results of a text-based agent, and they're like, okay, we should be able to generate voices and do the same thing for phone calls. None of this would be possible without the models from the companies you just listed: ElevenLabs, OpenAI is doing some cool stuff, Cartesia. There have also been huge strides this year in how realistic the voices sound. And latency matters a lot in our use case, because if you're making a phone call, you expect things to feel very snappy. So yeah, a big topic for us, and we're working with these companies pretty closely right now on how you can actually build these things well at scale. As they get better, it's also going to be huge for us to keep delivering these voice engines.

    4. EG

      Mm-hmm. Makes sense. Yeah, my sense is one of the issues is latency: it takes enough time to take an audio stream where somebody's talking, translate that into text, feed that into a language model, and then output it as voice again, that there are a lot of pauses where people have to wait. And there are different things people have been trying to do in the background, like streaming the potential responses back out, to try to shorten that latency timeline.

  6. 15:00–16:25

    Combatting latency

    1. EG

      Do you feel latency is still an issue, or is it just solved by integrating voice directly into the models in a deeper way for some of these services? When do you think latency becomes a solved problem for these sorts of application areas?

    2. JZ

      I think latency is a big deal here, of course. With voice models, nowadays you have the voice-to-voice models that we're playing around with; OpenAI is doing a lot of work here. There are obviously a lot of trade-offs. Voice-to-voice latency is great, but sometimes with these production use cases you do need the extra computation cycles to fetch data or do multiple model calls, or there might be other reasons you can't do voice-to-voice. So that's one option you would consider. The other one is the one you described, where you're transcribing, doing speech-to-text, then doing all the computation in text, and then generating the voice at the end. That always adds a little extra latency, of course, and as you mentioned, a lot of folks have figured out fairly clever ways to get around that. You can start generating stuff first. In our use case, you can always say something like, "Hey, give me a sec. I'm looking up your data." These are all things we're playing around with, and for each customer we work with there are different trade-offs, so we're really trying to base what we build on the things we're hearing from them and the priorities that they have.
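The cascaded pipeline and the filler-phrase trick can be sketched roughly like this. All function names and timings are invented for illustration; real STT, LLM, and TTS calls would replace the `time.sleep` stand-ins:

```python
# Sketch of a cascaded voice pipeline: speech -> text -> LLM (+ data fetch)
# -> speech. Each stage adds latency, so one common trick is emitting a
# filler utterance immediately while the slow work runs. Timings are made up.

import time


def transcribe(audio: str) -> str:
    time.sleep(0.05)  # stand-in for a speech-to-text call
    return audio


def answer(text: str) -> str:
    time.sleep(0.2)   # stand-in for an LLM call plus data lookup
    return f"answer to: {text}"


def synthesize(text: str) -> str:
    time.sleep(0.05)  # stand-in for text-to-speech
    return f"[audio] {text}"


def respond(audio: str):
    """Yield utterances as soon as each one is ready."""
    text = transcribe(audio)
    # Emit a filler right away so the caller hears something quickly,
    # instead of dead air while the LLM and data fetch run.
    yield synthesize("Give me a sec, I'm looking that up.")
    yield synthesize(answer(text))


start = time.monotonic()
for utterance in respond("where is my order?"):
    print(f"{time.monotonic() - start:.2f}s  {utterance}")
```

With these stand-in delays, the filler arrives after roughly a tenth of a second while the full answer takes three times as long, which is the perceived-latency gap the filler is there to cover.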

    3. EG

      Mm-hmm. That's cool.

  7. 16:25–21:12

    Crossover of math and AI communities

    1. EG

      One thing that I think is kind of interesting is the number of companies in the AI world today that have been founded by people with Math Olympiad or IOI or other such backgrounds. I think you were involved with Math Olympiad stuff in high school, and I think Decagon has actually hosted some Math Olympiad events for the team, which isn't your typical happy hour. But there are other teams and companies. I mean, before that there was [inaudible] and things like that, but I think the Braintrust team, the Pika team, and Cognition, which launched Devin, and then you all kind of have that common thread. Where do you think that comes from? Why do you think this community is now so active in AI?

    2. JZ

      That's a good question. I mean, we're actually all around the same age, so we've known each other since middle school, high school. For one, it's a great community for us; we have a lot of people on the team with math contest and coding contest backgrounds. I think it's more that this community was always there. Math contests have been around for a while, and a lot of super smart kids go through them. It's also a great way for folks to get to know each other, get connected, and build friendships. And I think the main thing is that in the last few years, maybe the last five or six years, because startups have become a lot more mainstream, a lot of folks in this demographic have gravitated toward startups, as opposed to traditionally going into academia or quant trading and things like that. So there's been a big influx of these super smart, super talented people into the startup world, and because there's this community aspect, folks can see what other people are doing, what sort of works, and the types of companies people are building. Which isn't to say they're all the same, but I think a lot of folks with these backgrounds are now working on startups, and that's why there's a lot of, I guess, progress in the companies folks have been building.

    3. EG

      Mm-hmm. And are there ways that you all have been supporting each other through the startup journey? I feel like every generation there's sort of a clique of people who built some of the more interesting companies, who all kind of interact: they provide advice, maybe they angel invest in each other. There's kind of a thriving community, and every five to seven years it shifts who it is, and I feel like the IOI and Math Olympiad community and the coding competition communities are very engaged right now. Is there any formal version of that, or are y'all just kind of informally helping each other?

    4. JZ

      Yeah, I mean, I angel invest in a lot of the companies you just listed, and a lot of their founders are angel investors in our company. It's very informal, obviously; it's just casual friends helping each other. I think the main thing is that with company building, there's just a lot of surface area. As you know: how do you hire people? How do you do sales? How do you build this thing? How do you structure comp? There are infinite things. So having the other data points is obviously super helpful, and I hang out with them quite often. We play games, play card games. There's a Chinese version of bridge that (laughs) I play with a lot of these folks quite often, and it's fun. You just kind of hang out, everyone's in relatively the same stage of life, and so yeah, like you said, there is definitely a lot of camaraderie and help that goes around.

    5. EG

      Has coming from this background, from the Math Olympiad community, impacted at all how you think about hiring or your hiring practices at Decagon?

    6. JZ

      A little. I mean, if someone else has the same background and has gone through the same contests or programs, obviously that's pretty good signal, since I have a good idea of what those people have done. My co-founder Ashwin has a similar background; he didn't grow up in the US, but in India he did a lot of these contests as well. So I think there's some correlation: people who did a lot of this stuff as kids, now that we're all adults, there's some sort of signal there when you're talking about hiring. But for the most part, there are so many talented people here in SF, at Decagon and at other companies, whether or not they did math contests, that I think our hiring process has been more or less the same. It is a nice sort of trigger for events, though. When you host these events, people come out, and you can get a nice community of folks who are interested in the same things. We're probably going to be hosting more, and not all of them are going to be contest-based; obviously there will be puzzles and things like that, where you just get a lot of fun engineers and people bringing their friends, and that's pretty important to us.

    7. EG

      Mm-hmm. And then I guess, um, for AI

  8. 21:12–25:29

    Exciting areas of AI

    1. EG

      at large, what are you most excited about in the coming years? Or if you were to extrapolate out 12 or 24 months, what are you anticipating most keenly, or what are you waiting for?

    2. JZ

      So obviously the models getting better is awesome. The models getting better across different modalities is also awesome. We talked about voice. There are other modalities that are also tangentially interesting to us. A lot of our customers have software products, and so it would be awesome if, when you're asking questions to the AI agent, it had context of your entire screen and all the interactions you've done. That would be great. And you can go a step further and have it actually help you navigate things. There's just so much you can do with the other modalities, or even just more advanced model capabilities. We've seen the computer use demo from Anthropic: probably, in my opinion, not production-ready yet, but as that gets better there are a lot of cool things you can do there. So on the model side, that's one thing we're excited about. On the non-core-model side, one thesis we have is that as the years go by, I think at this point it's undeniable that there's going to be a reasonable explosion of AI agents used in a bunch of different use cases. Some use cases will take longer than others, but the value they provide is pretty undeniable, so there are definitely going to be a lot of AI agents out in the world, in our use case, customer service, and in other use cases. But one thesis we have is that the nature of the work of human agents, and people like us, is also going to change pretty drastically, and one of the things that's going to change is that there are going to be a lot more people supervising and editing agents. That's something we think about.
We're excited for a lot of the innovations there because, like I said before, a big part right now is that we care about letting the human agents at our customers, and their leadership teams, go in and make changes, monitor the agents, and have a lot of visibility and control. And what does that look like? If you compare it to a human: if you're monitoring a human, you can give them feedback in real time. You can say, "Oh no, don't do this. You did this thing wrong. Please do this next time." When you're doing that with an AI agent, there are a lot of different possibilities, because they have some properties that are different from humans. They're infinitely scalable, and you can really hard-code things sometimes. So that's the other area, probably going into next year, that we're looking forward to.

    3. EG

      Mm-hmm. That's really cool. And do you view that as a main area of differentiation for you relative to some of the other folks on the market providing customer success and support tooling?

    4. JZ

      Yeah, right now that's probably the biggest thing. The interesting thing about our space, and I think this will probably be true for a lot of AI agent spaces, is that the results are very quantifiable. You're basically taking the agent and benchmarking: okay, how good would a human be? How much money is this saving me? How much better quality is the customer experience? And because of that, when people evaluate us in our space, it's a pretty quantitative evaluation. They're like, "Okay, cool. This kind of works. Let me just put you into production for one percent of the volume and build up from there," and maybe do that alongside another option. Or, you know, a lot of the old-school companies like Salesforce: this is a very exciting space for them too, so they're going to have alternatives. And then you just benchmark everyone: how good are the stats? How good are the metrics? How good a job is everyone doing? And I think so far we've been performing very well. The main reason for that is the transparency piece: giving people observability, explainability, and control over the AI. And there's still a long way to go in that field; there's still so much more you could do, and that's been our specialty so far.

    5. EG

      Mm-hmm. That's great. Yeah, I've had some conversations with your customers over time. As people have been trying some of these agents, they've called me to ask questions about different companies in the space, and the three things they tend to point out are: you all ship really fast; you're very responsive as a team and company; and the third, and most important, the product tends to outperform. And so I think that's really been great to watch over time. How do you think

  9. 25:29–30:09

    Strengths and weaknesses of agents

    1. EG

      about the areas where AI agents are going to be successful versus not successful in the short run?

    2. JZ

      So basically, one, one of the things that we have been thinking through, and this is something that was pretty big for us when we were first starting out, is that a lot of ... There's gonna be a huge variance between, like, the different types of AI agents and how successful they'll be and, like, how quickly they'll take to roll out. Because when we were first starting the company, right, like, uh, we, we were pretty open to what to build, and we knew that AI agents was exciting. Uh, at that point, we didn't even know that if there would be any, like, real use cases that would emerge even in the next 12 or 24 months, but we were kind of exploring. I think our view is that for the vast majority of use cases right now, it is still like, like there's not going to be real commercial adoption, uh, with, with the state of the current models because of a bunch of things. So one, one big thing is that if ... In a lot of spaces, you can't... There's really no, like, structure there to, like, incrementally build up. Like, it has to be good, like, almost perfect off the bat. So if you think about, like a space, like, you know, security or, or, or something like that where, okay, okay, you have all these, like, sims out there and it's like, it makes sense. There's like tons of logs. Like, that's, that's perfect for AI models. Uh, but the goal of that job is, like, you need to catch, like any small thing that happens. And so because the models are de- inherently non-deterministic, it's very hard for buyers to, like, really trust a gen AI solution there, and so ... And especially agentic solution. So, like, I think the adoption there is going to be really, really, really slow, a lot slower than people think, even though, like, people have cool demos and, you know, see- things seem to work. Like, just getting real (?) : adoption is gonna be very slow. Um, so that's, that's like one interesting thing we've been thinking about. 
And the other side of that is that, uh, there are also a lot of spaces where, you know, on the surface it seems like, oh, like, AI agents would be perfect here. But then, uh, the sort of followup is that it's actually not that easy to quantify the ROI that's happening. Um, I would ... You know, one example I would give this is like, uh, you know, there's, there are a lot of, like, text-to-SQL companies, like stuff like that where you could kind of see it working, but basically immediately everyone's reaction is, "Oh. This is cool, but, you know, we're still gonna have to have someone, like, monitoring it and, you know, editing it," and so it becomes kind of a copilot. Okay, cool. So then, like, how do we measure, like, how much we should pay for one of these agents? Um, it's very difficult because, like, most teams don't have that many data scientists anyways, and so if you're, if you're claiming that you have an AI agent data scientist, it's like, okay, let's benchmark you against a real one. You're probably not gonna be able to replace a real one. So I think that's the sort of thing where it's like, it's very hard to quantify the ROI. Like, you're saving some people time, but because of that, like, it's, it's, you know, like a ... If you have ... If I'm a large company, it's hard for me to justify, okay, I'm gonna give you a large contract for this, like, AI agent data scientist. Um, so I think those are the things that we're thinking through. Uh, well, not thinking through. Like, in, in the moment, we're obviously just, you know, asking customers what, what their willingness to, you know, to invest in certain things is. But in hindsight, I think we're ... Looking back on the last year, that's been a big thing that's been true, which is the, the use cases that emerge, like you have to have those two qualities.
Like, it, it has to be able to be something that can be rolled out slowly and doesn't have to be perfect off the bat, but is already providing value, right? Like, I think coding agents is, like, a good example of this where, um, you know, you can just like section off some tasks for them and, like, they'll do it. Uh, and the other piece is, uh, the ROI. Like, you have to really be u- able to easily quantify the ROI. I- in our case, luckily you have these support agent teams and, um, you know, people who track metrics very closely. So, uh, that's something we've been thinking about. I think most ... I think the takeaway from that is like, yeah, probably more bearish on a lot of these AI agent use cases in the near term, but I think as these models get better, there's gonna be ... They'll unlock a lot of new use cases.

    3. EG

      Super interesting. Jesse, thank you so much for joining us today.

    4. JZ

      Thanks, Elad. Thanks for hosting. It's great seeing you.

    5. NA

      (Upbeat music) Find us on Twitter @nopriorspod. Subscribe to our YouTube channel if you wanna see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.

Episode duration: 30:09

Transcript of episode CvFpEvcIDVM