No Priors Ep. 21 | With Datadog Co-founder/CEO Olivier Pomel
EVERY SPOKEN WORD
95 min read · 19,191 words
- 0:00 – 6:54
DevOps and AI Potential
- Sarah Guo
Welcome to No Priors. Today, we're speaking with Olivier Pomel, the co-founder and CEO of Datadog, the company at the forefront of the DevOps revolution. Datadog is a leading observability and security platform for cloud applications. Its execution and ambition have impressed me for years, especially since learning more about the company after it acquired Sqreen in 2021, a security startup I was a board member of. I'm excited to be talking about the potential for AI in DevOps. Olivier, welcome to No Priors.
- Olivier Pomel
Thanks for having me.
- Elad Gil
Uh, so let's start with a little bit of personal background. Um, you're French. Uh, you've been in the US working on startups since '99. How did you start to think about starting a company, and Datadog in particular?
- Olivier Pomel
Yes. So, um, so yes, I'm, I'm from France, and I guess n- nobody's perfect. Uh, I, uh, I'm an engineer, uh, also. I got into computers, uh, largely through computer graphics, and when I was a kid, I, I used to follow the demo scene in Europe, you know, which was all about 3D and, you know, doing interesting things in, in real time. Uh, this led me later on to be one of the first authors of VLC, the media player, which, uh, you know, I think is mostly used for viewing illegally downloaded videos.
- Elad Gil
(laughs)
- Olivier Pomel
Um, and I should say, most of the people who made that successful came in after me, picked up the project, and did something fantastic with it after I left for the US. Um, and then moved to the US to work, out of all places, uh, for IBM Research in Upstate New York, um, and ended up... and I thought I would stay six months, and I've been here since 1999. So it's been a while. I, um, worked for a number of startups, um, through the, uh, I would say the tail end of the dot-com boom. I arrived right in time for the bust, and after that, you know, worked for, I, I think eight years for an education software company, education startup, um, that was doing SaaS for schools based in New York. Um, and it was at this company that I spent quite a bit of time with the person who went on to co-found Datadog, and that's where we had the idea to, uh, to start Datadog basically. I used to run the dev team there, and he used to run the ops team, and we hired everyone on our teams. We tried hard not to hire jerks, uh, and we are very good friends. You know, we've known each other since the IBM days basically, and we still ended up, you know, with dev and ops hating each other, people pointing fingers at each other all day long, big fights. So the, the starting point for Datadog was not monitoring, uh... it was not even the cloud initially. It was, "Let's get dev and ops on the same page. Uh, let's give them a platform, some place they can work together, see the world the same way."
- Elad Gil
Yeah, that's actually quite different than thinking of it as, like, a, uh, ticketing relationship, right? A quite siloed relationship, um, between the two areas, because I think most people assume that Datadog comes from a place that was more like metrics or, like, we knew the cloud was coming, and I'm sure, you know, both these things are true, but it's interesting that the, the core starting point is really around, like, dev and ops collaboration. You are, I think, pretty long on NYC and have been challenged on, like, building infra in NYC, as I think you even attempted before. Like, tell me about your original thinking on that, or if it even crossed your mind.
- Olivier Pomel
Uh, well, I mean, so I, I, I stayed in the US because I loved NYC, you know? That's why I, I ended up staying here. I love the energy, I love the, uh, the diversity of the city. I also met my wife in NYC, and she's also not French, you know, and she's also not American, you know, so it also made sense for us to stay, to stay in the city. Uh, so when we started, my co-founder and I, it makes total sen- it made total sense to, uh, to start a company in New York. We also knew fantastic engineers we could hire in New York, so it, it was very obvious on that standpoint. Uh, I would say it was less obvious when we started fundraising, uh, because we didn't come from, you know, systems management or observability or anything like that, uh, and we were based in New York, which was not seen as a great place to start an infrastructure company at the time. Um, so I would say for most investors, especially Bay Area investors, I think it was considered as, as some form of mental impairment, you know, to stay in New York at the time. It made it harder to fundraise, um, and I think as a, as a result, uh, it made us more successful, because we were so scared of getting it wrong and so scared of not being able to fund the company any further that we really doubled down on building the right product for one thing, but also, um, you know, we built a company that was very efficient from day one and, you know, hovered around profitability throughout its who- its whole existence pretty much. And I think, you know, in, in the long run, it's been an advantage. Like everything else, you know, everything that's a long-term advantage typically turns out to be very difficult in the short term.
- Sarah Guo
How else do you think New York has benefited you? Because I feel like now, it's, uh, it's kind of an obvious place to start a company, and to your point when you first got started, it was very different. It feels like there was always some good talent pools there. I know Google had a giant office there and Meta set up one, and, you know, it was really sort of flourishing over the last decade, and now, it definitely feels like a very strong standalone ecosystem. But are there other aspects of either recruiting or other things that, that you really benefited from being in New York?
- Olivier Pomel
Yeah, I would say there's really two things. The first one is, from a customer perspective, we're sort of out of the echo chamber of the Bay Area, which makes it easier, I would say, to latch on to what really matters to customers, and not just what's a fantastic idea you told three people, who then repeated it to three others, and then it came back to you and it sounds even better now. And there's a lot of companies, or basically all... many known tech companies in New York that you can sell to, and you can get a good idea of, uh, what they need basically. Th- the second aspect I think that benefited us is that... So it's a bit more difficult to recruit in New York. Uh, there's less pure tech talent, there's less deep tech talent in New York than there is in the Bay Area, but the retention is a lot higher. So, you know, if you give people, you know, great responsibilities, interesting work, treat them well, they're going to stay with you for three, four, or five years more, you know? Um, which I think in the Bay Area is pretty much on the, you know, very high end of what you can expect. And we see this from looking at data. Most of our customers are engineers and most of our users are engineers, um, and so we, we see basically when, when their individual accounts churn at our customers', uh, organizations, and we see that it's not rare for companies in the Bay Area to have, you know, engineers churn every 18 months. We think it's, it's really hard to build a successful company that way. I think you, you do have to over-invest if you want to do that.
- Elad Gil
So one last background question before we ask you to talk, uh, just a little bit about Datadog today. Can you explain the name?
- Olivier Pomel
(laughs) So, yeah. So it's interesting because the, uh... I- I'm not a dog person. My co-founder never had any dogs. Uh, in our previous company, we used to name production servers after dogs, and DataDogs were the production databases. And Datadog 17 was a horrible Oracle database that everybody lived in fear of, uh, that had to double in size every six months to support the growth of the business, and that could not go down. So for us, it was the name of pain, it was the, the old world, uh, it was
- 6:54 – 20:40
Datadog and Generative AI
- Olivier Pomel
where we were coming from, it was the name of pain. So we used it as a code name when we started the product, uh, and we actually called it Datadog 17. Everybody remembered Datadog, uh, so we kept it. We dropped the 17 so it wouldn't, uh, sound like a MySpace handle, and, uh, and, you know, then we had a designer propose, you know, a puppy as the, uh, as the logo, among a sea of, you know, alpha, alpha dogs and things like that, and hunting dogs. And I think the smartest branding decision we've made was to keep the name and to keep the, uh, the puppy.
- Elad Gil
Love it.
- Sarah Guo
So Datadog is, um, clearly a leader in observability and security for cloud environments. You've had an enormous success.
- Elad Gil
Mm-hmm.
- Sarah Guo
I think you're now approaching a $2 billion run rate, 26,000 customers, you're really mission-critical to a variety of different folks who spend, in some cases, more than $10 million a year on you. But for th- for those, uh, listeners that we have who may be a little bit less familiar, could you give us quick background on what Datadog provides and, uh, almost like a Datadog 101?
- Olivier Pomel
Yeah, so what we do is, we basically gather together all the information our customers have about the infrastructure they're running, the applications they're running, uh, how these applications are changing over time, what the developers are doing to them, who the users of these applications are, how they're using them, you know, what, what are they clicking on, where are they going next, what the applications are logging, what they're telling about what they're doing themselves on the systems. Um, so we cover everything end-to-end, basically. We sell to, uh, engineers, so the, the, the folks who buy our product are, uh, typically the ops teams or devops teams in a company. Um, and the, uh, the vast majority of the users are developers and engineers, and then some of our users are going to be product managers, they're going to be security engineers, they're going to be all of the other functions that gravitate around product development and product operations.
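(To make the data-gathering concrete: the agent-based model Olivier describes usually starts with applications emitting metrics to a locally deployed agent. Below is a minimal sketch of that idea using a DogStatsD-style plain-text datagram over UDP; the metric name, tags, and port are illustrative assumptions, not Datadog's actual client library:)

```python
import socket

def format_dogstatsd(name, value, metric_type="g", tags=None):
    """Build a DogStatsD-style datagram, e.g. 'checkout.latency:42|g|#env:prod'."""
    datagram = f"{name}:{value}|{metric_type}"
    if tags:
        # Tags are sorted only to make the output deterministic for this sketch.
        tag_str = ",".join(f"{k}:{v}" for k, v in sorted(tags.items()))
        datagram += f"|#{tag_str}"
    return datagram

def send_metric(name, value, metric_type="g", tags=None,
                host="127.0.0.1", port=8125):
    """Fire-and-forget UDP send to a local agent listening on the StatsD port."""
    payload = format_dogstatsd(name, value, metric_type, tags).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

# Example: report a request-latency gauge tagged by service and environment.
send_metric("web.request.latency_ms", 42.0, "g",
            {"service": "checkout", "env": "prod"})
```

(Because the send is UDP, the application never blocks on the agent — one reason this "emit from everywhere" pattern scales to every server.)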
- Sarah Guo
How do you think about translating some of those products or areas into the generative AI or LLM world? I know that, you know, obviously, cloud spend is now 25% of IT, and there's been these really big shifts now-
- Olivier Pomel
Mm-hmm.
- Sarah Guo
... in terms of adoption of AI, and it's very early, right? It's extremely early days, at least for this new wave of LLMs. Obviously, we've u- been using machine learning for 10, 20 years. How do you think about w- you know, where observability and other aspects of your product go in this, in this sort of emerging new area?
- Olivier Pomel
Yeah, so we, we f- we find the area quite exciting, actually. Uh, I mean, there's, there's two parts to it. One part is the demand side, you know, so what's happening in the market, um, that is driving the use of, you know, compute and building more applications and things like that, and the other side is, uh, what we do on the solution side with the product and how we can use, uh, uh, generative AI there. Uh, on the demand side, it's exciting at so many levels. You know, if you think at the, the, the highest level possible about what, what might happen in the, in the long, the long run, we think that there are gonna be so many more applications written by so many more people, um, it's going to improve productivity of engineers. And at a high level, you know, if you, if you imagine that one person is going to, to be 10 times more productive, it means that they're going to write 10 times more stuff, uh, but they're also going to understand what they write 10 times less, um-
- Elad Gil
Mm-hmm.
- Olivier Pomel
... just because they don't have the time to see and understand everything they do. And as a result, you know, we think it actually moves a lot, or transfers a lot of the value, from writing the software to then understanding it, securing it, running it, modifying it when it breaks, and things like that, uh, which ends up being what we do. So we're thinking that for our industry, you know, it, it's great in general. From a workload perspective, uh, we already see actually an explosion of the workloads, uh, in terms of providing AI services or consuming AI services, you know, so it actually consumes a lot of infrastructure to train those models, to run them, um, so we're going to see, you know, a lot more of that. We also see a lot of new technologies that are being used there, uh, new components, this whole new stack that is emerging, um, so, you know, overall, it's, it's exciting at every single level.
- Elad Gil
Yeah.
- Olivier Pomel
But to your earlier point, though, it's, uh, still very early, um, so, you know, it's hard to tell what actually is going to be the killer app, you know, for all of that, um, you know, in six months, in a year, in two years. Uh, it's possible that some of the things we've seen with LLMs, you know, where all of a sudden everything's a chatbot, uh, it's possible that it's not the way people want to, uh, to interact with everything, you know, two years from now. You know, for example, when you start your car, you don't want to play 20 questions with it, you know, you just want to start it. Um, it might be the same thing with a lot of the products that are today starting to, uh, to implement, uh, LLMs. But what, what I think is pretty certain is that we're going to see a net expansion of the use cases and expansion of workloads. Uh, also maybe an acceleration of the transformations, whether that's digital transformation or cloud migration, that are bringing all of that in and making all of that possible. You know, if you want to adopt AI, you, you actually have to have your data digitally. I mean, it's, uh, it sounds obvious, but, you know, it's not the case still for most companies. Um, and second, uh, you also have to, to be in the cloud, you know, how else would you do it? Like, if you try to build everything on prem today, you wouldn't even know what to buy, because the technology is changing so fast. So I think it's accelerating all those trends, which is very exciting.
- Sarah Guo
I- I've definitely seen a lot of enterprise buyers sort of change their mind almost on a monthly basis, in terms of what they view as the primary components of the stack that they're using.... and that could be the specific LLM they're using, should they use a vector database or not. Like, the, the whole set of components seem to be very rapidly morphing. And you mentioned earlier y- you're sort of seeing this emergence of a new stack. Are there any specific components that you'd like to call out, or that you think, you know, are gonna kind of stick, or how do you kind of view the evolution of this area?
- Olivier Pomel
So, I- I would start by saying, it's extremely hard to know what's going to stick in the end. Uh, and for us, you know, it's actually a very new place to be. You know, as a company, we've been very good over the pan- past 10 years at understanding which trends are picking up, which are act- actually going to be the winning platforms, you know, when the world went from, you know, VMs to inst- to cloud instances, from cloud instances to containers, from containers to managed containers with, uh, Kubernetes, from that to serverless. Like, it always took, you know, a year, two years, three years for those technologies to, uh, to gain mass adoption, and it was very clear what, what the killer apps and what the winners were going to be. With generative AI, it's not the case. It's changing so fast, and everybody's exploring all of the various permutations of the, of the stack and all the various technologies so fast, that it's really, really, really hard, you know, to tell what's going to, uh, to stick. If, if I had to take a guess, I think the, the one thing that's been the most surprising was the speed at which the, uh, open source ecosystem has been, uh, innovating and building, uh, better and better models. You know, nobody has quite caught up to OpenAI yet in terms of what the frontier model is and the maximum level of performance you can get, uh, but I think we've all been surprised by the, uh, the amount of new technologies that have come out in the open source that do as good or even better on smaller, more identified use cases, and I think we should expect to see, uh, you know, a lot more of that. So when we, when we look at our customers and what they're doing, and, and we do serve the largest providers of AI, but also the largest consumers, uh, we see that today everybody's testing and prototyping with one of the, the best, uh, API-gated models, like typically the OpenA- OpenAI ones.
There are a few others. Uh, but everyone is also keeping in the back of their minds what they can still, uh, what they can then bring in house, uh, with an open source model they trained themselves and host themselves, um, and what part of the functionality they can break down to, to which one of those models. So, I think we probably will have a very different picture, you know, uh, a year or two years from now.
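(A sketch of the prototype-on-frontier, bring-in-house-later pattern Olivier describes: a small routing layer that sends easy requests to a cheap self-hosted model and hard ones to a frontier API. Everything here — the model names, cost figures, stub handlers, and the difficulty heuristic — is invented for illustration, not a real provider's API:)

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float        # illustrative numbers, not real pricing
    handler: Callable[[str], str]    # stand-in for an actual inference client

# Two hypothetical backends: a self-hosted open-source model and a frontier API.
cheap = Model("self-hosted-7b", 0.0002, lambda p: f"[7b] {p[:20]}...")
frontier = Model("frontier-api", 0.03, lambda p: f"[frontier] {p[:20]}...")

def classify_difficulty(prompt: str) -> str:
    """Toy heuristic: long or multi-step prompts go to the stronger model."""
    hard_markers = ("step by step", "prove", "refactor")
    if len(prompt) > 500 or any(m in prompt.lower() for m in hard_markers):
        return "hard"
    return "easy"

def route(prompt: str):
    """Return (model name, response), preferring the cheap model when possible."""
    model = frontier if classify_difficulty(prompt) == "hard" else cheap
    return model.name, model.handler(prompt)

name, response = route("Summarize this ticket title")
print(name)  # the short prompt is routed to the self-hosted model
```

(In practice the heuristic is the hard part — teams replace it with evals per use case — but the shape of the router stays this simple.)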
- Sarah Guo
Yeah. It definitely feels like the most sophisticated users are basically asking when can they fall back on the cheapest possible solution, and when do they need to use the most advanced technology, and then how do they effectively route a specific prompt or user action or something else against those, so it's, it's very interesting to watch this evolve. Um, I know that you all released an LLM API monitoring product. Um, is there a whole new observability and tooling stack needed for AI? And if so, what are the main components, and if not, you know... (laughs)
- Olivier Pomel
Yeah. Well, first, there is an observability n- uh, stack needed, period, right? So you, you do need to fold that in, uh, because that's just one more component you're using. And as I said earlier, there's a whole new stack that's emerging, you know? So if you're going to use a frontier model from one of the big, you know, API-gated providers, you need to monitor that. You need to understand what goes into it, you need to understand how it responds to you, wh- whether it responds right or wrong, and how it interacts with the rest of your application. Uh, then if you use, you know, a vector database, or if you, if you host the, uh, model yourself, and you use specific computing infrastructure for that, and you have GPUs and things like that, you also need to, uh, to instrument and, and observe all that and, and figure out how you can optimize all this. So there's a whole new set of components, uh, from this new stack, that needs to be observed, and they can be observed pretty much the same way anything else can be observed, you know, which is with metrics, you know, traces and logs and that sort of stuff. I would say there's probably also a, um, a whole new set of use cases around, you know, what used to be called MLOps, or, uh, now might be called LLMOps. And, and that's a field, by the way, we've only been watching from afar over the past few years, and the reason for that was that, you know, we saw 100 different companies do that, uh, but few of them, you know, reach true traction, because the use cases were all over the map. And the users tended to be very small groups of data scientists that also preferred to build things themselves in a very bespoke way, so it was very difficult to actually come up with a product, uh, that would be widely applicable, uh, and that would be, uh, you know, also something you can sell to, to your customers.
And I think today it's changed quite a bit, because, uh, LLMs are the killer app. Um, everybody is, is trying to use them, and the users, instead of just being a handful of data scientists in every company, end up being pretty much every single developer, uh, and they are less interested in building the models themselves, uh, than they are in making use of them in applications in a way that is reliable, makes sense, and, and that they can run day in and day out for their customers. So, I think there's a whole new set of use cases around that, um, that are very likely to emerge and be very valuable to, uh, to, to those developers, and, and these have more to do with understanding what the model, what, what the model's doing, whether it is doing it right or wrong, how it is changing over time, and how the various changes to the application can improve or, or not improve the model.
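(Observing an LLM call "the same way anything else is observed" — metrics, traces, logs — can be sketched with only the standard library. The `call_llm` stub below is a hypothetical stand-in for a real provider client, and the recorded fields are the kind of telemetry described here, not any specific product's schema:)

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("llm.observability")

def call_llm(prompt: str) -> str:
    """Stand-in for a real model client; returns a canned response."""
    return f"echo: {prompt}"

def observed_call(prompt: str) -> str:
    """Wrap an LLM call with a trace id, a latency metric, and a structured log."""
    trace_id = uuid.uuid4().hex  # correlates this call with the rest of the request
    start = time.perf_counter()
    response = call_llm(prompt)
    latency_ms = (time.perf_counter() - start) * 1000.0
    # One structured log line per call: in a real setup this would be shipped
    # alongside the application's other logs and traces.
    log.info(json.dumps({
        "trace_id": trace_id,
        "metric.latency_ms": round(latency_ms, 2),
        "prompt_chars": len(prompt),
        "response_chars": len(response),
    }))
    return response

print(observed_call("what is observability?"))
```

(Token counts, model version, and a right-or-wrong quality signal would slot into the same log record; the wrapper shape doesn't change.)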
- Sarah Guo
Yeah. It seems like one of the things that has given Datadog enormous nimbleness is this unified platform that you've built, which is both a big advantage and a big investment. And, you know, my understanding is a pretty large proportion of the Datadog team is working on the platform right now. How do you think about resources being allocated towards the main platform, maintaining it, versus new initiatives like AI and, uh, and, uh, LLMs?
- Olivier Pomel
Yeah. So the, the rule of thumb for us is about half the team is on the platform, and... but that relates to what we do, right? We sell a unified platform. So internally, you know, as I said, half is on the platform, the other half is more on the product side, like specific use cases. Uh, but even the way we organize those, uh, those teams that work on the use cases, they tend to be more focused on problems, they tend to be forward-looking on where the market's going, whereas what we sell to our customers tends to be more aligned to categories that tend to be more backward-looking, which is how people are used to buying stuff. And, and that's very important. You know, when you talk to customers in our space, there's, like, 12, 15, 20 different categories, uh, all with interesting acronyms, and they correspond to things that customers have been buying for, you know, 10, 15, 20 years, and that's how they understand the market. So, we sell into that, while at the same time delivering everything as part of a unified platform that is itself shaped more around what we think the, the world is going to be. So, it's very possible, and it's even likely, that five or ten years from now, uh, our SKUs and our products will have changed drastically, because they correspond to, you know, the evolution of the market, as opposed to, uh, being, you know, pinned into very specific and static definitions of categories. You know, an example of that being observability, which is emerging as one big super category that really encompasses what used to be infrastructure monitoring, application performance monitoring, and log management. And we still sell those three as different SKUs today, but I think it's very likely that, you know, uh, five or ten years from now you don't even think of them as being separate categories anymore. Like, they really become part of one, you know, super integrated category.
I would say there's a specific cost, you know, when it comes to, uh, maintaining a unified platform, you know, which is that we also do some M&A, uh, and we acquire companies such as, you know, Sqreen, the company Sarah was on the board of, and, and graciously signed the, uh, the order to sell the company. But when we do so, the first thing we do is we actually re-platform the companies we've acquired. So, we spend the first year, really, uh, post-acquisition rebuilding everything that the company had built on top of our unified platform. Um, so it's an extra cost, but again, what we deliver to our customers is end-to-end integration, uh, bringing everybody into the same conversation, into the same spa-... the same place. More use cases, more people, more different teams into the same place, and we, we see it as a necessary part of, uh, of maintaining our, our, our differentiation there.
- Elad Gil
You've made a, a handful of other acquisitions, um, to expand the, the product suite, and I think the, the sort of talent group at Datadog. What else has made them successful? Because, you know, it, it seems to me that it has really, like, continued to drive, like, useful product innovation at Datadog, which is not always true with acquisitions.
- Olivier Pomel
Yeah. I mean, the... You know, as, as you know, the, the... Making an acquisition is easy. Like, you know, signing a piece of paper and wiring money is super easy. Anybody can do that.
- 20:40 – 31:46
Datadog's Acquisition and Expansion Strategy
- Olivier Pomel
Uh, the problem is what happens next, you know? So now you've done it, now you have, you have to merge those two things and make them work. Um, I think, you know, in general, the way we approach acquisitions is they always correspond to, um, uh, product areas we want to develop. And, you know, we, we're fairly ambitious. Like, there's a lot of different, uh, product areas we want to cover in the end, that span from, uh, observability, to security, to a number of other things. At the end of the day, we're ready to build them all, but if we can find some great companies, um, to, you know, get us two or three years of a head start in a specific area, you know, we'll, we'll do it whenever we can. So, we start with a very broad, um, pipeline, or very large funnel of companies, um, and then we, you know, we focus basically on the ones that are going to be fantastic fits for us post-acquisition, meaning they are teams we want to build with, uh, and entrepreneurs that can really take us to the next step there with the experience they've, they've gained in a specific area. One thing we're very, very careful of is... so we select for, for entrepreneurs who want to stay and want to build, as opposed to entrepreneurs who are tired and want to sell out. You know, it's a fine reason to sell your company, but it's not a great reason for us to buy, so, you know, that's not what we're going to do. The other thing we do is, when we, uh, close the acquisition, but before we've closed, we have a very, very specific and very, um, short fuse on the integration plan after that. So, we have a plan basically that calls for shipping something together within three months of the acquisition, which is very short, because after an acquisition, people celebrate a little bit, you know, uh, and then they have to get oriented, and new HR systems and whatnot. Three months is a very short time. We don't really care what gets shipped within three months, but we care that sh-...
something gets shipped within three months. And what it does is that it forces everyone to find their way, um, it also, uh, makes it very easy for the, uh, the acquired company to start showing value, uh, which then builds trust. You know, the main issue you have when you acquire companies is, or the main risk, is that, um... It's not that you waste the money of the aqui-... the acquisition, it's that you demoralize everyth-... everybody else in the company, because they see this new company as being acquired and they don't understand the value, or they wonder, you know, why you would pay some new people a lot of money instead of paying the people you have a lot of money to do the same thing. And it's very, very important to show value very, very quickly for that, and we put a, a high emphasis on doing that. So far, we've done it well. You know, um, I'm still expecting us to make some mistakes someday, but so far it's worked out for us.
- Sarah Guo
I wanna go back to, um, something you were saying, right around, like, calling, let's say, environmental technology changes, like, you know, the progression from, like, VMs to containers to, here we are with, um, managed Kubernetes and such. But Datadog as a company has always been amazing to me, because the, the spectrum of things that you want to, that you're ambitious to go after, is very broad, right? Like, I was at Greylock investing in companies that were APM companies and, you know, logging companies, and you have this platform advantage, but you also are still attacking, like, many different customer problems and different categories. Like, can you talk a little bit about how you organize that effort, like, in your mind or as a leadership team, and how, how you sequence it?
- Olivier Pomel
Uh, so in terms of what we go after, you know, first of all, it took us a very long time to go beyond our first product, you know. So we... I think we, we spent the first six or seven years of the company on our first product, and the reason for that is it was really hard to catch up, you know, with the demand for that product. We also realized after the fact that, you know, we, uh... we were fairly lucky in terms of when we entered the market. Like, we had an opportunity to enter what's a sticky market, a market that's hard to displace, um, because of the re-platforming that came with the cloud, you know. So we could start with a smaller product and then expand it as customers were themselves growing into the cloud. Everybody was new to the cloud at the time. So we had to spend that time getting to the, I would say, the minimum, you know, full product for infrastructure monitoring. After that, what has driven the expansion of the platform was really what we saw our customers build themselves. You know, so before we started building APM, uh, we had our customers... or we saw a number of our customers build, uh, a poor man's APM on top of Datadog. We didn't have the primitives for it, you know, but our product is open-ended enough that they could actually build and script around it and do all sorts of things. Um, so we, we saw it, and, uh, it made perfect sense. If it made sense for them to build it, and for us to be part of the, of the solution there, uh, we thought it would make sense for us to build it for them. So in great part, that's what guides the development of the, of the platform. The first threshold was really to get from a single-product company to having, uh, two or more products that were successful, and it was not easy. Uh, once we had done that, um, that's what gave us, for one thing, the, uh, confidence to take the company public, because we understood we could grow it a lot for a very long time.
Um, but also, that really opened us up to, you know, "Okay, let's, let's look at what our customers are doing with our product, what problems we can solve for them, and use the secret weapon of the company, which is the surface of contact we have with customers." You know, we're deployed on every single one of the servers they have, because we start with infrastructure monitoring. So we're deployed everywhere, and we touch every single engineer, so we're used by everyone every day. And that surface of contact is then what lets us expand, uh, solve more problems for the customers, um, and build more products.
- Elad Gil
You guys have, I think, over 5,000 security customers now, but, you know, relative to the overall Datadog base or the security industry, it's still a newer effort. You know, I've been on boards of companies that sell to IT and security, but it is hard; conventional wisdom is it's, like, quite different audiences, even if, as you said, the surface area is there, or it makes architectural sense to consolidate the tools 'cause a lot of the data is the same. I, I think, you know, the world or the investor base might see this as a bigger jump than some of the monitoring, um, products you guys have released before. What do you need to do as a business to succeed in security?
- OPOlivier Pomel
Yeah, it's, it's a great question, and it's definitely... it is true that it is a bigger jump because you can argue that most of the other new products we've released outside of security were part of the same larger category around observability. The users were the same, the buyers to a large extent were the same. Uh, with security, we get new types of users and new types of buyers. Um, our approach to it was that there's actually just been no shortage of security solutions today. Uh, they all... like, there's tons of technology. It's typically sold very, very well in a top-down fashion to the CISO and everything. Um, what it's not doing well is it's actually not producing great outcomes. Everybody's buying security software. Nobody is more secure as a result. So our ambition there is to actually start by delivering better outcomes, and for that, we think we need actually a different approach. We think that if you, if you s- if you sell a very sharp point solution to a CISO, which is how it's done today, uh, you're not going to, uh, to have as great outcomes. On the flip side, if you rely on the, uh, large numbers of developers and operations engineers to operationalize security, and you deploy it everywhere on the infrastructure and in the application at every single layer, uh, you have a chance of, uh, delivering better outcomes. You know, the analogy I would make is that there are great medicines today for security, there's great technology, but for it to work you need to inject it in every single one of your organs every day-
- EGElad Gil
(laughs)
- OPOlivier Pomel
... and nobody's doing it. And I think the, uh, the way we intend to do it is, you know, we can deliver it to you in an IV, and that's it. You know, you're going to have it always on, and it's going to be fine. So again, it requires approaching the market fundamentally differently, um, because we are building on usage and deployment, we are building on ubiquity, not on great sales performance, uh, at the top level. It's possible that later on we need to combine that with great sales performance at the top level because that's how it's done in large enterprises, uh, but for now our focus is really on getting to better, better outcomes.
- SGSarah Guo
I wanna go back to sort of the AI opportunity for you guys as Elad was touching on. So, like, if you just take one, um, sort of very naive example, like anomaly detection on logs, on metrics, on security data has existed for a really long time. You guys have this Watchdog intelligence layer. I'm sure you're working on lots of interesting things with classical ML approaches in security as well. Like, how would you rank, like, the AI opportunity within your products, um, in, in these different domains?
- OPOlivier Pomel
So there, there are so many new doors that we can open, that's really exciting. One thing I would say is, uh, in general, we've been careful about not over-marketing the AI, and the reason for that is we think it's, uh... it's very easy to over-market AI, and it's very easy to disappoint customers with that. And that's the one thing I find a little bit worrying with the current AI explosion is that the expectations are going completely wild, you know, in terms of what can be done with that, and I think, you know, there's going to be maybe a little bit of disillusionment after that. Though I actually am a believer that we can deliver things that weren't possible, you know, just a few years ago. So I don't think the old methods are going away, um, because you still need to do numerical reasoning on data streams. For example, you know, you mentioned Watchdog, like watching every single metric all the time everywhere, uh, for, you know, statistical deviation. There are methods that work fantastically well for that, uh, that don't involve language or language models or transformers or any of that. I mean, it's possible that we see some new methods, uh, emerge, uh, using transformers because there's so much work being done on that today, um, but that's not yet the case. So I think those methods are not going anywhere. Those methods are also a lot more precise than what you get with large language models, you know? So I'll give you an example specifically for, uh, operations. If you talk to a customer and you ask them, "W- would you rather have a false positive, where you'll decide whether it's right or wrong but the computer is going to bring up new situations for you, or just nothing, uh, if we're not sure, we won't tell you?" Customers will all tell you, "Oh, give me the false positive. I'll decide."
The reality is, you send them two p- two false positives at night, uh, the same week, and they'll turn you off forever. Um, so the reality is, you need to be really, really precise, and with operational workflows, uh, like we do, you're making judgments, you know, uh, a thousand times a minute, you know? So if you're wrong, you know, even 2% of the time, uh, it becomes really painful really quickly. So, you know, you need to set the bar really high, and for that, those methods are not going away. Now, what's really interesting is that there's a number of other new doors that are open with LLMs. Uh, one of them is, there's so much data that was off limits before that we can put to use now. Everything that's in knowledge bases and email threads and everything else, all of that actually can be used to put the, uh, numerical information in context. Uh, another way I've seen the, uh, the LLMs described online, which I thought was very astute, was,
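The kind of classical statistical-deviation detection Olivier contrasts with LLMs here can be sketched as a simple rolling z-score check over a metric stream. This is a hypothetical toy illustration of the general technique, not Datadog's or Watchdog's actual algorithm; the window size, warm-up length, and 3-sigma threshold are all assumptions:

```python
from collections import deque

class RollingZScoreDetector:
    """Flag values that deviate strongly from a rolling baseline.

    A toy stand-in for classical (non-LLM) anomaly detection on a
    metric stream; production systems use far more robust methods.
    """

    def __init__(self, window=60, threshold=3.0):
        self.window = deque(maxlen=window)  # recent values only
        self.threshold = threshold          # sigmas before we alert

    def observe(self, value):
        """Return True if `value` is anomalous vs. the rolling window."""
        anomalous = False
        if len(self.window) >= 10:  # require a minimal baseline first
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = var ** 0.5
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

# A steady metric stream followed by one spike: only the spike is flagged.
detector = RollingZScoreDetector()
stream = [100.0, 101.0, 99.0, 100.5] * 5 + [500.0]
flags = [detector.observe(v) for v in stream]
```

The precision point in the passage shows up directly in the threshold: set it too low and normal jitter in the first twenty values would fire false positives, which, as Olivier notes, gets alerts "turned off forever."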
- 31:46 – 42:35
LLMs in Automation and Precision
- OPOlivier Pomel
uh, so basically calculators on language, and you can actually use that really well. Actually, what you can do is, uh, structure, uh, or bring together metadata from many different places, uh, output from many different numerical models, and use the language models to combine that, uh, maybe with some other wikis you have internally, and this allows you to combine data in ways that were impossible before. And a lot of new intelligence is going to emerge out of that. Now, the challenge, of course, is you still need to be correct, uh, and I think that's what we're working on right now. I'll give you one last example. So obviously, we've been working on that quite a bit, and the first thing people do when they, uh, they see an error in the production environments and, uh, they have a ChatGPT window open is they take the stack trace of their error and they ask ChatGPT what's wrong.
- SGSarah Guo
Does it work? (laughs)
- OPOlivier Pomel
Well, 100% of the time it tells you, "Oh, I tell you, this thing is wrong." Problem is, in the majority of cases, it is wrong, and there's a good reason for it: it just can't know, because if you don't have the program state, you just can't know. What we found though is that if you combine that stack trace, uh, with the actual, uh, program state, you know, which is what the variables were and what things were when it errored out, you actually can get a very, very, very precise answer pretty much all of the time, uh, from the large language model. So I think, at least in the short term, um, the magic is going to come from combining the language models with the other sources of data and the, uh, I would say, the more traditional numerical models, to bring new insights to, uh, to our users.
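The enrichment step Olivier describes, pairing a stack trace with the program state at the point of failure before asking a model, can be sketched as a prompt-assembly helper. The function name, prompt wording, and variable-capture mechanism here are hypothetical illustrations, not Datadog's implementation:

```python
import json
import traceback

def build_debug_prompt(exc, local_vars):
    """Combine a stack trace with captured program state into one
    prompt, so a language model sees *what* failed and *with which
    values*, rather than guessing from the trace alone."""
    trace = "".join(
        traceback.format_exception(type(exc), exc, exc.__traceback__)
    )
    state = json.dumps({k: repr(v) for k, v in local_vars.items()}, indent=2)
    return (
        "An error occurred in production.\n\n"
        f"Stack trace:\n{trace}\n"
        f"Local variables at the point of failure:\n{state}\n\n"
        "Explain the most likely root cause."
    )

# Example: capture an error and the values that caused it.
def divide(numerator, denominator):
    try:
        return numerator / denominator
    except ZeroDivisionError as exc:
        # In a real system the variables would be captured by the
        # monitoring agent; here we pass them in explicitly.
        return build_debug_prompt(
            exc, {"numerator": numerator, "denominator": denominator}
        )

prompt = divide(10, 0)
```

With only the trace, a model can see that a `ZeroDivisionError` occurred but not why; with `"denominator": "0"` included, the root cause is in the prompt itself, which is the gap between "plausible" and "precise" answers described above.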
- SGSarah Guo
Yeah, makes a lot of sense. Uh, it sounds like there's a real opportunity to build, and these are the first steps towards building almost like an AI SRE copilot, or eventually a more automated solution that can really help not only surface different things but also understand them in real time and, you know, provide an opinion on what a potential issue may be.
- OPOlivier Pomel
That's, that's definitely... I think we are at the cusp of maybe doing more automation than we could before. I would say at the cusp, you know, because we're still not there. Like, there's still quite a bit that needs to happen there. I would say the best test for that is, even at places that are extremely uniform and have very large scale, such as, you know, the Googles of the world, the level of automation is still fairly low. It is there, like, it's increasing, but it's still fairly low. And if you use the, uh, the s- the self-driving cars metaphor, like, you know, uh, Google is all highway, you know. So if that can't be automated, like, most of the other companies out there cannot either, right, because most other companies are like downtown Rome, you know? So it's quite a bit more complicated.
- SGSarah Guo
Well, what do you think is missing though? Do you think it's a technology issue or do you think it's just implementation? Because I feel like LLMs are so new, right? ChatGPT launched six months ago.
- OPOlivier Pomel
Yeah.
- SGSarah Guo
And GPT-4 launched three months ago. So is it just new technology and people need to adapt to it, or do you think it's other obstacles in terms of actually, you know, increased automation through the application of this technology?
- OPOlivier Pomel
So for one thing, it's, it's very hard, right? So many of the problems you need to solve... like, if you think of a self-driving car, you know, everybody over the age of 15 can drive. With debugging production issues in a complex environment, you need a team of PhDs and, you know, it will take them time and they will disagree, and, you know, so I think it's hard. That's one reason. That being said, I think a lot of it might be possible in the end. Uh, the biggest question, I think, not just for observability but for everything else with LLMs is, is this the innovation we need, or do we need another breakthrough on top of it to make it to the end? You know, so again, I love analogies so I keep throwing them at you, but with, uh, LLMs, we clearly have ignition. Uh, like there's innovation everywhere, it's happening. We might have liftoff probably in the next, uh, year or so with real production use cases, because right now most of the stuff is still not production, it's still demoware and, you know, private beta and that sort of stuff. I think the question there is, do we need a second stage or not? I don't know. Uh, and I think that won't be clear for another maybe couple of years as this current wave of innovation reaches production. I would say though there's some use cases for which it's very clearly there, like for example, uh, creativity, generating images, generating text. Uh, as humans we're good enough, uh, for debugging what comes out of the machine. Like, you know, you generate an image and the dog has, uh, five legs, you immediately, you know, re-roll the dice and, you know, you get something that works. I think when you try and write code or debug an issue in production, when the system is wrong, it's less obvious, and so that's what we still need to work on.
- SGSarah Guo
Yeah, makes sense. Are there any other near-term sort of productivity gains that you think are most likely to occur, either, uh, for Datadog or your customers, in terms of this new type of AI? Because I feel like, to your point, there's a lot of forward-looking complexity in terms of some of these problems, and some of them may be like the self-driving example, in terms of you need more stuff to happen before you can actually solve it. And then separate from that, it seems like there's a set of problems, or class of problems, where this new technology actually is dramatically performant. If you look at, you know, Notion's incorporation of LLMs and how it's rethinking documents, or, um, you know, if you look at Midjourney and art creation, to your point, on images, and, you know, the subsumption of teams that are generating, uh, clip art or marketing copy or other things like that. I'm just sort of curious, like, where do you think the near-term gains for your own area are likely to come from?
- OPOlivier Pomel
Well, I- I mean, we see it already in everything that has to do with authoring, writing, drafting. I think it's there. It's already good enough. It hasn't dramatically changed anybody's processes just yet, but I think it's going to happen in the near term. We already see, and we'll see much more of, an improvement when it comes to, uh, developer productivity. I think there's a whole class of development tasks that are becoming a lot easier, uh, with an AI advisor. Like using a new API, for example, uh, used to be painful. Now, you know, it just takes a few minutes to ask the machine to show you how to do it. And that's a real, immediate productivity gain there. I think there are some areas that will probably be completely rewritten by AI, in terms of, uh, not just we can do different things, but also, we'll stop doing things because they become inefficient as everybody does them with AI. I'll give you an example: email marketing and things like that. When, you know, you don't need a human to send an email anymore, uh, you can send a million of them from a machine and everybody's doing it, that whole avenue and that whole field might change quite drastically.
- EGElad Gil
You think we just killed the channel?
- OPOlivier Pomel
Uh, uh, it will change, and it will- it will have to take a different shape, I would say. Um, someone will have to cut through the noise a different way.
- EGElad Gil
Um, sup- super interesting, and I- I love the analogies. Okay, um, we're running out of time, so I want to ask a few questions on, uh, leadership to wrap up, if that's okay, because, uh, we have a lot of founders and CEOs who listen to the podcast, and Datadog is a company that just keeps executing. The company delivered 30% revenue growth when, you know, a lot of people are slowing as they face a different macro environment. You guys released a cloud cost optimization product, so you're doing certain things that are very specific to the environment, or maybe they were in the works for a long time. What else are you changing about how you run the business, if anything?
- OPOlivier Pomel
So, we're actually not changing how we run the business. We've always run the business with profitability in mind, um, so, you know, we've always looked at the margins, looked at building the system from the ground up in a way that was, uh, sustainable and efficient. Uh, we never actually built things top-down, thinking, "Well, we'll get it to work and then we'll optimize it." Um, and the reason for that is, again, we were scared initially that we wouldn't be able to, uh, finance the business, but also, we think it's really hard to shed bad habits. You know, once you start doing things in an inefficient way, uh, it's really hard to move away from that, and we've definitely seen that around us in the industry. The one thing we're a bit more, um, careful of is, uh, we're tuning our engine a little bit differently when it comes to understanding customer value and, uh, product-market fit, because customers themselves are more careful about what they buy and how they buy it and how much of it they buy, so we need to re-tune our sights a little bit so we don't make the wrong decisions based on that. To the point I was making earlier, at the same time, we also need to move a lot faster in some areas, such as generative AI, just because the field itself, uh, is moving so fast, which is also causing us to, uh, to message the teams internally a little bit differently. So we're telling teams, "Hey, I know you're used to being really good at filtering out the noise and taking, you know, three or four quarters to, um, zero in on the right use case. For generative AI, you can't actually do that, um, because the noise is part of what is being developed, so accept the fact that you might be wrong a little bit more, uh, but we need to iterate over it with the rest of the market."
- EGElad Gil
Do you guys need new talent to, um, go attack these areas, or does it change your view on talent at all?
- OPOlivier Pomel
Not- not really. I think talent is always about, you know, finding people who are entrepreneurial, who want to build, who want to grow, um, and who are going to prove themselves by making a whole area of the business just disappear. Like, you know, you find the best people just obliterate problem areas. Uh, yeah, black holes for problems. You send problems, they disappear, uh, and that's how you know you can promote them. You can easily find them in organizations because you see all the work going to them. Uh, that work avoids people that are not performant, and it finds people that are fantastic, and so following the work is a great way to differentiate performance.
- EGElad Gil
I wanna ask one last question 'cause I- I feel like Datadog has so many, um, unique attributes as a company. Another perhaps sort of less obvious path that the company took was serve many different customer segments, right? From sort of-
- OPOlivier Pomel
Mm-hmm.
- EGElad Gil
... um, very small engineering teams to Fortune 100s, and, you know, my understanding is you guys have done that for quite a long time, though you've grown up a little bit into the enterprise, as everyone does. How did you think about that, or why did it work for you?
- OPOlivier Pomel
Well, I mean, look, the starting point was to bring everybody on the same page, so we focused on the humans. We focused on thinking, well, the humans, whether they're in dev or in ops, uh, are wired the same, so let's bring them on the same page, on the same platform. And it turns out that humans are also largely the same whether they work for a tiny company or for a large, you know, uh, multinational. I think it was made possible also by the fact that, uh, in the cloud and open-source generation, like, the tooling is the same for all companies. Uh, if you go back 15 years, if you were a startup, you were building on open source,
- 42:35 – 44:18
Datadog's Customer Value and Growth
- OPOlivier Pomel
uh, and if you were a large company, you were buying whatever Oracle or Microsoft was selling and, uh, you know, building on top of that enterprise-y platform. Today, everybody's building on AWS and open source. Um, it's the same components, um, up and down the stack, so it's really possible to serve everyone. Uh, it's been good for us. You know, it's given us a lot of, uh, like, network effects that you wouldn't find in an enterprise software company otherwise, um, by giving us this very broad, like, uh, spectrum of customers. Uh, it's a great differentiator because it's really hard to replicate. You know, you can't just replicate the product. You have to replicate the company around it, which is hard, you know, from a competitive perspective. It also creates some complexities, you know, because as much as the users are humans, uh, and they feel the same across the whole spectrum, uh, commercially, you don't deal the same way with individuals and with large enterprises, and it's hard for your messaging not to leak from one side to the other, you know? So one example of that is, I think, uh, recently, there were some articles in the news about some of our customers that pay us, you know, tens of millions of dollars a year, and, you know, on the individual side, you have people who wonder, how is it possible to pay even tens of millions of dollars a year? Whereas on the high end, obviously, customers do that because, well, it's commensurate to, uh, the infrastructure they have, and they do it because it saves them money in the end. So, you know, you do have this balancing act between the very, very long tail of users and the very high end of large enterprises.
- EGElad Gil
Awesome. Thanks so much for joining us on the podcast. This was great.
- SGSarah Guo
Thank you so much.
- OPOlivier Pomel
Thank you. This was fantastic. Thanks. (instrumental music plays)
Episode duration: 44:18
Transcript of episode x0BIJeRyfBE