No Priors Ep. 8 | With Neeva’s Sridhar Ramaswamy
65 min read · 13,225 words
- 0:00 – 1:32
Introduction
- SGSarah Guo
(instrumental music) Sridhar, I've learned so much from you as an investing partner, founder, and friend. Welcome to the podcast.
- SRSridhar Ramaswamy
Thank you. Very excited to be here. And I'd say I've (laughs) learned so much about companies and investing in tech from you.
- SGSarah Guo
Let's start with the background. Tell us about the motivation to start Neeva when you were already part of creating the dominant search product.
- SRSridhar Ramaswamy
Yeah. So Neeva was a little bit of back-to-basics thinking. When I left Google, I knew I wanted to start a company. I spent a lot of time talking with Vivek about what we wanted to work on, and we ultimately came to the conclusion that we were (laughs) actually really excited about search. There's the geek in us that likes to help people find the information they need. And we were also ambitious enough to think that, 20 years in, we could rethink the search product and create a better one. Our aha moment was a little bit of an abstract aha moment, which is we said, "If we didn't have to deal with ads, if we didn't have to worry about monetizing, we truly could start from back to basics." As both of you know, in startups, it's as much about taking advantage of opportunity as it is the original direction that you set. So the first three years of Neeva were really about building a better private search engine. And honestly, it also taught us a lot of pretty harsh lessons about consumers and, you know, whether they were ready for change or not. And really, what we saw happen with AI and large
- 1:32 – 11:11
Why Sridhar started a private search engine after leaving Google
- SRSridhar Ramaswamy
language models last year was that aha moment when we realized, "Wait, we can have the great principles that we started Neeva with and create a much, much better experience." And so that's, that's a little bit of the journey to where we were. But at our core, Neeva was like, "There must be a better search product. It cannot be that there's one company, one religion, one product for the whole world."
- SGSarah Guo
So, I think many people who use Google every day would say, "It's actually pretty good." And as somebody who was working on this, you could see it; I think sometimes users are blind when they have a default that's this strong. What were the things you thought could be better? And if I could add to that, how does that factor into the Neeva mission?
- SRSridhar Ramaswamy
Yeah. So, I mean, an important part, at least early on, was the private and the ads-free. And, you know, we have to say that we overestimated how much people, especially in the US, would care about it. As you know, figuring out consumers is a very tricky thing. People will often not do what they say they will do, or will not even admit to things that they will or will not do. That's just the nature of the game. For us, for example, we were surprised that we did so much better in Europe compared to the United States. You don't really think of them as being that different, but in practice, in terms of how many people care, it is actually very different. So, a lot of the early Neeva was really about how do we use the power of being privacy-focused and ads-free to create truly a better experience. We've tried a number of things, and they have achieved varying degrees of success: for example, the integration of things like personal data and personal preferences. But I would say the fundamental challenge of Neeva, especially in the United States, has been: how do you get people to take that initial step of caring enough to want to change their search engine? Once you actually get people to do that, the job gets considerably easier, and they begin to see all of the things that were not really that great about that experience. Again, as a consumer startup founder, I think these are pretty harsh lessons in consumer psychology, but ones that one has to learn.
- SGSarah Guo
So, more recently, you guys had a, a big breakthrough in terms of experience and consumer openness to, um, AI summaries, which look very different from traditional search. Can you just talk about how this product came about and what you had to build to enable it?
- SRSridhar Ramaswamy
Yeah. So, you know, in some sense, AI summaries, (laughs) I am sure there are many Google engineers and execs that'll tell you, "Wait, we've been doing this for 15 years." It's kind of true. Google launched something called Featured Snippets a long time ago, I think 2010 or '11. Google's always known that an answer right in the main search experience trumps all. Who actually knows this really well? Elad will remember this. Google knocked out Live.com, Bing's image search, as the top image search product in the world by integrating image search right into the search experience. Turns out, Bing, Live.com back then, was the one that had the best image search experience. Google knocked it out by putting image search into the search experience. The same thing happened with Yelp and with Local. It didn't matter how good Yelp was; if you could show an answer right in the search experience, that basically won. Similarly, Featured Snippets, which is really picking out the two or three lines from a website that are exactly the answer the user is looking for, was always a big win. People love the product. It goes back to, essentially, Occam's razor: anything that minimizes work, people are going to love. And so if you give an answer instead of letting people click on something, of course they're going to like it. This is the reason why the currency conversion widget on Google is wildly popular. It's not that you and I can't click and go to another site, but it's like, "Ah, why? It's there." And so answers in that sense are old. But the fundamentals of search have always been that you got back a set of opaque links. And of course, Google's entire business, the trillion-dollar business, is built on this, again, obvious fact that you and I cannot really tell a good link from a bad link. We can tell a little bit; if it's the New York Times, our brain basically tells us, "Ah, that's a good site." For most sites, we don't really know. We click, we find out. But the opacity and the linear scanning order have always been an important part of how search has worked. This consistent desire on the part of users, whether they state it or not, to get to the answer in the fastest possible way is an important thing to remember. But things like Featured Snippets were never deployable at scale. The technology simply was not there. Even when Google put the full might of its mighty machine against the problem, the coverage never really extended beyond like 5, 6, 7%, and it would make website owners really unhappy. They were like, "You're taking away my clicks." And so-
- SGSarah Guo
Hmm.
- SRSridhar Ramaswamy
... it was always this edgy feature that Google would be like, "You know, yes, we can show this, but not really too much." Our aha moment with large language models was when we realized, "Wait a minute. For the first time, you have these models that can take any content and come up with a summary that gets to the heart of what the page is saying." And oftentimes, you have to do it in the context of the query. If you have a blog, for example, that has six sections and your query is really about one of those sections, then you'd better find the right section to summarize. So a lot of it was just realizing that what was previously unsolvable now is solvable. Summaries in particular are this frustratingly vague concept: you and I can do a reasonable job if given a bunch of different kinds of content to summarize, but actually making a machine learning model do that in general is a tough thing. So, a lot of last year was really about understanding that, but also trying to make it work at scale, which was a big effort on our part. We decided that we didn't really want to be beholden to, say, using OpenAI's API for doing things like summarizing a 4-billion-page index. We built a lot of the technology in-house. But the final cumulative product is these cited summaries, which really are one fluid answer when you ask a pretty complicated question or query. Obviously, many people are doing this now. But for us, that was the aha moment of, wait, we can write a single authoritative answer for 50, 60, 70% of queries. And large language models, as you folks well know, are also general-purpose learners. The exact same tech that can summarize a piece of text can also be used to pull out structured information. We realized that we were sitting on basically a goldmine beyond compare in terms of a better search experience. Most of what you see for cited summaries is in the context of information-seeking queries, but there's a whole lot of work coming that can tackle different kinds of commercial queries. So, this is the beginning of a lot of work that can be done to make the search experience better. But the core really is: if you can provide a believable answer to a question, people are always going to prefer that over any number of links that you can give them. People don't like clicking on links.
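A minimal sketch of the cited-summary pattern described above: retrieve a handful of pages, then ask a language model to answer the query using only those sources, citing them inline. The model name, prompt wording, and the stubbed fetch_top_results retrieval step are illustrative assumptions, not Neeva's actual pipeline.

```python
# Illustrative sketch of cited summaries via retrieval-augmented generation.
# Model name, prompt, and the stubbed retrieval step are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def fetch_top_results(query: str) -> list[dict]:
    # Stand-in for a real search backend: return the top pages for the query.
    return [
        {"url": "https://example.com/a", "text": "Page text relevant to the query..."},
        {"url": "https://example.com/b", "text": "Another page with supporting detail..."},
    ]

def cited_summary(query: str) -> str:
    docs = fetch_top_results(query)
    # Number each source so the model can cite it inline as [1], [2], ...
    numbered = "\n\n".join(
        f"[{i + 1}] {d['url']}\n{d['text'][:2000]}" for i, d in enumerate(docs)
    )
    prompt = (
        f"Question: {query}\n\nSources:\n{numbered}\n\n"
        "Answer the question using only these sources. Summarize only the parts "
        "relevant to the question, and cite sources inline like [1]."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```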
- EGElad Gil
Yeah, it's really interesting because, uh, you know, I overlapped with you at Google, and one of the things I worked on for a while was mobile search. And I remember, to your point, we tried to surface every single, what at the time we were calling 1Boxes, you know, that would trigger with images-
- SRSridhar Ramaswamy
Yep.
- EGElad Gil
... that would trigger with, uh, location information. And it's- it's pretty amazing that you're able to get to such high amounts of coverage just using the LLM side. How do you think about... 'Cause I remember when we were building those individual pieces, there was a lot of custom work. There was custom-
- SRSridhar Ramaswamy
Yep.
- EGElad Gil
... indices for news and crawls, and then there were custom ranking algorithms, you know, everything (laughs). You had sort of specialization. How do you think about covering the other 30 or 40%? Or is the idea eventually to do everything via LLMs? Is that prohibitive from a cost perspective? I guess, more generally, how do you think about information-retrieval-related problems in this new world, and how do you map the different types of search queries and different types of results against that?
- SRSridhar Ramaswamy
It's a great question. Uh, so for example, in, like, the 55, 60% that I'm talking about, um, I'm actually excluding the 1Boxing that we already fire. Um, so it doesn't include, like, the stock cards or the weather cards and stuff like that. In fact, we are working on a Poe integration, and part of what the Poe team is saying is like, "Wait, wait, if somebody asks for weather, just give it back. You have the information already. It's not that hard." Um-
- SGSarah Guo
For clarity, Poe is the, uh, Quora app.
- SRSridhar Ramaswamy
Yeah, Poe is the Quora app. It's like, uh, I don't know. What's the right way to put it? It's- it's like a chatbot aggregator. It's a pretty cool app. You can take, uh, uh, some of the 1Boxing. And even there, by the way, um, this code for triggering, as you point out, Elad, used to be, like, really annoying code. Sometimes it would be regular expressions. It's basically like a giant, like,
- 11:11 – 15:25
Information Retrieval Problems, Mapping Search Queries and LLMs
- SRSridhar Ramaswamy
you know, ball of wax when it comes to figuring out how to trigger right. LLMs actually make some of that stuff easier if you want to extract structured information even from user-typed queries. And I'm sure most tech people have dealt with this at some point in their life or another. All of us have nightmares about writing Beautiful Soup code in order to parse web pages. It's basically regular-expression parsing over ever-changing websites. It's horrible. We did a bunch of it in the first two-ish years of Neeva. That stuff is also easily generalizable with the smallest model that there is. At this point, I don't feel that there's a natural limit to how much LLMs can be used with search. I do feel, however (laughs), that there's a very strong limit to how many questions can be usefully answered. And you realize with a shock that search engines are actually pretty terrible at a lot of tail queries that you and I will now no longer think twice about putting into a chatbot. What do I mean by that? The other day, you know, Jason Calacanis, who, like you folks, has a big podcast, typed "How are the Knicks doing this year?" into Neeva and a bunch of other search engines. And he was like, "Ah, this AI stuff does not work!" But the real answer is no one in their right mind is going to think of typing "How are the Knicks doing this year?" into Google Search, because it just never gave great answers for stuff like this. Tail queries have always been served poorly. I don't think that is going to change instantly. But queries that can be meaningfully answered, I think a lot of them can be answered with LLMs. For what it's worth, the approach that we are taking, which is very much the beginning of how large language models can be applied to retrieval problems, is this technique called retrieval-augmented generation. Again, a lot of your listeners know this. It's basically: how do you give a large language model a search engine as a tool to use? And even there, there's going to be generalization. There is zero reason why we can't recognize that you actually typed in an arithmetic expression and fire off a Python interpreter, or some other API, for that. So again, even in terms of what search engines can do, we are very much at the beginning. I think we're going to expect a lot more from these kinds of interfaces. And the difference between a chatbot and a search engine that combines a chatbot and retrieval is going to just look more and more blurry going forward. So hard questions will continue to be hard, but a lot of questions that we expect answers for, I think, will be eminently answerable with LLMs as one of the tools that go in.
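A minimal sketch of the tool-routing idea above: try a calculator for queries that parse as arithmetic, and fall back to retrieval-augmented generation otherwise. The routing rule and the answer_with_rag stub are assumptions for illustration, not Neeva's system.

```python
# Illustrative sketch of routing a query to the right tool before answering:
# a calculator for arithmetic, retrieval + LLM generation for everything else.
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul,
        ast.Div: operator.truediv, ast.Pow: operator.pow, ast.USub: operator.neg}

def _eval_arithmetic(node):
    # Safely evaluate a parsed arithmetic expression (numbers and + - * / ** only).
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval_arithmetic(node.left), _eval_arithmetic(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval_arithmetic(node.operand))
    raise ValueError("not a pure arithmetic expression")

def answer_with_rag(query: str) -> str:
    # Placeholder for the retrieval-augmented generation path sketched earlier.
    return f"[RAG answer for: {query}]"

def route_query(query: str) -> str:
    try:
        value = _eval_arithmetic(ast.parse(query, mode="eval").body)
        return f"{query} = {value}"
    except (SyntaxError, ValueError):
        return answer_with_rag(query)

# route_query("(17 * 23) + 4")            -> "(17 * 23) + 4 = 395"
# route_query("how are the knicks doing") -> "[RAG answer for: how are the knicks doing]"
```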
- EGElad Gil
It's super exciting. Relatedly, when I've seen people model out the costs of using LLMs versus more traditional IR approaches, LLMs seem to be more expensive per query. And, you know, when Satya Nadella was talking about integrating these things into Bing, he almost had this "your margin is my opportunity" style of perspective relative to Google, right? I don't know whether that would hold up over time, but it almost felt like the claim was that Bing was okay with almost subsidizing LLMs integrated into search to try to draw users or hurt the margin on the Google side. How do you think about the potentially cost-prohibitive nature of LLMs for search? Is it really a thing? Do you deal with it with semiconductors or small models or other things? Or is it not really that important of a consideration?
- SRSridhar Ramaswamy
Well, first of all, his comment might have meant two things. There are two ways to think about margin: one is the cost of serving, and the other is the margin that Google makes, say, on an Apple deal. It's not clear which one he was talking about. But this is a topic that you've written a lot on, Elad, when it comes to LLMs and cost. We saw something dramatic happen where OpenAI reduced the cost of its API by a factor of 10. That's a little insane, this early on. But if you go back to the basics of your question and think roughly,
- 15:25 – 19:06
Google and Bing’s approach to search with LLMs
- SRSridhar Ramaswamy
like, you know, an average very large model call takes about five cents. That is astronomical, because you're talking about a $50 CPM to serve 1,000 queries. Now, the average RPM for US queries is about $40 to $50, so clearly that would be a very high cost. The rest of the world is a lot lower, by the way; my memory is it's on the order of $20 if you average over the whole world. And I'm sure you folks also know that Sydney, for example, will issue up to three queries for every question that you ask. It's an arbitrary limit, but sometimes you need to ask more than one question in order to answer it well. Put that way, yes, this is an astronomical cost. But personally, I feel that there is more and more evidence that you don't need the full power of the largest, biggest model to get most things done. Certainly that's the way we think about cost. For page summarization, for example, we're very comfortable using models that are in the 5-to-10-billion-parameter range. We are very good at fine-tuning them, and there's a human feedback loop that is about to kick in. So for the kinds of problems that we care about, our attitude is: whatever can be done with very large models, we'll do with these all day long. And we are fine running six kinds of models instead of one model that is going to conquer them all. So I do feel like for a lot of known problems, model size is not really going to be an issue, and there's going to be an ongoing reduction both in model size and therefore in the cost to serve. Some of it, of course, might be referring to the margin that gets paid out to Apple. If I were them, I would offer Apple 100% of rev share in order to get at the traffic. It's a way to establish a beachhead. By the way, there's precedent: Google gave more than 100% to AOL and close to 100% to Yahoo in its early years. That's how you make markets. They obviously will be trying everything.
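The rough arithmetic behind that comparison, using only the figures quoted in the conversation (illustrative, not independently verified):

```python
# Back-of-the-envelope numbers from the conversation above.
cost_per_llm_call = 0.05                            # ~5 cents per very large model call
cost_per_1000_queries = cost_per_llm_call * 1000    # = $50 "CPM" at one call per query
cost_with_three_calls = cost_per_1000_queries * 3   # = $150 if, like Sydney, up to 3 calls fire
us_rpm = 45                                         # midpoint of the ~$40-50 revenue per 1,000 US queries
worldwide_rpm = 20                                  # ~$20 per 1,000 queries averaged worldwide

print(cost_per_1000_queries, cost_with_three_calls, us_rpm, worldwide_rpm)
# -> 50.0 150.0 45 20: serving cost alone can exceed the ad revenue it would replace
```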
- SGSarah Guo
You're saying that we should expect these players to play even more aggressively from an economics perspective than we've seen so far, or that it'd be rational for them to.
- SRSridhar Ramaswamy
Oh, absolutely. Absolutely. You know, part of the problem with Bing's growth has been that Google has fought it off very effectively on the business side. Of course, it hasn't helped that it is common perception, whether deserved or not is a different story, that Bing search quality is not as good as Google's. For what it's worth, there are very few people on the planet who can objectively judge search engine quality. So they need a way to break through and establish a meaningful presence, and it is perfectly rational for them to start with a better product but then go out of their way to establish a beachhead, establish a market, because that is going to pay off in a pretty big way for them down the line.
- SGSarah Guo
Every part of this game feels like an expensive game to play, and I wanted to ask you about just the building of search, even aside from training LLMs. I remember there was a lot of skepticism when Neeva first started, including from yourself, about how any startup could afford to build a new search engine, both from an engineering-talent perspective, given the ambition of the technical project, and from an infrastructure-cost perspective.
- 19:06 – 22:26
Scale challenges when building a search engine startup
- SGSarah Guo
You've built an all-star team but obviously can't spend a billion dollars as a startup. Can you talk a little bit about what's been most challenging to build?
- SRSridhar Ramaswamy
Yeah. Search is one of these things where you need a fair amount of scale before you have any kind of meaningful product. With an ad system, for example, I can tell you how to build one with a three-person team, because it's limited data. Or if you're building a new mail client, it's a small problem. Yes, you'll have scale problems, but only after you have a million users, not on day one. Search, like setting up a new mobile network, let's say, where you have to start from scratch, is problematic from that perspective, simply because you have to do a lot of work to be seen as even vaguely competitive. And so everything from how we went about doing our crawl to how we built our index has been a struggle. I won't deny it. And it's one of these problems where, you know, sane grown men and women will just run away after a while. They'll work on it for three months.
- EGElad Gil
(laughs)
- SRSridhar Ramaswamy
They'll be, "I can't deal with this. I just need to, like, go." Um, and it's disconcerting to, you know, kinda watch that. Uh, but having said that, you know, we do have an amazing team. Uh, Asim, for example, um, was just brilliant at engineering a system that ran completely on Flash, um, in which we could do things like super rapid iteration, replace the entire index, um, over the space of two days, um, or put in arbitrary amounts of information for experimentation in a much more flexible way, problems that took Google, like, 15 years to solve, we had solved out of the gate simply because he had run into many, many of these problems. We were also opportunistic, uh, you know, to the point of LLMs being these universal input-output machines. We realized that a lot of problems that Google solved with massive scale and user data, we could in fact solve with LLMs. So we use a lot of them for things like query rewriting. Uh, similarly, extracting structured information. Turns out, it's weather that people will ask about weather in, like, many wondrous ways. We are in the process of actually replacing a hard-coded system with one that's based on an LLMs to extract, to extract structure. So we have taken shortcuts wherever we can, um, in order to, uh, in order to do this. Um, it is a daunting problem, but I'll tell you, the single biggest positive, um, thing for the team is actually launching Answers. Because up until then they sort of had this feeling of even if we were to be as good, if not better than Google, no one will care. People can't tell between, like, you know, list of links anyway. Once you turn that into, "And yes, here is, like, an actual answer that my mom can take a look at and say, 'Way better than a bunch of links,'" all of a sudden there's excitement. Um, and so there's the actual psychology, all of you deal with teams, of what excites the team, um, and really it's, it's, it's been over the past few quarters where people have realized, oh wait, this can be a transformational experience that just is, like, a big jolt of electricity through everybody just in terms of how excited they are, how hard they work and things like that.
- EGElad Gil
Yeah, it's very exciting progress. I guess one, one question related to that is when you look at distribution, uh, because you mentioned, you know, consumer habits are quite sticky on the distribution side and I remember,
- 22:26 – 24:11
Distribution challenges and why they release Neeva Gist
- EGElad Gil
uh, even back when I was at Google many years ago, like over a decade ago, probably more than that now, 15 years ago or something, hundreds of millions of dollars a year were being spent on distribution. And obviously that number has grown with the Apple deal and other things. And so do you view it as, like, distribution through superior product? Is it specific integrations or partnerships or how do you think about getting that consumer interest?
- SRSridhar Ramaswamy
Distribution is hard. There's just no question about it. Habits are hard to change. You can dislodge some of this with a superior product. You can dislodge some of it with dollars. Part of the reason why we released this app called Gist, which was a very different take on search, is we very deliberately said, "If we wanted search to look like Instagram Stories, what should it look like?" It's an experiment. We hope it'll do well. And so sometimes you have to look for change at sort of the locus of change. The other thing that we are also actively looking at is, you know, in this moment there's going to be an enormous amount of uncertainty about things like: is search-engine traffic basically going to disappear for websites? Are LLMs going to disrupt the aggregator-publisher relationship in a fundamental way? We are now realizing that we can offer a superior search experience to lots of publishers. Whether it's a Reddit or a Boston.com or anyone else, we can give them conversational search over their own content. So we are going to try a set of different things. We've actually had a fair amount of success working with privacy products like Dashlane, and there are obviously other folks that we are talking to, like ProtonMail, about how we could work better together. Distribution continues to be easily my top worry for how Neeva gets scale.
- EGElad Gil
Yeah. And I guess related to distribution and business model, you opted for
- 24:11 – 28:25
Why Neeva is a privacy centric subscription service
- EGElad Gil
a privacy-centric subscription service without ads quite early and I think at the time that was very, um, innovative thinking, right? I think now that other products, ChatGPT, et cetera, are all sort of coming out with these subscription-based approaches, I, I was just sort of curious how you thought about it. Like when do you think a product should be supported by subscriptions? When should it be supported by ads? And how do you think about it in the context of this type of product?
- SRSridhar Ramaswamy
I mean, for us, it was a way to stand out. It was to give us a clear runway. Thoughtfully done, ads monetization is an incredible juggernaut, as everyone on this podcast knows, in terms of the kind of scale it can bring and how it can disconnect monetization from the product. So it's almost like a separate team that is working on it. You know, when it's very successful it can actually kind of get annoying. I'm sure none of us likes watching broadcast TV anymore; sports broadcasts drive me crazy when I think about how many ads I have to sit through. Ads sort of come with elements of self-destruction built in. It's par for the course: when you're doing it, it's always attractive to do things like show more ads. In some ways, hybrid approaches of starting ads-free and maybe using ads as an additional mechanism might be more sustainable, even though reasonable people will argue that most teams that come to ads later tend to be even more discriminating about how many ads they show and about ad quality than the people that have been working on it from the start. You know, I worked on it, and it was also my team, but Google Search ads actually tried very hard to hang on to quality bars, to hang on to user metrics, for a very, very long time. Compare them to somebody like Amazon today. I find the Amazon search experience a joke, because it is so full of ads, and actually misleading ads, where it's really hard to find what is going on. I think they're both viable options. There are structural elements that then come into what you should adopt. If you're in the business of providing answers, like ChatGPT is, ads just become a whole lot harder to do; there, really, you're betting on the quality of answers. But for many other products that are about more casual consumption, whether it's social media or even where search might go, I think it's an open question where it'll ultimately settle. I do point out to people that with something like the Gist experience, which is a summary followed by a series of cards, you can stick ads in there. We are not planning to do that, but there are many different ways to solve problems.
- EGElad Gil
You know, in the early days of Google, one of the arguments being made for ads was that the signal in terms of willingness to pay was a way to actually boost a meaningful link to somebody. In other words, if there was somebody willing to promote a link, that in and of itself was a signal of the potential quality of that link relative to the potential user. Do you think that's a true statement, or do you think it used to be? Are commerce signals, you know, good boosts for actual ranking?
- SRSridhar Ramaswamy
They can be. But I think the bigger truth is that smart people will come up with great explanations for everything they do, as long as it's convenient to them. The best religions to have are the ones that are aligned with your business interests. And of course the ads team is going to say that. There's some amount of truth in it, but that clearly is not an explanation for, like, two screenfuls of ads when you're searching on your phone. I find this whole thing of "ads enable Google to make free products," or "ads enable Facebook to be available for Ecuadorian people," made by billionaires sitting in Palo Alto, to be entirely self-serving. My attitude is like, "Yep, we can make money with ads. It works pretty well. We're rich. It's okay."
- SGSarah Guo
If we, uh, just sort of project out a little bit and say, um, these summaries, cited
- 28:25 – 30:16
The relationship between search and publishers/content creators
- SGSarah Guo
or not, uh, chatbot experiences, um, answers are really compelling to consumers, how do you see the relationship between search and content producers changing in the long term? Right, if these summaries take traffic from publishers, do we lose the incentive to publish content on the internet?
- SRSridhar Ramaswamy
I think that's one of the big unknowns. I think what is going to happen is that some of the larger content creators, and I would put people like Reddit and Quora, some of the forward-thinking ones, very much in that bucket, are going to say, "We want to be part of search, but we don't really want to be part of your answers. Taking our data and sticking it into LLMs is not really allowed by our crawl policy." But smaller publishers are not really going to be able to do this. The bigger ones are going to have things like their own chatbots so that you can browse Reddit content or Quora content. So I liken the current moment to dropping a bomb, a giant impulse, at the center of how a lot of us get information. This is going to radiate out from here to a whole bunch of sites, to the content ecosystem. It's a little hard in my mind to predict. It does feel like there might be more centralization or more consolidation when it comes to content creation. Your average small blog, which could subsidize itself or monetize itself with advertising, is going to find it hard to compete in this answer world, especially if the expected experience for everybody is going to
- 30:16 – 35:35
Sridhar’s predictions on how AI will disrupt current ecosystems
- SRSridhar Ramaswamy
be, "I don't really want to read giant pages, um, I want to be like talking to you. Give me- give me a bit of a summary of what you're going to say. Then I'll ask follow-up questions." Um, all those experiences are possible but not for every blog that there is. So I think there is potentially a very different platform that is going to evolve for how content is going to be created that looks a little bit different, um, from how it is today.
- EGElad Gil
You know, when you were at Google, your team was doing machine learning and AI at a scale that I think roughly didn't exist anywhere else. And you were very forward-thinking in terms of then applying really interesting cutting-edge technologies at Neeva and creating one of the first and most interesting LLM-based search engines, right? Which I think is super exciting work. What else are you predicting gets most disrupted within the AI world beyond search? Or what are some areas that you think are coming over the next few years?
- SRSridhar Ramaswamy
I mean, we talked about how content is going to get disrupted. I'm not even talking about synthetic content; yes, there will be synthetic content, but I think there will be techniques for detecting it, and that's a cat-and-mouse game. But an obvious place where content generation shows up, (laughs) ironically, is going to be advertising. I can see how personalized advertising plays a pretty big role, especially when it gets to be multimodal. I joke to people that Michael Jordan is going to be telling you to buy his Air Jordans, look you in the eye, speak your name, and so on and so forth. So advertising, with its closed loop for optimization and its relentless focus on efficiency, is actually a natural area. I'm not saying there isn't going to be more; obviously there are a lot of companies saying things like, "Oh, we can apply LLM technology to every other information function, whether it's mail or how we consume documents." But what I find interesting is that we have a set of incumbent technology companies that are actually very smart and very driven. For Microsoft to be this innovative this late into the game is remarkable; you don't hear about stuff like that from IBM, not at this scale of consumers and the whole world. So I think they're all going to react pretty quickly and incorporate a lot of it. So I don't know how much pure SaaS innovation there is going to be on products that we take for granted. I'm not saying there's not going to be any, but it's a little bit harder. One of the areas I'm personally very excited about is the generalization concept that I spoke about earlier, which is, if you think of LLMs as, like, machine language, then the natural thing is: how do you combine them with the various tools that we use, in terms of search engines, calculators, APIs, programs, other websites? So I think action transformers are going to be an incredibly powerful area. The technology is very nascent, so unlike, say, OpenAI's ability to crank out new generations of LLMs, I don't think that tech is yet at a point where people can build lots of applications on top of it. But to me, that is potentially a big breakthrough, not just for things like RPA, but also potentially for, hey, can you create an AI SRE? Can you create an AI code reviewer? Can you create, fill in the blanks? I think that's incredibly exciting, but I think the technology is also quite a bit more nascent than what we have just come to expect will happen with language models.
- EGElad Gil
Yeah, the agentization of the world is a very exciting future. So we'll wait with bated breath. As we wrap up, is there anything else you'd like to talk about that we didn't touch on?
- SRSridhar Ramaswamy
You know, it's trite, it's been repeated, but as a technologist, this is a really exciting moment. I do think this is powerful new technology, and it's also getting democratized very rapidly. My take is that WhatsApp was this seminal moment for mobile computing: a team of 30 people could create a product for the whole world. To me, that represented the power of mobile platforms. And if, two years from now, whatever, three 20-year-old college kids are able to build a brand new application that uses the things that we know for sure, whether it's web servers or databases, but also language models in a fundamental way, and we say, "Wow, we never thought of that," you know, that feels very possible. That is what is really exciting about where we are. In the meanwhile, super excited for where we are able to take search with Neeva, and I appreciate all your wisdom and support.
- SGSarah Guo
I'm counting on that to happen, actually. And I think Elad thinks it will, too. Sridhar, incredible conversation as always. Thank you for joining us on the podcast. We appreciate it.
- SRSridhar Ramaswamy
Thank you, Sarah. Thank you, Elad.
- EGElad Gil
Thanks for joining us. (instrumental music)
Episode duration: 35:35