The Twenty Minute VC

Aravind Srinivas: Will Foundation Models Commoditise & Diminishing Returns in Model Performance | E1161

Aravind Srinivas is the Co-Founder and CEO @ Perplexity, where he is on a mission to build the world’s most knowledge-centric company. Recent reports have suggested the company is raising $250M at a $3BN valuation, and the company’s cap table currently includes all-stars such as Jeff Bezos, Elad Gil, Nat Friedman, Tobi Lutke, Yann LeCun, Naval Ravikant, Paul Buchheit and Andrej Karpathy. Prior to founding Perplexity, Aravind was a Research Scientist @ OpenAI and before that a research intern at both Google and DeepMind.

-----------------------------------------------

Timestamps:

(00:00) Intro
(00:46) AI Passion Journey
(05:35) Addressing Diminishing Returns in AI Model Performance
(08:16) The Future of AI: Specialized Models & Data Curation
(11:28) Advancing AI Reasoning Quality
(18:21) The Challenge of AI Memory
(20:37) The Future of Foundation Models in AI
(27:39) AI Models & Cloud Provider Acquisitions
(31:31) Navigating Capital Competition in the AI Industry
(40:30) Timing the Expansion into Enterprise Division
(47:47) Fundraising Process
(51:03) Predicting Perplexity's Dominant Monetization Engine
(54:35) Quick-Fire

-----------------------------------------------

In Today’s Episode with Aravind Srinivas:

1. Are We Reaching a Stage of Diminishing Returns with Models: Have we reached a stage where more compute will not result in a proportional improvement in model performance? What are the most exciting new modalities we will see breakthroughs in? Is voice the radical new modality that everyone thinks it is and OpenAI demoed? What will it take, and when will we have true reasoning in models?

2. Are Foundation Models Commoditising: What is the end state for the foundation model layer? Will we see the specialisation of models? Will different models be used for different things? Is there room for a new foundation model to be born? Is it VC-backable? Why does Aravind believe that two players will win this layer? What happens to the rest? What is needed to win in this layer?

3. How to Survive in a World of Incumbents: Funding the Machine: How can any of the current players compete in a world where Microsoft has $300M in free cash flow per day? How much money does one need to build a foundation model today? Are the barriers lowering? Why does Aravind argue that talent is not simply a game of cash?

4. From Burning Money to Printing It: What does Aravind believe are the four core monetisation methods for Perplexity? Why does Aravind think that advertising will be their largest? Why does Aravind think that consumer subscription is not a very good business for them? Is Aravind concerned about having to build an enterprise go-to-market? What will it take to have a super-successful API money-printing-machine business?

-----------------------------------------------

Subscribe on Spotify: https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465
Follow Harry Stebbings on Twitter: https://twitter.com/HarryStebbings
Follow Aravind Srinivas on Twitter: https://twitter.com/AravSrinivas
Follow 20VC on Instagram: https://www.instagram.com/20vchq
Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok
Visit our Website: https://www.20vc.com
Subscribe to our Newsletter: https://www.thetwentyminutevc.com/contact

-----------------------------------------------

#20vc #harrystebbings #aravindsrinivas #perplexity #ai #founder #ceo #venturecapital #startup #openai #chatgpt #google #whatsapp #deepmind

Aravind Srinivas (guest) · Harry Stebbings (host)
Jun 5, 2024 · 1h 3m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–0:46

    Intro

    1. AS

      Today's models are just giving you the output. Tomorrow's models will start with an output, reason, elicit feedback from the world, go back, improve the reasoning. That is the beginning of the real reasoning era. The biggest beneficiaries of the commoditization of foundation models are the application layer companies.

    2. HS

      Ready to go? (upbeat music) Aravind, I'm so excited for this. I've been looking forward to this one. So first off, thank you so much for joining me today.

    3. AS

      Thank you for having me on, Harry. I've watched all of your sh- uh, episodes, so looking forward to it.

    4. HS

      That is very, very kind of you, my friend. Listen, I want to start with a little bit

  2. 0:46–5:35

    AI Passion Journey

    1. HS

      on you. How did you first fall in love with AI and realize that actually this was what you wanted to do and spend the majority of your career on?

    2. AS

      It was lit- lit- na- more like an accident. Um, I was just yet another electrical engineering or computer science undergrad doing my courses and doing some interesting projects alongside. There was a point when one of my, um, friends in undergrad told me, "Hey, there's this, uh, contest, um, where you could win, win some prize if, if you came first." And I think I was like, you know, kind of like in need of money, because I wasn't sure, uh, I was gonna get an internship, so I, I tried, tried the contest out. And, um, it was a machine learning contest, but I didn't even know what machine learning was.

    3. HS

      (laughs)

    4. AS

      Um, all I knew from that guy was that, hey, you're gonna be given some data, and you can, uh, use some of the patterns in the data and use it to make predictions on, you know, held-out data that you don't have access to. A server will have it. You submit your algorithm, and it'll score against what is correct and what you predict, and whoever wins the most number of correct predictions wins the contest and you get the prize. That, that's the extent to which I, I was told. And I go and, um, check out this library called scikit-learn. It's a very popular machine learning library. And I have literally no idea what any of these words mean, like decision trees and random forests. Like, none of these things made any sense to me. Um, and so like, I, I just like, literally just did what an AI would do, brute force, random search, but as a human. Did all that, and, uh, we won the contest. I won the contest, and then that gave me a lot of confidence. Okay, uh, I beat people who actually knew machine learning in it, and that gave me a lot of confidence that like, this is something I could be pretty good at. I remember, uh, Sam Altman once telling me... I asked him this question like, uh, two, three years ago. "Hey, like, how do you identify as something where you're, you're naturally good at?" And he said, "Whatever comes easy to you, but seems hard to other people."
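The contest setup he describes (fit on given data, submit predictions, get scored against held-out labels on a server) can be put in miniature. Everything below, the dataset and the candidate rules, is made up for illustration, not the actual contest; it just shows the "brute force, random search, but as a human" strategy of trying every candidate predictor and keeping whichever scores best:

```python
# Toy sketch of a prediction contest: you see training pairs, a server
# holds the held-out pairs and scores your predictions against them.
# Brute-force strategy: try every candidate rule, keep the best scorer.

train = [(1, 0), (2, 0), (3, 1), (4, 1)]     # (feature, label) pairs you see
held_out = [(1.5, 0), (3.5, 1), (2.5, 1)]    # server-side pairs used for scoring

# candidate "models": name -> prediction rule (all illustrative)
candidates = {
    "always_zero": lambda x: 0,
    "threshold_at_2.5": lambda x: 1 if x > 2.5 else 0,
    "nearest_neighbor": lambda x: min(train, key=lambda p: abs(p[0] - x))[1],
}

def accuracy(rule, data):
    """Fraction of correct predictions, as the contest server would compute."""
    return sum(rule(x) == y for x, y in data) / len(data)

# brute force: score every candidate on the held-out set, keep the best
scores = {name: accuracy(rule, held_out) for name, rule in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

In a real contest you never see the held-out labels; the server-side scoring above stands in for that.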

    5. HS

      Huh.

    6. AS

      Like, that's a good heuristic to identify things that you could be like mu plus two sigma at compared to the rest. So I felt like, okay, this machine learning is a good, good, uh, thing. It was not called AI. Um, so I, I got into it and I did all the, you know, courses, Pattern Recognition and Machine Learning, the book written by Christopher Bishop. I bought it secondhand in India, uh, for like, you know, something like $2, $3 or some... and, and started reading it. And I really enjoyed it. Like it was pretty mathematical, but also intuitive at the same time. Kind of the sweet spot I really enjoy. And that got me access to a professor who was, uh, a Rich Sutton and Andy Barto student. Uh, and, and he was teaching at my undergrad institute. And, uh, I told him to advise me, give me a project. And he was, um, a reinforcement learning guy. So reinforcement learning is when I actually got into AI. Um, because we've all written AI for like checkers or like tic-tac-toe or something like that, like, you know... Or, or even when you play chess as a kid, when you're playing against a computer, you al- you always ask this question, "How does the computer play? Like, what is it like?" And then they go, "Oh yeah, it's like an AI, you don't worry about it." So we use "AI" loosely there, uh, but the real definition of AI, what it is, like, oh, it's an agent, it's, it's an environment, receives a reward signal, it optimizes for an objective. All that framework, mathematically, uh, made sense to me once I studied RL. And then he told me, uh, at the end of the class, "Hey, I have a friend of mine, uh, from UK. Uh, his name is David Silver. You know, we used to know each other from PhD days, and, and his startup just got bought by Google, uh, for like half a billion dollars, because they wrote this paper that learned to play Atari games just from the screen pixels. Um, and, um, um, they've open-sourced the code. 
Why don't you take it and, and now figure out how to play ev- like, like, uh, all the games simultaneously? Like not just one single game. Uh, you learned to play Pong. Uh, you should be able to play Breakout much faster than learning to play Breakout from scratch." Transfer learning. So that was the first project I actually worked on. I loved the idea of, um, um, all the papers that DeepMind did and I would just like literally be in the lab all the time and, you know, keep reading their papers, trying to like implement them, um, and like borrowing gaming GPUs from other people in the lab, uh, and using it to train neural nets on it. So that was the start and like, you know, I, I really enjoyed those days.

  3. 5:35–8:16

    Addressing Diminishing Returns in AI Model Performance

    1. AS

    2. HS

      I was thinking before this, like what are the single most pressing questions right now? And what do I most want to ask? And I think the first one that came to mind for me, and I bluntly had so many people message me when I put out about our show, the first one was one of diminishing returns. And when we look at model performance, I think we've always had this kind of belief that you throw more compute and you get-

    3. AS

      Yeah.

    4. HS

      ... much better model performance.

    5. AS

      Yeah.

    6. HS

      Do you think we've gotten to a stage now where we're starting to see diminishing returns?

    7. AS

      I think it's a nuanced answer. I can just say no-... and you'd be like, "Okay, if I..." Brute force still works, but that's not the reality either. Like, it's not like if I, if I suddenly came and said, "Hey, Harry, take my $500 million, uh, and, and, and go build a big cluster. Uh, take, like, uh, you know, like trillion tokens and, and get a model better than OpenAI." It's not gonna happen like that. There is still some alpha left in making these models bigger and training them on more tokens. But you would only get the bang for the buck if you put a lot of effort into, like, curating the data. Otherwise, it's just not worth it. Like, I know so many research labs, I can't obviously mention who they are, but who trained really big models on a lot of data and ended up with nothing. It's a lot about what data you train on, how you mix, uh, English and, like, other languages and code and, like, math, um, and, like, all the chain of thought reasoning. And then how does it play out in the scaling law, like in terms of Chinchilla optimality? And, like, we later discovered even Chinchilla was not optimal. It was just this guideline. And, and then how do the mixture-of-experts models, like, be more computationally efficient? All these things matter and, like, um, that's where I think, like, you know, those who do it right, those who get these 128 details right are the ones who, uh, end up benefiting more from more scale, and that happens to be, like, three or four labs at this point. And, you know, like, I think I'll just give you an example. Don't, uh, uh, judge me here, judge Arthur of Mistral. Uh, like, like, when xAI released a model, uh, the first open-source Grok, uh, Arthur tweeted saying like, uh, "That's a lot of superfluous parameters," because the model was 300B or something and was not even as good as Mistral's, uh, like, 8x7B, 56B. So you could, you could train a model that's, like, 6X larger and still end up with a worse model. You could have spent a lot more, like, money and ended up with a worse model.
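The Chinchilla guideline he refers to reduces to back-of-envelope arithmetic. A common reading of the result is roughly 20 training tokens per parameter, with training compute approximated by the standard 6 × params × tokens rule; both are rules of thumb, and the 70B model size below is just an illustrative figure, not something from the conversation:

```python
# Back-of-envelope Chinchilla arithmetic: the compute-optimal data budget
# grows with model size (roughly ~20 tokens per parameter), and dense
# transformer training compute is commonly approximated as 6*N*D FLOPs.
# Both constants are approximations from the scaling-law literature.

TOKENS_PER_PARAM = 20      # approximate Chinchilla-optimal ratio
FLOPS_PER_PARAM_TOKEN = 6  # standard 6ND training-compute rule

def optimal_tokens(params):
    """Roughly compute-optimal number of training tokens for a model size."""
    return TOKENS_PER_PARAM * params

def training_flops(params, tokens):
    """Approximate training compute for a dense transformer."""
    return FLOPS_PER_PARAM_TOKEN * params * tokens

params = 70e9                     # e.g. a 70B-parameter model (illustrative)
tokens = optimal_tokens(params)   # ~1.4 trillion tokens
flops = training_flops(params, tokens)
print(f"{tokens:.2e} tokens, {flops:.2e} training FLOPs")
```

This is also the arithmetic behind the "superfluous parameters" jab: a much larger model trained on too few tokens spends more compute for a worse result than a smaller, longer-trained one.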

  4. 8:16–11:28

    The Future of AI: Specialized Models & Data Curation

    1. AS

    2. HS

      If you talk about the kind of curation of data there as kind of the central factor in terms of determining quality of performance, I, I had Reid Hoffman on the show actually earlier this week, and he said actually that we will see the kind of verticalization of models, that you will use different models for different things. Is that where it leads to, then? Is that what you're pointing towards?

    3. AS

      No, um, and I, I actually think that viewpoint is flawed. Uh, I, I used to think that'll happen too, but I can give you an, another example that kind of defeats that purpose. Um, Bloomberg spent a lot of money training Bloomberg GPT, if you remember. Uh, they w- they even wrote a paper on it saying they trained their own foundation model, and that model, uh, is, is beaten convincingly by, like, a GPT-4, on all the finance benchmarks.

    4. HS

      How do we know that's not case specific? It could be they just bluntly approached it in the wrong way, they didn't have a good enough team, whatever that is. It doesn't necessarily disprove verticalization of models, does it?

    5. AS

      Uh, you know, if they could have had a better team, they could have ended up with a better model, no questions about it. The question I'm trying to pose here is that what is the magic in these models? Where is it coming from? These models are magical. Like, you're, you're not training them for what you're using them for at test time. Like, the way you prompt and use these models as if they were human in the chat window is not what they were trained to do. They were just trained to predict the next token on the internet. Sure, they were fine-tuned a little bit to be good at chat, to be good at instruction following, all those things, definitely, but that is just a very small amount of compute that was applied to these models. So what makes these models magical is the general purpose emergent capabilities, the fact that they can do things without being taught how to do it, or they can catch things on the fly with some little bit of prompt instructions. Now, that doesn't come from any domain specificity. It comes from, uh, the emergence you get from training on so much. These neural nets are amazing, that if you just throw a very diverse set of data at them, um, they pattern match on the abstract skill required to be good at all of them at once. And that abstract skill, that, that abstract IQ, is what is making these models amazing for you, uh, on practical production use cases. So when you are saying, "Oh, I'm just gonna go and make it domain specific," how many tokens do you even have in the domain? Like, think about it. Code is probably the only domain that actually has a lot of tokens. Um, you can throw, like, a lot of enterprise data at a model and say, "I have a lot of internal data that nobody else has," but that doesn't mean that these models will absorb a new kind of reasoning that they couldn't get from the internet. 
It's very, it's, it's, it's like one of those things that very few people understand, why are these models even good at reasoning? It's not well understood. Is it because they're training on math? Is it because they're training on code? And even that is not well understood today. Like, how do you train the model on just, um, textbooks where you have not gotten reasoning? Like, these are questions that we don't yet have good answers

  5. 11:28–18:21

    Advancing AI Reasoning Quality

    1. AS

      to.

    2. HS

      Do you think models are good at reasoning, one, and then, like, I think, like, a breakthrough in reasoning will be one of the biggest kind of, um, breakthrough moments in the next wave. How do you feel about where we are today in terms of quality of reasoning, and what is required to break through in the next wave of reasoning quality?

    3. AS

      It really depends on what you call being good at reasoning. Are, are they better than an eighth grader? I, I think so. Uh, are they better than, like, a 12th grader? Are, are, are they better than, like, 75% of the 12th graders? Most likely. Uh, are they, like, gonna win the IMO or IOI? No, definitely not. So there is, like, a spectrum, right? Of people good at reasoning even among humans, and... I'm sure, like, AI is, like, somewhere, like, in the median right now, uh, of, like, high schoolers. Um, can it get to, like, a median college undergrad? Definitely, it seems like we're on the pathway to getting there. Would it be like talking to, um, Faraday or Einstein? Not anytime soon. Uh, I think that's what some people call artificial superintelligence. Like, like, um, really the mu plus seven sigma sort of people on the planet. Uh, and like, I think when we achieve that, uh, it'll break all these $20-a-month business models. Like would you, um... I know like, you know, businessmen in the past like... Have you watched this movie, uh, The Prestige?

    4. HS

      No, I haven't.

    5. AS

      Uh, like the movie but like, like by Nolan where the- there's like magicians, you know, competing with each other.

    6. HS

      Okay.

    7. AS

      Um, in that like there's Edison, and, and this magician wants to steal a trick from Edison on how to make things disappear. Uh... Sorry, not Edison, um, um, Tesla. And, and, uh, he's willing to pay like a lot of money just for the one trick. And I think that's the sort of thing you would get to if models got really good at reasoning, where, uh, for the output alone, you're not even paying for a monthly subscription. You're paying for one single session, one single chat, one single output. You would pay a lot of money. You're an investor, right?

    8. HS

      Yeah, I am.

    9. AS

      If I literally told you which companies, "Hey, Harry, listen, I, I got all the insider information and all the revenues," blah, blah, blah. If I came and told you, um, you know, or, or let's say even if I didn't have any insider information, if I was such a good reasoner and I gave you this, "Harry, this is gonna be the set of companies that actually matter two years from now," and I gave you such amazing reasoning that you probably would have had to spend like two months talking to 100 people, then would you have paid like, you know, 10K for just that a- answer?

    10. HS

      You'd probably pay 10 million.

    11. AS

      Exactly. So even if you pay 1% of the ROI, it'll be worth it. Or if this, if, if say, uh, people at the level of say Demis Hassabis, uh, they wanna... Look, they're, they're like incredibly smart. Like who's gonna advise Demis? You can count the number of people like in your hand, right? Um, and if Demis feels like there's an AI that can advise him, what's the value of that AI? It breaks all the mental models of like $20 a month. I think that's what is lacking. If you say, do we have true reasoning? And if, if, if the benchmark for true reasoning is an AI that can advise Demis Hassabis, we don't have that today, but there are AIs that can advise, uh, a person maybe making 120K a year in UK. Uh, that I think we can get there. But like this is where like you, you gotta like clearly be precise on what good reasoning is.

    12. HS

      I understand that in terms of the precision around good reasoning. When you think about the trajectory of reasoning quality, I know it's a shit question, forgive me for it, how do you think about the timeline there? Do you think it goes up, flat, up? Is it a continuous gradual increase? How do you think about the trajectory and slope of reasoning improvement?

    13. AS

      I don't think we know the secret sauce yet. At least according to, like, the news media reports, um, they claim like OpenAI has some new thing called Q*, you know, it came out during the whole board saga. Um, and like that's the sort of thing that they're working on to like make these models like use their own data to bootstrap and make themselves more intelligent. And then, um, xAI recently hired this guy, um, Eric Zelikman from, from Stanford, who's written these papers on something called STaR, um, the self-taught reasoner, right? Like it, it's basically taking the model itself, make the model explain its own outputs, and then, um, whatever is the right output, you train on that. Or if it was the wrong output, you take the right output, and then you ask the model to explain why that was right and train on that. So you basically are training on not just the output, but also the explanation that was used to achieve the output. And if you can do that, you're basically training a model that can think and reason and get to an output, see if it's correct, go back, reason again, and iterate. That is what is lacking in today's models. Today's models are just giving you the output. Tomorrow's models will start with an output, reason, elicit feedback from the world, go back, improve the reasoning, and until they converge, they'll keep on trying to improve the output. And I think when that is achieved, I don't know when that's gonna be achieved. Maybe it'll be achieved in a year or two, maybe it'll take three, four years. But I think when that is achieved, that is the beginning of, I would say, the real reasoning era, where we'll figure out how to make these things more efficient, we'll be, uh, throwing a lot... The only problem here I see is this is a game that won't be played by academics like before, because just to do the inference compute, to do all this reasoning, like getting an output, going back and reasoning, uh, building a rationale, then going back and getting another output, just to even do this process takes you a lot of inference compute. You have to pay money for this. And so even a single experiment costs you a lot of money until you arrive at the truth of the algorithm, and then that algorithm to run it is gonna cost you a lot of money to get all the data, synthetic data to train on. So I, I feel like this is where, uh, companies with a lot of capital are gonna be way more advantaged in pursuing this research. And so if at all it happens that there are only like four or five contenders to do this, and, uh, whoever ends up with the algorithm first has a massive advantage because it seems too good to be true sort of thing, where once you crack it... you can just keep throwing more compute at it and, like, get a big lead over the other models.
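The STaR loop described in this turn (keep rationales that reach the right answer; for wrong answers, generate a rationale given the correct answer; then train on question-rationale-answer triples) can be sketched with a stand-in model. No real LLM is involved here; `toy_model` is a hypothetical placeholder that fakes both generation and the "rationalization" hint step:

```python
# Toy sketch of one STaR ("Self-Taught Reasoner") iteration: sample a
# rationale and answer, keep rationales that reach the gold answer, and
# for wrong answers "rationalize" by generating a rationale conditioned
# on the gold answer. The resulting triples would be the fine-tuning set.

def toy_model(question, hint=None):
    """Stand-in 'model': returns (rationale, answer). With a hint, it
    reverse-engineers a rationale for the given (gold) answer."""
    a, b = question
    if hint is not None:
        return (f"{a} + {b} must equal {hint}", hint)
    guess = a + b if a + b < 10 else a + b - 1   # imperfect solver on purpose
    return (f"adding {a} and {b} gives {guess}", guess)

def star_iteration(problems):
    """Collect (question, rationale, answer) triples to train on."""
    train_set = []
    for question, gold in problems:
        rationale, answer = toy_model(question)
        if answer == gold:                       # correct: keep the rationale
            train_set.append((question, rationale, answer))
        else:                                    # wrong: rationalize the gold answer
            rationale, answer = toy_model(question, hint=gold)
            train_set.append((question, rationale, answer))
    return train_set

problems = [((2, 3), 5), ((6, 7), 13)]
data = star_iteration(problems)
print(data)
```

In the real method the collected triples are used to fine-tune the model, and the loop repeats with the improved model; the expensive part, as he notes, is all the sampling (inference compute) inside each iteration.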

  6. 18:21–20:37

    The Challenge of AI Memory

    1. AS

    2. HS

      So we are absolutely gonna talk about kind of the funding required to, to finance these models. I do just wanna stay on performance and capabilities. Why is it so difficult to have models with memory? Everyone says, "Ah, memory's the challenge." I don't understand why. Can you help me?

    3. AS

      There are two things here to consider. What, what does memory mean? Is it, like, uh, sufficiently long context that's practical for most use cases or is it infinite long context? Like basically there's an AI for Harry that, uh, remembers all your life. Every single aspect of it, every single detail. That is, like, infinite memory, and I, I think, like, we don't even have the algorithms for it yet today. And then there's another AI that's sort of, like, it's like Gmail sort of a thing where, you know, it starts off with, like, a sufficiently large storage that, like, like, is practical enough and it keeps expanding over time and then now it's, like, throttled beyond which you have to pay $10 a month or something, right? That seems more like where we are headed right now. Like, like, people are expanding the token window from 128K, like, started with 32K, then it goes to, like, a million, then DeepMind announced 2 million. So, so I feel like that is already good enough where at least we can prioritize and throw out, like, what's not relevant and, like, keep using memory and, as, as you said, that's not very hard to do. There is one small challenge there though. It'll be figured out, but today's case is that we have achieved long context before achieving good instruction following. So you can dump a lot into your prompt, you, you have the memory, but models can hallucinate or get confused because of so much information to focus on. You need to ensure that the instruction-following capability has no degradation despite adding all this long context capability. I think that's not the case today, which is why, like, these models are not so good that, like, you know, they can just write an entire code base yet. But all that will happen. I think, I think, I think it's, it's just a matter of time before, you know, they run another training run and, like, figure out all these bugs. But the second thing I'm not sure how to do. 
Like infinite context, I'm not sure.
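The "prioritize and throw out what's not relevant" memory he sketches is, in effect, a bounded context buffer filled in relevance order. A minimal sketch, with a deliberately crude keyword-overlap relevance score (real systems would use embeddings), and made-up snippets for illustration:

```python
# Toy sketch of bounded-context memory: rank stored snippets by relevance
# to the current query, drop the irrelevant ones, and pack the rest into
# a fixed token budget, highest relevance first. The relevance function
# (keyword overlap) and the snippets are purely illustrative.

def fit_context(memories, query, budget):
    """Select past snippets for the prompt until the token budget is full."""
    def relevance(snippet):
        return len(set(snippet.lower().split()) & set(query.lower().split()))

    kept, used = [], 0
    for snippet in sorted(memories, key=relevance, reverse=True):
        if relevance(snippet) == 0:       # throw out what's not relevant at all
            continue
        cost = len(snippet.split())       # crude stand-in for a token count
        if used + cost <= budget:
            kept.append(snippet)
            used += cost
    return kept

memories = [
    "Harry asked about diminishing returns in model scaling",
    "The weather in London was rainy",
    "Aravind compared context windows of 128K and 2 million tokens",
]
print(fit_context(memories, "context windows and model scaling", budget=16))
```

The hard case he flags, truly infinite memory, is exactly what this kind of eviction scheme does not solve: anything evicted is gone.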

  7. 20:37–27:39

    The Future of Foundation Models in AI

    1. AS

    2. HS

      When we look at the different foundation model providers, I do just wanna kinda move to the ecosystem itself, and before we touch on the funding itself, I just look at it and everyone says that, you know, we're seeing the commoditization of foundation models as you know. I'm just interested to hear your thoughts. How do you see the end state for the foundational model layer? Are they getting commoditized as people say?

    3. AS

      I think today, uh, the word comm- commoditization is sort of true. Uh, I mean, like, i- it's sort of true in the sense a model that's like 75, like GPT-3.75 level model, uh, is commoditized. There are, like, too many models like that today on the market. Some open source and some closed source. I think GPT-4 quality models are not yet commoditized. There's only probably one or two alternatives for the people today like Claude Opus or some P- Gemini, let's say. If it's just, like, two or three alternatives, it's not, I wouldn't call it a commodity yet. But will it be commoditized? I think so. But by the time it gets commoditized, would there be a 4.5 or a 5 that's way better? TBD. The training run is happening. My, my prediction would be that it would be another great model after 4 that's, like, very good. Like, I wouldn't say GPT-4o is, like, a lot smarter than GPT-4 Turbo. It's, it's more reliable. It, it's, it's, it's better, it's faster, cheaper, but it's not like how 4 blew 3.5 out of the water. That sort of thing, uh, uh, if... Whether 5 can do that to 4 would answer your question of whether these models are getting commoditized.

    4. HS

      Is this not like a bad business unlike any other before, whereby you, every six months, have your core product basically made redundant?

    5. AS

      Is that true though? Like, I saw your interview with Altman and l- like, like, the, uh, Brad Lightcap, but, uh, is it actually true that, like, a product gets redundant because the model gets an upgrade?

    6. HS

      I think so. Is it not?

    7. AS

      Like, which products of products?

    8. HS

      Is GPT-3, is GPT-3 not, like, redundant now that you've got GPT-4?

    9. AS

      Yeah, but your product is never the model. Um, I mean, there's, like... Okay, so, uh, let's maybe decouple this. If there are companies that are working on foundation model competitor to OpenAI-

    10. HS

      Mm-hmm.

    11. AS

      ... it is definitely, like, one of the worst arenas to be part of. Almost like I think there are five men standing today sort of a thing. Um, Google and Anthropic, Meta, uh, Mistral, and, and, and you can say, uh, uh, to, like, after xAI's new funding round maybe you can include them too. But that, that's a game that, like, is so hard to play and I'm very impressed that Mistral was even in the arena with, like, s- 10X lower capital than the rest.

    12. HS

      Are you not in that same arena?

    13. AS

      We post-train models. We post-train them. We're not foundation model trainers. For example, we can take any model that's there on the market today and shape them to be really good at what our product does, including, like, open source models and, like, making them really good. Have we trained a base model? When you say there's, like, a LLaMA 3 70B, there is a base model that's just trained on predicting the next token, and then there's the supervised fine-tuning and RLHF steps that train them to be very good at chat, and, like, instruction following, summarization, and, like, translation and all these skills. Now that... The second part is what adds magic to the product. Without that, you're not gonna have these good chatbots. But the first part has the base IQ, that builds the base IQ for these models. We're not doing the first part... it's very, very like, it's a losing game almost to play the first part because every time you end up finishing a large training run, you've burnt a lot of money, you have a great model, and then you watch it be, like, destroyed on the leaderboard by the next update. And then you have to again catch up. You go spend more money. So where, how are you recovering all that money back? You recover that through the APIs, right? But if nobody wants to use your APIs or somebody else is just offering a better model at a cheaper price and faster, that's why I think it's a hard game. Now, it's not a hard game because it's hard to train this model. Sure, the science behind it and the people, difficult to assemble, but ROI-wise, like business-wise, it's very difficult to compete here.

    14. HS

      Is that not what we're saying though about the commoditization of models? Being when you get to a stage, oh shit, everyone's at that stage, we have to do the same again, we have to do the same again, the same again. Your, your model becomes redundant.

    15. AS

      E- exactly. So that's why I said I think the second-tier models, the models that are not the most cutting edge but cheap enough to like operate a business on top of it, will get commoditized. But there will be some frontier models that are like so smart, uh, or like so much better than the second-tier models, and I think those are still, that's still a game being played by like three or four people today.

    16. HS

      Okay. D- as a game being played by three or four people, does that end as three or four people or does it end as one person?

    17. AS

      I think the answer to that really lies on, uh, who cracks bootstrap reasoning. You know, the thing we talked about a little bit earlier about models using their own outputs to reason and improve! Whoever cracks that first, if they allocate all their capital on just scaling that up, I think it'll end up as one person. But if they're hedging, hedging, hedging, it won't end up as one person.

    18. HS

      Who do you think that person is most likely to be?

    19. AS

      It's likely to be, uh, OpenAI or Anthropic. And I, I can make a good case for both of them. Um, OpenAI because they are far ahead in terms of lead they had in, in doing these things first. Anthropic because they're algorithmically a superior company.

    20. HS

      Wow.

    21. AS

      Like they got whatever OpenAI got to with lower capital. They have better post-training and like, like things like that. So OpenAI is on the other hand like extremely, like, like ha- advantaged on capital and, uh, speed so i- i- it really is like a question of who, you know, which matters more. Is it, is it clever brains or, and like some amount of capital or is it good brains, lot of aggression, and lot of capital? If it's the second, it's OpenAI. If it's the first, it's Anthropic.

    22. HS

      If OpenAI didn't have the Microsoft partnership, would you choose Anthropic?

    23. AS

      Anthropic also has partnerships with Amazon and Google.

    24. HS

      Mm-hmm.

    25. AS

      So that's why I'm saying they're both, to a first approximation, equally well-funded organizations. And now xAI also has a pretty good amount of money and good talent, but they're lagging very far behind in terms of timelines.

  8. 27:39 - 31:31

    AI Models & Cloud Provider Acquisitions

    2. HS

      I think you're gonna see the large cloud providers realize that they need to acquire these models in different forms. They will continue their core cash-cow businesses as cloud providers, but they will acquire these models and add them in as complementary features to what they already provide. You'll see your Anthropics, your Coheres, your Adepts acquired or acqui-hired by these large cloud providers. Do you agree with me in that prediction of the next three to five years, in terms of how it shakes out with those acqui-hires?

    3. AS

      I don't think so. I don't have a prediction on Cohere, but I think with OpenAI and Anthropic, the value of those companies is not in the models they have. That's a first-order approximation. The second-order approximation is that it's in the machine that's building the machine: that specific group of people with all the tacit knowledge required to train these frontier models, to innovate algorithmically on what is likely to be the real reasoning breakthrough, plus the accumulation of compute they have. That's why they're valued at a price where the revenue and the valuation otherwise make no sense. I always think about valuation as how easy or difficult it would be to reassemble the whole thing, and the thing is not just the output; the thing is also the machine that gave you that output. When you say models are getting commoditized, so OpenAI and Anthropic are not that valuable, I disagree, because these are the same guys who will produce the next model. Is that talent getting commoditized? No. In fact, it's the opposite of commoditized: they're all being paid a lot of money to stay in these companies, and the knowledge stays with them because people don't publish anymore. I recently got to hang out with a very great researcher, and I even made a joke that the best research is the research that's not being published today; there's nothing to read on arXiv anymore. Even the guys at Stanford who wrote all these reasoning papers, Musk paid them a lot of money to work for him, so they're not gonna publish anymore. That's what's happening. The commodity is the model, not the people who produce the models, and the people are not a commodity yet.
      So that's why I feel these companies are valued a lot, and they have so much leverage that they won't get acquired. If these people don't wanna go work at a big company, and the big company needs the output to keep doing its business... Microsoft needs GPTs to sell to make Azure the number one cloud. AWS needs Claude to sell to retain its lead in the cloud market. So they have no need or desperation to get acquired, at least the first two; OpenAI and Anthropic, I don't think, are gonna get acquired. The flip side is if the models don't produce any breakthrough. Scientifically, it's not possible to just keep cramming more and more tokens at this and keep seeing the juice. That's when what you said is likely to happen. If, say, after even one year OpenAI doesn't have a better model, then the leverage goes away, because it's over: you've gotta actually produce a new thing, and if the people aren't able to produce it, their value goes down. We have to play it out; both of these things could be true. I think these guys will still produce breakthroughs, so that's why I have a different prediction. But time will tell us honestly

  9. 31:31 - 40:30

    Navigating Capital Competition in the AI Industry

    1. AS

      who's right.

    2. HS

      Can I ask you? We mentioned access to capital there. Obviously, OpenAI has slightly more than Anthropic. I think the thing that struck me was when I heard that Mistral's new funding round, in terms-

    3. AS

      Yeah.

    4. HS

      ... of size, was about 30 hours of Microsoft's free cash flow. Microsoft does $330 million of free cash flow per day. In a world where that is the case, not cynically but just genuinely, how does anyone compete? Being blunt, and we can take this out: you're rumored to be raising; rumored, you don't need to comment at all. The amount that you raise is just insignificant relative to Microsoft's free cash flow (laughs). How does one compete in that world?

    5. AS

      That's why you gotta build a business, right? First of all, let's separate the two things. If Microsoft is generating that much cash flow, why are they not able to poach all the OpenAI scientists or Mistral scientists to come work for Microsoft? They could take that money and ask ten of those people: come work here, I'll pay you a lot of money, you no longer need to work at OpenAI, just directly build the AIs here, and whatever GPUs I'm giving OpenAI I'll give to you directly. It's not happening, right? For a reason. People wanna work with the other best people, so it's not enough to get one person; you have to get the whole thing. That's why there were all those jokes, when the whole board drama was happening, that Satya acquired OpenAI at a small price because he almost got the whole team out. I think that's the difficulty here: cash flow doesn't change the dependence issue. If they can get these models from people other than these two companies, yes, that changes the equation a lot. They could get them from open source, sell the same models, and make the same amount of money with less spend. That would be bad news for the foundation models. As for what the way out of here is: you gotta build a business yourself. Fundamentally, every company that raises capital has to eventually build a business, or hope that its algorithmic promise stays forever. I would bet on those who are serious about building a business. OpenAI is building a business, for what it's worth. They have, whatever, $2 billion in revenue annually, which is higher than Snowflake, or at least as good as Snowflake.
      So they're not as capital-efficient as Snowflake, but they are in the same league in terms of recurring revenue, and growing faster. That shows you that if you are serious about not just training these models but also getting them to market through products and making revenue off them, there is a potential for you to be independent and self-sustaining.

    6. HS

      Are you focused on building a business today?

    7. AS

      Yeah.

    8. HS

      You said you're gonna move away from the £20 per user. That's exactly what you are today, £20 per month. Um, I know-

    9. AS

      Today, yeah.

    10. HS

      ... I'm a customer.

    11. AS

      And I don't think that business is actually that good.

    12. HS

      Why?

    13. AS

      It's not high-margin enough. If you can get to a Netflix or YouTube sort of user base, 50 to 100 million people paying for you, yeah, definitely that's a great business. But I don't think we are at a point where these AIs are so fundamental to people's lives that 100 million people are subscribing. If you can get there, if you can build a product that's not just AI but has a lot more to it, where people pay the monthly fee and retention is close to 100%, yes, that's a phenomenal business, and we will try to do that too. But all these great subscription businesses are also doing ads for a reason: margins. Look, whatever we criticize Google for, the greatest business model of the last 50 years is that click-based advertising. It's just an insanely good business model, 80% margins.

    14. HS

      What was the internal discussion with you and the team when you were talking about adding advertising in as a monetization engine? Just take me inside that conversation. How did it go and how did it net out?

    15. AS

      You know, there's the whole Larry and Sergey PageRank paper that said advertising is fundamentally incompatible with serving good results to the user in a search engine. They truly believed that, and I've read books that said they pushed back on introducing ads as much as possible, until they gave up due to investor pressure. We were like, "Look, let's be practical. This is the highest-margin business model ever invented, but let's do it in a way where we don't have to be as high-margin as Google." You don't have to aim for those 80% margins; as long as you can get a reasonably good high-margin business without failing in your duties to the user, be happy. Don't be greedy. That was our thinking, and we didn't actually have much debate on this. The question was: what is the way to do ads without corrupting the answer? You make sure that the answer is not influenced by the ads, and that the links you cite are not influenced by the ads. If you can ensure that, I think it's a great idea to explore. That's why we have other surface areas for ads too, like the Discover feature in Perplexity, which has a bunch of interesting threads to read every single day. That's just gonna be an endless scroll at some point, and Instagram does ads in that format, TikTok does ads in their format. So ads is a great business model, and when it's relevant, it's amazing. I've literally not met one single person who came and told me Instagram ads suck. It's actually pretty good. It's all about cracking the relevance code. If you crack the personalization and relevance code, ads is pretty amazing.

    16. HS

      Do you think you've cracked the relevance code?

    17. AS

      I'm saying we want to try. I'm not saying we have cracked it; if we had cracked it, I think we should be worth way more. First of all, it's a chicken-and-egg problem: it can only be cracked when you have a lot of users. Advertising is one of those funny things where there's no way it can work well when you don't have a lot of users, and then when you have a lot of users, it can work really well if you get all the details right. I was talking to Mark in recent months, and he told me how in advertising there are roughly three tiers. Tier one is Google, and tier one and a half is Meta, because even between Google and Meta, Google benefits from all the advertising everyone else does: once you discover the brand, you go to Google and click on the link they have. It's amazing how they benefit from everyone else's hard work all the time. And then there are companies like Twitter and Reddit and Snap, which are the third tier. He said the gap between these tiers is so high: one is almost at the peak of the mountain, the other is somewhere at the bottom. That is the extent to which ads are dominated by Google and Meta today. My point is that if, very early in our journey, we can get right the fundamental mistake that Google made, by not being overly greedy on one source of revenue and being diversified enough across subscriptions, advertising, APIs, and enterprise, I think we have a chance to build something that achieves the alignment between shareholders and users a lot more. Jeff Bezos has this quote, right, that asymptotically the shareholder and the user should be aligned; if not, you don't have a customer-focused business.
      This is where Google got it wrong, because asymptotically they couldn't achieve that alignment between the user, that is you using Google, and the shareholder. Wall Street loves it when Google puts in more ads. You hate

  10. 40:30 - 47:47

    Timing the Expansion into Enterprise Division

    1. AS

      it.

    2. HS

      You mentioned OpenAI is at two billion in revenue. A lot of that is enterprise, and they've built their enterprise business incredibly well. You kindly mentioned my show with Brad, where we actually touched on it. How do you think about when's the right time to build out Perplexity's enterprise division?

    3. AS

      The number one insight that motivated us to build this was: what is the most used enterprise tool today? Google. I mean, email too; it's G Suite, and it's part of the enterprise offering.

    4. HS

      Okay, but Google.

    5. AS

      But, but, but-

    6. HS

      Let's, let's roll with it.

    7. AS

      Google, right? You search every single day at work. All that data, as in the specific queries, is something internal to your company, but nobody cares, because you need it; you cannot live without it. And you pay for it through your time and you pay for it through your data. Now, this changes in the AI-native search world, where people are always worried about data leaking to AIs. They don't care if data leaks to a traditional search engine, but if the search engine now has a lot of AI in it, they're worried. So we said, "Okay, if you wanna use Perplexity at work and your employer doesn't let you, we'll solve that problem for you." We'll offer an Enterprise Pro with compliance and security and data governance, literally the same product with all these security features. And that became our Enterprise Pro. Now, that's just the start. You need features that are more catered to the enterprise than to the consumer, and that's what we will build, and we wanna build it in a pretty differentiated way: rethink what internal search even means. Not just build pipes to every single enterprise tool like Slack or Notion, but really think about the ranking problem. Why is it harder for the enterprise than for the consumer? If we can build one UI where all the proprietary data, external data, internal data, and all the different models, open source and closed source, live on one single platform, and you can take your output, convert it into good, readable pages, organize it as a knowledge base, index it yourself, that can be a good enterprise offering, and I think we'll work on that. I'm not saying we'll succeed at it, but we'll try to do something.

    8. HS

      With total respect, my friend, are you nervous about building an enterprise product? When you look at the GTM, it is a very different motion. Enterprise is a big beast to get your head around.

    9. AS

      Mm-hmm.

    10. HS

      Um, and it's a challenge. Think about the scale of OpenAI's sales team. I know, I've got many friends in it. It's a big thing.

    11. AS

      Yeah.

    12. HS

      How do you think about getting your head around the GTM-building exercise of an enterprise division? And do people buy Perplexity Enterprise and OpenAI Enterprise, or is it either/or?

    13. AS

      My sense is that AI is still so early today that nobody's locked in or loyal to any particular enterprise tool in AI, and none of them even have a lock-in effect that makes your data live on one single tool. I'm not even talking about things like why it's hard to migrate from Snowflake to Databricks because of the SQL format: it's so different, and once you've written all your SQL queries in one format, it's very hard to change. There's nothing even like that in AI. The custom prompts you wrote for ChatGPT can be taken over to Perplexity very easily. So I think enterprises are still willing to tinker and experiment and try different tools. That said, if there is no differentiation, the one with the bigger brand and bigger team will win; in the beginning, they have an advantage. But is it game over? No. The game just begins today. This is exactly the whole wrapper thing: if the value you add on top of the model is very little, or the model is the one adding most of the value and all the stuff you built around it doesn't matter, then yes. But if you've built enough value around the model, value that is very difficult to replicate without coordinating a bunch of other hard-to-achieve engineering feats that are not just LLM-based, or that have a lot of human element involved, it is difficult to see a world where that is not valuable and people don't want it. Take the specific search thing: why was Google AI Overviews bad? They have the world's greatest index. They have the world's best models too. But it wasn't good enough. Or why do people, at least a good chunk of people, I'm not saying everyone, still think ChatGPT browsing is not as good as Perplexity, despite all the updates they've made over the last year?

    14. HS

      Why is Perplexity's browsing better than ChatGPT?

    15. AS

      I think it's just a lot of small details. I'm a big believer that those who can orchestrate models and data sources, build great UX, and keep innovating here all the time will survive this whole wrapper argument. It's gonna be difficult until you build a business, and everyone's always afraid you're gonna die. But as you are accumulating users and figuring out the business, it feels to me more like the biggest beneficiaries of the commoditization of foundation models are the application-layer companies.

    16. HS

      Why is that?

    17. AS

      If models get commoditized, then the price of the models goes down. Then those who directly reach the user, using those models, harnessing their power but packaging it into great product experience and utility value, and who directly own the relationship with the customers or the users, have a lot more advantage, because they are able to take something that's a commodity and sell it at a premium, which is a great business. If models get commoditized, I'm happy. If models don't get commoditized, I still wanna figure out a way to benefit. And that's why this is a great, difficult company to build. It's not something where you just hire an SVP of product from Twitter or Meta and ask them to figure out product for you. It's not easy: they don't have the mental models for what happens when the next AI model is so much better, or how to rethink the whole product strategy. Similarly, it's not something where you hire a great AI person and ask them to build product, because they're always gonna think the model is the most important thing and keep trying to do everything through the model. You need the right sweet spot of design and product and AI and search all together, and that assembly is not easy. It's pretty hard. And that's why we are able to do things as a wrapper that other people are not

  11. 47:47 - 51:03

    Fundraising Process

    1. AS

      able to do.

    2. HS

      Have you been surprised by the fundraising process?

    3. AS

      Fundraising processes are brutal. There are always these memes like, "Oh, if it's AI, people are just willing to write you the term sheet without even doing any diligence." Well, welcome; why don't you try to raise? It's pretty difficult, actually. Everyone's asking all the questions that people on Twitter roast wrappers with. "What happens if OpenAI does this? What happens if Google does this? Why would they not stop giving you models? How will you build your own models? How are you gonna build a search index that's really good? How do you compete on enterprise sales?" All these are questions everybody asks, and when you don't have a good model of the future yet, you have to give them good arguments. But at the end of the day, it's all arguments; nothing is there yet. One thing that we do have in our favor is a good track record of execution. We've been around for less than two years, and the amount of things we've shipped is quite a lot compared to the team size and funding we have.

    4. HS

      Of the cash raised, how much goes to compute? Like 50%? Like 75%? Just-

    5. AS

      I don't have the exact number, but first of all, let me give you two things. We have not spent a lot of money. It's not like most of the cash we raised has already gone to compute, no. But what I'm saying is, of whatever money we have spent, the majority has gone to compute. I don't have the exact percentage, but the majority is compute. And the compute is either us buying GPUs and serving models, or post-training models, or money we pay for APIs like Anthropic's or OpenAI's. That's why it's very advantageous to us not to train our own foundation models; if we were doing that too, most of the funding would have run out, because the way it works is you have to pay three years in advance to get a big cluster. You have to commit to that. It's not like all the money goes away immediately, but you have to commit to three years to get thousands of GPUs at once if you wanna compete in that game. On the other hand, because we're not doing that, we benefit from any commoditization in the models, and when that happens, we have all the money to go out and get users. And getting users not simply through marketing, but more in the Amazon Prime sort of way: giving a lot of great features at amazing prices, getting to retain you through superior product execution, and doing it intensely over a sustained period of time.

    6. NA

      Yeah.

    7. AS

      And then building a sufficiently large user base and brand loyalty. That is the model we are going for. And in such a world, advertising can be pretty powerful at that scale.

  12. 51:03 - 54:35

    Predicting Perplexity's Dominant Monetization Engine

    2. HS

      Every business has a core monetization engine. They have ancillaries, but there tends to be one which is dominant. When you look at, you know, uh, Perplexity in five years time, what is your dominant engine? Is it consumer subscription? Is it advertising? Is it enterprise?

    3. AS

      I would predict it'll be advertising. If we crack it, yes, it'll be advertising. If we don't crack it, if we haven't grown to that level in user base, or if we grew and didn't figure out how to advertise really well, I think it'll be the other two. Either way, we can be profitable; with advertising, we can be really, really profitable. And then you can ask me, "Hey, Aravind, why do you care about profits? Sam Altman doesn't care." But he doesn't care because he's not interested in focusing on the product as a business; he's trying to build AGI. He already said publicly in an interview that even if they spend $50 billion on AGI, it doesn't matter. So that's a different company. We shouldn't be seen as an OpenAI competitor at all. We're not an AGI lab. You can say Perplexity and ChatGPT are products in a similar space, and there's some competition for mindshare and users, but even that will be pretty clear two years from now. You're not gonna keep asking, "How is Perplexity different from ChatGPT?" Today you are, but two years from now, I don't think so. If that's still the case, one of us is just copying the other.

    4. HS

      What do you think is the best question you are never asked? You've done interviews before.

    5. AS

      Someone once asked me, "Why are you doing this?" And that's the sort of question where you don't actually know yourself. I think a lot of people give made-up answers, like, "Oh, I had an existential crisis. I needed to save humanity from extinction," or, "I need to preserve the light of consciousness, and so I thought about what's most important..." (laughs) These are the sorts of things I've seen entrepreneurs say, but the reality is you just sort of look up to some people, you wanna be like them, and you try to carve your career path according to what they have done, but then you end up figuring out the things you really like and you shape it to the style you want. At least that's how it's been for me. I have been a big fan of Larry Page, and I always wanted to do something of that scale of ambition. But that was not the reason we did a search engine; we started with something else completely. So that's a question I actually don't have a clear answer to, but I really like it, because it's a question worth asking yourself constantly: why are you even working on this? Steve Jobs has this thing, right? If you internalized death, and every morning you stood in front of the mirror and asked, "If today was my last day, would I still be doing this?", and the answer is yes, go ahead and give your best that day. If the answer is consistently no on a regular basis, you really have to rethink your life priorities. And for me, Perplexity is a hell yeah, every day. Even though it's painful, even though it's stressful and takes a toll on mind and body, I think it's worth it.

    6. HS

      You still look incredibly young, so don't worry. It hasn't aged you, Aravind. So, all good there, my friend.

    7. AS

      All right. Thank you.

    8. HS

      Uh, but I'd, I'd-

    9. AS

      I'm hiding my gray hair very cleverly.

    10. HS

      (laughs)

  13. 54:35 - 1:03:12

    Quick-Fire

    1. HS

      Listen, I do wanna do a quick-fire round. I say a short statement, you give me your immediate thoughts. And I'd love to start with: what have you changed your mind on most in the last 12 months?

    2. AS

      Taking a long-term view on people. I've seen some people not immediately hit the ground running, but given sufficient time, they are able to truly transform themselves. It's something I didn't have the right attitude towards in the beginning; I always thought those who hit the ground running immediately are the best. But different people have different styles of showing their true talents.

    3. HS

      What's the biggest misconception in AI today, do you think?

    4. AS

      Short-term thinking. Any time somebody comes up with an update, everyone's like, "The other company is done," or, "This is over." But that's the usual Twitter mob. The biggest misconception, even among the more well-informed people, is that because the majority of people in the world are not using chatbots, they think this is a bubble. They're gonna be really surprised that it's not a bubble. It's not over-hyped; it's actually under-hyped. These things, when put into the right workflows and form factors that you're already familiar with, will have a lot of impact. The chat UI is a new UI; we are not used to it. We're all used to WhatsApp and Signal and all that, but that's different; it's not exactly a chat, it's more like a texting service. Word, Docs, Gmail, Google Search, on the other hand... I'm not even talking about the specific products, but the form factors, the UIs: you're very familiar with them, and when AI is presented to you in that sort of format, where it feels obvious and natural as a workflow, it'll have a tremendous amount of impact. And that hasn't really happened yet.

    5. HS

      Have you seen WhatsApp's integration?

    6. AS

      I have. It's not the right way to do it.

    7. HS

      Why?

    8. AS

      I'm not going to WhatsApp to search for anything. I'm going to WhatsApp to text people or reply to them. Most of the time my WhatsApp just has 20 or 30 notifications, and by the time I'm done with them, I just wanna get away from the app. I'm not pressing the WhatsApp icon to search for something. Same thing with Instagram: I'm just going there for pretty pictures, not to search for who won the NBA. The user intent behind opening the app matters a lot. This is the same reason they failed multiple times at doing Stories and Reels. Stories started off as a way to copy Snapchat, as a separate app first; that didn't work. Then they tried so many different variants. What really ended up working is the bubbles at the top, and that only works because you're starting with the existing user flow: you're already going there to check out other people. So you have to really think not just about why a feature is being added, but about what the existing user intent in your app is, and how you can make sure the new feature ties into that existing intent. That's very important.

    9. HS

      What's your vision for the future of browsers?

    10. AS

      I think you can reimagine the browser when agents start working. There's a reason we never did a browser. I don't think the browser is gonna be disrupted just because you get answers instead of links. People still wanna browse, get to a new website, get to a specific website, enter details, fill out forms, all those kinds of things. That's not really getting disrupted by the traditional chat UI. Just because you can type into the search bar on Perplexity, say, doesn't mean you use the browser any less. It's gonna be more productive, but you need the traditional browser functionality a lot. What will change, though, is that you go to a browser and just say, "Start the podcast with Aravind," and it already knows it has to go to Riverside, fill in your logins, and get you there exactly, and after that it's just done. That would be amazing. That would change everything. Or, "Buy me this thing on Amazon." Then you can go a step further and ask: what is the future of the OS? What's the future of Mac? What's the future of Windows? Because a browser is just an OS too, right?

    11. HS

      What do you think is the future of OS, then?

    12. AS

      I mean, something like the movie Her can work. I'm not talking about just the voice, but the OS itself being AI, a completely AI-native OS. It's not organized in a traditional way; you just talk to it and it works for you. I think that's an amazing vision to have, and it's the sort of thing that doesn't work today. GPT-4 cannot do it yet.

    13. HS

      What's the hardest element of your role that people don't think about or consider, do you think?

    14. AS

      I think it's just dealing with contradictions all the time. I believe the brain is not very good at dealing with contradictions; it actually tires us out when we can't arrive at a convergence point on something. And being a startup CEO is all about contradictions. Should you take a risk, or should you double down on what you have? Should you move faster, or should you set up the company in a way that it can scale? Is it time to try out a feature just because your competitors wouldn't do it, or do you continue doing what you're doing well even though your competitors are doing the same thing? You have to constantly deal with these contradictions across so many dimensions, and that's tiring.

    15. HS

      Penultimate one. As investors, when we make an investment we write pre-mortems: the reason why a company doesn't work. If you were to write a pre-mortem on Perplexity today, what is the reason you don't achieve your goals? Access to compute, Google innovating and killing you... what is that reason?

    16. AS

      We didn't execute well. There's a saying: competitors don't kill startups, startups kill themselves. It's not that Google Drive killed Dropbox. People cite that as an example, but there was a great enterprise business to build at Dropbox, and they didn't move fast enough compared to other companies like Box. So if there were a pre-mortem to be written about us, it would be: the CEO not being decisive, the company not executing well, lack of focus, inefficient use of capital. It largely comes down to the decisions I've made, their correctness, their speed, and their execution, and whether we stay focused. If those things are not true on a consistent basis, yeah, I think we would die, and that would be the pre-mortem.

    17. HS

      Final one for you. It's 2034. Where would you most like Perplexity to be then? If we do a show then-

    18. AS

      (laughs)

    19. HS

      ... where is the business then?

    20. AS

      I think I would just want it to be the assistant for facts and knowledge you just cannot live without.

    21. HS

      (laughs)

    22. AS

      And you can ask me: 10 years later, do people even want facts? There is this thing where you always have to ask, "What is going to be true even 10 years from now?" And if you work on that, you're working on the right thing. I feel like even in a world with a lot of AI agency and less human agency, people would still want to know what's true and what's not true. And that's what we're working on. So if we are the go-to assistant for facts and accurate information and knowledge, I think we'll be fine even 10 years from now.

    23. HS

      Aravind, listen, I've loved doing this. Thank you so much for putting up with my straying questions. You've been a fantastic guest, and I so appreciate the time.

    24. AS

      Thank you, Harry. That was great.

Episode duration: 1:03:12


Transcript of episode 4jPg4Se9h5g
