The Twenty Minute VC

Aravind Srinivas: Will Foundation Models Commoditise & Diminishing Returns in Model Performance | E1161

Aravind Srinivas is the Co-Founder and CEO @ Perplexity, where he is on a mission to build the world’s most knowledge-centric company. Recent reports have suggested the company is raising $250M at a $3BN valuation, and the company’s cap table currently includes all-stars such as Jeff Bezos, Elad Gil, Nat Friedman, Tobi Lutke, Yann LeCun, Naval Ravikant, Paul Buchheit and Andrej Karpathy. Prior to founding Perplexity, Aravind was a Research Scientist @ OpenAI and, before that, a research intern at both Google and DeepMind.

-----------------------------------------------

Timestamps:
(00:00) Intro
(00:46) AI Passion Journey
(05:35) Addressing Diminishing Returns in AI Model Performance
(08:16) The Future of AI: Specialized Models & Data Curation
(11:28) Advancing AI Reasoning Quality
(18:21) The Challenge of AI Memory
(20:37) The Future of Foundation Models in AI
(27:39) AI Models & Cloud Provider Acquisitions
(31:31) Navigating Capital Competition in the AI Industry
(40:30) Timing the Expansion into Enterprise Division
(47:47) Fundraising Process
(51:03) Predicting Perplexity's Dominant Monetization Engine
(54:35) Quick-Fire

-----------------------------------------------

In Today’s Episode with Aravind Srinivas:

1. Are We Reaching a Stage of Diminishing Returns with Models: Have we reached a stage where more compute will not result in a proportional improvement in model performance? What are the most exciting new modalities we will see breakthroughs in? Is voice the radical new modality that everyone thinks it is and OpenAI demoed? What will it take, and when will we have true reasoning in models?

2. Are Foundation Models Commoditising: What is the end state for the foundation model layer? Will we see the specialisation of models? Will different models be used for different things? Is there room for a new foundation model to be born? Is it VC-backable? Why does Aravind believe that two players will win this layer? What happens to the rest? What is needed to win in this layer?

3. How to Survive in a World of Incumbents: Funding the Machine: How can any of the current players compete in a world where Microsoft has $300M in free cash flow per day? How much money does one need to build a foundation model today? Are the barriers lowering? Why does Aravind argue that talent is not simply a game of cash?

4. From Burning Money to Printing It: What does Aravind believe are the four core monetisation methods for Perplexity? Why does Aravind think that advertising will be their largest? Why does Aravind think that consumer subscription is not a very good business for them? Is Aravind concerned about having to build an enterprise go-to-market? What will it take to have a super-successful API printing-money business?

-----------------------------------------------

Subscribe on Spotify: https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465
Follow Harry Stebbings on Twitter: https://twitter.com/HarryStebbings
Follow Aravind Srinivas on Twitter: https://twitter.com/AravSrinivas
Follow 20VC on Instagram: https://www.instagram.com/20vchq
Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok
Visit our Website: https://www.20vc.com
Subscribe to our Newsletter: https://www.thetwentyminutevc.com/contact

-----------------------------------------------

#20vc #harrystebbings #aravindsrinivas #perplexity #ai #founder #ceo #venturecapital #startup #openai #chatgpt #google #whatsapp #deepmind

Aravind Srinivas (guest) · Harry Stebbings (host)
Jun 4, 2024 · 1h 3m

At a glance

WHAT IT’S REALLY ABOUT

Aravind Srinivas on AI Reasoning, Model Commoditization, and Perplexity’s Bet

  1. Aravind Srinivas, CEO of Perplexity, discusses the future of foundation models, arguing that while mid-tier models will commoditize, frontier models and the teams behind them will remain scarce and highly valuable.
  2. He believes current models reason at roughly the level of a median high-school student, and that a true breakthrough will come from “bootstrap reasoning” systems that iteratively generate, critique, and improve their own outputs.
  3. Perplexity is positioning itself as an application-layer company focused on post-training existing models, building superior search/browsing UX, and ultimately monetizing through high-margin, relevance-driven advertising plus subscriptions and enterprise.
  4. Srinivas emphasizes that the real competitive advantage lies in orchestration (data, models, UX, distribution) and in the talent “machine that builds the machine,” not just in owning raw models or compute.

IDEAS WORTH REMEMBERING

5 ideas

Scaling alone is no longer enough; finely curated, well-mixed data is critical.

Srinivas notes many labs have spent heavily training huge models on massive datasets and ended up with weak systems; the real gains now come from careful data selection, mixing domains (languages, code, math, reasoning traces), and tuning countless small details.

Vertical domain models are overrated compared to strong general-purpose models.

Using BloombergGPT as an example, he argues that specialized finance models can still be decisively beaten by a top general model like GPT‑4, because the emergent “abstract IQ” arises from extremely diverse training rather than narrow domain data.

Next-wave breakthroughs will require “bootstrap reasoning,” not just better next-token prediction.

Future systems will generate an answer, explain their reasoning, get feedback, revise, and iterate—training on both outputs and rationales. That self-improvement loop could dramatically upgrade reasoning but will be expensive and likely limited to a few well-capitalized labs.
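The generate–critique–revise loop described above can be sketched in code. This is a toy illustration only: the `generate` and `critique` functions below are placeholder stubs standing in for a language model and a critic, not anything Perplexity or Srinivas has described implementing.

```python
# Toy sketch of a "bootstrap reasoning" loop: produce an answer with a
# rationale, critique it, feed the critique back, and collect the improving
# (answer, rationale, score) traces as candidate training data.
# `generate` and `critique` are hypothetical stubs, not real model calls.

def generate(question: str, hint: str = "") -> tuple[str, str]:
    """Stub 'model': returns (answer, rationale). A real system would call an LLM."""
    answer = f"answer({question})"
    rationale = f"reasoning about {question}, using {hint or 'no feedback'}"
    return answer, rationale

def critique(answer: str, rationale: str) -> tuple[float, str]:
    """Stub critic: scores a rationale and emits feedback. Here it simply
    rewards rationales that incorporated the previous round's feedback."""
    score = 1.0 if "revised" in rationale else 0.5
    return score, "revised"

def bootstrap_reasoning(question: str, iterations: int = 3) -> list[tuple[str, str, float]]:
    """Iterate generate -> critique -> revise, keeping each trace."""
    traces, hint = [], ""
    for _ in range(iterations):
        answer, rationale = generate(question, hint)
        score, feedback = critique(answer, rationale)
        traces.append((answer, rationale, score))
        hint = feedback  # the critique becomes input to the next attempt
    return traces

traces = bootstrap_reasoning("2+2")
```

In this stub, the score rises after the first round because later rationales incorporate feedback; in a real system, the expensive parts would be the model calls and the retraining on the improved traces, which is why Srinivas expects only a few well-capitalized labs to afford it.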

Memory is improving in practice (long context), but “infinite personal memory” remains unsolved.

Token windows of hundreds of thousands or millions already enable practical long-context use, but models still struggle to maintain instruction-following quality amidst huge inputs, and there’s no good algorithm yet for truly lifelong, person-specific memory.

Foundation models will commoditize at the mid-tier, but frontier capability—and talent—will not.

He believes GPT‑3.5-level systems are already commoditized, and GPT‑4-class will follow, but the real asset isn’t the current model snapshot; it’s the tightly knit teams and tacit know‑how needed to repeatedly produce the next frontier model.

WORDS WORTH SAVING

5 quotes

Today’s models are just giving you the output. Tomorrow’s models will start with an output, reason, elicit feedback from the world, go back, improve the reasoning.

Aravind Srinivas

These neural nets are amazing: if you just throw very diverse data at them, they pattern match on the abstract skill required to be good at all of them at once.

Aravind Srinivas

The commodity is not in the model; the commodity is in the people who produce the models, and that’s not a commodity yet.

Aravind Srinivas

The biggest beneficiaries of the commoditization of foundation models are the application layer companies.

Aravind Srinivas

Competitors do not kill startups. Startups kill themselves.

Aravind Srinivas

Diminishing returns, scaling laws, and the importance of data curation in model training
General-purpose vs. verticalized models and where reasoning capabilities really come from
The current limits of AI reasoning, memory, and long-context capabilities
Commoditization of foundation models and the enduring value of frontier labs
Perplexity’s strategy: post-training, search UX, and application-layer differentiation
Business models for AI: subscriptions, enterprise, and high-margin advertising
Competition, capital, and the long-term role of OpenAI, Anthropic, and big cloud providers

High quality AI-generated summary created from speaker-labeled transcript.
