
Aravind Srinivas: Will Foundation Models Commoditise & Diminishing Returns in Model Performance | E1161
Aravind Srinivas (guest), Harry Stebbings (host), Narrator
In this episode of The Twenty Minute VC, host Harry Stebbings interviews Aravind Srinivas about foundation model commoditisation and diminishing returns in model performance.
Aravind Srinivas on AI Reasoning, Model Commoditization, and Perplexity’s Bet
Aravind Srinivas, CEO of Perplexity, discusses the future of foundation models, arguing that while mid-tier models will commoditize, frontier models and the teams behind them will remain scarce and highly valuable.
He believes current models are around median high-school reasoning and that a true breakthrough will come from “bootstrap reasoning” systems that iteratively generate, critique, and improve their own outputs.
Perplexity is positioning itself as an application-layer company focused on post-training existing models, building superior search/browsing UX, and ultimately monetizing through high-margin, relevance-driven advertising plus subscriptions and enterprise.
Srinivas emphasizes that the real competitive advantage lies in orchestration (data, models, UX, distribution) and in the talent “machine that builds the machine,” not just in owning raw models or compute.
Key Takeaways
Scaling alone is no longer enough; finely curated, well-mixed data is critical.
Srinivas notes many labs have spent heavily training huge models on massive datasets and ended up with weak systems; the real gains now come from careful data selection, mixing domains (languages, code, math, reasoning traces), and tuning countless small details.
Vertical domain models are overrated compared to strong general-purpose models.
Using BloombergGPT as an example, he argues that specialized finance models can still be decisively beaten by a top general model like GPT‑4, because the emergent “abstract IQ” arises from extremely diverse training rather than narrow domain data.
Next-wave breakthroughs will require “bootstrap reasoning,” not just better next-token prediction.
Future systems will generate an answer, explain their reasoning, get feedback, revise, and iterate—training on both outputs and rationales. ...
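The generate/critique/revise loop described above can be sketched in code. The functions below are hypothetical stand-ins, not anything Perplexity has published: a toy "model" refines a guess for the square root of 2 using its own error feedback, keeping a trace of each attempt and its critique, mirroring the idea of training on both outputs and rationales.

```python
# Illustrative sketch of a "bootstrap reasoning" loop: generate an output,
# critique it, revise, and iterate. All names here are hypothetical; the toy
# "model" refines a guess for sqrt(2) using its own error signal as feedback.

def generate(prev_guess):
    """Produce a revised candidate answer from the previous attempt
    (here a Newton-style refinement step stands in for a model)."""
    return (prev_guess + 2.0 / prev_guess) / 2.0

def critique(guess):
    """Score a candidate: how far is guess**2 from the target value 2?"""
    return abs(guess * guess - 2.0)

def bootstrap_reason(initial_guess, tolerance=1e-9, max_rounds=50):
    """Iterate generate -> critique -> revise, keeping a rationale trace."""
    guess = initial_guess
    trace = []  # record each attempt and its feedback, as the episode suggests
    for _ in range(max_rounds):
        error = critique(guess)
        trace.append((guess, error))
        if error < tolerance:  # feedback says the answer is good enough
            break
        guess = generate(guess)  # revise using the previous attempt
    return guess, trace

answer, trace = bootstrap_reason(1.0)
```

The point of the sketch is the control flow, not the arithmetic: in a real system, `generate` and `critique` would both be model calls (or a model plus external feedback from the world), and the accumulated `trace` of outputs and rationales would itself become training data.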
Memory is improving in practice (long context), but “infinite personal memory” remains unsolved.
Token windows of hundreds of thousands or millions already enable practical long-context use, but models still struggle to maintain instruction-following quality amidst huge inputs, and there’s no good algorithm yet for truly lifelong, person-specific memory.
Foundation models will commoditize at the mid-tier, but frontier capability—and talent—will not.
He believes GPT‑3. ...
Application-layer companies stand to benefit most from commoditized models.
As model prices fall, companies like Perplexity that control the user relationship, UX, and orchestration can buy commoditized capability cheap and sell high-value experiences (search, assistants, workflows) at a premium.
Advertising, done with strict relevance and integrity, is likely the dominant AI monetization engine.
Srinivas expects Perplexity’s long-term core revenue to come from highly relevant, non-corrupting ads, supplemented by subscriptions and enterprise; he wants to avoid Google’s over-optimization for ad load while still leveraging ads’ superior margins.
Notable Quotes
“Today’s models are just giving you the output. Tomorrow’s models will start with an output, reason, elicit feedback from the world, go back, improve the reasoning.”
— Aravind Srinivas
“These neural nets are amazing: if you just throw very diverse data at them, they pattern match on the abstract skill required to be good at all of them at once.”
— Aravind Srinivas
“The commodity is not in the model; the commodity is in the people who produce the models, and that’s not a commodity yet.”
— Aravind Srinivas
“The biggest beneficiaries of the commoditization of foundation models are the application layer companies.”
— Aravind Srinivas
“Competitors do not kill startups. Startups kill themselves.”
— Aravind Srinivas
Questions Answered in This Episode
If bootstrap reasoning turns out to be much harder than expected, what alternative paths could lead to significantly better AI reasoning?
How can smaller labs or startups meaningfully contribute to frontier AI progress when inference and training costs for reasoning research are so high?
What governance or product design safeguards are needed to ensure ad-driven AI assistants don’t repeat Google’s misalignment between user and shareholder incentives?
In a world where models hold rich, lifelong personal memory, what technical and ethical frameworks would be required to manage privacy, control, and consent?
If the OS itself becomes an AI agent, what parts of the current application and browser ecosystem disappear, and which new kinds of companies emerge?
Transcript Preview
Today's models are just giving you the output. Tomorrow's models will start with an output, reason, elicit feedback from the world, go back, improve the reasoning. That is the beginning of real reasoning era. The biggest beneficiaries of the commoditization of foundation models are the application layer companies.
Ready to go? (upbeat music) Aravind, I'm so excited for this. I've been looking forward to this one. So first off, thank you so much for joining me today.
Thank you for having me on, Harry. I've watched all of your sh- uh, episodes, so looking forward to it.
That is very, very kind of you, my friend. Listen, I want to start with a little bit on you. How did you first fall in love with AI and realize that actually this was what you wanted to do and spend the majority of your career on?
It was lit- lit- na- more like an accident. Um, I was just yet another electrical engineering or computer science undergrad doing my courses and doing some interesting projects alongside. There was a point when one of my, um, friends in undergrad told me, "Hey, there's this, uh, contest, um, where you could win, win some prize if, if you came first." And I think I was like, you know, kind of like in need of money, because I wasn't sure, uh, I was gonna get an internship, so I, I tried, tried the contest out. And, um, it was a machine learning contest, but I didn't even know what machine learning was.
(laughs)
Um, all I knew from that guy was that, hey, you're gonna be given some data, and you can, uh, use some of the patterns in the data and use it to make predictions on, you know, held-out data that you don't have access to. A server will have it. You submit your algorithm, and it'll score against what is correct and what you predict, and whoever wins the most number of correct predictions wins the contest and you get the prize. That, that's the extent to which I, I was told. And I go and, um, check out this library called scikit-learn. It's a very popular machine learning library. And I have literally no idea what any of these words mean, like decision trees and random forests. Like, none of these things made any sense to me. Um, and so like, I, I just like, literally just did what an AI would do, brute force, random search, but as a human. Did all that, and, uh, we won the contest. I won the contest, and then that gave me a lot of confidence. Okay, uh, I beat people who actually knew machine learning in it, and that gave me a lot of confidence that like, this is something I could be pretty good at. I remember, uh, Sam Altman once telling me... I asked him this question like, uh, two, three years ago. "Hey, like, how do you identify as something where you're, you're naturally good at?" And he said, "Whatever comes easy to you, but seems hard to other people."