Sam Altman, Arthur Mensch and More Discuss: Which Startups Are Threatened vs Enabled by OpenAI? | E1156

The Twenty Minute VC · May 24, 2024 · 18m

Sam Altman (guest), Harry Stebbings (host), Arthur Mensch (guest), Tom Hulme (guest), Des Traynor (guest), Tomasz Tunguz (guest), Emad Mostaque (guest), Brad Lightcap (guest), Sarah Tavel (guest), Tom Blomfield (guest)

Commoditization and end-state of foundation models
Investment attractiveness of model providers vs the application layer
Cloud providers and big tech as future AI utilities
What makes AI startups defensible vs 'thin wrappers'
Vertical, domain-specific AI applications and workflow integration
Copilot strategy: when it favors incumbents vs startups
New AI-enabled business and pricing models (selling outcomes vs seats)

In this episode of The Twenty Minute VC, host Harry Stebbings asks Sam Altman, Arthur Mensch, and a panel of investors and operators which startups are threatened versus enabled by OpenAI.

AI Models Commoditize While Application Layer Becomes True Value Engine

The discussion centers on whether enduring value in generative AI will accrue to foundation model providers or to the application layer built on top. Participants argue that base models are rapidly commoditizing due to intense competition, massive capital requirements, and open-source pressure, pushing model providers toward utility-like economics. Most investors and operators on the panel believe the greatest long-term value will be captured by applications that deeply own user relationships, workflows, and domain-specific integration rather than thin wrappers on models. They highlight strategies for startups to avoid being steamrolled by model providers and incumbents, including thick vertical solutions, differentiated UX, new pricing models, and avoiding generic copilot plays that favor large incumbents.

Key Takeaways

Foundation models are rapidly commoditizing and will likely resemble utilities.

With multiple players training similar models on the same hardware and Meta open-sourcing strong models, differentiation on the base model alone is thin and the economics increasingly look like low-margin infrastructure or cloud utilities.

Capital intensity and fast depreciation make pure model bets hard to underwrite.

Building frontier models is like constructing a power station that depreciates in months; huge compute and training costs, combined with short-lived technical advantage, make fundamentals-driven investment in new model companies risky.

Long-term value will shift to personalized, context-rich applications.

Sam Altman and others argue that enduring differentiation will come from models and products deeply personalized to individuals, integrated into their data and workflows, rather than from generic intelligence capabilities.

Startups must build ‘thick wrappers’ that solve end-to-end problems in specific domains.

Founders are warned not to rely on filling temporary gaps in model providers’ features; instead they should fully own a vertical use case, including integrations, workflows, regulatory understanding, and UX that model providers won’t go deep on.

AI applications will likely capture more aggregate value than models, spread across many companies.

Historical analysis of the cloud era shows similar total market cap at the infrastructure and application layers, but concentrated in a few infrastructure players versus spread across many application companies; by analogy, investors see a higher probability of success in diverse AI application niches.

The copilot pattern largely advantages incumbents with existing users and data.

Inline-assist copilot features plug naturally into incumbent products (e.g. ...

AI enables outcome-based pricing models that can disrupt traditional SaaS per-seat pricing.

Instead of selling software seats, AI-native products can charge for completed work or outcomes, making them look like scalable services and allowing them to undercut incumbents tied to headcount-based pricing.

Notable Quotes

There will be a small number of providers… doing models at big scale, and it'll be extremely complex, extremely expensive.

Sam Altman

The long-term differentiation will be the model that's most personalized to you that has your whole life context.

Sam Altman

The technology is commoditizing incredibly quickly, which worries me a lot.

Tom Hulme (GV)

When we just do our fundamental job, we're gonna steamroll you.

Sam Altman

You're never gonna make money filling in any gaps in the platform… There's a train coming. It's gonna hit you at some stage.

Des Traynor (Intercom)

Questions Answered in This Episode

If foundation models become utilities, what durable advantages can any single model company realistically build beyond scale and distribution?

How can an early-stage AI startup rigorously test whether it is a ‘thin wrapper’ or a genuinely defensible, end-to-end solution?

In which specific industries are incumbents least likely to build deep AI integrations themselves, leaving room for vertical startups?

What are practical ways for AI products to implement and price ‘outcome-based’ or ‘work-done’ models without creating unsustainable service obligations?

How might breakthroughs in memory or agentic capabilities change the current belief that models are fundamentally commoditizing infrastructure?

Transcript Preview

Sam Altman

There was a time when there were, like, more than a hundred car companies in the US, I believe, or at least close to that. And if you go, like, look at some of the old media at the time, it was like, "No, there's this better car. No, there's this better one. No, there's this better one." I think that same thing holds true for most new industries. I think it's fine. I mean, I think it's probably good. But I don't think that's where the enduring value will be. I think eventually it will shake out. There will be a small number of providers, just a dozen, something like that, doing models at big scale and it'll be extremely complex, extremely expensive. And I hope we all continue to push each other to make the models better, cheaper, faster and commoditize in that sense. And the long-term differentiation will not be, I don't think, the base model. Like, that's just, you know, intelligence is just like some emergent property of matter or something. The long-term differentiation will be the model that's most personalized to you that has your whole life context, that plugs into everything else you want to do, that's, like, well integrated into your life. But for now, the curve is just so steep that the right thing for us to focus on is just make that base model better and better.

Harry Stebbings

Arthur at Mistral. I'm so intrigued to hear your thoughts. How do you think about the focus on just model improvement, where value accrues, and ultimately whether the foundation models commoditize in themselves?

Arthur Mensch

There are two opposing directions. The first is that the models are getting better and better. That means creating a verticalized application, as long as you have the data for it and a good understanding of the use case you're facing, is going to be easier and easier if you have access to the tools that facilitate it. So that's the first aspect, which would make me think that the application layer is going to grow thinner and thinner. But then there's also the fact that the models are getting cheaper and cheaper, because we manage to compress them and because we make a lot of improvements to their efficiency. The competitive pressure on the model layer means that the price around the model, the dollar per intelligence unit, let's say, is definitely going to reduce. So there are these two aspects, growing ability and compressed price, one of which says that the application layer is going to grow thin, and the other of which says that the model part is going to grow thin. So for us, the approach that we are taking is that the model part is still going to be big enough, and that we need to build this platform on top of it, because that's where we are going to enable all of the vertical applications that will be interesting for humanity.
