The Ultimate AI Roundtable: What Happens Now in AI, Why Google are Vulnerable | E1085

The Twenty Minute VC · Nov 24, 2023 · 33m

Harry Stebbings (host), Richard Socher (guest), Emad Mostaque (guest), Des Traynor (guest), Jeff Seibert (guest), Yann LeCun (guest), Cristóbal "Chris" Valenzuela (guest), Duy (Contextual founder) (guest), Christian Lanng (guest), Tom Tunguz (guest)

- Commoditization and structure of the foundational model (LLM) layer
- Model size, data quality, and defensibility as moats
- Open-source versus closed AI ecosystems and their implications
- Where value accrues in the AI stack: infrastructure vs. applications
- Evolving pricing and business models: seats, consumption, and selling outcomes
- The role and limits of co-pilots versus full autonomous agents/assistants
- Strategic positioning and vulnerabilities of Apple, Google, and Amazon in AI, plus societal impact

In this episode of The Twenty Minute VC, host Harry Stebbings presents "The Ultimate AI Roundtable: What Happens Now in AI, Why Google are Vulnerable", a curated panel in which leading founders, researchers, and investors debate where AI models, moats, business models, and Big Tech incumbents are headed.

AI Titans Debate Models, Moats, Business Models, And Big Tech Futures

This curated “ultimate AI roundtable” stitches together contrarian views from leading founders, researchers, and investors on how the AI stack, business models, and major tech incumbents will evolve.

Participants debate whether foundational models will commoditize, the importance of model size and data, and the long‑term dominance of open versus closed approaches.

They explore where value will accrue (infrastructure vs. applications), how pricing will shift from seats to outcomes, and whether co-pilots are a true innovation or merely an incumbent defense.

Finally, they assess Apple, Google, and Amazon’s AI positioning and discuss AI’s societal impact, jobs, and the need for political—not technological—answers to wealth distribution.

Key Takeaways

Foundational models are likely to commoditize, but quality still varies today.

Several guests predict only a handful of large model providers but expect open-source alternatives to narrow the gap, even as current testing shows notable quality differences between vendors like OpenAI and others.

Model size is less important than efficiency and data quality for long-term advantage.

While some argue bigger models and more data still correlate with performance, others highlight rapid progress in smaller, more efficient models and emphasize high-quality, domain-specific fine-tuning data as the true moat.

Open-source AI will be critical infrastructure and a powerful competitive force.

Open ecosystems recruit global talent, enable experimentation, and satisfy researchers' need for inspectable models; over time, open models are expected to narrow the gap with closed providers.

Most investable upside will likely be in applications, not infrastructure concentration.

Historical cloud patterns suggest infrastructure value concentrates in a few giants, while many diverse application companies capture comparable aggregate market cap—meaning more shots on goal at the app layer for founders and investors.

AI business models are shifting from seats and uptime to work and outcomes.

Instead of charging per user or for pure consumption, future AI applications may price against completed work and guaranteed SLAs on business results.

Co-pilots are a natural incumbent strategy but a limited startup wedge.

Embedding assistive AI into existing products fits big players’ UX, data, and seat-based revenue models; true disruption likely comes from products that remove legacy apps entirely and act as primary “pilots,” not helpers inside old workflows.

Incumbent tech giants face divergent AI futures: Apple advantaged, Google exposed, Amazon opportunistic.

Apple is well-positioned to run strong on-device, privacy-preserving assistants; Google’s ad-driven search monopoly makes it existentially vulnerable to chat-first interfaces; Amazon can leverage its engineering culture and cloud to move fast or acquire leadership.

Notable Quotes

Models are not a moat. Models eventually don't matter. What matters most is the people building those models and how fast can you change and learn from those models.

Cristóbal (Chris) Valenzuela, Runway

It's very simple. It's because no outfit, as powerful as they may be, has a monopoly on good ideas.

Yann LeCun, Meta

Who wants a co-pilot? I want to be a pilot. The better solution is to remove the application that's shit and just talk straight with AI.

Christian Lanng, Beyond Work (formerly Tradeshift)

No economist believes we're going to run out of jobs because no economist believes we're going to run out of problems to solve.

Yann LeCun, Meta

They need to have that sort of a Jay‑Z like, 'Allow me to reintroduce myself' moment, where they come back and they say like, 'Google 2.0 is here.'

Des Traynor, Intercom

Questions Answered in This Episode

If open-source models reach GPT-4-level capabilities, what distinct advantages—if any—will closed providers realistically retain?

How can startups practically build defensible moats on top of commoditized models without owning the foundational layer?

What governance or vetting mechanisms would be required for a Wikipedia-like, crowd-maintained knowledge base to safely power universal AI assistants?

How should regulators distinguish between regulating AI research versus regulating AI products, and where is the line between them?

For a company built on a legacy seat-based model, what concrete steps are needed to transition to outcome-based pricing without destroying near-term revenue?

Transcript Preview

Harry Stebbings

Welcome to 20VC with me, Harry Stebbings. And for this Thanksgiving special, I wanted to bring some of the best minds in AI together for an amazing panel. The only thing is, this panel never really happened. What you're about to hear is the leading minds in AI, from the head of AI at Meta, the founder of Intercom, Stability, Runway, leading AI investor Tom Tunguz, all debate some of the core questions in AI. I pulled together some of those best moments and the most contrarian elements from their different episodes. Let me know what you think of this different style and format. I think it's really special and cool. You can do that on Twitter @HarryStebbings.

Richard Socher

You have now arrived at your destination.

Harry Stebbings

So I want to start at the very core, the foundational model layer. And I want to ask, how do we see this layer in terms of the foundational model providers playing out? And will we see the commoditization of LLMs? Emad, I know that you have some strong opinions here. So Emad at Stability, handing the mic over to you.

Emad Mostaque

I think that there's only going to be five or six foundation model companies in the world in three years, five years. I think it's going to be us, NVIDIA, Google, Microsoft, OpenAI, and, uh, Meta and Apple probably are the ones that train these models.

Harry Stebbings

What about you, Des? You're the founder of Intercom. Do you think we'll see the commoditization of LLMs?

Des Traynor

I, I don't know if it's actually happened yet, if I'm clear, right? Like, we actually torture test all of the LLMs. It's not yet the case that they're all equal. We-

Harry Stebbings

And when you compared OpenAI to the alternative providers-

Des Traynor

Yeah.

Harry Stebbings

... what did the test show you?

Des Traynor

It's basically the quality of conversation, and, like, does it fail any of our, like, hallucination tests? Does it fail any of our trustworthiness tests? Can it infer its own confidence?

Harry Stebbings

Was it close though? Was there a wide-

Des Traynor

Yeah, uh, it... Close and narrowing and, and, like, it's also a work in progress. All of these things are moving targets, right? So, like, we haven't even gotten around to maybe testing the latest and greatest of all the providers, which are increasing in number. You mentioned Mistral, there's also Llama, there's Anthropic, there's, like, Cohere, like, there's, like, a whole chunk of them. And it's a bit of work to go around and constantly be trying to find out, has anyone got... We only really care about better, right? Right now, we're not in cost optimization mode, we're just like, "Who's got the best?"

Harry Stebbings

Jeff Seibert, you're the founder of Digits. So you sit on top of these foundational LLMs and then fine-tune them with your own data. What do you think in terms of this commoditization of the foundational model layer?

Jeff Seibert

I certainly think we will. And this may not be a popular position. Obviously, OpenAI is charging ahead, sort of leading the way right now. I think the market forces at work mean there's just immense energy to have an open source equivalent. Meta appears to be highly motivated to open source its work. Many folks want to run these themselves and tune them themselves and so on. That is hard and expensive today, but I can't think of another thing in time in history where something hard and expensive in tech has lasted all that long. It's going to be commoditized.
