
The Truth About The AI Bubble
Michael Seibel (host), Harj Taggar (host), Garry Tan (host), Diana Hu (host), Jared Friedman (host), Narrator
In this episode of Y Combinator's The Light Cone, the hosts, including Michael Seibel and Harj Taggar, discuss the truth about the AI bubble: bubble fears, the model wars, and the stabilizing AI startup ecosystem.
YC Explores AI Bubble Fears, Model Wars, And Startup Stability
The hosts reflect on how the AI ecosystem in 2025–26 has stabilized into clear layers: model providers, infrastructure companies, and application startups, all positioned to capture value. They discuss a sharp shift in YC founders’ preferred LLMs, with Anthropic now edging out OpenAI and Gemini rapidly gaining share, while many teams adopt multi-model orchestration strategies. The conversation tackles whether AI is a bubble, arguing that even if infrastructure is overbuilt, it will mirror the telecom era and benefit future application-layer startups. They also examine hiring, productivity, domain-specific models, and the emergence of smaller, highly profitable teams rather than the fantasy of one-person trillion‑dollar companies—at least for now.
Key Takeaways
Anthropic has overtaken OpenAI as YC founders’ top API choice.
After hovering at 20–25% share, Anthropic now slightly surpasses OpenAI among Winter ’26 applicants, largely due to strong performance on coding tools and agents and its deliberate internal focus on coding evals.
Gemini is rapidly gaining traction, especially for grounded, real-time information.
Founders report replacing many Google searches with Gemini thanks to its integration with Google’s index and strong reasoning, even preferring it over Perplexity for up‑to‑date, accurate answers.
Serious startups are moving to a multi-model, orchestrated architecture.
Rather than being loyal to a single lab, companies are abstracting away model choice, swapping in best-in-class models per task (e.g., …)
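The orchestration pattern described above can be sketched minimally: route each task type to a preferred model behind a single interface, so swapping providers becomes a configuration change rather than a rewrite. This is an illustrative sketch, not code from the episode; the model names and the `ModelRouter` class are hypothetical placeholders.

```python
# Hypothetical sketch of a multi-model orchestration layer: each task
# type maps to a preferred model, with a fallback for anything else.
# Model names are illustrative, not real API identifiers.

from dataclasses import dataclass, field


@dataclass
class ModelRouter:
    # Task type -> preferred model (assumed names for illustration).
    routes: dict = field(default_factory=lambda: {
        "coding": "claude-sonnet",
        "search": "gemini-flash",
        "chat": "gpt-default",
    })
    fallback: str = "gpt-default"

    def pick(self, task: str) -> str:
        """Return the model configured for this task, else the fallback."""
        return self.routes.get(task, self.fallback)


router = ModelRouter()
print(router.pick("coding"))   # coding tasks route to the Claude-style model
print(router.pick("unknown"))  # unrecognized tasks fall back to the default
```

Because all call sites go through `pick`, re-pointing a task at a newly released best-in-class model touches only the routing table, which is the flexibility the hosts describe.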
The ‘AI bubble’ in infrastructure likely benefits application-layer founders.
Drawing on telecom and Carlota Perez’s installation/deployment framework, the hosts argue heavy GPU and data-center overinvestment will create cheap, abundant capacity that future startups can exploit, much like YouTube after the bandwidth boom.
Power and land constraints are pushing innovation into space-based data centers and fusion.
With terrestrial power and siting bottlenecks, YC-backed efforts span data centers in space, terrestrial fusion, and even space-based fusion concepts, indicating how energy scarcity is shaping the AI infrastructure race.
Domain-specific smaller models can outperform general LLMs in narrow verticals.
Startups report building 8B-parameter models fine‑tuned with RL that beat frontier models on healthcare benchmarks, highlighting the value of specialized data and post-training—even though labs’ next releases can erode these gains.
AI boosts productivity but hasn’t eliminated the need for sizable teams.
Early stories of founders reaching $1M ARR with minimal staff didn’t extend to $10M+; post‑Series A companies still hire heavily because user expectations rise and execution remains constrained by human talent, though some (e.g., …)
Notable Quotes
“For the longest time, OpenAI was the clear winner… and shockingly, in this batch, the number one API is actually Anthropic.”
— Diana
“I switched to Gemini this year as my just go-to model… I replaced my Google searches with Gemini.”
— Harj
“We have the age of intelligence, the rocks can talk, they can think and they can do work, and you just have to zap them more.”
— Jared
“It kind of doesn’t really matter that much… maybe NVIDIA’s stock will go down next year, but that doesn’t actually mean that it’s a bad time to be working on an AI startup.”
— Jared
“Gamma got to $100 million in ARR with only 50 employees… it’s a good trend to have the reverse flex: look at all this revenue and look how few people work for us.”
— Jared
Questions Answered in This Episode
How should an early-stage startup decide when to adopt a multi-model orchestration layer versus standardizing on a single LLM provider?
In what verticals do specialized small models most clearly beat general-purpose frontier models, and how defensible are those advantages over time?
If AI infrastructure is being overbuilt, what concrete kinds of ‘YouTube-like’ companies could emerge to exploit the coming surplus in compute and bandwidth?
How might space-based data centers and fusion meaningfully change the cost structure and geographic distribution of AI compute over the next decade?
As the AI economy stabilizes, what new moats are actually emerging for application-layer startups beyond access to models—data, UX, workflow integration, or something else?
Transcript Preview
I think perhaps the thing that most surprised me is the extent to which I feel like the AI economy stabilized. We have, like, the model layer companies and the application layer companies and the infrastructure layer companies. And it seems like everyone is going to make a lot of money, and there's kind of, like, a relative playbook for how to build an AI-native company on top of the models.
Many episodes ago, we talked about how it felt easier than ever to pivot and find a startup idea, 'cause if you could just survive, maybe just wait a few months, there was likely going to be some big announcement that would completely make a new set of ideas possible. And so finding ideas is sort of returning to normal levels of difficulty.
Welcome back to another episode of The Light Cone. Today, we're talking about the most surprising things that we saw this year in 2025. Diana, you found a pretty crazy one. It's sort of a changing of the guard almost in who is the preferred LLM at YC during the YC batch.
Yes. In fact, we just wrapped up the Winter '26 selection cycle for companies, and one of the questions we asked all the founders who apply to YC is, "What is your tech stack and model of choice?" And one of the shocking things is that, for the longest time, OpenAI was the clear winner, for all of last year, the last couple of batches. That number has been coming down, though. And shockingly, in this batch, the number one API is actually Anthropic, which came out a bit ahead of OpenAI-
Yeah.
... which, who would have thought? I think when we started this podcast series back then, OpenAI was like 90-plus percent, and now Anthropic. Who would have thought?
Yeah. And you know, they've been hovering around, like, 20, 25% for most of, like, 2024 and early 2025. And then only in the last three to six months did this sort of changing of the guard actually happen.
They had that, uh, hockey stick with the growth, over 52%.
Why do you think that is?
I think there's a couple of things in terms of the tech stack selection. As we've seen this year, there have been a lot of wins in terms of vibe coding tools that are getting built out there, and coding agents. There are so many categories that this ended up being a bigger problem space that is actually creating a lot of value. And it turns out the models that perform the best at it are, uh, the models from Anthropic. And I think that's not by accident. From the conversation we had with Tom Brown not too long ago, when he came and spoke, coding was one of their internal evals. They made it their North Star on purpose, and you can see it in the model's taste. As a result, the best choice of model for a lot of founders building products is Anthropic.