
Better AI Models, Better Startups
Jared Friedman (host), Garry Tan (host), Harj Taggar (host), Diana Hu (host)
In this episode of The Light Cone from Y Combinator, hosts Jared Friedman, Garry Tan, Harj Taggar, and Diana Hu explore how better AI models open massive opportunities for startup founders.
How Better AI Models Open Massive Opportunities For Startup Founders
The hosts discuss recent frontier model releases like GPT-4o and Gemini 1.5, and what these advances mean for startups, especially YC companies.
They contrast OpenAI’s consumer-focused, multimodal demos with Google’s technically ambitious Mixture-of-Experts Gemini model and huge context windows, exploring implications for RAG, infrastructure, and model competition.
A major theme is how startups should navigate building on top of powerful models without being crushed by incumbents, drawing parallels to earlier eras competing with Google and Facebook.
They argue that better models generally mean better startup opportunities, particularly in B2B, ‘unsexy’ or highly regulated niches, and edgy consumer areas large incumbents are unwilling to touch.
Key Takeaways
Treat model releases as leverage, not existential threats.
Every new model generation effectively ‘upgrades’ your product with minimal code changes, so the real competition is other startups building faster on top of these capabilities, not just OpenAI or Google.
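The "one line of code" upgrade the hosts describe is literal in most SDKs: the model is just a string parameter. A minimal sketch of the idea — the helper name, default, and request shape are illustrative assumptions, not from the episode:

```python
# Hypothetical helper: the "upgrade" is editing the MODEL constant below.
MODEL = "gpt-4o"  # was "gpt-4" -- changing this one line upgrades every call


def build_request(prompt: str, model: str = MODEL) -> dict:
    """Assemble chat-completion parameters to pass to an OpenAI-style SDK."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
```

Every feature built on `build_request` inherits the new model's capabilities the moment the constant changes, which is the sense in which a model release "upgrades" the product for free.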
Avoid building the obvious next feature of OpenAI’s core assistant.
If it’s easy to imagine your product as part of OpenAI’s next release, you’re in the danger zone—similar to competing with Google on generic search; focus instead on niches they’re unlikely to prioritize.
Bet on a multi-model ecosystem rather than a single winner.
Rough parity between OpenAI, Google, Anthropic, Meta, and open-source models means startups can route between providers, avoid dependence on one vendor, and preserve margins as a real market forms.
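Routing between roughly equivalent providers can be as simple as a preference table plus a fallback order, so no single vendor becomes a hard dependency. A toy sketch — the model names, task labels, and availability set below are illustrative assumptions:

```python
# Illustrative multi-provider router: preferred models per task, with fallbacks.
ROUTES = {
    "code":    ["gpt-4o", "claude-3-opus", "gemini-1.5-pro"],
    "summary": ["claude-3-haiku", "gpt-4o-mini"],
}

# In a real system this set would come from health checks or cost monitoring.
AVAILABLE = {"claude-3-opus", "gemini-1.5-pro", "claude-3-haiku"}


def pick_model(task: str) -> str:
    """Return the first available model for a task, falling through the list."""
    for model in ROUTES.get(task, []):
        if model in AVAILABLE:
            return model
    raise RuntimeError(f"no provider available for task: {task}")
```

Because the routing table is data rather than code, swapping vendors or renegotiating pricing is a config change, which is how rough model parity translates into preserved margins.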
RAG and layered memory architectures will remain important.
Even with million-token context windows, practical retrieval, privacy, permissions, and enterprise logging needs mean RAG-like pipelines and multi-tier data storage are likely to be enduring infrastructure.
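Even a million-token context window does not answer "which documents is this user allowed to see?" — which is why the takeaway expects retrieval pipelines to persist. A toy sketch of the permission-aware retrieval step, where keyword overlap stands in for real vector search and all document names and ACLs are illustrative:

```python
# Toy permission-aware retriever: filter by access control first, then rank.
DOCS = [
    {"id": "d1", "text": "refund policy for enterprise plans", "allowed": {"alice", "bob"}},
    {"id": "d2", "text": "internal salary bands", "allowed": {"hr"}},
    {"id": "d3", "text": "enterprise onboarding checklist", "allowed": {"alice"}},
]


def retrieve(query: str, user: str, k: int = 2) -> list:
    """Return ids of the top-k docs this user may read, ranked by word overlap."""
    q = set(query.lower().split())
    visible = [d for d in DOCS if user in d["allowed"]]
    ranked = sorted(visible, key=lambda d: -len(q & set(d["text"].split())))
    return [d["id"] for d in ranked[:k] if q & set(d["text"].split())]
```

The permission filter runs before ranking, so a document the user cannot read never reaches the model's context — a guarantee a raw long-context prompt cannot make on its own.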
Look for ‘valuable but unsexy’ problems big tech won’t demo on stage.
Vertical B2B workflows (e.g., …)
Target areas incumbents avoid for PR, legal, or brand risk.
Edgy categories like AI companions, deepfake-based creative tools, and borderline satire or adult applications create openings where Google, Meta, and now even OpenAI are constrained from moving fast.
Model progress directly fuels upsell potential in B2B AI SaaS.
As models get cheaper and more capable, B2B AI products can continuously introduce higher-value features, justify price increases, and capture a growing share of what used to be human ‘transactional labor’ spend.
Notable Quotes
“Either way, I actually think we're at this moment where the better the model becomes, if you're already using 4 and suddenly you change one line of code and use 4o, you basically just get smarter by default every generation.”
— Harj
“If there are multiple equivalently powerful models, you're much safer off as a startup.”
— Diana
“Using LLMs to automate various jobs is probably as large an opportunity as SaaS, like all of SaaS combined.”
— Jared
“If you can easily imagine that what you're building is gonna be in the next OpenAI release, you know, maybe it will be.”
— Jared
“Things that are increasingly edgy are often the places where there's great startup opportunity.”
— Harj
Questions Answered in This Episode
How can a startup systematically identify ideas that are ‘valuable but unsexy’ enough that OpenAI or Google are unlikely to prioritize them?
What criteria should founders use to decide when to rely on large context windows versus investing in robust RAG and custom data infrastructure?
In a future where OpenAI-like assistants live on the desktop with deep access to user data, what defensible positions remain for third-party productivity and agent apps?
How should early-stage AI startups hedge vendor risk and pricing power in a world of rapidly evolving, multi-vendor frontier models?
Where is the ethical and legal line between acceptable deepfake-based creativity and harmful misuse, and how should startups operating there design guardrails?
Transcript Preview
Every time there's an OpenAI product release now, it feels like there's a bunch of startups waiting with bated breath to see whether OpenAI is going to kill their startup.
This is actually a really, uh, crazy moment for all startups. Adding more types of modalities and more capabilities, uh, per model, the, the better off every startup is.
You have to be on top of these announcements and be... kind of know what you're going to build in anticipation of them before someone else does, versus being worried about OpenAI or Google being the ones to build them.
Welcome back to another episode of The Light Cone. I'm Garry. This is Jared, Harj, and Diana, and we're some of the group partners at YC who have funded companies that have gone on to be worth hundreds of billions of dollars in aggregate. And today, we are at an interesting moment, uh, in the innovation of large language models in that we've seen a lot of really new tech come out just in the last few weeks, whether it's GPT-4o, it's, uh, Gemini 1.5. Harj, how are you thinking about, you know, what does it mean for these models to be so much better?
Anytime I see a new announcement from one of the big AI companies with the release of a new model, the first thing I think about is, what does this mean for the startups, and in particular YC startups? And when I was watching the OpenAI demos, it was pretty clear to to me that they are really targeting consumer. Like, all of the demos were cool consumer use cases and applications, which makes sense. That's kind of what ChatGPT was, was a consumer app that went really viral. I just wonder what it means for the consumer companies that we're funding and, in particular, like, how will they compete with OpenAI for these users? What did you think? Like, what, wh- even if we take it back, like, how do consumer products win from, like, first principles? Like, is it more about the product or the distribution, and how do you compete with OpenAI on either of those things?
Yeah, that's a great question. I mean, I think ultimately it's both. And then, uh, how I want it to be is that the best product wins. Uh, how it actually is, is whoever has the best distribution and a sufficiently good product seems to win. Either way, I actually think we're at, uh, sort of, uh, in this moment where the better the model becomes, if you're already using 4 and suddenly, you know, you can, uh, change one line of code and be using 4o, uh, you basically just get smarter by default every generation. And that's really, really powerful. It means that, I think we're entering this moment where the i- the IQ of these things is still, you know, 4 is arguably around 85. It's not that high. (laughs) And then if the next generation, if Claude 3 really is at 100 or, you know, the next few models end up being closer to, you know, 110, 120, 130, this is actually a really, uh, crazy moment for all startups. And, uh, the most interesting thing is, like, uh, adding new capabilities, so having the same model be great at coding, for instance. Uh, that means that, you know, you might have a breakthrough in reasoning, not through just the model reasoning itself, but you could have the model actually write code and have the code (laughs) do better. And even right now, it seems like there's, um, a lot of evidence that if, instead of trying to prompt the model to do the work itself, you have it write code and you execute the code, it can actually do things that reasoning alone could not do. So adding more types of modalities and more capabilities, uh, per model, the, the better off every startup is.
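The "have the model write code and execute it" pattern Garry describes can be sketched in a few lines: the model's reply is a program, and the host app runs it in a restricted namespace. The generated snippet here is hardcoded for illustration — in a real system it would come from a model call, and untrusted output should run in a proper sandbox, not bare `exec()`:

```python
# Sketch: execute model-generated code instead of trusting free-text arithmetic.
# MODEL_OUTPUT is a hypothetical stand-in for an LLM's reply to
# "write Python that computes the sum of squares from 1 to 100".
MODEL_OUTPUT = "result = sum(i * i for i in range(1, 101))"


def run_generated_code(code: str):
    """Run the snippet with a stripped-down builtins table; return `result`."""
    namespace = {"__builtins__": {"sum": sum, "range": range}}
    exec(code, namespace)  # NOTE: a real deployment needs true sandboxing
    return namespace["result"]
```

Asking a model to state this sum directly invites arithmetic slips; asking it to emit the two-line program and then executing that program gives an exact answer, which is the "reasoning via code" advantage described above.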