2024: The Year the GPT Wrapper Myth Proved Wrong

Y Combinator · Dec 13, 2024 · 38m

Garry Tan (host), Harj Taggar (host), Diana Hu (host), Jared Friedman (host)

- Myth of the ‘GPT wrapper’ and where value accrues in AI
- Rise of open-source models (LLaMA) and multi-model orchestration
- Enterprise adoption: pilots turning into real revenue and faster sales cycles
- Vertical AI agents, especially voice AI and domain-specific workflows
- Robotics and the emerging role of LLMs as ‘robot consciousness’
- AI coding tools and how they’re changing developers, hiring, and interviews
- Regulatory landscape, mega funding rounds, and the resurgence of in-person YC/San Francisco

In this episode of Y Combinator’s The Light Cone, Garry Tan, Harj Taggar, Diana Hu, and Jared Friedman look back at 2024 and argue that the year decisively disproved the belief that all AI value would accrue to foundation model companies like OpenAI, as application-layer startups rapidly grew into real, revenue-generating businesses.

2024 Shatters GPT-Wrapper Myth As Vertical AI Startups Explode

The hosts argue that 2024 decisively disproved the belief that all AI value would accrue to foundation model companies like OpenAI: application-layer startups have rapidly grown into real, revenue-generating businesses.

Open-source models (especially LLaMA), multi-model orchestration, and agentic/voice-based AI have created enormous room for differentiated products, faster enterprise adoption, and capital-efficient companies reaching millions in ARR quickly.

They highlight trends such as vertical AI agents, voice AI, robotics, AI coding tools, and internal LLM-powered systems reshaping how startups are built, scaled, and staffed.

The discussion also covers regulation breaking in favor of startups, major funding rounds (OpenAI, Scale AI, SSI), YC batch performance, the return of in-person Demo Day, and a broader resurgence of in-person work and San Francisco’s tech ecosystem.

Key Takeaways

AI application startups can now reach tens of millions in revenue in 24 months with relatively little capital.

YC partners report multiple examples of companies hitting strong ARR quickly on $2–5M of spend, contradicting the idea that only heavily funded foundation model players can win.

Open-source models and LLaMA’s rise have eliminated the realistic prospect of model monopolies.

With top benchmarks now led by LLaMA and strong derivative work, startups have real model choice, making product quality, sales, and zero-churn execution more decisive than owning the base model.

Multi-model orchestration is becoming the default architecture for serious AI applications.

Companies increasingly route tasks between fast, cheap models and larger, smarter ones for complex work, evolving simple model routing into sophisticated orchestration stacks tailored to use cases.
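
The routing pattern described here can be sketched in a few lines of Python. Everything in this example is hypothetical: the model names, the complexity signals, and the token threshold are placeholders, and real orchestration stacks route on richer signals (classifier scores, latency budgets, past failure rates, cost ceilings) rather than prompt length alone.

```python
# Minimal sketch of complexity-based model routing (illustrative only).
# Simple requests go to a fast, cheap model; complex ones escalate to a
# larger, more capable model. Model names here are placeholders.

from dataclasses import dataclass


@dataclass
class Route:
    model: str   # which model should handle the request
    reason: str  # why the router chose it


CHEAP_MODEL = "small-fast-model"      # hypothetical name
CAPABLE_MODEL = "large-smart-model"   # hypothetical name


def route(prompt: str, needs_tools: bool = False,
          max_cheap_tokens: int = 200) -> Route:
    """Pick a model based on rough complexity signals."""
    approx_tokens = len(prompt.split())  # crude whitespace token count
    if needs_tools:
        return Route(CAPABLE_MODEL, "tool use requires the larger model")
    if approx_tokens > max_cheap_tokens:
        return Route(CAPABLE_MODEL, "long prompt, likely complex")
    return Route(CHEAP_MODEL, "short prompt, cheap model suffices")
```

In practice this single `if` chain is where orchestration stacks grow sophisticated: the threshold becomes a learned classifier, and the router tracks per-model cost and quality to tune where the boundary sits for each use case.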

Enterprise AI pilots are now converting into production deployments and meaningful ARR.

Compared to a year ago, YC sees many startups hitting $1M ARR faster than ever, as enterprises recognize strong ROI and earlier concerns about hallucinations and unreliability are addressed by better infrastructure and agent techniques.

Vertical AI agents—especially in voice and customer support—are creating hundreds of distinct, defensible niches.

Voice AI isn’t winner-take-all: workflows differ radically by sector (airlines vs banks vs SaaS), enabling both horizontal infrastructure providers and many vertical apps to thrive simultaneously.

AI coding tools are fundamentally changing developer productivity and how startups hire.

Tools like Cursor, Devin, Replit agents, and Claude Artifacts let small teams build more with fewer engineers, pushing interviews to test AI-native workflows and raising expectations for output per developer.

Regulation and politics, while still uncertain, have so far broken in favor of startups rather than entrenching incumbents.

The failure of heavy-handed proposals (like California’s SB 1047) and the likely rollback of parts of the Biden AI executive order under new leadership reduce the risk of regulatory capture by large AI firms, at least in the near term.

Notable Quotes

This is the year that everything broke in favor of startups.

Harj

It sounds kind of ridiculous to say that now because… who even remembers the ChatGPT store?

Harj

Choice means that it's not as much about the model… all the other things seem to end up mattering a lot more.

Jared

The time it's taking to reach $100 million in annual revenue is trending down.

Garry

AI coding agents basically broke the standard programming interviews that companies have been doing for years.

Jared

Questions Answered in This Episode

If open-source and multi-model orchestration commoditize models, where will the deepest, most durable moats in AI startups actually come from?

How should an early-stage founder decide between building a horizontal AI infrastructure product versus a highly verticalized agent for a single industry?

What concrete practices make LLM-based agents reliable enough for mission-critical enterprise workflows, beyond prompt engineering and RAG?

How will AI-native coding tools reshape the skills companies optimize for when hiring engineers over the next five years?

In robotics, what specific combination of commodity hardware and LLM-based ‘consciousness’ is most likely to create a ChatGPT-style breakout moment?

Transcript Preview

Garry Tan

The wildest thing right now is you can start a company that can make tens of millions of dollars, literally in 24 months, and, uh, you can do it for potentially, you know, $2 million, $5 million.

Harj Taggar

A year ago, I remember many of the startups in the batch would get sort of enterprise proof of concepts, or pilots in particular, and there was a lot of cynicism around whether any of those pilots would translate into real revenue. Fast-forward a year, I think we have all firsthand experience that these pilots have turned into, like, real revenue.

Garry Tan

It's still early days, honestly. Like, it... you know, we sort of breathe a sigh of relief right now in 2024, but it's anyone's game, honestly. Like, these things are moving so quickly. Welcome back to another episode of The Light Cone. I'm Garry. This is Jared, Harj, and Diana. And collectively, we funded companies worth hundreds of billions of dollars right at the beginning. So, 2024, what a year. How are you feeling about this, Harj?

Harj Taggar

Pretty great. I think this is the year that everything broke in favor of startups. What I've been thinking about a lot recently is when ChatGPT launched, two years ago now, the immediate consensus view was all of the value would go to OpenAI. And very specifically, do you all remember when they announced the GPT or the ChatGPT store?

Garry Tan

Yeah.

Harj Taggar

Like, the... I remember the consensus was everything that was built on top of ChatGPT was a GPT wrapper, and the App Store was just going to be released and crush every single person trying to build an AI application, and OpenAI would be a ginormous company, but there'd be no opportunity for startups. It sounds kind of ridiculous to say that now because, um...

Garry Tan

(laughs) Who even remembers the ChatGPT store?

Harj Taggar

Yeah, exactly.

Garry Tan

(laughs)

Harj Taggar

The ChatGPT store itself was a nothing burger, but, like, mo- more importantly, what are the big AI applications today? Like, I'd say, outside of ChatGPT itself, the breakout consumer application is Perplexity. The breakout enterprise application is probably Glean, maybe. Um, in legal tech, you have Casetext, you have Harvey. Pro-sumer, you have PhotoRoom. Like, there's... the point being, there are many, many applications that have been built not by OpenAI. It's been a great time to build startups.

Garry Tan

Yeah. The wildest thing right now is you can start a company that can make tens of millions of dollars, uh, literally in 24 months from zero, and, uh, you can do it for potentially, you know, $2 million, $5 million. That's sort of the story of one of these companies, Opus Clip, which never had to raise a real series A, and that's something that we sort of see across the YC community as well.

Harj Taggar

Yeah. I think that's, um, a particularly important point that you can do it as a startup without raising tons of capital, because post the GPT store launch, I then remember, uh, Anthropic and Claude emerged, and the consensus view for a while was all of the value's going to go to one of these foundation model companies, and that the only way you can compete in AI is to raise huge amounts of money, um, either because you've got venture capital, or you're Amazon or Facebook or Google, um, with tons of cash already. But if you weren't one of the big foundation models, um, there would be no value. And the applications built on top of these things would either be built by the foundation model companies themselves or just not be that valuable. Again, something that turned out to be completely not true, right? And in particular, what drove that is open source. Like, the weird series of events where Meta... (laughs) Like, the weights being leaked and, like, Meta just, like, rolling with it.
