
Good News For Startups: Enterprise Is Bad At AI
Harj Taggar (host), Jared Friedman (host), Diana Hu (host), Garry Tan (host)
In this episode of Y Combinator's The Light Cone, hosts Harj Taggar, Jared Friedman, Diana Hu, and Garry Tan examine why enterprises struggle to execute on AI, and why that struggle is a rare opening for ambitious startups.
Enterprise AI Struggles Give Ambitious Startups A Rare Opening Shot
The hosts dissect an MIT study on AI project failure rates and argue that its viral, pessimistic interpretation is misleading; instead, it reveals how badly most enterprises execute on AI. They explain that big companies overwhelmingly try to build AI internally or via consultants, and these efforts usually fail due to politics, legacy systems, weak product sense, and engineers who don’t really believe in AI. In contrast, specialized startups that deeply integrate into enterprise workflows and build AI‑native products are winning large deals quickly and decisively. The episode frames this as unprecedented good news for founders: enterprises are desperate, more open than ever to young startups, and switching costs will create strong moats for those who execute well.
Key Takeaways
Most enterprise AI failures are about execution, not AI being a scam.
The MIT study primarily captures internal and consultant‑led projects, which often fail due to bad software, organizational politics, and weak product execution—not because AI is inherently useless.
Startups that go deep into business processes can massively outperform incumbents.
Companies like Tactile, Greenlight, Castle AI, and Reducto win by embedding into core systems of record, understanding domain workflows, and building AI‑native products rather than shallow ‘AI add‑ons’.
Enterprises’ preference for incumbents and consultants is breaking down under performance pressure.
Banks and FAANGs initially default to trusted vendors like Ernst & Young or legacy software providers, but repeatedly return to startups after those efforts fail to deliver working AI systems.
There is a ‘startup‑shaped hole’ where polymath builders are missing in enterprises.
Successful AI products require rare combinations of cutting‑edge AI knowledge, strong product taste, and deep empathy for human processes—skills that are scarce in large orgs but common in top startup founders.
Winning enterprise AI deals requires navigating politics and cultivating internal champions.
Startups succeed by forming real relationships with risk‑tolerant employees, often people who have fantasized about doing a startup themselves, and by leveraging founders whose companies were previously acquired into big firms.
Engineer skepticism of AI tools is a liability for big companies and an opportunity for others.
Many enterprise engineers dismiss code‑gen and AI as hype, which prevents their firms from shipping competitive AI products, leaving the door wide open for founders and individual engineers who lean in and master these tools.
AI implementations create strong switching costs and thus real moats.
Enterprise leaders admit that once they invest in training and integrating a gen‑AI system, switching becomes “prohibitive,” meaning that early, successful AI vendors can lock in durable, defensible positions.
Notable Quotes
“The majority of software that actually gets built in the world is very, very bad.”
— Jared
“Apple, a company with infinite resources and infinite access to the smartest people in the world, cannot make a good calendar app.”
— Jared
“For now, there's just this startup-shaped hole in basically every process or every sort of annoying system that should exist that doesn't exist yet.”
— Garry
“If your engineers don't believe in this, then how are you gonna build a product that actually works?”
— Jared
“All these people who are worried that these ChatGPT wrappers won't have moats—like, that's the moat.”
— Jared
Questions Answered in This Episode
How can a new AI startup practically ‘embed’ itself into an enterprise’s systems of record without overextending its small team?
What specific skills or experiences best develop the polymath mix of AI expertise, product taste, and domain understanding the hosts describe?
How should an enterprise leader respond if their own engineering org is skeptical of AI but they see clear competitive pressure to adopt it?
What are the ethical and strategic risks of enterprises becoming heavily locked into a single AI vendor due to high switching costs?
For an individual engineer in a big company, what is the most effective way to experiment with AI tools and prove their value internally?
Transcript Preview
Engineering teams at these orgs are filled with people that themselves don't actually really believe in AI, don't use code gen tools, think it's all super overhyped, are really excited when an MIT study comes out saying that it's all, like, hype and retweet it-
(laughs)
... and, um, and really want 'cause it's a narrative they want to believe. But the consequence of that for the companies is that they can't build the product. So if your engineers don't believe in this, then how are you gonna build a product that actually works? The knock on effect for startups then is if you can actually build something that works, the enterprises will talk to you because they have no other options; can't build it internally, can't go to an established company. Um, so the startups are actually getting, like, the shot that they never had before.
I guarantee you someone is watching this right now, and, uh, you've just horribly triggered them.

Welcome back to another episode of The Light Cone. One of the things that has been really pissing me off is these AI influencers. You see them on X. You see them on YouTube. And they're claiming that 95% of AI projects are failures, and that's proof that AI is a scam. What's the real story, Jared? You actually dug into the MIT report that these people are grifting with. What does the report actually say?

What really went viral was, like, tweets about this study, and I think the tweets are actually quite misleading. Diana and I were talking to a bunch of college students recently, and they had concluded, just by reading, like, the tweet version of the study, that, like, "Oh, all these AI startups that YC is talking about, like, must not be working because the study says that they all fail." But actually, the more I read the study, the more I realized it was actually confirming a lot of the things we've talked about here on th- this podcast about what AI agents are really like in the real world and what approaches and categories are working. And so I thought it'd be interesting for us to talk about what the study really says.
Because it's a very different approach to the go-to market for all these AI solutions. It's not just standard enterprise sales. I think one of the big things that we talk a lot about is this aspect of, um, teams, startup and founders, embedding themself into the business processes and really grokking a lot of the internal systems of record and going deep, deep, deep in the integration, which is not something that has been typically done in the SaaS world. SaaS was, like, very plug-and-play, which is different. But w- when you do succeed and plug into the systems of record, the pot of gold is actually quite big. B- but it does take a long time. We actually have a lot of examples of work with companies that have succeeded, which we can talk about later.