
Anduril & Founders Fund’s Trae Stephens on Choosing Good Quests in the Age of AI | Ep. 35
Trae Stephens (guest), Jack Altman (host)
In this episode of Uncapped with Jack Altman, Anduril co-founder and Founders Fund partner Trae Stephens joins Jack Altman to discuss choosing good quests in the age of AI, defense tech, and VC strategy.
Trae Stephens on AI quests, defense tech, and VC strategy
Stephens argues AI’s biggest distortion is making it easy to build “uninteresting” companies, pulling top talent into crowded, consensus categories instead of high-impact work like manufacturing semiconductors or building critical infrastructure.
He outlines an ethics framework (a feels-good/feels-bad vs. is-good/is-bad matrix) and places defense work in the “feels bad but is good” quadrant—duty-driven work necessary for a functioning society—while warning that “feels good but is bad” vices often require policy constraints.
On Anduril, he emphasizes that the next frontier is production: moving from thousands to tens of thousands of units via design-for-manufacturing and large-scale facilities like Arsenal One, and navigating government procurement where “if you build it, they do not come.”
As a Founders Fund partner, he explains the firm’s edge as access plus open internal debate, founder-centric evaluation, willingness to take tech risk, and aggressive concentration into winners—while avoiding hype/FOMO and “kamikaze rounds” that can harm companies.
Key Takeaways
AI enables trivial startups faster than it enables meaningful ones.
Stephens says the danger isn’t that AI makes hard things impossible, but that it makes “un-hard” things too easy—creating a gold-rush into crowded categories where many teams build near-identical API wrappers.
“Good quests” are partly an opportunity-cost argument.
Even if some AI “slop” companies are profitable, putting society’s most capable builders on low-impact work crowds out progress on difficult, high-leverage problems like manufacturing, defense resilience, and advanced hardware.
Some vital work will always feel uncomfortable.
Using his 2x2 matrix, Stephens argues defense, law-and-order, and other duty-based domains can “feel bad” yet be essential for societal stability—distinct from clearly harmful/illegal activities.
Vices scale into great businesses unless policy constrains them.
For the “feels good but is bad” quadrant (e. ...
Regulation will mostly follow lived failures, not precede them.
He portrays policymaking as reactive: technology pushes boundaries, society observes harms, then a functioning democracy builds guardrails—especially relevant given limited technical expertise in Congress.
Anduril’s next bottleneck is manufacturing, not prototypes.
The company has proven systems at smaller volumes; scaling to “tens of thousands” requires design-for-manufacturing decisions, supply chain maturity, and major facilities like the planned multi-million-square-foot Arsenal One.
Defense procurement is a strategy problem as much as a product problem.
Stephens flips the typical startup ratio: in defense, product may be ~30% of the battle; navigating contracting, incentives, and adoption by a non-traditional prime is often the gating factor.
Great-power warfare is shifting to low-cost autonomy across domains.
He claims the cost calculus undermines legacy platforms (e. ...
Founders Fund’s performance comes from debate and concentration.
He attributes selection quality to open argument with minimal hierarchy and to concentrating 40–50% of a fund into the top few outcomes—plus discipline about not sprinkling small checks that don’t move the needle.
“Kamikaze rounds” can destroy companies even when founders accept them.
Large checks at inflated valuations may look attractive but can become an “anchor around your neck,” harming long-term viability; he frames fundraising as a founder responsibility, not just VC mispricing.
Notable Quotes
“The distorting characteristics of AI have less to do with the ability to do interesting, hard things, and it has much more to do with how easy it is to do uninteresting, un-hard things.”
— Trae Stephens
“If we take all of our level one hundred players, and we put them on AI slop companies, what does that mean for all of the things that aren't being done at the same time?”
— Trae Stephens
“The feels bad is good, I would argue, is kind of where Anduril lives. It's this duty and responsibility… for a functioning society.”
— Trae Stephens
“The era of putting five thousand people on a fifteen billion dollar aircraft carrier and using that for force projection is over.”
— Trae Stephens
“This is not the Field of Dreams. If you build it, they do not come.”
— Trae Stephens
Questions Answered in This Episode
In your “good quests” framing, what concrete signals distinguish a meaningful AI company from an “AI slop” company when both may look like API products early on?
You argue the key issue is talent misallocation—what incentives (career, capital, policy, culture) would actually redirect “level 100” builders toward semiconductors, manufacturing, or defense resilience?
Using your ethics 2x2, where do AI companions and “bringing back loved ones” land today—and what specific guardrails would prevent social harms without banning the category?
Defense is “30% product, 70% strategy” in your view—what are the top 3 procurement pitfalls that first-time defense founders underestimate?
Arsenal One implies a major design-for-manufacturing pivot—what system-design choices most dramatically lower unit cost for autonomous systems or missiles?
Transcript Preview
I always tell defense tech companies this, it's like, this is not the Field of Dreams. If you build it, they do not come. That's not how it works. Like, if you build a perpetual motion machine, and the point of the perpetual motion machine is, like, powering forward operating bases, you go to the Department of War, and you say, "I have built you a perpetual motion machine, and I will sell it to you for a million dollars." They would say, "Okay, we're gonna bid this out to market, and we're gonna give Lockheed Martin a one hundred billion dollar contract [chuckles] to rebuild, from white paper, your perpetual motion machine." That's how it works.
[upbeat music] Trae, I'm really happy to be doing this with you. Thanks for making time for it.
Great to be here.
I wanna start with, uh, you wrote a blog post a couple years ago, I think, Good Hard Quests or good-- Choose Good Quests.
Mm-hmm.
That really stuck with me, and I think in many ways, we're in a moment in time right now where people are sort of grappling with, like, what that means in the world of AI. And I think both because of, like, the technology itself, which, like, creates all these, like, weird, "What does it mean to be human? What part of the experience actually matters?" It, like, creates, like, a lot of odd questions. Then you also get these, like, gold rush dynamics, where, like, money's flowing, and you can start a company quickly, and there's, like, a lot of get-rich-quick ideas. And so I think for a lot of reasons, this blog post is, like, super relevant right now. And I'd just be curious to hear how you think about this concept of, like, choosing good quests, like, when we're in the middle of something like AI.
Yeah, you know, I think the, the distorting characteristics of AI have less to do with the ability to do interesting, hard things, and it has much more to do with how easy it is to do uninteresting, un-hard things. Um, you know, there's the, the constant conversation that comes up about, like, the first company to a billion-dollar valuation with one employee. It is possible to do that, but what that means is that all of these people that are coming out of college or, or that have aspirations to be founders, you know, they're doing like whiteboard founding.
Yeah.
They're walking up to a whiteboard. They're writing down a hundred different ideas. Many of them are just, like, using the, you know-
Yeah
... LLM APIs, uh, to do some highly specific task or something like that, and then as they enter the market, there's, you know, dozens of competitors, and it, it's kind of like a, a battle to the death in, like, a really highly consensus category. And that's kind of what I would say is the opposite of a good quest. Like, going into something because you can, trying to generate wealth as your primary motivation, not super interesting. It's sort of akin to, like, a celebrity starting a tequila company-