
No Priors Ep. 109 | With Sarah and Elad
Sarah Guo (host), Elad Gil (host)
In this episode of No Priors, hosts Sarah Guo and Elad Gil explore AI image breakthroughs, macro jitters, and the maturing model ecosystem.
AI Image Breakthroughs, Macro Jitters, and the Maturing Model Ecosystem
Sarah and Elad discuss the latest leap in AI image and animation generation, framing it as another recurring “wow moment” on a steady curve of quality, control, and aesthetic sophistication. They argue that, despite market volatility, early-stage software startups—especially in AI—are largely insulated from macro concerns, with venture and model funding still deep and active. The conversation then shifts to the evolving foundation model landscape, including convergence on capabilities, unexplored vertical model opportunities, and the tension between one general model and many specialized ones. They close by describing a temporarily more stable AI stack—models, infra, orchestration, and emerging standards like MCP—before predicting the next wave of disruption and consumer products.
Key Takeaways
AI image and animation quality is improving in recurring, dramatic waves.
From early GAN art and seven-fingered Midjourney images to today’s polished animations, each new generation resets user expectations and exposes how much more room there still is for quality and control.
Macro market turbulence matters far less to early-stage software startups than people think.
For small, viable startups—especially pure software plays—cycles in the NASDAQ, tariffs, and sentiment tend to be a shrug unless funding in venture dries up dramatically or you’re on the cusp of an IPO.
Foundation language models are converging on capability, making distribution and product differentiation crucial.
Benchmarks show many top models clustered in performance, so advantages will increasingly come from distribution, user experience, verticalization, and how well models are integrated into real workflows.
There is large, underexplored opportunity in vertical and scientific models beyond language.
Domains like physics, materials, robotics, and specialized healthcare models may hold significant economic and societal value, but they’re underfunded relative to their potential because they’re harder and less trendy than generic LLMs.
Data collection and generation are the core bottlenecks for non-text AI domains.
Unlike language and code, where data is abundant and digital, robotics, chemistry, and other physical domains require expensive, bespoke data generation (labs, robots, experiments), which favors companies that can build those engines.
Model choice will be shaped by a speed–cost–reasoning trade-off matrix.
Elad frames a 2×2 where slow, expensive but very capable models power deep reasoning tasks, while fast, cheap specialized models serve narrow but high-throughput use cases, with orchestration layers routing workloads between them.
The AI stack is temporarily stabilizing, with standards like MCP accelerating agents.
With clearer layers—models, RAG, infra, evals, orchestration, and now Model Context Protocol to standardize model–data/tool connections—founders have a more predictable platform to build on, even though the next disruption is likely close.
Notable Quotes
“I feel like every year or two, there's this moment in the image gen world where people have a 'Wow, that's amazing' moment again.”
— Elad
“For day-to-day technology startups, particularly ones that are not doing hardware, it should really be of minimal actual day-to-day impact.”
— Elad
“Often, the interest level of people working in the industry to build models is divorced from the economic value of these models.”
— Elad
“Anytime you go into the physical world, it's always harder to generate data.”
— Elad
“It feels like a period of brief consolidation... I think we should enjoy the calm while it lasts for, you know, the next week or whatever it is.”
— Elad
Questions Answered in This Episode
How might the next major leap in image or video generation change the economics of animation, gaming, and graphic design work?
Given convergence among top foundation models, what durable moats can new AI companies realistically build beyond access to capital and compute?
Which specific scientific or industrial domains (e.g., materials, robotics) are ripest for a dedicated model company to emerge today, and why aren’t more founders pursuing them?
How should startups decide when to rely on a general-purpose LLM versus investing in training or fine-tuning a specialized model for their domain?
What kinds of consumer AI products beyond search and chat are actually plausible in the next 1–2 years, given current model and agent capabilities?
Transcript Preview
Hey, listeners. Welcome back to No Priors. Uh, today you've just got me and Elad again.
It's a favorite type of episode. Sarah, habibi, how you doing?
I'm great. I'm so excited. Everything is adorable cartoons that are also, like, slightly nostalgic and sensitive. And tell me about how you react to, uh, Studio Ghibli and also just better image generation.
I mean, I'm a longstanding anime fan, so I think converting the world into everything anime or manga is a very positive step for humanity. So, I view this as something I've been (laughs) waiting for, for a while. I feel like every year or two, there's sort of this moment in the image gen world where people have a "Wow, that's amazing" moment again. And the first version of that was like, oh my God, these, th- th- you know, I think it, maybe even the GAN wave was the first wave. There was a GAN artwork in, like, 2019 or so, or 2018 that went to Sotheby's for, um, auction, which was one of the first sort of, um, AI generated arts back when people were doing these adversarial network-based approaches to generating artwork. And it was kind of these kludgey tool chains, but even then people were like, "Whoa, look at what AI can do right now." And it was super bad, you know (laughs) , in comparison to what you can do today.

And then there was kind of the Midjourney, um, early Stable Diffusion wave where those models came out and people were like, "Oh my gosh, this thing is amazing, but everybody has seven fingers in the images, but oh my God, it's amazing. And look at all the things we can do with it and it's gonna transform society," et cetera, et cetera. I feel like we've periodically had these and I feel like this is the latest version of that. And part of it is we're just on this amazing curve of quality and fidelity in this artwork and the ability to do... I mean, even back in the GAN world there was, like, style transfers and, you know, do this in the style of van Gogh and et cetera. But the degree to which it does it so well now and so cohesively and in so many styles and with so much aesthetic beauty and oversight is really striking. And I think we're just hitting another one of those moments where people are like, "Wow, this can really do it for forms of animation and other things."
And all this is obviously in the context of, um, uh, ChatGPT and OpenAI and sort of the, the 4o models sort of incorporating a lot of this stuff directly in. So I, I think it's fantastic. We're gonna see another thing like this in another year, I think. (laughs) And then that, I think there'll be the very commercial versions of this, uh, which are already sort of happening, but look, we can use it for graphic design completely seamlessly versus it kind of works and we can use it for all these different use cases. And so I feel like we're doing the horizontal version of it, and soon we'll have the vertical versions all come out and obviously there's companies like Recraft and others working on the vertical versions directly, but I just view this as a super interesting evolution of the technology. So I, I think it's super exciting. What do you think?