
No Priors Ep. 100 | With Sarah and Elad
Sarah Guo (host), Elad Gil (host), Narrator
DeepSeek, OpenAI Stargate, and 2025: AI Frontier Tightens and Widens
Sarah and Elad mark their 100th episode by unpacking DeepSeek’s impact, arguing it’s impressive but broadly on-trend with rapidly collapsing training and inference costs and a narrowing performance gap between major models. They discuss whether foundation models are commoditizing, what still matters about being at the frontier, and how synthetic data and reasoning models could level the playing field. They also review OpenAI’s Deep Research and the Stargate infrastructure initiative, raising concerns about knowledge reliability, propaganda risk, and the strategic role of massive compute. Finally, they offer 2025–27 predictions: consolidation in model providers, a surge in vertical AI apps and agents, progress in robotics and domain-specific science applications, and larger consumer AI experiments.
Key Takeaways
Model performance is converging while costs are collapsing dramatically.
Benchmarks show major models clustering closer in capabilities, while training and inference costs for GPT‑4–class performance have fallen sharply—Elad cites an ~180x cost-per-token drop in 18 months—suggesting intense commoditization pressure at the base-model layer.
DeepSeek is impressive but not a true cost or capability shock.
Despite headlines about a $5.5 million training run, a final run in the $5–10 million range is considered typical for a model of this class; the full spend, including the work leading up to it, likely reached hundreds of millions of dollars, so DeepSeek is better read as on-trend than as a shock.
Being at the frontier still matters for flywheels and market share.
Leading models can capture sticky user share, and—critically—can be used to generate synthetic data, label data, and build tools that accelerate the next generation of models, potentially creating a compounding advantage if capabilities keep bootstrapping.
Reasoning models and tools like Deep Research raise the bar for knowledge work.
Deep Research already rivals or surpasses median analyst-level work for many tasks, especially exploratory research in unfamiliar domains, meaning organizations should start benchmarking internal knowledge work against what these tools can produce.
AI will become a central gatekeeper of information, amplifying propaganda and censorship risk.
As LLMs blend search, social feeds, and media into one primary interface, whoever controls model outputs wields enormous narrative power; this makes multi-model competition and open source critically important from a civil-liberties standpoint.
Massive compute still has strategic value; labs will use all they can get.
Regardless of algorithmic efficiency gains, any serious AGI lab would gladly accept the largest available cluster (like Stargate) if capital were free, implying that large-scale pre-training will remain a key driver of frontier capabilities.
Near-term growth will favor vertical AI apps, agents, and specialized data strategies.
The hosts expect 2025 to see consolidation in generic models, expansion of vertical AI apps (legal, customer support, codegen, medical scribing), more agentic workflows, technical robotics milestones, and domain experts designing smarter data-generation pipelines in fields like biology and materials science.
Notable Quotes
“DeepSeek is one of those things which is both really important in some ways and then also kind of what you would expect would happen from a trend line perspective.”
— Elad
“In the last 18 months we saw a 180x decrease in cost per token for equivalent level models… the cost collapse on these things is already quite clear.”
— Elad
“If you have a high-quality enough base model to be doing synthetic data generation for a next generation of model, that is actually a big leveler.”
— Sarah
“It really does feel like a really dangerous thing from sort of a propaganda and censorship perspective… the ability to control the output of these things is extremely powerful but also very dangerous.”
— Elad
“One mistake that entrepreneurs and investors make… is you look at something and it’s not working and then you assume it’s not gonna work. But in AI you have to keep looking again and again because stuff can begin to work really quickly.”
— Sarah
Questions Answered in This Episode
If model capabilities and costs are converging, where will durable competitive advantage in AI actually come from—distribution, data, infrastructure, or something else?
How should companies redesign knowledge-work roles and org structures once tools like Deep Research become standard, rather than experimental, in day-to-day workflows?
What practical mechanisms could ensure AI systems do not become single points of failure—or control—for information, especially in authoritarian or highly polarized contexts?
To what extent can synthetic data meaningfully substitute for high-quality human or experimental data in domains like biology, materials science, and healthcare?
How should investors and founders price the uncertainty around scaling laws, reasoning breakthroughs, and robotics when planning multi-year AI infrastructure or product bets?
Transcript Preview
(music plays) Hey, listeners. Welcome back to No Priors. This episode marks a special milestone. Today is our 100th show. Thank you so much for tuning in each week with me and Elad. Uh, and it's been an exciting last couple of weeks in AI, so we have lots to talk about. Why don't we start with the news of the hour, or, you know, really the last month at this point, and, um, DeepSeek. Uh, Elad, what's your overall reaction?
DeepSeek is one of those things which is both, um, really important in some ways and then also kind of what you would expect would happen from a trend line perspective. And I think there was a lot of interest around DeepSeek for sort of three reasons. Uh, number one, it was a state-of-the-art Chinese model that, um, seemed to have really caught up with a number of things on the reasoning side and in other areas relative to some of the Western models. And it was open source. Number two, there was a claim that it was done very cheaply, so I think the paper talked about like a, a $5.5 million run as sort of the end. And then lastly, I think there's this broader narrative of who's really behind it and what's going on and some, some perception of mystery which may or may not be real. And as you kind of walk through each one of those things, I think on the first one, you know, state-of-the-art open source model with some reasoning capabilities built in, they actually did some really nice work. If you read through the paper, there's some novel techniques in RL that they, uh, they worked on and that, you know, I know some other labs are starting to adopt. I think some other labs have also come up with some similar things over time, but I think it was clear they'd done some real work there. Um, on the cost side, everybody that, that I've at least talked to who's, um, savvy to it basically views every sort of final run for a model of this type to roughly be in that kind of dollar range. You know, five to $10 million, something like that. And really, the question is how much work went in behind that before they distilled down this, uh, smaller model. And my sense is everybody thinks that they were spending hundreds of millions of dollars on
(laughs)
... uh, leading up to this. And so from that perspective, it wasn't really novel, and I think that sort of 20% drop in NVIDIA stock and everything else that happened as news of this model spread was a bit unwarranted. And then the last one, which is sort of speculation of what's going on, is it really a hedge fund, is something else happening, like, you know, uh, felt a little bit, um, well, speculative (laughs). There's all sorts of reasons that it is exactly what they say it is, and then there's some circumstances in which you could interpret things more broadly. So that's kind of my read on it. I mean, what do you think?