No Priors Ep. 100 | With Sarah and Elad
Sarah Guo on DeepSeek, OpenAI Stargate, and 2025: AI Frontier Tightens and Widens.
In this episode of No Priors, hosts Sarah Guo and Elad Gil mark their 100th episode with a discussion of DeepSeek, OpenAI's Stargate initiative, and predictions for 2025 and beyond.
At a glance
WHAT IT’S REALLY ABOUT
DeepSeek, OpenAI Stargate, and 2025: AI Frontier Tightens and Widens
- Sarah and Elad mark their 100th episode by unpacking DeepSeek’s impact, arguing it’s impressive but broadly on-trend with rapidly collapsing training and inference costs and a narrowing performance gap between major models. They discuss whether foundation models are commoditizing, what still matters about being at the frontier, and how synthetic data and reasoning models could level the playing field. They also review OpenAI’s Deep Research and the Stargate infrastructure initiative, raising concerns about knowledge reliability, propaganda risk, and the strategic role of massive compute. Finally, they offer 2025–27 predictions: consolidation in model providers, a surge in vertical AI apps and agents, progress in robotics and domain-specific science applications, and larger consumer AI experiments.
IDEAS WORTH REMEMBERING
7 ideas

Model performance is converging while costs are collapsing dramatically.
Benchmarks show major models clustering closer in capabilities, while training and inference costs for GPT‑4–class performance have fallen sharply—Elad cites an ~180x cost-per-token drop in 18 months—suggesting intense commoditization pressure at the base-model layer.
DeepSeek is impressive but not a true cost or capability shock.
Despite headlines about a $5.5–6M training run and NVIDIA’s stock drop, the hosts believe DeepSeek likely consumed hundreds of millions in prior experimentation and largely fits existing trends rather than resetting the economics of training from scratch.
Being at the frontier still matters for flywheels and market share.
Leading models can capture sticky user share, and—critically—can be used to generate synthetic data, label data, and build tools that accelerate the next generation of models, potentially creating a compounding advantage if capabilities keep bootstrapping.
Reasoning models and tools like Deep Research raise the bar for knowledge work.
Deep Research already rivals or surpasses median analyst-level work for many tasks, especially exploratory research in unfamiliar domains, meaning organizations should start benchmarking internal knowledge work against what these tools can produce.
AI will become a central gatekeeper of information, amplifying propaganda and censorship risk.
As LLMs blend search, social feeds, and media into one primary interface, whoever controls model outputs wields enormous narrative power; this makes multi-model competition and open source critically important from a civil-liberties standpoint.
Massive compute still has strategic value; labs will use all they can get.
Regardless of algorithmic efficiency gains, any serious AGI lab would gladly accept the largest available cluster (like Stargate) if capital were free, implying that large-scale pre-training will remain a key driver of frontier capabilities.
Near-term growth will favor vertical AI apps, agents, and specialized data strategies.
The hosts expect 2025 to see consolidation in generic models, expansion of vertical AI apps (legal, customer support, codegen, medical scribing), more agentic workflows, technical robotics milestones, and domain experts designing smarter data-generation pipelines in fields like biology and materials science.
WORDS WORTH SAVING
5 quotes

DeepSeek is one of those things which is both really important in some ways and then also kind of what you would expect would happen from a trend line perspective.
— Elad
In the last 18 months we saw a 180x decrease in cost per token for equivalent level models… the cost collapse on these things is already quite clear.
— Elad
If you have a high-quality enough base model to be doing synthetic data generation for a next generation of model, that is actually a big leveler.
— Sarah
It really does feel like a really dangerous thing from sort of a propaganda and censorship perspective… the ability to control the output of these things is extremely powerful but also very dangerous.
— Elad
One mistake that entrepreneurs and investors make… is you look at something and it’s not working and then you assume it’s not gonna work. But in AI you have to keep looking again and again because stuff can begin to work really quickly.
— Sarah
QUESTIONS ANSWERED IN THIS EPISODE
5 questions

If model capabilities and costs are converging, where will durable competitive advantage in AI actually come from—distribution, data, infrastructure, or something else?
How should companies redesign knowledge-work roles and org structures once tools like Deep Research become standard, rather than experimental, in day-to-day workflows?
What practical mechanisms could ensure AI systems do not become single points of failure—or control—for information, especially in authoritarian or highly polarized contexts?
To what extent can synthetic data meaningfully substitute for high-quality human or experimental data in domains like biology, materials science, and healthcare?
How should investors and founders price the uncertainty around scaling laws, reasoning breakthroughs, and robotics when planning multi-year AI infrastructure or product bets?