DeepSeek, OpenAI Stargate, and 2025: AI Frontier Tightens and Widens

At a glance

WHAT IT’S REALLY ABOUT
- Sarah and Elad mark their 100th episode by unpacking DeepSeek’s impact, arguing it’s impressive but broadly on-trend with rapidly collapsing training and inference costs and a narrowing performance gap between major models. They discuss whether foundation models are commoditizing, what still matters about being at the frontier, and how synthetic data and reasoning models could level the playing field. They also review OpenAI’s Deep Research and the Stargate infrastructure initiative, raising concerns about knowledge reliability, propaganda risk, and the strategic role of massive compute. Finally, they offer 2025–27 predictions: consolidation in model providers, a surge in vertical AI apps and agents, progress in robotics and domain-specific science applications, and larger consumer AI experiments.
IDEAS WORTH REMEMBERING
Model performance is converging while costs are collapsing dramatically.
Benchmarks show major models clustering closer in capabilities, while training and inference costs for GPT‑4–class performance have fallen sharply—Elad cites an ~180x cost-per-token drop in 18 months—suggesting intense commoditization pressure at the base-model layer.
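For a sense of what that figure implies, here is a quick back-of-the-envelope sketch; the 180x and 18-month numbers come from Elad's remark, while the constant-monthly-rate assumption is purely illustrative:

```python
import math

# Elad's figure: ~180x drop in cost per token over 18 months.
total_drop = 180.0
months = 18

# Assume a constant monthly rate of decline (a simplifying assumption, not a claim from the episode).
monthly_factor = total_drop ** (1 / months)               # tokens get ~1.33x cheaper each month
monthly_decline_pct = (1 - 1 / monthly_factor) * 100      # ~25% price cut per month
halving_months = math.log(2) / math.log(monthly_factor)   # cost halves roughly every 2.4 months

print(f"Implied monthly cost factor: {monthly_factor:.2f}x")
print(f"Implied monthly price decline: {monthly_decline_pct:.0f}%")
print(f"Implied halving time: {halving_months:.1f} months")
```

Under that assumption, token prices for a given capability level halve every couple of months, which is the scale of deflation behind the commoditization argument.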
DeepSeek is impressive but not a true cost or capability shock.
Despite headlines about a $5.5–6M training run and NVIDIA’s stock drop, the hosts believe DeepSeek likely spent hundreds of millions of dollars on prior experimentation, and that its results largely fit existing cost trends rather than resetting the economics of training from scratch.
Being at the frontier still matters for flywheels and market share.
Leading models can capture sticky user share, and—critically—can be used to generate synthetic data, label data, and build tools that accelerate the next generation of models, potentially creating a compounding advantage if capabilities keep bootstrapping.
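To make the synthetic-data flywheel concrete, here is a minimal sketch of the loop described above; `frontier_model` and `train` are hypothetical stand-ins for illustration, not any real API:

```python
# Minimal sketch of a synthetic-data bootstrapping loop. Both functions are
# hypothetical placeholders, not a real model or training interface.

def frontier_model(prompt: str) -> str:
    """Stand-in for a completion call to a strong current-generation model."""
    return "high-quality answer to: " + prompt

def train(dataset: list[tuple[str, str]]) -> None:
    """Stand-in for a training run over (prompt, answer) pairs."""
    print(f"training next-gen model on {len(dataset)} synthetic examples")

seed_prompts = ["explain transformers", "summarize this contract", "plan a trip"]

# Flywheel step: the better the current model, the better the synthetic labels,
# which in turn should yield a stronger next-generation model.
synthetic_data = [(p, frontier_model(p)) for p in seed_prompts]
train(synthetic_data)
```

The compounding argument rests on label quality tracking the current model's quality: whoever holds the frontier generates the best training data for the next generation.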
Reasoning models and tools like Deep Research raise the bar for knowledge work.
Deep Research already rivals or surpasses median analyst-level work for many tasks, especially exploratory research in unfamiliar domains, meaning organizations should start benchmarking internal knowledge work against what these tools can produce.
AI will become a central gatekeeper of information, amplifying propaganda and censorship risk.
As LLMs blend search, social feeds, and media into one primary interface, whoever controls model outputs wields enormous narrative power; this makes multi-model competition and open source critically important from a civil-liberties standpoint.
WORDS WORTH SAVING
DeepSeek is one of those things which is both really important in some ways and then also kind of what you would expect would happen from a trend line perspective.
— Elad
In the last 18 months we saw a 180x decrease in cost per token for equivalent level models… the cost collapse on these things is already quite clear.
— Elad
If you have a high-quality enough base model to be doing synthetic data generation for a next generation of model, that is actually a big leveler.
— Sarah
It really does feel like a really dangerous thing from sort of a propaganda and censorship perspective… the ability to control the output of these things is extremely powerful but also very dangerous.
— Elad
One mistake that entrepreneurs and investors make… is you look at something and it’s not working and then you assume it’s not gonna work. But in AI you have to keep looking again and again because stuff can begin to work really quickly.
— Sarah