The Twenty Minute VC | Jonathan Ross: DeepSeek Special - How Should OpenAI and the US Government Respond | E1253
At a glance
WHAT IT’S REALLY ABOUT
DeepSeek’s Sputnik Moment: Open Source AI, China, and Global Power
- Jonathan Ross argues that China’s DeepSeek R1 release is a “Sputnik 2.0” moment, proving that frontier AI models are now effectively commoditized and can be trained far more cheaply using better data and clever architectures. He explains how DeepSeek leveraged distillation and mixture‑of‑experts (MoE) to match or approach Western model quality while spending relatively little on GPU training, likely using OpenAI itself as a data teacher.
- This shift, he says, undercuts proprietary moats, pressures OpenAI and others to open source, and heightens geopolitical risks as Chinese models and data collection become strategically important to the CCP. Ross stresses that the real long‑term value will accrue to inference infrastructure, brand, distribution, and product quality, not raw model weights.
- He believes NVIDIA and inference‑focused chipmakers will benefit from Jevons Paradox as cheaper, better models massively increase compute demand. At the same time, he warns of AI‑enabled cyber offense and the need for more sophisticated export controls, and urges the US and Europe to respond with aggressive investment, entrepreneurship, and clear strategic doctrine rather than complacency.
IDEAS WORTH REMEMBERING
5 ideas
DeepSeek proves frontier‑level AI is no longer a Western monopoly.
R1 was reportedly trained on ~2,000 GPUs with a modest budget but achieved competitive performance by distilling from OpenAI and using a clever MoE design, showing that top‑tier models don’t require tens of billions of dollars in training spend if you have high‑quality synthetic data and a smart architecture.
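As an illustration of the distillation mechanism Ross describes, here is a minimal sketch of the classic soft‑label distillation loss (Hinton et al., 2015); the function name and temperature are illustrative choices, and this is a generic textbook example, not DeepSeek’s actual training code:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between the teacher's softened output distribution
    and the student's, so the student learns to mimic the teacher."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # kl_div expects log-probabilities for the input and probabilities
    # for the target; the t**2 factor keeps gradient magnitudes
    # comparable across temperature settings.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t ** 2)
```

In DeepSeek’s reported setup the teacher signal would be sampled text from a stronger model rather than raw logits, but the principle is the same: the student trains against higher‑quality targets than raw web data. MoE adds a second efficiency lever by routing each token to a few specialist sub‑networks, so only a fraction of the parameters is active per token.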
Data quality and distillation trump sheer token volume.
Scaling laws assume uniform data quality; DeepSeek sidestepped data scarcity by scraping OpenAI outputs and using them as high‑quality training targets, much as AlphaGo Zero generated its own superior training data via self‑play. Better data allows fewer tokens and cheaper training for similar or better capability.
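For reference, the scaling laws Ross alludes to are usually written in the Chinchilla form (Hoffmann et al., 2022), in which loss depends only on parameter count N and token count D, with no term for data quality:

```latex
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Because D counts tokens but not their quality, higher‑quality (e.g., distilled) data can beat the fitted curve; that gap is the loophole Ross says DeepSeek exploited.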
LLMs are commoditizing; durable moats will come from ‘seven powers’, not models.
Ross argues that models are now like Linux: swappable and low switching‑cost, so defensibility will rest on Hamilton Helmer’s powers (brand for OpenAI, network effects for Meta, scale economies, distribution, switching costs, and product craftsmanship) rather than on who has the single ‘best’ model.
Open source will likely win, pressuring OpenAI and peers to open their weights.
Because open models attract developers, scrutiny, and distribution, Ross believes OpenAI will eventually be forced to open source its leading models to retain users and goodwill, even if doing so cannibalizes short‑term API revenue.
Inference will dwarf training in economic importance and GPU demand.
Drawing on his experience at Google and on Jevons Paradox, Ross expects 10–20x more spend on inference than on training over time; cheaper, better models massively expand use cases, making NVIDIA and inference‑specialized chips more valuable, not less, after DeepSeek‑style efficiency gains.
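To make the Jevons Paradox arithmetic concrete, a toy calculation with purely illustrative numbers (not figures from the episode):

```python
# Toy Jevons Paradox arithmetic: a 10x drop in cost per token that
# unlocks 50x more usage still grows total inference spend 5x.
cost_per_token_before = 1.0            # arbitrary units
tokens_served_before = 1_000_000

cost_per_token_after = cost_per_token_before / 10   # 10x efficiency gain
tokens_served_after = tokens_served_before * 50     # demand expansion

spend_before = cost_per_token_before * tokens_served_before
spend_after = cost_per_token_after * tokens_served_after
print(spend_after / spend_before)      # 5.0 -- cheaper compute, bigger market
```

The mechanism only holds if demand is elastic enough, which is exactly Ross’s claim: cheaper, better models unlock use cases that were previously uneconomical.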
WORDS WORTH SAVING
5 quotes
“Yes. It is Sputnik. It is Sputnik 2.0.”
— Jonathan Ross
“Open always wins. Always.”
— Jonathan Ross
“The biggest problem is this has just made it absolutely nakedly clear that the models are commoditized.”
— Jonathan Ross
“Training is where you create the model, inference is where you use the model.”
— Jonathan Ross
“I would love nothing more than to compete directly with Chinese companies on a fair footing… But when the government keeps putting its thumb on the scale, now there’s no avoiding it.”
— Jonathan Ross