The Twenty Minute VC

Jonathan Ross: DeepSeek Special - How Should OpenAI and the US Government Respond | E1253

Jonathan Ross is the Co-Founder and CEO of Groq, providing fast AI inference. Prior to founding Groq, Jonathan started Google’s TPU effort, where he designed and implemented the core elements of the original chip. Jonathan then joined Google X’s Rapid Eval Team, the initial stage of the famed “Moonshots factory,” where he devised and incubated new Bets (units) for Alphabet.

Timestamps:
(00:00) Intro
(02:01) Is DeepSeek News as Big a Deal as It Seems?
(04:33) Distillation & DeepSeek's Use of OpenAI Data
(07:18) Scraping OpenAI Models for Higher Quality Output
(11:40) Concerns About US Customer Data Going to China
(13:08) DeepSeek and Its Potential Use by the CCP
(23:13) Is DeepSeek Diminishing OpenAI's Distribution Advantage?
(33:07) Advising the EU on Europe's Stance Today
(34:48) Perplexity in 3 Years
(37:22) Commoditization of Models & Big Tech's Stock Struggles
(41:01) Nvidia's High Margins and the Strength of Their Moat
(42:55) The Future of Efficiency After Nvidia's Success
(49:50) The $500BN Stargate Project
(54:01) Excitement or Nerves in the AI Arms Race?
(57:54) Where Does Value Accrue in Wrapper Apps & Foundation Models?

The 10 Most Important Questions on DeepSeek:
- How did DeepSeek innovate in a way that no other model provider has done?
- Do we believe that they only spent $6M to train R1?
- Should we doubt their claims on limited H100 usage? Is Josh Kushner right that this is a potential violation of US export laws?
- Is DeepSeek an instrument used by the CCP to acquire US consumer data?
- How does DeepSeek being open-source change the nature of this discussion?
- What should OpenAI do now? What should they not do?
- Does DeepSeek hurt or help Meta, who already have their open-source efforts with Llama?
- Will this market follow Satya Nadella’s suggestion of Jevons Paradox?
- How much more efficient will foundation models become?
- What does this mean for the $500BN Stargate project announced last week?

Harry Stebbings (Host) · Jonathan Ross (Guest)
Jan 28, 2025 · 1h 0m · Watch on YouTube

At a glance

WHAT IT’S REALLY ABOUT

DeepSeek’s Sputnik Moment: Open Source AI, China, and Global Power

  1. Jonathan Ross argues that China’s DeepSeek R1 release is a “Sputnik 2.0” moment, proving that frontier AI models are now effectively commoditized and can be trained far more cheaply using better data and clever architectures. He explains how DeepSeek leveraged distillation and mixture‑of‑experts (MoE) to match or approach Western model quality while spending relatively little on GPU training, likely using OpenAI itself as a data teacher.
  2. This shift, he says, undercuts proprietary moats, pressures OpenAI and others to open source, and heightens geopolitical risks as Chinese models and data collection become strategically important to the CCP. Ross stresses that the real long‑term value will accrue to inference infrastructure, brand, distribution, and product quality, not raw model weights.
  3. He believes NVIDIA and inference‑focused chipmakers will benefit from Jevons Paradox as cheaper, better models massively increase compute demand. At the same time, he warns of AI‑enabled cyber offense, the need for more sophisticated export controls, and urges the US and Europe to respond with aggressive investment, entrepreneurship, and clear strategic doctrine rather than complacency.

IDEAS WORTH REMEMBERING

5 ideas

DeepSeek proves frontier‑level AI is no longer a Western monopoly.

R1 was reportedly trained on ~2,000 GPUs with a modest budget but achieved competitive performance by distilling from OpenAI and using clever MoE design, showing that top‑tier models don’t require tens of billions in training spend if you have high‑quality synthetic data and smart architecture.
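The mixture-of-experts idea referenced above can be illustrated with a minimal sketch (toy sizes and names, not DeepSeek's actual architecture): a gating network routes each input to a small subset of expert sub-networks, so only a fraction of the model's total parameters are active for any given token.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer: 4 experts, route each input to the top-1 expert.
# Real MoE layers use learned gates, top-k routing, and load-balancing losses;
# this only demonstrates the routing principle.
n_experts, d_in, d_out = 4, 8, 8
experts = [rng.standard_normal((d_in, d_out)) for _ in range(n_experts)]
gate = rng.standard_normal((d_in, n_experts))

def moe_forward(x):
    scores = x @ gate                      # gating score per expert
    probs = np.exp(scores - scores.max())  # softmax over experts
    probs /= probs.sum()
    top = int(np.argmax(probs))            # top-1 routing: one expert active
    return probs[top] * (x @ experts[top]), top

x = rng.standard_normal(d_in)
y, chosen = moe_forward(x)
print(f"routed to expert {chosen}; output shape {y.shape}")
```

Only one of the four expert weight matrices is multiplied per input, which is why MoE models can carry large total parameter counts while keeping per-token compute, and hence training cost, comparatively low.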

Data quality and distillation trump sheer token volume.

Scaling laws assume uniform data quality; DeepSeek sidestepped data scarcity by scraping OpenAI outputs and using them as high‑quality training targets, similar to AlphaGo Zero’s self‑play—demonstrating that better data allows fewer tokens and cheaper training for similar or better capability.
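The distillation pattern described here is, at its core, ordinary supervised learning where the targets are a stronger teacher model's output distributions rather than human-written text. A minimal sketch (toy linear models, not DeepSeek's training code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy distillation: a linear "student" learns to match a fixed "teacher"
# distribution over a small vocabulary by gradient descent on KL divergence.
vocab, d = 5, 4
teacher_w = rng.standard_normal((d, vocab))   # stand-in for a strong model
student_w = np.zeros((d, vocab))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
for _ in range(2000):
    x = rng.standard_normal(d)
    p_teacher = softmax(x @ teacher_w)        # "high-quality" soft targets
    p_student = softmax(x @ student_w)
    # Gradient of KL(teacher || student) w.r.t. student logits is (p_s - p_t).
    student_w -= lr * np.outer(x, p_student - p_teacher)

x = rng.standard_normal(d)
p_t, p_s = softmax(x @ teacher_w), softmax(x @ student_w)
kl = float(np.sum(p_t * np.log(p_t / p_s)))
print(f"KL(teacher || student) on a fresh input: {kl:.4f}")
```

Soft targets carry much more signal per example than raw text tokens, which is the intuition behind "better data allows fewer tokens": each teacher output encodes a full probability distribution, not just one correct next word.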

LLMs are commoditizing; durable moats will come from ‘seven powers’, not models.

Ross argues that models are now like Linux: swappable and low switching‑cost, so defensibility will rest on Hamilton Helmer’s powers—brand (OpenAI), network effects (Meta), scale economies, distribution, switching costs, and product craftsmanship—rather than who has the single ‘best’ model.

Open source will likely win, pressuring OpenAI and peers to open their weights.

Because open models attract developers, scrutiny, and distribution, Ross believes OpenAI will eventually be forced to open source its leading models to retain users and goodwill, even if it seems to cannibalize short‑term API revenue.

Inference will dwarf training in economic importance and GPU demand.

Drawing on Google experience and Jevons Paradox, Ross expects 10–20x more spend on inference than training over time; cheaper, better models massively expand use cases, making NVIDIA and inference‑specialized chips more valuable, not less, after DeepSeek‑style efficiency gains.

WORDS WORTH SAVING

5 quotes

“Yes. It is Sputnik. It is Sputnik 2.0.”

Jonathan Ross

“Open always wins. Always.”

Jonathan Ross

“The biggest problem is this has just made it absolutely nakedly clear that the models are commoditized.”

Jonathan Ross

“Training is where you create the model, inference is where you use the model.”

Jonathan Ross

“I would love nothing more than to compete directly with Chinese companies on a fair footing… But when the government keeps putting its thumb on the scale, now there’s no avoiding it.”

Jonathan Ross

- Why DeepSeek R1 is a geopolitical ‘Sputnik 2.0’ moment
- Distillation, data quality, and mixture‑of‑experts (MoE) efficiency
- Commoditization of LLMs and implications for OpenAI, Meta, and others
- China, CCP influence, data security, and export‑control loopholes
- Where economic value accrues: training vs inference, chips, and Jevons Paradox
- Strategic options for the US, EU, and big tech in an AI arms race
- Future of AI products, hallucination reduction, and where startups should pivot

High quality AI-generated summary created from speaker-labeled transcript.
