
Jonathan Ross: DeepSeek Special - How Should OpenAI and the US Government Respond | E1253
Harry Stebbings (host), Jonathan Ross (guest)
In this episode of The Twenty Minute VC, Harry Stebbings speaks with Jonathan Ross about how OpenAI and the US government should respond to DeepSeek's R1 release.
DeepSeek’s Sputnik Moment: Open Source AI, China, and Global Power
Jonathan Ross argues that China’s DeepSeek R1 release is a “Sputnik 2.0” moment, proving that frontier AI models are now effectively commoditized and can be trained far more cheaply using better data and clever architectures. He explains how DeepSeek leveraged distillation and mixture‑of‑experts (MoE) to match or approach Western model quality while spending relatively little on GPU training, likely using OpenAI itself as a data teacher.
This shift, he says, undercuts proprietary moats, pressures OpenAI and others to open source, and heightens geopolitical risks as Chinese models and data collection become strategically important to the CCP. Ross stresses that the real long‑term value will accrue to inference infrastructure, brand, distribution, and product quality, not raw model weights.
He believes NVIDIA and inference‑focused chipmakers will benefit from Jevons Paradox as cheaper, better models massively increase compute demand. At the same time, he warns of AI‑enabled cyber offense, the need for more sophisticated export controls, and urges the US and Europe to respond with aggressive investment, entrepreneurship, and clear strategic doctrine rather than complacency.
Key Takeaways
DeepSeek proves frontier‑level AI is no longer a Western monopoly.
R1 was reportedly trained on ~2,000 GPUs with a modest budget but achieved competitive performance by distilling from OpenAI and using clever MoE design, showing that top‑tier models don’t require tens of billions in training spend if you have high‑quality synthetic data and smart architecture.
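The mixture-of-experts idea mentioned above can be illustrated with a toy sketch: each token is routed to only the top-k of N expert networks, so per-token compute scales with k rather than with the total number of experts. This is a minimal illustration of the general technique, not DeepSeek's actual architecture; all names and sizes here are made up.

```python
import numpy as np

def moe_forward(x, experts, gate_w, k=2):
    """Route a token through the top-k of N experts (mixture-of-experts).

    Only k experts run per token, so compute cost grows with k,
    not with the total number of experts.
    """
    logits = x @ gate_w                      # router score per expert
    top = np.argsort(logits)[-k:]            # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts only
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy setup: 8 tiny "experts", each a linear map; only 2 run per token.
rng = np.random.default_rng(0)
experts = [(lambda W: (lambda x: x @ W))(rng.normal(size=(16, 16)))
           for _ in range(8)]
gate_w = rng.normal(size=(16, 8))
y = moe_forward(rng.normal(size=16), experts, gate_w, k=2)
print(y.shape)  # (16,)
```

In a real MoE transformer the experts are feed-forward blocks inside each layer, but the routing logic is the same shape as this sketch.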
Data quality and distillation trump sheer token volume.
Scaling laws assume uniform data quality; DeepSeek sidestepped data scarcity by scraping OpenAI outputs and using them as high‑quality training targets, similar to AlphaGo Zero’s self‑play—demonstrating that better data allows fewer tokens and cheaper training for similar or better capability.
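Distillation, as described above, trains a student model to match a teacher's full output distribution rather than one-hot labels, so each example carries much more signal. A minimal sketch of the standard distillation loss (KL divergence at a softening temperature T); the logit values are hypothetical:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between teacher and student output distributions.

    The temperature T softens both distributions so the teacher's
    relative preferences among wrong answers also get transferred.
    """
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

teacher = np.array([4.0, 1.0, 0.5])   # teacher's logits for one example
aligned = np.array([3.5, 1.2, 0.4])   # student close to the teacher
off     = np.array([0.2, 3.0, 1.0])   # student far from the teacher
print(distill_loss(aligned, teacher) < distill_loss(off, teacher))  # True
```

Scraping another model's API outputs and training on them, as DeepSeek is alleged to have done, amounts to distillation with hard-sampled teacher text rather than full logits, but the principle is the same.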
LLMs are commoditizing; durable moats will come from ‘seven powers’, not models.
Ross argues that models are now like Linux: swappable and low switching‑cost, so defensibility will rest on Hamilton Helmer’s powers—brand (OpenAI), network effects (Meta), scale economies, distribution, switching costs, and product craftsmanship—rather than who has the single ‘best’ model.
Open source will likely win, pressuring OpenAI and peers to open their weights.
Because open models attract developers, scrutiny, and distribution, Ross believes OpenAI will eventually be forced to open source its leading models to retain users and goodwill, even if it seems to cannibalize short‑term API revenue.
Inference will dwarf training in economic importance and GPU demand.
Drawing on his Google experience and Jevons Paradox, Ross expects 10–20x more spend on inference than training over time; cheaper, better models massively expand use cases, making NVIDIA and inference‑specialized chips more valuable, not less, after DeepSeek‑style efficiency gains.
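The Jevons Paradox argument above is a back-of-envelope calculation: if per-token cost falls and demand for tokens is elastic (elasticity greater than 1), total inference spend rises rather than falls. The elasticity value below is a hypothetical illustration, not a measurement from the episode:

```python
def total_spend(cost_per_token, base_cost=1.0, base_tokens=1.0, elasticity=1.5):
    """Constant-elasticity demand: token volume ~ cost^(-elasticity).

    With elasticity > 1, a price drop increases volume faster than
    the price falls, so total spend goes up (Jevons Paradox).
    """
    tokens = base_tokens * (cost_per_token / base_cost) ** (-elasticity)
    return cost_per_token * tokens

before = total_spend(1.0)   # spend at the old per-token price
after = total_spend(0.1)    # per-token price falls 10x...
print(after / before)       # ...total spend rises ~3.16x (10 ** 0.5)
```

With elasticity below 1 the same formula shows spend shrinking, which is the crux of the disagreement between the "DeepSeek kills GPU demand" and "DeepSeek grows GPU demand" camps.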
Chinese AI services pose real data‑sovereignty and narrative‑shaping risks.
Any Chinese company is ultimately subject to CCP demands for data and content control; a popular open‑source model like DeepSeek can be turned into a global data vacuum or persuasion engine (e.g. …)
Startups and investors must pivot from ‘foundation model’ bets to product and distribution.
With models commoditized and switching costs near zero, Ross advises foundation‑model companies to rapidly pivot toward differentiated products, UX, domain verticals, and infrastructure moats, citing Suno and Perplexity as examples of focusing on product experience rather than raw model IP.
Notable Quotes
“Yes. It is Sputnik. It is Sputnik 2.0.”
— Jonathan Ross
“Open always wins. Always.”
— Jonathan Ross
“The biggest problem is this has just made it absolutely nakedly clear that the models are commoditized.”
— Jonathan Ross
“Training is where you create the model, inference is where you use the model.”
— Jonathan Ross
“I would love nothing more than to compete directly with Chinese companies on a fair footing… But when the government keeps putting its thumb on the scale, now there’s no avoiding it.”
— Jonathan Ross
Questions Answered in This Episode
If models are commoditizing this quickly, what specific moats should AI startups prioritize building over the next 12–24 months?
How should Western governments redesign export controls and cloud access rules to meaningfully constrain adversarial AI development without stifling domestic innovation?
What is the most realistic path for OpenAI to open source its models while preserving a strong business model and avoiding a perception of panic reaction to DeepSeek?
How can users and enterprises practically assess and mitigate the data‑sovereignty risks of using Chinese‑origin AI tools like DeepSeek?
In a world where LLMs can discover zero‑day exploits autonomously, what new defensive architectures or norms are needed to prevent an unmanageable escalation of AI‑driven cyber conflict?
Transcript Preview
So everyone's seen the news about DeepSeek today. Is it as big a deal as everyone is making of it?
Yes. It is Sputnik 2.0. It is true that they spent about $6 million, or whatever it was, on the training. They spent a lot more distilling or scraping the OpenAI model. I can't speak for Sam Altman or OpenAI, but if I was in that position, I would be gearing up to open source my models in response because it's pretty clear you're gonna lose that, so you might as well try and win all the users and the love from open sourcing. Open always wins. Always.
Ready to go? Jonathan, dude, I am so excited for this. So I've heard so many good things from so many different people, so thank you so much for doing this emergency show with me today.
No problem. But before we start, can I, can I just say one thing?
Sure.
Um, I think you have the most amazing, unique go-to-market that I've ever seen in my life for a podcast. I've never seen this before. I think your strategy is you're literally interviewing every single audience member, forcing them to watch videos and get addicted to you.
(laughs) I mean, I thought you were gonna say my accent, but, uh, I'm totally gonna take that. That's wonderful. Um, and yes, you're absolutely right. Uh, sometimes the biggest benefits of your business, you don't actually see until you do them. Um, but I wanna-
And, and ... do things at scale.
That is totally true. Um, but I do wanna start. Um, obviously everyone's just talking about DeepSeek.
Mm-hmm.
Little bit of context, why are you so well placed to speak about DeepSeek? And let's just start there for some context.
Well, my background, so I started the Google TPU, the AI chip that Google uses, and in 2016 started an AI chip, um, startup called Groq, with a Q, not with a K, um, that builds, uh, AI accelerator chips, which we call LPUs.
Fantastic. Wonderful. I wish everyone was as coherent as you in terms of their introductions. Okay, so everyone's seen the news about DeepSeek today. I wanna just start off by saying, is it as big a deal as everyone is making of it?
Yes. It's Sputnik. It is Sputnik 2.0. And, um, even more so, y- you know that, uh, story about how NASA spent a million dollars designing a pen that could write in space, and the Russians brought a pencil? Um, that just happened again. So it's, it's a huge deal. Yeah.
Okay. Why is it such a huge deal? Let's unpack that.
All right. So up until recently, the Chinese models have been behind, um, sort of Western models. And I say Western including, like, Mistral as well and, and some other companies. And it was, um, largely focused on how much compute you could get. Most people actually, m- most don't realize this, most companies have access to roughly the same amount of data. They buy them from the same data providers, and then they just churn through that data with a GPU, and they produce a model, and then they deploy it. And they'll have some of their own data, and that'll make them subtly better at one thing or another, but they're largely all the same. And the more GPUs, the better the model 'cause you can train on more tokens. It's the scaling law. Uh, this model was, uh, supposedly trained on a smaller number of GPUs, um, and a much, much tighter budget. I think the way that it's been put is less than the salary of many of the executives at Meta, and that's not true. It's, it's actually an el- el- there's an element of marketing, uh, involved in the DeepSeek release.