How DeepSeek Shocked Silicon Valley & Crashed Nvidia | Pivot


Pivot · Jan 28, 2025 · 7m

Kara Swisher (host), Scott Galloway (host)

DeepSeek’s performance, cost advantages, and impact on AI competition
Market reaction: Nvidia, big tech, and energy-related stock volatility
Challenging the brute-force, GPU-heavy AI development model
Open source vs proprietary AI and Yann LeCun’s public arguments
Geopolitics, US export controls, and Chinese AI workarounds
Safety, guardrails, and differences among major AI model providers
Broader economic risk: potential tech correction and systemic exposure

In this episode of Pivot, Kara Swisher and Scott Galloway discuss how DeepSeek disrupts AI economics, rattles Nvidia, and tests US strategy.

DeepSeek Disrupts AI Economics, Rattles Nvidia, And Tests US Strategy

The episode examines how Chinese AI model DeepSeek, touted as cheaper and highly capable, has jolted Silicon Valley and Wall Street, triggering sharp drops in Nvidia and other tech and energy-related stocks. Scott Galloway and Kara Swisher discuss whether DeepSeek undermines the assumption that cutting-edge AI requires massive GPU spending and energy consumption. They explore the role of open source in enabling DeepSeek, with Meta’s Yann LeCun arguing that open models, not China per se, are surpassing proprietary ones. The conversation closes on broader implications: regulatory choices, free trade, market corrections, and whether this shock is a blip or the start of a deeper tech revaluation.

Key Takeaways

Cheaper, efficient AI models can upend capital-intensive AI strategies.

DeepSeek reportedly matches or beats top Western models while costing a fraction to train, questioning the assumption that AI dominance requires unlimited GPU and energy spending.


Markets were primed for a correction in AI-related stocks.

Nvidia and others shed huge market value on the DeepSeek news, but prices merely reverted to levels from a few months prior, suggesting froth rather than structural collapse—at least so far.


Open source is accelerating AI progress globally, including in China.

DeepSeek leveraged open research and open-source tools like PyTorch and LLaMA, illustrating LeCun’s point that open models can outperform proprietary ones and spread capabilities beyond US firms.


Export controls can motivate innovative workarounds, not just slow rivals.

By limiting Nvidia GPU sales to China, US policy may have pushed Chinese firms to find more efficient methods, arguably increasing long‑term competitive pressure on US tech.


The AI stack is likely to bifurcate into ‘Walmart’ and ‘Tiffany’ tiers.

Analysts foresee a cheap, good-enough layer of models (where DeepSeek might sit) and a high-end, high‑compute tier for the most advanced capabilities, rather than a single, monolithic market.


Inference costs and user willingness to pay are the real bottlenecks.

As LeCun notes, most spending is shifting to inference for billions of users; the sustainability of AI businesses hinges on whether users or customers will fund that ongoing compute burden.


AI safety and guardrail decisions vary significantly across providers.

The hosts contrast Anthropic’s stricter refusals with more permissive behavior from other models like LLaMA, highlighting a fragmented and potentially risky landscape for AI safety norms.


Notable Quotes

This workaround might tank the US economy.

Scott Galloway

DeepSeek has profited from open research and open source... Because their work is published and open source, everyone can profit from it.

Yann LeCun (quoted by Kara Swisher)

The correct reading is open-sourced models are surpassing proprietary ones.

Yann LeCun (quoted by Kara Swisher)

Everything eventually goes Walmart, Tiffany, right? And they’re saying this might be the Walmart, and it’s the Chinese.

Scott Galloway

If we had let them just buy NVIDIA GPUs, would they have figured out this workaround?

Scott Galloway

Questions Answered in This Episode

If DeepSeek’s low-cost approach scales, how should US AI companies rethink their capital allocation and GPU spending strategies?


Does the rise of powerful open-source models ultimately strengthen or weaken US national security in AI, and why?


How should regulators balance the innovation benefits of open source with the risks of models that can be downloaded with few or no guardrails?


To what extent did export controls on Nvidia chips accelerate Chinese innovation in algorithmic efficiency rather than slow it down?


Are current market valuations of AI and chip companies justified if cheaper, ‘good enough’ models become the norm for most applications?


Transcript Preview

Kara Swisher

There's a new AI model on the scene that's smart, cheap, and made in China. It's called DeepSeek, and it's causing a panic in Silicon Valley, which is paying a lot of attention, and also on Wall Street. DeepSeek has reportedly outperformed models from OpenAI, Meta, and Anthropic in some tests, and it operates at a fraction of the cost of those models using fewer high-end chips. These are the ones that are made by NVIDIA and are hard to get, and the incumbents have been pricing them up heavily, uh, by grabbing all of them. The markets are not reacting well to DeepSeek. As of this recording, NVIDIA is down 16%, Oracle is down 10%, Microsoft is down nearly 4%. Obviously, Meta is gonna be affected, all the others. So there's a lot to talk about, and I've seen different analysis of exactly what DeepSeek does. Yann LeCun from Meta was making an argument that it isn't as, what they're re- they're doing sort of a cheap and dirty version, and it's not nearly as... The stuff they're doing is much more advanced by the US companies. Uh, it's currently number one on Apple's, uh, free top apps chart. Uh, again, China invading i- in this country in a very different way. So thoughts on this situation? 'Cause you and I have talked about this quite a bit. Is this money ill-spent by US, uh, companies, and is it being relegated to the en- rich incumbents?

Scott Galloway

Well, first, you just have to temper the, or put some context to the... I mean, NVIDIA's down 15 or 16%. It's shed something like a half a trillion dollars, which basically, if you take out Tesla, it shed today the value of the entire global automobile industry sans Tesla. So this is pretty dramatic. But at the same time, that just takes it back to its valuation in October. And when you look at market dynamics, when these companies have experienced these type of run-ups, it is like a balloon inflating beyond its natural capacity, and the slightest, the slightest touch can pop it. And so in some ways, the market was probably looking for an excuse to take these stocks down a bit, and it got it. Because what's interesting is NVIDIA will have a pretty interesting argument on, on Capitol Hill saying, "When you refused to let us sell into these countries, they come up with workarounds." And in this case, this workaround might tank the (laughs) US economy. And everyone's excited about the fact that these models... OpenAI, supposedly, their models, their LLMs cost 100 million to train, and they're claiming this thing costs... and they've been public, it's open source, costs a little over five million to train. So whereas the majority of LLMs and, uh, AI companies have been taking sort of this brute force strategy where it's buy as many chips as possible, this is saying maybe you don't need as many chips. The thing I find equally interesting is the second-order effects here, and that is Constellation Energy and some of these nuclear stocks have skyrocketed because the choke point was supposed to be energy. But now with this, this model, which appears to have chips speaking to each other in a more efficient, less energy consumptive way, nuclear stocks are crashing. 
Y- uh, electric, Constellation Energy, all these things that have had incredible run-ups are saying, "Wait, the entire supply chain or the assumptions we made about the supply chain in terms of the, the kind of the brute force of chips that we're gonna need, the amount of energy," it's all now coming into a little bit of question. But to be clear, the correction here is like, it's taken them back three months. And all of the stocks that have crashed, quote-unquote crashed, are, are only up, you know, 70% for the year now, not 98. And a lot of, uh, analysts, the smart analysts I've read have said, "Like every community or every sector, it's gonna bifurcate into the cheap layer and then the high-end layer, which will still go hard at massive computing and massive energy and do more sophisticated things." And this will be sort of... You know, everything eventually goes Walmart, Tiffany, right? And they're saying this might be the Walmart, and it's the Chinese, and they'll come up with cheaper models. But I, it's fascinating to see that basically this notion, this, this kind of conventional wisdom that you would need massive GPUs and massive energy may not be, um, kind of the written in law-
