Jensen Huang on the Dwarkesh Podcast: Why CoWoS Is Nvidia's Moat
CoWoS and HBM commitments placed years in advance lock up supply before rivals can react; no challenger has posted InferenceMAX results matching Nvidia's tokens per watt.
Episode Details
EPISODE INFO
- Released
- April 15, 2026
- Duration
- 1h 43m
- Channel
- Dwarkesh Podcast
EPISODE DESCRIPTION
I asked Jensen about TPU competition, Nvidia's lock on the ever more bottlenecked supply chain needed to make advanced chips, whether we should be selling AI chips to China, why Nvidia doesn't just become a hyperscaler, how it makes its investments, and much more. Enjoy!
EPISODE LINKS
- Transcript: https://www.dwarkesh.com/p/jensen-huang
- Apple Podcasts: https://podcasts.apple.com/us/podcast/jensen-huang-tpu-competition-why-we-should-sell-chips/id1516093381?i=1000761582962
- Spotify: https://open.spotify.com/episode/1viBRy6dQdlSw0OdFvogXB?si=bc2cdbd467ed4ee3
SPONSORS
• Crusoe's cloud runs on state-of-the-art Blackwell GPUs, with Vera Rubin deployment scheduled for later this year. But hardware is only part of the story—for inference, Crusoe's MemoryAlloy tech implements a cluster-wide KV cache, delivering up to 10x faster TTFT and 5x better throughput than vLLM. Learn more at https://crusoe.ai/dwarkesh
• Cursor helped me build an AI co-researcher over the course of a weekend. Now I have an AI agent that I can collaborate with in Google Docs via inline comment threads! And while other agentic coding tools feel like a total black-box, Cursor let me stay on top of the full implementation. You can try my co-researcher out at https://github.com/dwarkeshsp/ai_coworker, or get started on your own Cursor project today at https://cursor.com/dwarkesh
• Jane Street spent ~20,000 GPU hours training backdoors into 3 different language models, then challenged my audience to find the triggers. They received some clever solutions, like comparing the base and fine-tuned versions and extrapolating any differences to reveal the hidden backdoor, but no one was able to solve all 3. So if open problems like this excite you, Jane Street is hiring. Learn more at https://janestreet.com/dwarkesh
To sponsor a future episode, visit https://dwarkesh.com/advertise.
TIMESTAMPS
00:00:00 – Is Nvidia's biggest moat its grip on scarce supply chains?
00:16:25 – Will TPUs break Nvidia's hold on AI compute?
00:41:06 – Why doesn't Nvidia become a hyperscaler?
00:57:36 – Should we be selling AI chips to China?
01:35:06 – Why doesn't Nvidia make multiple different chip architectures?
SPEAKERS
Dwarkesh Patel
Host of the Dwarkesh Podcast, known for long-form interviews on AI, technology, and geopolitics.
Jensen Huang
CEO of Nvidia, leading the company's GPU and AI computing platform strategy.
EPISODE SUMMARY
In this episode of the Dwarkesh Podcast, Dwarkesh Patel interviews Jensen Huang on Nvidia's moats, TPUs, clouds, and China policy. Huang argues that Nvidia's core value is "electrons to tokens" conversion, where deep co-design across chips, systems, networking, and software makes commoditization unlikely.