Lex Fridman Podcast

DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters | Lex Fridman Podcast #459

Dylan Patel is the founder of SemiAnalysis, a research & analysis company specializing in semiconductors, GPUs, CPUs, and AI hardware. Nathan Lambert is a research scientist at the Allen Institute for AI (Ai2) and the author of Interconnects, a blog on AI.

Transcript: https://lexfridman.com/deepseek-dylan-patel-nathan-lambert-transcript

EPISODE LINKS:
Dylan's X: https://x.com/dylan522p
SemiAnalysis: https://semianalysis.com/
Nathan's X: https://x.com/natolambert
Nathan's Blog: https://www.interconnects.ai/
Nathan's Podcast: https://www.interconnects.ai/podcast
Nathan's Website: https://www.natolambert.com/
Nathan's YouTube: https://youtube.com/@natolambert
Nathan's Book: https://rlhfbook.com/

OUTLINE:
0:00 - Introduction
3:33 - DeepSeek-R1 and DeepSeek-V3
25:07 - Low cost of training
51:25 - DeepSeek compute cluster
58:57 - Export controls on GPUs to China
1:09:16 - AGI timeline
1:18:41 - China's manufacturing capacity
1:26:36 - Cold war with China
1:31:05 - TSMC and Taiwan
1:54:44 - Best GPUs for AI
2:09:36 - Why DeepSeek is so cheap
2:22:55 - Espionage
2:31:57 - Censorship
2:44:52 - Andrej Karpathy and magic of RL
2:55:23 - OpenAI o3-mini vs DeepSeek r1
3:14:31 - NVIDIA
3:18:58 - GPU smuggling
3:25:36 - DeepSeek training on OpenAI data
3:36:04 - AI megaclusters
4:11:26 - Who wins the race to AGI?
4:21:39 - AI agents
4:30:21 - Programming and AI
4:37:49 - Open source
4:47:01 - Stargate
4:54:30 - Future of AI

Lex Fridman (host), Nathan Lambert (guest), Dylan Patel (guest)
Feb 2, 2025 · 5h 6m · Watch on YouTube

At a glance

WHAT IT’S REALLY ABOUT

DeepSeek’s Shockwave: Cheap Chinese AI, Chips, and Geopolitics Collide

  1. Lex Fridman speaks with semiconductor analyst Dylan Patel and AI researcher Nathan Lambert about the ‘DeepSeek moment’—China’s release of the powerful, ultra‑cheap, open‑weight models V3 (chat) and R1 (reasoning)—and why it matters technically, economically, and geopolitically.
  2. They unpack how DeepSeek achieved frontier‑level performance at a fraction of OpenAI’s apparent cost, through mixture‑of‑experts architectures, low‑level GPU optimizations, and aggressive post‑training and reinforcement learning for reasoning.
  3. The conversation then zooms out to export controls, NVIDIA’s dominance, TSMC’s centrality, U.S.–China–Taiwan tensions, power‑hungry AI megaclusters like OpenAI’s proposed “Stargate,” and what all this implies for AGI timelines and global stability.
  4. Throughout, they revisit open‑source vs. closed models, the future of agents and reasoning, and how rapidly cheaper, more capable AI will reshape software, industry, and possibly geopolitical power balances.

IDEAS WORTH REMEMBERING

5 ideas

DeepSeek proved frontier‑class, open‑weight reasoning can be ultra‑cheap—and that’s a genuine shock to the ecosystem.

DeepSeek V3 (chat) and R1 (reasoning) match or approach GPT‑4‑class capabilities on many benchmarks while reportedly costing only millions of dollars to train, thanks to mixture‑of‑experts, custom attention, and aggressive low‑level optimizations that reportedly reach below CUDA to the PTX level. R1 is open‑weight with a permissive MIT license, which forces U.S. labs, especially Meta and OpenAI, to rethink how closed and how expensive their offerings can remain.
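A back‑of‑envelope sketch shows why a training cost in the single‑digit millions is plausible for an MoE model. It uses the common C ≈ 6·N·D FLOPs rule of thumb; the parameter count, token count, per‑GPU throughput, utilization, and GPU‑hour price below are illustrative assumptions, not DeepSeek's actual figures.

```python
# Back-of-envelope training cost using the common C ≈ 6·N·D rule of
# thumb, where N is *active* parameters (MoE) and D is training tokens.
# All numeric inputs are illustrative assumptions.

def training_cost_usd(active_params, tokens, gpu_flops, mfu, gpu_hour_usd):
    """Estimate training cost in dollars from compute requirements."""
    total_flops = 6 * active_params * tokens       # forward + backward pass
    gpu_seconds = total_flops / (gpu_flops * mfu)  # effective throughput
    return gpu_seconds / 3600 * gpu_hour_usd

# Assumptions: ~37B active parameters, ~15T tokens, ~1e15 FLOP/s per GPU
# at 40% model FLOPs utilization, $2 per GPU-hour.
cost = training_cost_usd(37e9, 15e12, 1e15, 0.4, 2.0)
print(f"~${cost / 1e6:.1f}M")  # lands in the low single-digit millions
```

The point is not the exact number but that, once MoE shrinks the active parameter count, frontier‑scale pretraining compute fits in a budget of a few million dollars.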

Mixture‑of‑experts and custom attention (MLA) are now central levers for cost‑efficient scaling.

Rather than activating all parameters for every token, DeepSeek’s MoE architecture turns on a small subset of ‘experts’ per step, dramatically cutting training and inference FLOPs while keeping a huge total parameter “knowledge” budget. Their multi‑head latent attention (MLA) further slashes KV‑cache memory and bandwidth needs at long context, enabling dense reasoning traces at a fraction of the usual cost.
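The routing idea above can be sketched in a few lines: score every expert for a token, keep only the top‑k, and run just those experts' parameters. This is a minimal illustrative sketch of top‑k MoE routing, not DeepSeek's actual implementation; the shapes and expert counts are assumptions.

```python
import numpy as np

# Minimal sketch of top-k mixture-of-experts routing: only k of E expert
# matrices run per token, so active compute is ~k/E of a dense layer with
# the same total parameter budget. Sizes are illustrative assumptions.

rng = np.random.default_rng(0)
d, E, k = 16, 8, 2                        # hidden size, experts, experts per token
router = rng.standard_normal((d, E))      # learned routing weights
experts = rng.standard_normal((E, d, d))  # one FFN-like matrix per expert

def moe_layer(x):
    logits = x @ router              # score each expert for this token
    top = np.argsort(logits)[-k:]    # pick the k highest-scoring experts
    gates = np.exp(logits[top])
    gates /= gates.sum()             # normalized gate weights over the top-k
    # Only the selected experts' parameters are touched:
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.standard_normal(d)
out = moe_layer(token)
print(out.shape, f"active fraction ≈ {k / E:.2f}")
```

With k=2 of E=8 experts active, each token touches roughly a quarter of the layer's parameters, which is the mechanism behind the training‑ and inference‑FLOP savings described above.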

Reasoning models fundamentally change inference economics by exploding test‑time compute.

Models like OpenAI o1/o3 and DeepSeek R1 generate long chains of thought and often sample many candidate solutions per query, especially on hard tasks like ARC‑AGI or complex code. This makes inference dramatically more memory‑ and compute‑intensive than classic chat, pushing up costs and making GPU memory bandwidth and interconnect—not just raw FLOPs—the binding constraints.
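The cost explosion described above is simple arithmetic: total generated tokens scale with both chain‑of‑thought length and the number of sampled candidates, and the KV cache grows with every generated token. The numbers below are illustrative assumptions, not measured figures for any particular model.

```python
# Sketch of why reasoning-model inference is expensive: each candidate
# answer regenerates a full reasoning trace, so token counts multiply.
# All numbers are illustrative assumptions.

def inference_tokens(prompt, chain_len, n_samples):
    # each sampled candidate processes the prompt and emits its own trace
    return n_samples * (prompt + chain_len)

chat = inference_tokens(prompt=500, chain_len=300, n_samples=1)
reasoning = inference_tokens(prompt=500, chain_len=20_000, n_samples=8)
print(f"reasoning/chat token ratio ≈ {reasoning / chat:.0f}x")  # → 205x
```

A two‑order‑of‑magnitude jump in tokens per query is why memory bandwidth and interconnect, not just raw FLOPs, become the binding constraints for serving these models.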

Export controls constrain China’s *AI usage* and density more than its ability to train frontier models at all.

Patel and Lambert argue that with 10k–50k+ GPUs already in‑country (and access to H20‑class parts, smuggling, and cloud rentals), focused Chinese teams like DeepSeek can still reach the frontier. Controls mainly limit how many GPUs can be devoted to inference at massive scale, meaning China may be able to *build* powerful models but have a harder time deploying them broadly in the economy and military.

TSMC remains a single point of failure for global semiconductors—and thus for AI.

TSMC manufactures the majority of advanced chips used in servers, phones, cars, and AI accelerators. Its R&D base in Taiwan is irreplaceable in the near term. While the U.S. can import engineers and subsidize fabs, and Intel is trying to catch up, any serious disruption to TSMC or Hsinchu R&D would ripple through the entire tech economy, raising the stakes of Taiwan’s security.

WORDS WORTH SAVING

5 quotes

The ‘DeepSeek moment’ is real. Five years from now, people will still remember it as a pivotal event in tech history.

Lex Fridman

Every company that’s trying to push the frontier of AI has failed runs. You need failed runs to push the envelope on your infrastructure.

Nathan Lambert

If you think AGI is five or ten years away, export controls probably guarantee China wins long term—*unless* AI does something transformative in the short term.

Dylan Patel

Open weights means you can download the model to a computer in your own house that has no internet and you’re totally in control of your data.

Nathan Lambert

Superhuman persuasion will happen before superhuman intelligence.

Dylan Patel (citing Sam Altman)

DeepSeek V3 and R1: architectures, training, costs, and user experience
Open weights vs. open source: licensing, transparency, and ecosystem pressure
Post‑training and reasoning: RLHF, RL with verifiable rewards, chain‑of‑thought
Mixture‑of‑experts, MLA attention, and extreme low‑level GPU optimizations
Export controls, Chinese AI capacity, and U.S.–China–Taiwan geopolitics
NVIDIA, TPUs, and the AI hardware / data‑center arms race (megaclusters, Stargate)
Open‑source frontier models, distillation, and the future of agents and coding

High quality AI-generated summary created from speaker-labeled transcript.
