Dylan Patel on the Dwarkesh Podcast: How EUV Tools Cap AI by 2030
Carl Zeiss optics and specialized mirror stacks bottleneck ASML output; CoWoS deposits and turbine delays add years between capex and delivered compute.
At a glance
WHAT IT’S REALLY ABOUT
AI compute scaling hits bottlenecks in chips, tools, and memory
- Hyperscaler AI capex largely pre-pays long-lead items like turbines, land, and construction years ahead, so spend today does not translate linearly into gigawatts online this year.
- GPU economics are shifting because chip value is increasingly set by the utility of today’s best models under scarcity, making older GPUs such as the H100 potentially more valuable now than when they launched, despite newer generations.
- By the late 2020s the binding constraint to AI compute is expected to move upstream to semiconductor manufacturing capacity—ultimately ASML EUV tools and their intricate supplier network—rather than power or data center construction.
- Falling back to older nodes (e.g., 7nm) is not a clean escape hatch because modern performance depends heavily on system-level factors like interconnect, memory bandwidth, and packaging, not just raw FLOPS or node shrink.
- A major memory crunch (HBM/DRAM) is projected to raise prices, reallocate wafers away from consumer devices, and potentially shrink low-end phone/PC volumes while AI captures a growing share of global memory output.
IDEAS WORTH REMEMBERING
Capex is increasingly about reserving future constraints, not just buying GPUs.
Hyperscalers spend heavily on deposits for turbines, power agreements, land, and multi-year buildouts; a meaningful portion of “this year’s capex” is actually securing 2027–2029 capacity.
Early compute commitments create durable margin advantages.
Five-year GPU contracts lock in pricing before scarcity reprices compute to model value; late buyers face higher spot/shorter-term rates or revenue-share markups through cloud channels.
GPU “depreciation” can invert under compute scarcity.
If demand is constrained by supply, pricing is anchored to the value generated by current frontier models; as models become cheaper/faster to serve, an H100 can produce more valuable tokens than it could years earlier.
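The inversion described above can be illustrated with a toy calculation. All figures here are hypothetical placeholders, not numbers from the episode: the point is only that if scarcity anchors a GPU's worth to the value of the tokens it can serve, then faster serving stacks and more valuable model output can raise an old card's earning power over time.

```python
# Toy model of GPU value under compute scarcity (all figures hypothetical).
# Value is anchored to the tokens a card can serve times what those tokens
# are worth, not to the card's age or original purchase price.

def annual_token_value(tokens_per_second, dollars_per_million_tokens):
    """Rough yearly revenue for one GPU serving at full utilization."""
    seconds_per_year = 365 * 24 * 3600
    tokens_per_year = tokens_per_second * seconds_per_year
    return tokens_per_year / 1e6 * dollars_per_million_tokens

# Hypothetical H100 at launch: slower serving stack, less valuable output.
value_then = annual_token_value(tokens_per_second=1_000,
                                dollars_per_million_tokens=0.50)

# Same card years later: better inference software (batching, quantization)
# and frontier models whose output commands a higher price per token.
value_now = annual_token_value(tokens_per_second=4_000,
                               dollars_per_million_tokens=0.75)

print(f"then: ${value_then:,.0f}/yr  now: ${value_now:,.0f}/yr")
# → then: $15,768/yr  now: $94,608/yr
```

Under these made-up inputs the same physical card earns roughly six times more per year than at launch, which is the mechanism behind "depreciation inverting" when compute, not demand, is the binding constraint.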
Nvidia’s share at TSMC reflects earlier, firmer demand signals than rivals.
TSMC allocates based on credible long-term commitments and supply-chain readiness; Nvidia secured capacity earlier, while some custom silicon programs faced delays and were left squeezed for allocation later.
By ~2028–2030, EUV tools (ASML) become the ‘lowest rung’ constraint.
Even aggressive expansion only raises EUV shipments to ~100 tools/year by decade end, and the ecosystem’s scaling is limited by highly specialized sub-suppliers and long ramp times.
WORDS WORTH SAVING
An H100 is worth more today than it was three years ago.
— Dylan Patel
By ’28, ’29, the bottleneck falls to the lowest rung on the supply chain, which is ASML.
— Dylan Patel
It might be a hundred billion dollars worth of AI value… held up by this one point two billion dollars worth of tooling.
— Dylan Patel
If takeoff or timelines are slow enough, then certainly China… [can] catch up drastically.
— Dylan Patel
Space data centers… are not this decade.
— Dylan Patel
High-quality AI-generated summary created from a speaker-labeled transcript.