Dwarkesh Podcast

Dylan Patel on the Dwarkesh Podcast: How EUV Tools Cap AI by 2030

Carl Zeiss optics and specialized mirror stacks bottleneck ASML output; CoWoS deposits and turbine delays add years between capex and delivered compute.

Host: Dwarkesh Patel · Guest: Dylan Patel
Mar 13, 2026 · 2h 31m · Watch on YouTube ↗

CHAPTERS

  1. Capex vs reality: why $600B doesn’t instantly become 50 GW of compute

    Dwarkesh opens with the apparent mismatch between hyperscaler capex forecasts and the much smaller amount of data-center power that can realistically come online in a single year. Dylan explains that a large share of “AI capex” is spent ahead of time on long-lead items like power agreements, turbine deposits, and site build-outs, so the spending and the delivered compute are offset in time (a back-of-envelope sketch of this offset follows the chapter list).

  2. AI labs’ funding and the scramble for capacity: why OpenAI/Anthropic still feel constrained

    The conversation shifts to why major labs raise enormous sums despite seemingly manageable per-GW rental costs. Dylan argues that explosive inference revenue growth implies rapidly rising inference fleets, and those fleets must grow even if R&D training stays flat—forcing labs into last-minute, higher-cost sourcing and revenue-share deals.

  3. Why older GPUs can get more valuable: depreciation, pricing, and model-driven utility

    Dylan and Dwarkesh unpack the surprising claim that an H100 can be worth more today than it was years ago. The key is that in a supply-constrained world, GPU pricing is driven less by “new chip competition” and more by the value of what current models can produce; better models can make old GPUs more economically productive (see the token-economics sketch after the chapter list).

  4. Nvidia’s early allocation grab: TSMC N3 capacity, memory lockups, and Google getting squeezed

    Dwarkesh asks why TSMC appears to have allocated so much advanced-node capacity to Nvidia rather than spreading it across competing accelerators (TPUs, Trainium, etc.). Dylan argues Nvidia signaled demand earlier, secured non-cancelable commitments, and coordinated upstream supply-chain readiness (PCBs, memory), while some rivals had delays or were less aggressive—leaving Google short of in-house TPU capacity and forced to deploy more GPUs.

  5. From shifting bottlenecks to the long-run answer: semiconductors, not power, dominate scaling limits

    Dylan frames the evolving constraint story: CoWoS and power mattered recently because they were the shorter-lead-time constraints that bind first, but as AI becomes the dominant driver of semiconductor demand, the bottleneck reverts to the deepest layers of chip manufacturing. By late decade, the limiting factor is no longer reallocating capacity from PCs/phones to AI, but expanding the absolute semiconductor production base.

  6. ASML as the #1 constraint by 2030: EUV tool throughput translated into gigawatts

    Dylan lays out a concrete conversion from AI data-center gigawatts to EUV ‘passes’ and then to the number of EUV tools required (a sketch of the chain appears after the chapter list). The punchline is that a relatively small number of EUV tools can gate enormous downstream capex and economic value, and that scaling EUV output is fundamentally slow for ASML due to extreme engineering complexity and supply-chain limits (e.g., Zeiss optics).

  7. Why ASML can’t just ‘triple capacity’: the artisanal supply chain behind EUV machines

    Dwarkesh presses on why ASML can’t simply spend more to increase EUV output. Dylan details the intricate sub-supply chains—tin-droplet laser sources, Zeiss multi-layer mirrors, nanometer-scale motion stages, metrology, and on-site assembly—arguing that the constraint is specialized labor, qualification cycles, and production ramp ‘hell,’ not only capex willingness.

  8. Can we dodge EUV by going back to older nodes? Why ‘just use 7nm’ is harder than it sounds

    Dwarkesh proposes a fallback: if EUV caps leading-edge output, why not use older fabs and DUV multi-patterning (as China does) to produce more chips? Dylan argues performance differences aren’t captured by simple FLOPS comparisons; system-level constraints (memory bandwidth, interconnect, scaling efficiency) and architectural advances compound, making modern nodes and packaging/networking far more valuable than raw process shrink alone suggests (a roofline-style sketch follows the chapter list).

  9. China vs the West by 2030–2035: indigenizing tools, scale vs node leadership, and timeline dependence

    Dylan and Dwarkesh explore when China could outscale the West in semiconductors if timelines are long enough. Dylan expects China to fully indigenize DUV by ~2030 and to have EUV prototypes/tools that ‘work,’ but doubts mass production at scale by then; he argues the geopolitical/industrial outcome depends heavily on whether AI-driven returns justify massive Western capex early (fast timelines) versus slower progress that lets China’s verticalization catch up (long timelines).

  10. The incoming memory crunch: HBM vs commodity DRAM, bandwidth economics, and consumer fallout

    The discussion turns to memory as a near-term choke point: HBM consumes far more wafer area per bit than commodity DRAM, and AI accelerators’ economics revolve around bandwidth per wafer, not bits per wafer (a sketch of this contrast follows the chapter list). Dylan argues you can’t simply switch to DDR without wasting compute on memory waits, and forecasts rising DRAM prices pushing bill-of-materials (BOM) shocks into smartphones and PCs, especially low- and mid-range devices, driving volume declines and public backlash against AI.

  11. Why power won’t be the limiting factor: behind-the-meter generation, alternative turbines, and modular builds

    Dwarkesh asks whether power can scale to match potential 200 GW/year chip output by 2030. Dylan contends power is solvable via many supply paths—reciprocating engines, aero-derivatives, ship engines, fuel cells, solar+storage, peakers, and unlocking grid headroom—plus industrial modularization that reduces on-site labor and speeds deployment, even if it raises costs modestly relative to GPU value creation.

  12. Why space GPUs aren’t happening this decade: deployment latency, failure rates, and interconnect topology

    Dylan argues space data centers fail the core constraint: in a chip-constrained world, the priority is getting scarce GPUs producing tokens as soon as they’re manufactured. Space adds months of delay, complicates maintenance/RMAs, and introduces extreme networking and reliability challenges—especially as models increasingly benefit from large-scale distributed expert routing and tight interconnects.

  13. Why few hedge funds ‘make the AGI trade’: conviction, interpretation, and bottleneck selection

    Dwarkesh challenges why more finance players haven’t exploited SemiAnalysis-style bottleneck forecasts, pointing to prominent examples. Dylan argues many do trade the theme, but the hard part is conviction about AI takeoff and selecting the most underappreciated constraint (e.g., memory) early enough—especially when industry participants themselves often dismiss aggressive growth projections until reality forces repricing.

  14. N2, Apple, and Taiwan risk: shifting TSMC’s customer gravity and the fragility of concentration

    In closing, they discuss whether TSMC could ‘kick Apple out’ on N2 as AI accelerators move to leading nodes. Dylan expects Apple to become a smaller share of TSMC over time as AI/HPC margins and prepaid capacity commitments dominate, though TSMC will more likely squeeze Apple’s flexibility than remove them outright; they end on Taiwan risk, arguing that evacuating engineers doesn’t replace lost fabs and would cause a severe, global compute growth collapse.
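
BACK-OF-ENVELOPE SKETCHES

The chapter summaries above compress several quantitative arguments. The sketches below restate them in runnable form; every number in them is an illustrative assumption chosen for this write-up, not a figure quoted in the episode.

Chapter 1 turns on the gap between headline capex and in-year delivered compute. A minimal sketch of that timing offset, assuming a rough all-in cost per gigawatt and a spend-to-delivery profile:

```python
# Illustrative sketch: headline AI capex vs. compute delivered in the same year.
# Every constant here is an assumption for this sketch, not a figure from the episode.

capex_usd = 600e9            # assumed headline annual AI capex
cost_per_gw_usd = 35e9       # assumed all-in cost per GW (chips, networking, building, power)

naive_gw = capex_usd / cost_per_gw_usd
print(f"naive conversion: ~{naive_gw:.0f} GW if every dollar became live compute immediately")

# Much of the spend is prepayment on long-lead items (power agreements, turbine
# deposits, site build-outs), so assume the spend lands as compute over ~3 years.
share_landing_in_year = {0: 0.35, 1: 0.40, 2: 0.25}   # assumed delivery profile by year offset

delivered_now_gw = naive_gw * share_landing_in_year[0]
print(f"delivered this year: ~{delivered_now_gw:.0f} GW; the remainder arrives in later years")
```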
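
Chapter 3's claim that an old H100 can gain value follows from pricing GPUs off the tokens they can serve rather than off newer competing silicon. A sketch of that framing, with hypothetical serving rates, utilization, and token prices:

```python
# Illustrative sketch: an accelerator's value as the revenue of the tokens it can serve.
# All parameters are assumptions for this sketch, not figures from the episode.

def gpu_annual_revenue(tokens_per_sec: float, usd_per_million_tokens: float,
                       utilization: float = 0.5) -> float:
    """Rough annual token revenue a single GPU can generate at a given serving rate."""
    seconds_per_year = 365 * 24 * 3600
    tokens_per_year = tokens_per_sec * utilization * seconds_per_year
    return tokens_per_year / 1e6 * usd_per_million_tokens

# The hardware is unchanged; only the model-driven token economics differ.
print(gpu_annual_revenue(tokens_per_sec=1500, usd_per_million_tokens=2.0))  # older model economics
print(gpu_annual_revenue(tokens_per_sec=1500, usd_per_million_tokens=6.0))  # better model, pricier tokens
```

The same physical GPU earns three times more in the second line, which is the chapter's point: in a supply-constrained market, model quality, not chip vintage, sets the hardware's economic value.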
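
Chapter 6's conversion from gigawatts to EUV tools can be written as a chain of ratios. The chain below mirrors the structure of the argument; the specific values (watts per accelerator, dies per wafer, EUV layer counts, tool throughput) are assumptions, and memory and other EUV-patterned dies would add to the total:

```python
# Illustrative sketch: translating AI data-center gigawatts into EUV tool count.
# Every parameter is an assumed value for this sketch, not a figure quoted in the episode.

gw_per_year = 50                      # assumed AI capacity added per year
watts_per_accelerator = 1200          # assumed all-in power per accelerator (chip + system overhead)
good_dies_per_wafer = 28              # assumed usable accelerator dies per 300mm wafer
euv_layers_per_wafer = 20             # assumed EUV-exposed layers on a leading-edge logic node

accelerators = gw_per_year * 1e9 / watts_per_accelerator
logic_wafers = accelerators / good_dies_per_wafer
euv_exposures = logic_wafers * euv_layers_per_wafer     # one tool pass per EUV layer

wafers_per_hour_per_tool = 160        # assumed EUV tool throughput
uptime = 0.75                         # assumed tool availability
exposures_per_tool_per_year = wafers_per_hour_per_tool * 24 * 365 * uptime

tools_needed = euv_exposures / exposures_per_tool_per_year
print(f"{accelerators:.2e} accelerators -> {logic_wafers:.2e} wafers -> "
      f"{euv_exposures:.2e} EUV exposures -> ~{tools_needed:.0f} EUV tools")
```

Under these assumptions a few dozen tools gate tens of gigawatts of downstream build-out, which is why a slow EUV ramp at ASML can cap the whole pipeline.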
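
Chapter 8's point that FLOPS comparisons mislead is essentially a roofline argument: delivered throughput is the minimum of the compute ceiling and the bandwidth ceiling, before interconnect and cluster-scaling losses are even counted. A sketch with hypothetical parts:

```python
# Illustrative roofline sketch: why raw FLOPS comparisons across nodes understate the gap.
# The part specs and arithmetic intensity below are assumptions, not episode figures.

def attainable_tflops(peak_tflops: float, mem_bw_tbps: float, arithmetic_intensity: float) -> float:
    """Achievable throughput is capped by either compute or memory bandwidth.
    arithmetic_intensity = FLOPs performed per byte moved from memory."""
    return min(peak_tflops, mem_bw_tbps * arithmetic_intensity)

# A hypothetical older-node part with commodity memory vs. a modern part with HBM:
old_part = attainable_tflops(peak_tflops=300, mem_bw_tbps=0.8, arithmetic_intensity=200)
new_part = attainable_tflops(peak_tflops=1000, mem_bw_tbps=3.5, arithmetic_intensity=200)

# The older part hits its bandwidth ceiling well below its paper FLOPS, so the
# effective gap tracks memory bandwidth, and interconnect and scaling efficiency
# would widen it further at cluster scale.
print(old_part, new_part)   # 160 vs 700 "effective" TFLOPS, a wider gap than the 300-vs-1000 spec sheet
```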
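
Chapter 10's memory argument turns on which per-wafer metric you optimize. A sketch contrasting bits per wafer with bandwidth per wafer, using assumed densities and bandwidth-to-capacity ratios:

```python
# Illustrative sketch: bits per wafer vs. bandwidth per wafer for commodity DRAM vs. HBM.
# All values are assumptions for this sketch, not figures from the episode.

ddr_gb_per_wafer = 6000      # assumed GB of commodity DDR shipped per wafer
hbm_gb_per_wafer = 3500      # assumed GB of HBM per wafer (TSVs and wide I/O cost area per bit)

ddr_bw_per_gb = 0.8          # assumed GB/s of bandwidth per GB of capacity for DDR5
hbm_bw_per_gb = 30           # assumed GB/s of bandwidth per GB of capacity for HBM

ddr_bw_per_wafer = ddr_gb_per_wafer * ddr_bw_per_gb
hbm_bw_per_wafer = hbm_gb_per_wafer * hbm_bw_per_gb

print(f"bits per wafer:      DDR ahead by {ddr_gb_per_wafer / hbm_gb_per_wafer:.1f}x")
print(f"bandwidth per wafer: HBM ahead by {hbm_bw_per_wafer / ddr_bw_per_wafer:.1f}x")

# An accelerator priced around token throughput cares about the second ratio: swapping
# HBM for DDR frees wafer area per bit but leaves the expensive logic die stalled on memory.
```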
