No Priors

How Capital is Powering the AI Infrastructure Buildout with Magnetar Capital's Neil Tiwari

By the end of 2026, AI capital expenditure is projected to hit nearly $700 billion. The question isn’t who has the best model, but who has the most creative financing to build out AI infrastructure and beyond. Sarah Guo is joined by Neil Tiwari, Managing Director at Magnetar Capital, a financial innovator helping the AI industry scale from billions to trillions of dollars in CapEx. Neil explains some of the debt structures used to finance massive GPU clusters, who is taking the risk, and how the industry is maturing. Sarah and Neil also discuss how power distribution, energy storage, and physical materials like steel are the bottlenecks of the AI industry. Plus, Neil gives his take on the future of inference-optimized clouds, and why the market shift away from software and into infrastructure might be an overreaction.

Sign up for new podcasts every week. Email feedback to show@no-priors.com. Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil

Chapters:

00:00 – Cold Open
00:05 – Neil Tiwari Introduction
00:26 – Magnetar’s Story
01:28 – Why CoreWeave Helped Magnetar Win
06:15 – Scaling CapEx Efficiently
09:02 – Debunking GPU Collateral Risk
11:42 – How Deal Structures Evolve
13:01 – What Bottlenecks Buildout
15:28 – Circular Financing Critiques
17:35 – The Shift from Training to Inference Workloads
23:10 – AI Factories
24:12 – Constraints of the Current Power Grid
28:27 – Sovereign Compute Buildouts
29:54 – Physical AI Capital Needs
32:48 – The Capital Rotation Away from SaaS
36:04 – Conclusion

Sarah Guo (host) · Neil Tiwari (guest)
Feb 26, 2026 · 36m · Watch on YouTube ↗

CHAPTERS

  1. Why AI infrastructure finance matters now (and who Magnetar is)

    Sarah Guo frames the episode around the massive AI compute build-out and why capital structure—not just chips—determines who can scale. Neil Tiwari introduces Magnetar Capital as a multi-strategy alternative asset manager increasingly central to financing AI infrastructure.

  2. Magnetar’s path into AI compute: discovering GPUs through CoreWeave’s crypto-to-HPC pivot

    Neil explains that Magnetar encountered the “compute problem” before it was called AI—through CoreWeave’s transition from Ethereum mining to high-performance compute. The initial bet was on GPU optionality across workloads, not a precise prediction of the LLM boom.

  3. Why CoreWeave “won” early: scale + reliability as differentiators

    The discussion turns to what enabled CoreWeave to seize the OpenAI training moment and outpace newer entrants. Neil argues the core differentiators were the ability to scale quickly (capital + power + space) and the operational reliability of running large GPU fleets.

  4. The trillion-dollar CapEx reality: why equity alone doesn’t work

    Neil quantifies hyperscaler AI infrastructure CapEx projections and explains why funding it purely with equity is inefficient. The chapter sets up why structured credit and project-like financing became essential to grow compute capacity without extreme dilution.

  5. Inside the early GPU financing playbook: SPVs, contracted cash flows, and amortization

    Neil breaks down the common misconception that lenders were taking direct GPU depreciation risk like “used car collateral.” He explains how SPV structures were primarily underwritten to investment-grade contracted cash flows, with GPU collateral as secondary protection, plus amortization designed to fully pay down debt.
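The fully amortizing structure Neil describes can be illustrated with the standard annuity formula: debt is sized so that contracted payments retire the principal entirely by the end of the contract term. This is a generic sketch with hypothetical figures (a $100M draw, 8% rate, 48-month term), not Magnetar's actual deal terms:

```python
def level_payment(principal: float, annual_rate: float, months: int) -> float:
    """Monthly payment that fully amortizes the loan (standard annuity formula)."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def remaining_balance(principal: float, annual_rate: float, months: int) -> float:
    """Run the schedule month by month; the balance reaches ~0 at maturity."""
    r = annual_rate / 12
    pmt = level_payment(principal, annual_rate, months)
    balance = principal
    for _ in range(months):
        balance = balance * (1 + r) - pmt  # accrue a month of interest, then pay
    return balance

# Hypothetical deal: $100M drawn against a 48-month contracted offtake at 8%.
pmt = level_payment(100_000_000, 0.08, 48)
print(f"monthly payment: ${pmt:,.0f}")   # ≈ $2.44M/month
print(f"balance at maturity: {remaining_balance(100_000_000, 0.08, 48):.2f}")
```

The key point of the structure is visible in the schedule: because the debt fully pays down within the contracted cash-flow window, the lender's exposure to residual GPU value at maturity is minimal, which is why the collateral is secondary protection rather than the primary underwriting basis.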

  6. How deal structures are evolving: mixing IG and AI-native customers

    As the market matures, Neil describes a shift from strictly investment-grade offtake portfolios to blended structures that include AI-native labs and startups. The goal is to expand financing access while balancing risk through diversified customer pools.

  7. What actually bottlenecks the buildout: from chip scarcity to power, people, and execution

    Neil explains that the bottleneck shifted from access to GPUs in 2023–2024 to the ability to turn chips into revenue-generating infrastructure. Building and operating datacenters requires labor, equipment, and power delivery—now the limiting factors even when some chips are available.

  8. Circular financing critiques: why Neil believes demand and ROI are real

    Sarah raises concerns about “circular financing” and echoes historical overbuild fears (e.g., dark fiber). Neil argues the current market shows strong real demand, high utilization, and increasingly provable economic value from enterprise AI use cases.

  9. Training → inference: why inference is operationally harder than expected

    The conversation shifts to the workload mix moving toward inference, especially with reasoning and code use cases driving usage. Neil details why inference introduces new complexity—latency, variability, memory throughput, and distributed deployment—changing both operations and financing needs.

  10. Owning inference infrastructure to fix layered margins (and the reliability gap)

    Neil notes compute is often the largest COGS line for AI applications and that reselling/stacking clouds can create layered margin compression. This drives application companies and inference clouds to want their own infrastructure, but reliability and heterogeneous performance across “similar” compute remain big issues.

  11. AI factories: dedicated on-prem and corporate-controlled compute

    Neil describes NVIDIA’s “AI factories” concept as a complement to hyperscaler and neo-cloud mega-regions. The idea is that large enterprises will want dedicated, controllable compute environments aligned to their specific workloads, driving new forms of buildout and financing.

  12. Power reality check: stranded capacity, storage/distribution, and ‘bring your own power’

    Neil argues the power problem is nuanced: the near-term constraint is less about generation and more about making existing capacity usable through storage and distribution. He highlights bottlenecks like transformers and substations and discusses hybrid approaches where sites start small on grid interconnect and add their own generation.

  13. Sovereign compute buildouts: national security, partners, and cyber requirements

    Neil discusses increasing sovereign interest in AI compute as a national security priority. He highlights two key challenges: finding capable partners to build and operate GPU infrastructure quickly, and creating security/cyber environments that meet sovereign requirements.

  14. Physical AI and the capital rotation away from SaaS: financing asset-heavy futures

    Neil connects compute infrastructure financing to the coming wave of capital-intensive “physical AI” (robotics, drones, defense, manufacturing). He also comments on public-market rotations away from SaaS, arguing the market may be over-correcting and that winners will be company-specific based on adoption and integration advantages.
