No Priors

How Capital is Powering the AI Infrastructure Buildout with Magnetar Capital's Neil Tiwari

By the end of 2026, AI capital expenditure is projected to hit nearly $700 billion. The question isn’t who has the best model, but who has the most creative financing to build out AI infrastructure and beyond. Sarah Guo is joined by Neil Tiwari, Managing Director at Magnetar Capital, a financial innovator helping the AI industry scale from billions to trillions of dollars in CapEx. Neil explains some of the debt structures used to finance massive GPU clusters, who is taking the risk, and how the industry is maturing. Sarah and Neil also discuss how power distribution, energy storage, and physical materials like steel are the bottlenecks of the AI industry. Plus, Neil gives his take on the future of inference-optimized clouds, and why the market shift away from software and into infrastructure might be an overreaction.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil

Chapters:
00:00 – Cold Open
00:05 – Neil Tiwari Introduction
00:26 – Magnetar’s Story
01:28 – Why CoreWeave Helped Magnetar Win
06:15 – Scaling CapEx Efficiently
09:02 – Debunking GPU Collateral Risk
11:42 – How Deal Structures Evolve
13:01 – What Bottlenecks Buildout
15:28 – Circular Financing Critiques
17:35 – The Shift from Training to Inference Workloads
23:10 – AI Factories
24:12 – Constraints of the Current Power Grid
28:27 – Sovereign Compute Buildouts
29:54 – Physical AI Capital Needs
32:48 – The Capital Rotation Away from SaaS
36:04 – Conclusion

Sarah Guo (host) · Neil Tiwari (guest)

Feb 25, 2026 · 36m

At a glance

WHAT IT’S REALLY ABOUT

Financing AI compute: structuring capital, scaling power, and infrastructure bottlenecks

  1. Magnetar Capital’s Neil Tiwari outlines how AI infrastructure has become a trillion-dollar CapEx buildout problem where capital structure—not just chips—determines who can scale.
  2. He describes early CoreWeave financing as primarily secured by investment-grade contracted cash flows (take-or-pay), with GPUs as secondary collateral, addressing common fears about rapid GPU depreciation.
  3. The conversation shifts from 2023–2024’s chip scarcity to 2026’s practical bottlenecks: power delivery, grid constraints, and surprisingly basic shortages like steel, electricians, transformers, and substations.
  4. Looking forward, Tiwari expects growth in complex inference workloads, distributed inference clusters, on-prem “AI factories,” sovereign compute buildouts, and a broader rotation toward asset-heavy “physical AI” that will require creative debt/project-finance approaches.

IDEAS WORTH REMEMBERING

5 ideas

Capital structure is becoming a core competitive advantage in AI compute.

At trillion-dollar CapEx levels, equity-only financing is too dilutive; structured credit and project-like financing enable faster scale while matching asset cash flows to liabilities.

The “GPU collateral is like used cars” critique misses the real underwriting.

Tiwari argues the primary collateral in many AI compute SPVs is contracted, investment-grade cash flows (take-or-pay), with GPUs serving as secondary/tertiary protection rather than the main repayment source.

Fast payback periods can neutralize depreciation risk.

He notes typical compute CapEx can pay back in ~2–3 years while debt may run 4–5 years and amortize fully—reducing reliance on uncertain terminal/residual GPU values.
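The payback-versus-amortization logic can be sketched with a short calculation. All figures below (CapEx, loan size, rate, revenue) are hypothetical illustrations, not numbers from the episode; the point is only to show why contracted cash flows that repay the loan within its term make residual GPU value a secondary concern:

```python
# Illustrative sketch (hypothetical numbers): a GPU cluster's contracted
# take-or-pay revenue vs. a fully amortizing loan on the same asset.

def annual_debt_service(principal: float, rate: float, years: int) -> float:
    """Level annual payment that fully amortizes a loan (standard annuity formula)."""
    return principal * rate / (1 - (1 + rate) ** -years)

capex = 100.0            # cluster cost, $M (hypothetical)
annual_revenue = 40.0    # contracted take-or-pay cash flow, $M/yr (hypothetical)
loan = 80.0              # debt-financed portion, $M (hypothetical)
rate = 0.10              # annual interest rate (hypothetical)
term = 5                 # fully amortizing term, years

payback_years = capex / annual_revenue
debt_service = annual_debt_service(loan, rate, term)

print(f"CapEx payback: {payback_years:.1f} years")
print(f"Annual debt service: ${debt_service:.1f}M vs ${annual_revenue:.1f}M contracted revenue")
# Contracted revenue covers the payment each year and the loan is repaid in
# full within the term, so the lender's recovery does not hinge on what the
# GPUs resell for afterward.
```

Under these assumptions the cluster pays back its CapEx in 2.5 years while the debt service of roughly $21M per year is comfortably covered by $40M of contracted revenue, which is the dynamic Tiwari describes.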

Financing is expanding from only investment-grade offtakers to mixed portfolios.

Early structures relied on hyperscalers and other IG counterparties; now lenders are increasingly comfortable blending IG buyers with AI-native labs/startups to broaden who can access financed compute.

The bottleneck has shifted from chips to “making chips usable.”

Even with more chip availability, deploying them into revenue-generating clusters is constrained by power delivery, facilities buildout, staffing, and specialized electrical infrastructure.

WORDS WORTH SAVING

5 quotes

We actually stumbled across the compute problem before it was compute.

Neil Tiwari

What really allowed them to kind of win this market early on was focus on two things. It was scale and reliability.

Neil Tiwari

What was oftentimes characterized in the media was these debt structures had GPUs as collateral… What got missed was… the primary collateral was the contracted cash flows from investment-grade counterparties.

Neil Tiwari

Fast-forward to 2026… actually taking these chips and then making them into useful revenue-generating assets is really the bottleneck now.

Neil Tiwari

You don’t see any dark GPUs.

Neil Tiwari

TOPICS COVERED

Magnetar’s strategies and entry into compute
CoreWeave origins: crypto mining to HPC to AI
CapEx scale: hundreds of billions to trillions
SPV/DDTL financing and contracted cash-flow collateral
GPU depreciation vs. debt amortization mechanics
Bottlenecks: power, grid distribution, labor, equipment
Shift from training to inference and distributed clusters
Circular financing critiques and demand validation
AI factories/on-prem compute for Fortune 500
Sovereign compute and national security
Physical AI/robotics and capital intensity
Capital rotation away from SaaS and market overreactions

High quality AI-generated summary created from speaker-labeled transcript.
