No Priors

How Capital is Powering the AI Infrastructure Buildout with Magnetar Capital's Neil Tiwari

Sarah Guo and Neil Tiwari on financing AI compute: structuring capital, scaling power, and infrastructure bottlenecks.

Sarah Guo (host) · Neil Tiwari (guest)
Feb 26, 2026 · 36m
Chapters: Magnetar’s strategies and entry into compute · CoreWeave origins: crypto mining to HPC to AI · CapEx scale: hundreds of billions to trillions · SPV/DDTL financing and contracted cash-flow collateral · GPU depreciation vs. debt amortization mechanics · Bottlenecks: power, grid distribution, labor, equipment · Shift from training to inference and distributed clusters · Circular financing critiques and demand validation · AI factories/on-prem compute for Fortune 500 · Sovereign compute and national security · Physical AI/robotics and capital intensity · Capital rotation away from SaaS and market overreactions

In this episode of No Priors, Sarah Guo talks with Magnetar Capital's Neil Tiwari about financing AI compute: structuring capital, scaling power, and infrastructure bottlenecks. Tiwari outlines how AI infrastructure has become a trillion-dollar CapEx buildout problem where capital structure, not just chips, determines who can scale.

At a glance

WHAT IT’S REALLY ABOUT

Financing AI compute: structuring capital, scaling power, and infrastructure bottlenecks

  1. Magnetar Capital’s Neil Tiwari outlines how AI infrastructure has become a trillion-dollar CapEx buildout problem where capital structure—not just chips—determines who can scale.
  2. He describes early CoreWeave financing as primarily secured by investment-grade contracted cash flows (take-or-pay), with GPUs as secondary collateral, addressing common fears about rapid GPU depreciation.
  3. The conversation shifts from 2023–2024’s chip scarcity to 2026’s practical bottlenecks: power delivery, grid constraints, and surprisingly basic shortages like steel, electricians, transformers, and substations.
  4. Looking forward, Tiwari expects growth in complex inference workloads, distributed inference clusters, on-prem “AI factories,” sovereign compute buildouts, and a broader rotation toward asset-heavy “physical AI” that will require creative debt/project-finance approaches.

IDEAS WORTH REMEMBERING

7 ideas

Capital structure is becoming a core competitive advantage in AI compute.

At trillion-dollar CapEx levels, equity-only financing is too dilutive; structured credit and project-like financing enable faster scale while matching asset cash flows to liabilities.

The “GPU collateral is like used cars” critique misses the real underwriting.

Tiwari argues the primary collateral in many AI compute SPVs is contracted, investment-grade cash flows (take-or-pay), with GPUs serving as secondary/tertiary protection rather than the main repayment source.

Fast payback periods can neutralize depreciation risk.

He notes typical compute CapEx can pay back in ~2–3 years while debt may run 4–5 years and amortize fully—reducing reliance on uncertain terminal/residual GPU values.
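The payback-vs-amortization point can be sketched with simple project-finance arithmetic. All figures below are hypothetical illustrations (CapEx, loan size, rate, and contracted revenue are not from the episode): a fully amortizing loan serviced out of contracted take-or-pay cash flows reaches a zero balance within its term, so repayment never depends on the GPUs' residual value.

```python
# Hypothetical sketch: contracted cash flows repay a fully amortizing loan
# before GPU residual value matters. All numbers are illustrative, not from
# the episode.

def level_payment(principal: float, annual_rate: float, years: int) -> float:
    """Annual payment on a fully amortizing loan (standard annuity formula)."""
    r = annual_rate
    return principal * r / (1 - (1 + r) ** -years)

capex = 100.0    # $M spent on the cluster (hypothetical)
debt = 70.0      # $M borrowed against the contract (hypothetical 70% advance)
rate = 0.10      # 10% annual interest (hypothetical)
term = 5         # loan fully amortizes over 5 years
revenue = 45.0   # $M/yr contracted take-or-pay revenue (hypothetical)
opex = 5.0       # $M/yr operating cost (hypothetical)

payment = level_payment(debt, rate, term)
balance, cumulative_cash = debt, 0.0
for year in range(1, term + 1):
    interest = balance * rate
    balance -= payment - interest      # principal portion amortizes the loan
    cumulative_cash += revenue - opex  # project-level cash generated to date
    print(f"year {year}: debt ${max(balance, 0):.1f}M, "
          f"cumulative cash ${cumulative_cash:.1f}M, "
          f"capex recovered: {cumulative_cash >= capex}")
```

With $40M/yr of net contracted cash against $100M of CapEx, payback lands in year 3 while the loan amortizes to zero by year 5, matching the shape of the argument: the lender is repaid from the contract, not from selling used GPUs.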

Financing is expanding from only investment-grade offtakers to mixed portfolios.

Early structures relied on hyperscalers and other IG counterparties; now lenders are increasingly comfortable blending IG buyers with AI-native labs/startups to broaden who can access financed compute.

The bottleneck has shifted from chips to “making chips usable.”

Even with more chip availability, deploying them into revenue-generating clusters is constrained by power delivery, facilities buildout, staffing, and specialized electrical infrastructure.

Inference growth changes infrastructure design and the financing problem.

Inference introduces latency, variability, and memory-throughput constraints, pushing toward smaller, distributed clusters; financing must adapt beyond centralized training megaclusters and IG borrowers.

Power constraints are as much about distribution/storage as generation (near term).

Tiwari highlights “stranded power” and argues the next 6–12 months’ leverage comes from storage, flexibility, and “bring-your-own-capacity” approaches (solar, gas turbines) rather than waiting on new generation.

WORDS WORTH SAVING

6 quotes

We actually stumbled across the compute problem before it was compute.

Neil Tiwari

What really allowed them to kind of win this market early on was focus on two things. It was scale and reliability.

Neil Tiwari

What was oftentimes characterized in the media was these debt structures had GPUs as collateral… What got missed was… the primary collateral was the contracted cash flows from investment-grade counterparties.

Neil Tiwari

Fast-forward to 2026… actually taking these chips and then making them into useful revenue-generating assets is really the bottleneck now.

Neil Tiwari

You don’t see any dark GPUs.

Neil Tiwari

The true bottleneck… is… things like structural steel… finding electricians… substations, transformers, air chillers.

Neil Tiwari

QUESTIONS ANSWERED IN THIS EPISODE

5 questions

In the SPV/DDTL structures you describe, what covenants or monitoring metrics matter most (utilization, uptime, counterparty concentration, pricing floors)?

How do lenders haircut or model the residual value of GPUs today—by generation, secondary markets, or expected inference redeployability?

When mixing investment-grade and non-IG offtakers in one financing vehicle, what portfolio construction rules (limits, tranching, reserves) make the risk acceptable?

What specific operational capabilities create CoreWeave’s reliability edge (software stack, failure management, maintenance, networking), and can newer inference clouds realistically replicate it?

On “circular financing,” what would you consider a red-flag structure (e.g., vendor financing + aggressive rev rec), and what’s the clean version that you’re seeing instead?
