How Capital is Powering the AI Infrastructure Buildout with Magnetar Capital's Neil Tiwari

No Priors · Feb 26, 2026 · 36m

Sarah Guo (host), Neil Tiwari (guest)

Chapters:
Magnetar’s strategies and entry into compute
CoreWeave origins: crypto mining to HPC to AI
CapEx scale: hundreds of billions to trillions
SPV/DDTL financing and contracted cash-flow collateral
GPU depreciation vs. debt amortization mechanics
Bottlenecks: power, grid distribution, labor, equipment
Shift from training to inference and distributed clusters
Circular financing critiques and demand validation
AI factories/on-prem compute for Fortune 500
Sovereign compute and national security
Physical AI/robotics and capital intensity
Capital rotation away from SaaS and market overreactions

Financing AI compute: structuring capital, scaling power, and infrastructure bottlenecks

Magnetar Capital’s Neil Tiwari outlines how AI infrastructure has become a trillion-dollar CapEx buildout problem where capital structure—not just chips—determines who can scale.

He describes early CoreWeave financing as primarily secured by investment-grade contracted cash flows (take-or-pay), with GPUs as secondary collateral, addressing common fears about rapid GPU depreciation.

The conversation shifts from 2023–2024’s chip scarcity to 2026’s practical bottlenecks: power delivery, grid constraints, and surprisingly basic shortages like steel, electricians, transformers, and substations.

Looking forward, Tiwari expects growth in complex inference workloads, distributed inference clusters, on-prem “AI factories,” sovereign compute buildouts, and a broader rotation toward asset-heavy “physical AI” that will require creative debt/project-finance approaches.

Key Takeaways

Capital structure is becoming a core competitive advantage in AI compute.

At trillion-dollar CapEx levels, equity-only financing is too dilutive; structured credit and project-like financing enable faster scale while matching asset cash flows to liabilities.

The “GPU collateral is like used cars” critique misses the real underwriting.

Tiwari argues the primary collateral in many AI compute SPVs is contracted, investment-grade cash flows (take-or-pay), with GPUs serving as secondary/tertiary protection rather than the main repayment source.

Fast payback periods can neutralize depreciation risk.

He notes typical compute CapEx can pay back in ~2–3 years while debt may run 4–5 years and amortize fully—reducing reliance on uncertain terminal/residual GPU values.
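
The mechanics can be sketched with a toy cash-flow model. All figures below are hypothetical for illustration only (the episode does not give specific numbers): a GPU cluster whose contracted revenue recovers its CapEx in under three years while servicing a fully amortizing five-year loan, so repayment never depends on residual GPU value.

```python
# Toy model (hypothetical numbers, not from the episode): a $100M GPU
# cluster financed with a fully amortizing 5-year loan, repaid from
# contracted take-or-pay revenue. Shows why a ~2-3 year payback can
# neutralize GPU depreciation risk for the lender.

def annuity_payment(principal: float, rate: float, years: int) -> float:
    """Level annual payment that fully amortizes the loan over its term."""
    return principal * rate / (1 - (1 + rate) ** -years)

capex = 100.0          # $M, cluster cost
annual_revenue = 45.0  # $M/yr, contracted take-or-pay revenue
opex = 8.0             # $M/yr, power, facilities, staffing
rate, term = 0.10, 5   # assumed 10% cost of debt, 5-year amortization

payment = annuity_payment(capex, rate, term)
cash_after_debt = annual_revenue - opex - payment

# Unlevered payback: years of net operating cash flow to recover CapEx.
payback_years = capex / (annual_revenue - opex)

print(f"annual debt service:  ${payment:.1f}M")
print(f"cash after debt:      ${cash_after_debt:.1f}M/yr")
print(f"unlevered payback:    {payback_years:.1f} years")
```

With these assumed inputs the asset pays back in roughly 2.7 years and the loan is fully repaid from contracted cash flows within its term, which is the sense in which the terminal GPU value becomes secondary collateral rather than the repayment source.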

Financing is expanding from only investment-grade offtakers to mixed portfolios.

Early structures relied on hyperscalers and other IG counterparties; now lenders are increasingly comfortable blending IG buyers with AI-native labs/startups to broaden who can access financed compute.

The bottleneck has shifted from chips to “making chips usable.”

Even with more chip availability, deploying them into revenue-generating clusters is constrained by power delivery, facilities buildout, staffing, and specialized electrical infrastructure.

Inference growth changes infrastructure design and the financing problem.

Inference introduces latency, variability, and memory-throughput constraints, pushing toward smaller, distributed clusters; financing must adapt beyond centralized training megaclusters and IG borrowers.

Power constraints are as much about distribution/storage as generation (near term).

Tiwari highlights “stranded power” and argues the next 6–12 months’ leverage comes from storage, flexibility, and “bring-your-own-capacity” approaches (solar, gas turbines) rather than waiting on new generation.

Notable Quotes

We actually stumbled across the compute problem before it was compute.

Neil Tiwari

What really allowed them to kind of win this market early on was focus on two things. It was scale and reliability.

Neil Tiwari

What was oftentimes characterized in the media was these debt structures had GPUs as collateral… What got missed was… the primary collateral was the contracted cash flows from investment-grade counterparties.

Neil Tiwari

Fast-forward to 2026… actually taking these chips and then making them into useful revenue-generating assets is really the bottleneck now.

Neil Tiwari

You don’t see any dark GPUs.

Neil Tiwari

Questions Answered in This Episode

In the SPV/DDTL structures you describe, what covenants or monitoring metrics matter most (utilization, uptime, counterparty concentration, pricing floors)?


How do lenders haircut or model the residual value of GPUs today—by generation, secondary markets, or expected inference redeployability?


When mixing investment-grade and non-IG offtakers in one financing vehicle, what portfolio construction rules (limits, tranching, reserves) make the risk acceptable?


What specific operational capabilities create CoreWeave’s reliability edge (software stack, failure management, maintenance, networking), and can newer inference clouds realistically replicate it?


On “circular financing,” what would you consider a red-flag structure (e.g., vendor financing + aggressive rev rec), and what’s the clean version that you’re seeing instead?

Transcript Preview

Sarah Guo

[upbeat music] Hi, listeners. Welcome back to No Priors. Today, I'm here with Neil Tiwari of Magnetar Capital. This is a twenty-two billion dollar alternative asset manager at the center of the AI compute build-out. We talk about the financial innovation, depreciation of GPUs, and what's next in AI compute. Welcome. Thanks so much for doing this, Neil.

Neil Tiwari

Absolutely. You know, really happy to be here.

Sarah Guo

So you are leading AI infrastructure at Magnetar. You're at the center of the build-out, enabling it, financing it. For any of our listeners who haven't heard, can you just explain a little bit what Magnetar is?

Neil Tiwari

Sure, um, so Magnetar's been around for-- actually, this is our, our twentieth year. Uh, we're an alternative asset manager, and that can mean a lot of different things.

Sarah Guo

Mm-hmm.

Neil Tiwari

Um, but we have three primary strategies. The first one is private credit, uh, the second one is a venture strategy, and the third is more of a systematic or quantitative-focused, uh, public strategy as well. And so I think, you know, when, when people look at us and, and, you know, why are we here in this moment, especially on building out AI infrastructure, um, I think a lot of it has to do with kind of our unique lens on helping to build, uh, capital-intensive businesses and using creative financing, whether it's venture or other structures with unique elements, and I think we're going to talk a lot about that, but, um, to build out, uh, and, and optimize the balance sheets for these capital-intensive businesses.

Sarah Guo

So I remember hearing about you guys originally. So you're the first investor I think we've ever had on the podcast, I'm excited about this.

Neil Tiwari

That's exciting. Thank you. [chuckles]

Sarah Guo

Uh, I remember hearing about you and Magnetar initially around... I was like, "Who's this big owner of CoreWeave?" [chuckles]

Neil Tiwari

Yeah.

Sarah Guo

And also, um, you know, helping OpenAI with some of their early build-outs. When did you guys first start looking at the problem and thinking about how to, how to solve it?

Neil Tiwari

Yeah, so we actually, you know, stumbled across the, the compute problem before it was compute. Um, you know, we met, uh, CoreWeave back in, uh, twenty twenty-one, and that was when they were actually transitioning from, uh, mining Ethereum into, uh, high-performance compute. And at that time, it was using the GPU as a, you know, uh, an instrument to mine, uh, cryptocurrencies, and interestingly, that same instrument could be used for high-performance computing applications. Uh, and the first one was, uh, visual effects, uh, which-- so think of, like, things like movies, Marvel movies, and things like that.

Sarah Guo

Mm-hmm. Mm-hmm.

Neil Tiwari

And so they were transitioning, um, at that point, between crypto mining into the first kind of, uh, high-performance compute use case, and this was all before AI.

Sarah Guo

Mm-hmm.

Neil Tiwari

And so we made our first investment before the AI trade started, um, but we added a lot of optionality where, you know, we could envision a world where, uh, the GPU could be used for a lot of different high-performance kind of computing applications. I think, um, you know, AI was on the radar, machine learning was on the radar for us, um, but w- I wouldn't say that we could foresee-
