No PriorsHow Capital is Powering the AI Infrastructure Buildout with Magnetar Capital's Neil Tiwari
EVERY SPOKEN WORD
35 min read · 7,071 words
- 0:00 – 0:05
Cold Open
- SGSarah Guo
[upbeat music]
- 0:05 – 0:26
Neil Tiwari Introduction
- SGSarah Guo
Hi, listeners. Welcome back to No Priors. Today, I'm here with Neil Tiwari of Magnetar Capital. This is a twenty-two billion dollar alternative asset manager at the center of the AI compute build-out. We talk about the financial innovation, depreciation of GPUs, and what's next in AI compute. Welcome. Thanks so much for doing this, Neil.
- NTNeil Tiwari
Absolutely. You know, really happy
- 0:26 – 1:28
Magnetar’s Story
- NTNeil Tiwari
to be here.
- SGSarah Guo
So you are leading AI infrastructure at Magnetar. You're at the center of the build-out, enabling it, financing it. For any of our listeners who haven't heard, can you just explain a little bit what Magnetar is?
- NTNeil Tiwari
Sure, um, so Magnetar's been around for-- actually, this is our, our twentieth year. Uh, we're an alternative asset manager, and that can mean a lot of different things.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Um, but we have three primary strategies. The first one is private credit, uh, the second one is a venture strategy, and the third is more of a systematic or quantitative-focused, uh, public strategy as well. And so I think, you know, when, when people look at us and, and, you know, why are we here in this moment, especially on building out AI infrastructure, um, I think a lot of it has to do with kind of our unique lens on helping to build, uh, capital-intensive businesses and using creative financing, whether it's venture or other structures with unique elements, and I think we're going to talk a lot about that, but, um, to build out, uh, and, and optimize the balance sheets for these capital-intensive businesses.
- 1:28 – 6:15
Why CoreWeave Helped Magnetar Win
- SGSarah Guo
So I remember hearing about you guys originally. So you're the first investor I think we've ever had on the podcast, I'm excited about this.
- NTNeil Tiwari
That's exciting. Thank you. [chuckles]
- SGSarah Guo
Uh, I remember hearing about you and Magnetar initially around... I was like, "Who's this big owner of CoreWeave?" [chuckles]
- NTNeil Tiwari
Yeah.
- SGSarah Guo
And also, um, you know, helping OpenAI with some of their early build-outs. When did you guys first start looking at the problem and thinking about how to, how to solve it?
- NTNeil Tiwari
Yeah, so we actually, you know, stumbled across the, the compute problem before it was compute. Um, you know, we met, uh, CoreWeave back in, uh, twenty twenty-one, and that was when they were actually transitioning from, uh, mining Ethereum into, uh, high-performance compute. And at that time, it was using the GPU as a, you know, uh, an instrument to mine, uh, cryptocurrencies, and interestingly, that same instrument could be used for high-performance computing applications. Uh, and the first one was, uh, visual effects, uh, which-- so think of, like, things like movies, Marvel movies, and things like that.
- SGSarah Guo
Mm-hmm. Mm-hmm.
- NTNeil Tiwari
And so they were transitioning, um, at that point, between crypto mining into the first kind of, uh, high-performance compute use case, and this was all before AI.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
And so we made our first investment before the AI trade started, um, but we added a lot of optionality where, you know, we could envision a world where, uh, the GPU could be used for a lot of different high-performance kind of computing applications. I think, um, you know, AI was on the radar, machine learning was on the radar for us, um, but w- I wouldn't say that we could foresee-
- SGSarah Guo
Mm-hmm
- NTNeil Tiwari
... everything that happened. We just happened to be, you know, at the right place at the right time, and we continued to double down, um, as the company progressed and started, you know, shifting into more workloads that were machine learning and, and kind of AI training based.
- SGSarah Guo
Did you have, like, an existing significant data center investing footprint?
- NTNeil Tiwari
No.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
I mean, I think, you know, uh, interestingly, at Magnetar, there, you know, w- we have invested across asset classes. Um, so we, we've done a lot of property investing, real estate investing, as an example, um, investing in energy. We had an energy business historically, and so a lot of the elements for, you know, what constitutes a data center: power, energy, land, uh, real estate. You know, we had a lot of the, the background in those spaces. I think we were new to compute, right?
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Like I-- that was a, a new sector for us, and so kind of those two worlds merging, um, you know, we, we obviously, you know, came up on the curve on the compute side, uh, but we had a lot of, you know, background on, um, the, the elements that constitute what it means to build a cloud.
- SGSarah Guo
So you guys just really-- you were in this company, you saw the demand, and you said, like: "It's gonna grow, and we're gonna make this a big part of our business."
- NTNeil Tiwari
Exactly.
- SGSarah Guo
Yeah.
- NTNeil Tiwari
I think, you know, what was interesting was we made our first investment in twenty twenty-one, um, and then about a year later, we continued to see expansion of use cases, uh, for... At that time, it was called high-performance compute, and then it was kind of towards the end of twenty-two, the whole AI, uh, discussion started. And as we entered twenty twenty-three, uh, CoreWeave, uh, started to train models for OpenAI.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Um, and that's when things really started growing, because the sheer amount of compute that was needed to train an LLM, this was, like, the first time it had ever been done. And what was interesting was what kind of allowed them to take advantage of that opportunity was the historical kind of backgrounds of a lot of the founders, uh, were in energy asset management. And when you fast-forward to today, and you look in, like, what it-- what constitutes your ability to build a GPU cloud, it's your ability to manage these highly complex assets, and it fundamentally comes down to access to power and energy.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
And so they had these elements with them, and they obviously brought on a lot of talent on the cloud side. And so you put all these together, and at that moment, it allowed them to, um, you know, build very large-scale, reliable, um, clusters for OpenAI and obviously many other customers since then. And I think the last comment I'll make is, what really allowed them to kind of win this market early on was focus on two things. It was scale-
- SGSarah Guo
Mm-hmm
- NTNeil Tiwari
... and reliability. And I think those were the two things that, um, are really difficult for a lot of the new entrants since then, 'cause scale has to do with your access to capital-
- SGSarah Guo
Mm-hmm
- NTNeil Tiwari
... your access to energy, power, data center. And then reliability really had to do with their, their ability to manage a giant fleet of GPUs, uh, which is actually quite complicated. Um, you know, whether it's reliability from, you know, GPU failures or software challenges, you know, building a fleet that can healthily be online all the time at, you know, ninety-nine point nine percent reliability is incredibly difficult, and that's something that they had started back in twenty seventeen, twenty eighteen timeframe, and, and they were at the right moment, at the right place, with the right technology stack, um, to really build, um, uh, the optimal cloud for that moment.
- 6:15 – 9:02
Scaling CapEx Efficiently
- SGSarah Guo
... I've definitely experienced that with, you know, our portfolio of companies that are building large training clusters. Uh, uh, it, uh, CoreWeave has a reputation-
- NTNeil Tiwari
Yeah
- SGSarah Guo
-for reliability that not everyone has reached. Can you just help characterize, if you fast-forward, like, two and a half, three years now, like, what is the scale of the problem today?
- NTNeil Tiwari
Yeah. So if you look at, um, kind of CapEx, right? Let's start with that. So CapEx for AI compute and infrastructure in twenty twenty-six, you know, at least from the hyperscalers, is projected to be between six hundred and sixty and six hundred and ninety, uh, billion dollars. And over the next several years, um, you know, that scales to trillions of dollars, right? And so the, the scale of the problem is: how do you build, um, you know, that size of CapEx efficiently? And I think a lot of that has to do with not only, you know, your ability to have access to, you know, those core elements, um, energy, power, you know, uh, and, and your ability to have data center space, et cetera. But I think one of the things that's not talked about as much is capital-
- SGSarah Guo
Mm-hmm
- NTNeil Tiwari
... and access to capital, and how is capital structured? Um, and what I mean by that is, this is, you know, billions to trillions of dollars of CapEx.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
And just using equity dollars alone is not an efficient way to scale this. That's obviously massive dilution. You know, there's, there's-- it's not an easy problem to solve.
- SGSarah Guo
When we first met-
- NTNeil Tiwari
Yeah
- SGSarah Guo
... I had, like, slowly come to this realization. I was like: "I don't think we should take the dilution for the cluster."
- NTNeil Tiwari
Yeah.
- SGSarah Guo
Yeah.
- NTNeil Tiwari
Right. Exactly. And so that's where I think, you know, when you and I have talked about, like, structuring, and, and I can give a couple examples, um, if that's helpful. I think the first one was, uh, DDTL structures or SPV debt structures that, um, had a... think of it as, like, an SPV.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Inside of the SPV are the Cap-- is the CapEx, the collateral, um, which is the GPUs, and the contracts themselves. Um, and so in this example-
- SGSarah Guo
Mm-hmm
- NTNeil Tiwari
... the actual asset or collateral was not really just the GPUs themselves, it was really the con-contracted cash flows-
- SGSarah Guo
Mm-hmm
- NTNeil Tiwari
... from, in this case, investment-grade counterparties. And so I think the reason-
- SGSarah Guo
This is the consumer-
- NTNeil Tiwari
Yeah
- SGSarah Guo
... of the compute.
- NTNeil Tiwari
The consumer of the compute.
- SGSarah Guo
Yeah.
- NTNeil Tiwari
Exactly. You know, your Microsofts, your, your Metas, et cetera, of the world. And I think the reason, um, that was done is, is really twofold. When, when you look at the scale of the problem, uh, you know, those particular contracts, uh, needed billions of dollars of debt to finance the CapEx. You know, obviously, for a nascent and, and new and growing company, that's, that's really hard to raise. Um, so part of structuring it this way is ensuring that you have kind of guaranteed offtake on the back end to, uh, minimize the risk for, you know, debt holders.
- 9:02 – 11:42
Debunking GPU Collateral Risk
- NTNeil Tiwari
And I think that's a lot of what the market got wrong, um, especially when there was a lot of press about this early on-
- SGSarah Guo
Mm
- NTNeil Tiwari
... where it was, "There's billions of debt on these highly depreciating assets, and it's extremely speculative." And the, what was of-oftentimes characterized in the media was, uh, these debt structures had GPUs as collateral, and that's like putting a used car a-as collateral, which is obviously just gonna depreciate incredibly fast. You know, that's a very risky kind of structure. And I think what got missed was the, the GPUs themselves were actually, like, the second, second or tertiary level of collateral in those instruments. The primary collateral, uh, was the contracted cash flows from investment-grade counterparties. And so, like-
- SGSarah Guo
It's Microsoft or NVIDIA or somebody like that-
- NTNeil Tiwari
Exactly
- SGSarah Guo
... saying, "I'm committed to pay you."
- NTNeil Tiwari
Exactly.
- SGSarah Guo
Or, like, "I know you can pay me."
- NTNeil Tiwari
Take or pay contracts-
- SGSarah Guo
Yeah
- NTNeil Tiwari
... and they're, like, five years in length.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
So I think that was, like, one feature, uh, that, that's unique to talk about. And then the second one really has to do with, um, the debt itself and how it amortizes. And so, like, in simple terms, you know, when you have debt, you have principal and interest, and you have to pay it off over time. And in these structures, typically, the payback period on the CapEx was roughly two to three years. Um, and the, uh, structures themselves, the debt was over five years, you know, four to five years in length, where the entire debt amortized during the, um, outstanding period that the, that the debt was out. And so at the end, you ended up with zero balance, uh, for the debt, and there was no balloon payment or, or anything that was really due on the back end. And so the question that often c-- you know, comes up, uh, is, you know, isn't that a very risky, uh, type of structure because these things are depreciating incredibly quickly? So I think, you know, there's, there's two comments here. First is, on that depreciation question, in these kind of debt structures, it doesn't really matter because the debt's fully paid off by the end of the debt term against committed contractual, um, you know, contracts from investment-grade counterparties. Um, and then at the very end, the, the actual upside or residual value, and I know there's a lot of questions on, on residual value, is, is held by, um, you know, the, uh, the cloud player in this example, right? CoreWeave, right?
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Or, or, you know, any others. Um, and that's a really interesting prospect because you can see a world where all of this CapEx is paid off-
- SGSarah Guo
Mm-hmm
- NTNeil Tiwari
... incredibly quickly, and there's an opportunity to redeploy it. Um, where you can redeploy it, um, without having to pay for any additional, uh, you know, debt, obviously, against that redeployment.
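The fully amortizing structure described above can be sketched with a toy schedule. The deal terms below (size, rate, tenor) are illustrative assumptions, not actual figures from any Magnetar structure; the point is just that level payments drive the balance to zero, with no balloon payment at the end:

```python
# Toy sketch of a fully amortizing SPV debt structure.
# All figures are illustrative assumptions, not actual deal terms.

def amortization_schedule(principal, annual_rate, years):
    """Level annual payments; balance reaches zero at maturity (no balloon)."""
    r = annual_rate
    # Standard annuity payment formula.
    payment = principal * r / (1 - (1 + r) ** -years)
    balance = principal
    schedule = []
    for year in range(1, years + 1):
        interest = balance * r
        principal_paid = payment - interest
        balance -= principal_paid
        schedule.append((year, payment, interest, principal_paid, max(balance, 0.0)))
    return payment, schedule

# Assumed terms: $1B of debt, 9% rate, 4-year fully amortizing.
payment, schedule = amortization_schedule(1_000_000_000, 0.09, 4)
for year, pmt, interest, prin, bal in schedule:
    print(f"year {year}: payment ${pmt:,.0f}  interest ${interest:,.0f}  balance ${bal:,.0f}")

# Final balance is ~zero: no balloon payment, and any residual GPU value
# after the contract term is upside retained by the cloud operator.
assert schedule[-1][-1] < 1.0
```

With a two-to-three-year payback on the contracted cash flows, debt service is covered well inside the four-to-five-year tenor described above, which is why the GPU depreciation question matters less for the debt holders than the press coverage suggested.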
- 11:42 – 13:01
How Deal Structures Evolve
- SGSarah Guo
How have the instruments changed?
- NTNeil Tiwari
They've changed in several ways, where, uh, you know, the first is... And when you look at these SPVs, I think you're starting to see ways to change the portfolio construction of who can go inside of one of these debt structures. And so, you know, early on, in the early days, these were all only investment-grade counterparties.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
'Cause there was-- the, the space was so nascent, the operators had no experience, and I think now what you're starting to see is a blend of investment-grade and non-investment grade. So, like, what does that actually mean? What that means is, you know, you're, you're seeing these structures with investment-grade counterparties, like your hyperscalers and your other corporates that, that are IG. Um-... um, mixed alongside, uh, some of the AI-native companies. And so think of the AI model companies, the labs, software companies that are building AI, startups. You're seeing those companies get mixed in alongside, um, the IG companies to build a portfolio, 'cause now you have, you know, the cr- the history that you can do this.
- SGSarah Guo
Yeah.
- NTNeil Tiwari
And now you have structures where you can kind of balance the risk, uh, with, with IG and non-IG. And we're continuing to see that kind of move to be able to help finance, you know, really the model companies and a lot of these startups. Obviously, that was difficult to do, you know, three or four years ago. That's starting to become easier, um, as these companies have more runtime and ability to, uh, you know, make the compute
- 13:01 – 15:28
What Bottlenecks Buildout
- NTNeil Tiwari
fungible.
- SGSarah Guo
All our, uh, portfolio companies that buy compute tell me it's a supply-constrained-
- NTNeil Tiwari
Mm-hmm
- SGSarah Guo
... market today. One, is that true, and two, when you think about, like, wow, continuing to grow your business or grow this ecosystem, like, what's going to stop it? Like, what could slow down a build-out?
- NTNeil Tiwari
Yeah. I mean, I think what's interesting is, uh, if you look at, like, 2023, 2024, we were very supply-constrained, and the supply constraint was chips.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
And no one could a- get access to chips.
- SGSarah Guo
Yes.
- NTNeil Tiwari
And then-
- SGSarah Guo
We bought chips.
- NTNeil Tiwari
We bought chips, right? [laughing]
- SGSarah Guo
Yeah. [laughing]
- NTNeil Tiwari
And, you know, there was this thought that, okay, there's gonna be an overbuild of chips, and then the supply constraints will go away. Well, you know, fast-forward to 2026, and what we see is, you know, there is obviously more availability of chips, but to build and operate these, uh, you know, data centers requires people, power, infrastructure, a lot of these things that, uh, have a lot of, of bottlenecks. And so actually taking these chips and then making them into useful revenue-generating assets is really the bottleneck now.
- SGSarah Guo
It's also not clear that there is supply of chips at the latest-
- NTNeil Tiwari
Correct
- SGSarah Guo
... uh, generation-
- NTNeil Tiwari
Yeah
- SGSarah Guo
... at scale-
- NTNeil Tiwari
That's true
- SGSarah Guo
... soon, which is how everybody wants them.
- NTNeil Tiwari
Exactly.
- SGSarah Guo
Yeah.
- NTNeil Tiwari
I think, you know, you're starting to see that not only do the, the high-end players want access to the latest chips, you're seeing, you know, obviously, startups want access to those, and I think it has to do with efficiency.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Um, you know, one of our friends, or one of your friends as well, Dylan Patel, over at SemiAnalysis, posted this interesting article last week on inference, and inference spend, and inference kind of performance.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Um, and, you know, there, there's a lot of, you know, jokes made about Jensen math. Um, and it was interesting 'cause the-
- SGSarah Guo
Seems pretty good at math, honestly. [chuckles]
- NTNeil Tiwari
He's, he's actually great at math. Um, and so for the, uh, Hoppers, the H, uh, 100 or H200 series of GPUs into the Blackwells, uh, there was a claim made that it could be thirty times more efficient, and I think the data from, you know, SemiAnalysis showed that it was ninety to a hundred times more efficient-
- SGSarah Guo
Mm
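The gap between the claimed and measured multiples can be made concrete with a back-of-envelope cost comparison. The baseline dollar figure below is a hypothetical placeholder, not a quoted price; only the 30x and 90-100x multiples come from the conversation:

```python
# Back-of-envelope: what an efficiency multiple implies for inference cost.
# base_cost is an assumed, illustrative $/1M-tokens figure on Hopper-class
# hardware; the multiples are the claimed vs. measured Blackwell gains.

base_cost = 3.00  # hypothetical $ per 1M tokens on Hopper (assumption)

for label, mult in [("claimed", 30), ("measured, low", 90), ("measured, high", 100)]:
    implied = base_cost / mult
    print(f"{label:>14}: {mult:>3}x efficiency -> ${implied:.3f} per 1M tokens")
```

Whatever the true baseline, a 90-100x multiple rather than 30x cuts the implied per-token cost by a further 3x or so, which is why even startups chase the latest generation.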
- 15:28 – 17:35
Circular Financing Critiques
- SGSarah Guo
Um, help me address, like, this, uh, criticism around circular financing.
- NTNeil Tiwari
Yeah, I know, um, it's obviously a topic du jour, and I think, you know, the way we see it and frame it really has to do with the demand signals-
- SGSarah Guo
Mm-hmm
- NTNeil Tiwari
... um, and who are the eventual buyers, and, and how is this being used? And so, at least from our perspective, we s- we continue to see, uh, insatiable demand. Um, and if you go back to, you know, the previous kind of big tech build-out back in the early 2000s, there was obviously a lot of fiber that was being built, and you had dark fiber, you know, and, and an overbuild happening. And I think what you see here is, you know, you don't see any dark GPUs-
- SGSarah Guo
No, I've been looking
- NTNeil Tiwari
... any GPU. Exactly.
- SGSarah Guo
Yeah. [chuckles]
- NTNeil Tiwari
Any GPUs used.
- SGSarah Guo
Yeah.
- NTNeil Tiwari
Um, and then number two, you're starting to see, uh, actual economic value. Um, so I think last year, Enterprise AI had about thirty-seven billion of total TAM, um, and it's continuing to grow like crazy, and at least personally, and, and, and I'm sure you see this too, but I use these tools all the t- all the time, and I find it-
- SGSarah Guo
Continuously
- NTNeil Tiwari
... incredibly valuable, right? The actual tokenomics of positive, uh, ROI is, is actually here now, I think, from our perspective. Um, and so the, the circularity, you know, comment, I think, applies when you're building, um, you know, speculative, uh, compute and capacity, uh, or if you're, you know, purely doing vendor financing, and it's, you know, you're trying to do some type of, you know, you need some type of, you know, rev rec-type i- item related to that. And that, that's not what we see. Like, what we see is financing to support, to build out the demand against, uh, use cases that are very positive in their ROI. And so, like, our perspective is that that's, uh, you know, not a real, real concern that we have. Um, and it, and it really has to do with who are the ultimate buyers here? The ultimate buyers have been in, at scale, the hyperscalers. They're deploying this, uh, at scale, and the economics are positive, uh, when you look at a unit economic basis in terms of, uh, deploying intelligence. Um, and I think we're at a moment in time where you're- we're really starting to see that.
- 17:35 – 23:10
The Shift from Training to Inference Workloads
- SGSarah Guo
In my own experience, um, I have been a heavy AI user for several years.
- NTNeil Tiwari
Mm-hmm.
- SGSarah Guo
But reasoning advances the ability to scale inference, especially around code-
- NTNeil Tiwari
Mm-hmm
- SGSarah Guo
... means I'm up against my max limit all the time- [chuckles]
- NTNeil Tiwari
Yeah
- SGSarah Guo
... in a way that was not true, uh, uh, uh, initially. How are the inference workloads actually growing? I mean, it's a, it's a good demand signal-
- NTNeil Tiwari
Mm-hmm
- SGSarah Guo
... that there is value, but how does that change your business?
- NTNeil Tiwari
Yeah, so I think one thing that's interesting that we're seeing is, obviously, there's been the, the shift from training to inference, you know, over the last few years. That, that split continues to grow on the inference side as usable, uh, and ROI-positive applications get developed. I think the two things I see on the inference side now is, um, inference has- is a lot more complex than I think initially thought, and what I mean by that is it's not as simple as-... um, you, you train a model, and then you-- it's easy to inference it. In certain, certain cases, you can do that on, on similar infrastructure, but there are issues around latency, um, fungibility of that, uh, and, and really optimizing the cost of your compute on the inference side. Um, how do you manage, uh, you know, peaks of inference demand? And, and obviously, it's not linear like training, and your GPUs are on all the time, you know, a hundred percent of the time. And so with inference, you have a lot more variability.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Um, and so there's a lot more nuances, uh, in, in optimizing inference. I think the second thing that's observed, um, that I've seen is, uh, inference is definitely a memory problem, a memory throughput problem. Um, you know, on the inference side, you know, you have these kind of phases called prefill and, and decode-
- SGSarah Guo
Decode.
- NTNeil Tiwari
Right? And how you optimize that across a fleet of GPUs is actually a unique technical problem. Um, and then the third is, what I would say is distribution. Um, you know, a lot of times training infrastructure is, is quite centralized. What you're seeing with inference is, in many use cases, as this becomes more ubiquitous, you're gonna have more and more decentralized, uh, inference clusters.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
And actually, one of my favorite companies is one of your companies, Baseten, which is really, you know, optimizing distributed inference at scale. And I think one thing that's interesting when you look at companies like that and, and other inference clouds is: how do you optimize the, uh, compute and, and build out these clusters that could actually look very different than a training cluster? Where a training cluster might be fifty, hundred, hundred and fifty megawatts in one kind of four walls.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
I think you're starting to see distributed inference, which could be, you know, four or five megawatts and five separate data centers and stitching them together in, in different areas, right? And that looks very different from a kinda power perspective, how you... You know, the, the software matters a lot more when you're doing, like, distributed inference. And then, in terms of your question, how it impacts us, I think one of the things that we've been, you know, focused on is, um, you know, where we started this conversation with you on, um, financing compute, that was really obviously... Uh, it started with mostly training. Um, a lot of those hyperscalers are now doing a lot of inference on that same infrastructure, but these are investment-grade counterparties. You know, it's easy to- it's easier to lend, uh, money to build out these clusters to those customers. I think now that you have this new crop of inference clouds and application layer companies that are needing tons of inference, I think the, the key question that we're really focused on is: how can we finance the next build, which is distributed inference?
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Um, and maybe the last, you know, one or two takeaways would be, uh, one thing I'm seeing is, you know, for every application layer company out there, the highest line item from COGS is compute.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Um, and then the inference companies and inference clouds out there, most of them are, um, purchasing up compute from either other clouds or unused ac, ca- uh, capacity. And when you look at, like, margins for that, you've got, like, layered margins.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
And so there's a push to kind of own your own infrastructure-
- SGSarah Guo
Right
- NTNeil Tiwari
... um, to really drive and increase, you know, mar, uh, profit margins, but also it's the ability to kind of have control of your own destiny. And I think a lot of folks are starting to, the application layer companies and inference clouds, are, are grappling with: how can we build and own and operate our own infrastructure? Um, and that's something I'm, I'm, I'm really looking into.
- SGSarah Guo
Mm. I am too, and I think one of the things that, uh, is going to make a big difference in this ecosystem is, like, can the inference clouds, like Baseten, can they deliver reliability-
- NTNeil Tiwari
Mm-hmm
- SGSarah Guo
... that you would expect from a, a cloud?
- NTNeil Tiwari
Yeah.
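The memory-throughput point about inference can be illustrated with a standard back-of-envelope bound: each decoded token requires streaming the model weights from HBM, so memory bandwidth caps the single-stream token rate. The hardware and model numbers below are rough public figures and assumptions, not anything cited in the conversation:

```python
# Rough sketch of why decode is memory-bandwidth bound: generating each token
# requires reading the full model weights (plus KV cache, ignored here) from
# HBM. Numbers are illustrative: ~H100-class bandwidth, a hypothetical 70B
# model in 16-bit weights.

hbm_bandwidth_gb_s = 3350   # assumed HBM bandwidth, ~3.35 TB/s (H100-class)
model_params_b = 70         # hypothetical 70B-parameter model
bytes_per_param = 2         # fp16/bf16 weights

weight_bytes_gb = model_params_b * bytes_per_param
# Upper bound on single-stream decode: one full weight read per token.
max_tokens_per_s = hbm_bandwidth_gb_s / weight_bytes_gb

print(f"Weight read per token: {weight_bytes_gb} GB")
print(f"Bandwidth-bound ceiling: ~{max_tokens_per_s:.1f} tokens/s per stream")
```

Batching amortizes the weight read across concurrent requests, which is one reason prefill/decode scheduling across a fleet is the optimization problem described above.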
- 23:10 – 24:12
AI Factories
- NTNeil Tiwari
kind of one thing I, I find really interesting that NVIDIA is doing is, is this concept of AI factories.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
And building AI factories, um, you know, behind corporates and AI companies, and maybe the way I unpack that is you've got kind of more large, monolithic cloud players, the hyperscalers and the neo clouds, that are building large-scale, um, you know, cloud environments. Uh, and a lot of where I think NVIDIA and others see this going is, yes, those are gonna be important components, and those are gonna be huge markets, but corporates, Fortune, you know, five hundred AI companies that use a ton of compute-
- SGSarah Guo
Mm-hmm
- NTNeil Tiwari
... will want dedicated AI factories associated with workloads that they run and that they have control over. And so I think you're starting to see, you know, the early indications of how do you finance and build out, uh, almost think of, like, literally AI factories that sit on-prem-
- SGSarah Guo
Mm-hmm
- NTNeil Tiwari
... with the company that can operate their workloads. Uh, it's a-
- SGSarah Guo
You're talking about my Mac mini farm.
- NTNeil Tiwari
Exactly. [laughing]
- SGSarah Guo
[laughing] No, but, but all joking aside,
- 24:12 – 28:27
Constraints of the Current Power Grid
- SGSarah Guo
I, I think one thing that is another supporting factor for use of all of the compute we have is, and, and can create over the coming years, is, um, power is clearly the limiting factor.
- NTNeil Tiwari
Mm-hmm.
- SGSarah Guo
Um, it's easier to get more power in smaller-
- NTNeil Tiwari
Mm-hmm
- SGSarah Guo
... units.
- NTNeil Tiwari
Yep.
- SGSarah Guo
I think that as inference demand is growing, these, uh, anyone who has-... uh, usable compute for inference is gonna find a lot of partners for offtake.
- NTNeil Tiwari
Exactly.
- SGSarah Guo
Okay, let's look at the future a little bit while we, while we have ten minutes. Um, uh, let's talk about the, the macro. Like, people talk about energy, they talk about, um, natural gas, uh, the grid, the slowness of nuclear. Like, what do you think about over the next six or twelve months?
- NTNeil Tiwari
Over the last year, I've been spending a ton of time in the power and energy markets, um, and looking at interesting solutions that can help scale power, you know, for the gap that we see. I think a few observations that we've seen, the first is, um, we do have a power problem, but I think it's a bit more nuanced than, than a, a lot of the reporting out there, where I think-
- SGSarah Guo
It's just we can't generate.
- NTNeil Tiwari
We can't generate, yeah.
- SGSarah Guo
Yeah, yeah.
- NTNeil Tiwari
I think there's actually quite a bit of stranded power across the grid, across the country. And what I mean by that is, you know, a lot of the utilities are built in a way where they're focused on peak power, right? So they've got natural gas peakers, and they're focused on, you know, providing peak power-
- SGSarah Guo
Mm-hmm
- NTNeil Tiwari
... for those moments where demand is, is kind of off the charts. Um, and that's obviously only for a few days out of the year. So there's lots of generating assets out there. Uh, the question is, they're a bit stranded, right? And so there's kind of-- I, I look at the power problem as being kind of multiple fold. The first one is, how can you take the power we have on the grid and actually make it usable?
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
And, and a lot of that has to do with flexibility and storage. And so we've been spending a lot of time looking at an energy-- in the energy storage business and distribution. How can you store unused capacity, peak demand shave, uh, capacity, store it, and then distribute it when it's needed?
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Um, we made an investment in a company called Taurus. I think I've, I've mentioned, uh, to you, which is building like this distributed utility layer, uh, almost like this mesh infrastructure-
- SGSarah Guo
Right
- NTNeil Tiwari
... to, um, takes-- to store excess capacity or store capacity from a variety of, of sources and then distribute it at the time when it's needed, and so I think that's kind of a critical layer that, that needs to be built. Um, and then longer term, there is a generation problem, but I think in the shorter term, it- it's really- it's more on the distribution and storage.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Uh, and then, um, the other piece I would say is, you know, the true bottleneck, um, at least in the short term, the next six to twelve months, is, is incredibly, I don't wanna use the word simplistic, but it's things like, uh, structural steel. It's, uh, finding electricians, uh, that can, you know, build this-
- SGSarah Guo
Sorry.
- NTNeil Tiwari
Yeah.
- SGSarah Guo
You, you can't get enough steel?
- NTNeil Tiwari
You can't get enough steel.
- SGSarah Guo
Okay. [chuckles]
- NTNeil Tiwari
You can't... Yeah. [laughing]
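The peak-shaving and storage idea from this segment can be sketched numerically. All the capacities, hours, and the round-trip efficiency below are illustrative assumptions, not figures from Taurus or any actual project:

```python
# Toy peak-shave sketch: a battery charges from spare grid capacity off-peak
# and discharges during the peak window, letting a data center draw more than
# its firm grid allocation. All numbers are illustrative assumptions.

firm_grid_mw = 80      # assumed firm interconnect capacity
peak_load_mw = 100     # assumed data-center demand during the peak window
peak_hours = 4         # assumed daily peak window, in hours

shortfall_mwh = (peak_load_mw - firm_grid_mw) * peak_hours
round_trip_eff = 0.85  # rough Li-ion round-trip efficiency assumption
charge_needed_mwh = shortfall_mwh / round_trip_eff

print(f"Peak shortfall: {shortfall_mwh} MWh/day")
print(f"Off-peak charge required: {charge_needed_mwh:.1f} MWh/day")
```

The point of the sketch is that storage plus distribution can serve the gap from existing, partly stranded generation before any new generation comes online.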
- 28:27 – 29:54
Sovereign Compute Buildouts
- SGSarah Guo
Couple topics to hit before we lose you. Um, uh, new players, how do you think about the sovereigns and what they're doing in their build-outs?
- NTNeil Tiwari
Yeah, I think, um-
- SGSarah Guo
They seem to be able to fund themselves to some degree.
- NTNeil Tiwari
Exactly, right.
- SGSarah Guo
Yeah.
- NTNeil Tiwari
Um, you know, you saw the news from India last week. Uh, obviously, a lot of the news in the Mid East, Southeast Asia. I think, you know, we're continuing to see that sovereigns view compute and AI, you know, as, as, uh-- and, and even we do here in the, in the United States, as, as, as a matter of national security.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Um, and obviously, the funding of those clusters is, is very different than funding, like, a private cluster, and so you've got, you know, government capital that can be used for that. So I think there are two things that, you know, I find interesting in that space. I think one is: who are the partners, um, that are going to build that capacity?
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
And what are the cybersecurity kind of implications and environments for that? And so those, those are the two nuances, I think, with sovereigns, is they need to find players that can rapidly scale compute, um, in their countries, and oftentimes they don't necessarily have these players that know how to build-
- SGSarah Guo
Right
- NTNeil Tiwari
... and scale GPU compute. I think that's a great place for the United States to lean in and, and help build, you know, sovereign ecosystems around the world. And then there's a matter of cybersecurity, and, and how do you make it into a, a, a truly, um, you know, safe ecosystem for, for those sovereigns? And so I think there's a lot of work to do still on the cyber side, um, especially as you look at, you know, scaling sovereign AI.
- 29:54 – 32:48
Physical AI Capital Needs
- SGSarah Guo
What is your thinking on physical AI?... So it's, you know, if it works-
- NTNeil Tiwari
Yeah
- SGSarah Guo
- CapEx-intensive build.
- NTNeil Tiwari
Absolutely.
- SGSarah Guo
Yeah.
- NTNeil Tiwari
And, you know, maybe I'll just take a second to say, one of the things that we observed, um, from 2010 to, like, the, you know, the early 2020s, was we were in a very capital-asset-light mode of build.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Like, SaaS was-- You know, you never heard Magnetar in SaaS, right?
- SGSarah Guo
No.
- NTNeil Tiwari
'Cause it was just purely asset-light.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Compute and everything we saw starting in, you know, 2021 is asset-heavy, and that's where you started hearing a lot more about us. And I think physical AI is actually an extension of that. And so what you're seeing is, part of the reason, I think... And I think we all have scars from the 2010s of hardware companies that did not make a lot of money for us.
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
Uh, part of the scars was, it was so difficult to scale hardware companies, um, you know, because the software was so difficult to build. You needed to spend so much money building the hardware, the software was an afterthought. What you're seeing now is, now that you have more general-purpose, uh, software via AI, uh, it can make the hardware easier to scale because you have, you know, software that can interact with more hardware. And so I think the natural kind of extension of what we see is kind of what happened in the compute markets, where you really needed flexible capital, where it wasn't just equity, it was debt and, you know, a variety of project finance to really scale CapEx. You're gonna see that same kind of need, uh, in physical AI, and it simply has to do with capital intensity, right? You know, on the compute side, for, like, CoreWeave as an example, they needed billions of capital to scale, uh, you know, that cloud. And I think whether it's a robotics company or whether it's a, you know, a manufacturing-focused company, drones, defense, all of these areas are incredibly capital-intensive. And then, now that you add AI into them, I think it can help them scale faster, uh, quite frankly, and, uh, capital intensity is still there. And so there's a moment in time now where you're gonna have to really look at optimizing balance sheets, um, for physical AI to really grow and scale.
- SGSarah Guo
I think to your point of how the, um, early AI compute contracts were structured, um, I, I, I went from, you know, learning to be an investor in an era and an environment where robotics was a great way to lose a lot of money-
- NTNeil Tiwari
Yeah
- SGSarah Guo
- for a long period of time.
- NTNeil Tiwari
Mm-hmm.
- SGSarah Guo
You remember that, too?
- NTNeil Tiwari
Yeah.
- SGSarah Guo
Um, now I sit on the board of two robotics companies-
- NTNeil Tiwari
[chuckles]
- SGSarah Guo
- so let's hope it's not true anymore. But I, I'd say, like, it, it's just a question of capability to me. Like, you know, whether it's in the home or in industrial settings, where, like, it is simply not a good human job, or we don't have the labor-
- NTNeil Tiwari
Yeah
- SGSarah Guo
... um, I think the products will support investment-grade buyers-
- NTNeil Tiwari
Yep
- SGSarah Guo
- who are going to have contracts that say, like: "We want it," and you can raise debt against it.
- NTNeil Tiwari
Exactly.
- SGSarah Guo
Right. Um, and so I, I think actually that, that feels of a very similar, um,
- 32:48 – 36:04
The Capital Rotation Away from SaaS
- SGSarah Guo
shape. Last question for you, because it is so timely: What do you make of the general capital rotation out of, out of software, the end of software, and it's all, it's all infrastructure labs and AI natives, I guess?
- NTNeil Tiwari
Yeah, yeah. It's interesting to see that every day there's another industry that kind of tanks, whether it's... You know, you saw the wealth advisors tank for a few days, you saw the consulting, consulting companies, you saw-
- SGSarah Guo
Payments
- NTNeil Tiwari
... real estate. Payments-
- SGSarah Guo
Yeah
- NTNeil Tiwari
... real estate, right?
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
And I think what you're seeing, at least in my view, is that towards the tail end of 2025 and into 2026, like, there was a big step up in performance of usable AI. And I think, you know, what Anthropic was doing really, and Claude, and, like, we use it all. You know, obviously, we use all the models, but, you know, there was a definite step up in performance in making AI usable and seeing that it can, you know, truly disrupt these, you know, non-AI native industries. Uh, I think the reaction and rotation out of each of these names is, is a bit much because-- I think there's two factors I look at. One is, when you look at valuations, as an example, I think, um, from a free cash flow perspective, SaaS companies are valued at the lowest they've been in years, you know. And there's a huge margin difference between, you know, what those rev multiples are today and what they've been in the past. And so free cash flow margins have increased significantly for SaaS as a whole over the last four or five years, and revenue multiples have stayed, you know, the same or gone down.
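Neil's margins-versus-multiples point is easy to see with back-of-the-envelope arithmetic. The figures below are purely hypothetical, not any specific company's: if the free-cash-flow margin rises while the revenue multiple stays flat, the price paid per dollar of free cash flow falls.

```python
# Hypothetical arithmetic behind the margin-vs-multiple point. If FCF margins
# rise while the revenue multiple stays flat, the implied EV/FCF multiple
# actually compresses. All figures are made up for illustration.

def fcf_multiple(rev_multiple, fcf_margin):
    """EV/FCF implied by a revenue multiple and a free-cash-flow margin."""
    return rev_multiple / fcf_margin

# Same 8x revenue multiple, but the FCF margin improving from 20% to 32%:
then = fcf_multiple(8.0, 0.20)  # 40x FCF
now = fcf_multiple(8.0, 0.32)   # 25x FCF
print(then, now)
```

In other words, a flat revenue multiple over a period of margin expansion is a de-rating in cash-flow terms, which is the sense in which he calls the sector-wide sell-off an exaggeration.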
- SGSarah Guo
Mm-hmm.
- NTNeil Tiwari
And so, to me, that's a bit of an exaggeration because it really has to do with individual names versus sectors, and I think that's kind of, at least, my take: in all of these sectors, there are individual names that will learn how to maximize their, uh, value using AI, and there's those that won't. Uh, but what's happening right now is there's, you know, a hammer being hit across all names and not, you know, specific individual names that might not be using it as well. Um, and then the second point, at least, you know, my view is, there are a number of applications that, you know, on paper, sound really interesting, like, oh, AI could just rebuild Slack, or it could rebuild Salesforce, or it could rebuild, you know, X, Y, and Z. I think, you know, it's not just the product, it's the way that it's integrated across multiple services and systems across the enterprise that is a lot more difficult to just replicate-
- SGSarah Guo
Mm-hmm
- NTNeil Tiwari
... than I think some of the public markets are, are kind of reacting to.
- SGSarah Guo
And I do think there's, um, a fundamental question, in addition to what you said, which I agree with, of like: Does anybody wanna rebuild it-
- NTNeil Tiwari
Yeah
- SGSarah Guo
- and own it?
- NTNeil Tiwari
Mm-hmm.
- SGSarah Guo
And, uh, you know, there are, to your point of, like, within the software sector in particular, um, there are companies that are, uh, structurally more protected-
- NTNeil Tiwari
Yeah
- SGSarah Guo
- and there are companies that are at more risk.
- NTNeil Tiwari
Yep.
- SGSarah Guo
Right?
- NTNeil Tiwari
Agree.
- SGSarah Guo
And I, I think it's as simple as, like, you gotta go select.
- NTNeil Tiwari
Yeah, exactly.
- SGSarah Guo
Um, this has been so fun. Thanks so much, Neil.
- NTNeil Tiwari
Yeah, I really appreciate it.
- SGSarah Guo
Congratulations on all the innovation and, uh, on building out all the compute.
- NTNeil Tiwari
Awesome. Thank you. Good to be here. [music]
- SGSarah Guo
Find us on Twitter, @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.
Episode duration: 36:04
Transcript of episode WSxVh5WvWZ4