No Priors

How Capital is Powering the AI Infrastructure Buildout with Magnetar Capital's Neil Tiwari

By the end of 2026, AI capital expenditure is projected to hit nearly $700 billion. The question isn’t who has the best model, but who has the most creative financing to build out AI infrastructure and beyond. Sarah Guo is joined by Neil Tiwari, Managing Director at Magnetar Capital, a financial innovator helping the AI industry scale from billions to trillions of dollars in CapEx. Neil explains some of the debt structures used to finance massive GPU clusters, who is taking the risk, and how the industry is maturing. Sarah and Neil also discuss how power distribution, energy storage, and physical materials like steel are the bottlenecks of the AI industry. Plus, Neil gives his take on the future of inference-optimized clouds, and why the market shift away from software and into infrastructure might be an overreaction.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil

Chapters:
00:00 – Cold Open
00:05 – Neil Tiwari Introduction
00:26 – Magnetar’s Story
01:28 – Why CoreWeave Helped Magnetar Win
06:15 – Scaling CapEx Efficiently
09:02 – Debunking GPU Collateral Risk
11:42 – How Deal Structures Evolve
13:01 – What Bottlenecks Buildout
15:28 – Circular Financing Critiques
17:35 – The Shift from Training to Inference Workloads
23:10 – AI Factories
24:12 – Constraints of the Current Power Grid
28:27 – Sovereign Compute Buildouts
29:54 – Physical AI Capital Needs
32:48 – The Capital Rotation Away from SaaS
36:04 – Conclusion

Sarah Guo (host), Neil Tiwari (guest)
Feb 26, 2026 · 36m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–0:05

    Cold Open

    1. SG

      [upbeat music]

  2. 0:05–0:26

    Neil Tiwari Introduction

    1. SG

      Hi, listeners. Welcome back to No Priors. Today, I'm here with Neil Tiwari of Magnetar Capital. This is a twenty-two billion dollar alternative asset manager at the center of the AI compute build-out. We talk about the financial innovation, depreciation of GPUs, and what's next in AI compute. Welcome. Thanks so much for doing this, Neil.

    2. NT

      Absolutely. You know, really happy

  3. 0:26–1:28

    Magnetar’s Story

    1. NT

      to be here.

    2. SG

      So you are leading AI infrastructure at Magnetar. You're at the center of the build-out, enabling it, financing it. For any of our listeners who haven't heard, can you just explain a little bit what Magnetar is?

    3. NT

      Sure, um, so Magnetar's been around for-- actually, this is our, our twentieth year. Uh, we're an alternative asset manager, and that can mean a lot of different things.

    4. SG

      Mm-hmm.

    5. NT

      Um, but we have three primary strategies. The first one is private credit, uh, the second one is a venture strategy, and the third is more of a systematic or quantitative-focused, uh, public strategy as well. And so I think, you know, when, when people look at us and, and, you know, why are we here in this moment, especially on building out AI infrastructure, um, I think a lot of it has to do with kind of our unique lens on helping to build, uh, capital-intensive businesses and using creative financing, whether it's venture or other structures with unique elements, and I think we're going to talk a lot about that, but, um, to build out, uh, and, and optimize the balance sheets for these capital-intensive businesses.

  4. 1:28–6:15

    Why CoreWeave Helped Magnetar Win

    1. SG

      So I remember hearing about you guys originally. So you're the first investor I think we've ever had on the podcast, I'm excited about this.

    2. NT

      That's exciting. Thank you. [chuckles]

    3. SG

      Uh, I remember hearing about you and Magnetar initially around... I was like, "Who's this big owner of CoreWeave?" [chuckles]

    4. NT

      Yeah.

    5. SG

      And also, um, you know, helping OpenAI with some of their early build-outs. When did you guys first start looking at the problem and thinking about how to, how to solve it?

    6. NT

      Yeah, so we actually, you know, stumbled across the, the compute problem before it was compute. Um, you know, we met, uh, CoreWeave back in, uh, twenty twenty-one, and that was when they were actually transitioning from, uh, mining Ethereum into, uh, high-performance compute. And at that time, it was using the GPU as a, you know, uh, an instrument to mine, uh, cryptocurrencies, and interestingly, that same instrument could be used for high-performance computing applications. Uh, and the first one was, uh, visual effects, uh, which-- so think of, like, things like movies, Marvel movies, and things like that.

    7. SG

      Mm-hmm. Mm-hmm.

    8. NT

      And so they were transitioning, um, at that point, between crypto mining into the first kind of, uh, high-performance compute use case, and this was all before AI.

    9. SG

      Mm-hmm.

    10. NT

      And so we made our first investment before the AI trade started, um, but we added a lot of optionality where, you know, we could envision a world where, uh, the GPU could be used for a lot of different high-performance kind of computing applications. I think, um, you know, AI was on the radar, machine learning was on the radar for us, um, but w- I wouldn't say that we could foresee-

    11. SG

      Mm-hmm

    12. NT

      ... everything that happened. We just happened to be, you know, at the right place at the right time, and we continued to double down, um, as the company progressed and started, you know, shifting into more workloads that were machine learning and, and kind of AI training based.

    13. SG

      Did you have, like, an existing significant data center investing footprint?

    14. NT

      No.

    15. SG

      Mm-hmm.

    16. NT

      I mean, I think, you know, uh, interestingly, at Magnetar, there, you know, w- we have invested across asset classes. Um, so we, we've done a lot of property investing, real estate investing, as an example, um, investing in energy. We had an energy business historically, and so a lot of the elements for, you know, what constitutes a data center: power, energy, land, uh, real estate. You know, we had a lot of the, the background in those spaces. I think we were new to compute, right?

    17. SG

      Mm-hmm.

    18. NT

      Like I-- that was a, a new sector for us, and so kind of those two worlds merging, um, you know, we, we obviously, you know, came up on the curve on the compute side, uh, but we had a lot of, you know, background on, um, the, the elements that constitute what it means to build a cloud.

    19. SG

      So you guys just really-- you were in this company, you saw the demand, and you said, like: "It's gonna grow, and we're gonna make this a big part of our business."

    20. NT

      Exactly.

    21. SG

      Yeah.

    22. NT

      I think, you know, what was interesting was we made our first investment in twenty twenty-one, um, and then about a year later, we continued to see expansion of use cases, uh, for... At that time, it was called high-performance compute, and then it was kind of towards the end of twenty-two, the whole AI, uh, discussion started. And as we entered twenty twenty-three, uh, CoreWeave, uh, started to train models for OpenAI.

    23. SG

      Mm-hmm.

    24. NT

      Um, and that's when things really started growing, because the sheer amount of compute that was needed to train an LLM, this was, like, the first time it had ever been done. And what was interesting was what kind of allowed them to take advantage of that opportunity was the historical kind of backgrounds of a lot of the founders, uh, were in energy asset management. And when you fast-forward to today, and you look in, like, what it-- what constitutes your ability to build a GPU cloud, it's your ability to manage these highly complex assets, and it fundamentally comes down to access to power and energy.

    25. SG

      Mm-hmm.

    26. NT

      And so they had these elements with them, and they obviously brought on a lot of talent on the cloud side. And so you put all these together, and at that moment, it allowed them to, um, you know, build very large-scale, reliable, um, clusters for OpenAI and obviously many other customers since then. And I think the last comment I'll make is, what really allowed them to kind of win this market early on was focus on two things. It was scale-

    27. SG

      Mm-hmm

    28. NT

      ... and reliability. And I think those were the two things that, um, are really difficult for a lot of the new entrants since then, 'cause scale has to do with your access to capital-

    29. SG

      Mm-hmm

    30. NT

      ... your access to energy, power, data center. And then reliability really had to do with their, their ability to manage a giant fleet of GPUs, uh, which is actually quite complicated. Um, you know, whether it's reliability from, you know, GPU failures or software challenges, you know, building a fleet that can healthily be online all the time at, you know, ninety-nine point nine percent reliability is incredibly difficult, and that's something that they had started back in twenty seventeen, twenty eighteen timeframe, and, and they were at the right moment, at the right place, with the right technology stack, um, to really build, um, uh, the optimal cloud for that moment.

  5. 6:15–9:02

    Scaling CapEx Efficiently

    1. SG

      ... I've definitely experienced that with, you know, our portfolio of companies that are building large training clusters. Uh, uh, it, uh, CoreWeave has a reputation-

    2. NT

      Yeah

    3. SG

      -for reliability that not everyone has reached. Can you just help characterize, if you fast-forward, like, two and a half, three years now, like, what is the scale of the problem today?

    4. NT

      Yeah. So if you look at, um, kind of CapEx, right? Let's start with that. So CapEx for AI compute and infrastructure in twenty twenty-six, you know, at least from the hyperscalers, is projected to be between six hundred and sixty and six hundred and ninety, uh, billion dollars. And over the next several years, um, you know, that scales to trillions of dollars, right? And so the, the scale of the problem is: how do you build, um, you know, that size of CapEx efficiently? And I think a lot of that has to do with not only, you know, your ability to have access to, you know, those core elements, um, energy, power, you know, uh, and, and your ability to have data center space, et cetera. But I think one of the things that's not talked about as much is capital-

    5. SG

      Mm-hmm

    6. NT

      ... and access to capital, and how is capital structured? Um, and what I mean by that is, this is, you know, billions to trillions of dollars of CapEx.

    7. SG

      Mm-hmm.

    8. NT

      And just using equity dollars alone is not an efficient way to scale this. That's obviously massive dilution. You know, there's, there's-- it's not an easy problem to solve.

    9. SG

      When we first met-

    10. NT

      Yeah

    11. SG

      ... I had, like, slowly come to this realization. I was like: "I don't think we should take the dilution for the cluster."

    12. NT

      Yeah.

    13. SG

      Yeah.

    14. NT

      Right. Exactly. And so that's where I think, you know, when you and I have talked about, like, structuring, and, and I can give a couple examples, um, if that's helpful. I think the first one was, uh, DDTL structures or SPV debt structures that, um, had a... think of it as, like, an SPV.

    15. SG

      Mm-hmm.

    16. NT

      Inside of the SPV are the Cap-- is the CapEx, the collateral, um, which is the GPUs, and the contracts themselves. Um, and so in this example-

    17. SG

      Mm-hmm

    18. NT

      ... the actual asset or collateral was not really just the GPUs themselves, it was really the con-contracted cash flows-

    19. SG

      Mm-hmm

    20. NT

      ... from, in this case, investment-grade counterparties. And so I think the reason-

    21. SG

      This is the consumer-

    22. NT

      Yeah

    23. SG

      ... of the compute.

    24. NT

      The consumer of the compute.

    25. SG

      Yeah.

    26. NT

      Exactly. You know, your Microsofts, your, your Metas, et cetera, of the world. And I think the reason, um, that was done is, is really twofold. When, when you look at the scale of the problem, uh, you know, those particular contracts, uh, needed billions of dollars of debt to finance the CapEx. You know, obviously, for a nascent and, and new and growing company, that's, that's really hard to raise. Um, so part of structuring it this way is ensuring that you have kind of guaranteed offtake on the back end to, uh, minimize the risk for, you know, debt holders.

  6. 9:02–11:42

    Debunking GPU Collateral Risk

    1. NT

      And I think that's a lot of what the market got wrong, um, especially when there was a lot of press about this early on-

    2. SG

      Mm

    3. NT

      ... where it was, "There's billions of debt on these highly depreciating assets, and it's extremely speculative." And the, what was of-oftentimes characterized in the media was, uh, these debt structures had GPUs as collateral, and that's like putting a used car a-as collateral, which is obviously just gonna depreciate incredibly fast. You know, that's a very risky kind of structure. And I think what got missed was the, the GPUs themselves were actually, like, the second, second or tertiary level of collateral in those instruments. The primary collateral, uh, was the contracted cash flows from investment-grade counterparties. And so, like-

    4. SG

      It's Microsoft or NVIDIA or somebody like that-

    5. NT

      Exactly

    6. SG

      ... saying, "I'm committed to pay you."

    7. NT

      Exactly.

    8. SG

      Or, like, "I know you can pay me."

    9. NT

      Take or pay contracts-

    10. SG

      Yeah

    11. NT

      ... and they're, like, five years in length.

    12. SG

      Mm-hmm.

    13. NT

      So I think that was, like, one feature, uh, that, that's unique to talk about. And then the second one really has to do with, um, the debt itself and how it amortizes. And so, like, in simple terms, you know, when you have debt, you have principal and interest, and you have to pay it off over time. And in these structures, typically, the payback period on the CapEx was roughly two to three years. Um, and the, uh, structures themselves, the debt was over five years, you know, four to five years in length, where the entire debt amortized during the, um, outstanding period that the, that the debt was out. And so at the end, you ended up with zero balance, uh, for the debt, and there was no balloon payment or, or anything that was really due on the back end. And so the question that often c-- you know, comes up, uh, is, you know, isn't that a very risky, uh, type of structure because these things are depreciating incredibly quickly? So I think, you know, there's, there's two comments here. First is, on that depreciation question, in these kind of debt structures, it doesn't really matter because the debt's fully paid off by the end of the debt term against committed contractual, um, you know, contracts from investment-grade counterparties. Um, and then at the very end, the, the actual upside or residual value, and I know there's a lot of questions on, on residual value, is, is held by, um, you know, the, uh, the cloud player in this example, right? CoreWeave, right?

    14. SG

      Mm-hmm.

    15. NT

      Or, or, you know, any others. Um, and that's a really interesting prospect because you can see a world where all of this CapEx is paid off-

    16. SG

      Mm-hmm

    17. NT

      ... incredibly quickly, and there's an opportunity to redeploy it. Um, where you can redeploy it, um, without having to pay for any additional, uh, you know, debt, obviously, against that redeployment.
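The amortization mechanics Neil walks through (a two-to-three-year payback on the CapEx, debt that fully amortizes over its four-to-five-year term, a zero balance and no balloon at maturity) can be sketched with toy numbers. The principal, rate, and term below are invented for illustration, not taken from any actual deal.

```python
# Toy sketch of a fully amortizing GPU-cluster loan, as described above:
# contracted cash flows service the debt, and the balance reaches zero at
# maturity with no balloon payment. All figures are hypothetical.

def level_payment(principal: float, annual_rate: float, years: int) -> float:
    """Annual payment for a fully amortizing loan (standard annuity formula)."""
    r = annual_rate
    return principal * r / (1 - (1 + r) ** -years)

def amortization_schedule(principal: float, annual_rate: float, years: int):
    """Yield (year, interest, principal_paid, remaining_balance) per year."""
    payment = level_payment(principal, annual_rate, years)
    balance = principal
    for year in range(1, years + 1):
        interest = balance * annual_rate
        principal_paid = payment - interest
        balance -= principal_paid
        yield year, interest, principal_paid, balance

# Hypothetical deal: $2B of debt at 9%, amortized over 5 years.
for year, interest, paid, balance in amortization_schedule(2_000_000_000, 0.09, 5):
    print(f"Year {year}: interest ${interest:,.0f}, principal ${paid:,.0f}, "
          f"balance ${balance:,.0f}")
# By year 5 the balance is ~$0: no residual balloon, and any residual GPU
# value is upside retained by the cloud operator.
```

Because the schedule is driven entirely by the take-or-pay contract term, the GPUs' resale value never enters the debt-service math, which is the point Neil makes about depreciation not mattering to the lender.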

  7. 11:42–13:01

    How Deal Structures Evolve

    1. SG

      How have the instruments changed?

    2. NT

      They've changed in several ways, where, uh, you know, the first is... And when you look at these SPVs, I think you're starting to see ways to change the portfolio construction of who can go inside of one of these debt structures. And so, you know, early on, in the early days, these were all only investment-grade counterparties.

    3. SG

      Mm-hmm.

    4. NT

      'Cause there was-- the, the space was so nascent, the operators had no experience, and I think now what you're starting to see is a blend of investment-grade and non-investment grade. So, like, what does that actually mean? What that means is, you know, you're, you're seeing these structures with investment-grade counterparties, like your hyperscalers and your other corporates that, that are IG. Um-... um, mixed alongside, uh, some of the AI-native companies. And so think of the AI model companies, the labs, software companies that are building AI, startups. You're seeing those companies get mixed in alongside, um, the IG companies to build a portfolio, 'cause now you have, you know, the cr- the history that you can do this.

    5. SG

      Yeah.

    6. NT

      And now you have structures where you can kind of balance the risk, uh, with, with IG and non-IG. And we're continuing to see that kind of move to be able to help finance, you know, really the model companies and a lot of these startups. Obviously, that was difficult to do, you know, three or four years ago. That's starting to become easier, um, as these companies have more runtime and ability to, uh, you know, make the compute

  8. 13:01–15:28

    What Bottlenecks Buildout

    1. NT

      fungible.
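The blended IG/non-IG portfolio construction described in the previous section can be sketched as a toy model. All counterparty names, cash flows, and default probabilities below are hypothetical, chosen only to show how mixing investment-grade and non-investment-grade offtake changes the SPV's risk profile.

```python
# Toy sketch of blending investment-grade (IG) and non-IG offtake contracts
# inside one SPV. Numbers are invented for illustration, not market data.

contracts = [
    {"counterparty": "hyperscaler A", "ig": True,  "annual_cash": 400e6, "p_default": 0.002},
    {"counterparty": "hyperscaler B", "ig": True,  "annual_cash": 300e6, "p_default": 0.003},
    {"counterparty": "AI lab C",      "ig": False, "annual_cash": 200e6, "p_default": 0.05},
    {"counterparty": "AI startup D",  "ig": False, "annual_cash": 100e6, "p_default": 0.10},
]

total = sum(c["annual_cash"] for c in contracts)
expected_loss = sum(c["annual_cash"] * c["p_default"] for c in contracts)
ig_share = sum(c["annual_cash"] for c in contracts if c["ig"]) / total

print(f"IG share of contracted cash flows: {ig_share:.0%}")
print(f"Expected annual loss: ${expected_loss / 1e6:.1f}M on ${total / 1e9:.1f}B "
      f"({expected_loss / total:.2%} of cash flows)")
```

Keeping the IG share high is what lets the structure absorb a few riskier AI-native counterparties while the overall expected loss stays small.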

    2. SG

      All our, uh, portfolio companies that buy compute tell me it's a supply-constrained-

    3. NT

      Mm-hmm

    4. SG

      ... market today. One, is that true, and two, when you think about, like, wow, continuing to grow your business or grow this ecosystem, like, what's going to stop it? Like, what could slow down a build-out?

    5. NT

      Yeah. I mean, I think what's interesting is, uh, if you look at, like, 2023, 2024, we were very supply-constrained, and the supply constraint was chips.

    6. SG

      Mm-hmm.

    7. NT

      And no one could a- get access to chips.

    8. SG

      Yes.

    9. NT

      And then-

    10. SG

      We bought chips.

    11. NT

      We bought chips, right? [laughing]

    12. SG

      Yeah. [laughing]

    13. NT

      And, you know, there was this thought that, okay, there's gonna be an overbuild of chips, and then the supply constraints will go away. Well, you know, fast-forward to 2026, and what we see is, you know, there is obviously more availability of chips, but to build and operate these, uh, you know, data centers requires people, power, infrastructure, a lot of these things that, uh, have a lot of, of bottlenecks. And so actually taking these chips and then making them into useful revenue-generating assets is really the bottleneck now.

    14. SG

      It's also not clear that there is supply of chips at the latest-

    15. NT

      Correct

    16. SG

      ... uh, generation-

    17. NT

      Yeah

    18. SG

      ... at scale-

    19. NT

      That's true

    20. SG

      ... soon, which is how everybody wants them.

    21. NT

      Exactly.

    22. SG

      Yeah.

    23. NT

      I think, you know, you're starting to see that it's not only the, the high-end players who want access to the latest chips, you know, obviously, startups want access to those too, and I think it has to do with efficiency.

    24. SG

      Mm-hmm.

    25. NT

      Um, you know, one of our friends, or one of your friends as well, Dylan Patel, over at SemiAnalysis, posted this interesting article last week on inference, and inference spend, and inference kind of performance.

    26. SG

      Mm-hmm.

    27. NT

      Um, and, you know, there, there's a lot of, you know, jokes made about Jensen math. Um, and it was interesting 'cause the-

    28. SG

      Seems pretty good at math, honestly. [chuckles]

    29. NT

      He's, he's actually great at math. Um, and so for the, uh, Hoppers, the H100 or H200 series of GPUs into the Blackwells, uh, there was a claim made that it could be thirty times more efficient, and I think the data from, you know, SemiAnalysis showed that it was ninety to a hundred times more efficient-

    30. SG

      Mm

  9. 15:28–17:35

    Circular Financing Critiques

    1. SG

      Um, help me address, like, this, uh, criticism around circular financing.

    2. NT

      Yeah, I know, um, it's obviously a topic du jour, and I think, you know, the way we see it and frame it really has to do with the demand signals-

    3. SG

      Mm-hmm

    4. NT

      ... um, and who are the eventual buyers, and, and how is this being used? And so, at least from our perspective, we continue to see, uh, insatiable demand. Um, and if you go back to, you know, the previous kind of big tech build-out back in the early 2000s, there was obviously a lot of fiber that was being built, and you had dark fiber, you know, an overbuild happening. And I think what you see here is, you know, you don't see any dark GPUs-

    5. SG

      No, I've been looking

    6. NT

      ... any GPU. Exactly.

    7. SG

      Yeah. [chuckles]

    8. NT

      Any GPUs used.

    9. SG

      Yeah.

    10. NT

      Um, and then number two, you're starting to see, uh, actual economic value. Um, so I think last year, Enterprise AI had about thirty-seven billion of total TAM, um, and it's continuing to grow like crazy, and at least personally, and, and, and I'm sure you see this too, but I use these tools all the t- all the time, and I find it-

    11. SG

      Continuously

    12. NT

      ... incredibly valuable, right? The actual tokenomics of positive, uh, ROI is, is actually here now, I think, from our perspective. Um, and so the, the circularity, you know, comment, I think, applies when you're building, um, you know, speculative, uh, compute and capacity, uh, or if you're, you know, purely doing vendor financing, where you need some type of, you know, rev-rec-type item related to that. And that, that's not what we see. Like, what we see is financing to support, to build out the demand against, uh, use cases that are very positive in their ROI. And so, like, our perspective is that that's, uh, you know, not a real, real concern that we have. Um, and it, and it really has to do with who are the ultimate buyers here? The ultimate buyers have been, at scale, the hyperscalers. They're deploying this, uh, at scale, and the economics are positive, uh, when you look at a unit economic basis in terms of, uh, deploying intelligence. Um, and I think we're at a moment in time where we're really starting to see that.

  10. 17:35–23:10

    The Shift from Training to Inference Workloads

    1. SG

      In my own experience, um, I have been a heavy AI user for several years.

    2. NT

      Mm-hmm.

    3. SG

      But reasoning advances the ability to scale inference, especially around code-

    4. NT

      Mm-hmm

    5. SG

      ... means I'm up against my max limit all the time- [chuckles]

    6. NT

      Yeah

    7. SG

      ... in a way that was not true, uh, uh, uh, initially. How are the inference workloads actually growing? I mean, it's a, it's a good demand signal-

    8. NT

      Mm-hmm

    9. SG

      ... that there is value, but how does that change your business?

    10. NT

      Yeah, so I think one thing that's interesting that we're seeing is, obviously, there's been the, the shift from training to inference, you know, over the last few years. That, that split continues to grow on the inference side as usable, uh, and ROI-positive applications get developed. I think the two things I see on the inference side now are, um, that inference is a lot more complex than I think initially thought, and what I mean by that is it's not as simple as... um, you train a model, and then it's easy to inference it. In certain, certain cases, you can do that on, on similar infrastructure, but there are issues around latency, um, fungibility of that, uh, and, and really optimizing the cost of your compute on the inference side. Um, how do you manage, uh, you know, peaks of inference demand? And, and obviously, it's not linear like training, and your GPUs are on all the time, you know, a hundred percent of the time. And so with inference, you have a lot more variability.

    11. SG

      Mm-hmm.

    12. NT

      Um, and so there's a lot more nuances, uh, in, in optimizing inference. I think the second thing that's observed, um, that I've seen is, uh, inference is definitely a memory problem, a memory throughput problem. Um, you know, on the inference side, you know, you have these kind of phases called prefill and, and decode-

    13. SG

      Decode.

    14. NT

      Right? And how you optimize that across a fleet of GPUs is actually a unique technical problem. Um, and then the third is, what I would say is distribution. Um, you know, a lot of times training infrastructure is, is quite centralized. What you're seeing with inference is, in many use cases, as this becomes more ubiquitous, you're gonna have more and more decentralized, uh, inference clusters.

    15. SG

      Mm-hmm.

    16. NT

      And actually, one of my favorite companies is one of your companies, Base10, which is really, you know, optimizing distributed inference at scale. And I think one thing that's interesting when you look at companies like that and, and other inference clouds is: how do you optimize the, uh, compute and, and build out these clusters that could actually look very different than a training cluster? Where training cluster might be fifty, hundred, hundred and fifty megawatts in one kind of four walls.

    17. SG

      Mm-hmm.

    18. NT

      I think you're starting to see distributed inference, which could be, you know, four or five megawatts and five separate data centers and stitching them together in, in different areas, right? And that looks very different from a kinda power perspective, how you... You know, the, the software matters a lot more when you're doing, like, distributed inference. And then, in terms of your question, how it impacts us, I think one of the things that we've been, you know, focused on is, um, you know, where we started this conversation with you on, um, financing compute, that was really obviously... Uh, it started with mostly training. Um, a lot of those hyperscalers are now doing a lot of inference on that same infrastructure, but these are investment-grade counterparties. You know, it's easy to- it's easier to lend, uh, money to build out these clusters to those customers. I think now that you have this new crop of inference clouds and application layer companies that are needing tons of inference, I think the, the key question that we're really focused on is: how can we finance the next build, which is distributed inference?

    19. SG

      Mm-hmm.

    20. NT

      Um, and maybe the last, you know, one or two takeaways would be, uh, one thing I'm seeing is, you know, for every application layer company out there, the highest line item from COGS is compute.

    21. SG

      Mm-hmm.

    22. NT

      Um, and then the inference companies and inference clouds out there, most of them are, um, purchasing compute from either other clouds or unused, uh, capacity. And when you look at, like, margins for that, you've got, like, layered margins.

    23. SG

      Mm-hmm.

    24. NT

      And so there's a push to kind of own your own infrastructure-

    25. SG

      Right

    26. NT

      ... um, to really drive and increase, you know, profit margins, but also it's the ability to kind of have control of your own destiny. And I think a lot of folks, the application layer companies and inference clouds, are grappling with: how can we build and own and operate our own infrastructure? Um, and that's something I'm, I'm really looking into.

    27. SG

      Mm. I am too, and I think one of the things that, uh, is going to make a big difference in this ecosystem is, like, can the inference clouds, like Base10, can they deliver reliability-

    28. NT

      Mm-hmm

    29. SG

      ... that you would expect from a, a cloud?

    30. NT

      Yeah.
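Neil's point that inference is "a memory throughput problem" with distinct prefill and decode phases can be made concrete with a back-of-envelope arithmetic-intensity estimate. The model size and fp16 assumption below are illustrative, not tied to any specific deployment.

```python
# Back-of-envelope for the prefill/decode split. For a dense transformer
# with n_params weights (fp16, 2 bytes each), a forward pass costs roughly
# 2 * n_params FLOPs per token, and each pass must stream all weights from
# memory. Illustrative model size only (a 70B-parameter model is assumed).

def arithmetic_intensity(tokens_per_pass: int) -> float:
    """Approximate FLOPs per byte of weight traffic for one forward pass.

    Prefill processes the whole prompt in one pass (tokens_per_pass = P),
    amortizing the weight read; decode generates one token per pass.
    """
    n_params = 70e9                 # assumed model size
    flops = 2 * n_params * tokens_per_pass
    bytes_moved = 2 * n_params      # read each fp16 weight once per pass
    return flops / bytes_moved

print(arithmetic_intensity(2048))  # prefill a 2048-token prompt -> 2048.0
print(arithmetic_intensity(1))     # decode one token at a time  -> 1.0
# A GPU whose compute-to-bandwidth ratio is in the hundreds of FLOPs/byte
# is compute-bound during prefill and heavily memory-bandwidth-bound during
# decode, which is why batching decode requests across users matters so much.
```

This gap between the two phases is also why prefill and decode are often scheduled on different hardware pools in inference clouds, a design question distinct from training-cluster layout.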

  11. 23:10–24:12

    AI Factories

    1. NT

      kind of one thing I, I find really interesting that NVIDIA is doing is, is this concept of AI factories.

    2. SG

      Mm-hmm.

    3. NT

      And building AI factories, um, you know, behind corporates and AI companies, and maybe the way I unpack that is you've got kind of more large, monolithic cloud players, the hyperscalers and the neo clouds, that are building large-scale, um, you know, cloud environments. Uh, and a lot of where I think NVIDIA and others see this going is, yes, those are gonna be important components, and those are gonna be huge markets, but corporates, Fortune, you know, five hundred AI companies that use a ton of compute-

    4. SG

      Mm-hmm

    5. NT

      ... will want dedicated AI factories associated with workloads that they run and that they have control over. And so I think you're starting to see, you know, the early indications of how do you finance and build out, uh, almost think of, like, literally AI factories that sit on-prem-

    6. SG

      Mm-hmm

    7. NT

      ... with the company that can operate their workloads. Uh, it's a-

    8. SG

      You're talking about my Mac mini farm.

    9. NT

      Exactly. [laughing]

    10. SG

      [laughing] No, but, but all joking aside,

  12. 24:12–28:27

    Constraints of the Current Power Grid

    1. SG

      I, I think one thing that is another supporting factor for use of all of the compute we have is, and, and can create over the coming years, is, um, power is clearly the limiting factor.

    2. NT

      Mm-hmm.

    3. SG

      Um, it's easier to get more power in smaller-

    4. NT

      Mm-hmm

    5. SG

      ... units.

    6. NT

      Yep.

    7. SG

      I think that as inference demand is growing, anyone who has, uh, usable compute for inference is gonna find a lot of partners for offtake.

    8. NT

      Exactly.

    9. SG

      Okay, let's look at the future a little bit while we, while we have ten minutes. Um, uh, let's talk about the, the macro. Like, people talk about energy, they talk about, um, natural gas, uh, the grid, the slowness of nuclear. Like, what do you think about over the next six or twelve months?

    10. NT

      Over the last year, I've been spending a ton of time in the power and energy markets, um, and looking at interesting solutions that can help scale power, you know, for the gap that we see. I think a few observations that we've seen, the first is, um, we do have a power problem, but I think it's a bit more nuanced than, than a, a lot of the reporting out there, where I think-

    11. SG

      It's just we can't generate.

    12. NT

      We can't generate, yeah.

    13. SG

      Yeah, yeah.

    14. NT

      I think there's actually quite a bit of stranded power across the grid, across the country. And what I mean by that is, you know, a lot of the utilities are built in a way where they're focused on peak power, right? So they've got natural gas peakers, and they're focused on, you know, providing peak power-

    15. SG

      Mm-hmm

    16. NT

      ... for those moments where demand is, is kind of off the charts. Um, and that's obviously only for a few days out of the year. So there's lots of generating assets out there. Uh, the question is, they're a bit stranded, right? And so there's kind of-- I, I look at the power problem as being kind of multiple fold. The first one is, how can you take the power we have on the grid and actually make it usable?

    17. SG

      Mm-hmm.

    18. NT

      And a lot of that has to do with flexibility and storage. And so we've been spending a lot of time looking at the energy storage and distribution business. How can you store unused capacity, shave peak demand, store it, and then distribute it when it's needed?

    19. SG

      Mm-hmm.

    20. NT

      Um, we made an investment in a company called Taurus. I think I've, I've mentioned, uh, to you, which is building like this distributed utility layer, uh, almost like this mesh infrastructure-

    21. SG

      Right

    22. NT

      ... to store excess capacity from a variety of sources and then distribute it at the time when it's needed, and so I think that's kind of a critical layer that needs to be built. Um, and then longer term, there is a generation problem, but I think in the shorter term, it's really more about distribution and storage.
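[Editor's note] The storage-and-dispatch idea Neil describes (soak up stranded off-peak capacity, then discharge it at peak so the same generation fleet serves a flatter load) can be sketched as a toy greedy dispatch loop. Every name and number below is invented for illustration, not drawn from Taurus or any real system:

```python
# Hypothetical sketch of peak shaving: a battery charges from spare grid
# capacity off-peak and discharges at peak, flattening the net load the
# generation fleet must serve. All figures are invented.

def peak_shave(load, capacity, max_rate, target):
    """Return net grid load after a simple greedy battery dispatch.

    load     -- hourly demand (MW)
    capacity -- battery energy capacity (MWh)
    max_rate -- max charge/discharge per hour (MW)
    target   -- grid level we try to flatten toward (MW)
    """
    stored = 0.0
    net = []
    for demand in load:
        if demand > target:  # peak hour: discharge toward the target
            out = min(demand - target, max_rate, stored)
            stored -= out
            net.append(demand - out)
        else:                # off-peak: soak up stranded capacity
            inp = min(target - demand, max_rate, capacity - stored)
            stored += inp
            net.append(demand + inp)
    return net

hourly = [60, 55, 50, 70, 95, 100, 90, 65]   # MW, invented daily profile
flattened = peak_shave(hourly, capacity=60, max_rate=20, target=75)
print(max(hourly), max(flattened))  # peak falls from 100 MW to 80 MW
```

The point of the sketch is the one Neil makes: no new generation is added, yet the peak the grid must serve drops, because energy already available off-peak is time-shifted to when it's needed.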

    23. SG

      Mm-hmm.

    24. NT

      Uh, and then, um, the other piece I would say is, you know, the true bottleneck, um, at least in the short term, the next six to twelve months, is, is incredibly, I don't wanna use the word simplistic, but it's things like, uh, structural steel. It's, uh, finding electricians, uh, that can, you know, build this-

    25. SG

      Sorry.

    26. NT

      Yeah.

    27. SG

      You, you can't get enough steel?

    28. NT

      You can't get enough steel.

    29. SG

      Okay. [chuckles]

    30. NT

      You can't... Yeah. [laughing]

  13. 28:27–29:54

    Sovereign Compute Buildouts

    1. SG

      Couple topics to hit before we lose you. Um, uh, new players, how do you think about the sovereigns and what they're doing in their build-outs?

    2. NT

      Yeah, I think, um-

    3. SG

      They seem to be able to fund themselves to some degree.

    4. NT

      Exactly, right.

    5. SG

      Yeah.

    6. NT

      Um, you know, you saw the news from India last week. Uh, obviously, a lot of the news in the Mid East, Southeast Asia. I think, you know, we're continuing to see that sovereigns view compute and AI, and even we do here in the United States, as a matter of national security.

    7. SG

      Mm-hmm.

    8. NT

      Um, and obviously, the funding of those clusters is very different than funding, like, a private cluster, and so you've got, you know, government capital that can be used for that. So I think there are two things that, you know, I find interesting in that space. One is: who are the partners, um, that are going to build that capacity?

    9. SG

      Mm-hmm.

    10. NT

      And what are the cybersecurity kind of implications and environments for that? And so those are the two nuances, I think, with sovereigns: they need to find players that can rapidly scale compute, um, in their countries, and oftentimes they don't necessarily have these players that know how to build-

    11. SG

      Right

    12. NT

      ... and scale GPU compute. I think that's a great place for the United States to lean in and help build, you know, sovereign ecosystems around the world. And then there's a matter of cybersecurity, and how do you make it into a truly, um, you know, safe ecosystem for those sovereigns? And so I think there's a lot of work to do still on the cyber side, um, especially as you look at, you know, scaling sovereign AI.

  14. 29:54–32:48

    Physical AI Capital Needs

    1. SG

      What is your thinking on physical AI?... So they're, you know, if it works-

    2. NT

      Yeah

    3. SG

      - CapEx-intensive build.

    4. NT

      Absolutely.

    5. SG

      Yeah.

    6. NT

      And, you know, maybe I'll just take a second to say, one of the things that we observed, um, from 2010 to, like, the early 2020s, was we were in a very capital-light, asset-light mode of building.

    7. SG

      Mm-hmm.

    8. NT

      Like, SaaS was-- You know, you never heard Magnetar in SaaS, right?

    9. SG

      No.

    10. NT

      'Cause it was just purely asset-light.

    11. SG

      Mm-hmm.

    12. NT

      Compute and everything we saw starting in, you know, 2021 is asset-heavy, and that's where you started hearing a lot more about us. And I think physical AI is actually an extension of that. And so what you're seeing is, part of the reason, I think... And I think we all have scars from the 2010s of hardware companies that did not make a lot of money for us.

    13. SG

      Mm-hmm.

    14. NT

      Uh, part of the scars was, it was so difficult to scale hardware companies, um, you know, because the software was so difficult to build. You needed to spend so much money building the hardware that the software was an afterthought. What you're seeing now is, now that you have more general-purpose, uh, software via AI, it can make the hardware easier to scale, because you have software that can, you know, interact with more hardware. And so I think the natural kind of extension of what we see is kind of what happened in the compute markets, where you really needed flexible capital, where it wasn't just equity, it was debt and, you know, a variety of project finance to really scale CapEx. You're gonna see that same kind of need, uh, in physical AI, and it simply has to do with capital intensity, right? You know, on the compute side, for, like, CoreWeave as an example, they needed billions of capital to scale, uh, you know, that cloud. And I think whether it's a robotics company or a manufacturing-focused company, drones, defense, all of these areas are incredibly capital-intensive. And then, now that you add AI into them, I think it can help them scale faster, uh, quite frankly, and the capital intensity is still there. And so there's a moment in time now where you're gonna have to really look at optimizing balance sheets, um, for physical AI to really grow and scale.

    15. SG

      I think to your point of how the, um, early AI compute contracts were structured, um, I, I, I went from, you know, learning to be an investor in an era and an environment where robotics was a great way to lose a lot of money-

    16. NT

      Yeah

    17. SG

      - for a long period of time.

    18. NT

      Mm-hmm.

    19. SG

      You remember that, too?

    20. NT

      Yeah.

    21. SG

      Um, now I sit on the board of two robotics companies-

    22. NT

      [chuckles]

    23. SG

      - so let's hope it's not true anymore. But I, I'd say, like, it, it's just a question of capability to me. Like, you know, whether it's in the home or in industrial settings, where, like, it is simply not a good human job, or we don't have the labor-

    24. NT

      Yeah

    25. SG

      ... um, you are going to have-- I, I think the products will support investment-grade buyers-

    26. NT

      Yep

    27. SG

      - who are going to have contracts that say, like: "We want it," and you can raise debt against it.

    28. NT

      Exactly.

    29. SG

      Right. Um, and so I think actually that feels of a very similar, um,

  15. 32:48–36:04

    The Capital Rotation Away from SaaS

    1. SG

      shape. Last question for you, because it is so timely: What do you make of the general capital rotation out of, out of software, the end of software, and it's all, it's all infrastructure labs and AI natives, I guess?

    2. NT

      Yeah, yeah. It's interesting to see that every day there's another industry that kind of tanks, whether it's... You know, you saw the wealth advisors tank for a few days, you saw the consulting companies, you saw-

    3. SG

      Payments

    4. NT

      ... real estate. Payments-

    5. SG

      Yeah

    6. NT

      ... real estate, right?

    7. SG

      Mm-hmm.

    8. NT

      And I think what you're seeing, at least in my view, is, towards the tail end of 2025 and into 2026, there was a big step up in the performance of usable AI. And I think, you know, with what Anthropic was doing, and Claude, and, like, we use it all. You know, obviously, we use all the models, but there was a definite step up in performance in making AI usable and seeing that it can, you know, truly disrupt these non-AI-native industries. Uh, I think the reaction and rotation out of each of these names is a bit much, because there's two factors I look at. One is, when you look at valuations, as an example, I think, um, from a free cash flow perspective, SaaS companies are valued at the lowest they've been in years. And there's a huge margin difference between, you know, what those rev multiples are today and what they've been in the past. And so free cash flow margins have increased steadily and significantly for SaaS as a whole over the last four or five years, and revenue multiples have stayed, you know, the same or gone down.
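[Editor's note] The arithmetic behind Neil's point is simple to make concrete: a flat EV/revenue multiple paired with a rising free-cash-flow margin means a falling implied EV/FCF multiple, i.e. a cash-flow buyer is paying less per dollar of free cash flow. The figures below are invented for illustration only:

```python
# Toy illustration (all figures invented): if FCF margins double while
# the revenue multiple stays flat, the implied EV/FCF multiple halves.

def ev_to_fcf(rev_multiple, fcf_margin):
    # EV/FCF = (EV / Revenue) / (FCF / Revenue)
    return rev_multiple / fcf_margin

then_mult = ev_to_fcf(rev_multiple=8.0, fcf_margin=0.15)  # past: ~53x FCF
now_mult = ev_to_fcf(rev_multiple=8.0, fcf_margin=0.30)   # today: ~27x FCF
print(round(then_mult, 1), round(now_mult, 1))  # prints: 53.3 26.7
```

Same headline revenue multiple, but the business generating twice the cash per dollar of revenue is, on a cash-flow basis, half as expensive.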

    9. SG

      Mm-hmm.

    10. NT

      And so, to me, that's a bit of an exaggeration, because it really has to do with individual names versus sectors, and I think that's kind of, at least, my take. It's like, in all of these sectors, there are individual names that will learn how to maximize their, uh, you know, value using AI, and there are those that won't. Uh, but what's happening right now is there's, you know, a hammer being hit across all names, and not the specific individual names that might not be using it as well. Um, and then the second point, at least in my view, is, there are a number of applications that, you know, on paper, sound really interesting, like, oh, AI could just rebuild Slack, or it could rebuild Salesforce, or it could rebuild, you know, X, Y, and Z. I think, you know, it's not just the product, it's the way it's integrated across multiple services and systems across the enterprise that is a lot more difficult to just replicate-

    11. SG

      Mm-hmm

    12. NT

      ... than I think some of the, uh, public markets are kind of reacting to.

    13. SG

      And I do think there's, um, a fundamental question, in addition to what you said, which I agree with, of like: Does anybody wanna rebuild it-

    14. NT

      Yeah

    15. SG

      - and own it?

    16. NT

      Mm-hmm.

    17. SG

      And, uh, you know, to your point of, like, within the software sector in particular, um, there are companies that are, uh, structurally more protected-

    18. NT

      Yeah

    19. SG

      - than there are companies that are at more risk.

    20. NT

      Yep.

    21. SG

      Right?

    22. NT

      Agree.

    23. SG

      And I, I think it's as simple as, like, you gotta go select.

    24. NT

      Yeah, exactly.

    25. SG

      Um, this has been so fun. Thanks so much, Neil.

    26. NT

      Yeah, I really appreciate it.

    27. SG

      Congratulations on all the innovation and, uh, on building out all the compute.

    28. NT

      Awesome. Thank you. Good to be here. [music]

    29. SG

      Find us on Twitter, @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.

Episode duration: 36:04


Transcript of episode WSxVh5WvWZ4
