a16z

Sovereign AI: Why Nations Are Building Their Own Models

What happens when AI stops being just infrastructure - and becomes a matter of national identity and global power? In this episode, a16z's Anjney Midha and Guido Appenzeller explore the rise of sovereign AI - the idea that countries must deploy AI capabilities that align with their national values. From Saudi Arabia's $100B+ AI ambitions to the cultural stakes of model alignment, we examine:

- Why nations are building local "AI factories"
- How foundation models are becoming instruments of soft power
- What the DeepSeek release tells us about China's AI strategy
- Whether the world needs a "Marshall Plan for AI"
- How open-source models could reshape the balance of power

AI isn't just a technology anymore - it's cultural infrastructure. This conversation maps the new battleground.

Timecodes:
00:00 Sovereign AI
00:37 The Rise of Local AI Platforms
03:09 AI Factories vs. Data Centers
05:44 Cultural Implications of AI
08:57 Global AI Leadership and Strategy
11:59 The Role of Government in AI Development
15:31 Conclusion: Foundation Model Diplomacy

Resources:
Find Anj on X: https://x.com/AnjneyMidha
Find Guido on X: https://x.com/appenz

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Anjney Midha (host) · Guido Appenzeller (host)
May 24, 2025 · 16m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–0:37

    Sovereign AI

    1. AM

      This is a massive vulnerability. We've gotta control our own stack. It's not just self-defining the culture, but self-controlling the information space. Do we build? Do we partner? What do we do? The United States and AI right now has the world leadership. Instead of colonization, what we have is now, I think, foundation model diplomacy. Big structural revolution is both a threat and opportunity. [upbeat music] Anj and Guido, uh, we wanna talk about Sovereign AI, uh, AI and geopolitics, and let's start with the news. Uh, our partner, Ben, is in the Middle East right now, um, what happened, uh, and, and why is it so important?

  2. 0:37–3:09

    The Rise of Local AI Platforms

    1. AM

      What happened is, um, the kingdom announced that they're going to build their own local hy-hyperscaler or AI platform called Humain. And I think why it's notable is that they are as opposed to the status quo of the cloud era, um, they're viewing the AI era as one where they'd like the vast majority of AI workloads to run locally. Mm-hmm. If you can-- If you think about the last twenty years, the way the cloud evolved was that the vast majority of cloud infrastructure basically existed in two places, right? China and the U.S. And the U.S. ended up being the home for the vast majority of cloud providers to the rest of the world. That doesn't seem to be the way AI is playing out, 'cause we have a number of frontier nations who, who are basically s- raising their hands and saying, "We'd like infrastructure independence." The idea being that we'd like our own infrastructure that runs our own models, that can decide where we have the autonomy to build the future of AI independent of any other nation, which is quite a big shift. Um, and I think the headline numbers are somewhere in the range of, um, a hundred to two hundred and fifty billion worth of cluster build-out that they've announced, of which about five hundred megawatts seems to be the atomic unit of these clusters that they're building. So what's going on? A number of countries st- y- the-- with the kingdom being the one that's most recent are an- have been announcing sort of what we could think of as Sovereign AI- Yeah ... clusters. Uh, and that's, and that's a pretty dramatic shift from the pre-AI era. I think it's spot on. I think many sort of geopolitical regions are reflecting back what happened in previous big tech cycles, and whoever-- wherever the, the technology is built and whoever controls the underlying assets has a tremendous amount of power of shaping regulation, shaping- Right ... 
you know, the-- how this technology is being used, and, and also puts themselves in a position then for the next wave that comes out of that. And, you know, if-- With the Industrial Revolution, having oil was important, and now having data centers- Right ... uh, is important. And so I think it's, it's a very exciting development. Yeah. You can often tell why something is important to somebody by the semantics that folks use to communicate a new infrastructure project. They're being

  3. 3:09–5:44

    AI Factories vs. Data Centers

    1. AM

      called AI factories. Hmm. They're not being called AI data centers. They're being called AI factories. And there-- I think there's two, two ways to kind of respond to that. One train of thought would be, "Hey, that's just branding. You know, the marketing people doing their thing." And under the hood, this is really just data centers with slightly different components. An opposing view would be, actually, no, this is not just marketing. If you look under the hood, and if you look at the actual... If you, if you X-ray the, the data center itself, very little of it is, is, is the same as, as was the case twenty years ago. Hmm. The big one, of course, the, the big difference in active components being GPUs. Today, I think if you look at the average five hundred megawatt data center, and you looked at what percentage of the, the CapEx that r- was required to build that data center or operated, rather, went to GPUs. I, I think we're also seeing a specialization, right? The, the kind of- Right ... data center you build for a classic CPU-centric workload and what you build for, you know, a high-density AI data center look very different. Right. Right. You need liquid cooling to the rack. You, you need very different, uh, uh, you know, energy supply. Right. You want to be close to a power plant. You want to lock in that energy supply, um, early on. So it's a-- And, and then we're also seeing a, a change, I think, in the consumer behavior, where classically you want a very full stack that has lots of services that helps- Right ... the enterprise build all these things. We're seeing more and more enterprises that are actually comfortable with just building on top of a simple Kubernetes abstraction or something. Right. Right. And basically, you know, cherry-pick a couple of Snowflake or, or, or database-type services on the side that help them complement that. Uh, so I think there's a new world. 
A-and so th-that's certainly true that the technical components in the-- in an AI factory are completely different from a traditional data center. And then there's what does it do? And historically, a lot of the workloads that traditional data centers were doing hosted workloads for enterprises or developers, whoever it might be, where most of that, the, the, the data sets and the workloads were actually not particularly opinionated. And when I say opinionated, I mean they, they're not necessarily subject to a ton of cultural- Hmm ... oversight. Yeah. You could argue that was not the case with China. Right. Right? Where China wanted full sort of oversight over that-- over those workloads. The offset server we have. Right. But for the, for the better part of the, mm, you know, 2000s, the vast majority of enterprise workloads didn't need decentralized serving. What's different about AI seems to be

  4. 5:44–8:57

    Cultural Implications of AI

    1. AM

      that these models aren't just compute infrastructure, they're cultural infrastructure. Hmm. They're trained on data that has a ton of embedded values and cultural norms in them. That's a training step, and then when you have inference, which is when the models are running, you have all these post-training steps you add that steer the models to, to say something or not, to refuse the user or not. And that last mile is where things over the last, I would say year, have made it more and more clear that countries want the ability to control what the factories produce or not within their jurisdiction. Whereas that urgency didn't quite exist as much with-

    2. AM

      Because of the cultural factors or because of certain independence or resilience or...?

    3. AM

      My sense is there's two things going on, but one I think would be the rise of capabilities in these models being now well beyond what we'd consider sort of early toy stage of a technology. Now you have foundation models literally running in defense, in healthcare, in financial services industries. You know, ChatGPT has about 500 million monthly active users making real, you know, decisions in their daily lives. And so I think that makes a lot of governments go, "Wait a minute. It w- if we are dependent on some other country for the underlying technology that our military, our defense, our healthcare, our financial services, and our daily citizens' lives are driven on, um, that seems like a critical point of failure."

    4. GA

      I think it's not just self-defining the culture, but, but self-controlling the information space. I mean, today w-we're starting to see how in many cases, models are replacing search, right?

    5. AM

      Right.

    6. GA

      I no longer go to Google. I go to ChatGPT, and that comes back with an answer. If there's historical fact, and say in the Chinese model it does not show up, in the US model it does show up, right? That is the reality that people grow up with. And you know, if you write an essay in school, in the future, many of these essays will be graded by an LLM.

    7. AM

      Right.

    8. GA

      So in fact, in school, something that may be truthful, right, may be graded as wrong because whoever controlled the model decided that that should not be part of the training corpus.

    9. AM

      Right.

    10. GA

      So I think it has a very profound effect of-- on, on public opinion and sort of, you know, on values.

    11. AM

      Right. And is your expectation that this is going to play out, and to what extent is it going to play out? Where in- on the cloud, as we mentioned, there's a Chinese internet and the sort of Western rest of the world internet. How widespread is this sovereign AI thing gonna go?

    12. GA

      So if you look at the Industrial Revolution, sort of oil was the foundation of, of a lot of the technologies, right? You needed oil reserves in order to participate. And I think it'll be a little bit the same thing, right? If you want to build industry in a particular country, if you want to be able to export things, if you want to be able to drive development, and if you want to simply harness the power that comes with that, you need the corresponding reserves. And I mean, AI data centers are a little bit like these oil reserves, with the big difference being you can actually construct them yourself-

    13. AM

      Right

    14. GA

      ... if you have the, uh, you know, necessary investment, uh, um, dollars and the, uh, you know, willpower to do it. But I think it will be-- they will be the foundations for building all the layers on top-

    15. AM

      Right

    16. GA

      ... that ultimately, I think, determine who wins this race.

    17. AM

      And talk more about the, the implications behind what this means. Like, is this something that the US should be excited about?

  5. 8:57–11:59

    Global AI Leadership and Strategy

    1. AM

      Are there now winners across the board in all, all these local environments? Why don't you talk about some of the big implications here?

    2. GA

      Big structural revolution is both a threat and an opportunity.

    3. AM

      Yeah.

    4. GA

      Right? I think the, uh, United States in AI right now has the world leadership.

    5. AM

      Yeah.

    6. GA

      Right? The-- that's an opportunity. Um, hanging on to it won't be easy.

    7. AM

      Right.

    8. GA

      As it is in every, every tech revolution.

    9. AM

      Don't we want people to be dependent on us in the same way that they were in, in sort of the, the Cloud revolution, or d-do we benefit somehow from, from it being more sort of decentralized and...?

    10. GA

      I, I think-- look, the, the world is not one place.

    11. AM

      Right.

    12. GA

      Right? So, so I think complete centralization won't happen. I think it's, uh, uh, being the leader is good. Uh, having strong allies that, that also have, uh, uh, that technology is also very valuable. So it's probably a balance, um, of those that we're looking for.

    13. AM

      Like, we're clearly in an unstable equilibrium right now.

    14. AM

      Yeah.

    15. AM

      And so Guido's right that the, the arc of hu-humanity's, uh, and history is such that we will probably-- that things will shake out until there's a stable equilibrium. And so question is, what, what is the stable equilibrium? And I think one way to reason about it is you could look at historical analogies. So, you know, post-World War II, when Europe was completely decimated, there was a group of really enterprising, you know, folks in the private sector and the public sector who got together and said, "Hey, we, we can either choose to turn our backs on Europe and apr-- you know, adopt a posture of isolationism, where we, we mostly focus on a post-war American-only agenda. Or we can try to adopt a policy where we know that if we don't help out our allies, somebody else will."

    16. AM

      Yeah.

    17. AM

      And so they came up with this idea called the Marshall Plan, right, where a number of, um, leading enterprises in the US got together, like GE and General Motors, and, and literally subsidized the massive reconstruction of Europe that helped a, a lot of European economies sort of quickly get back on their feet. And at the time, there was a ton of criticism of the Mar-Marshall Plan because it was viewed almost as a net export of capital and resources. But what it did end up doing is then solidified-- finding this unbelievable trade corridor between the US and Europe for the next fifty years, which really kept China out of that equation for the-

    18. GA

      Seventy years, yeah

    19. AM

      ... seventy years, really. And so I think we have a choice-

    20. GA

      Mm-hmm

    21. AM

      ... which is to either approach it the way we would alli-- the Mar-Marshall Plan for AI, right? And say, well, a stable equilibrium is certainly not one where we just turn our back on a bunch of allies, 'cause China's definitely-- uh, has, has enough of the compute resources to try to export great models like DeepSeek to the rest of the world. So what do we want our allies on, DeepSeek or Llama? That's sort of what it comes down to at, at the, at the model level of the stack, right? The reality is that a number of countries are not waiting around to find out. The ones that certainly ha-have the ability to fund their own sovereign infrastructure are ru-rushing to do it right now.

    22. AM

      And what does that mean for the sort of nationalization debate or how you see that playing out? You know, Le-Leopold Aschenbrenner, formerly of OpenAI, in his

  6. 11:59–15:31

    The Role of Government in AI Development

    1. AM

      famous sort of report talked about how, hey, if, um, if this thing becomes as critical to national security as, as, as we think it will be, at, at some point, they're not just gonna let-- Governments aren't just gonna let private companies run it. They're gonna want to have a much more integrated sort of a-a-approach with it. Um, wh-wh-where do you stand on sort of with the likelihood of that and, and what does that mean for the feasibility of regulation in a world where, um, it's much more decentralized?

    2. GA

      I think I have probably a strong opinion on that.

    3. AM

      Yes.

    4. GA

      I mean, I, I grew up in Germany, right? So benefiting from the Marshall Plan. One lesson I took away from that is that I think any kind of centralized planned approach does not work, right?

    5. AM

      Mm-hmm.

    6. GA

      I mean, to, to some degree, the Eastern Bloc, Eastern Germany, right, was a-- uh, Eastern Germany as Western Germany was a nice AB test, you know, central planning versus a free market economy-

    7. AM

      Yeah

    8. GA

      ...what works better, right? And I think the results speak for themselves. Um, so I think basically having the government drive all of AI strategy, you know, Manhattan-style project or Apollo project, pick your favorite successful project there, uh, I can't see that working. You probably need a, a highly dynamic ecosystem of a large number of companies competing. There's some areas, I think, where the government can be-- have a hugely positive effect, right? On the research side, right, we've seen it again and again, like with funding fundamental research, which is not quite applied enough yet for, for enterprises to pick up, right, is very valuable. I think it can help in terms of setting good regulation. Bad regulation can easily torpedo, uh, AI, as we've seen. Um, and, and so I think there's a, there's a strong will for government to lead this and to direct this. There's no master plan at the end of the day that you can make that basically has all the details. That has to come from the market.

    9. AM

      I don't agree with the Ashenbrenner, I think, point of view, actually. The history of centralized planning at the frontier of technology is not great, barring a, a few situations that were essentially, um, brief sprints of war, right? And arguably even the Manhattan Project, which is kind of the, the analogy I think he uses in the, in his piece, um, we now know that there were, there were leaks. Like, I mean, it was literally a cordoned-off facility in Los Alamos or whatever, and there were still spies.

    10. AM

      Right.

    11. AM

      For anyone who has ever had the both pleasure and displeasure of working, you know, in any large government system, it, it's a pipe dream. The good and the bad news is that in a sense, it doesn't really matter where the model weights are. It matters where the infrastructure that runs the models are. In a sense, inference is almost more important. A year ago, we were in a pretty rough spot, I would say, with the arc of regulation, where people would-- there, there were a number of proposals in the United States to try to regulate the research and development of models versus the misuse of the models. I think that luckily w-we have moved on. Just a year before DeepSeek came out, you had a number of like tech leaders in Washington testifying that China was like five to six years behind the U.S. with confidence on the record, and then DeepSeek comes out twenty-six days after OpenAI puts out the frontier. I mean, it just shattered all of that, that, those arguments, and the fact that it was an MIT-licensed model meant that every other country had access like immediately.

    12. AM

      Yeah.

    13. AM

      So on-- the calculus has changed, right? And I, I think it means that the only way to win is build the best technology and out-export anybody else. Then if the question is whose math is the world using, we'd, we'd love for it to be American math.

    14. AM

      Right.

    15. AM

      Um, so I, I, anyway, my, my view is that we are much better off embracing the ability for other countries to serve their own models, and ideally, the best product wins, which is the best models just come from the U.S. and our allies.

    16. AM

      Yeah.

    17. GA

      Is that the new age of LLM diplomacy that we're entering here? [chuckles]

  7. 15:31–16:02

    Conclusion: Foundation Model Diplomacy

    1. AM

      Ben had a great, um, talking point to this at, I think he was at FII Riyadh last year, and he said something to the effect of, because these models, like we discussed earlier, are cultural infrastructure, you don't wanna be colonized in the digital era-

    2. AM

      Hmm

    3. AM

      ...in the, in, in sort of cyberspace, and I think that's, that's pretty spot on.

    4. AM

      Yeah.

    5. AM

      Instead of colonization, what we have is now, I think, foundation model diplomacy. [upbeat music]

Episode duration: 16:15


Transcript of episode Vw0XjhfAWis
