No Priors

No Priors Ep. 125 | With Senior White House Policy Advisor on AI Sriram Krishnan

Sriram Krishnan was never interested in policy. But after seeing a gap in AI knowledge at senior levels of government, he decided to lend his expertise to the tech-friendly Trump administration. Senior White House Policy Advisor on AI Sriram Krishnan joins Elad Gil and Sarah Guo to talk about America’s AI Action Plan, a recent executive order that outlines how America can win the AI race and maintain its AI supremacy. Sriram discusses why winning the AI race is important and what that looks like, as well as the core goals of the Action Plan that he helped to author. Together, they explore how AI is the latest iteration of American cultural exportation and soft power, the bottlenecks in upgrading America’s energy infrastructure, and the importance of America owning the “full stack,” from GPUs and models to agents and software.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @skrishnan47 | @sriramk

Chapters:
00:00 – Sriram Krishnan Introduction
01:00 – Sriram’s Role in Government
03:43 – Impetus for the America AI Action Plan
06:14 – What Winning the AI Race Looks Like
10:36 – Algorithms and Cultural Bias
12:26 – Main Tenets of the America AI Action Plan
19:13 – Infrastructure and Energy Needs for AI
22:56 – Manufacturing, Supply Chains, and AI
24:52 – Ensuring American Dominance in Robotics
26:30 – Translating Policy to Industry and the Economy
29:30 – Should the US Be a Technocracy?
32:33 – Understanding the Argument Against Open Source Models
36:07 – Conclusion

Sarah Guo (host) · Elad Gil (host) · Sriram Krishnan (guest)
Jul 31, 2025 · 36m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–1:00

    Sriram Krishnan Introduction

    1. SG

      (digital music) Hi, listeners. Welcome back to No Priors. Today, Elad and I are here with Sriram Krishnan, a top White House official currently serving as the senior White House policy advisor on artificial intelligence. A former tech executive and venture capitalist, he's one of the lead authors on the American AI Action Plan released this past week. We talk about the national implications of the AI race, what position we hold today, the workforce and energy needs of the future, and how to win.

    2. EG

      Sriram, thank you so much for joining us today for No Priors.

    3. SK

      Thank you for having me. I'm a long-term fan. Never been invited before; I was always a bit sad. But no, thank you for having me for the very first time. And before we start, I just have to point out, for folks who are listening on audio, that Elad has never looked as good, as dashing, as handsome as he does now. Elad, you've dressed up for me. I'm honored.

    4. EG

      This is how you can tell that Sriram is in politics now. He has the liquid tongue of gold with which he coaxes everybody into doing his bidding, so it's very good. So,

  2. 1:00–3:43

    Sriram’s Role in Government

    1. EG

      you know, for our audience, Sriram has been a well-known Silicon Valley individual. He worked at Andreessen Horowitz. He worked at a number of the sort of marquee companies and names in Silicon Valley over the last decade plus, and now, he's in government, and he's really working on a variety of exciting initiatives around AI and other areas. Could you tell us a little bit more about your role? And should we be calling you Your Excellency or is there some special title we should be using now that you're in government?

    2. SK

      You don't have to, but I will take it. But no, thank you. It's fascinating for me to be here talking to you in this capacity, because I've known both of you forever. We've had hundreds of interactions, and I've also been such a fan of the pod. Congratulations. Just to give a little bit of backstory: I've been in Silicon Valley for a long time. I feel very old. I did a tour of all the large consumer social media companies, and then I was at Andreessen Horowitz for the last four years, competing actively for Series A term sheets with both of you folks, I'm sure. And all this while, I had no real intention of joining government. I wasn't particularly interested in policy. But what wound up happening is, a couple of years ago, I moved to England to head up all of Andreessen's international efforts. At the time, the UK was kind of a hotbed of all the AI policy debates. They had this AI safety summit at Bletchley Park, and this was kind of the peak of, I would say, the effective-altruist-versus-AI drama which was going on. I got pulled into a lot of those discussions, and I remember thinking to myself, "Wow, a lot of people in very senior roles in government in the United States back then, and in other parts of the world, don't know what they're talking about when it comes to AI." I was convinced that they were doing the absolute wrong thing on many topics, for example open source or helping startups, and it was just really, really bad in a way which I think the industry didn't appreciate until much later. And that got me interested in policy, which, by the way, was a word I didn't even fully understand; we can even get into what that means. When President Trump got inaugurated, in the first week he did two things.
One, he rescinded the Biden executive order on AI, which was bad and awful in many, many ways which we can get into. And then he signed a new executive order which basically said that America should dominate and win on AI. He then called upon a few of us and said, "You guys need to come up with a plan within six months to figure out how America is going to dominate and win." And so that set us off to the races, and I think everything that has happened since then culminated in the event that we had yesterday, where we put out this 28-page document on America's AI Action Plan along with a bunch of executive orders. So that's kind of the little bit of history.

    3. EG

      That's

  3. 3:43–6:14

    Impetus for the America AI Action Plan

    1. EG

      great. And could you tell us a little bit more about the main things you considered as you put this plan together? What do you worry about geopolitically? How do you think about AI and competition, big tech versus small tech? It feels like there are a lot of threads in that, and it'd be great to get a view of the main issues that created this plan, and then to talk through the plan itself.

    2. SK

      One of the catalytic moments happened the day before I started this job. I get a call, and this was the weekend DeepSeek had come out, and there's actually some chatter I've heard online that China timed it to come out right after the president got sworn in. And they were like, "Hey, we just want you to come in and brief a lot of people at the White House on DeepSeek," because we were like, "Hey, what is this? Is it cheaper? Is it faster? Do they have some magic way of training these models which only costs a few million bucks and not, you know, hundreds of billions of dollars? What's going on?" You folks might remember that narrative which existed that weekend. And so I got to go, and David and I helped brief all of the White House leadership. But it was really a starting gun, because I think that moment was profound: it immediately told us a few things. It told us that America doesn't have a huge lead on AI. It actually has a very, very small lead. If you remember, at the time DeepSeek was the only reasoning model which was not OpenAI's. I don't think Claude had come out with a reasoning model yet. I don't think Google had one. It was the only non-OpenAI reasoning model, and it was very high up in the leaderboards. It was a bit unclear what their cost claims were and how they had gotten there. I think we know a lot better now.

    3. EG

      Yeah, and all that ended up turning out to be very overstated, right? Basically, there was a claim that it was a few million dollars to train the model, and they didn't really talk about the hundreds of millions of dollars they probably spent to get to that point. The last training run was sort of what they reported paying for, yeah.

    4. SK

      Yes, absolutely. I would say there were claims which were inflated, and then claims which were taken seriously. The claims which were inflated were, to your point, that they put out just the final training run, without all the ablations and training costs. And I mean, if you look at the paper, by the way, I don't think they make the claim that the total cost was a few million bucks; I think that's what the press imputed. But I do think they deserve a lot of credit. I always make a point to say, look, DeepSeek did some very, very good technical work. If you think about it, they didn't have as good hardware as the American model companies do. What they did with KV caching and MLA; there are multiple theories on how they actually got chain of thought. Maybe they did it by themselves, maybe they had some help from American companies. But there was some really novel new work there, right? And I think it told us that, okay, we don't have a lead that we can take for granted. They were definitely the best open source model. And even today I would probably say the Chinese models, DeepSeek and Qwen, are the best open source models. But symbolically it told us that we are now in a race. A very close race, by the way.

  4. 6:14–10:36

    What Winning the AI Race Looks Like

    2. SG

      What does it mean to win the AI race? Why do we need to win it, and what would losing mean? How would we know if we've won?

    3. SK

      Well, I suspect you folks agree: AI might be the most transformational economic and cultural force of our lifetime. And I believe that the country or the ecosystem which winds up getting ahead is going to have these cyclic effects, right? You're going to power productivity. You're going to have drug discovery. You're going to discover new materials science, new technologies, which then feed back into your infrastructure, feed back into your economy, and you're going to get this flywheel effect where whoever winds up getting ahead could really accelerate ahead, in kind of a classic network-effect ecosystem way that all of us in Silicon Valley will understand. Now, that is purely in the civilian economic context. You can also imagine a military context, right? Think about everything from drones to autonomous weapons. I'm pretty sure it's not in our best interest to have another country have that same economy of scale and flywheel and race ahead of us. So that's the risk. Now, one interesting question we have been pondering is how you actually measure it. How are we doing in the race? One measure I've been playing around with, and maybe I'll get your take on it: I think Google just announced this morning that they inferenced one quadrillion tokens a month, or a quarter, I forget which. And one of the measures I've been thinking about is, let's say the world inferences, I don't know, maybe let's call it 10 quadrillion tokens a month. We don't know what the number is, right? What share of those tokens are being inferenced on American hardware, on American models? And how do we maximize that market share? That's kind of one of the mental models I've been playing with. In a way, you can think of it as: we are America Inc. We have a product stack starting from GPUs, with NVIDIA and AMD and a bunch of others.
We have a model layer, with obviously OpenAI and Grok and Gemini and many, many others. We have an application layer; you've had many, many of those companies on your podcast, from agents to all kinds of software. How do we make sure this American stack is dominating that market share of tokens? Inference would be a good metric.

    4. EG

      That's really interesting. One other thing that you didn't mention, I feel, is cultural exportation through the models. If you look at prior waves of culture spread, it was the movie industry, it was social media, and now it's these models, because a lot of people go to these models as a source of truth for history, for information, for other things. And there have been some famous examples in some of the Chinese models where there's omission of Tiananmen Square or omission of other facts. Relatedly, there are some things in some of the US models that seem very politically slanted or otherwise not quite great. But it's interesting to also think about it from the perspective of broader cultural exports. I just wanted to add that to your points on defense and scientific progress in other areas. I think that's another key thing.

    5. SK

      It's something we are actually addressing. And you're absolutely right. I grew up in India, and a lot of my exposure to Western culture was through the internet and Google. Obviously a large part of the internet was American, and that kind of introduced me to Americana. Imagine if in 1995 the internet was not run by America but run by one of our adversaries. So in a similar context, you're absolutely right: when DeepSeek came out, there were all these examples of stuff in there which doesn't align to American values. Now, we are actually addressing this. The President signed an executive order yesterday. It's called No Woke AI in the Federal Government. And what it does, and this is probably going to be one of the spicy bits for your audience, is basically say that, look, from day one of the Trump administration, we have tried to fight back against DEI, wokeness, critical race theory, whatever you want to call it, in all parts of the federal government, right? And all kinds of propaganda. What this EO does is actually very simple. It says that all models that the federal government will procure, AKA that your taxpayer dollars will be spent on, have to do two things. They have to be truth-seeking, and they can't have artificial ideological bias added. If bias is added, you just have to be transparent about where you're getting that bias from. It should be very simple for most people, but to your point, that cuts to the heart of, you know, if you're saying nothing happened in Tiananmen Square in 1989, that cuts to the heart of that. It also cuts to the heart of many, many other things from the culture wars that we have now been trying to

  5. 10:36–12:26

    Algorithms and Cultural Bias

    1. SK

      fight against.

    2. SG

      Hey, Sriram, you used to work in social media for a long time, right? This sounds a little bit familiar, in terms of: is it a platform, is it a publisher? What is the information consumption that most consumers have? Where does that analogy apply or break down?

    3. SK

      It's a good question. I think in some ways that's for the industry and the ecosystem to answer a bit. You're right, I spent a lot of time at Facebook, now Meta, and at Twitter. One of the things I saw when I was at Twitter was how easily you could inject cultural bias into your algorithms. I have so many stories about how, if you pick the right kind of filter accounts, which then feed into the trending algorithm, which then feeds into Twitter Moments, then every journalist or editor will wake up and, next thing you know, it's one of the news stories of the land, and BuzzFeed will write a piece saying, "People on the internet are talking about this." I saw this over and over again. And it left me with this profound appreciation of how algorithms can shape culture. One of the things I always say is that Twitter, or X, is the memetic battleground upon which we fight a lot of these ideological battles. So when it comes to AI, I think it's probably going to be very similar. My kids use ChatGPT to answer everything, right? From history to geography to just kind of silly kids' questions. And you can easily imagine a world where people inject their own cultural biases into this. We have a few good examples: the Pope being depicted as a Black person, misgendering someone being treated as worse than a thermonuclear explosion. A lot of it is meant to say that you can easily imagine a world where these systems, which are at the heart of so many things that the government, and all of us, are going to use, get artificially injected with an ideology. We don't want that, or at least not without transparency about it.

    4. EG

      What are some of the other

  6. 12:26–19:13

    Main Tenets of the America AI Action Plan

    1. EG

      main points of the announcement from yesterday?

    2. SK

      So one of the ways David and I, and some of the people we work with, try to think about this is that it should make sense as a strategy for almost like a technology company. And I hope that, you know, please go read the document. It's actually pretty readable, and hopefully, for those of you who work in the tech industry, it should kind of make sense. We think if America is going to win the race with China, we need to do three things, and they ladder up to this strategy. The first is we need to build infrastructure, right? At the heart of this, if you go back to the scaling laws, what do we need? We need compute, and we need data. And in the United States, it's been really challenging, with the grid we have and with this crazy permitting that we have around constructing new data centers, to get some of these projects off the ground. So the first part of the action plan really dives into what the President calls "build, baby, build," playing on "drill, baby, drill," which is all about how do we make sure we are building infrastructure. Because obviously, you know, some of the other countries aren't. Just as an example, one of the things it talks about is making permitting on federal land a lot easier for data centers when it comes to old environmental laws or other regulations which get in the way. So think of that as: let's make sure we are building the infrastructure to power these models as we scale up. That's number one. The second pillar is innovation, which I would describe as: let's make sure all these amazing companies, everyone that you know of, and maybe some places that don't exist yet, can build applications and models or anything they want as fast as they can. And there are a couple of things I really want to highlight. The first is we want to cut through red tape.
You know, until a year and a half ago, I was in California along with all of you. California almost passed SB-1047, which, if that had happened, would have been the end of open source in the United States, by the way. We would not have a LLaMA, we would not have an Open Mini coming out. And a lot of states want to do versions of this. We think that AI is a national priority, and if we're going to compete with China, we need to make sure these are things we deal with at the national level, rather than every single state, especially states with ideologies that you and I may not agree on, trying to set its own rules. And by the way, some people may not understand this, I didn't at first, but if you have a single state set rules, it can often become the de facto law for the country, because if you're a company, you're like, "Well, I have to operate in this state, or I have an office there, so let me just do that for everybody." It's just like the EU does. So we want to cut through red tape, and if there's regulation, let's make sure it happens at the federal level. That's very, very key, because I think that's going to enable not just the big companies but every Series A, Series B, acquihire company, whatever the kids are doing these days, to be off to the races. That's number one. The second part is open source. Now, I think we probably talked about this a bit offline: open source is one of the big reasons I actually got into the policy world. The Biden administration really, really tried to scare people about open source, to talk about how unsafe it was, and SB-1047 obviously tried to basically ban it in many ways. What the EO does is say, "Open source is a space where the United States needs to win." It actually points to some resources that are going to be made available to researchers.
Because I think you and I know open source is what everyone, from a kid in their bedroom or dorm room, to a startup, to somebody who wants a lower cost of inference in their IoT device or robotics startup, is using.

    3. EG

      For context too, much of the internet runs on open source software, right? The server software and other things, much of that is open source, and the protocols for the internet are all open. That's also true for crypto. And so it's interesting, because removing open source from things like AI actually just centralizes power, right? It centralizes power into a small number of companies that could then be controlled by the government. So to some extent, the fact that you all are supportive of open source means you're supportive of a thousand flowers blooming, but also of a lack of direct government control over literally everything in AI. So it's a very interesting counterstance to take.

    4. SK

      By the way, Elad has our talking points down better than I do, because that is absolutely right. One very fundamental difference I think we have with the Biden administration is that the Biden team really looked at AI as something to be centralized and controlled. Everything was about how do we make sure that we regulate these three or four companies, and only three or four companies can build AI, and they've got to submit their models for testing. It was all about control in a centralized fashion. Now, when I moved to D.C., one of the things I realized is that's kind of the way D.C. thinks: control and centralize in one place. You and I know that's not how Silicon Valley thinks. One of the reasons Silicon Valley is the envy of the world is because anybody, any day, can go to Y Combinator, raise a seed round, or just go off to the races, and they could build something amazing that catches everyone's imagination. And I think what we want to do is enable just that, rather than say, "Okay, we want to centralize power within a 10-mile radius of where I am right now."

    5. EG

      Yeah. In general, too, central planning tends to lead to very bad economic outcomes; that's the story of the collapse of the Soviet Union, etc. It's something that's been tried many times before in many industries, and it tends to lead to a very bad place, in terms of innovation and in terms of economics.

    6. SG

      I think one of the things that people underprice about open source models is that they're going to happen, and they're a strategic weapon. They're happening, and Western companies are already using Chinese open source models very broadly. And so if you believe that not every model is going to be ideologically neutral, or aligned with American and democratic values, then you probably have a problem, right? And so the ability to support, whatever your point of view, pluralism and openness and innovation, and to have some control as an ecosystem versus in a centralized way, is a very different point of view than "We'll let China develop it."

    7. SK

      Yes. And I think you're making a profound point. You're already seeing that: when somebody is using DeepSeek or Qwen, that's an expression of soft power. I would much rather have them using a model built by somebody who kind of agrees with us and has our values. That's number one. The other issue I would point to is that, with these models, we don't know what's inside them. Interpretability is still a nascent field, and you could very easily see ways where you plug a model into Cursor or Windsurf and generate a piece of code, and then two years down the road, it turns out that code had a little if statement saying, "If I'm running in some piece of critical infrastructure, go do something else." We don't have ways to validate all that. So there are a lot of reasons why we want to make sure that American models, or Western models, wind up winning, and this is something I think we're going to put a lot of focus on.

    8. SG

      Just because you have such a good view

  7. 19:13–22:56

    Infrastructure and Energy Needs for AI

    1. SG

      into this, can we talk a little bit about infrastructure and energy, since you made that point number one in terms of what sort of stack we need? People hear these claims from the leaders of the large labs, that "We're building a data center the size of Manhattan," or "It's the energy that a city uses at any point." Can you contextualize how much capacity we really need to build, and what the biggest bottleneck is? Is it the grid? Is it sources? Is it workforce? When you want to solve this problem, as a systems person, what is the first problem?

    2. SK

      Okay. So the first thing I would say is, it is a system, and this system is one that wasn't really battle-tested for decades. Somebody showed me this number: I think the United States basically had 1 to 2% power-usage growth for a very, very long period of time. So you can imagine this whole system of everything from gas turbines to coal to renewable energy. There was the regulation which really stopped nuclear. And then you had these state utility companies which often didn't have the incentive to innovate: you basically ran the state, you weren't really getting new demand or competition. You had a grid which wasn't really pushed, because, again, you didn't need to. And then you have essentially a patchwork of environmental laws and regulation, everything from water to emissions to a whole host of other things that I'm sure I'm forgetting, right? Somebody explained it to me as this tangled spaghetti mess of things, which, again, until two years ago was just fine, because you and I were not dramatically using more energy than we were 10 years ago. Now, that obviously changed. The scaling laws arrived and everybody is trying to build new things. And the way we are trying to attack it is at every single step of the way. One, how do we make generation better? Second, how do we make constructing these data centers easier, dealing with these regulations and getting this red tape out of the way, putting focus on the right energy sources, and making sure we have those lined up? So we are trying to take an approach to all of this, but it is a complicated problem, just because there are so many different players, so many different states, and such a patchwork of laws and regulations involved.
You know, I encourage folks to look at the executive order on infrastructure which the President signed yesterday, which I think goes directly at this. We also have something called the National Energy Dominance Council, which works very closely with Secretary Burgum and Secretary Wright of Interior and Energy. And I think you're going to see a lot more from us on that front. The short answer, Sarah, is it's complicated. I think we're taking a very, very strong approach to this, but there's going to be more to come.

    3. EG

      How do you think energy infrastructure is gonna feed into these big data center build-outs? One theory I heard is that fiber is cheap and easy to lay, while the grid, building out the electrical grid, is hard. And so therefore you're gonna centralize data centers near sources of cheap power and then just run fiber into them, versus moving things around based on other types of capacity from a telecommunications perspective. Are there specific sources of energy that you think are gonna power this AI revolution? Are there things we need to reinvest in? Obviously, the president has issued some executive orders around nuclear. I'm just curious how you think about what that future really will be, what the major sources of energy are that we really need to depend on, and how that all shapes up from an infra perspective.

    4. SK

      What I think we see our role as is: get rid of the red tape. Let's make sure the permitting on these things is super easy. Nuclear, that's another case where, for decades and decades, the climate lobby and the doomers kind of stopped any real efforts there. So I think you're seeing a lot of effort to get the red tape out of the way, get construction going, and see where we get.

    5. EG

      The other thing that I think is interesting from an infrastructure perspective

  8. 22:56–24:52

    Manufacturing, Supply Chains, and AI

    1. EG

      is manufacturing capability and supply chain. And a subset of AI supply chain is dependent on China or other countries. Are there certain areas of supply chain that we should be repatriating back, or how should we be thinking about more generally American manufacturing?

    2. SK

      I'd say that America needs not just engineers; it needs people up and down the stack. It needs electricians and technicians. We need to get construction going, and we need to get these jobs and this whole ecosystem back in the US. So if you look at the action plan, there's a bunch of stuff in there about this. I mentioned two parts of the action plan, which were building, and then innovation around cutting red tape and open source. The president also talked a little bit about copyright yesterday. The third piece of the action plan, which I also think is a pretty dramatic switch away from how the Biden folks thought about it, is around making sure the world uses our standards and our technology. Just for context, and again, this is something that, unless you are a policy wonk, you may not be super familiar with: under the Biden era, there was something called the Biden Diffusion Rule, a 200-page document which basically made it illegal for America to export GPUs. It was really hard, if you're Jensen or if you're Lisa Su, to get your GPUs out to other countries. Even some of our allies who are really enthusiastic about AI and want to help us out, we were not actually giving them GPUs. So we rescinded that order, and one of the things we talk about is how do we make sure we get all of our allies around the world using the American stack. So that means, and we just did this in the Gulf with the American AI Acceleration Partnerships, how do we make sure we are getting our GPUs over? And one of the opportunities of doing that is: we get our GPUs over, we probably get them to run our models, as opposed to models from, you know, another country, and we go from there. So having this sense of an American stack that we can export and the world standardizes on, that, I would say, is the third part of the action plan.

    3. EG

      One other topic that I think people believe

  9. 24:52 – 26:30

    Ensuring American Dominance in Robotics

    1. EG

      that China has a lead in right now is certain areas of robotics. That could be humanoid form or other form factors, it's drones, it's potentially catching up on self-driving and autonomy. If you think about that from a societal perspective, there's obviously automotive: if you look at European market share of cars, BYD and others are taking enormous amounts of share. And these are the same technologies that would also be used from a defense perspective. So to some extent one could argue there are two parts of AI: the digital side of it, and then the real-world robotics, drones, and interactive side. How do you think about that in the context of American policy, and what in the action plan addresses the capability to build these physical-world products?

    2. SK

      The action plan actually has a section in it on making sure we are set up for robotics. That's obviously going to get super key within the next 18 to 24 months. I would say it ladders from everything else we talked about, both in the US and internationally. The first is making sure that our model companies can build as fast as they can and our startups can innovate as fast as they can. The second piece is that we want to make sure the world is using our robotics companies and our models, and not, say, DeepSeek or Qwen. And that's actually one of the things

    3. NA

      (laughs)

    4. SK

      ... because when I was talking to a bunch of robotics startups, you're seeing a lot of distilled DeepSeek and a lot of distilled Qwen out there. What we want to do is make sure we have an open source response, an American response, which pushes our products as a standard out there. But it is a focus. I think it's going to increasingly come into focus in the next six to 12 months, and we are spending a lot of time on it.

    5. EG

      Related to that, there's always a question

  10. 26:30 – 29:30

    Translating Policy to Industry and the Economy

    1. EG

      of how things actually get done in politics and how they translate into the real world. You've got something like 90 different agency actions listed in this action plan. How do you think about these things actually translating into industry, the economy, and action by companies and other players? What are the mechanisms you all have to ensure these things come together or happen? And if they don't come together, what's plan B?

    2. SK

      Well, there is no plan B. We want to get this done. And one of the things you will see from the Trump administration is that it moves really, really fast, which is why in the first week we had a bunch of executive orders. Look, we're already at work on all of it. We had three executive orders signed yesterday: one for infrastructure, one for exports, which ties to a lot of the things we talked about, and one to stop ideology, wokeness, and DEI. And I think you're going to see a lot more. We are already at work on pretty much all of it. There is no plan B. We're gonna go get this done. The other thing I would say from yesterday is that I've been inundated with a great response from the industry. A lot of folks that you and I know are just really excited to see the government actually understand AI and actually be happy to make sure American companies can go build American AI. So they're also very excited to partner with us. It's go, go, go, no time to waste. We're getting it done. There is no plan B.

    3. EG

      It's actually exciting, because to your point on understanding AI in government, when I've looked at prior administrations, be they Republican or Democrat, a lot of the people who went into them from tech weren't the core driving forces of the technology world. In other words, they were great people, very nice people, but it wasn't the top of the industry, and it wasn't necessarily the deepest technical experts in some cases. Obviously there are counterexamples to that. So I think one thing that's striking about this administration is that the caliber of tech people it actually got this time around is very high relative to prior administrations. I think that impacts the understanding, and it impacts how you all are thinking about the world. So I found that very exciting and inspiring in terms of just having a really strong technical basis for what you all are doing. I think that's really good.

    4. SK

      Thank you. There are a lot of great people in the administration from the tech industry, not just in AI. For example, you have Emil Michael as the undersecretary for R&D, who's running DARPA, and in the Pentagon you have many, many others. One of the things I think about is that we bring an understanding of how the tech industry works, what is possible and what isn't. We bring a sense of urgency. We also just really deeply understand the technology. You'd be shocked at how often I've seen David Sacks in a meeting explain how inferencing works, what high-bandwidth memory is, or how the world has shifted from a pre-training context to a post-training context. So we can really mix it up on the technical details, and we obviously also have a lot of strong social ties to the industry, so we can call on them to help us out. It just adds a very different flavor of understanding of AI where, to go back to my earlier point, I think D.C. suffers from a lack of real technical understanding of both the industry and the products involved.

    5. SG

      I'm exposing my cards a little bit here, but it, it

  11. 29:30 – 32:33

    Should the US Be a Technocracy?

    1. SG

      sounds from both your policies and what you're saying, Sriram, that you're on the same page. Do you think that the U.S. should be a technocracy? Just that simple statement.

    2. SK

      What does a technocracy mean?

    3. SG

      Leading with technology and then having a bunch of people in technology leading the country.

    4. SK

      I'm not sure I would think of it that way. The way I see it is that America has been blessed to have the leading technology ecosystem of the world, and that ecosystem is in an intense competition right now. I think we could have easily lost that competition, it's still a very, very close race, and we need to do everything we can to protect, preserve, and extend our lead. But at the end of the day, if you look at this administration, we are still trying to make sure that we serve the American worker and the American workforce. If you look at the action plan, that is at the heart of everything we do. So I don't think I see it exactly the way you describe it. I see it more as: we have a technology ecosystem that is the envy of the world. The president, by the way, when he was on stage yesterday, talked about a lot of the inventions the United States has made, right? We did the integrated circuit. Shockley invented the transistor here. We had the Fairchildren. The internet came from us. We did PageRank and Google. We did the iPhone in Cupertino. So many of these things that the world winds up using. So what do we do to make sure we preserve that lead, especially when it comes to AI? And if you look at AI, there are so many potential timelines that AI could take. I have read AI 2027 from Daniel. I have read much more optimistic takes on AI. I think there's going to be an event horizon beyond which you and I can have reasonable discussions on how AI could play out. But in any one of those scenarios, I want to make sure that the United States is well positioned, where we can take advantage of the productivity and the science and technology breakthroughs that are going to happen, and then be set up for whatever happens next. So I'm not sure I answered the question the way you phrased it.

    5. SG

      No, no, no. You did. I was trying to ask the question in a bit of a triggering way, because I think a lot of people would say it shouldn't just be driven by the technologists, and what good does that do us in winning the AI race? But I think that's actually a really profound claim you made, which I hear as: the country that builds the most capable AI systems gains a lot of upstream control and influence that has traditionally been very American, right? And we should all care about that. You know, you use the examples of accelerating life sciences, new materials, optimizing industry, being more efficient in healthcare and education, things that matter to every American and to compounding national wealth. So I actually think that sometimes a lot of this discussion becomes an argument about which parties have influence, versus what position we want to have as a country, and whether or not we want that edge.

    6. SK

      That's fair. And I think very simply, we want to win.

    7. SG

      Yeah. So I- I have two questions for you before we run out of time.

  12. 32:33 – 36:07

    Understanding the Argument Against Open Source Models

    1. SG

      One is just going back to this idea of you being a strong proponent of open source and open weights. What is the strongest counterargument to the people who would raise the concern that p(doom), the probability of some sort of key-man risk or some form of abuse of these powerful models, increases with open source models?

    2. SK

      If you look at the action plan, it's kind of a manifestation of how we think about things, right? We don't talk a lot about risk; we talk a lot about having systems in place to identify cyber risks, bio risks, et cetera. I think the difference from the Biden administration, or the folks who talk about p(doom) a lot on LessWrong, is that we are just inherently more optimistic. If folks haven't seen it, I encourage them to watch the vice president's speech in Paris, where he said, look, we want to embrace AI with optimism rather than fear. And I think one of the things that happened is that a lot of fear was, I would say, mistakenly placed on open source. There are two kinds of fears people talked about. One was what you mentioned: what are the risks if these models could do really bad things? The second was: are we actually giving away our secrets to China? What DeepSeek showed us is that China is building these models just fine all by itself, and it was actually American models which were far behind. So that argument immediately got refuted. On the p(doom) question, I think that's a perfectly fair question, and we need to be vigilant about it, and the action plan talks about it. But we have to remember we are in a race with China, and there are going to be catastrophic consequences if Chinese models are running on every robot, every camera, every car, and every device around the world. We just have to face that reality.

    3. EG

      I think the people driving the p(doom) arguments are also, to some extent, coming from one or two large companies that have closed source models. So I think we forget the incentives of who's actually pushing for this. It's a very traditional form of regulatory capture: a big pharma company works with the government to prevent other entrants into the industry, and that's at least partially what it felt like was happening in the AI world. Now, that may be for perceived altruistic causes or other things, worry about humanity, but I do think the reality is that a small number of companies have been pushing this narrative pretty strongly that open source is bad, and these are the companies that control the closed source models.

    4. SK

      Oh, absolutely. I think there are a few things going on. One is people pushing for regulatory capture. Second is obviously the schools of thought from effective altruism and a lot of people worried about this, all mixed together. Here's my rebuttal to that. One of the things open source software has shown us on the internet is that, by default, open source is just safer and more secure. What does Linus's law say? "Given enough eyeballs, all bugs are shallow." And over the last 20 years, what has the security industry learned? The more scrutiny you put your libraries through, the more scrutiny you put your browser rendering engines through, the safer they become. We have seen that time and time again. And I think the same holds true for open source and open weights. If you have a model up on Hugging Face and somebody downloads a 500-gigabyte file, and there are thousands of students and researchers just pounding away on it, I think there's a good chance they're going to find issues a lot better than a very small safety team inside a large lab. So I'm a big fan of the idea that open source can be a lot more secure than closed source as well.

    5. NA

      Awesome. Thanks so much, Sriram.

    6. EG

      Thank you, Your Excellency.

  13. 36:07 – 36:47

    Conclusion

    1. EG

      Your Governorship? Your Grace? Again, I'm not sure what the right title is. Your Policy Advisorship?

    2. SK

      Feel free, Elad, right? You know, the more inflated, the better it helps my ego. So thank you.

    3. EG

      (laughs)

    4. NA

      It was Your Excellency. That's what we started with.

    5. EG

      We really appreciate the time today, Your Sriramship, so thank you for joining.

    6. SK

      Thank you so much. It's such an honor, and I love the work you folks do. Thanks for having me.

    7. NA

      Find us on Twitter @nopriorspod. Subscribe to our YouTube channel if you wanna see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.

Episode duration: 36:47


Transcript of episode l8fG5DcjucA
