
The Current Reality of American AI Policy: From ‘Pause AI’ to ‘Build’

a16z General Partners Martin Casado and Anjney Midha join Erik Torenberg to unpack one of the most dramatic shifts in tech policy in recent memory: the move from "pause AI" to "win the AI race." They trace the evolution of U.S. AI policy, from executive orders that chilled innovation to the recent AI Action Plan that puts scientific progress and open source at the center. The discussion covers how technologists were caught off guard, why open source was wrongly equated to nuclear risk, and what changed the narrative, including China's rapid progress.

The conversation also explores:
- How and why the AI discourse got captured by doomerism
- What "marginal risk" really means, and why it matters
- Why open source AI is not just an ideology, but a business strategy
- How government, academia, and industry are realigning after a fractured few years
- The effect of bad legislation, and what comes next

Whether you're a founder, policymaker, or just trying to make sense of AI's regulatory future, this episode breaks it all down.

Timecodes:
0:00 Introduction & Setting the Stage
0:47 The Policy Shift: From Fear to Action
1:47 The Pause AI Movement & Industry Response
2:28 Historical Parallels: Internet vs. AI Regulation
3:34 The SB 1047 Bill & Cultural Shifts
6:28 Open Source AI: Risks, Debates, and Misconceptions
13:39 The Chilling Effect & Global Competition
18:55 Changing Sentiments: From Caution to Pragmatism
21:18 Open Source as Business Strategy
28:45 The AI Action Plan: Reflections & Critique
32:41 Alignment, Marginal Risk, and the Future

Resources:
Find Martin on X: https://x.com/martin_casado
Find Anjney on X: https://x.com/AnjneyMidha

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see a16z.com/disclosures.

Martin Casado (guest) · Anjney Midha (guest) · Erik Torenberg (host)
Aug 15, 2025 · 41m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–0:47

    Introduction & Setting the Stage

    1. MC

      And so we've been through all of these tech waves, and we've learned how to have this discussion in a way that, that for the United States' interest balances these two things. And if we're gonna make a departure from a posture that was developed over 40 years, we better have a pretty damn good reason.

    2. AM

      Today, a new frontier of scientific discovery lies before us. You can sometimes judge a book by its cover [chuckles] and I think this was a strong start. [upbeat music]

    3. ET

      Okay, so we're talking, uh, a, a week or two after the, uh, action plan, uh, has, has been announced. Um, looks like we've come a long way. Why, why don't we, uh, trace... You guys have been on the front lines for, for years now in, in, in this discourse fighting, you know, to, to, to make this possible. W- why don't we trace where we've been so that we could then understand what, you know, how we got here and where we're going?

  2. 0:47–1:47

    The Policy Shift: From Fear to Action

    1. MC

      I mean, under the Biden administration, we had the executive order, um, which was basically the opposite of what we're seeing today. I mean, it was trying to limit innovation. It was doing a bunch of fear-mongering. But to me, what was even more striking was not regulators being regulators.

    2. ET

      Right.

    3. MC

      You'd expect that. But if you remember, Anj, and this is why we got involved, is you'd have these politicians, you know, making recommendations, which is fine, you'd expect that, but nobody was saying anything. You know?

    4. ET

      Hmm.

    5. MC

      It was like academia was silent.

    6. ET

      Right.

    7. MC

      The startups were silent. And if anything, like, the technologists were kinda supporting it. So we were in this super backwards [chuckles] world-

    8. ET

      Right

    9. MC

      ... where it was like innovation is bad or dangerous, and we should regulate it, we should pause it, you know. There was this discourse and it w- and it was, like, somewhat fueled by tech as opposed, you know. And then nobody was going against it. And so I think today we should definitely talk about the action plan. It's great.

    10. ET

      Yeah.

    11. MC

      But we should also talk about how, like, the entire industry has kind of come around to say like, "Listen, we need to keep these things in check. We need to be sensible-"

    12. ET

      Yeah

    13. MC

      ... when thinking about it."

    14. ET

      I mean, pause

  3. 1:47–2:28

    The Pause AI Movement & Industry Response

    1. ET

      AI, that was two years ago?

    2. MC

      Um-

    3. ET

      Remember the, the big, uh, sort of, you know, all the CEOs signed this petition, um-

    4. MC

      I mean, this-

    5. AM

      Oh, yeah, I think that was the last AI action summit, right? Uh, the one before Paris. That was about a year ago.

    6. MC

      There's been s- guys, there's been so many of these. [laughs]

    7. ET

      Yeah, I've lost track. I've literally lost track.

    8. MC

      Like, like, really. No, no. R- remember, like, what, what was the, the, like, the, um, uh, Dan Hendrycks's, um, CAIS? What was the, the, the, the California AI...

    9. AM

      The Center for AI Safety.

    10. MC

      Center for AI Safety. That's it.

    11. AM

      That's-

    12. MC

      Center-

    13. AM

      The nonprofit. Yeah, yeah, yeah.

    14. MC

      Yeah. And then they got, like, all of these, like, um, people to sign this list, you know, like, we ne- need to worry about the existential risk of AI. And, like, that

  4. 2:28–3:34

    Historical Parallels: Internet vs. AI Regulation

    1. MC

      was the mood. It was almost like... Like, can I just do something quick? By contrast, right? So I, I was, you know, there during kind of, like, the early days of the web and the internet, and at that time you actually had examples of the stuff being dangerous, right? Like Robert Morris, like, let out the Morris worm. It took down critical infrastructure. We had... So we had new types of attacks. Um, we had viruses. We had worms. We had critical infrastructure. We actually had a different doctrine for the nation which said, you know, the more we get on the internet, the more vulnerable we are. So instead of, like, mutually assured destruction, we have this notion of asymmetry. So there was all of these great examples of why should we, we be concerned. And what did everybody else do? Pedal to the metal. [chuckles]

    2. ET

      Right.

    3. MC

      Invest more technology. This is great. And so, like, you know, we were still at the time, like, we wanted the internet. We wanted to be the best. We wanted to build it out. You know, the startups were all over it, and, and, and coming into this AI stuff two years ago, it was the opposite, which is, like, there were the concerns with new technology, which you always have, but no... Like, there were very few voices that were like, "Actually it's really important we invest in this stuff," and things. So that's kinda, to me, the bigger change is this more cultural change.

  5. 3:34–6:28

    The SB 1047 Bill & Cultural Shifts

    1. AM

      I think that's right. I, I, I, I... There was a moment in, I think it was last summer, where somebody sent you and me a link to the 104- SB 1047 Bill, and I remember Martin and I reacting like, "There's no way this is gonna get any steam." What was absurd to us, I think, was that it made it through the House and the Senate, and it was on its way to a final vote, and would've become law one signature from the governor later.

    2. ET

      Wow.

    3. AM

      And I think there was this escalation where I realized we... The, the, the na- I f- I think... My, my view is that technologists like to technology and polit- politicians like to policy.

    4. ET

      Yeah. [chuckles]

    5. AM

      And we don't... We, we pretend like these two things are in different worlds, and as long as these two worlds don't collide and w- and the engineers get to, like, build interesting tech, and, and we... There- there's no sort of, like, self, uh, own too early in the-

    6. ET

      No

    7. AM

      ... in, in... We, we generally trust in our policymakers, and that changed completely I think last summer, um, whi- which is the v- really weird cultural shift, which is no, no, no, the, the, the... A lot of the policymakers who actually I think were quite open about the fact they didn't know much about the technology, 'cause it was moving so fast, still felt like something had to be done, therefore this is something, therefore it must be good, and that this was this... You know what? I, I think the most egregious example of, of this being adversarial was SB 1047.

    8. ET

      I totally agree.

    9. AM

      But that culture shift was w- one from give, let's let the tech mature-

    10. ET

      Yeah

    11. AM

      ... and then decide how to regulate it later, uh, to, like, before let, let's try to regulate it in its infancy, was, was, like, a ma- massive, I think, shift in, in my, in my head.

    12. MC

      But, but let's just talk about how bad it was. You had VCs [laughs] whose, like, their entire job is investing in tech talking against open source. You know?

    13. AM

      It was absurd.

    14. MC

      Like Vinod, Founders Fund, they're like, "Open source AI is dangerous. It gives China the advantage."

    15. AM

      Right.

    16. MC

      And there was just some sort of prognostication that if we didn't do open AI, like, the Chinese would somehow forget math and not be able to create models, and then you forward by a year and they've got the best models by far and we're way behind. So it was like, it was like the people that are supposed to be protecting the US innovation brain trust were somehow on the side of the let's slow it down.

    17. AM

      Right.

    18. MC

      And I think that now there's this realization of actually China's really good at creating models-

    19. AM

      Right

    20. MC

      ... and they've done a great job. We've kind of hamstrung ourself from whatever discussion we were having, you know, which... And I think you're right. I think we're just-Like, it's good to be con- concerned about the dangers and job risks, but it has to be a fulsome discussion. You need both sides. And when, and when you and I jumped in, it just didn't feel fulsome at all. It was like one side was dominant, and there was almost no one on kind of the pro-tech, pro-innovation, pro-open source side.

    21. AM

      I just think it didn't feel grounded in empirics, right?

    22. MC

      No. Well, certainly not that. [laughs] It came from, you know-

    23. ET

      So what is the steel man of the, the critique of, of open source that they were making,

  6. 6:28–13:39

    Open Source AI: Risks, Debates, and Misconceptions

    1. ET

      uh, a coup- couple years ago?

    2. MC

      That it-- You know, this is like a nuclear weapon. Would you open source your nuclear weapon plans? Would you open source your F-16 plans? So the idea was that somehow, like, this was like... And, and, uh, you know, nuclear weapons are not dual use. Nuclear energy is dual use, right? An F-16 is, is not dual use like a jet engine is dual use. But a lot of the analogies that were used at the time were something that, you know, if you squint one way, parts of it are dual use, they could be used for good or for bad. But, like, the examples were clearly the weapons, and that's what they would say. They would say, "Listen, these things are incredibly dangerous. Would you open source, like, whatever the plans for an F-16?" And then, you know, the other side would slowly decide that, like, this conversation is ridiculous. We gotta go ahead and stand up and say, you know, "No, you would not do this for an F-16 because-

    3. AM

      Right

    4. MC

      ... that is a fighter, you know, jet. And however, like, a lot of the technologies used to build it, yes, this is, you know, fundamental. It's not like people aren't gonna figure out anyways, and we need to be the leader just like we were the leader in nuclear, and we were... Then by the way, in nuclear, like, if you go historically, when that came out, we invested incredibly heavily in it. The things that we thought were proximal to weapons, of course, we made sensitive. You know, but this, you know, all the universities were involved, like, the entire country had the, the discourse, and that just wasn't what was happening.

    5. AM

      I, I, I think that that's true. They were basically like there was a substantive argument against open source, and there was an atmospheric one, and the substantive one was like the one Martin mentioned, that, um, the technology was being confused for the applications.

    6. MC

      Hmm.

    7. AM

      Right? And all the, all the worst case outcomes of, of the application or misuses were then being confused-

    8. MC

      But they were also theoretical too.

    9. AM

      And-

    10. MC

      It was just even worse than that.

    11. AM

      Right.

    12. MC

      It was like you're, you're, you're right in what you're saying, but it was like this could potentially create bioweapons. It's funny, we got a bioweapon expert, and he's like, "Well, not really. I mean, like, the difference between, like, a model and, and Google is almost nothing." But, you know, like, that was used as this, you know, straw person argument, and then it could hack into a whole bunch of stuff. Like, nobody had ever done it before, but it was theoretical. So it was like these theoretical arguments that were very specific-

    13. AM

      Right

    14. MC

      ... versus a broad technology.

    15. AM

      That was one, and then the atmospherics were there was a famous former CEO who went up in front of Congress and, and literally in a testimony said, "The US is years ahead of China."

    16. MC

      Yeah.

    17. AM

      "And so since the, these, these are nuclear weapons and the misuses were being conf- confused with the technology, and we're so far ahead, let's lock it down so we can maintain that lead, and therefore our adversaries will never get their hands on it," which were both just fundamentally wrong.

    18. MC

      Which is-

    19. AM

      For the reason Martin said, like, co- substantively, AI was not introducing new marginal risks. So if you did an eval on how much easier it is-

    20. MC

      Well, at least, at least not identified at the time. I mean-

    21. AM

      Not at the time

    22. MC

      ... I mean, you would go to Dawn Song, who is, like, a safety researcher, MacArthur Genius Fellow at Berkeley, and you'd say, "What are the mar- marginal risks of AI?" She'd say, "Great question. We should research this."

    23. AM

      That should be a good research problem, yeah.

    24. MC

      Like, literally the world expert-

    25. AM

      Right

    26. MC

      ... on this question was like, "This is a very important, but it's an open research statement."

    27. AM

      Yeah. So, so, so no empirical evidence at the time that this was, that AI was creating net new marginal risks and just factual inaccuracies that we were ahead of China. Because if you just paid attention to what was happening, DeepSeek had already started to publish a, a fantastic set of papers, including DeepSeekMath, uh, V2, which came out last summer. And you're like, "Okay, obviously these guys are clearly close to the frontier. They're not that, they're not years behind." And so when R1, DeepSeek R1 came out earlier this year, you know, a lot of Washington was, like, shocked. "Oh my God, they're..." Like, "How did these folks catch up? They must have stolen our weights." And like, no, actually, it's not that hard to distill on the outputs-

    28. MC

      [laughs]

    29. AM

      ... of our labs.

    30. MC

      Have you actually looked at the author list of any paper in AI? [laughs] Like, where do you think these people come from?

  7. 13:39–18:55

    The Chilling Effect & Global Competition

    1. MC

      of capacity. And so, you know, basically it would move the conversation to the courts and outside of policy, which is again, historically, we've taken a, a policy position on these things which follows precedence that we understand, you know, to make sure that we don't introduce externalities, like for example, allowing, you know, China to race [chuckles] ahead-

    2. AM

      Right

    3. MC

      ... with open source, which is, you know, which has happened.

    4. AM

      And the key thing is y-y-by moving it to the court, even if you don't, you could-- one could argue, "Oh, Anj, but like, sure, it's moving to the courts. That means it's open for debate. It's not clear that open weights are going to be regulated with liability." The point is that that creates a chilling effect. The chilling effect is the idea that-

    5. ET

      Of course

    6. AM

      ... when an, when our best talent is considering-

    7. MC

      I could, I could be sued. Like I'm, I, I like, you know, I'm a random kid-

    8. AM

      Right

    9. MC

      ... in Arkansas developing something. Like I, I don't want to be in a world where-

    10. AM

      Right

    11. MC

      ... [chuckles] it can be resolved in the courts.

    12. AM

      Right.

    13. MC

      Hey, I ca- I can't even afford, you know, whatever.

    14. AM

      And in a situation where you have an entire nation-state backed entity like China ma- actually doing the opposite of a chilling effect, right? Encouraging a race to the frontier. Why on earth would we want... You know that there's this meme of a guy on a bike and he picks up a stick and puts it into his front wheel [laughing] and pedals forward?

    15. ET

      Yeah.

    16. AM

      That's the effect of a chill. That, that is what chilling effect is, right?

    17. ET

      Yeah.

    18. AM

      At a time when your, your primary adversary is, is racing.

    19. ET

      So let's trace how the conversation has changed because we don't see Vinod tweeting about o-open source anymore. Obviously, OpenAI has changed their tune, especially right now. What, um... Is it really just DeepSeek? Is, is that... Or, or how do you trace kind of how, how the sentiment shifted on open source?

    20. MC

      Let's, let's go through a few theories. I'm not really sure what happened. I almost felt like it was almost culturally in vogue to be a thought leader on the negative externalities of tech, and it kinda started with Bostrom, but it was picked up by Elon, it was picked up by, um, uh, Moskowitz. I mean, a bunch of like these intellectuals that like we all respect and still do. I mean, they've, they're just really the titans of our industry and our era. They were asking these very interesting intellectual ex- uh, questions around like, do we live in a simulation? What happens if AI can recursively, uh, self-improve? And then actually, you know, they created whole kind of cultures and online social discourse around this stuff. And so I think to no small part, that became a bit of a runaway train, and it's just catnip to policymakers.

    21. AM

      Yeah.

    22. MC

      You know, and so I, I think part of it is like people didn't really realize [chuckles] that this-

    23. AM

      Yeah

    24. MC

      ... had become so real because, of course, GPT-2 comes out and then 3 comes out and like all this stuff's amazing, and somehow it got conflated. So I, I think part of it is just a path dependency on, on where we came from, which is kind of the legacy of Bostrom. I think that was part of it.

    25. AM

      I think the ungenerous approach would be, would be that there was a lot of... Discourse is awesome, but a lot of the people pushing the discourse were first-order thinkers. They weren't doing the math on, wait, wait a minute, if policymakers who have no, um, background in frontier AI, which by the way nobody does 'cause this space is only three, four years old, start to bu- take discourse as canon, which is a big difference, then what happens? What are the second and third-order effects? And the second and third-order effects that, are that you start making laws that are really hard to undo and, and start mistaking interesting thought experiments as the basis for policy. And once that happens, those of us who've... Look, law, law is basically code. Code is, code is hard to refactor. Law is like impossible to refactor [chuckles] . And so I think the second and third or- third-order effects was that were of a lot of well-intentioned folks, for example, in the existential risk community saying, "Look, if you're intellectually honest about the rate of progress of AI, it's not crazy to say that there are some existential risks on the technology. It's non-zero." Sure. Yes, that is true. But then to then say that that threshold is high enough to start introducing nash- sweeping changes in regulation to the way we create technology, that leap I don't think a lot of the early proponents of that technology realized they would do that. In fact, I think Jack Clark, who runs policy for Anthropic, literally tweeted like towards the end of the SB 1047 saga, he was like, "I guess we, we should have-- We didn't realize the impact of how far this could have gone." And, and I think to those of us who had interacted with DC before and regulation before, it-- Like the second and third o- third-order effects were much more discernible or legible. And then I think what DeepSeek did was just made it super legible to everybody else. So I think-

    26. ET

      Yeah

    27. AM

      ... they were already... Like I think DeepSeek was the catalyst.

    28. ET

      Right.

    29. AM

      But it, it wasn't like there was a step... It, it didn't change the reality that the second and third-order effects of policymakers confusing sort of like discourse for fact-

    30. ET

      Yeah

  8. 18:55–21:18

    Changing Sentiments: From Caution to Pragmatism

    1. ET

      M- Mark had this sort of, the Baptists and bootleggers-

    2. MC

      Yes, I, I was just gonna say. Exactly. Yeah

    3. ET

      ... true believers and then, um, sort of people who u- use the sort of that thinking for, to support their own ends. And it, and it seems like that's changed even just on the company-

    4. MC

      But, but the re- but the reality is I think it was driv- I, I think the majority of people are, are neither.

    5. ET

      Yeah.

    6. MC

      The mar- the majority of people are pragmatists-

    7. ET

      Right

    8. MC

      ... that are not trying to take advantage of the system, that think, "Well, maybe if we have this discourse, it's an honest discourse, and then we'll self-police." And then I just feel like the silent majority was not part of the discussion. Maybe the biggest change now is, like, those people are there. Like, the founders are there. Academia is there. VCs are there. Now, now the people that are not either Baptists or bootleggers are driving the discussion, which I actually is in- independent of the action plan itself, I feel much more in a better position now. Like for example, there's still a bunch of stupid regulation that's popping up, but I'm not calling Anj at night and think, "We have to do something now," 'cause I feel like, okay, there's actually representation-

    9. ET

      Yeah, right

    10. MC

      ... that's sensible, where at the time there was none.

    11. ET

      Right.

    12. MC

      And I, I think to move, you know, to the, to the action plan, I think this is a great... Like, if you read the first page, right, what a marked shift the fact that the co-authors include technologists.

    13. ET

      Yeah.

    14. MC

      Right? It's... And, and I think that was the core problem is DC is a system, like a self-contained system, and the valley is a self-contained system, and I think a lot of the people here were assuming best intentions over here and vice versa.

    15. ET

      Yeah.

    16. MC

      And what happened is a few bad actors essentially used that arbitrage opportunity to represent Silicon Valley's views incorrectly in DC. And when we saw some of the legislation, we had policymakers calling us up and saying, "Wait, you guys aren't happy with 1047? But, uh, the guys, you're, the other tech people were calling us and saying you'd love more of this kind of regulation." We said, "What other tech people?" And it turns out we are not one homogenous group. Little Tech is extraordinarily different from Big Tech, which is extraordinarily different from the academic communities. And I, I think one of the things we had to contend with was, like, we used to be one shared culture.

    17. ET

      Right.

    18. MC

      And then when tech grew, it, we actually, th- there are some major differences in, in the Valley at least between party... We, we're not one tech ecosystem anymore.

    19. ET

      Yeah.

    20. MC

      We have different interests. And DC hadn't updated that. And, and I think what's amazing about the action plan is it's written by people who have bridged both-

    21. ET

      Yeah

    22. MC

      ... with enough representation across, like, the four or five different subcultures within tech who have different interests.

    23. ET

      Great.

    24. MC

      I think that's new.

    25. ET

      Yeah, yeah. Going back to, to op- open source, why don't you talk a little bit about just sort of the how different companies...

  9. 21:18–28:45

    Open Source as Business Strategy

    1. ET

      Uh, help us make sense of how different companies have, have, have thought about it or from a sort of, uh, business strategy perspective. You know, I mean, we saw Meta with maybe the first big o- o- open source push. Um, you know, OpenAI has sort of evolved there too.

    2. MC

      Right.

    3. ET

      And I, I've seen even Anthropic seems to, being involved in their dialogue, um, a, a, a little bit. Um, how should we think about open source as a, as a, as a business strategy in terms of what, what, what's changed here and, and why?

    4. MC

      Oh, look, I, I don't think this is... This part is actually, like, is, is playing out beautifully along the same trend lines of all pr- p- previous computing infrastructure, databases, analytics, operating systems. Like Linux, the, the way it works is the closed source pioneers the frontier of capabilities, it introduces new use cases, and then the enterprises never know how to consume that technology, and when they f- do figure out eventually that they want cheaper, faster, more control, they need somebody like a Red Hat to then introduce them and, and provide solutions and services and packaging and forward deployed engineering and all of that around it. And which is why the arc generally in enterprise infrastructure has been closed source wins applications and open source tends to do really well in infrastructure, especially in large government customers, regulated industries where there's a bunch of security requirements, things need to run on-prem, the cu- the customer needs con- total control over it. Broadly, you could call that the sovereign AI market right now. Lots of governments and lots of in- legacy industries are going, "Wait, this open source thing is really critical to us." So I think whereas two, three years ago it was v- open source was viewed as, like, this, like, largely philosophical endeavor, which it is. Open source has always been political and philosophical by definition, but now there's an extraordinary business case for it, which is why I think you're seeing a lot of startups and companies also changing their posture 'cause they're going, "Wait a minute, some of the largest customers in the world, enterprise customers happen to be governments, and happen to be legacy industries, and Fortune 50 companies, and they want stuff on-prem," and that's when you go adopt open source. I say I think there's been a business shift as well. I don't know if you'd agree. Yeah, this is great. I, I, so I totally agree. I, I do think it's interesting to have the conversation where it's the same and where it's different. Like, everything Anj said is exactly right, which is we have a very long history with open source, and it's a very useful tool for businesses, but also for research and academia, et cetera. But let's just talk about businesses and startups, right? It's a great way to get a distribution advantage. It's a great way to enter a market where you're not an incumbent and you're a startup. So it's just kind of one of the tools for building in software that's been used, and open source has been used in, in a very similar way, right? I mean, you can use it for recruiting, you can use it for brand, you can use it to get distribution, and we see all of that. But there's something that's unique about AI that software doesn't have, and, like, we're seeing very viable business models come out of it that don't have the limitations of traditional software. And, and this is for two reasons. One of them is, like, open weights is not the ability to produce the weights.

    5. ET

      Right.

    6. MC

      And open software is the ability to produce the software. Like, if, if you give me open software, I can compile it, I can modify it, whatever. But giving open weights, you don't have that. You don't have the data pipeline, you know, when you're talking about open weights. So you don't actually enable your competitors in the same way open software enables it. So that's one. The second one is, is this is very nice business model that's kind of a peace dividend to the rest of the industry, which is you, you

    7. MC

      Open, you produce open weights to your smaller models that anybody can use, but the larger model you keep internally, which is, uh, actually also more difficult to operationalize for inference, right? I mean, there's kinda good reasons to do this. Um, and then you charge for the largest model, and then, you know, the, the, the smaller open models you use for brand or distribution-

    8. AM

      Right

    9. MC

      ... or whatever. And so I, I feel like it's actually almost an evolved from a, a, from a business strategy and an industry perspective version of open source-

    10. AM

      Yeah

    11. MC

      ... for these reasons.

    12. AM

      I think it's the AI flavor of open core, which was historically a theoretically sup- was supposed to be a theoretically sort of sustainable model for open source software development, which, which was really hard to implement 'cause of the reasons Martin said, where once you gave away the code, it was really hard for you to protect your IP. But with weights, you can contribute something to the research community, you can give developers control, you can allow the world to red team it and make it more secure while you're still able to actually, because of the way distillation works and some of the ways like post-training works, you can still actually hold onto some of the core IP, which then allows you to build a viable, sustainable business. And that is unique about open, open source.

    13. MC

      But also you have the data pipelines, you have-

    14. AM

      Right

    15. MC

      ... the data. Like, I mean, nobody else could r- just 'cause I give you the we- weights doesn't mean you can recreate-

    16. AM

      Right

    17. MC

      ... the model. Like, you could distill it to a subset model. There's a bunch of stuff you can do, but not necessarily recreate it. And so I, listen, ha- having been kind of a student of open source business models for 20 years and have watching, you know, it, it shaped the way that the, the industry has adopted and built software, I actually think that the, the AI one is, is more beneficial s- to the companies doing it for sure. Um, but as a result of that, we're gonna continue to see a lot of it.

    18. AM

      Yeah.

    19. MC

      And so I think we should just kind of assume that open source is part of it, and every country is gonna do it. And one of the best things about this current AI action plan is it acknowledges that, and it wants to incent the United States to be the leader in it-

    20. AM

      Yeah

    21. MC

      ... which is such a dramatic shift from where we were this time last year.

    22. AM

      Yeah. There, there's sort of an ecosystem mindset that people who, if you've worked in any kind of developer business, which Martin and I unfortunately have spent, you know, way too long doing, to, you know, working on dev infrastructure and dev tools, but you, you s- sort of internalize this idea that w- when if you-- It's often, you have to often sort of trade off short-term revenue for long-term ecosystem value, right? And I think what this, the action plan shows is that ye- yes, in the short term, it may seem like we're giving away IP to the rest of the world by open sourcing weights and showing the rest of the world how to create reasoning models and all of this stuff. But in the long term, if every other major nation is running their entire AI ecosystem on the back of American chips and American models and American post-training pipelines and American RL techniques, then that ecosystem win is orders of magnitude more valuable than any short term sort of give of IP, which anyway as it, as we, as we saw with DeepSeek, that, that marginal headstart is, is, is minimal.

    23. ET

      Okay, so just to close the loop on open source, o- over the next several years, h- how do you predict o- open source and closed source will, will intersect? Like, what will the industry look like?

    24. AM

      Yeah. Well, I think these are two different markets.

    25. ET

      Yeah.

    26. AM

      I mean, like l- literally the requirements of the customers are completely different, right? So if you're a developer, you're building an application, and you happen to need the latest and greatest frontier capabilities today, you have a different set of requirements than if you're a nation state deploying, like, a chat companion for your entire employee base of, like, 7,000 government employees, and you need... A- a- and the product requirements, the shape of how you provide those, do you deploy them, the infra, the service, the support, and then the revenue models are completely different. And so often I think people don't realize that closed source and open source are not just differences in technology, but completely different markets altogether. They serve different types of customers. And so I, and I, and I think if you believe AI is this sort of explosive new platform shift, then there'll be winners in both. I do think what we need to contend with is that it seems like it's getting harder and harder to be a category leader if you don't enter fast. Like, the, the speed at which a new startup is able to enter the open source or the closed source market and create a lead is absurd, right? We're both, we both have, have the chance to work with founders who are, I mean, literally a, you know, 20-something-year-olds out of college, two years out of college, building revenue run rate businesses in the tens to hundreds of millions of dollars serving both, both of these markets expanding like this. And so I, I think the, the biggest mistake is to confuse these two markets as one-

    27. ET

      Yeah

    28. AM

      ... and to do the classic like, "Oh, let's wait to see how they evolve," because the, the pace at which a new entrant is able to actually create a, a lead in the category is, is quite stunning.

  10. 28:45–32:41

    The AI Action Plan: Reflections & Critique

    1. ET

      Let's go into the action plan. Um.

    2. AM

      Right.

    3. ET

      What, what are our biggest r- r- reflections from it? W- where are we most excited?

    4. AM

      If you look at the quote that they start with, um, I wanted to read it out 'cause I thought it was pretty poignant. It was, "Today, a new frontier of scientific discovery lies before us." And I thought that first opening line was fantastic out of all the things they could have said. You know, w- they could have said, "We're in a nuclear... We, we're in an arms race," which, which sure, the first page, the, the-

    5. ET

      Yeah

    6. AM

      ... title says, "Winning the Race." But if you actually start reading the document, the first sentence is a, is a quote from the president that says, "Today, a new frontier of scientific discovery lies before us." And I, I love that they led with something inspirational.

    7. ET

      Yeah.

    8. AM

      Because ultimately, the technology has to confer some benefits on humanity. And I, and I personally, I just love the i- the fact that we are just starting to explore what these frontier models mean for scientific discovery in physics, in chemistry, in material science, and we need to inspire the next generation to wanna go into those areas 'cause it's hard.

    9. ET

      Yeah.

    10. AM

      It's really hard to do AI in the physical world. You have to literally hook up wet labs and start doing experiments in an entirely new way, and you need people who are excited not only about wanting to do machine learning work, but also the hard work of, of, of, of being lab technicians and running experiments and chem-- Like, literally pipetting new materials and, and chemistry, right? And that, um, I, I think was missing, uh, in a lot of the discourse under the previous administration. So I was, I was... You can sometimes judge a book by its cover. [chuckles]

    11. ET

      [chuckles]

    12. AM

      And I think this was a strong start. And now I think we should, we should actually, like, dive into some of the bullets.

    13. ET

      Yeah.

    14. MC

      Okay. So the other one that I, I thought was a huge omission is there's basically no real mention of investing in academia. Like there's, you know, some oblique references to it, but it's just been such a mainstay of innovation, computer science [chuckles] -

    15. AM

      Right

    16. MC

      ... of the last 40 years. Not having a, a major part of it, I think it's a shame. And I understand that right now there's kind of a standoff between higher ed and, and, and the administration. I get it, and I, uh, and I actually think that both sides actually have fairly reasonable points. Um, but, you know, to have a major tech initiative without including academia just feels like we're, you know, what is it? Fighting a battle with a hand behi- tied behind our back, like some aphorism. So...

    17. AM

      Th- this is a good problem to have, which is that I think it's ext- extremely ambitious. It's a little bit light on execution details, right? Which is what happens next? Um, so a good, a good example of that is I think I did-- I, I do think directionally it w- it was great that they said, "We need the..." L- Let's read this bullet point, um, on build an e- AI evaluations ecosystem. I love that because, um, it acknowledges that, hey, before we start actually passing grand proclamations of what these models are risky, uh, or, or whether these models are dangerous or not, let's first even agree on how to measure the risk in these models before jumping the gun. Um, that, that part, I think in addition to the open source bullet, was probably, I thought, the most sophisticated thinking I've seen in any policy document. And look, the reality is America leads the way, and so every other... W- You know, within 24 hours of, of this dropping, Martin and I were getting texts and messages from folks in many other governments around the world going, "What do you guys think?" And I, and I-- It, it was not hard for me to endorse it and tell them, like, "Look at it as a reference document because there are things here that, um, arguably are more sophisticated than policy, um, experts even in, in Silicon Valley would recommend." Because building an AI evaluations ecosystem is not easy, and they, I think lay out a pretty thoughtful proposal on, on, on the fact that that's important. Now the question is how? And I, I think that's what we have to help DC with, the hard work of like implementing this stuff. Um, but the vibe shift going from let's not jump the gun on saying these models are dangerous, let's first talk about building a, a scientific grounded framework on how to assess the risk in these models, to me was, um, was not at all a given, and I, I was really excited about

  11. 32:41–41:49

    Alignment, Marginal Risk, and the Future

    1. AM

      that.

    2. MC

      Yeah.

    3. ET

      There's been a lot of focus in the last few years by, by several companies, but also by, by the broader industry around this idea of alignment. Um, have we made any progress on alignment? Or h- h- h-what is your sort of perspective of, of what are they trying to do? Is that a feasible goal? Um, he- help us understand what, what they're trying to solve for.

    4. MC

      So at an almost tautological level, alignment's an obvious thing you'd wanna do. I have a purpose. I want to align the AI to this purpose, and it turns out these models are problematic, generally unruly, chaotic, whatever, uh, adjective you wanna use. And so like, you know, understanding how to better align them to any sort of stated goal is, is very obviously a good thing. And so I think we'd all agree that alignment to whatever the goal is to make it more effective that, that goal and do that thing is good, especially given these models who ha- tend to have a mind of their own. The subtext, um, that certainly I bristle to is that, is that the people doing the alignment are somehow protecting the rest of us from whatever they think their ideal is as far as, you know, dangers to me or thoughts I shouldn't have or information I shouldn't be exposed to. Um, which is why I think we need to be, even when we come up with policy, we need to be very careful not to impose like a different set of, um, you know, ideological rules on top of these. I, I, I just, I just think, like, alignment is something we should all understand. Actually aligning them [chuckles] to me is, is, is kind of where I take issue from any sort of, kind of top-down mandate.

    5. AM

      Um, I, I, I agree, and I think, you know, there's a, there's a quote from a researcher, um, who-- which, which I think is very a- accurate, which is you, you gotta think about these AI systems as, as almost biological systems that are grown, not coded up, right? Because sure, they express as software, but in many ways when you're training a model, it i- you are actually growing in, in this environment of, of a bunch of prior history and da- training data, et cetera. And often empirically, you actually don't know what the capabilities of the model are until it's actually done training. So I think that's a useful analogy. Where I think that falls down is when people go, "Oh, well, if it's, if we can't align it because we actually don't know, it, it's a biological mechanism. Until it's grown up, you don't know what its risks are," and so on, then we can't deploy these AI models in mission-critical places until we've solved, let's say, the black box problem, the mechanistic interpretability problem, which is can you trace deterministically why a model did something? We've made a lot of advances as a space in the last few years, but it still remains a research problem. But that doesn't mean just 'cause you, you don't understand the true mechanism of the system doesn't mean you don't unlock its useful value. If you look at most general purpose technologies in history, electricity, n- nuclear fusion, like we-- there, there are many examples of technologies where we knew they were complex systems, and we didn't truly understand at an at- atomistic level or mechanistic level how they work, but we still use them. [chuckles]

    6. MC

      And we don't, we don't understand how the internet works. I mean, like, there's a whole research of network measurements trying to find out what the heck the internet was gonna do. Is it gonna have congestion collapse? I mean, like, you know, any complex system has states that you just don't understand. Now, let's now say these models more so than many-

    7. AM

      Right

    8. MC

      ... and the applications are very real. But like we, we know how to deal with ambiguity and kind of-

    9. AM

      Right. We don't even know how our brains work.

    10. MC

      No, we think- [laughing]

    11. ET

      Our consciousness.

    12. AM

      Yeah.

    13. MC

      That's right.

    14. AM

      And but we, we don't stop working with other human beings.

    15. MC

      Unfortunately, we're stuck with them.

    16. AM

      Yeah. [laughs]

    17. MC

      So like we, we have no option on that one. [laughs]

    18. ET

      Yeah. Totally. Um-

    19. AM

      I mean, I think to extend that analogy, what do you do? You, you're like, "Okay, I don't know how a brain works. It's got a bunch of risks. This person may be crazy, but I still want to unlock all the beautiful benefits of, of the big, beautiful brains that humans have."

    20. ET

      [laughs]

    21. AM

      And so you develop education. You, you send kids to school-

    22. ET

      Yeah

    23. AM

      ... and you teach them values, and then you send them off to college, and then they get to learn something specific, and then you get to test them in the real world environment. They get a resume, and they get work experience, and they get to prove that they actually are within a risk-based framework, manageable and so on.

    24. ET

      Yeah.

    25. AM

      And that, and that as a society has unlocked human capital, right?

    26. ET

      Right.

    27. AM

      Like the great, arguably the greatest technology we've had in, you know, uh, five hundred years of modern industrial innovation. So I, I think what, what I hate about the alignment discourse is it sometimes confuses the, the fact that we don't understand the system for the fact that then we can't use it. And I think, uh, we-- I don't think we've, we've... Like, for a long time, I think mechanistic interpretability, which is kinda like-

    28. ET

      Yeah

    29. AM

      ... some folks would say is the holy grail, is like being able to re-reverse engineer why a model does something, is still a research problem, but that doesn't mean we haven't made progress on how to use unaligned models or to improve alignment to a point where they're useful in massive ways like software engineering-

    30. ET

      Right

Episode duration: 41:58
