a16z
The Current Reality of American AI Policy: From ‘Pause AI’ to ‘Build’
EVERY SPOKEN WORD
45 min read · 9,225 words
- 0:00 – 0:47
Introduction & Setting the Stage
- Martin Casado
And so we've been through all of these tech waves, and we've learned how to have this discussion in a way that, that for the United States' interest balances these two things. And if we're gonna make a departure from a posture that was developed over 40 years, we better have a pretty damn good reason.
- Anjney Midha
Today, a new frontier of scientific discovery lies before us. You can sometimes judge a book by its cover [chuckles] and I think this was a strong start. [upbeat music]
- Erik Torenberg
Okay, so we're talking, uh, a, a week or two after the, uh, action plan, uh, has, has been announced. Um, looks like we've come a long way. Why, why don't we, uh, trace... You guys have been on the front lines for, for years now in, in, in this discourse fighting, you know, to, to, to make this possible. W- why don't we trace where we've been so that we could then understand what, you know, how we got here and where we're going?
- 0:47 – 1:47
The Policy Shift: From Fear to Action
- Martin Casado
I mean, under the Biden administration, we had the executive order, um, which was basically the opposite of what we're seeing today. I mean, it was trying to limit innovation. It was doing a bunch of fear-mongering. But to me, what was even more striking was not regulators being regulators.
- Erik Torenberg
Right.
- Martin Casado
You'd expect that. But if you remember, Ansh, and this is why we got involved, is you'd have these politicians, you know, making recommendations, which is fine, you'd expect that, but nobody was saying anything. You know?
- Erik Torenberg
Hmm.
- Martin Casado
It was like academia was silent.
- Erik Torenberg
Right.
- Martin Casado
The startups were silent. And if anything, like, the technologists were kinda supporting it. So we were in this super backwards [chuckles] world-
- Erik Torenberg
Right
- Martin Casado
... where it was like innovation is bad or dangerous, and we should regulate it, we should pause it, you know. There was this discourse and it w- and it was, like, somewhat fueled by tech as opposed, you know. And then nobody was going against it. And so I think today we should definitely talk about the action plan. It's great.
- Erik Torenberg
Yeah.
- Martin Casado
But we should also talk about how, like, the entire industry has kind of come around to say like, "Listen, we need to keep these things in check. We need to be sensible-"
- Erik Torenberg
Yeah
- Martin Casado
... when thinking about it."
- Erik Torenberg
I mean, pause
- 1:47 – 2:28
The Pause AI Movement & Industry Response
- Erik Torenberg
AI, that was two years ago?
- Martin Casado
Um-
- Erik Torenberg
Remember the, the big, uh, sort of, you know, all the CEOs signed this petition, um-
- Martin Casado
I mean, this-
- Anjney Midha
Oh, yeah, I think that was the last AI action summit, right? Uh, the one before Paris. That was about a year ago.
- Martin Casado
There's been s- guys, there's been so many of these. [laughs]
- Erik Torenberg
Yeah, I've lost track. I've literally lost track.
- Martin Casado
Like, like, really. No, no. R- remember, like, what, what was the, the, like, the, um, uh, Dan Hendrycks's, um, CAIS? What was the, the, the, the California AI...
- Anjney Midha
The Center for AI Safety.
- Martin Casado
Center for AI Safety. That's it.
- Anjney Midha
That's-
- Martin Casado
Center-
- Anjney Midha
The nonprofit. Yeah, yeah, yeah.
- Martin Casado
Yeah. And then they got, like, all of these, like, um, people to sign this list, you know, like, we ne- need to worry about the existential risk of AI. And, like, that
- 2:28 – 3:34
Historical Parallels: Internet vs. AI Regulation
- Martin Casado
was the mood. It was almost like... Like, can I just do something quick? By contrast, right? So I, I was, you know, there during kind of, like, the early days of the web and the internet, and at that time you actually had examples of the stuff being dangerous, right? Like Robert Morris, like, let out the Morris worm. It took down critical infrastructure. We had... So we had new types of attacks. Um, we had viruses. We had worms. We had critical infrastructure. We actually had a different doctrine for the nation which said, you know, the more we get on the internet, the more vulnerable we are. So instead of, like, mutually assured destruction, we have this notion of asymmetry. So there were all of these great examples of why we, we should be concerned. And what did everybody else do? Pedal to the metal. [chuckles]
- Erik Torenberg
Right.
- Martin Casado
Invest more technology. This is great. And so, like, you know, we were still at the time, like, we wanted the internet. We wanted to be the best. We wanted to build it out. You know, the startups were all over it, and, and, and coming into this AI stuff two years ago, it was the opposite, which is, like, there were the concerns with new technology, which you always have, but no... Like, there were very few voices that were like, "Actually it's really important we invest in this stuff," and things. So that's kinda, to me, the bigger change is this more cultural change.
- 3:34 – 6:28
The SB 1047 Bill & Cultural Shifts
- Anjney Midha
I think that's right. I, I, I, I... There was a moment in, I think it was last summer, where somebody sent you and me a link to the 104- SB 1047 Bill, and I remember Martin and I reacting like, "There's no way this is gonna get any steam." What was absurd to us, I think, was that it made it through the Assembly and the Senate, and it was on its way to a final vote, and would've become law one signature from the governor later.
- Erik Torenberg
Wow.
- Anjney Midha
And I think there was this escalation where I realized we... The, the, the na- I f- I think... My, my view is that technologists like to technology and polit- politicians like to policy.
- Erik Torenberg
Yeah. [chuckles]
- Anjney Midha
And we don't... We, we pretend like these two things are in different worlds, and as long as these two worlds don't collide and w- and the engineers get to, like, build interesting tech, and, and we... There- there's no sort of, like, self, uh, own too early in the-
- Erik Torenberg
No
- Anjney Midha
... in, in... We, we generally trust in our policymakers, and that changed completely I think last summer, um, whi- which is the v- really weird cultural shift, which is no, no, no, the, the, the... A lot of the policymakers who actually I think were quite open about the fact they didn't know much about the technology, 'cause it was moving so fast, still felt like something had to be done, therefore this is something, therefore it must be good, and that this was this... You know what? I, I think the most egregious example of, of this being adversarial was SB 1047.
- Erik Torenberg
I totally agree.
- Anjney Midha
But that culture shift was w- one from, let's let the tech mature-
- Erik Torenberg
Yeah
- Anjney Midha
... and then decide how to regulate it later, uh, to, like, let, let's try to regulate it in its infancy, was, was, like, a ma- massive, I think, shift in, in my, in my head.
- Martin Casado
But, but let's just talk about how bad it was. You had VCs [laughs] whose, like, their entire job is investing in tech talking against open source. You know?
- Anjney Midha
It was absurd.
- Martin Casado
Like Vinod, Founders Fund, they're like, "Open source AI is dangerous. It gives China the advantage."
- Anjney Midha
Right.
- Martin Casado
And there was just some sort of prognostication that if we didn't do open-source AI, like, the Chinese would somehow forget math and not be able to create models, and then you fast-forward by a year and they've got the best models by far and we're way behind. So it was like, it was like the people that are supposed to be protecting the US innovation brain trust were somehow on the side of the let's slow it down.
- Anjney Midha
Right.
- Martin Casado
And I think that now there's this realization of actually China's really good at creating models-
- Anjney Midha
Right
- Martin Casado
... and they've done a great job. We've kind of hamstrung ourselves from whatever discussion we were having, you know, which... And I think you're right. I think we're just... Like, it's good to be con- concerned about the dangers and job risks, but it has to be a fulsome discussion. You need both sides. And when, and when you and I jumped in, it just didn't feel fulsome at all. It was like one side was dominant, and there was almost no one on kind of the pro-tech, pro-innovation, pro-open source side.
- Anjney Midha
I just think it didn't feel grounded in empirics, right?
- Martin Casado
No. Well, certainly not that. [laughs] It came from, you know-
- Erik Torenberg
So what is the steel man of the, the critique of, of open source that they were making,
- 6:28 – 13:39
Open Source AI: Risks, Debates, and Misconceptions
- Erik Torenberg
uh, a coup- couple years ago?
- Martin Casado
That it-- You know, this is like a nuclear weapon. Would you open source your nuclear weapon plans? Would you open source your F-16 plans? So the idea was that somehow, like, this was like... And, and, uh, you know, nuclear weapons are not dual use. Nuclear energy is dual use, right? An F-16 is, is not dual use like a jet engine is dual use. But a lot of the analogies that were used at the time were something that, you know, if you squint one way, parts of it are dual use, they could be used for good or for bad. But, like, the examples were clearly the weapons, and that's what they would say. They would say, "Listen, these things are incredibly dangerous. Would you open source, like, whatever the plans for an F-16?" And then, you know, the other side would slowly decide that, like, this conversation is ridiculous. We gotta go ahead and stand up and say, you know, "No, you would not do this for an F-16 because-
- Anjney Midha
Right
- Martin Casado
... that is a fighter, you know, jet. And however, like, a lot of the technologies used to build it, yes, this is, you know, fundamental. It's not like people aren't gonna figure it out anyways, and we need to be the leader just like we were the leader in nuclear, and we were... Then by the way, in nuclear, like, if you go historically, when that came out, we invested incredibly heavily in it. The things that we thought were proximal to weapons, of course, we made sensitive. You know, but this, you know, all the universities were involved, like, the entire country had the, the discourse, and that just wasn't what was happening.
- Anjney Midha
I, I, I think that that's true. They were basically like there was a substantive argument against open source, and there was an atmospheric one, and the substantive one was like the one Martin mentioned, that, um, the technology was being confused for the applications.
- Martin Casado
Hmm.
- Anjney Midha
Right? And all the, all the worst case outcomes of, of the application or misuses were then being confused-
- Martin Casado
But they were also theoretical too.
- Anjney Midha
And-
- Martin Casado
It was just even worse than that.
- Anjney Midha
Right.
- Martin Casado
It was like you're, you're, you're right in what you're saying, but it was like this could potentially create bioweapons. It's funny, we got a bioweapon expert, and he's like, "Well, not really. I mean, like, the difference between, like, a model and, and Google is almost nothing." But, you know, like, that was used as this, you know, straw person argument, and then it could hack into a whole bunch of stuff. Like, nobody had ever done it before, but it was theoretical. So it was like these theoretical arguments that were very specific-
- Anjney Midha
Right
- Martin Casado
... versus a broad technology.
- Anjney Midha
That was one, and then the atmospherics were there was a famous former CEO who went up in front of Congress and, and literally in a testimony said, "The US is years ahead of China."
- Martin Casado
Yeah.
- Anjney Midha
"And so since the, these, these are nuclear weapons and the misuses were being conf- confused with the technology, and we're so far ahead, let's lock it down so we can maintain that lead, and therefore our adversaries will never get their hands on it," which were both just fundamentally wrong.
- Martin Casado
Which is-
- Anjney Midha
For the reason Martin said, like, co- substantively, AI was not introducing new marginal risks. So if you did an eval on how much easier it is-
- Martin Casado
Well, at least, at least not identified at the time. I mean-
- Anjney Midha
Not at the time
- Martin Casado
... I mean, you would go to Dawn Song, who is, like, a safety researcher, MacArthur Genius Fellow at Berkeley, and you'd say, "What are the mar- marginal risks of AI?" She'd say, "Great question. We should research this."
- Anjney Midha
That should be a good research problem, yeah.
- Martin Casado
Like, literally the world expert-
- Anjney Midha
Right
- Martin Casado
... on this question was like, "This is very important, but it's an open research question."
- Anjney Midha
Yeah. So, so, so no empirical evidence at the time that this was, that AI was creating net new marginal risks and just factual inaccuracies that we were ahead of China. Because if you just paid attention to what was happening, DeepSeek had already started to publish a, a fantastic set of papers, including DeepSeekMath, uh, V2, which came out last summer. And you're like, "Okay, obviously these guys are clearly close to the frontier. They're not that, they're not years behind." And so when R1, DeepSeek R1 came out earlier this year, you know, a lot of Washington was, like, shocked. "Oh my God, they're..." Like, "How did these folks catch up? They must have stolen our weights." And like, no, actually, it's not that hard to distill on the outputs-
- Martin Casado
[laughs]
- Anjney Midha
... of our labs.
- Martin Casado
Have you actually looked at the author list of any paper in AI? [laughs] Like, where do you think these people come from?
- 13:39 – 18:55
The Chilling Effect & Global Competition
- Martin Casado
of capacity. And so, you know, basically it would move the conversation to the courts and outside of policy, which is again, historically, we've taken a, a policy position on these things which follows precedents that we understand, you know, to make sure that we don't introduce externalities, like for example, allowing, you know, China to race [chuckles] ahead-
- Anjney Midha
Right
- Martin Casado
... with open source, which is, you know, which has happened.
- Anjney Midha
And the key thing is y-y-by moving it to the court, even if you don't, you could-- one could argue, "Oh, Anj, but like, sure, it's moving to the courts. That means it's open for debate. It's not clear that open weights are going to be regulated with liability." The point is that that creates a chilling effect. The chilling effect is the idea that-
- Erik Torenberg
Of course
- Anjney Midha
... when an, when our best talent is considering-
- Martin Casado
I could, I could be sued. Like I'm, I, I like, you know, I'm a random kid-
- Anjney Midha
Right
- Martin Casado
... in Arkansas developing something. Like I, I don't want to be in a world where-
- Anjney Midha
Right
- Martin Casado
... [chuckles] it can be resolved in the courts.
- Anjney Midha
Right.
- Martin Casado
Hey, I ca- I can't even afford, you know, whatever.
- Anjney Midha
And in a situation where you have an entire nation-state backed entity like China ma- actually doing the opposite of a chilling effect, right? Encouraging a race to the frontier. Why on earth would we want... You know that there's this meme of a guy on a bike and he picks up a stick and puts it into his front wheel [laughing] and pedals forward?
- Erik Torenberg
Yeah.
- Anjney Midha
That's the effect of a chill. That, that is what a chilling effect is, right?
- Erik Torenberg
Yeah.
- Anjney Midha
At a time when your, your primary adversary is, is racing.
- Erik Torenberg
So let's trace how the conversation has changed because we don't see Vinod tweeting about o-open source anymore. Obviously, OpenAI has changed their tune, especially right now. What, um... Is it really just DeepSeek? Is, is that... Or, or how do you trace kind of how, how the sentiment shifted on open source?
- Martin Casado
Let's, let's go through a few theories. I'm not really sure what happened. I almost felt like it was almost culturally in vogue to be a thought leader on the negative externalities of tech, and it kinda started with Bostrom, but it was picked up by Elon, it was picked up by, um, uh, Moskovitz. I mean, a bunch of like these intellectuals that like we all respect and still do. I mean, they've, they're just really the titans of our industry and our era. They were asking these very interesting intellectual ex- uh, questions around like, do we live in a simulation? What happens if AI can recursively, uh, self-improve? And then actually, you know, they created whole kind of cultures and online social discourse around this stuff. And so I think to no small part, that became a bit of a runaway train, and it's just catnip to policymakers.
- Anjney Midha
Yeah.
- Martin Casado
You know, and so I, I think part of it is like people didn't really realize [chuckles] that this-
- Anjney Midha
Yeah
- Martin Casado
... had become so real because, of course, GPT-2 comes out and then 3 comes out and like all this stuff's amazing, and somehow it got conflated. So I, I think part of it is just a path dependency on, on where we came from, which is kind of the legacy of Bostrom. I think that was part of it.
- Anjney Midha
I think the ungenerous approach would be, would be that there was a lot of... Discourse is awesome, but a lot of the people pushing the discourse were first-order thinkers. They weren't doing the math on, wait, wait a minute, if policymakers who have no, um, background in frontier AI, which by the way nobody does 'cause this space is only three, four years old, start to bu- take discourse as canon, which is a big difference, then what happens? What are the second and third-order effects? And the second and third-order effects are that you start making laws that are really hard to undo and, and start taking interesting thought experiments as the basis for policy. And once that happens, those of us who've... Look, law, law is basically code. Code is, code is hard to refactor. Law is like impossible to refactor. [chuckles] And so I think the second and third or- third-order effect was that you had a lot of well-intentioned folks, for example, in the existential risk community saying, "Look, if you're intellectually honest about the rate of progress of AI, it's not crazy to say that there are some existential risks in the technology. It's non-zero." Sure. Yes, that is true. But to then say that that threshold is high enough to start introducing nash- sweeping changes in regulation to the way we create technology, that leap, I don't think a lot of the early proponents of that discourse realized they were making. In fact, I think Jack Clark, who runs policy for Anthropic, literally tweeted towards the end of the SB 1047 saga, he was like, "I guess we, we should have-- We didn't realize the impact of how far this could have gone." And, and I think to those of us who had interacted with DC before and regulation before, it-- Like the second and third o- third-order effects were much more discernible or legible. And then I think what DeepSeek did was just made it super legible to everybody else. So I think-
- Erik Torenberg
Yeah
- Anjney Midha
... they were already... Like I think DeepSeek was the catalyst.
- Erik Torenberg
Right.
- Anjney Midha
But it, it wasn't like there was a step... It, it didn't change the reality that the second and third-order effects of policymakers confusing sort of like discourse for fact-
- Erik Torenberg
Yeah
- 18:55 – 21:18
Changing Sentiments: From Caution to Pragmatism
- Erik Torenberg
M- Mark had this sort of, the Baptists and bootleggers-
- Martin Casado
Yes, I, I was just gonna say. Exactly. Yeah
- Erik Torenberg
... true believers and then, um, sort of people who u- use the sort of that thinking for, to support their own ends. And it, and it seems like that's changed even just on the company-
- Martin Casado
But, but the re- but the reality is I think it was driv- I, I think the majority of people are, are neither.
- Erik Torenberg
Yeah.
- Martin Casado
The mar- the majority of people are pragmatists-
- Erik Torenberg
Right
- Martin Casado
... that are not trying to take advantage of the system, that think, "Well, maybe if we have this discourse, it's an honest discourse, and then we'll self-police." And then I just feel like the silent majority was not part of the discussion. Maybe the biggest change now is, like, those people are there. Like, the founders are there. Academia is there. VCs are there. Now, now the people that are not either Baptists or bootleggers are driving the discussion, which, actually, in- independent of the action plan itself, makes me feel like we're in a much better position now. Like for example, there's still a bunch of stupid regulation that's popping up, but I'm not calling Anj at night and thinking, "We have to do something now," 'cause I feel like, okay, there's actually representation-
- Erik Torenberg
Yeah, right
- Martin Casado
... that's sensible, where at the time there was none.
- Erik Torenberg
Right.
- Martin Casado
And I, I think to move, you know, to the, to the action plan, I think this is a great... Like, if you read the first page, right, what a marked shift: the fact that the co-authors include technologists.
- Erik Torenberg
Yeah.
- Martin Casado
Right? It's... And, and I think that was the core problem is DC is a system, like a self-contained system, and the valley is a self-contained system, and I think a lot of the people here were assuming best intentions over here and vice versa.
- Erik Torenberg
Yeah.
- Martin Casado
And what happened is a few bad actors essentially used that arbitrage opportunity to represent Silicon Valley's views incorrectly in DC. And when we saw some of the legislation, we had policymakers calling us up and saying, "Wait, you guys aren't happy with 1047? But, uh, the guys, you're, the other tech people were calling us and saying you'd love more of this kind of regulation." We said, "What other tech people?" And it turns out we are not one homogenous group. Little Tech is extraordinarily different from Big Tech, which is extraordinarily different from the academic communities. And I, I think one of the things we had to contend with was, like, we used to be one shared culture.
- Erik Torenberg
Right.
- Martin Casado
And then when tech grew, we actually... th- there are some major differences in, in the Valley, at least between parties... We, we're not one tech ecosystem anymore.
- Erik Torenberg
Yeah.
- Martin Casado
We have different interests. And DC hadn't updated that. And, and I think what's amazing about the action plan is it's written by people who have bridged both-
- Erik Torenberg
Yeah
- Martin Casado
... with enough representation across, like, the four or five different subcultures within tech who have different interests.
- Erik Torenberg
Great.
- Martin Casado
I think that's new.
- Erik Torenberg
Yeah, yeah. Going back to, to op- open source, why don't you talk a little bit about just sort of the how different companies...
- 21:18 – 28:45
Open Source as Business Strategy
- Erik Torenberg
Uh, help us make sense of how different companies have, have, have thought about it or from a sort of, uh, business strategy perspective. You know, I mean, we saw Meta with maybe the first big o- o- open source push. Um, you know, OpenAI has sort of evolved there too.
- Martin Casado
Right.
- Erik Torenberg
And I, I've seen even Anthropic seems to, being involved in their dialogue, um, a, a, a little bit. Um, how should we think about open source as a, as a, as a business strategy in terms of what, what, what's changed here and, and why?
- Anjney Midha
Oh, look, I, I don't think this is... This part is actually, like, is, is playing out beautifully along the same trend lines of all pr- p- previous computing infrastructure, databases, analytics, operating systems. Like Linux, the, the way it works is the closed source pioneers the frontier of capabilities, it introduces new use cases, and then the enterprises never know how to consume that technology, and when they f- do figure out eventually that they want cheaper, faster, more control, they need somebody like a Red Hat to then introduce them and, and provide solutions and services and packaging and forward deployed engineering and all of that around it. Which is why the arc generally in enterprise infrastructure has been closed source wins applications and open source tends to do really well in infrastructure, especially in large government customers, regulated industries where there's a bunch of security requirements, things need to run on-prem, the cu- the customer needs con- total control over it. Broadly, you could call that the sovereign AI market right now. Lots of governments and lots of in- legacy industries are going, "Wait, this open source thing is really critical to us." So I think whereas two, three years ago it was v- open source was viewed as, like, this, like, largely philosophical endeavor, which it is. Open source has always been political and philosophical by definition, but now there's an extraordinary business case for it, which is why I think you're seeing a lot of startups and companies also changing their posture 'cause they're going, "Wait a minute, some of the largest customers in the world, enterprise customers happen to be governments, and happen to be legacy industries, and Fortune 50 companies, and they want stuff on-prem," and that's when you go adopt open source. So I think there's been a business shift as well. I don't know if you'd agree.
- Martin Casado
Yeah, this is great. I, I, so I totally agree. 
I, I do think it's interesting to have the conversation where it's the same and where it's different. Like, everything Anj said is exactly right, which is we have a very long history with open source, and it's a very useful tool for businesses, but also for research and academia, et cetera. But let's just talk about businesses and startups, right? It's a great way to get a distribution advantage. It's a great way to enter a market where you're not an incumbent and you're a startup. So it's just kind of one of the tools for building in software that's been used, and open source has been used in, in a very similar way, right? I mean, you can use it for recruiting, you can use it for brand, you can use it to get distribution, and we see all of that. But there's something that's unique about AI that software doesn't have, and, like, we're seeing very viable business models come out of it that don't have the limitations of traditional software. And, and this is for two reasons. One of them is, like, open weights is not the ability to produce the weights.
- Erik Torenberg
Right.
- Martin Casado
And open software is the ability to produce the software. Like, if, if you give me open software, I can compile it, I can modify it, whatever. But giving open weights, you don't have that. You don't have the data pipeline, you know, when you're talking about open weights. So you don't actually enable your competitors in the same way open software enables it. So that's one. The second one is this very nice business model that's kind of a peace dividend to the rest of the industry, which is you, you produce open weights for your smaller models that anybody can use, but the larger model you keep internally, which is, uh, actually also more difficult to operationalize for inference, right? I mean, there's kinda good reasons to do this. Um, and then you charge for the largest model, and then, you know, the, the, the smaller open models you use for brand or distribution-
- Anjney Midha
Right
- Martin Casado
... or whatever. And so I, I feel like it's actually almost an evolved from a, a, from a business strategy and an industry perspective version of open source-
- Anjney Midha
Yeah
- Martin Casado
... for these reasons.
- Anjney Midha
I think it's the AI flavor of open core, which historically was supposed to be a theoretically sort of sustainable model for open source software development, which, which was really hard to implement 'cause of the reasons Martin said, where once you gave away the code, it was really hard for you to protect your IP. But with weights, you can contribute something to the research community, you can give developers control, you can allow the world to red team it and make it more secure, while you're still able to, because of the way distillation works and some of the ways like post-training works, actually hold onto some of the core IP, which then allows you to build a viable, sustainable business. And that is unique about open, open source.
- Martin Casado
But also you have the data pipelines, you have-
- Anjney Midha
Right
- Martin Casado
... the data. Like, I mean, nobody else could r- just 'cause I give you the we- weights doesn't mean you can recreate-
- Anjney Midha
Right
- Martin Casado
... the model. Like, you could distill it to a subset model. There's a bunch of stuff you can do, but not necessarily recreate it. And so, listen, ha- having been kind of a student of open source business models for 20 years and having watched, you know, it, it shape the way that the, the industry has adopted and built software, I actually think that the, the AI one is, is more beneficial s- to the companies doing it for sure. Um, but as a result of that, we're gonna continue to see a lot of it.
- Anjney Midha
Yeah.
- Martin Casado
And so I think we should just kind of assume that open source is part of it, and every country is gonna do it. And one of the best things about this current AI action plan is it acknowledges that, and it wants to incent the United States to be the leader in it-
- Anjney Midha
Yeah
- Martin Casado
... which is such a dramatic shift from where we were this time last year.
- Anjney Midha
Yeah. There, there's sort of an ecosystem mindset that, if you've worked in any kind of developer business, which Martin and I unfortunately have spent, you know, way too long doing, you know, working on dev infrastructure and dev tools, you, you s- sort of internalize, this idea that you often have to sort of trade off short-term revenue for long-term ecosystem value, right? And I think what this, the action plan shows is that ye- yes, in the short term, it may seem like we're giving away IP to the rest of the world by open sourcing weights and showing the rest of the world how to create reasoning models and all of this stuff. But in the long term, if every other major nation is running their entire AI ecosystem on the back of American chips and American models and American post-training pipelines and American RL techniques, then that ecosystem win is orders of magnitude more valuable than any short term sort of give of IP, which anyway, as we, as we saw with DeepSeek, that, that marginal head start is, is, is minimal.
- Erik Torenberg
Okay, so just to close the loop on open source, o- over the next several years, h- how do you predict o- open source and closed source will, will intersect? Like, what will the industry look like?
- AMAnjney Midha
Yeah. Well, I think these are two different markets.
- ETErik Torenberg
Yeah.
- AMAnjney Midha
I mean, like l- literally the requirements of the customers are completely different, right? So if you're a developer, you're building an application, and you happen to need the latest and greatest frontier capabilities today, you have a different set of requirements than if you're a nation state deploying, like, a chat companion for your entire employee base of, like, 7,000 government employees, and you need... A- a- and the product requirements, the shape of how you provide those, do you deploy them, the infra, the service, the support, and then the revenue models are completely different. And so often I think people don't realize that closed source and open source are not just differences in technology, but completely different markets altogether. They serve different types of customers. And so I, and I, and I think if you believe AI is this sort of explosive new platform shift, then there'll be winners in both. I do think what we need to contend with is that it seems like it's getting harder and harder to be a category leader if you don't enter fast. Like, the, the speed at which a new startup is able to enter the open source or the closed source market and create a lead is absurd, right? We're both, we both have, have the chance to work with founders who are, I mean, literally a, you know, 20-something-year-olds out of college, two years out of college, building revenue run rate businesses in the tens to hundreds of millions of dollars serving both, both of these markets expanding like this. And so I, I think the, the biggest mistake is to confuse these two markets as one-
- ETErik Torenberg
Yeah
- AMAnjney Midha
... and to do the classic like, "Oh, let's wait to see how they evolve," because the, the pace at which a new entrant is able to actually create a, a lead in the category is, is quite stunning.
- 28:45 – 32:41
The AI Action Plan: Reflections & Critique
- ETErik Torenberg
Let's go into the action plan. Um.
- AMAnjney Midha
Right.
- ETErik Torenberg
What, what are our biggest r- r- reflections from it? W- where are we most excited?
- AMAnjney Midha
If you look at the quote that they start with, um, I wanted to read it out 'cause I thought it was pretty poignant. It was, "Today, a new frontier of scientific discovery lies before us." And I thought that first opening line was fantastic out of all the things they could have said. You know, w- they could have said, "We're in a nuclear... We, we're in an arms race," which, which sure, the first page, the, the-
- ETErik Torenberg
Yeah
- AMAnjney Midha
... title says, "Winning the Race." But if you actually start reading the document, the first sentence is a, is a quote from the president that says, "Today, a new frontier of scientific discovery lies before us." And I, I love that they led with something inspirational.
- ETErik Torenberg
Yeah.
- AMAnjney Midha
Because ultimately, the technology has to confer some benefits on humanity. And I, and I personally, I just love the i- the fact that we are just starting to explore what these frontier models mean for scientific discovery in physics, in chemistry, in material science, and we need to inspire the next generation to wanna go into those areas 'cause it's hard.
- ETErik Torenberg
Yeah.
- AMAnjney Midha
It's really hard to do AI in the physical world. You have to literally hook up wet labs and start doing experiments in an entirely new way, and you need people who are excited not only about wanting to do machine learning work, but also the hard work of, of, of, of being lab technicians and running experiments and chem-- Like, literally pipetting new materials and, and chemistry, right? And that, um, I, I think was missing, uh, in a lot of the discourse under the previous administration. So I was, I was... You can sometimes judge a book by its cover. [chuckles]
- ETErik Torenberg
[chuckles]
- AMAnjney Midha
And I think this was a strong start. And now I think we should, we should actually, like, dive into some of the bullets.
- ETErik Torenberg
Yeah.
- MCMartin Casado
Okay. So the other one that I, I thought was a huge omission is there's basically no real mention of investing in academia. Like there's, you know, some oblique references to it, but it's just been such a mainstay of innovation, computer science [chuckles] -
- AMAnjney Midha
Right
- MCMartin Casado
... of the last 40 years. Not having a, a major part of it, I think it's a shame. And I understand that right now there's kind of a standoff between higher ed and, and, and the administration. I get it, and I, uh, and I actually think that both sides actually have fairly reasonable points. Um, but, you know, to have a major tech initiative without including academia just feels like we're, you know, what is it? Fighting a battle with a hand behi- tied behind our back, like some aphorism. So...
- AMAnjney Midha
Th- this is a good problem to have, which is that I think it's ext- extremely ambitious. It's a little bit light on execution details, right? Which is what happens next? Um, so a good, a good example of that is I think I did-- I, I do think directionally it w- it was great that they said, "We need the..." L- Let's read this bullet point, um, on build an e- AI evaluations ecosystem. I love that because, um, it acknowledges that, hey, before we start actually passing grand proclamations about whether these models are risky, uh, or, or whether these models are dangerous or not, let's first even agree on how to measure the risk in these models before jumping the gun. Um, that, that part, I think in addition to the open source bullet, was probably, I thought, the most sophisticated thinking I've seen in any policy document. And look, the reality is America leads the way, and so every other... W- You know, within 24 hours of, of this dropping, Martin and I were getting texts and messages from folks in many other governments around the world going, "What do you guys think?" And I, and I-- It, it was not hard for me to endorse it and tell them, like, "Look at it as a reference document because there are things here that, um, arguably are more sophisticated than policy, um, experts even in, in Silicon Valley would recommend." Because building an AI evaluations ecosystem is not easy, and they, I think, lay out a pretty thoughtful proposal on, on, on the fact that that's important. Now the question is how? And I, I think that's what we have to help DC with, the hard work of, like, implementing this stuff. Um, but the vibe shift going from let's not jump the gun on saying these models are dangerous, let's first talk about building a, a scientifically grounded framework on how to assess the risk in these models, to me was, um, was not at all a given, and I, I was really excited about
- 32:41 – 41:49
Alignment, Marginal Risk, and the Future
- AMAnjney Midha
that.
- MCMartin Casado
Yeah.
- ETErik Torenberg
There's been a lot of focus in the last few years by, by several companies, but also by, by the broader industry around this idea of alignment. Um, have we made any progress on alignment? Or h- h- what is your sort of perspective of, of what are they trying to do? Is that a feasible goal? Um, he- help us understand what, what they're trying to solve for.
- MCMartin Casado
So at an almost tautological level, alignment's an obvious thing you'd wanna do. I have a purpose. I want to align the AI to this purpose, and it turns out these models are problematic, generally unruly, chaotic, whatever, uh, adjective you wanna use. And so like, you know, understanding how to better align them to any sort of stated goal is, is very obviously a good thing. And so I think we'd all agree that alignment to whatever the goal is to make it more effective that, that goal and do that thing is good, especially given these models who ha- tend to have a mind of their own. The subtext, um, that certainly I bristle to is that, is that the people doing the alignment are somehow protecting the rest of us from whatever they think their ideal is as far as, you know, dangers to me or thoughts I shouldn't have or information I shouldn't be exposed to. Um, which is why I think we need to be, even when we come up with policy, we need to be very careful not to impose like a different set of, um, you know, ideological rules on top of these. I, I, I just, I just think, like, alignment is something we should all understand. Actually aligning them [chuckles] to me is, is, is kind of where I take issue from any sort of, kind of top-down mandate.
- AMAnjney Midha
Um, I, I, I agree, and I think, you know, there's a, there's a quote from a researcher, um, who-- which, which I think is very a- accurate, which is you, you gotta think about these AI systems as, as almost biological systems that are grown, not coded up, right? Because sure, they express as software, but in many ways when you're training a model, it i- you are actually growing in, in this environment of, of a bunch of prior history and da- training data, et cetera. And often empirically, you actually don't know what the capabilities of the model are until it's actually done training. So I think that's a useful analogy. Where I think that falls down is when people go, "Oh, well, if it's, if we can't align it because we actually don't know, it, it's a biological mechanism. Until it's grown up, you don't know what its risks are," and so on, then we can't deploy these AI models in mission-critical places until we've solved, let's say, the black box problem, the mechanistic interpretability problem, which is can you trace deterministically why a model did something? We've made a lot of advances as a space in the last few years, but it still remains a research problem. But that doesn't mean just 'cause you, you don't understand the true mechanism of the system doesn't mean you don't unlock its useful value. If you look at most general purpose technologies in history, electricity, n- nuclear fusion, like we-- there, there are many examples of technologies where we knew they were complex systems, and we didn't truly understand at an at- atomistic level or mechanistic level how they work, but we still use them. [chuckles]
- MCMartin Casado
And we don't, we don't understand how the internet works. I mean, like, there's a whole research of network measurements trying to find out what the heck the internet was gonna do. Is it gonna have congestion collapse? I mean, like, you know, any complex system has states that you just don't understand. Now, let's now say these models more so than many-
- AMAnjney Midha
Right
- MCMartin Casado
... and the applications are very real. But like we, we know how to deal with ambiguity and kind of-
- AMAnjney Midha
Right. We don't even know how our brains work.
- MCMartin Casado
No, we think- [laughing]
- ETErik Torenberg
Our consciousness.
- AMAnjney Midha
Yeah.
- MCMartin Casado
That's right.
- AMAnjney Midha
And but we, we don't stop working with other human beings.
- MCMartin Casado
Unfortunately, we're stuck with them.
- AMAnjney Midha
Yeah. [laughs]
- MCMartin Casado
So like we, we have no option on that one. [laughs]
- ETErik Torenberg
Yeah. Totally. Um-
- AMAnjney Midha
I mean, I think to extend that analogy, what do you do? You, you're like, "Okay, I don't know how a brain works. It's got a bunch of risks. This person may be crazy, but I still want to unlock all the beautiful benefits of, of the big, beautiful brains that humans have."
- ETErik Torenberg
[laughs]
- AMAnjney Midha
And so you develop education. You, you send kids to school-
- ETErik Torenberg
Yeah
- AMAnjney Midha
... and you teach them values, and then you send them off to college, and then they get to learn something specific, and then you get to test them in the real world environment. They get a resume, and they get work experience, and they get to prove that they actually are within a risk-based framework, manageable and so on.
- ETErik Torenberg
Yeah.
- AMAnjney Midha
And that, and that as a society has unlocked human capital, right?
- ETErik Torenberg
Right.
- AMAnjney Midha
Like the great, arguably the greatest technology we've had in, you know, uh, five hundred years of modern industrial innovation. So I, I think what, what I hate about the alignment discourse is it sometimes confuses the, the fact that we don't understand the system for the fact that then we can't use it. And I think, uh, we-- I don't think we've, we've... Like, for a long time, I think mechanistic interpretability, which is kinda like-
- ETErik Torenberg
Yeah
- AMAnjney Midha
... some folks would say is the holy grail, is like being able to reverse engineer why a model does something, is still a research problem, but that doesn't mean we haven't made progress on how to use unaligned models or to improve alignment to a point where they're useful in massive ways like software engineering-
- ETErik Torenberg
Right
Episode duration: 41:58
Transcript of episode L5za2B9p448