The Twenty Minute VC

The Early Days of Anthropic & How 21 of 22 VCs Rejected It | The Four Bottlenecks in AI | Anj Midha

Anjney Midha is the founder of AMP and a founding investor in Anthropic. Most recently, Anj was a General Partner at Andreessen Horowitz, leading frontier AI investments. He serves on the boards of Mistral, Black Forest Labs, Sesame, LMArena, OpenRouter, Luma AI, and Periodic Labs, and is an early angel in ElevenLabs, among others. Prior to that, Anj was the co-founder/CEO of Ubiquity6 (acquired by Discord) and a partner at Kleiner Perkins.

-----------------------------------------------

Timestamps:
00:00 Intro
01:25 Are Scaling Laws Dead?
02:55 The Four Bottlenecks Holding AI Back
07:36 Why AI for Science Sucked
09:36 Sovereign Data & the Cloud Act
13:31 The Investment Thesis Behind Mistral
14:27 The Brutal Early Days of Anthropic
20:52 Public Benefit Corporations: Mission vs Profit in the Age of AI
23:06 The AMP Grid: Building the Electricity Grid for Compute
25:21 Co-Founding Companies Like Kleiner Used to
35:30 We're in a GPU Wastage Bubble, Not an AI Bubble
37:49 Why Compute Isn't Fungible
42:16 How China Is Winning the AI Race
45:15 Coordinating Defense Against AI Distillation Attacks
49:07 Perfect Competition Is for Losers
01:01:43 Quick-Fire Round

-----------------------------------------------

Subscribe on Spotify: https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465
Follow Harry Stebbings on X: https://twitter.com/HarryStebbings
Follow Anjney Midha on X: https://twitter.com/AnjneyMidha
Follow 20VC on Instagram: https://www.instagram.com/20vchq
Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok
Visit our Website: https://www.20vc.com
Subscribe to our Newsletter: https://www.thetwentyminutevc.com/contact

-----------------------------------------------

#20vc #harrystebbings #anjneymidha #founder #investor #vc #anthropic #mistral #ai #amp

Anjney Midha (guest) · Harry Stebbings (host)
Apr 14, 2026 · 1h 15m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–1:25

    Intro

    1. AM

      AI alignment, don't get me wrong, is hard, but not the hardest problem. Human alignment is really the problem right now.

    2. HS

      Our guest today is the most prominent AI investor in the ecosystem, Anj Midha. Why is he the most prominent? Three reasons. Number one, he's one of the founding investors of Anthropic. Number two, he led AI investments for Andreessen Horowitz, where he made investments in Black Forest Labs, Mistral, Sesame, among others. And then third and finally, today he's the founder of AMP, where he provides compute and invests in the world's best AI companies.

    3. AM

      If we don't secure frontier model inference or what I call state-of-the-art inference behind a coordinated Iron Dome, I don't think we have a sustainable shot at staying at the frontier over the next decade. There's no saturation in superconductor discovery at all.

    4. HS

      Ready to go? [rock music] Anj, I am so looking forward to this, dude. I have stalked the shit out of you for the last three or four days. I spoke to Bing Gordon. I had a catch-up with Bing before this. Very nice to speak to him. Uh, so thank you so much for joining me today, dude.

    5. AM

      Thanks, thanks for having me. It's been too long. It only took us, what, eight years, nine years? I forget when it was. [chuckles]

    6. HS

      I was twelve when we last did it. [laughing]

    7. AM

      [laughing] Well, twelve in startup land is twenty-five, right? So-

    8. HS

      Dude, I'm confused. Help me

  2. 1:25–2:55

    Are Scaling Laws Dead?

    1. HS

      out. I had Demis on the show the other day from DeepMind. He was like, "Yeah, I'm not sure if we're seeing scaling laws, but we are definitely seeing slightly diminishing, like, return on performance as we scale." So potentially, are we getting to a stage where increased compute is no longer leading to increased performance?

    2. AM

      Oh, no, absolutely not. [chuckles] No, that's not true at all. In certain domains that are well explored, like coding, for example, yes, there's an increasing amount of compute required to get an incremental gain on some eval that's super-saturated. But if you said, "Anj, what about material science?" You know, I'm sitting here at the Periodic Labs office. My latest incubation is called Periodic Labs. I spend three days a week here in Menlo Park. We have a thirty-thousand-square-foot facility where we have LLMs that predict new materials, new superconductors. We then have robots synthesize those new materials, and then we have physical machines like X-ray diffraction machines validate whether those materials have the properties that were predicted by the LLMs, and then we pipe that verification data back into our training run, however many times we need. And I can tell you, throwing more compute at the problem is probably having, yeah, super-exponential gains right now per iteration. So it depends on which domain you're talking about, which modality. There's no saturation in superconductor discovery, for example, at all. The bitter lesson is alive and well.

  3. 2:55–7:36

    The Four Bottlenecks Holding AI Back

    1. HS

      I totally get that. [chuckles] Can I ask you, when we look at the bottlenecks around performance and progression today, which bottlenecks persist most significantly to you? Is it algorithms? Is it data? Is it compute? Can you help me understand which is most lagging?

    2. AM

      So there's four or five. It's context feedback, which I'm happy to talk about. It's compute. There's capital, which you need to, you know, continuously deploy the compute and context feedback loops. And then there's culture. And I think that culture actually might be the most important [chuckles] bottleneck of all time. But those are the four, I would say. Now, look, algorithmic innovation, I think, is a function of culture, basically. Because if you have the right culture, you get to attract the best researchers, and the best research talent then wants to work on pushing the frontier. And algorithmic innovation just falls out of having a really good team that's very flexible on what kind of architecture they wanna use. If you have the right culture, the algorithmic innovation bottleneck solves itself. Because then the researchers are not tied to one architecture versus another. They're not going, "I'm all in on transformers versus diffusion models." The best scientists and researchers just wanna solve the problem, the mission. And if you have a very mission-driven culture where they're like, "We want to move the frontier of coding or the frontier of material science," the algorithmic stuff takes care of itself. So that's actually not the bottleneck anymore, in my view. Two, three years ago, that was a huge bottleneck, where we were trying to figure out which algorithms scale. Are there some limits to the transformer architecture versus diffusion models? And what I've come to realize is, if you solve the culture problem, you can solve the research and the algorithmic problem. Then the bottleneck of context feedback, which is the data you need to keep doing frontier research over and over again, is step number one, because actually I think that is also where you have the most business and commercial advantage.
      I think there's lots of alpha and, uh, value to be gained in pre-training, mid-training, and so on. But you know, that last mile where you deploy a model or an agent in some new domain, and then you collect feedback on how it's performing in real time, and then you-- Like I was saying, here we do physical verification of material science at Periodic. Wherever there are some unique context feedback loops that are missing today, that's where you probably have the biggest bottlenecks on capabilities. And so what you should be doing if you're trying to advance the frontier is going, "Okay, you know, these models suck," for example. About a year ago, as an example, I realized there was a lot of talk about models being good at physics and chemistry, AI for science. And I was a visiting scientist at the Applied Physics Department at Stanford, and we started benchmarking these models, you know, Claude, Gemini, and so on. And surprise, they sucked. They were so bad. I was like, there's this disconnect between the marketing hype of AI for science and the reality, where these models are terrible, at the time at least. They were starting to get good at code, but they were terrible at scientific analysis. And [sighs] you know, the conclusion was pretty simple. They were just missing a lot of the physics and chemistry data you need to reason about the physical world. But we don't have enough of that data on the internet, because the internet is mostly pre-training data about things like blogs and blah, blah, blah, and coding. But if you need physics and science, that's a real bottleneck, because that data is locked up in national labs and academic labs, it's locked up in physical, uh, you know, semiconductor manufacturing plants. How do you get that data in?
      That was the bottleneck I realized was really the critical part of getting these models to reason about the physics and science frontier, which is something I care about deeply. And so the way we solved that at Periodic was, you know, to set up a physical lab with robots doing all that. You could apply that same recipe to whatever domain where you wanna see more and more progress. Then you ask, okay, how much compute and infrastructure do you need to keep that RL loop or the physical verification loop scaling at bigger and bigger scale? And then you need the capital to fund all this. You need equity, debt, a whole bunch of different structured finance vehicles to get, you know, land, power, shell. So that's the compute bottleneck. And then lastly is the culture, because if you have all of those three things, but you don't have the right team and the right mission-driven culture, the whole thing falls apart. And so those, in my mind, are the four bottlenecks I wake up, you know, every day trying to figure out how we unblock for

  4. 7:36–9:36

    Why AI for Science Sucked

    1. AM

      the best teams.

    2. HS

      If we just go through them, when we look at that context feedback on the data side-

    3. AM

      Mm-hmm

    4. HS

      ... will we see then a generation of vertically integrated foundation model-

    5. AM

      Yes

    6. HS

      ... companies like Periodic for a ton-

    7. AM

      Yes

    8. HS

      ... of different things? Yeah.

    9. AM

      Yeah. You know, uh, when I went to grad school for machine learning, I went to Stanford for bioinformatics, which was machine learning applied to healthcare. The space was not as good at marketing as it is today, so superintelligence, love it, you know? At the end of the day, what are we talking about? We're talking about very powerful models within some domain, and we are seeing the sort of within-distribution, very, very powerful capabilities that are... You could definitely call them superhuman, because there's no way, for example, I, as an individual scientist, could analyze the reams and reams of data coming out of the lab here without AI models. There's just no chance. And so the fact that you can take all of the data from a physical lab and just throw it at a bunch of AI models and ask them to analyze things is a superhuman capability. We didn't have that before. Okay, fine. So let's call that superintelligence. Within coding, within material science, within each of these domain distributions, we are seeing capabilities that are superhuman. We didn't have them before. And, in fact, I would say we're even starting to see automation of those tasks, especially where there's coding involved, starting to be somewhat recursive, right? Where if you have a good coding model, then you can say, "Okay, let me automate data analysis, let me automate data cleaning, and so on." Some people would call that recursive self-improvement. Totally happening. But it's not like I can just say to a coding model, "Please bootstrap a physical R&D lab for me in Menlo Park, get all the permitting, go find Anj to raise money from, go set up the physical infrastructure, and just bootstrap all this data." That's just an entirely different kind of frontier and

  5. 9:36–13:31

    Sovereign Data & the Cloud Act

    1. AM

      execution and sort of problem.

    2. HS

      My question to you then is like, how do I determine what is not going to get Claudified in that vertical model company build-out?

    3. AM

      Yeah.

    4. HS

      Because you could look at a Cursor and say, "Well, they've built their own vertical model end to end-"

    5. AM

      Yeah

    6. HS

      "... and it's been Claudified," if we're being blunt. Periodic won't be because of the physical data that's being produced in the labs.

    7. AM

      Right.

    8. HS

      How do I know what will be Claudified versus won't in that model there?

    9. AM

      Yeah, this is a good question. Okay. If we want to unlock frontier progress generally across a bunch of domains, then where are the bottlenecks, and where will the value accrue? Context is not necessarily the moat yet, I would say. I think venture capitalists are very quick to analyze moats, but I would say context feedback loops, where you have unique and differentiated access, is where progress will be most legible to you. And if there are other teams who don't have access to that context, it'll also be where you have a superior business model. And so here's an example I give in class, right? Sovereign data. Are you familiar with the CLOUD Act?

    10. HS

      Yeah.

    11. AM

      Yeah. Okay. So, you know, the US CLOUD Act says that, hey, if there are any data workloads, cloud workloads, running on infrastructure that is managed by an American company, then the US government has to be able to access that data. Now, if you happen to be running military defense mission-critical workloads in Europe on AI infrastructure that is managed by an American company, well, that context, which is super critical, can't be sent across the border. That's an example of a unique and sensitive context that needs to be run locally. And so if you're ASML, or you're CMA CGM doing logistics at scale, and some of that logistics is with mission-critical supplies, you can't have your supply chain data being processed by an AI bot that's running on servers that are subject to the CLOUD Act. So what do you do? You look for local infrastructure partners. You start going, "Hey, who are the AI infrastructure providers in Europe that we trust?" Well, it turns out there aren't that many who can actually handle mission-critical infrastructure at scale for AI. So you call up someone called Arthur Mensch, who is a French scientist from DeepMind turned entrepreneur who started a lab called Mistral, who is running massive workloads, and you say, "Arthur, would you actually build infrastructure that can be secure locally?" And that's why suddenly in July of twenty twenty-five, at VivaTech in Paris, you have President Macron and Jensen standing on stage next to Arthur, a thirty-three-year-old scientist, unveiling a gigawatt AI infrastructure facility in Paris. Why? Because the mission-critical context of those workloads is so important to run locally that you can't run them on AWS, GCP, or Azure, and it's the first time in fifteen years that the sort of hyperscaler dominance is, um, up for grabs for startups.

    12. HS

      With the greatest of respect, is that the core investment thesis of Mistral for you?

    13. AM

      For me? Yeah. Independence at scale at every part of the AI infrastructure stack: land, power, shell in Europe that's sovereign and local; compute infrastructure that's local; and models that are trained locally. By the way, fully open, so they can be deployed and customized globally wherever needed. But certainly in Europe, like, the full independent stack is the bet. Yeah.

  6. 13:31–14:27

    The Investment Thesis Behind Mistral

    1. HS

      Do Anthropic and OpenAI just accept that and roll over? I don't understand, 'cause government is a mega portion of their efforts and workload today, and like both of them, when I speak to them, are like, "Oh, we're absolutely coming for Europe." So how do they get around that?

    2. AM

      Well, I can't speak for OpenAI too much, uh, 'cause I'm not involved there directly, but Anthropic, I will say, you know, the mission and vision has always been very, um... I think it's always been very American-aligned, right? They've always said, "Hey, America is the crown jewel of the world in terms of innovation." This is where we're located. Again, Anthropic is located in Silicon Valley, and I think the company really, really wants to do what's best for the American government and the American way of life, which is democracy and freedom. It turns out the world's largest enterprise customers are governments and Fortune five hundred companies, and many of those that are overseas need these workloads to be running locally.

  7. 14:27–20:52

    The Brutal Early Days of Anthropic

    1. HS

      You talked about obviously being involved with Anthropic since the earliest of days. I'm just fascinated. I think people kind of forget about their early days almost. People talk about, like, oh, SBF investing early and what a visionary he was.

    2. AM

      Right.

    3. HS

      What was, what was Anthropic and Dario like in the early days?

    4. AM

      Well, so I've known Tom forever. Uh, Tom, you know, was one of the lead authors on GPT-3. We'd been friends for many years. Tom gave me a call and said, "Anj, you know, we, for various reasons, wanna leave and start this new lab called Anthropic. We're gonna need a lot of capital. We're gonna need compute." I had already sold Ubiquity6 at that point, so I'd kind of gone through the founder journey. And so Dario, Tom, and I started doing these weekly sessions in early twenty twenty-one to try to figure out how to turn what was really a research hypothesis, the scaling recipe, into a business hypothesis. And look, I would say it took really twelve to twenty-four months, and they did a lot of the hard work on figuring out how do we really operationalize the idea of this AI pair programmer, right? Where you take the context feedback loop of the local repository, the files, the directories of programming, and in a very methodical way, make predictable progress on the capabilities of software engineering. And you know, if anything, my biggest flaw as an investor, as a founder, is being too early to things. That was my lesson with Ubiquity6. I was early to the whole computer vision space, which is now, you know, obviously blowing up as the whole multimodal generative modeling space. And since then I have, I think, updated my strategy on how to get timing right. But at the time, you know, the recipe was pretty simple, right? Raise some money, buy some compute, get a little bit of context data on programming, put out a basic version of the model, deploy it with teams that we trust who are doing coding, and then pipe that feedback loop back into the training run over and over again.
      And when you do that with inference, inference gives you two things, right? It gives you revenue to buy more compute, and it gives you the context feedback to keep improving the capabilities curve. And I was like, "Great. This makes total sense, guys. Let's go raise money." I invested a bunch of my own money, which was just life savings, which was not much given I was a poor founder at the time whose net worth was mostly tied up in Discord stock. And it pains me sometimes to look back at the emails of friends. So I introduced them to twenty-two, you know, friends up and down Sand Hill Road, and there were some investors there, and we got twenty-one nos, right? And I was like, "What are you guys thinking?" And they said, "Well, Anj, this recipe sounds good in theory, but like, where's the proof?" And I said, "Proof? These are the guys who invented GPT-3. How much more proof do you want?" And they said, "What's GPT-3?" I was like, "Oh my God," like, how do you go about educating somebody who doesn't even understand the technology and the breakthroughs that are happening in the machine learning community? Now, I was lucky 'cause I had that training from grad school. I'd started a computer vision company. So something that was super legible to me was just a completely different world for those investors we were pitching. Remember, we originally tried to go out and raise five hundred million, and then had to reanchor to only raising a hundred-million-dollar seed round, which at the time felt like a lot, but of course was tiny compared to how much OpenAI had raised, 'cause by then I think OpenAI had already raised a billion dollars.
      And so the whole idea of compute multipliers, where for every dollar of venture capital raised we could produce a unit of intelligence for six times less, was not legible. The VCs did not understand it, which is why, you know, over the next twenty-four months, the people who got it were either people like, you know, some of the folks in the ML community who also had an overlap with the effective altruist community, like SBF, but also Amazon, right? This was very legible to Amazon, because they were watching what was happening with Azure and OpenAI, and they were like, "Well, this is super aligned. If you guys actually can create a bunch of state-of-the-art models that are hosted on Amazon, that's super accretive to their AWS business." And that's why, you know, it resulted in a deep compute-and-capital-for-equity partnership with Amazon that was originally $4 billion. You know, a lot of this is public now. But at the time, it was a really tough journey, and I would give Dario, Tom, and the other co-founders, you know, Daniela, Jack, Sam McCandlish, Jared Kaplan, so much credit. It was such a brutal time getting this company going. Like, people don't understand-

    5. HS

      Is there any, is there anything you would have advised them differently knowing all that you know now?

    6. AM

      I'm not sure I would because the world is a very different place today, you know? And at the time, it really did feel like there was no one they could trust.

    7. HS

      Is it not impossible to avoid being hauled up in front of Congress if you reach a certain scale?

    8. AM

      Oh. Totally.

    9. HS

      Whether you, whether you're Google or whether you're Facebook or whether you're Anthropic fighting against the Pentagon, it, it-

    10. AM

      Oh, it's-

    11. HS

      You get to a scale where it is impossible not to have that conflict.

    12. AM

      Oh, absolutely. No, what are you talking about? Look, I started AMP as a public benefit corporation because I think it's actually a very aligned model. You've heard of REI, right? REI is a public benefit corporation. They make billions of dollars in revenue and profit. Have they ever been hauled up in front of Congress? No. Like Ben & Jerry's, a public benefit corporation. Have they been, you know, hauled up in front of Congress? No. It's because they self-moderated, right? At the time they said, "Here's our mission, but we have to build a business." And as long as you hold those two things in sort of... Those things are not in conflict long term. If your goal in life is long term to push humanity forward in some stable, reliable way, then there are always tensions where you have your mission and then you have your profit motive, and you've got to be able to moderate between those two. And I think public benefit governance allows you to do that, and I think we need more public benefit charters in Silicon Valley and in technology, and I think we will get there. If you look at the arc of infrastructure businesses, for example,

  8. 20:52–23:06

    Public Benefit Corporations: Mission vs Profit in the Age of AI

    1. AM

      right-

    2. HS

      I actually had a chat with a mutual friend of ours who asked not to be revealed.

    3. AM

      Okay.

    4. HS

      Um, and they said, "For fuck's sake, all these PBCs, public benefit corporations, will these startup founders not just fucking win their market first?"

    5. AM

      [laughs] I mean, how are they feeling? Are they investors in Anthropic?

    6. HS

      No.

    7. AM

      Okay. So tell them to give me a call when they'd like to be investors in the world's fastest-growing business of all time, and then they can lecture me about public benefit governance and market share and adoption. Public benefit governance gives the leadership the ability to make decisions that sometimes are not legible to shareholders as best for them.

    8. HS

      What decision-

    9. AM

      And in the long term- Yep. Go ahead.

    10. HS

      What decision can you foresee with AMP that is aligned to your mission but does not put the profit motive/incentive first?

    11. AM

      There are many up and down the stack, because we see ourselves as a full-stack scaling partner to the best frontier technology teams, and we also kind of see our job as to propose independent standards for AI and, as an institution, try to evangelize the adoption of those standards through, you know, profit-generating businesses. We have a venture capital business. We also have an infrastructure business. A good example of this for now is we're actually giving away most of our compute at cost. Now, if you're a shareholder, you'd go, "Wait, Anj, you have billions of dollars of compute infrastructure you're giving away at cost?" Yes, because we think that's the right thing for humanity, and we think that's the right way long term to have a healthy, independent ecosystem, which is what our mission is. AMP is a public benefit holdings company. Our vision is to ensure there's a healthy, independent frontier technology ecosystem. Our mission is to maximize the world's frontier output. To do that long term, we think the teams that are truly doing innovation, truly pushing the frontier of science and engineering, need compute access, and many of those teams today can't afford to pay the extraordinary, price-gouging prices for compute infrastructure. And so you know what? Yeah, we're happy to provide them access to that in a way that's mission

  9. 23:06–25:21

    The AMP Grid: Building the Electricity Grid for Compute

    1. AM

      aligned.

    2. HS

      Anj, how do you secure the compute supply? Maybe I should know this, but it's the most starved resource today.

    3. AM

      Right.

    4. HS

      How do you secure a resource that no one else can seemingly secure?

    5. AM

      Well, step one is you get there first, before people realize [laughs] how valuable it is. And, you know, I've been beating the drum on this for four years now, right? When I got to a16z as a general partner, the first thing I did is I sat down with Marc and Ben and said, "We need more compute. We need compute access for these incubations I'm gonna do." And they said, "No problem, Anj. Let's set up a program. What do you need?" So we used, you know, our balance sheet to start procuring compute through the Oxygen program. That gave me the ability to build pretty deep relationships with the industry and build trust with compute partners, who we now have lots and lots of relationships with that we're scaling, in ways that would be very hard if I didn't have that time and the flexibility to understand what is required to really get that infrastructure right. You know, we've talked a little bit publicly about what we're building, which is the AMP Grid. What the electricity grid did for electricity, we're trying to do for compute infrastructure. We see ourselves as an independent system operator of the grid. We're not a cloud provider. We don't own our own data centers. We're not a traditional venture capital firm either. We see ourselves as an independent system operator, which means our job is to coordinate capacity across the ecosystem in a way that allows the best independent teams to provision for their base load, not their peak, so they don't have to over-provision, but when they want to spike up and down for training runs, for inference needs, they feel secure that the capacity exists. We are roughly in 1885, Industrial Revolution England, right now, where these frontier labs are like factories. The steam engine has been discovered.
You can use steam to produce all kinds of new products, and many of them are running their own generators in their backyards at half capacity, and I'm going, "This makes no sense. Let's all pool our generators so that a shoe factory can spike up during the day, a steel factory can spike up during the night, and then you maximize utilization, um, and ultimately output."

  10. 25:21–35:30

    Co-Founding Companies Like Kleiner Used to

    1. HS

      When you think about allocating it, are you not using compute and the cost of compute as a loss leader for your venture fund business, which then comes in and says, "Okay, name any of your incredible businesses, whether it's your Anthropics, your Mistrals, your Black Forest Labs," and says, "Okay, you'll get the compute at cost, but for that, we need $300 million invested"? And that's your way of winning.

    2. AM

      Oh, that's not at all how we make the... Those are not... That's not the deal. The deal is-

    3. HS

      Okay

    4. AM

      ... The deal is I incubate new companies, like Periodic Labs, one at a time. That's all I can do: one at a time. 'Cause I like to team up with scientists or engineers who are at the forefront of their field. It takes a lot of work to create these new companies from scratch. You know, in many ways, I had the privilege to realize that we are entering a back-to-the-future era of venture capital. If you think about the birth of modern industries, you know, let's talk about semiconductors, gene editing, the biotech industry, or self-driving cars. In the early days of the founding of what I call these frontier industries in Silicon Valley, the way you started the most iconic companies is very different from how companies were funded for the last 10 years in the ZIRP era. Intel, for example, was a very close partnership between a couple of scientists and an investor called Arthur Rock, who was a founding investor and was at the office every day. Arthur literally wrote the stock incentive plan. He used to run all-hands at the company every week. Look at Genentech, which was incubated in the basement of Kleiner, right? Its co-founders were Herb Boyer, a professor at UCSF, and Bob Swanson, who was an associate at Kleiner. And I got to apprentice in that mode of venture capital, 'cause when I got to Kleiner, you know, I was twenty. I was wrapping up grad school at Stanford Med School, but I was working nights and weekends at Kleiner on the investing team, and Brook Byers, who is the B in KPCB, had an office next to me, and he had some free time, so I would go to him and be like, "Brook, you know, teach me your ways." And he regaled me with all the stories of how Genentech was founded, and I was like, "Wait. So you're saying basically Bob, like, co-founded Genentech here in the basement at Kleiner?"
And he's like, "Yeah, we were-- That's what it meant to be a partner." And I said, "Well, but that's not what happens here anymore. Like, we write a bunch of checks to SaaS companies, and then they go off and do stuff." And he was like, "Different times." And if you look at-

    5. HS

      Is it-- Are, are they mutually exclusive? And what I mean by that is can you have a venture ecosystem where you have a bunch of people writing a bunch of checks, as we have done for the last 10 years, and a next generation, or to your point, a back to the future era of venture capital where you co-found the business side by side?

    6. AM

      Right.

    7. HS

      Can they run side by side?

    8. AM

      Yeah.

    9. HS

      Or are we actually entering an era where we're back to the future era, as you say, where value accrual is in the co-founding and incubation side?

    10. AM

      Um, I think it's very hard for them to coexist inside of one person, and it's very hard for them to coexist sometimes even inside of one firm. There's a reason I'm sitting here at Periodic Labs. I work here three days a week. Every day from 8:00 AM to 8:30 AM for the last year, Liam Fedus and I have had a stand-up where we go through the priorities of the company, we prioritize, and we go and execute. The compute team at AMP is sitting upstairs procuring compute for the Periodic guys. My role models have always been the Arthur Rocks and the Bob Swansons and the Mike Markkulas of personal computing. Effectively, the first CEO for the first year of Apple was Mike Markkula, who was an angel investor, and he was the one doing all the CapEx, the supply chain and capital and all of that stuff that allowed Jobs and Woz to focus on the product and the engineering. That kind of deep partnership is what I get really excited about.

    11. HS

      Can I get back to something you said before, which is like we're at the Industrial Revolution stage, and I was like, "Okay, help me understand that." If we're at the Industrial Revolution stage, what does that mean for where we're going and how I should be acting as an investor today?

    12. AM

      You have to hold two things in conflict that can seem paradoxical, and this is the most important thing I learned from Mark and Ben: the future is not determined, and so anyone who tells you that they can predict the future with certainty should be taken with a healthy dose of suspicion. Instead, I try to approach things like a scientist and go, "What are the biggest bottlenecks? Let's come up with a hypothesis on how these bottlenecks will be solved, and let's run multiple experiments in parallel," and then whichever one emerges, you just have to be very truth-seeking and be willing to say you were wrong. As an investor, your job is to come up with a hypothesis for where the future is going, to run multiple different experiments that are aligned with your mission in parallel, to be willing to be wrong, and to be honest with your LPs that some of them may be wrong, honest with your-

    13. HS

      What do you say to a Brian Singerman of the world, who always said, "I'm not smart enough to predict the future, but my job is to pick founders who are able to do so"?

    14. AM

      I think the safest way to predict the future is to invent it, right? So do the hard work. Come up with your point of view: if we're in Industrial Revolution England, what happened next? What were the emergent properties of the businesses and institutions that became valuable over the next fifty years after 1885? Then figure out which part of that world, which figure from history of that era, you look up to the most. Go read about their lives, the businesses they ran, and the tensions that emerged in the practice of their business later in life, because they made mistakes when they were young, and try to learn from their mistakes. And then go and execute.

    15. HS

      What's a parallel property or direction from the 1885-onward timeframe that you think will play out in the next era?

    16. AM

      Well, obviously in the world of infrastructure, I think we need something like the grid for compute. So that's what I've spent most of my days on: a coordinating mechanism like the one that allowed the transition of coal and electricity from resources that were being hoarded into stable, reliable commodities, not necessarily commoditized, but something the best engineering teams and the best factories had access to. That's what I think about a lot. And since you're so talented at media and storytelling, and your mission is to push the European continent forward, if I were you, one of the things I would be trying to figure out is how we educate the leading capital allocators and infrastructure allocators in Europe about the coming era, whether that's through media or through educational programs, and get them to understand their role in unblocking the bottlenecks for the best scientists and engineers in Europe.

    17. HS

      It's, uh, largely a lack of pension fund reform in a lot of cases, to be quite honest.

    18. AM

      Okay. So spend your time on pension fund reform.

    19. HS

      How much more cash do we need in Europe for frontier AI to be what we think it can be? Is it like two X? Is it 10 X?

    20. AM

      That's a good question. I would go about it with both a top-down approach and a bottoms-up sizing approach. For us at AMP, when I look at the grid we are building out, which is sort of reasoning by analogy: we have started securing about 1.3 gigawatts of compute infrastructure. That's roughly forty billion dollars of cloud spend over the next four years, financed with about twenty percent equity, call it ten billion dollars of equity capital, and the remainder debt. We have a bunch of partners that help us put together these equity and debt packages to secure compute infrastructure for our companies. In Europe, I would talk to Arthur and figure out how much he thinks is required for the independent ecosystem over there. But think in multiples of a gigawatt. If your atomic unit of math is gigawatts, then from a top-down perspective, Google is at roughly twelve to fifteen gigawatts of infrastructure that I'm aware of, for internal and external deployment needs. And they have a huge land, power, and shell pipeline coming. If Europe does not have access to Google-level infrastructure, then what are you guys even doing? That's roughly what the continent needs for full sovereignty: at least as much infrastructure locally as there is within the Alphabet pool over the next four years.
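To make the sizing math above concrete, here is a minimal sketch of the financing split Anj describes. The figures (1.3 GW, ~$40B over four years, ~20% equity) are the approximate ones quoted in the conversation; the exact split and per-gigawatt cost are assumptions for illustration, not AMP's actual terms.

```python
# Bottoms-up sketch of a compute buildout financing split.
# All numbers are the rough figures quoted in the conversation.

def financing_split(total_spend_usd: float, equity_ratio: float = 0.20):
    """Split a total buildout cost into equity and debt tranches."""
    equity = total_spend_usd * equity_ratio
    debt = total_spend_usd - equity
    return equity, debt

GIGAWATTS = 1.3
TOTAL_SPEND = 40e9          # ~$40B of cloud spend over four years

equity, debt = financing_split(TOTAL_SPEND)
print(f"equity ~ ${equity / 1e9:.0f}B, debt ~ ${debt / 1e9:.0f}B")
# Note: 20% of $40B is $8B; the round "ten billion" in conversation
# suggests the quoted split is approximate.
```

Note the slight tension in the quoted numbers: strict arithmetic gives $8B of equity, so the "ten billion" figure implies the ratio or total is being rounded.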

    21. HS

      What's easier, the equity raise or the debt raise?

    22. AM

      I would say the biggest challenge has been figuring out the right aligned financial structure across both, in a way that's legible to capital allocators at scale. It took me about a year to really get all the pieces right. But there are very large equity pools. Let me put it this way: there are a lot of long-term, mission-aligned balance sheets in the world that want to help frontier scientists, researchers, and university labs get access to the compute they need, but they don't have OpEx. They don't have cash to spend on the compute. So if you can find a way to align equity, debt, and balance sheets in a way that's de-risked, the fundraising is not a problem. It's actually a systems design problem, which took me, again, a year, probably four years really, to get right. But now that we've figured it out, it's not been a problem.

  11. 35:30-37:49

    We're in a GPU Wastage Bubble, Not an AI Bubble

    1. HS

      Do you think we are under-invested still today in data centers?

    2. AM

      We are deeply under-invested in security, in secure compute. Let me put it this way: we are not in an AI crisis. We are not in an AI bubble, for sure, I'll tell you that, which is the question I keep getting asked. We are definitely in a GPU wastage bubble, where there are stranded pockets of compute, billions of dollars of compute, sitting unutilized, and if we could pool them together on a grid across the independent ecosystem-

    3. HS

      Why are they unutilized? Sorry.

    4. AM

      For a couple of different reasons. One is that compute is not fungible. Unlike electricity, which went through a process of standardization, you know, AC versus DC, where megawatts are megawatts are megawatts, compute is not fungible today. Forget fungibility of compute across different manufacturers like Nvidia and AMD; even within one manufacturer's chips, NVIDIA's, for example, the H100s, the GB200s, the GB300s are all completely different chip types. So if you have one cluster where you're doing a training run on H100s, and then you want to do continued post-training, or run a distributed training run of that workload on GB200s, it doesn't work. So there are just stranded pools of compute, because the atomic unit of computation is FLOPS, and I wish FLOPS were fungible, but not all FLOPS are born equal today. If you provisioned a cluster two, three years ago with H100s, and now you want to run some of those workloads for the newer generation of models, you're memory-bound by the H100 chips. You can't unlock the benefits of the Blackwell chip without basically buying a new cluster. And so now suddenly you have this H100 cluster that you don't want to do training on anymore, because the chip doesn't have the right memory properties to train your frontier models. It's very hard for any individual company to see all of this, but when you're on seven or eight boards like I am, and you've been doing this for 15 years, you start to see patterns emerge, and you go, "Wait a minute. Why is there all this unutilized compute sitting here and there?" This

  12. 37:49-42:16

    Why Compute Isn't Fungible

    1. AM

      is ludicrous for everybody.
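The non-fungibility argument above can be sketched in a few lines: a workload pinned to one chip generation cannot use spare capacity on another, so that capacity sits stranded. The types, names, and GPU counts here are hypothetical illustrations, not real fleet data.

```python
# Minimal sketch: capacity on a mismatched chip generation is stranded.

from dataclasses import dataclass

@dataclass
class Cluster:
    chip: str        # e.g. "H100", "GB200"; treated as mutually incompatible
    free_gpus: int

def schedulable(workload_chip: str, clusters: list) -> int:
    """GPUs usable by a workload that requires one specific chip generation."""
    return sum(c.free_gpus for c in clusters if c.chip == workload_chip)

fleet = [Cluster("H100", 4096), Cluster("GB200", 512)]

# A frontier training run tuned for GB200 memory and interconnect:
usable = schedulable("GB200", fleet)
stranded = sum(c.free_gpus for c in fleet) - usable
print(usable, stranded)   # 512 usable, 4096 stranded
```

Under a fungibility standard, the scheduler's filter would disappear and all 4,608 GPUs would be usable; without one, most of the fleet is idle for this workload.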

    2. HS

      Are frontier models moving faster than the pace of chips? As you said with the H100s, you have newer and newer models being trained on older and older chips because that's what's free, and it's not moving in lockstep. Is that the problem we're articulating?

    3. AM

      No, no, no. The problem we're articulating is that compute is not fungible. There are no standards for fungibility, and there are no institutions enforcing standardization of compute. We are in the pre-standardization era of compute today, which is where electricity was in 1885. And I hope we can self-regulate, self-standardize, and self-enforce standardization so that we can skip the boom-and-bust cycles that happened with electricity over the following fifty years. This happens with every infrastructure cycle in the pre-standardization era. It happened with electricity in 1885. It happened with steel. It happened with railroads. And every time you have this boom-and-bust cycle, wars are fought, [sighs] companies backstab each other. It's super painful. It's annoying. My view is that compute not being fungible is what's driving all this talk about an AI bubble. What people forget is that we don't have an AI capabilities bubble. The capabilities are extraordinary in every domain. We have an infrastructure wastage crisis right now, and it's because there are no open standards. There's no open protocol for how FLOPS from one data center can flow to somebody else who needs them, across chip types, across secure boundaries, and it's resulting in a lot of pain for the ecosystem. People are just-

    4. HS

      If we have compute standardization in the way that you said, will we-

    5. AM

      Yeah

    6. HS

      ... remove the boom and bust cycle, or is that just one part of it?

    7. AM

      I think that will go a long way in preventing this.

    8. HS

      I'm sorry for asking this. You're like, "Jesus Christ, Harry, I'm a professor at Stanford, and you thought-

    9. AM

      [laughs]

    10. HS

      ... waste my time like this," which is a fair statement. A British accent goes a long way there. What is the biggest bottleneck or barrier to the compute standardization that you want to achieve?

    11. AM

      Uh, it all goes back to alignment, man. Misaligned incentives up and down the stack.

    12. HS

      How is Silicon Valley and DC not on the same alignment?

    13. AM

      For one, I don't think we have standardized on whether AI should be regulated, treated, and procured as good old-fashioned software or as a new kind of system. Again, I went to grad school for machine learning, and what you learn in machine learning 101 is that models are statistical. They're not deterministic. And when you have a statistical system, some of its properties are different from a spreadsheet's. A spreadsheet is deterministic software; a statistical model today is not. So should the procurement of a spreadsheet be the same, from an IT perspective, as the procurement of a statistical model? Open debate. That is the core debate. That's the problem. AI alignment, don't get me wrong, is hard, but it's not the hardest problem. Human alignment is really the problem we have in the world right now. We need technologists who understand the difference between deterministic software and statistical systems to propose a set of standards for how procurement should work, and then we need standards people in DC. We have this thing called NIST. We have various bodies in the government that should get together and say, "Thank you for proposing this standard. This is where it makes sense. This is where it doesn't. This is called an RFC process, and we're going to standardize on this definition of procurement." This is what happened with TCP/IP and the internet. It happened with AC/DC and electricity. We have not done that yet for the model era. And unfortunately, these are called open standards, and the standardization process is being confused with marketing. Now, President Trump is actually, I think, trying to do his best, from what I can tell, in at least giving America enough freedom to innovate that these standards can even be discovered in our labs here.
Because first you need somebody to actually pioneer and figure out what the standards even should look like. I think that there's just a lot of

  13. 42:16-45:15

    How China Is Winning the AI Race

    1. AM

      noise.
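The deterministic-versus-statistical contrast drawn above is easy to show in code. This is a toy illustration: the "model" here is just a random sampler standing in for a statistical system, not a real LLM.

```python
# Deterministic software vs. a statistical system, in miniature.

import random

def spreadsheet_sum(cells):
    """Deterministic: the same input always produces the same output."""
    return sum(cells)

def toy_model(prompt: str):
    """Statistical stand-in: the output is sampled, not computed."""
    return random.choice(["yes", "no", "maybe"])

# The spreadsheet is reproducible; the model is not.
assert spreadsheet_sum([1, 2, 3]) == spreadsheet_sum([1, 2, 3])  # always 6
outputs = {toy_model("Is this deterministic?") for _ in range(100)}
print(outputs)  # typically several distinct answers across 100 calls
```

This is the property that makes procurement different: you can acceptance-test `spreadsheet_sum` with one input-output pair, but evaluating `toy_model` requires statistical testing over many samples.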

    2. HS

      Do you worry that the CCP is basically subsidizing a generation of Chinese models that are now being used by American companies, whereby they use frontier models to essentially set where model capabilities can be, and then make a real effort to get the open-source Chinese models as close to those benchmarks as possible-

    3. AM

      Yeah

    4. HS

      ... much, much cheaper?

    5. AM

      I mean, the engineering execution right now, up and down the stack in China, is extraordinary. Here's what's happening. What they realized is that the AI scaling race is not a chip race. It's a full-stack systems co-design race. If you can't compete head-to-head on chips for now, what do you do? You compete on systems design. You say, "Okay, we don't have leading-edge chips here yet, so let's compete on systems." You co-design the chip that you have, which might be Huawei-

    6. HS

      Mm

    7. AM

      ... with the compute infrastructure and with the training run, and you design for a bunch of performance improvements at every layer of the stack. Then you do adversarial distillation at scale, where you take Western models and, from various different endpoints, you distill the state of the art, and you try to get as many performance gains as possible on that data. Then you release that back out to the world as open models, you see what people react to, you get feedback, and you do the next run and the next run, and you catch up. And at the point you catch up, you say, "Wait a minute. We're starting to be at the frontier. Why do we need to open-source anymore? This is good enough for our local domestic needs." It's beautiful. And it has actually, by the way, resulted in innovation. They're innovating at every part of the cycle, and that's why Huawei chips are able to produce capability improvements in China today that rival some of the best chips here, when integrated up and down the stack. In a sense, it's the Google strategy. Google is integrated: land, power, shells, TPUs, Borg, the compiler and serving stack, Gemini, and then the deployment. The systems co-design up and down the stack results in efficiency that gives you huge performance gains at the end of the day. China has replicated that strategy, using open source as a bootstrapping mechanism to catch up. It's extraordinary.
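The distillation loop described above, query a stronger "teacher" model, collect its outputs, and fit a cheaper "student" on them, can be shown schematically. Everything here is a toy stand-in (a linear "teacher" and a least-squares "student"), not any lab's actual pipeline.

```python
# Schematic of model distillation: fit a student to a teacher's outputs.

def teacher(x: float) -> float:
    """Stands in for a frontier model endpoint being queried at scale."""
    return 2.0 * x + 1.0

def distill(prompts):
    """Fit student parameters (a, b) to teacher outputs via least squares."""
    data = [(x, teacher(x)) for x in prompts]   # the data-collection step
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    a = sum((x - mx) * (y - my) for x, y in data) / sum((x - mx) ** 2 for x, _ in data)
    b = my - a * mx
    return a, b

a, b = distill([0.0, 1.0, 2.0, 3.0])
print(a, b)   # recovers (2.0, 1.0): the student now matches the teacher
```

The point of the sketch is the access pattern, not the math: the student never needs the teacher's weights, only enough queries against its endpoint, which is why serving frontier inference openly is the attack surface being discussed.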

    8. HS

      Does that concern you?

    9. AM

      Are you kidding? Absolutely. That's why I think what we need is a Western grid where all frontier inference is served through an Iron Dome, so that if there's an adversarial distillation attack on any one of our teams, we coordinate together. Because I'm on seven boards, I'm in group chats where I get texted by one founder saying, "Anj, is anyone else noticing a huge spike in distillation from this region today?" And then I put them in a group chat and we coordinate. It's very informal right now. But what we

  14. 45:15-49:07

    Coordinating Defense Against AI Distillation Attacks

    1. AM

      need is-

    2. HS

      You said before that state-sponsored attacks on frontier AI labs are getting worse. What do we not know that we should know?

    3. AM

      Um, we should know that there are insider threats. We should know that there's distillation happening across the US and Europe that is taking advantage of us all not being united, that distillation is taking advantage of our political systems, and that our mission-critical infrastructure is quite vulnerable, especially data centers serving workloads that are being used by enterprises. And I think that, from a business standpoint, if we don't secure frontier model inference, what I would call state-of-the-art inference, behind a coordinated Iron Dome, I don't think we have a sustainable shot at staying at the frontier over the next decade.

    4. HS

      I'm sorry, what does that mean, an Iron Dome for inference in terms of sustaining it?

    5. AM

      It means that all inference, no matter which company is serving it, is served through a shared proxy that can tell each company when there's an attack happening on one part of the frontier. Think of it as an Iron Dome across the entire Western front. Because if you're in one company, you can't see that your model being served through some other company is being distilled. So it's a deployment coordination protocol. It's basically my group chat [chuckles] with a bunch of different founders, but scaled, where people go, "We're seeing this attack today," and others go, "We are too. Let's coordinate on a defensive response."
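The "shared proxy" idea above amounts to cross-company report aggregation: a suspicious traffic pattern seen by one company is weak evidence, but the same pattern reported independently by several companies triggers a joint alert. The class, names, and quorum threshold below are illustrative, not a real protocol.

```python
# Minimal sketch of coordinated distillation-attack detection.

from collections import defaultdict

class SharedDome:
    """Aggregates per-company reports; fires once a quorum agrees."""

    def __init__(self, quorum: int = 2):
        self.quorum = quorum
        self.reports = defaultdict(set)  # source region -> reporting companies

    def report(self, company: str, source_region: str) -> bool:
        """Record a report; return True once enough companies concur."""
        self.reports[source_region].add(company)
        return len(self.reports[source_region]) >= self.quorum

dome = SharedDome(quorum=2)
print(dome.report("lab_a", "region_x"))   # False: one observer, no alert
print(dome.report("lab_b", "region_x"))   # True: coordinated alert fires
```

The design choice the sketch captures is that no single company needs to share its traffic, only a minimal signal ("we saw a spike from region X"), which is what makes the coordination feasible across competitors.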

    6. HS

      I'm sorry for my lack of cohesion on that question. Really, I feel guilty, and I don't blame you for leaving this interview thinking, "God, he's got worse over the eight years, not better." But speaking of inference, I was watching an interview with someone I think from Baseten, and they were saying that the demand for inference has grown not linearly but combinatorially, and that is how we should expect it to progress over the next three to five years. Do you agree with that?

    7. AM

      If we keep scaling capabilities, that will definitely happen. The problem is there are a couple of bottlenecks on scaling capabilities that are quite existential. We've talked about the four core bottlenecks on capabilities progress: context, compute, capital, and culture. And I think capital allocation is a huge problem. We've got to educate people on why these capabilities are extraordinary. This is the biggest financial bonanza of all time if you know where to allocate. There's a reason I invested in Anthropic in the seed round, and now, as you've pointed out, the returns of the body of work I've done over the last four years are attracting LPs at the highest levels. But we're just getting started, and so I think some of these projections you see are correct if we unblock the bottlenecks along the way. In compute infrastructure, secure compute infrastructure that's fungible and standardized is the biggest bottleneck. If there's any reason why OpenAI, Anthropic, Gemini, and so on don't hit their revenue targets over the next few years, it's because they won't have access to enough compute. There's a related bottleneck too. When I was at Stanford many years ago as a kid, I took this class that Peter Thiel taught, which was turned into the book Zero to One. I was an editor for The Stanford Review, and he had this quote: "Competition is for losers." Having done this now for 15 years, I've updated my theory of business, and I think he was not wrong, but he was insufficiently precise: I think perfect competition is for losers. I also think

  15. 49:07-1:01:43

    Perfect Competition Is for Losers

    1. AM

      monopolistic comp-

    2. HS

      What does, what does that mean, perfect competition is for losers?

    3. AM

      It means that if you have 50 companies all doing LLM training or building coding models, that's a losing proposition. Perfect competition is like restaurants: there's no defensibility. That's why restaurants go out of business all the time; it's very hard for them to differentiate. On the other hand, monopolies are mafias. Once you have a monopoly at one part of the stack, you stop innovating, and instead you try to go up or down the stack by using the balance sheet to acquire, you start hoarding resources, you start saying, "You give me this, and I will force you to basically subsume yourself to me." And I'm seeing that kind of behavior up and down the stack, and mafias are not good for innovation. What we need is optimal competition. The optimal competition set-up is that you have three or four teams at every frontier making extraordinary progress, so if you invest in them, you get extraordinary returns, but they're not so comfortable as to be a monopoly such that they can stop innovating. And that's important, because when they stop innovating, as humanity, we're fucked. So I believe we need to transition to the era of optimal competition in frontier technology. And I think we need leaders, stewards, venture capitalists, politicians, and educators to remind the world that we have already lived through this era of boom and bust. Like you said with Baseten, inference has an extraordinary growth curve ahead, but it's not going to be an extraordinary growth curve if there are 50 inference companies all competing with each other in a race to the bottom, which is kind of what's happening right now.
It is not clear to me that we need 50 inference companies, and it's not clear to me that VCs are smart enough to realize that they're just lighting hundreds of millions of dollars on fire in a category where having four or five really good, trusted inference providers is net good. But-

    4. HS

      Will the VC subsidization of 50, 60, 70 companies, whatever it is, not make it impossible for the good companies, the four or five, to progress through that cycle?

    5. AM

      It's a bit of a self-destructive mechanism, because if you have 50 different companies all competing for scarce compute resources, then the folks who are actually innovating can't get it, and so they can't do their next round of product innovation. And that's the problem when you have... Like this-

    6. HS

      Are we... Is that where we are now, though?

    7. AM

      That's where we are right now: the best inference teams are calling me up. Actually, all inference teams are calling me up and saying, "Anj, do you have compute for us?" Because their product is reselling compute. But it's been hoarded. It's been hoarded by the hyperscalers. It's been hoarded by people who are not innovating but are sitting on compute. And it's so obvious to me, now that I've left a16z and I'm an independent-ecosystem public benefit corporation, that the existential threat to innovation in this category is lack of compute. That's why AMP started procuring compute for the independent ecosystem a while ago, and we are trying to find a way to get these teams the compute they need to keep innovating. But I wish-

    8. HS

      What will determine the four or five inference companies that win versus the others that don't?

    9. AM

      Supply. Access to supply.

    10. HS

      It's that simple?

    11. AM

      Yep. Compute supply. If you don't have compute, how do you do inference, man? What are you selling? You need a product to sell. If you're making a steam engine, you need coal.

    12. HS

      One of your former partners tweeted last night that we're going to enter a time where, I'm trying to remember it, I wrote down parts of it, only model creators access the most powerful models, and that will power the services and the application-layer apps they provide. Do you believe that will be the world we live in, where model providers inherently safeguard the best models for their own apps, à la Claude, potentially, or not? What Martin is suggesting is that in competing cases, they will offer a worse model, which gives them an advantage. As an example, ElevenLabs, which serves a huge number of application-layer companies, would reserve their latest models so they can offer the best customer support themselves, and sell their older models to Sierra and Decagon, who then have a worse-quality model while ElevenLabs retains the best for itself.

    13. AM

      The embedded assumption there, and what we have learned empirically over the history of technology, is that if you have a general-purpose product like the iPhone that works for everybody, the natural incentive is to amortize the cost of product development over the largest number of users. So if you have a general model that's good for everybody, it will be available to everyone. If you have specialized models that are good for some people, there will be product segmentation. And I think what this is telling us is that if there are many custom models, some of them will be accessible and others will not be. So if anything, the fact that frontier model labs are saying, "Here's a new model we have; it only makes sense for some large enterprises to access it," should be seen as vindication of the ecosystem truth that there's going to be an ecosystem of different models of different types. There's no one large God model. Because if there were, there would be the market desire to have prime ministers, presidents, and students all use the same iPhone; inherently, you can raise the most money and invest the most product budget in a general product and amortize that cost across everybody. But if you have specialized models, I don't think they're going to be accessible to everybody, and they don't need to be. I think this open-and-closed-access thing is somewhat overblown. Empirically, from a systems perspective, if you look at the history of technology, general products are distributed to the masses, while custom products get enterprise segmentation: some are accessible to a given enterprise, others are not.

    14. HS

      Are there foundation model layer companies that are yet to be built that will be worth over $100 billion?

    15. AM

      Oh, so many. I'm s- Periodic is one. I'm sitting one-- in one right here [chuckles] right? But they're not foundation model companies. I would call them frontier systems companies. This is the problem. Every time I kept calling-- trying to educate people, you know, four years ago where they'd be like, "Anj, but, you know, Anthropic is a foundation model company, and Mistral is a foundation model company." No, guys, that's just one part of what they do. Maybe they're, they're starting there because that's a very par-- that's a core competence, but there's a reason why, you know, Anthropic also has a thing called Claude Code, and there's al- there's a reason why Mistral has something called Mistral Compute, and there's something called-- There's a, there-there's a reason why, you know, Microsoft, who's a cloud, also has a copilot business. You know, these labels or categories of foundation model when-- need to be viewed, I think, with more suspicion than they are. Like what matters is the full s- the systems co-design, the systems, the, the full stack li-like frontier research loop that you need to run with customers, and then later y- when that happens, when you say, "Oh my god, Anj, Anthropic is now-- They have, they have-- They, they were a model company, and now they're launching a product called Claude Code?" And I was like, "What do you mean? That was the product plan all along. Of course, you need to have a, a pair programmer interface for a model. Like why, why would you assume otherwise? Oh, 'cause you just weren't paying attention, and you had your neat market maps that your associates were giving you, and you thought that was, that was truth." The-these-- The commercial community has forgotten how to build businesses, and they've forgotten the difference between first principles and marketing. That's the problem. That's one of the other misalignment problems. 
The ground truth of these businesses, machine learning systems businesses, is that they've always been frontier systems businesses. They were never just foundation model businesses. Now, okay, if you had to package that up and tell your LPs that because that was legible to them, then I can't blame you, I guess. But the LPs I work with, I'm very upfront with them. I say, "Look, these categories are going through huge reinventions, and when you partner with me, what you get is a full-stack sort of partner. I will tell you the first principles of what's going on, and those first-principles insights will change over time. But you've got to be comfortable with huge CapEx outlays in businesses that end up winning the entire category." That's what frontier technology is. So I don't know. I think foundation models have been deeply misunderstood. And this is part of why I started the class four years ago. I just thought security at scale was going through a bunch of reinvention, and then we reinvented the class to be infrastructure at scale last year, and this year it's frontier systems, because not enough people realize that to keep the capabilities frontier moving, you need to think about these projects, these companies, as frontier systems projects, not foundation model projects. Does that make sense?

    16. HS

      It does, but when I hear about the CapEx required, I respectfully ask: do you have enough money? I think the $1.3 billion-

    17. AM

      No.

    18. HS

      ... was-

    19. AM

      No. [laughs]

    20. HS

      Yeah, like how much money do you-

    21. AM

      Not there.

    22. HS

      ... yeah, how much money do you need, Anj?

    23. AM

      Well, for the 1.3 gigawatts, which was kind of our proof of concept, that capital is not a problem. I think the question is if you want to scale beyond that-

    24. HS

      Mm.

    25. AM

      Yeah, we need way more capital to be deployed across the Western front, in the United States and US-allied countries.

    26. HS

      How much money do you think you need?

    27. AM

      As long as the capabilities frontier keeps moving and we want a healthy independent ecosystem, we'll just keep raising more capital. There's no end to that. The day machine learning stops working as a systematic way to give humanity more capabilities, that's when I'll say, "We have enough, Harry." But that's so far out, I don't even know how to reason about it.

    28. HS

      I could talk to you all day, but before we do a quick-fire round: how will venture be fundamentally different in five years' time than it is today?

    29. AM

      Well, again, go back to history, right? I think there will be a few people like Arthur Rock, Bob Swanson, and Mike Markkula who turn their practice into institutions. Then there'll be others who don't, and if they don't evolve themselves for what entrepreneurs of this era need, then I think they should get out of the venture capital business, because we don't need more bankers. You know, one of the beautiful things: my friend Vlad, who runs Robinhood, recently floated this venture fund thing on Robinhood, right?

    30. HS

      Yeah, yeah. Venture-- Robinhood Ventures, I think it is.

  16. 1:01:43–1:15:08

    Quick-Fire Round

    1. HS

      Dude, I'm gonna do a quick fire round with you because otherwise I'm gonna take all day.

    2. AM

      Sure.

    3. HS

      You can advise an LP investing in venture funds one thing.

    4. AM

      Hmm.

    5. HS

      What do you advise them?

    6. AM

      Educate yourself. Take the class, do all the readings. Don't skip the hard work. Too many LPs are outsourcing their hard work, the work they're supposed to be doing as capital allocators, which is understanding what's actually going on and then deciding which venture managers and allocators have a unique, defensible advantage at the bottlenecks. I would be investing in the bottlenecks, basically.

    7. HS

      Dude, too many GPs are not doing the work. The number of GPs who've never built anything with AI is astonishing.

    8. AM

      I agree. Completely agreed.

    9. HS

      And I don't think you can be in-- you'll laugh at me. I've built with every different vibe code provider. I'm trying to turn my media company into an AI-first media company. It's pathetic compared to the shit that you do, but at least I'm trying. I'm seeing the bottlenecks of Supabase integrations [chuckles] and everything that comes with it, and you learn by building.

    10. AM

      Learn by building.

    11. HS

      I think if you're not doing that in the beginning, you shouldn't be investing, period.

    12. AM

      I completely agree. I mean, there's a sovereign country that came to me at the end of last year and said, "Anj, we want to bring twenty-six of our ministers to your house and do a one-year program, a frontier AI program, where we learn what's going on in AI from lectures and so on, and then we want to do a deployment project where each of our ministers actually builds AI agents." And I said, "You know what? If you take Stanford CS 153, it's a microcosm of this course I'm doing with this country, the sovereign fund that we partnered with, and that's the way. You have to do the work: read the literature, understand what's going on in research, and then deploy yourself. Build tools." The Stanford CS 153 class project is the one-person frontier lab, because I genuinely believe that what would have taken fifty people four years ago, you can now do with one person with the right AI tools. And as a leader, if you haven't played with these tools and deployed yourself and built your own agent, I don't think you understand what's going on. I'm not letting the ministers who are taking this class with me graduate until they build and deploy agents. I've told them they're not getting their graduate certification.

    13. HS

      Have you told your wife that you've got twenty-six ministers coming to your house? [laughs]

    14. AM

      She let me co-host them-

    15. HS

      We're in a date night, Anj. [laughs]

    16. AM

      She let me co-host them at our house in SF a few weeks ago, and I'm very lucky to... Viv, I don't deserve Viv, I'll tell you that. But she's mission-aligned, and we both believe that the best thing we could be doing with our time is educating at scale.

    17. HS

      What makes Dario so good that other people don't see from the outside?

    18. AM

      One, sheer scientific brilliance. Truly world-class technical ability in his domain. Two, an obsessive desire for truth-seeking: to keep reasoning, to keep doing experiments until he's... He's a physicist at heart, right? I think Dario is a physicist at the end of the day; he's not actually a computer scientist. And a world-class applied physicist tries to derive general laws of reality by looking at data and running empirical experiments. He's an empiricist, and he has an obsessive desire to be a good empiricist. And the third is mission alignment, culture. He said, "This is our focus. This is our mission. No drift. We won't take shortcuts. We are willing to make huge trade-offs to hit this mission." And that attracts the best talent, incredible talent. In the face of criticism, people saying, "You're a mercenary, you're just doing this for profit": no, actually, it turns out there's a ruthless desire to stay focused on the mission, and that results in hard trade-offs and priorities. And if you're not aligned on that mission, you'll just think he's crazy, or evil, or whatever. It's crazy how many ad hominem attacks I've seen against him, but he's got that clarity of mission.

    19. HS

      What have you changed your mind on in the last 12 months?

    20. AM

      You know, the biggest one is health. Both family members and I have had health experiences that made me realize we all just don't know how much time we have on Earth. And that makes you stop taking for granted how much time we have, so I started taking time much more seriously. In every lecture I do at Stanford, we talk a lot about scaling laws and technical stuff, but I also give the kids an "Anj's life scaling laws" lesson. I'm very inspired by Richard Feynman; Feynman's lectures always combined technical education with a little bit of life coaching. And my number one scaling law for the students was: take life seriously, but don't take it so seriously that you forget what makes it worth living, which is: have fun with friends, work on interesting projects with people you love, and don't take relationships for granted. It's humans that make the world go around, and if you're so focused on your next fund or your next raise or whatever, you take for granted the one thing we all don't know how much we have of, which is time with each other. And so I just started valuing my time more, my relationships with people. I mean, my parents. I left my parents behind in India to move to college at Stanford, and I have gone weeks of my life not calling them or texting them, and now they're in their 60s, and I've-

    21. HS

      I would give you a hug if we were in person. [laughs]

    22. AM

      Fuck, I'm so sorry, man. I... [laughs]

    23. HS

      Don't worry. It's okay.

    24. AM

      Jesus.

    25. HS

      You know, the first money we ever made from the show, we made it because my mom has MS, and we couldn't afford treatment for her, and the only way I could pay for it was by putting adverts in the show. [laughs]

    26. AM

      Wow.

    27. HS

      And that was how we did it. And they still pay for it. Thank you to Vanta for paying for Mom's MS. [laughs]

    28. AM

      Thank you, Christina. Yeah.

    29. HS

      Yeah.

    30. AM

      Thank you for the corporate sponsors. Oh, yeah, man. The trade-offs, you know, the sacrifices are...

Episode duration: 1:15:18


Transcript of episode a1ymdW-h33E
