The Twenty Minute VC

AI Fund’s GP, Andrew Ng: LLMs as the Next Geopolitical Weapon & Do Margins Still Matter in AI?

Dr. Andrew Ng is a globally recognized leader in AI. He is Founder of DeepLearning.AI, Executive Chairman of LandingAI, General Partner at AI Fund, and Chairman and Co-Founder of Coursera. A pioneer in machine learning, Andrew has authored or co-authored over 200 research papers in machine learning, robotics, and related fields. In 2023, he was named to the Time100 AI list of the most influential people in AI.

Timestamps:
00:00 Intro
01:04 What are the Biggest Bottlenecks in AI Today?
09:31 How LLMs Can Be Used as a Geopolitical Weapon
15:07 Should AI Talent Really Be Paid Billions?
19:15 Why is the Application Layer the Most Exciting Layer?
29:30 Will AI Deliver Masa Son's Predictions of 5% GDP Growth?
38:43 Do Margins Matter in a World of AI?
40:36 Is Defensibility Dead in a World of AI?
48:24 Will Human Labour Budgets Shift to AI Spend?
55:05 Are We in an AI Bubble?
56:28 Quick-Fire Round

Subscribe on Spotify: https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465
Follow Harry Stebbings on X: https://twitter.com/HarryStebbings
Follow Andrew Ng on X: https://twitter.com/AndrewYNg
Follow 20VC on Instagram: https://www.instagram.com/20vchq
Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok
Visit our Website: https://www.20vc.com
Subscribe to our Newsletter: https://www.thetwentyminutevc.com/contact

#20vc #harrystebbings #andrewng #aifund #ai #china #LLM #defensibility #bottleneck #aibubble

Andrew Ng (guest), Harry Stebbings (host)
Nov 17, 2025 · 1h 6m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–1:04

    Intro

    1. AN

      (instrumental music plays) In my career working in AI, (computer data sound effects) I have yet to meet a single person that ever felt like they had enough compute.

    2. HS

      I could not ask for a better guest. Andrew Ng, globally recognized leader in AI.

    3. AN

      Data centers are the critical infrastructure for building the digital economy. I think that open-weight models are a tremendous source of geopolitical influence. (computer data sound effects) The work ethic, the velocity, when China's government makes an all-nation commitment, an all-industry commitment, that's actually a very powerful force that I wouldn't underestimate.

    4. HS

      Ready to go? (instrumental music plays) Andrew, I've been an admirer for a long time, so I've been really looking forward to making this happen, so thank you so much for joining me today.

    5. AN

      No, yeah. Thank you, Harry. I watch a bunch of shows. I really enjoyed your recent one with, uh, my friend, Martin, Martín Casado as well. That was very memorable. So actually thrilled to be, uh, uh, here.

    6. HS

      I d- I love Martín. Very, very special

  2. 1:04–9:31

    What are the Biggest Bottlenecks in AI Today?

    1. HS

      man. I, I wanna start with something that you've said before. You said AI is the new electricity, and when I think about electricity and where we are today, I wanna understand the bottlenecks. And everyone seems to suggest that it really is about data, compute, and algorithms. Is that the three parameters to which we should think about bottlenecks, and if so, which one do you think is the biggest bottleneck?

    2. AN

      I would say the two biggest bottlenecks right now, um, I think electricity is one of them. Uh, so in the US, I am honestly worried that, uh, many data center operators are stuck in permitting, you know. And I know that local community support is important and some people don't want a data center there, but, um, just as we once built roads and railways as the infrastructure for a certain generation, data centers are the critical infrastructure for building the digital economy, and so the lack of electricity in America and in a number of Western countries is a problem. In contrast, I see China building power plants left and right, including nuclear, so that will be an interesting dynamic. And semiconductors are another bottleneck. Um, but AI is so complicated. I think we also need more data. We also need better algorithms. You know, all of it is worth working on, but in the short term, the constraints are electricity and semiconductors.

    3. HS

      Can you talk to me about the c- constraints around semiconductors that you think are most pressing that most people don't realize?

    4. AN

      First, in my career working in AI, I have yet to meet a single AI person that ever felt like they had enough compute. So, um, you know, give us any amount of compute, we will use it all up and say, "We still don't have enough." This has been a constraint for the last 20 years or so. But what I'm seeing is, um, with the rise of gen AI, there are very valuable workloads, uh, for example, AI-assisted coding. You know, it's fantastic. It's making us so much more productive. But if you use Claude Code enough, sometimes you get rate-limited, and I find that many companies really have excess demand, which is a very rare problem to have. So many people want more LLM inference, want more tokens generated, and we just don't have the semiconductors and data centers and electricity to meet the demand. But, you know, there's a lot we could do with AI token generation, uh, and it's frustrating when, on the supply side, we can't supply enough to the people that want it. On the demand side, you know, you get rate-limited if you use too much.

    5. HS

      How should I think about that insatiable need for more compute, and the improvements that come from it, alongside the recognition by many people that GPT-5 was the example that scaling laws have been reached to a certain extent, and that there has been a transition toward a focus on efficiency? How should I balance those two seemingly differing opinions?

    6. AN

      So, it is true that, um, token generation is getting more efficient and cheaper. In fact, um, if you look at OpenAI's, uh, open-weight models, they actually released models that are very efficient to run, so I think they did a good job with, um, was it like 120 billion parameters or something, with I think 5.7 billion active? So it's actually a very efficient model to run. Um, but despite the cost of token generation falling, uh, our demand for it is, you know, insatiable. One interesting thing that's happened in AI is, if we look at the buckets of value, uh, one of the big buckets of value is AI-assisted coding. And I think this hearkens back to an earlier era. In a previous generation, Google came to dominate, you know, horizontal information discovery like web search, but there was room for lots of verticals when the internet was being built. So we wound up with, you know, Travelocity and Expedia fighting it out in travel, a bunch of folks fighting it out in retail, uh, a bunch of others in transportation, social media, and so on. What we're seeing now is, um, ChatGPT has such a strong consumer brand. Uh, ChatGPT seems to be the dominant player in the new generation of horizontal information discovery, although I think Gemini, with its channel advantage through control of Android and Chrome, you know, is a serious player as well. But if that's where horizontal information discovery turns out to be, then there's still plenty of room for lots of verticals to be built up. And one of the clear buckets of really valuable verticals is AI coding assistants, where Claude Code, um, you know, I use that every day, love it, and OpenAI's, uh, Codex has a lot of momentum as well. It's clearly making developers so much more productive and efficient that the demand is just through the roof: "Let's use more and more of this."
One thing I find exciting is, I often look at AI coding assistants as a harbinger for what might happen to other job functions as well, as the AI marketing tools become more efficient, as the AI recruiting tools become more efficient, and as the AI finance tools become more efficient. So I often look at AI coding assistants as, you know, maybe a foreshadowing of what may happen to other sectors as the tools get better for them too.

    7. HS

      I had Joelle Pineau from Cohere, and formerly of Facebook, on the show recently, and she said that AI coding assistants are in the same place that maybe image generation was in 2016, 2017, in terms of maturity. Do you think that's a fair assessment of the environment today, or do you not think so?

    8. AN

      I don't know. I think it's further along. I think in 2016, image generation wasn't super valuable. Uh, I don't remember it being that valuable back then. But I think today, AI coding assistants are really... Actually, uh, at AI Fund, my, uh, head of engineering... I said, "Hey, let's think about standardizing on tools." And, you know, basically he said, "I need these tools, and you'll have to pry them out of my cold, dead hands." I think our developers feel really strongly. I myself don't ever want to have to code again without AI coding assistants. So I think the tools are really working well, but still with a lot of headroom for how much better they can get.

    9. HS

      I do just want to go back to the core bottlenecks. We said electricity, and we said semiconductors. Yet, I think when we look at, like, the build-out of data centers today, as you said, regulation has been a big part of preventing that in a lot of ways. Do you think Trump has done more to help or to hurt the progression of AI in the United States from an infrastructure perspective?

    10. AN

      Over the last few years, the US federal government has done some good things and some less helpful things. Um, I feel like clearing out unnecessary regulations has been a very good move. Um, even last year, with the, uh, bipartisan Schumer AI Insight Forum... I think there are a lot of people lobbying the US government to pass stifling regulations. You know, there are a lot of hyped-up AI safety narratives saying AI could lead to human extinction, which is kind of a ridiculous statement. Um, they try to get stifling, anti-competitive regulations passed, often to try to shut down open source, open weights. Fortunately, we've beat back a lot of that. But I think the bipartisan Schumer AI Insight Forum did a really good job digging to the truth and concluding that America should be investing in AI rather than, you know, passing unnecessary regulations to slow it down. Um, I think Trump, uh, and his whole team, David Sacks and Sriram Krishnan and so on, did a good job clearing out unnecessary regulations. Uh, on the flip side, one of America's huge competitive advantages has been this ability to attract talent, uh, including high-skilled talent as well as, you know, frankly, young talent that may not currently be high-skilled but could be in the future. And so I think, um, to the extent that America is, uh, not investing as much in attracting talent, that would be an unforced error. Um, and then lastly, you know, investments in science, right? Uh, I think, uh, helping our institutions of higher education have the resources to train our grad students, to invest in scientific technology, I think that's really precious. And so anything that damages that would also be very unfortunate.

    11. HS

      If I gave you a regulatory magic wand, Andrew, what would you change that would have the most significant needle-moving impact?

  3. 9:31–15:07

    How LLMs Can Be Used as a Geopolitical Weapon

    1. HS

    2. AN

      America is fortunate to have a lot of very smart people wanting to come here, uh, to work on really challenging, really tough problems. Um, many of our Nobel laureates are immigrants. Uh, you know, Einstein, the canonical example, was an immigrant. I think continuing to cultivate America, um, as a place to attract great talent to work together, in a democratic nation that respects the rule of law, I think that would, uh, help us move ahead. Um, I think that, uh, securing the semiconductor supply chain would be very valuable as well. I have a lot of friends in Taiwan, I love Taiwan, uh, and also, um, America's dependency on TSMC, uh, is concerning in case anything happens. Um, and then, frankly, there's one very funny thing that has happened in society, which is, um, there was recently a Pew report showing, I think, how many Americans, you know, think AI would be good for them, are enthusiastic for it versus not enthusiastic. And even though a lot of AI technologies were invented in America, um, a lot of people don't trust or don't like AI.

    3. HS

      The joy of what I do, Andrew, is I get to speak to incredible people and then kind of cross-reference what they say. You know, David Cahn from Sequoia said, "Hey, a really useful barometer for effectiveness is, can AI replace the bottom 5% of capabilities of what a workforce does?" Joelle from Cohere said, "No, that's crap." (laughs) The real question is, can it 10x people's ability? Forget the bottom 5%. Can it 10x? How do you think about a barometer for success of the workforce with AI, with those in mind?

    4. AN

      In the case of software engineering, it is accelerating the writing of code. Uh, there are so many projects that used to take, you know, six engineers half a year to build, that today I or one of my engineers can build in a weekend. I hope that we never have to go back to coding without AI assistants again, because the acceleration, the productivity boost, is incredible. For example, one weekend, my daughter wanted to practice multiplication, and she wanted flashcards. So I thought, "I could either drive to the store and buy a bunch of flashcards for her, or I could just, you know, use AI to write code for me to generate and print out a bunch of flashcards." And so I did the latter. This is a very low economic value task, but with AI-assisted coding, I could get it done, uh, very quickly.

    5. HS

      Do you think vibe coding is an enduring market? Like, do you think everyone will want to code, and accessibility is important? Or do you think it bluntly just allows builders to build better and more efficiently?

    6. AN

      I think we need all of the above. Um, you know, I've had mixed feelings about the term vibe coding, but nitpicking terminology aside, I think everyone should learn to code. Uh, what I'm seeing is, for a lot of job roles that aren't just software engineering, people that can code can get more done than people that can't. For example, one of my, um, marketers wanted to run a user survey once, and, uh, she wanted, you know, something for people to give live feedback. She looked at the app store, couldn't find anything, so she said, "You know what? I'm gonna spend two days to code that." It took her two days, but my, uh, marketer then built a little mobile app where users could swipe, you know, left or right to give feedback on some marketing messages we wanted to user test. And because of that, we were able to run user experiments, get feedback, and so it helped her do her job better as a marketer. Whereas, in contrast, a marketer that couldn't code a little app to, you know, let people swipe around and give feedback would just not have been able to do this, would not have gotten the feedback, would not have been able to move forward. Today, my best recruiters not only screen resumes by hand, they are writing prompts to get AI to help them screen resumes. Um, it has been interesting. We've-

    7. HS

      Which is amazing, but going to your point on, like, people shouldn't be fearful and they are fearful: you see that that would lead to efficiency gains, which mean headcount reductions. If you can screen so much more with AI... I'm not into this kind of fearmongering, but if you can screen a lot more with AI, I don't need my three other analysts.

    8. AN

      I think there's a small subset of jobs that, you know, frankly are in trouble. Uh, but I think for the vast majority of knowledge workers... Actually, here's one thing about, uh, hype. Um, AI is amazing, but there's a lot of stuff it can't do. So this phantom AGI, where someday AI can do everything a human can do, I think we're very far away from that. I'm gonna say, oh, like, decades away, maybe even longer. And the trick is, if AI could do 30% of a recruiter's job, you know, who knows, maybe 50%, although that feels a little bit high, there's another, like, 50 to 70% of stuff that we still need the human to do. But it's also clear that if you use AI and someone else doesn't, that's actually a huge difference in what you can accomplish. So you're much better off using AI, but because AI can't do everything, there's still plenty of work that we need humans to do for a lot of job

  4. 15:07–19:15

    Should AI Talent Really Be Paid Billions?

    1. AN

      roles.

    2. HS

      Do you not think we have a white-collar talent pipeline problem, though? Whether you're a consultant or a legal associate, in the junior ranks a lot of what you can do is being replaced by AI, and they are actually cutting juniors. You're seeing this across the board. And so the fear is we're gonna have this talent hole where, in 10 years' time, there are no juniors to go up into seniors because we've replaced them.

    3. AN

      Yeah. I don't think it's as dire as that. I think there is a big problem, but I don't think it's exactly that problem. So let me tell you what I'm seeing in software engineering.

    4. HS

      Mm.

    5. AN

      The most productive engineers I know, they're, you know, not fresh college grads; they are people who have, you know, 10, 20 years of experience or whatever, and are really on top of AI, and know the AI tools and understand the AI code. Those people, experienced and on top of AI, move faster than anything the world has seen even one or two years ago. One tier down is actually the fresh college grads that are really on top of AI. So I've hired, you know, quite a few fresh college grads that, for whatever reason, through their social network or community, really learned the AI tools, and they move really fast, but they're not as good as the experienced people that know AI. One tier down from the fresh college grads are the people with 10 years of coding experience, but who had a comfortable job and, for whatever reason, are still coding like it's 2022, before ChatGPT. I just don't hire people like that anymore. There are people that, you know, had the comfortable job, kept coding the old way, and just did not learn AI. I think those people may get into trouble at some point. And then-

    6. HS

      That, uh-

    7. AN

      Oh, but there's one other tier, the tier that is in trouble, which is, um, the fresh college grads that don't know AI. One unfortunate thing is, um, university curricula are slow to change, um, and so I actually feel pretty bad that even today there are, you know, universities graduating CS undergrads that have not made a single call to a single, um, API on the internet, right? And imagine graduating a CS undergrad that has never heard of cloud computing, and says, "What is the cloud? Oh, don't I just run things?" That's weird. You just can't be a CS major and not know how to do things in the cloud. And I'm getting to the point where I feel like we've got to not train CS majors, um, without also making sure they know how to use AI to help them with coding, and also making sure they know the AI building blocks. But university curricula are slow... And that's the cohort of students that's entering the job market that's really struggling. But the fresh college grads that do know AI? We can't find enough of them. So many businesses love to hire those fresh college grads.

    8. HS

      I just want to touch on the 10x, 100x engineers that you said are just amazing, amazing. We're seeing pay packets, compensation bands, larger than they've ever been, you know, three and a half billion dollars in certain cases for a single engineer. Are these justified pay packages given the impact they are having on companies' enterprise value? Or are these bubble-like pay packages that we should be concerned by?

    9. AN

      I don't know. It is very hard to say. I know a number of people that have gotten really huge pay packets. I'm actually very happy for them. I think it's great, the funding going into, you know, paying AI people really well. Um... Is it a problem?

    10. HS

      Can you tell me honestly, do you think it's right, uh, a hundred million dollars for an engineer? I worry that you're just not going to be as productive. Like, if I give you a hundred million dollars overnight, God, you might buy a nice house and go on holiday and, you know, lose a bit of efficiency.

    11. AN

      I don't know. I have a lot of Silicon Valley friends that, you know, for whatever reason, have made a little bit of money. Many of them just keep working really, really hard, uh, equally before and after, you know, they wound up making a little bit of money. So I find that, uh, out of love of the tech culture, we do stuff 'cause it's fun, 'cause it lets us, you know, hopefully help other people as a way to change the world. I find that, uh, wealth makes people become lazy much less than one might guess.

  5. 19:15–29:30

    Why is the Application Layer the Most Exciting Layer?

    1. AN

    2. HS

      I'm intrigued to see how you think about this. You said all the different ways that it could impact many different verticals there, and you said we overhype, you know, uh, doomsday scenarios and everything in between. You know, Andrej Karpathy recently said, "AGI will just blend into 2% GDP growth," which I thought sounded a little bit unexciting, to be honest, Andrew. I wanted some seismic shift in productivity increase. Is 2% GDP growth what you expect, or do you expect something much more significant, 5, 6%, like Masa Son at SoftBank expects?

    3. AN

      I hope we can get much closer to five, six, or more percent GDP growth. Um, looking to the future, it turns out one of the most expensive things in today's world is intelligence. This is why it's so expensive, at least in the US, to hire a highly skilled doctor to advise us on a medical condition, or a highly skilled tutor to, you know, patiently teach our kids, 'cause that intelligence, the training of that wise doctor, wise teacher, wise advisor, is very expensive. But with AI, we finally have a path to make intelligence cheap. And so in the future, if everyone can be assisted by an army of smart, well-informed staff on all of these topics under the sun, which currently only the relatively wealthy in society can afford to hire people for, then individuals would be so much more empowered and able to get so much more done. And with that highly empowered individual, you know, lives would be so different, and the GDP growth will be massive.

    4. HS

      Totally get that and agree. Kind of speaking about that democratization of knowledge there and the benefits that come from it, you said a word before, which was "open," about the kind of open weights ecosystem we've seen. We've seen this reversion back to, like, a closed world in a lot of cases. How do you feel about the reversion back to a closed world, and how do you analyze the state of play today in that open versus closed?

    5. AN

      It's still very dynamic. Um, so for a lot of American companies, the leading frontier model is often kept closed, and then the one-tier-down model, not quite as good, is released as open. I think it's much better than nothing. I'm actually grateful for all the teams that are releasing open source, open weight models. And then the other dynamic is, uh, China especially has been really taking the lead, uh, or taking a lead, or getting up there, in terms of releasing tons of really good, um, open weight models. So I would say it's kind of not what I would have predicted, you know, a decade ago, that Chinese AI would end up being more open than American AI. Uh, and I think-

    6. HS

      Why do you think China wants an open AI world?

    7. AN

      It turns out that openness is great for a country's, um, development. When a team releases open source software, circulation of knowledge is much faster to the nearby community. What I see is, when a team in China releases an open-weight model, then yes, of course America can take advantage of it, but the Chinese economy benefits even more from it, because once something is open, it's easier for teams to [inaudible] and say, "Hey, buddy, how does this really work? I'm having trouble with this part of the model." That circulation of knowledge is really valuable for innovation. And when the US, um, has more closed models, and when, you know, teams are trying to pay these $100 million salaries to extract talent, uh, then that circulation of knowledge becomes very slow, and it slows down the rate of American and European innovation.

    8. HS

      With the commoditization of the model layer, though, and the kind of opening of it, it actually increases the premium on manufacturing and the ability to manufacture at scale, which China has a much greater ability to do than the US. Do you not think that actually drives a lot of their thinking around why they want to remove the strength of US models?

    9. AN

      In addition to, um, increased innovation and circulation of knowledge, which open-weight models help with, um, I think that open-weight models are a tremendous source of geopolitical influence.

    10. HS

      Hmm.

    11. AN

      So for example, if, um, someday, you know, some kid in some developing nation, um, asks a question about a politically sensitive topic, or asks, "Hey, where are the national borders in this case?" or, "What is the history of this event or that event?", the country of origin of the model they end up using will be delivering some answer. And, you know, whether the answer skews towards one nation's values or another nation's values is actually a tremendous source of influence and soft power. Like it or not, open-weight models are a key part of the AI supply chain, and, um, China releasing, you know, low-cost or free models into that key part of the supply chain means it's really starting to build up a lead, right? And build up a commanding user base, and that too will be a source of influence. This is why nations with a strong media and entertainment industry... It turns out South Korea has vastly disproportionate influence because of its leading entertainment industry, so people listen to, whatever, K-pop, and that buys the nation a lot of influence. Hollywood was a tremendous source of soft power for America; it paints a certain vision of the American dream, uh, talks about the values of freedom and democracy. And I think this is another frontier of communications and soft power.

    12. HS

      You have the most fascinating perspective, having obviously spent many years at Google, and then obviously Baidu as well, so having been on both sides of the table in certain respects. We have this kind of strange binary polarization of the AI race, China versus the US. Do you agree with that positioning of China versus the US in an AI race?

    13. AN

      I think there's a lot of room for, um, cooperation, and then also some places that will be competitive. So first, um, while people, sometimes even me, talk about the AI race, there's no single finish line. It's not one race. AI is a general-purpose technology, and you could be better or worse at coding, better or worse at answering questions, better or worse at helping with, you know, markets and finance and so on. So AI has many different capabilities, and there's no one finish line. And even within one capability, I think we're gonna keep on improving for a long time. So I feel like, because of, uh, PR goals, AGI has been hyped up as this finish line, but I don't think it's a finish line. We'll just have continually improving capabilities for, you know, decades to come. Having said that, nations with stronger AI capabilities are going to be more powerful, uh, their citizens will be more prosperous, their economies will grow faster. So, to the extent that different nations' incentives are not aligned, nations with more powerful AI capabilities will be able to do more. Just like, you know, if one country has a fantastic electricity grid and another country, you know, has power outages and so on, well, the one country can just use the electricity grid to do more manufacturing, more industrial work, just do a lot more that way.

    14. HS

      Do you not think we still underestimate China's ability, though? I mean, I think we definitely do in Europe, but in the US, respectfully, I see a lot of arrogance around your positioning. And then you go to China, and you've been to China and spent huge amounts of time there, and you realize the speed and the intensity with which they move. It's a different level to both Europe and the US.

    15. AN

      Yeah. Uh, to be fair, I think the US, Europe, and China all have problems as well. But having said that, I think the work ethic, the velocity... Um, when, uh, China's government makes a whole-nation commitment, an all-industry commitment, that's actually a very powerful force, with kind of state-level investments in semiconductors, in its education system, so, you know, like K-12 kids being trained to use AI, businesses also using AI, sharing knowledge, and then sometimes, you know, building this stuff and also selling it internationally with a state apparatus that may or may not be the

    16. HS

      Yeah.

    17. AN

      hope. That's actually a very... And then, uh, control over, um, rare earth elements. Uh, so I think that whole-of-economy, whole-of-country effort is actually a very powerful force that I wouldn't underestimate.

    18. HS

      Given that, like, we shouldn't underestimate it, do you think it's right that we have export controls on chips? You know, obviously NVIDIA has had a lot of export controls back and forth. Do you think that's right or not?

    19. AN

      I think the export controls on chips have largely backfired. Um, the way that the US first put restrictions on Huawei, uh, and then later put export controls on NVIDIA and AMD and other semiconductors, that really incentivized China. Before the export controls, semiconductor development in China, frankly, wasn't moving that fast. You know, it was a nice area, there was some investment, but when America did that, China really accelerated its semiconductor development. And so America incentivized China to do this, and it is paying off for China. Um, I think, uh, you know, a number of Chinese companies are building offerings where individual chips are less powerful, but you can use a much larger number of chips. China has built offerings competitive with certainly the last generation of NVIDIA, maybe increasingly the current generation. So I think, um, if I were to analyze purely, you know, US national self-interest, uh, I think that caused China to accelerate its semiconductor industry in a way that may not be helpful to the US long term.

  6. 29:30-38:43

    Will AI Deliver Masa Son's Predictions of 5% GDP Growth?

    1. AN

    2. HS

      I sit in Europe, I, you know, obviously live in London, and you told me you were born in London before this. My question to you is, it transparently feels like we are very far behind, and people say, "Oh, you've already lost." How do you feel about Europe's position in a very new world, and what can Europe do to regain some semblance of parity with the US and China?

    3. AN

      If I had one wish for the European regulators... I've spoken with quite a few European regulators, and I was hearing things like, "We want to be leaders in regulating AI, and that's a competitive advantage." And with all due respect, that's not a competitive advantage. So my one wish for Europe is, uh, stop regulating so much and just focus on investing and building. The thing is, it's still early in the days of AI, it's still early in the game, and Europe has plenty of smart people. Let people work hard, don't force them to not work hard, let people that want to work hard work hard, and stop over-regulating, and just go and invest and build stuff.

    4. HS

      Where do we most need to be investing where we are not investing enough?

    5. AN

      There's tons of capital going into data centers and infra. Uh, we can debate, is there a bubble or not? We definitely need a lot of investment. Are we, you know, getting to the point where people are using such esoteric financial instruments to find cash for it that there'll be a bubble? We could debate that, right? So we definitely have a lot of investment, but when does it become overinvestment? That, that's an interesting question. The other place that I think we need to invest in a lot is not just the infra, data center, foundation model layer, but the application layer. Because others have spent, you know, billions of dollars to train these AI models, we can now access them for, you know, hundreds of dollars, or thousands of dollars, or whatever... Tens of dollars. So it's wonderful to build tons of applications that just were not possible before. Now, from a VC investment perspective, what I've heard from multiple VCs is, um, bizarrely, the cost of trying something out is so low that there are fewer ideas than there is capital. It's not quite clear where to put massive amounts of capital to work at the application layer. In fact, if you look at a lot of the, um, application layer investments, sometimes it feels like, you know, firms are putting in $100 million so that they can pay OpenAI or Anthropic, so that OpenAI or Anthropic can pay NVIDIA, which is where all the money is-

    6. HS

      Mm-hmm.

    7. AN

      ... is ending up. Having said that, there are so many valuable bets to be placed at the application layer to just build stuff. Uh, but the dilemma is, you can do it in a very capital-efficient way. So if someone wants to say, "I want to put $10 billion to work," you know, yes, you can build $10 billion worth of data centers. We know how to spend that money. But how do you spend $10 billion building applications? The problem is almost, it only costs me a million dollars to try an idea out, so how do I spend $10 billion? It's kind of a problem and also not a problem, but I think we should-

    8. HS

      But is it, though? Because when you look at AI margins, well, margins for AI application layer companies, they're terrible. They make no money. They cost a lot of money to build, 'cause you have large engineering teams that build them. They cost more, not less.

    9. AN

      I think it still varies. Um, I'm seeing a lot of green shoots of software applications that were not that expensive to build, and where your own token usage is not, you know, the majority of your expense.

    10. HS

      If you look at a Replit or a Lovable, 80% of their pass-through is through Anthropic.

    11. AN

      Mm-hmm. Yeah. So the dynamic that I'm excited about is, um, it turns out, I think, that as token costs continue to come down, we'll see how the economics change, right? Right now, um, tokens, you know, are just expensive. But hopefully that will change, and the value created is really large. Actually, you know, I remember an earlier era, the early days of food delivery for example. I saw this in both the US and China: there was a lot of VC-subsidized eating, right? You know, it was great. We could eat, food delivered, it was basically VC-subsidized. Um, I think we're seeing that right now with a lot of VC-subsidized, you know, AI coding. Uh, the laws of physics or the laws of finance say that at some point, right, this can't go on forever. But where it settles down... I think there will be some very valuable businesses that are not perpetually VC-subsidized, but navigating this crazy VC-subsidy world to get to a good outcome takes a lot of skill. But having said that, I still want to say, there are a lot of smaller applications that are not yet doing these, you know, hundreds of millions of dollars. Maybe they're doing millions of dollars or tens of millions of dollars of revenue, that haven't been that expensive to build and to operate, and that I think we'll see continue to grow.

    12. HS

      Speaking of kind of the, the smaller niches, so to speak, there that continue to grow, how do you think about the question of... You mentioned earlier brilliantly about articulation and kind of horizontal, uh, and then the verticals beneath them, and Google and now OpenAI being the horizontal. How do you think about the, the question of a world of large monolithic models versus much smaller, much more efficient, much more specialized models? How do you think about that, and has your mindset changed around which will be more dominant?

    13. AN

      I think it'll be all of the above. We will have large models and mid-sized models and tiny small models. And the reason I'm confident about that is because, um, the nature of intelligence is diverse, right? Sometimes we do intellectually really easy tasks. Like, all right, yesterday my daughter misspelled the word butterfly, so I needed to tell her how to spell butterfly. You know, it's an easy intellectual task. And sometimes I'm sitting down thinking for hours about some, you know, complex technical problem, right? And that's really hard. So intelligence has a range of things we want it to do, and so the set of things we want AI to do has a huge range too. If you want AI to do basic grammar checking and spell checking, you don't need a trillion-parameter model. Use a tiny model, maybe run it locally, just do that. But if you want it to do complex reasoning, to write a piece of code, then yes, having a powerful model is gonna do better. And so, um, I'm actually very confident we'll end up with a huge range of models, small and large, to do the huge range of tasks. Just like we have humans do a range of tasks of varying difficulty, same with AI.
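The "range of models for a range of tasks" idea is essentially a routing decision. As a toy sketch (the difficulty heuristic, thresholds, and model names here are all illustrative, not anything discussed in the episode):

```python
# Hypothetical router: send easy tasks to a small local model,
# hard ones to a large hosted model. Heuristic and names are
# illustrative only.

def classify_task(prompt: str) -> str:
    """Crude difficulty heuristic based on keywords and length."""
    hard_markers = ("prove", "refactor", "debug", "design", "derive")
    if len(prompt.split()) > 50 or any(m in prompt.lower() for m in hard_markers):
        return "hard"
    return "easy"

def route(prompt: str) -> str:
    """Return the model tier to use for this prompt."""
    return "small-local-model" if classify_task(prompt) == "easy" else "large-hosted-model"

print(route("Fix the spelling in: 'buterfly'"))   # small-local-model
print(route("Design and debug a distributed cache with consistent hashing"))
```

A production router would more likely use a cheap classifier model or confidence scores rather than keyword matching, but the shape is the same: spend big-model tokens only where the task warrants it.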

    14. HS

      Does that mean that you disagree with Andrej Karpathy when he said that, uh, agents, useful agents importantly, useful agents are a decade away?

    15. AN

      I disagree with that. I think we're seeing useful agentic workflows right now. Um, at AI Fund, our team has built so many agentic workflows, uh, for so many tasks, where, you know, we just could not even do the task but for the agentic workflow.

    16. HS

      Can you give me an example? I'm fascinated.

    17. AN

      Over a year ago... this was actually, um, after one of the, uh, Biden-Trump debates. You know, for better or worse, we thought that tariff compliance may become an issue. Maybe, unfortunately, we turned out to be right. So last year, I think it was around August, we started exploring building technology to help with tariff compliance. And by the way, I don't know if you've seen these tariff compliance docs, but frankly, when I look at what it takes to follow this paperwork, it makes me go, "Oh my god, what is this?" So, you know, you say, import a bicycle. Then you look at the specs of the bicycle, how much does it cost, the size of the wheels... There are all these rules and regulations to, like, import a bicycle. It just makes me go, "Oh my god, are humans really doing this?" So we built agentic workflows to read the tariff compliance documents carefully, get the spec for what someone wants to import carefully, try to match, make suggestions. And so this is now one of our portfolio companies, called GAIA Dynamics, that, you know, because of the increased complexity in tariff compliance, has been doing pretty well. Right? And so I find that we just could not have done this without agentic workflows. We also have different startups, AI Fund portfolio companies, doing this elsewhere: a medical assistant, uh, operating in India; Callidus, uh, you know, helping process legal documents. Many of these workflows we just could not do before, so I find that there are useful AI agentic workflows already today. And there are large businesses too, not just our startups. When we look at the hyperscalers, and I chat to friends in some of the large businesses, there's a bunch of internal workflows that, you know, just could not be done without these AI agents.
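GAIA Dynamics' actual system isn't described in detail here. Purely as an illustration of the matching step Andrew outlines (read the rules, read the product spec, propose a classification), a minimal sketch with invented tariff rules and codes:

```python
# Illustrative matching step in a tariff-compliance workflow:
# match an extracted product spec against a small rule table and
# suggest the most specific applicable code. All rules are made up.

RULES = [
    {"code": "8712.00",    "product": "bicycle", "max_wheel_in": 26, "duty": 0.11},
    {"code": "8712.00.15", "product": "bicycle", "max_wheel_in": 20, "duty": 0.05},
]

def suggest_code(product: str, wheel_in: float) -> dict:
    """Return the most specific rule whose constraints the spec satisfies."""
    matches = [r for r in RULES
               if r["product"] == product and wheel_in <= r["max_wheel_in"]]
    # Prefer the rule with the tightest wheel-size constraint.
    return min(matches, key=lambda r: r["max_wheel_in"])

print(suggest_code("bicycle", 18)["code"])   # 8712.00.15 (the 20-inch rule)
```

In the workflow Andrew describes, an LLM would do the unstructured parts (reading the compliance documents and the importer's spec), while a deterministic step like this one keeps the final code suggestion auditable.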

  7. 38:43-40:36

    Do Margins Matter in a World of AI?

    1. AN

    2. HS

      When we think about the core of a business, it's margins, and most of these businesses don't have margins. Do you care about margins when investing today, or with absolute respect, and it sounds disrespectful, do you take the kind of utopian view that it will just correct itself with time and with efficiency gains?

    3. AN

      At some point, uh, the laws of physics, I think, or the laws of finance or something... margins do matter. But one of the tricky things about AI is, um, we know the technology is gonna change, so we don't build assuming the technology will be stagnant. We do build assuming the technology will evolve. So one obvious one: token prices have been rapidly falling, right? Depending on who you believe, falling 80% year on year, whatever. Frankly, when we build prototypes, we routinely just don't worry about token costs, because the first, most important thing is, let's build a product that users love. And then what we find is... This has actually happened to me a few times now. We'll build something and not worry about the cost, and then, you know, users start to use it, and then our API bill starts climbing, and you're looking at this every few weeks and you go, "Whoa, this is getting really expensive. This is costing me the salary of one engineer. It's costing me more than two engineers. It's costing me more than a whole bunch of engineers." But fortunately, when that has happened, almost every time so far, we've been able to use techniques to bend the cost curve back down even faster than the rate at which token prices are falling in the market. And so I find that, uh, absolutely margins are important, but when you have a view for where the technology is going, then it lets you build not for the margins today, but for what you can forecast them being in the future. And I think that's an important distinction. But we don't take a blind utopian, you know, AGI blah blah blah view either. I think that's also overly simplistic.
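Andrew doesn't name the cost-bending techniques his teams use. One common illustrative one is caching identical prompts so repeat traffic never hits the paid API. A minimal sketch, with a stand-in for the real API call:

```python
# Illustrative cost-cutting technique: memoize identical prompts so
# only cache misses hit the paid API. The completion function and
# counter are hypothetical stand-ins, not a real provider client.

from functools import lru_cache

API_CALLS = 0  # tracks how many calls actually cost money

@lru_cache(maxsize=4096)
def cached_completion(prompt: str) -> str:
    global API_CALLS
    API_CALLS += 1                      # only cache misses increment this
    return f"response to: {prompt}"     # stand-in for a real API call

for _ in range(100):
    cached_completion("summarize today's standup notes")

print(API_CALLS)   # 1: the other 99 identical calls were free cache hits
```

Other techniques in the same spirit include routing easy requests to cheaper models, trimming prompts, and batching, all of which cut spend without waiting for market token prices to fall.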

  8. 40:36-48:24

    Is Defensibility Dead in a World of AI?

    1. AN

    2. HS

      How do you think about defensibility in AI? A lot of people suggest that the time to copy is reduced significantly, um, that defensibility itself is questionable in AI. Do you agree with that questioning of defensibility today or not?

    3. AN

      Moats are changing. Um, so I find that moats tend to be a function of the industry rather than a function of the technology. So AI as a technology doesn't really offer an answer to the moats for most businesses. If you're building AI, you know, for drones or legal or for whatever, the moat is more a function of that industry. Um, but one thing that is changing with regard to moats is, previously, software used to be a moat, right? If you had, you know, invested 10 years to build a piece of software, it was really hard to replicate that. That one moat is much weaker than before. But other moats, like, are you trying to use AI to accelerate building a two-sided marketplace, which can be very defensible? Or, you know, are you building more for consumer than enterprise? Are there brand and reputational effects, right? They can help you build defensibility there. So I find that, um, the software moat has changed, but the other moats tend to be an analysis based on the industry.

    4. HS

      Okay. So software moats have changed. Fantastic. And so we now have margins that matter, but we have a little bit more elasticity there. The software moat has changed. In terms of, like, the ability to stay relevant for large enterprises, what are the biggest barriers that are preventing large enterprises from implementing AI aggressively and preventing themselves from becoming extinct?

    5. AN

      I think the biggest barrier in most large enterprises is, is actually, um, people and change management. Um...

    6. HS

      Hmm. Not data?

    7. AN

      It's not data. I think it's definitely not data. Not that data's not important, but that's definitely not the bottleneck. So, you know, the interesting thing about AI hype is, um, there's almost always a gem of truth in the hype. It's just that it's been hyped up, you know, 10 times more than the reality. And maybe actually let me give one example, then I'll come back to data. Um, there's been this buzz about, "Oh, with AI, we'll have unicorns with one employee." Right? It's like a thing. And it's fine. If you want to build a unicorn startup with one employee, go for it. It's a good thing to do... but frankly, if you have a billion-dollar valuation, you could afford to pay two employees or even 10. So why do you need to hype it all the way up to say, "Let's do this with just one employee," right? So it is true that team sizes are shrinking. We can get more done with smaller teams, so that is true. But the hype is then saying, "Let's build a unicorn with one employee." And I find a lot of AI hype is so hard to disentangle because there's a gem of truth in it. It's just been hyped up a lot more. So on data: look, data is important, um, but it turns out that data is very verticalized, and you don't need as much of it to get started as you think. So for example, you know, LandingAI does a lot of work with financial institutions, healthcare. A lot of financial institutions have plenty of transaction data. You know, take the PDF file, turn it into LLM-ready Markdown text, go process that, find value in that. Like, for example, we could take SEC filings, large complex financial tables, very accurately turn those financial tables into Excel spreadsheets, then go get your analyst or your AI to analyze that and draw conclusions. So you could do that.
So often, with a bit of scrappiness, looking at internal data, looking at, you know, public data, you can often get some stuff going. And it turns out that, um, a lot of internet data is kind of general-purpose data. Most of the world's data is actually private, right? And there are a lot of businesses with actually very valuable transaction data, you know, sales data, product data, manufacturing data, logistics data. And a scrappy team that knows how to use all that data can actually start to build something and get value out of it. Not to say more data wouldn't be even better, but you're not stuck, unable even to take the first few steps, for lack of data.
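As a toy illustration of the last step Andrew describes (this is not LandingAI's product, and the table is invented), once a filing's table has been extracted into Markdown, turning it into rows an analyst or an LLM can query is straightforward:

```python
# Toy parser: convert an already-extracted Markdown financial table
# into a list of dicts for downstream analysis. Data is invented.

def parse_markdown_table(text: str) -> list[dict]:
    lines = [ln.strip() for ln in text.strip().splitlines()]
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[2:]:              # skip the |---|---| separator row
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

table = """
| Quarter | Revenue | Net income |
|---------|---------|------------|
| Q1      | 120     | 14         |
| Q2      | 135     | 19         |
"""
print(parse_markdown_table(table)[1]["Revenue"])   # 135
```

The hard part in practice is the extraction from PDF into clean Markdown, not this parsing step, which is exactly where the "very accurately turn those financial tables" claim earns its keep.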

    8. HS

      Andrew, I speak to many CEOs of these size businesses and they say, "Harry, are you kidding me? You think we can get security and permissioning for our data and our enterprise? No. We don't have Slack, we don't have Notion. Everything is custom-built." And you're seeing the likes of JPMorgan and Goldman Sachs absolutely refuse any ChatGPT use, building internal systems. Is that the world that we inhabit for enterprise AI adoption?

    9. AN

      I think we'll get there. Um, so I find that a lot of enterprises are adopting LLMs, you know, ChatGPT and many others. I think today there are still businesses that are still on prem rather than on the cloud, but we're making progress. Actually, one thing about AI: this hype that we'll have AGI in two years or whatever, I think that's just ridiculous. For most reasonable definitions of AGI, that's just not gonna happen. And look how long we are now into the cloud era, but we still have an awful lot of on-prem workloads. Um, I think that AI adoption, it will be wonderful, there will be tremendous GDP growth, but it's also gonna take much longer than the hype says it will. I actually think that a decade from now, we will still be working to identify valuable applications in enterprises and building them. Having said that, we will make a lot of progress over the next one or two years, but we're not gonna be done, you know, even 10 years from now.

    10. HS

      What else does everyone think they know about AI and its adoption and implementation that they get wrong?

    11. AN

      Even earlier this year, we saw some senior business leaders advise people to not learn to code on the grounds that AI will automate it. We'll look back on that as some of the worst career advice ever given. As coding becomes easier with AI assisting us, a lot more people should learn to code, not fewer. And I'm already seeing... I mentioned the Markley example just now with building an app for feedback swiping, but I think, um, for a lot of job functions, people that know how to tell a computer exactly what they want it to do, so the computer can do it for them, they'll just be more powerful. And for the foreseeable future, the language for precisely telling computers what you want them to do is coding. It doesn't mean you should write code by hand. Writing code by hand is becoming obsolete, right? You know, really, don't do that. But get AI to write code for you. And people who can do that will be more effective and more powerful and have more fun.

    12. HS

      If we're that early, where in a decade's time we're still gonna be looking at and identifying areas where it can improve meaningfully, do we have enough money to fund both the energy and the compute requirements for that 10-year period? Sam Altman has said he needs a trillion dollars. He needs the energy of Japan. If we're 10 years out and still don't have that much improvement, do we have the money to fund it?

    13. AN

      I think we'll see plenty of improvement over the next two years, uh, but I think we still won't be done getting even more improvements 10 years from now. One place where it's super promising is AI-assisted coding; we're seeing real productivity gains, real returns. It's really changing the way software is written. It's really been fantastic. Uh, frankly, so many of my friends say coding is so much more fun with AI to help us out than without. So we are seeing returns, just to be clear, but we still won't be done growing this, you know, 10 years from now.

  9. 48:24-55:05

    Will Human Labour Budgets Shift to AI Spend?

    1. AN

    2. HS

      But if you look at the TAM, the secret to success in AI investing is where we see a transition from human labor budgets to software budgets. And if we have that, then, Holy Grail, me and you will make a lot of money with our funds, and fantastic news, because the TAMs have massively increased, or the spend's massively increased. If we're like, "Hey, we're not gonna actually lose any people," then actually we don't see that transition from human labor budgets to software. Do you think we won't see that transition?

    3. AN

      So, to me, the question is, um, is AI mostly for cost savings or is it for growth? And I know that, you know, it's difficult to change workflows. A lot of companies tend to think cost savings. But maybe here's the problem. There's actually one pattern I see. Let's say I have a work task that has, you know, like five steps, right? And let's say each step takes 20% of my effort. Like, maybe I'm underwriting approvals. You know, do I approve this loan or not, right? So let's say, for simplicity, there are five steps, each takes 20% of my effort. If you can automate one of those steps, it's a 20% cost savings. Which is really nice. You know, it could be great if you're a low-margin business. But it doesn't feel like a game changer. So what I find is that the more valuable uses of AI often require rethinking that workflow. And the pattern I see is, instead of taking the 20% cost savings, which you could do, that's fine, nothing wrong with that... The two patterns to then getting growth are either do more or do it faster. So in the case of underwriting, making loans, instead of saving 20% of my human labor, I can rework the workflow to turn around my decision faster. So instead of someone needing to wait, you know, two weeks before a loan officer looks at it, we can just give you an initial answer in 10 minutes. That changes the product and lets you drive growth. So that's the faster pattern. And then there's also the more pattern. So another example: there are a lot of businesses that could do high-touch, you know, say, customer service only for expensive high-end clients, right? But now you can serve a much larger group of people... Or let's say financial advice.
Instead of giving high-touch financial advice to a small group of people, if we can now deliver that quality of service to a lot more people, then that again changes the product and lets you drive growth. So instead of cost savings, AI lets you do something way faster, or lets you take a task and do it 1,000 times more. Instead of serving a small number of people, let's serve a lot more people, 'cause it's now economic to do so. These are the two patterns I've seen to drive value increases, and I think that will be important for unlocking a lot of this GDP growth.
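The arithmetic behind the five-step loan example can be made concrete (the five equal steps, two-week wait, and 10-minute turnaround are Andrew's hypothetical numbers):

```python
# The five-step loan-workflow example in numbers: automating one of
# five equal steps is a 20% cost saving, but reworking the workflow
# so an initial answer comes back in minutes changes the product.

steps = 5
effort_per_step = 1 / steps            # each step is 20% of the effort
cost_saving = effort_per_step          # automate one step -> 0.2

old_turnaround_days = 14               # wait for a loan officer
new_turnaround_days = 10 / (60 * 24)   # 10-minute initial answer

print(f"cost saving: {cost_saving:.0%}")
print(f"turnaround speed-up: {old_turnaround_days / new_turnaround_days:.0f}x")
```

The point of the comparison: the cost-saving number is bounded at 20% per automated step, while the turnaround improvement is three orders of magnitude, which is why the "faster" and "more" patterns read as growth rather than savings.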

    4. HS

      You said economic to do so. Do you think it's crucial that we see vertical ownership, in terms of, say, NVIDIA owning models as well as being the chip player? And, you know, we're seeing Facebook build out data centers more than anyone. We're seeing everyone build out data centers. Is it important that we own every layer of the stack, or actually will we see individual participants own horizontal layers of the stack?

    5. AN

      I think this will evolve over time. Um, I'm gonna make an analogy. In the early days of, say, the computing industry, it was the vertical players that won. Because, you know, if you want to connect the keyboard to your computer motherboard, which has a CPU, is it okay if your keyboard has, you know, plus-minus five volts and your CPU has some other voltage? Is that okay or not? We didn't know where the API boundaries were. Or if your CPU has memory laid out a certain way, it and your, you know, math accelerator needed to inter-operate with each other. So before we wound up having a clear conception of where to draw lines and what the API boundaries are, the integrated players, IBM back in the day, could solve all the problems and, you know, build valuable working products. But as the industry matured, we started to have standards. Like, for example, now we have a USB standard. Before, there were other standards. So now you make a computer, someone else makes a keyboard, we plug them together and it all works. So when an industry is immature, where to draw those boundaries, so that different participants do their part and have it still inter-operate, that's less clear. But then as the industry matures, there are, you know, more standards for, say, if I want to publish a compressed LLM on the internet, what's the file format for that? You know, we're starting to see more standards. Then that makes it easier for individual players to do something and still have it fit into the broader ecosystem.

    6. HS

      So do you think then, like, Zuck and Sam are right to be spending as much as they are on data centers? Or should they be patient and wait for the maturation of the industry, where they can then be horizontal?

    7. AN

      I think OpenAI's investments have, right, paid off to date. Uh, it is possible to over-invest at some point, but I don't know if we're at that point. And then I think also the financial instruments being, you know, used by many players to shift risks around have been really interesting. I find that overly complex use of financial instruments to shift risks sometimes, you know, increases the risk of there being a bubble at some point. So that's something to watch out for.

    8. HS

      Do you worry about the circular deals?

    9. AN

      It's something to keep an eye on. I'm not alarmed by them, but, you know, I think things could be more frothy or less frothy. Things could be more of a bubble or less of a bubble, and these are signs of things feeling a little bit more bubble-ish.

    10. HS

      When does a sign turn into a big concern for you with these?

    11. AN

      I think, uh, you mentioned the Sequoia article on the $600 billion problem with AI. Um, I am concerned about that. But it's interesting. My concern for different layers of the stack is different. So what I'm seeing is, for the application layer, there is very clear ROI. I think it's fantastic. Someone else trained these models, so you can build applications for, you know, $100,000 or $1 million and start generating ROI. And then I think it is calibrating to the right level of infrastructure investment that is tricky. But having said that, it is also at the same time very clear that we do need more electricity, more data centers, and more semiconductors. That too is very clear. So we should be investing a lot. Uh, and I'm glad we are. But what exactly is the right amount to invest? I think that's the tricky question. It should be a lot, though.

  10. 55:05-56:28

    Are We in an AI Bubble?

    1. HS

      Do you get annoyed by the bubble discussion?

    2. AN

      I don't get annoyed by the bubble discussion. Um, I do get annoyed by the hype. I don't know, when regulators are calling me up and saying, "Hey, we heard AI could lead to human extinction"... thankfully much less of that now than a couple years ago. Then I'm kinda like... You know, instead the conversation should be, "How can we upskill the workforce? Where can we invest?" Not like, "How do we slow this thing down?" I think the hype has really distorted public perception of AI. Oh, and one downside to the hype too is, um, without public support of AI, things slow down. So for example, one of my friends works a lot with high school students, and he told me he was talking to a high school student about maybe pursuing a career in AI. And she said, "You know what? I heard AI could have something to do with human extinction. I don't want to have anything to do with that." And so this hype turned a high school girl away from working on AI at a time where it'd be so promising for her to leap into AI. And I think this really causes people to make weird decisions, both at the individual student level, as well as at the community level, where, um, when a community, you know, shuts down building out a data center that could be good for the community and good for the world, I think that's also unfortunate.

  11. 56:28-1:05:55

    Quick-Fire Round

    1. AN

    2. HS

      I'd love to move to a quick fire round where I say a short statement but, but kind of staying on that thread, 'cause the first question is what's your biggest advice to educational institutions to make sure they equip students for a generation of AI?

    3. AN

      Embrace it, update curricula, teach them as much AI as possible. Uh, students are gonna live in a world where they will be using AI and having AI help them. Um, gotta teach students to do that. I think it'll be different for different fields, but one thing that is clear is: get all your students to learn to code.

    4. HS

      What's one thing you've changed your mind about AI in the last 12 to 18 months?

    5. AN

      I think my favorite tools keep changing. Uh, if you ask me, you know, every three months over the last year what my favorite coding tool is, my answer would have kept on changing.

    6. HS

      Do you think Anthropic will beat OpenAI in the coding wars?

    7. AN

      Really hard to say. So OpenAI has a very strong consumer brand, and that is very defensible. In contrast, uh, developers are more likely to switch coding tools on a dime. So I love Claude Code, I think it's fantastic, but I find myself using OpenAI, uh, Codex much more over the last month. Um, I think OpenAI Codex has actually gained real momentum. And then I'm also keeping an eye on Gemini CLI, which I think is also, uh, getting better, maybe at a faster rate than people have given them credit for. So in the coding devtools and API tools market, the moat there is weaker than having a strong consumer brand. So I think that's something that, uh, you know, companies will have to sort out.

    8. HS

      Tell me, what was your biggest takeaway from Baidu? It's such a different company to anything that we're used to in the West. What was your biggest takeaway?

    9. AN

      I really appreciated the speed and intensity of Baidu, and also, um, of the China ecosystem. I think it's really unfortunate that in some parts of the United States, advising someone to work hard, you know, is viewed as politically incorrect or something. Um, uh-

    10. HS

      In, in Europe I'm chastised for it.

    11. AN

      Ah, okay. All right, great. Hopefully the European viewers won't hate me or hate us both for that. I think, frankly, I, I wish, you know, people could work four hours a week and, and be wildly successful. But the practical reality is when people work hard, they get more done. Now, I want to acknowledge that not everyone in every point in their life is in the position to work hard. So, you know, the week after my kids were born, I didn't work that hard. I took time off, spent time with the kids, right? For, uh, more than a week. But... And I think we need to respect people in all walks of life, including people that, for whatever reason, are not in a position to work hard at that moment. But if someone wants to work hard, go, you know, quote Steve Jobs, "Make a dent in the universe," let's empower them and celebrate that. If someone, for whatever situation, can't work hard, let's also, right, respect that and maybe celebrate that. But I think, uh, uh, this is a moment in time where there's so much stuff we could build. People that work hard to learn a lot and build things will accomplish a lot.

    12. HS

      Did you do 996?

    13. AN

      996 wasn't an explicit term that I used. Uh, I find that... Right, these days, you know, I really love what I do, it really doesn't feel like work. But, you know, on a lot of my weekends I'm sitting in a coffee shop coding away 'cause it's the most fun thing you could do on a Saturday. Uh, so I actually don't bother to keep track of my hours. It's probably a lot.

    14. HS

      What's the hardest element of the transition from operator to investor?

    15. AN

      Oh, one thing about AI Fund, yes, we call ourselves a fund, but, uh, frankly, the way we run the fund day to day, we act much more like operators than investors. Uh, AI Fund's a venture studio, and I believe our skill set is actually in building, not just in, you know, capital asset allocation or whatever. So we, um, work really hard to screen ideas. We talk to customers. You know, I'm sometimes on customer calls myself. Uh, and then we bring in founders to work alongside us. We're reviewing their product, giving feedback on the product, arguing about pricing. So my day-to-day life is much more operator. And yes, eventually we have to do the financial due diligence when we write a check and do follow-on. We do all that, but a lot more...

    16. HS

      I'm re- I'm really sorry, Andrew. Then are you a fund or are you an incubator?

    17. AN

      So, um, we call ourselves a venture studio or a venture builder. Incubators usually bring in founders that already have an idea. We go earlier than that. We often work with our investors and partners to come up with an idea, and only after we have an idea, then we go and try to find the best founder to co-build, to co-found the company with us. So we don't call ourselves an incubator.

    18. HS

      How much ownership do you have then when you make those original investments and seed the company?

    19. AN

      Uh, it depends. We end up with some common stock, uh, for the sweat equity of building the company, and then usually, uh, our first check in is, uh, like a million dollars at, uh, a $4 million cap. Um, so, kind of, 20% ownership on a SAFE.

    20. HS

      And so we're basically getting 20 to 25% ownership on entry with a couple of common?

    21. AN

      Yeah, plus some common for the sweat equity.

    22. HS

      Totally get you. What do you think is the biggest-

    23. AN

      But, but to me, the reason we do this is because, um, I find that while there are VCs that, you know, do the competitive deal flow thing, and they make a lot of money that way, I think my team's biggest contribution is not, you know, fighting over hot deals. It is finding ideas and creating companies that would not exist but for the fact that we and a founder got together to co-found them. So I think we just create more value in the world by creating new companies rather than only discovering hot companies to try to, you know, put money into.

    24. HS

      What concerns you most today, Andrew? I love your optimism and your o- open-mindedness. What concerns you on the flip side?

    25. AN

      The difficulty of bringing everyone along with us. In previous waves of economic disruption, like when our nations went from mainly agriculture to non-agriculture, someone that was a farmer could keep farming until they retired, but their kids had to learn a different trade, maybe move to a city or whatever. The change is so fast this time round that we need people that are alive today to learn new skills, as opposed to needing their kids to learn new skills. And that's actually very challenging. And historically, I don't think we've ever been good at that.

    26. HS

      You do a lot of interviews, Andrew. You speak to many journalists. I'm not a journalist. I've never actually had a job. Do you find the quality of interviewers that ask you questions good?

    27. AN

      I think media has an important role to play to curate and disseminate knowledge. I think the quality of questions that reporters are asking has been very clearly trending up over time. Uh, but there is still the hype element of it that keeps on distorting the information ecosystem. Unfortunately, there are financial incentives and, you know, regulatory capture and legislative benefit types of incentives for certain types of hype. And there's essentially one pattern that I've seen. I won't name any companies. But I find that, um, there are companies with something to lose whose statements over time have become more moderated. So, you know, I find that as an established company, you just say more sensible things. But there are some companies that I think are at greater existential risk, uh, you know, and I find some of those companies, that I don't want to name, to be the worst sources of hype because they've got less to lose: "Let's just say a bunch of random stuff."

    28. HS

      In many respects, it's a lashing out in desperation, I think. Uh, when you look at a Demis, say, obviously a brilliant leader, or you look at a Sam even, or a Dario, you know, all of them, I think, have moderated their positions significantly with the maturing of their companies.

    29. AN

      Yeah, yeah. No, no comment on individuals. But I think that when you have something to lose, you say more sensible things. But when your company faces greater existential risk, you know... Sometimes people say weird things for fundraising.

    30. HS

      I'd like to finish on a tone of optimism. What single thing are you most excited for when you look forward to the next decade? So for me, for example, you know, my mother's got MS. I think we'll have incredible medical discoveries in some diseases that we haven't really made much advancements in for years. That excites me. What excites you?

Episode duration: 1:06:05

