
Dylan Patel on GPT-5’s Router Moment, GPUs vs TPUs, Monetization

The AI hardware race is heating up, and NVIDIA is still far ahead. What will it take to close the gap? In this episode, Dylan Patel (Founder & CEO, SemiAnalysis) joins Erin Price-Wright (General Partner, a16z), Guido Appenzeller (Partner, a16z), and host Erik Torenberg to break down the state of AI chips, data centers, and infrastructure strategy.

We discuss:

- Why simply copying NVIDIA won’t work, and what it takes to beat them
- How custom silicon from Google, Amazon, and Meta could reshape the market
- The economics of AI model launches and the shift toward cost efficiency
- Infrastructure bottlenecks: power, cooling, and the global supply chain
- The rise of AI silicon startups and the challenges they face
- Export controls, China’s AI ambitions, and geopolitics in the chip race
- Big tech’s next moves: advice for leaders like Jensen Huang, Sundar Pichai, Mark Zuckerberg, and Elon Musk

Timecodes:

0:00 Introduction & AI Hardware Landscape
1:11 Reactions to GPT-5: Is It Disappointing?
4:19 The Business of AI Models: Cost, Monetization, and the Router
7:34 The Economics of AI: Cost vs. Performance
10:10 Usage-Based Pricing & Product Stickiness
12:30 Advice for Sam Altman: Monetizing OpenAI
14:18 NVIDIA’s Growth & The Future of AI Compute
21:27 Custom Silicon: Threats to NVIDIA
26:09 The Silicon Startup Boom
45:28 Data Center Power & Cooling: The Next Bottleneck
50:46 Intel’s Role in the AI Era
57:56 Advice for Tech Giants: NVIDIA, Google, Meta, Apple, Microsoft
1:08:17 AI Policy & Export Controls

Resources:

Find Dylan on X: https://x.com/dylan522p
Find Erin on X: https://x.com/espricewright
Find Guido on X: https://x.com/appenz
Learn more about SemiAnalysis: https://semianalysis.com/dylan-patel/

Stay Updated:

Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see a16z.com/disclosures.

Dylan Patel (guest) · Erik Torenberg (host) · Guido Appenzeller (host)
Aug 18, 2025 · 1h 6m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–1:11

    Introduction & AI Hardware Landscape

    1. DP

      NVIDIA's gonna have better networking than you. They're gonna have better s-- uh, HBM. They're gonna have better process node. They're gonna come to market faster. They're gonna be able to ramp faster. They're gonna have better negotiations with whether it's TSMC or SK Hynix in the memory and silicon side, or all the rack people, or, like, copper cables. Everything they're gonna have better cost efficiency. So you can't just, like, do the same thing as NVIDIA. You have to really leap forward in some other way. You have to be, like, five X better.

    2. ET

      [on-hold music] Dylan, welcome to the podcast.

    3. DP

      Thank you for having me.

    4. ET

      We've been trying to get you for a while. You're a busy man. Uh, but, uh, but it worked out. Um, Guido, why-why don't you introduce why we're so excited to have, have Dylan on the podcast and what we're excited to discuss.

    5. GA

      Look, I mean, I-I-I think, Dylan, you've done an exceptional job in, in covering what's happening in the AI hardware space, AI semi space, and, and now more and more the data center space as well. And, you know, uh, just looking at it, currently the most valuable company on the planet is an AI semi company, right? Uh, the, the, I think biggest IPO so far in AI was an AI cloud company. This is currently where it's happening, right? In any gold rush, in the early days it's the picks and shovels that make money, and I think this is the stage that we're in. So super excited to have you here today.

    6. DP

      Awesome. Thank you. Uh, w- happy to talk about my favorite topics.

  2. 1:11–4:19

    Reactions to GPT-5: Is It Disappointing?

    1. DP

      [chuckles]

    2. ET

      Amazing. Well, maybe let's start with GPT-5. You know, we just had the, um, some of the researchers from who, who, uh, Christina and Isabelle on here last week. What, um... y-you, you said it was, uh, disappointing. Uh, what why don't you sh-share your reactions or what capabilities you were, you were hoping to see or overall plans.

    3. DP

      I think, I think it depends on what tier of user you are, right? Um, and so, like, right, if you're just using GPT-5 and before you were, you know, two-- twenty dollars or two hundred dollars a month subscriber, um, you no longer have access to 4.5, which i-in my opinion, is still a better pre-trained model for certain things. Um, or you no longer have access to, um, o3, which would think for thirty seconds on average, maybe. Right? Whereas GPT-5, even when you're using thinking, uh, only thinks for, like, five to ten seconds on average, right? Um, which is an interesting sort of phenomenon, right? Um, but basically, like, GPT-5 is not spending more compute per se. The model did get a little bit better, um, on a vanilla basis, right? 4o to 5 is, is actually quite a bit better. But, um, when you think about, you know, what, what is this curve of intelligence, right? It's like the more compute you spend, the better the model gets. Um, and that's whether it's a bigger model, uh, which GPT-5 isn't, right? You can, you can see it's not a bigger model. Um, it's roughly the same size, um, and, you know, or you think more, right? Uh, but again, like, this is something that OpenAI is mo-- first thinking models, you know, the first few, uh, generations of o1, o3 would think for a long time, um, and waste a lot of tokens, uh, if you will. Uh, and when you look at, like, for example, um, you know, Anthropic's thinking models, they actually think even when you put them in thinking mode, they think a lot less, right? To get to the same results or better results, right? As OpenAI was. And so OpenAI, I think, like, optimized a lot of, like, well, if I ask... Like I think, I think the silliest one w- I had asked was, like, I asked o3 once, like, "Is pork red meat or white meat?" And it thought for like forty-eight seconds. I was like, "What are you doing?" Like, "This should just, like, this should l- this should just, like, tell me the answer." [chuckles] Um, and, and, and so, like, you know, the, the, the nice thing is that GPT-5 will think a lot less even if you select thinking manually. But more importantly, they have the sort of auto functionality, the router, which lets them decide whether or not, you know, "Hey, do I route to the regular model? Do I route to maybe mini," if you're at your rate limits? "Or do I route to thinking?" Right? Um, and, "How much do I think?" Uh, but in general, the thinking model will think less. So, so there's less compute going into, uh, a power user's average query than before.

    4. GA

      But what's even more interesting: OpenAI can now control how much compute it wants to allocate to you, right? If, if, if we're in a high load situation, maybe tune the router a little bit so it thinks less, right? Maybe, maybe we skip. I have no idea what they're doing, um, you know, behind the curtain. But there's this meme out there at the moment that basically all they did, which is, which is a meme, right? It's not true. But, but all they did is, you know, take o3 plus a couple of smaller models, put a router in front, and, you know, offer that at a, a lower blended price, essentially, right? I think there's a little bit of that. Like, cost suddenly matters and, and they figured out a way how they can steer that.
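The routing behavior speculated about in this exchange (cheap queries to a small model, high-value ones to a thinking model, graceful degradation under load) can be sketched as a simple policy. Everything below, the tier names, the thresholds, and the `est_value` signal, is an illustrative assumption, not OpenAI's actual router.

```python
# Hypothetical sketch of a cost-aware model router, as discussed in the
# conversation. Tier names and thresholds are made up for illustration.

def route(query: str, load: float, est_value: float) -> str:
    """Pick a model tier for a query.

    load      -- current fleet utilization, 0.0-1.0
    est_value -- estimated monetizable value of the query, 0.0-1.0
    """
    if est_value > 0.8:
        # High-value query (e.g. purchase or legal intent): spend compute freely.
        return "thinking-high-effort"
    if load > 0.9:
        # Under heavy load, gracefully degrade to the mini model.
        return "mini"
    if len(query) > 500 or "step by step" in query.lower():
        # Long or explicitly reasoning-style queries get some thinking budget.
        return "thinking-low-effort"
    # Everything else: the regular base model is good enough.
    return "base"

print(route("Why is the sky blue?", load=0.5, est_value=0.1))
print(route("What's the best DUI lawyer near me?", load=0.5, est_value=0.95))
```

The interesting business point is the `est_value` input: the same mechanism that sheds load under peak demand can also concentrate compute on queries that can be monetized.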

  3. 4:19–7:34

    The Business of AI Models: Cost, Monetization, and the Router

    1. DP

      I think, yeah. The, I mean, and they talked about how they've been able to dramatically increase their infrastructure capacity because I myself was just regularly using o3 or 4.5, right? And now I'm forced to use Auto, which sometimes gives me, you know, the o3 equivalent thinking model, but sometimes gives me, you know, just the regular base model-

    2. GA

      That's what they are

    3. DP

      ... uh, which, which sucks. But, like, um, I think for the free user, it's actually quite interesting, right? Uh, the free user was not getting thinking models pretty much ever, or not using them. Um, or in many cases, they just open the website and ask their query, and now sometimes their query gets routed there. So sometimes they get a way better model. But now sometimes, you know, the, the OpenAI can gracefully degra-degrade them-

    4. ET

      Exactly

    5. DP

      ... uh, if they need to, right? And it's like, I think, I think the router points to the future of OpenAI from a business, right? Like, you can look at sort of the model companies, right? Anthropic is fully focused on B2B, right? Um, API code, et cetera, right? Where- or, or Claude Code, whatever it is, right? OpenAI, yes, they have that business, Codex and, and API business, but really the majority of their revenue is consumer, right? Um, and it's consumer subscriptions. But they have no way to upsell, like, you know, how to make money off of all the free users, right? Uh, in any other application, consumer app, the free user still pays via ads. Um, but this is not compatible with AI, right? Like, it's a helpful assistant. You can't just make the r- users' result worse by injecting ads. Banner ads don't really work, uh, in AI either. So it's like, how do you now monetize them? And I think, I think they're getting-- with the router, they're getting really close to figuring out how to monetize that user, right? Um, with the new CEO of Applications, if you saw her product that she launched at Shopify, uh, I think it was Shopify, was an agent for shopping. Right? And now this, like, immediately clicks. Like, oh, if the user asks a low-value query like, "Hey, why is the sky blue?" Just route them to Mini, right? Like, you know, the model can answer perfectly fine, and that is a chunk of queries, right? But if they ask, like, "What's the best DUI lawyer near me?" Right? All of a sudden this is like, you know, you're in jail, you have one shot. You know, you're like, "Screw it. Let me ask ChatGPT what the best U- DUI lawyer is," and now all of a sudden... You know, the model's not capable of it today, but soon enough it'll be able to, you know, contact all the lawyers in the area and, um, you know, figure out what their results are and maybe search their, like, court filings and whatever, right? And, and, and book the best lawyer for you or, or an airplane ticket.

    6. GA

      They negotiate a cut, [laughs] you know, as part of that.

    7. DP

      Yeah, of c- of course they're gonna take a cut, right?

    8. GA

      Yeah.

    9. DP

      But this is a, this is a much better way of, like, monetizing the free user is, like... You know, it, it's like Etsy, 10% of their traffic now comes from chat, and OpenAI makes nothing off of that. But they very... They really, really will soon, right? Um, and partially that's because Amazon blocks chat but, like, you know, there's a way to make money from shopping decisions, whether it's booking flights or looking for items. Um, and those you now say, "Free user, I don't care. I'm gonna send you to my best model. I'm gonna send you to agents. I'm gonna spend ungodly amounts of compute on you because I can make money off of this. But if it's a query that's like, 'Help me with my homework,' I'll send you, like, a decent model, right? I don't, I don't need to spend money on you." Um, and so this is how I think, like, OpenAI can finally make money off of the free user, and I think that's the biggest, like, thing about the router, right? Like-

  4. 7:34–10:10

    The Economics of AI: Cost vs. Performance

    1. GA

      Th- th- this is super interesting. I think this is the first time that we've seen a, a, a launch of a new model where to some degree cost is the headline item, right? I mean, so, so far it was always like, who has the smartest model? Who has the highest MMLU score? Now we have suddenly people who use models for coding for eight hours a day and, you know, are surprised that, you know, if you take a, a large context window and the best model, that creates, you know, thousands of dollars of cost a month. So cost matters. And, and so to some degree, where you are on the Pareto frontier between cost and performance is the new benchmark for model competitiveness, and no longer performance alone. Is, is, is that what we're seeing here, or?
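Guido's framing can be made concrete: a model sits on the cost/performance Pareto frontier if no other model is both cheaper and at least as good. The model names, prices, and scores below are invented for illustration; they are not real benchmark numbers.

```python
# Minimal sketch of a cost-vs-performance Pareto frontier. A model is
# "dominated" if some other model costs no more AND scores no less,
# with at least one of those strictly better.

models = {
    # name: (cost per 1M tokens in $, benchmark score) -- made-up numbers
    "big-thinking": (15.0, 92),
    "mid":          (3.0, 85),
    "mini":         (0.5, 70),
    "overpriced":   (10.0, 80),   # dominated: "mid" is cheaper and better
}

def pareto_frontier(models):
    frontier = []
    for name, (cost, score) in models.items():
        dominated = any(
            c <= cost and s >= score and (c < cost or s > score)
            for other, (c, s) in models.items() if other != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

print(pareto_frontier(models))  # ['big-thinking', 'mid', 'mini']
```

Under this view a launch can be a win purely by moving down the cost axis at roughly constant score, which is the reading of GPT-5 being debated here.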

    2. DP

      I mean, I think definitely, right? Like, OpenAI said they doubled their rate limits for, uh, you know, cer- for big amounts of, uh, users. They've, they've dramatically increased the number of tokens they're serving from this launch-

    3. GA

      Mm

    4. DP

      ... which effectively says this is an economic release. Um, and I think, like-

    5. GA

      And, and probably also means the tokens are now cheaper, right? Otherwise-

    6. DP

      Yeah, yeah. For sure, for sure.

    7. GA

      Yeah.

    8. DP

      I, I think the funniest thing is, like, you know, this whole cost thing you mention is like we've seen this in the code space, right? Like, Cursor had to pull away the unlimited, um, Claude Code. You know, initially they had, like, this super expensive plan, and it had, like, unlimited rates, and then they... or only, like, a weekly rate limit. Now they have, like, hour-based rate limits. Um, and I saw the craziest, like, thread on Twitter where this guy said he changed his sleep schedule, right? Uh, modeled after, like, how sailors in the bay-

    9. GA

      [laughs]

    10. DP

      Like, you know, like, if you're sailing you can't sleep, right? Like, s- solo sailing, uh, they'll take, like, power naps, uh, when they get to the right spot so that they can, like, still be safe.

    11. GA

      In the morning when it's not very windy. [laughs]

    12. DP

      Well, but, like, they can't sleep uninterrupted, right? And so because Anthropic had to put rate limits that are, like, not just week based but, like, number of hours based, he's like in, like... He, like, basically sleeps, you know, multiple times a day but-

    13. GA

      That's amazing

    14. DP

      ... small chunks just so he can maximize the usage. And there's also a leaderboard on Reddit-

    15. GA

      Oh, my goodness

    16. DP

      ... where people are, like, competing to see how many tokens-

    17. GA

      That is amazing

    18. DP

      ... they're using through their subscription, and there's, like, a dude spending, like, $30,000 a month.

    19. GA

      So I'm gonna find some developer in India that I can do pair programming with so I can get the day cycle, he can get the night cycle, [laughs] you know, and we both can, can maximize the, together the quota for the account. Is that the future then? [laughs] It's-

    20. DP

      Um, I mean, but it's, it's clear, like, people are taking advantage of the negative gross margin, like, sort of-

    21. GA

      Yeah

    22. DP

      ... uh, you know, subscriptions that are offered. Um, you know, I think, I think Anthropic probably makes a positive gross margin off of my subscription. Like, I don't code enough, but, like, there's plenty of people that are definitely losing money.

    23. GA

      Yeah.

    24. DP

      Um, and so as you said, it's an economic-

    25. GA

      It, it, it pushes more and more to, I think, just usage-based pricing, right?

    26. DP

      Yeah.

    27. GA

      I think if, if you have a, if you have an underlying commodity that you're reselling to some degree that has, that is that large a part of your cost of goods, right, you need to go to usage-based pricing.

  5. 10:10–12:30

    Usage-Based Pricing & Product Stickiness

    1. DP

      How much, how, how much do you think the, like, um, customer capture and stickiness for these code products is? Like, um, yeah, I'm, I'm curious what you think on that, right? Like, y- you know, once you use an IDE, once you integrate, you know, one of the CLI products in, like, how sticky is it or is-

    2. GA

      That is a billion-

    3. DP

      ... people just switch after? [laughs]

    4. GA

      That is a billion-dollar question. That's a very conservative estimate. Look, uh, A- A- Andrej Karpathy has this, this great slide where he basically says if you're building an agentic system today, fundamentally what it is is sort of this loop, right, where, where half of the loop is the model thinking, right, and, and then trying to do something. The other half is then the, the, the user verifying what did the agent do, is it the right thing, providing feedback and trying to steer it in the right direction because, you know, it can't run forever. Eventually you need to, you need to, to, to steer it back. One half of that is, is the model provider, right? They're trying to build the, the best models. The other half is really about, I think, designing the best possible UI to enable the user to give feedback, and I think there's value in that. So I think there's a certain amount of stickiness in there, right? So what are all the different tools, like, in terms of vis- like, say, take code editing, right? How can I most easily visualize what the code changes are? How can it, it most easily visualize, you know, what they impact, which files? You know, how can I for small changes get very quick feedback versus for complex ones, you know, get complex feedback? And there's now some tools that actually draw diagrams for you of what they do, right? [laughs] So I think, so I think this will be the battle, is I think there's stickiness in that, right? Uh, you know, how much exactly? This is a great question.

    5. DP

      So in that sense, like, people should be doing subscriptions to get people locked in, right? Instead of moving to usage-based pricing.

    6. GA

      Well, I think it's the customers that don't wanna do usage-based pricing-

    7. DP

      Yeah

    8. GA

      ... because it's so hard to guarantee, you know... It, it's so easy for the cost to get, y- to get, to get away from them, and you, you actually want guarantees, and you're, you're willing to, like, say... You're, you're willing to commit to pretty high spend in order to not have usage-based pricing. I think it's the model companies that want usage-based pricing.

    9. DP

      I, I think with consumers, [coughs] it's frankly very hard to h- not have usage-based pricing just because the, the variability is so massive, right? If it's, like, you know, us coding versus somebody who does this as their full-time job, right? Th- the... You just have a factor of 20 or so difference in usage, and if that is... Th- that costs a lot of money, right?

    10. GA

      Mm-hmm.

    11. DP

      The instance is just one. I think for enterprises we could, we could see, like, more flat-fee pricing because you can average it out more.

    12. GA

      You have a, a developer that's using it all day, you kinda know in a general sense.

    13. SP

      Like, h-h-how many hours a day they're programming and, and what that sort of looks like.

    14. GA

      Yeah.

    15. DP

      The, the vibe quotas are harder.

    16. SP

      [laughs] Yeah.

  6. 12:30–14:18

    Advice for Sam Altman: Monetizing OpenAI

    1. GA

      Before we leave OpenAI, I, I wanna ask a broad question, which is if Sam Altman was, was sitting here and saying, "Hey, Dylan, I'll listen to anything you, you tell me to do, any advice you have, as long as it makes OpenAI more valuable," what would you, what would you tell him?

    2. DP

      I would say, like, immediately launch a, uh, a method for you to input your credit card into ChatGPT and agree that for anything it, like, agentically does for you, it'll take X cut, and then launch that product because, uh, where, where it does shopping, right? 'Cause, like, we know-- everyone knows that, like, Anthropic and OpenAI and all the other labs are buying RL environments of Amazon, and of, like, Shopify, and of Etsy, and of all the different ways to shop on the internet. Of, of airline websites, right? Now, just like, "Hey, integrate my calendar. I wanna, I wanna fly to there on Thursday. Make sure I don't miss a meeting." Cool. Booked, right? Like, do, do that integration, like, super well. Know my preferences on wha-wha-whether I like aisle or window. All this stuff, right? And just take a take rate. Like, I think, like, this will make them so much money the moment they launch it, and I, I think they're working on it already, but, like, I'd like to hear how he thinks about it, uh, 'cause he's shifted his tone massively on, like, ads over the last six months, right? He used to be like, "No way," and now he's like, "Uh, maybe," you know? Uh, there's a way to do it without, you know, harming the user, and I think this is, this is how you monetize the free user, right? So I think, uh, that's probably what I'd tell him/ask him about. Like a whole line of questions-

    3. GA

      Yeah

    4. DP

      ... around this.

    5. GA

      I, I wanna shift to NVIDIA. Uh, NVIDIA's having a monster year. They're up almo-almost seventy percent. W-what are the possible paths fro-from here? How do you see, how do you see it playing out?

    6. DP

      Um, depends, like, how, how pilled you are on, like, the continued growth, but, like, I think, I think you guys have a good vantage point, and we have a good vantage point of how fast revenue's growing for a lot of these companies-

    7. SP

      Mm-hmm

    8. DP

      ... especially the code companies,

  7. 14:18–21:27

    NVIDIA’s Growth & The Future of AI Compute

    1. DP

      but, but even many other applications. Um, I think we can clearly see, uh, the demand side is, is accelerating, right? Um, and, and then if you look at the training side, I think the, the race is on. Um, you know, Meta's, Meta's upping hugely. Uh, Google's upping hugely. Um, you know, if you, if you just look at, like, again, just OpenAI and Anthropic and the compute that they have and are getting this year, uh, from Google and Amazon for Anthropic and from Microsoft, CoreWeave, Oracle, uh, for OpenAI, thirty percent of the chips are going to them, just those two companies. Um, but then it's like, okay, well, the other seventy percent of the stuff, like, who's making money off of it? Well, one third of it is, like, ads, right? Um, whether it be ByteDance or, uh, Meta or many of the other people who are doing ads. So then it's still like, okay, well, where is the remaining one third of the chips going? Well, they're, like, mostly uneconomic providers who I don't think it's, like, an obvious bet that they're going to keep growing, um, you know, keep raising bigger and bigger rounds. So what happens there? Um, I think, I think with the... You know, we, we talked about, like, coding, right, like earlier. Actually, the Qwen3 Coder model is actually super cheap if you're running it on-prem or if you're running it in the cloud with all these inference libraries. Um, and so, like, there's stuff like that as well. So I think the question is, like, how much does it keep growing? Um, because clearly, you know, I think the first third is definitely skyrocketing, right, of, of OpenAI, Anthropic, you know, lab spend. The second third of, like, ads is gonna grow. It's not gonna grow like crazy, but, like, I think there's definitely an inflection point that could be hit with gen A- gen AI ads. I know Meta's been experimenting with it a lot. Um, but I could totally be convinced that there's gonna be a huge inflection in, like, uh, take rate there, right, where you start showing me personalized ads. Um, like every person that's an ad is like-- looks like me, and I'll be like, "Okay, yes." Except, like, slightly better, so I, like, feel better, right?

    2. SP

      Inspiration.

    3. DP

      And I'm like, "I wanna buy it." Yeah.

    4. GA

      Um, it's, it's super interesting, the, the... I have no idea how this is gonna, gonna scale, right? But, but if you ask the question, how much could it scale, right? Like, how much value are we creating here? Is this-- Can we create enough value to actually, you know, keep growing for, for a long time? If you just take AI software development, right?

    5. DP

      Yeah.

    6. GA

      We know we can easily get about fifteen percent more productivity out of a developer.

    7. DP

      I don't think that's right. I think it's way higher.

    8. GA

      No, no, the-- W-with the, with the straight... Like, I talk to a lot of enterprises. Like a, a classical enterprise, straight up, you know, GitHub Copilot deployment, that gives you about fifteen percent. We can do much more than that.

    9. DP

      But bro, like [chuckles] you know how bad GitHub Copilot is?

    10. GA

      [laughs]

    11. DP

      Like, like, like how did they-

    12. GA

      We-

    13. DP

      How-- Th-they-- Look at the revenue ARR chart.

    14. GA

      This was the lower bar.

    15. DP

      It's so funny. It's so funny. If you look at the revenue ARR chart and it's like, it's, like, Claude Code in like three months has surpassed them, and, like, Cursor co- you know, easily surpassed them, and then, like, even, like, companies like Replit are, like... and Windsurf/Cognition are, like, gonna pass them. Like, it's like-

    16. GA

      You're preaching to the choir

    17. DP

      ... what's going on? [laughs]

    18. GA

      So, so look, let's assume we can get this to a hundred percent.

    19. DP

      Yeah.

    20. GA

      Right? So it's we can double the productivity of a developer, right? About thirty million developers worldwide, give or take.

    21. DP

      Yeah.

    22. GA

      Right? Let's say 100K value add per developer. This might be a little high worldwide; for the US it's low, but, but worldwide it's high. So that's three trillion dollars.

    23. DP

      Yeah, yeah.

    24. GA

      Right? So, so w-we're probably building technology here which adds three trillion vol- dollars of GDP value.

    25. DP

      Yeah.

    26. GA

      In theory, we could put that into, into GPUs [chuckles] because that's the main cost factor here, right?

    27. SP

      Just, just from a coding model, like not-

    28. GA

      Just from a coding model. This is one use case, right?

    29. SP

      Ignoring every other use case.

    30. GA

      So, so at least in theory, the value generation is here to keep growing, right? Now, how that translates to the industry is much more complicated.
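Guido's back-of-envelope math can be written out explicitly; all the inputs are the round numbers from the conversation, assumptions rather than measurements:

```python
# Guido's estimate: double the productivity of every developer worldwide
# and value the gain at $100K per developer. Round numbers from the
# conversation, not measured data.
developers = 30_000_000     # ~30 million developers worldwide
value_per_dev = 100_000     # assumed $100K of added value per developer

total_value = developers * value_per_dev
print(f"${total_value / 1e12:.0f} trillion")  # $3 trillion
```

That $3T figure is the theoretical value created by one use case (coding), which is the sense in which the spend could, in principle, keep flowing into GPUs.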

  8. 21:27–26:09

    Custom Silicon: Threats to NVIDIA

    1. GA

      silicon?

    2. DP

      I think that's the biggest thing, right? Is, um, when we look at orders from Google and from, um, Amazon, right, especially their... and, and Meta, their custom silicon is... Not, not Microsoft, their custom silicon kinda sucks. Uh, but the other three, they're really upping their orders massively over the last year. Um, you know, Amazon is making millions of Trainium. Google's making millions of TPUs.

    3. SP

      Mm.

    4. DP

      Um, TPUs clearly are like a hundred percent utilized, right?

    5. GA

      Yeah. They're doing very well.

    6. DP

      Um, Trainium's not there, but I think Amazon will figure out how to do that, um, and Anthropic will. Um, so, so I think, I think that's the biggest threat to NVIDIA, is that people figure out how to use custom silicon more broadly. Um, and this sort of becomes this sort of like if AI is concentrated, then custom silicon will do better. Um, and that's not even talking about like OpenAI's silicon team and stuff, right? Like if AI is really concentrated, uh, then, then they'll do better, uh, custom silicon. But if it gets dispersed broadly because there's all these open source models from China, um, and there's all these, um, open source software libraries from, you know, NVIDIA and, and China, and it makes the deployment costs like rock bottom, then potentially-

    7. GA

      Hear me out here. If, if Google's TPU is, is able to compete with NVIDIA, in theory it could do it on the, on the open market. NVIDIA is worth more than Google these days. Shouldn't Google start selling their chips to everyone? I mean, in theory, they should be able to achieve a higher market cap that way, right?

    8. DP

      I, I, um, I absolutely think so. I think Google's even discussing it, um, internally. I think it would require a big reorg of culture, um, and a big reorg of like-

    9. SP

      Mm

    10. DP

      ... how Google Cloud works, um, and how the TPU team works, and how the JAX software team and XLA software teams work. Um, I totally think they could. Um, it would just take them like shaking themselves pretty hard to be able to do it. Um, yeah, uh, but I, I, I, I totally think Google should sell TPUs externally. Not just renting, but like physically with racks.

    11. GA

      It's, it's kind of funny if a side hobby, in theory, has a higher c- uh, company value potential as your main product now. [laughs]

    12. SP

      Than your entire business. Especially as you think about the degradation of search-

    13. DP

      So I think-

    14. SP

      ... as a core business. I mean...

    15. DP

      Yeah, I think... But I think like if you were to ask like Sergey, right, like, "Hey, do you think selling chips and in- and racks is more valuable or cloud or, or Gemini?" Um, he'd be like, "No, no, no, no, no. Like Gemini is gonna be worth way, way, way more. It's just not yet today," right? Um, and so I think like, like today you say NVIDIA's the most... Again, it's like a whole concentration thing, right? If the world is super concentrated in terms of customers, then NVIDIA will not be the most valuable company in the world, right? Um, but if it gets dispersed more and more, um, which arguably we're starting to see with a lot of these open source models getting better and better and better, um, and with the ease of deploying them getting better, then you would see... I think you could argue NVIDIA will remain the most valuable company in the world for a long period of time. Um-

    16. GA

      And hi- historically, no pun intended, software has eaten the world in most markets-

    17. DP

      Yeah

    18. GA

      ... right? I mean, like if, if you look at, uh, early networking days, Cisco was the most valuable company on the planet, right, for a while. It's no longer, right? The, the guys that build services on top, like, like Google or Amazon or, or Meta, eventually eclipsed that.

    19. DP

      Well, which is why NVIDIA's, like, making all these software libraries, right?

    20. GA

      Mm.

    21. DP

      Like, that's, that's-- And they're trying to commoditize inference, right? Like, um, I-- You guys don't, I think, even have an inference API provider investment, do you? Um...

    22. GA

      Well, we have, we have all kinds of model providers.

    23. DP

      Model providers.

    24. GA

      But, but that's a different-- Yeah.

    25. DP

      But I'm talking about a pure API provider investment, I think, right? Is that correct? I think I talked to, um, one of the team members, maybe, maybe Rog'co or someone about, like, why you guys don't-- didn't invest in, like, you know, like a Together or, like, a Fireworks. And sort of the argument was, like, "Well, we think infer- just serving models, uh, alone without making them will sort of be commoditized."

    26. GA

      Yeah.

    27. DP

      Right? Um-

    28. GA

We, we have some in the stable diffusion ecosystem, like with, like, uh, Fal-

    29. DP

With Fal, yeah, yeah

    30. GA

      ... or the Gate.

9. 26:09–45:28

    The Silicon Startup Boom

    1. GA

Uh, what's, what's your take on those? I mean, there, there's, there's a ton of capital flowing in-into that, right? We've seen... I have no numbers, but probably billions, uh, being invested in, in chip startups.

    2. DP

Yeah, for sure. For sure. I mean, like, whether you're looking at, like, um, you know, companies like-- I think it's, like, pretty impressive that a few companies like, um, Etched and Rivos, um, and a number of other companies, you know, MatX and, and others, like, have gotten the amount of funding they've had without even launching a chip, right? Um, you know, in the past, like, yes, silicon companies would make money or raise money, but they would at least launch a chip before they get a, you know, a big round. But, like, uh, Etched and Rivos, like, have raised, you know, a lot of money without ever launching a chip publicly-

    3. GA

      Yeah

    4. DP

      ... which I think is... I mean, it speaks to, well, like, yes, silicon is super, uh, capital intensive if you're building a chip, especially an accelerator, which has so many moving pieces. Um, and there's, there's, like, there's, like, ten different AI accelerator companies out there, right? Like, that are new-ish in the last few years.

    5. GA

      I think there's a lot more.

    6. DP

      [laughs] Uh, that, that are like-

    7. GA

      [laughs]

    8. DP

      Yeah, yeah, yeah. That's fair. Um, and then, and then, and then there's the old guard, which continues to raise money-

    9. GA

      Yeah

    10. DP

      ... right? Like Groq and Cerebras and-

    11. SP

      Mm-hmm

    12. GA

SambaNova

    13. DP

... and SambaNova and, and Tenstorrent and so on and so forth, right? Like, um, or Graphcore getting bought out by SoftBank, and SoftBank dumping money into this effort as well, right? There's, there's a lot of capital being invested to, uh, displace sort of NVIDIA from its top position. Um, but it becomes challenging, right? It's like, how do you beat NVIDIA, right? Like, the hyperscalers, I think, are, like, kind of lucky in that they can, they can do mostly the same thing as NVIDIA, right?

    14. GA

      They're, they're captive customer, which is themselves, right?

    15. SP

      Yeah.

    16. DP

      Right.

    17. GA

      It's a huge asset, yeah.

    18. DP

      And it's-- They can, they can just win on supply chain, right?

    19. GA

      Yeah.

    20. DP

      Like, I'm using cheaper providers.

    21. GA

      It's a, it's a margin compression exercise, essentially, right?

    22. DP

Yeah, yeah. And maybe, maybe for certain workloads, like Meta for recommendation systems, they'll have a better... You know, they can specialize more. Um, but for the most part, it's like, "No, we're, we're targeting the same workloads. We can just simplify supply chain or, or in-house a lot of it and compress margin, and it'll be fine." Um, but in the case of, you know, these other companies, it's like, well, they don't have a captive customer. So now you have to contend with, "Well, I'm using the same ecosystem, um, and either I can use some custom silicon provider who's gonna take a margin anyways on top, and that's gonna compress what I can sell for. Or I can try and in-house everything," but then it's like, this is really hard, right? Like, I'm gonna do all the software design. I'm gonna do all the silicon design. I'm gonna build all this different IP. I'm gonna manage the supply chain on chips, on racks, on everything, right? It ends up being a huge effort in terms of team size. Um, and in the end, it's like, hey, NVIDIA makes a seventy-five percent gross margin. Um, AMD sells their GPUs for fifty percent gross margin, um, and they have a hard time out-engineering NVIDIA, and they're great at engineering, right? But they still take more silicon area, more memory to achieve the same performance, and they have to sell for less, so their margin gets compressed, right?
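[Editor's note: the margin squeeze described above can be sketched with back-of-envelope arithmetic. Only the 75%/50% gross-margin figures come from the conversation; the ASP, the 30% silicon/memory penalty, and the 20% discount are illustrative assumptions.]

```python
# Back-of-envelope: why a challenger's gross margin gets squeezed.
# Illustrative numbers; only the 75% margin figure is from the episode.

nvidia_price = 30_000            # hypothetical ASP per accelerator
nvidia_margin = 0.75             # gross margin mentioned in the episode
nvidia_cost = nvidia_price * (1 - nvidia_margin)   # implied BOM: $7,500

# A challenger needing ~30% more silicon and memory for the same
# performance has a higher BOM, yet must undercut NVIDIA on price.
challenger_cost = nvidia_cost * 1.3                # assumed BOM penalty
challenger_price = nvidia_price * 0.8              # assumed 20% discount
challenger_margin = 1 - challenger_cost / challenger_price

print(f"NVIDIA implied cost:     ${nvidia_cost:,.0f}")
print(f"Challenger gross margin: {challenger_margin:.0%}")
```

Even with generous assumptions, the challenger's margin lands well below NVIDIA's, which is the compression dynamic Dylan describes for AMD.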

    23. GA

      That makes sense. But, but look, the... I think historically, if you look at it, typically, the, for new entrants in markets didn't win by marginally improving on something existing. That happens sometimes, but, but more likely, they, they jumped on some of, some kind of disruptive technology leap, right? Where it's like, we have a different approach, we have different technology. I- is that possible here? I mean, to, to, to some degree, um, maybe this is oversimplifying it a little bit, but I think part of the reason why the transformer model won was because it runs so incredibly great on GPUs, right?

    24. DP

      Yeah.

    25. GA

      Like a, like a recurrent neural network is similarly performant it looks like, but it, [laughs] it runs terribly on a, on a GPU. So, so did we sort of pick the model for an [laughs] architecture, and now it's, it's hard to come up with an architecture that, uh, you know, really-

    26. DP

      Well, it, it's, it's hardware-software co-design, right?

    27. GA

      Exactly.

    28. DP

      Like, there is like-

    29. GA

      Yeah, exactly. [laughs]

    30. DP

There's all this hype about, um, neuromorphic computing, right? Like, theoretically, it's amazing and super efficient. It's like, okay, great, like, there's no ecosystem of hardware, there's no ecosystem of software. It would take, like, you know, tens of thousands of people who are the best at AI today focusing on that to even prove out if it's worthwhile or not, right? Um, on a hardware side, on a software side, on a model side. And so, like, you look at, like, Groq, Cerebras, SambaNova, um, they all, like, sort of over-indexed to the models that were leading at the time when they designed their chips.
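[Editor's note: the hardware-software co-design point about transformers versus RNNs can be made concrete with a toy NumPy sketch. Shapes and code are illustrative, not a real model.]

```python
# Why attention maps onto GPUs and recurrence does not: attention over a
# whole sequence is one big matmul, while an RNN must step through time.
import numpy as np

T, d = 128, 64                      # sequence length, hidden size
x = np.random.randn(T, d)

# Transformer-style attention scores: every position against every other,
# computed in a single (T, d) @ (d, T) matmul -- fully parallel over T.
scores = x @ x.T                    # shape (T, T)

# RNN-style recurrence: h[t] depends on h[t-1], so the T steps form a
# serial dependency chain that cannot be batched into one matmul.
W = np.random.randn(d, d) * 0.01
h = np.zeros(d)
for t in range(T):                  # inherently sequential loop
    h = np.tanh(W @ h + x[t])

print(scores.shape, h.shape)        # (128, 128) (64,)
```

The matmul saturates a GPU's parallel units; the loop leaves them mostly idle, which is one reading of why the transformer "won" on GPU hardware.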

10. 45:28–50:46

    Data Center Power & Cooling: The Next Bottleneck

    1. DP

center are like... You know, there's this whole narrative about like, oh, AI uses so much power, and it's like, not really. Um, you know, farming alfalfa uses like 100X the water of, of AI data centers. Uh, even by the end of the decade it'll be the same, and it's like, alfalfa's worth very little, you know. And it's like, cooling is like, not that... You know, people have like experimented with like, you know, undersea data centers to reduce the cooling cost, but it's like-

    2. GA

      Yeah, that doesn't make sense [laughs]

    3. DP

      It's like five, 10% savings-

    4. GA

      It's-

    5. DP

      But then like if you wanna-

    6. GA

      It's easier to get the water out of the ocean than, than to put the data center into the ocean I think [laughs]

    7. DP

      It's like if you wanna service it, like you're screwed, right? So like the same with power, it's like we talk a lot about like the power is not actually that expensive. It's just hard to build, right?

    8. GA

      And hard to get to the right place, right? I think you said.

    9. DP

      And hard to get to the right space and convert it down to the voltages and, and all this stuff that, that chips need. But like-

    10. SP

      So it's less the magnitude of power and more where it is and how it moves.

    11. DP

      Well, the magnitude too, right? Like it's gonna be-

    12. GA

In, in terms of total worldwide energy consumption, AI data centers are still a, a-

    13. DP

      It's like-

    14. SP

      Very small. Right. Nothing.

    15. GA

      It's like, yeah.

    16. DP

      Even-

    17. GA

      It's a fraction of a percent. It's not-

    18. DP

      Yeah, yeah.

    19. GA

      Yeah.

    20. DP

Even by the end of the decade, uh, you know, in the US, like 10% of our power will be AI data centers, which is still like-

    21. GA

      Of electricity.

    22. DP

      Of our electricity.

    23. GA

      Yeah. And in terms of energy, that's even a smaller fraction, right?

    24. DP

      Oh, yeah, yeah. 'Cause you think about vehicles-

    25. GA

      By shifting to electric vehicles also, you can probably make a bigger swing than, than, uh, you know, with all the AI data centers we can build in the first place.

    26. DP

      But outside, like it's like in Europe, like that number's not moving up that fast and like all these other countries. I, I think we need to build a lot more power, but it's not like some, some crazy, crazy like amount. It's just like doing it properly is the hard th-

    27. GA

      Yeah, I agree.

    28. DP

Um, and, and, and again, like the cost of power, like you go look at like these deals people are signing, they're still signing even though the price has skyrocketed, uh, from like a few cents a kilowatt hour for these massive, massive purchases to like 10 cents. It's still, you know, when you think about the full TCO of a cluster, you know, the GPU cost, the networking-

    29. GA

      It's a couple percent, yeah

    30. DP

      ... all of that stuff far outstrips the power.
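[Editor's note: the TCO point can be illustrated with rough per-GPU arithmetic. All figures are assumptions except the roughly $0.10/kWh power price mentioned above; the exact share depends heavily on the capex numbers.]

```python
# Rough per-GPU yearly TCO split: capex vs. power.
# Illustrative assumptions throughout, except the ~$0.10/kWh price.

capex = 45_000              # GPU + networking + data-center build, per GPU
amort_years = 4             # assumed depreciation window
capex_per_year = capex / amort_years               # $11,250/yr

watts = 1_500               # assumed per-GPU draw incl. cooling overhead
price_kwh = 0.10            # the "skyrocketed" power price from the episode
power_per_year = watts / 1000 * 8760 * price_kwh   # ≈ $1,314/yr

share = power_per_year / (capex_per_year + power_per_year)
print(f"power share of yearly TCO: {share:.0%}")
```

Under these assumptions power is roughly a tenth of yearly TCO; the capex on GPUs and networking dominates, which is the point being made, even after the power price increase.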

11. 50:46–57:56

    Intel’s Role in the AI Era

    1. DP

right? He's invested in so many different companies first. Um, you know, he was on the board of, like, SMIC, which is China's TSMC effectively, which is, like, a big, like, drama. Or, like, some of the biggest tool companies in China, he's the first investor in them, because, you know, there's a multipolar world there, and he's making good investments. Um, but like, you know, now, like, people are getting mad about that, but it's like, no, he, he recognizes... He understands the supply chain. He needs to not spend his time on splitting the company, because then he never actually fixes the company, right? Intel's problem is that, like, it takes them five to six years to go from design to shipping the product, um, in some cases more. Uh, and when they tape out a chip, right, like, you know, you send the design to the fab, the fab brings back the chip. They go through fourteen revisions in some cases, whereas, like, the rest of the industry goes through, like, one to three revisions if they're good, um, of like send the design in, uh, get the chip back, test it, send the design in, right, uh, for, for a public launch, and they'll launch a chip in three years.

    2. GA

      So but i-i-if you look at Intel today, right, they still don't have a competitive entry on the, uh, on the AI side.

    3. DP

      And they won't.

    4. GA

      Uh, uh, right. C-can you... So what... But what does it mean for their offering? I mean, they, they, they're still doing great on, on, on CPUs. They don't have a good AI, uh, AI chip product. Is it long-term sustainable positioning, right, I mean, as, as, as a, as a standalone chip company?

    5. DP

      I mean, IBM still makes more money every launch off of mainframes. So it's not like, it's not like X86 is dead. It's like you don't get the growth rates, but, like, you could totally run this as a very profitable enterprise. Um, and, and I think the same with PCs, right? There's, you know, there's some turmoil, there's some Arm entry, there's some AMD competition and Apple-

    6. GA

      They don't care about most of them, yeah

    7. DP

... but, like, I think it's a very... It can be a very profitable business if it had, like, one-third the people or half the people working on it. And so, like, Lip-Bu Tan, to fix Intel, needs to go into both the design company and lay off a shitload of people, but, like, keep all the good people, um, and make sure that they're designing fast and that from design conception to launch is two to three years, not five to six. Um, and, and that's on the design side, and make that profitable. And then on the fabs, he has to do the same thing. There's all these people, like, um, one of the heads of, of, uh, fab automation at Intel-

    8. GA

      Yeah

    9. DP

... um, I, I explicitly told Lip-Bu Tan, because, you know, we have a couple ex-Intel people in the company who are actually good, um, that worked on the fab side, and they were, were like, "Who are the worst people?" It's like, "Oh, this guy sucks." I explicitly told Lip-Bu Tan. He had never talked to the guy because it was like four layers down. The company has, like, absurd amounts of hierarchy. It's like four layers down. He, he goes and talks to the guy and he's out, right? It's like, like he figures out, like, who's bad, right, um, and who's good. And he has to go in and it's like, "Hey, the vast majority of the team at Intel is the one that led the world in production and process technology for twenty years."

    10. GA

      Yeah.

    11. DP

      Right? But there's a lot of, like, built-up crap. So he has to go figure this out, right? He can't waste his time on like, oh, all this like structuring to split. Like, I w- I think it would be better if the company split. I just don't think he can spend the time to do that. Um, and if the design side of the company is, you know, you're not really gonna get into AI, you're not really gonna... You, you have to make some money there. But the fabs I think could truly become a competitor. But they're gonna go bankrupt by the time anything, you know, can happen. So they have to figure out how to get capital. So he has to figure out how to get capital. He has to figure out how to clean up all the crap-

    12. GA

      Yeah

    13. DP

      ... uh, make the, um, you know, yields go up, right? Make the product ship way faster. Like, all of these things are basic problems.

    14. GA

      I think, I think the goals are completely correct. I mean, I think, I think the big challenge, just reflecting back on my time there, right? The... I think the big challenge is that right now if you look at Intel, right, they have essentially software, sort of the, the chip design making the... And then there's, you know, the core manufacturing part, right? And they have three diff- very different cultures, and it's very hard to get everything under one umbrella, right? And, and so I think that is the big challenge, right?

    15. DP

      I think you c- you should even run-

    16. GA

      But he has a lot of money

    17. DP

... the company separately, right? Like, but, like, you can't physically separate them entity-wise because it's gonna take so long to sever all these things, um, because he doesn't have time, right? Like, Intel is literally gonna go bankrupt if they don't have a big cash infusion or they lay off like half the company, right? Uh, which some could argue you need to lay off like thirty percent of the company anyways. But, um, there's a lot of bad things that happen if that happens, right? Um... And they need to spend a lot more on building the next generation fab, even if they fix the fab, and they don't have money for that, right? So there's like, there's like a lot more, more important problems than like physically separating the company, even though I think long term, yes, the fab has to be separate from the chip design, uh, part of the company. Just like, that's gonna make each company much more accountable, uh, able to service their customers better, etc. It's just that's gonna take too long, and they're gonna go bankrupt by then.

    18. ET

      Awesome.

    19. DP

I, I... but I think, I think, I hope, I hope, I pray someone does something, right? Like you get a big capital infusion. I don't know. The big hyperscalers are like muscled into like, oh, okay, wait. If TSMC eventually grows their margin to seventy-five percent because they're the monopoly, um, plus they intake all this stuff like co-packaged optics and power delivery and all this, like all of a sudden the cost is gonna spike. So we should actually just throw five billion dollars at Intel each, right? Screw it. And, and that could actually give Intel enough of a lifeline to potentially get to something and maybe be competitive. Um, that's, that's the hope.

    20. ET

      Can we, uh, can we finish by finishing th-this game that we started when we gave Sam Ad-- uh, Altman advice? If, if Jensen was here, what, what advice would, would you have for him?

    21. DP

      Hmm. If Jensen was here...

    22. ET

      Ha.

    23. DP

Uh, you know, I think, I think he has a massive, massive balance sheet, right? Jensen does. He's, he's cash... Free cash flow is like, you know, ridiculous. Um, the tax cut, the new, the new Trump, you know, tax bill, um, institutes something really incredible, which is that you can depreciate all of the GPU cluster cost in year one. Um, we put out like a note about how the tax implications to, like, Meta are like ten billion dollars a year. And across each of the major hyperscalers it's like massive. It's like, well, NVIDIA's gonna spend tons and tons of cash, um, or, uh, they're gonna spend like, you know, tens of billions of dollars on taxes. Why don't you get into the infrastructure game somehow? Um, now this is obviously gonna be like crazy, because like now they're buying GP-- their own GPUs and putting them in data centers and doing stuff, and they're competing with their own customers, but they're already doing that anyways because their customers are trying to make chips. Um, but they should like accelerate the data center ecosystem with investments, right? Uh, because really we, we think we can like have a very high degree of like accuracy on what they're gonna do next year in terms of revenue, because it's just the number of data center watts that are being built, right? Like this is a harder thing to shift up and down, right? Now, there's a little bit of share difference between how much is TPU versus GPU, but like, it's like you have to accelerate the infrastructure and you need to spend all of this capital that you're building, right? Like, okay, do you wanna go the route of like doing buybacks and dividends? Like great, like you're a loser if you do that, right? Like you, you can make more money by reinvesting and building a bigger company that's not just chips into the ecosystem or servers into the ecosystem, but actually like controlling the infrastructure end to end somehow.
Um, so I think there's something he could do there, uh, with this massive war chest. And there's a reason like NVIDIA's done some buybacks and they've done some dividend increases,

12. 57:56–1:08:17

    Advice for Tech Giants: NVIDIA, Google, Meta, Apple, Microsoft

    1. DP

      but the cash on their balance sheet keeps growing, and they're gonna have north of a hundred billion dollars of cash on their balance sheet by the end of this year, I think. Um, so it's like, what are you gonna do with that? Um, I think, I think there's something moving into the infrastructure layer much more, um, that they, they could do if he really wants to be the king of the world, right? Uh, which I think he does.
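[Editor's note: the earlier point about year-one depreciation of GPU clusters can be sketched with a present-value comparison. The $10B cluster cost, 21% tax rate, and 8% discount rate below are illustrative assumptions, not figures from the episode.]

```python
# Why year-one ("bonus") depreciation matters: the nominal deduction is
# the same, but taking it immediately raises its present value.
# Illustrative numbers only.

cluster_cost = 10_000_000_000   # hypothetical GPU capex
tax_rate = 0.21                 # assumed US corporate rate
discount = 0.08                 # assumed cost of capital

# Year-one expensing: the entire tax shield lands immediately.
shield_now = cluster_cost * tax_rate

# 5-year straight-line: same nominal shield, spread out and discounted.
annual = cluster_cost / 5 * tax_rate
shield_sl = sum(annual / (1 + discount) ** t for t in range(1, 6))

print(f"immediate expensing: ${shield_now / 1e9:.2f}B")
print(f"5-yr straight-line:  ${shield_sl / 1e9:.2f}B (present value)")
```

The gap between the two figures is the cash-flow benefit the new tax treatment hands to anyone with heavy GPU capex, which is the lever Dylan suggests NVIDIA could exploit.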

    2. ET

      Uh, Sergey and Sundar?

    3. DP

      Um, ooh. Um-

    4. ET

      [laughs]

    5. DP

I, I think, I think they should open up the kimono on TPUs, right? Like start selling them. Um, open up the software. Open source a lot more of the XLA software, 'cause there's OpenXLA and there's XLA, but the vast majority is closed source. Um, really, really open up the kimono on that, um, and be a lot more aggressive, right? They've-- they're still pretty not aggressive on data centers. Um, they're pretty not aggressive on a lot of elements of the company. Um, the TPU team's next-gen designs are pretty not aggressive, partially because a lot of the TPU team, uh, has left to go to OpenAI, the best people that I knew. Uh, it was actually really annoying. I knew like four people, uh, or five people, and they all went to OpenAI [chuckles] and it's like, fuck, like now I don't get as much ins-

    6. ET

      It's alright.

    7. DP

      I met some other people, right? But it's like, you know... Um, I think they could be a lot more aggressive in many ways across the company. They don't have to be, right? But they could.

    8. ET

      Right.

    9. DP

Um, because AI, you know, like this ChatGPT take rate, the shift of search queries, the monetizable ones especially, to, to purchasing agents, is gonna really screw Google long term if they don't, you know, get their act together. I think they've gotten their act together on DeepMind. There's still some inefficiencies, but Sergey works on, works within DeepMind a lot and they're driving hard. They're still a little bit behind, but like, I think like physical infrastructure, TPU, um, and, and how much money they could make and how much they could take the wind out of everyone else's sails-

    10. ET

      Yeah

    11. DP

      ... if they start selling TPUs externally, um, and reorg around like building data centers much faster so that they do have the most compute in the world, 'cause they did. Uh, but now there's certain companies that are gonna surpass them, uh, potentially over the next few years if they don't really get their act together. So I think that's what I would say for them. Um, yeah. And also like, like learn how to ship product better, right? [laughs]

    12. ET

      [laughs] Um, Zuck.

    13. DP

Um, I think, I think Zuck... You know, it remains to be seen what goes on with superintelligence, but like they're trying to move super fast with the data centers. Uh, you know, like screw it, we'll build tents, uh, instead of like physical data centers 'cause we only need these for five years anyways. Um, you know, the superintelligence moves, you could, you could say whatever you want, but like, you know, trying to buy like Thinky for like thirty billion or, or, um, SSI for thirty billion didn't work out, so then they spent, you know, not even that much, not thirty billion, on hiring all these people. Um, so I think they-- he recognizes the urgency with the models, with the infrastructure. Um, so I really think he needs to like... You know, i-if you read his website post about like AI, like I think, you know, he sees the vision, right? There's the wearables, there's integrating AI into that. There's being your AI assistant to do all this purchasing and stuff. I think he sees the vision, but I think he also needs to focus on, like, actually, like, releasing that faster. Um, but also, like, the products that they do outside of their core IP, every time they launch something, is kinda mid, right? Um, you know, uh, Meta Reality Labs is doing well, but I think they should, like, go more explicit, like have a ChatGPT competitor, have a Claude com- uh, a Cl- like Claude Code competitor. Like, just start releasing way more products, um, because they're really just focused on their individual gardens rather than, like, branching outside of it.

    14. ET

      D-do you think Apple should have that same sense of urgency? Or if Tim Cook was here, what, what would you tell him?

    15. DP

The funny thing is, like, some of their best AI people are now, like, at Superintelligence. [chuckles] Uh, they're building an AI accelerator. They're gonna... They're, they're, they have AI models, but they're just, like, way slower. Um, they did mention on the last earnings call they're gonna allocate more capital to this, but it's like, guys, Apple, like, you guys are gonna miss the boat if you do not spend, like, $50, $100 billion on infrastructure.

    16. ET

      You don't think the current Siri will, will cut it? [laughs] Sorry.

    17. DP

[laughs] Um, I think, I think, like, more and more you'll see people, like... You know, great, Apple has this walled garden, but, like, they can only do so much to protect it, right? IDFA, like, they shut down, uh, data sharing to Meta. But Meta made better models, and now they have way more data and way more power over the user than they ever did before. Kind of it was good that Apple kicked the crutch out from under them. Uh, but the same applies to, like, AI. Like, yes, they have access to the text, and they have access to this, but, like, I think other people are gonna be able to integrate user data, um, and agents will be able to integrate all this user data, and they'll start to lose control of what the user experience is as more and more gets disintermediated by AI being the interface rather than touch, rather than, you know, touchpad and keyboard. Um, and I don't think they've truly realized what happens when the interface to computing is, is AI. Like, they, they market it, but, like, that's gonna shift computing really heavily. They have great hardware, and their hardware teams are working on awesome stuff and form factors. But, like, I just don't know if they, like, get what is actually gonna happen to the world in the next five years, uh, truly well enough, and they're not building fast enough for it.

    18. ET

      What about Microsoft to, to that end?

    19. DP

      Um, Microsoft has the same problem, I think. Um, they were super aggressive in '23 and '24.

    20. ET

      Yeah.

    21. DP

Um, and then they pulled back heavily, right? Uh, now, like, OpenAI's slipping through their grasp. Uh, there's that whole thing there. Uh, they cut back on data center investments heavily.

    22. ET

      Yeah.

    23. DP

      They were gonna be the largest infrastructure company in the world by, like, a factor of 2X.

    24. ET

      Yeah.

    25. DP

Uh, which would've been... You know, you could argue maybe that was, like, too much, and maybe it wouldn't have been economical. Um, but, like, they're losing grasp on OpenAI. Their internal model efforts are failing, uh, spectacularly. Like, they're on LMArena right now, and they're pretty decent there, but it's like, that's just, like, a sycophantic model. Like, um, it's a code name but, like, whatever. Like, MAI is, like, failing. Azure is, like, losing a lot of share to Oracle and CoreWeave and Google, um, and so on and so forth, right? Their internal chip effort is by far the worst of any hyperscaler. Like, uh, they've just, like, misexecuted. Like, GitHub... How is GitHub not the highest ARR software, uh, code tool?

    26. ET

I mean, they, they, they only had the best IDE, the best source code repository, the best enterprise sales force, the best model company as a relationship, and they were the first to market, right? [laughs] It's like they had everything going for them.

    27. DP

      And, like, there's just nothing, right? It's like-

    28. ET

      I'm-

    29. DP

      ... like Co- GitHub, uh, GitHub Copilot is failing. Microsoft Copilot is, like, still crap, right? Like, um-

    30. ET

      Yeah, it's unusable.

Episode duration: 1:06:16


Transcript of episode xWRPXY8vLY4
