a16z
Dylan Patel on GPT-5’s Router Moment, GPUs vs TPUs, Monetization
EVERY SPOKEN WORD
75 min read · 14,678 words
- 0:00 – 1:11
Introduction & AI Hardware Landscape
- DPDylan Patel
NVIDIA's gonna have better networking than you. They're gonna have better s-- uh, HBM. They're gonna have better process node. They're gonna come to market faster. They're gonna be able to ramp faster. They're gonna have better negotiations with whether it's TSMC or SK Hynix in the memory and silicon side, or all the rack people, or, like, copper cables. Everything they're gonna have better cost efficiency. So you can't just, like, do the same thing as NVIDIA. You have to really leap forward in some other way. You have to be, like, five X better.
- ETErik Torenberg
[on-hold music] Dylan, welcome to the podcast.
- DPDylan Patel
Thank you for having me.
- ETErik Torenberg
We've been trying to get you for a while. You're a busy man. Uh, but, uh, but it worked out. Um, Guido, why-why don't you introduce why we're so excited to have, have Dylan on the podcast and what we're excited to discuss.
- GAGuido Appenzeller
Look, I mean, I-I-I think, Dylan, you've done exceptional job in, in covering what's happening in the AI hardware space, AI semi space, and, and now more and more data center space as well. And, you know, uh, just looking at it, currently the most valuable company on the planet is an AI semi company, right? Uh, the, the, I think biggest IPO so far in AI was an AI cloud company. This is currently where it's happening, right? In any gold rush in the early days is the pick and sh- picks and shovels that make money, and I think this is the stage that we're in. So super excited to have you here today.
- DPDylan Patel
Awesome. Thank you. Uh, w- happy to talk about my favorite topics.
- 1:11 – 4:19
Reactions to GPT-5: Is It Disappointing?
- DPDylan Patel
[chuckles]
- ETErik Torenberg
Amazing. Well, maybe let's start with GPT-5. You know, we just had, um, some of the researchers who worked on it, uh, Christina and Isabelle, on here last week. Um, y-you, you said it was, uh, disappointing. Uh, why don't you sh-share your reactions, or what capabilities you were, you were hoping to see, or your overall take?
- DPDylan Patel
I think, I think it depends on what tier of user you are, right? Um, and so, like, right, if you're just using GPT-5 and before you were, you know, two-- twenty dollars or two hundred dollars a month subscriber, um, you no longer have access to 4.5, which i-in my opinion, is still a better pre-trained model for certain things. Um, or you no longer have access to, um, o3, which would think for thirty seconds on average, maybe. Right? Whereas GPT-5, even when you're using thinking, uh, only thinks for, like, five to ten seconds on average, right? Um, which is an interesting sort of phenomenon, right? Um, but basically, like, GPT-5 is not spending more compute per se. The model did get a little bit better, um, on a vanilla basis, right? 4o to 5 is, is actually quite a bit better. But, um, when you think about, you know, what, what is this curve of intelligence, right? It's like the more compute you spend, the better the model gets. Um, and that's whether it's a bigger model, uh, which GPT-5 isn't, right? You can, you can see it's not a bigger model. Um, it's roughly the same size, um, and, you know, or you think more, right? Uh, but again, like, this is something that OpenAI is mo-- first thinking models, you know, the first few, uh, generations of o1, o3 would think for a long time, um, and waste a lot of tokens, uh, if you will. Uh, and when you look at, like, for example, um, you know, Anthropic's thinking models, they actually think even when you put them in thinking mode, they think a lot less, right? To get to the same results or better results, right? As OpenAI was. And so OpenAI, I think, like, optimized a lot of, like, well, if I ask... Like I think, I think the silliest one w- I had asked was, like, I asked o3 once, like, "Is pork red meat or white meat?" And it thought for like forty-eight seconds. I was like, "What are you doing?" Like, "This should just, like, this should l- this should just, like, tell me the answer." 
[chuckles] Um, and, and, and so, like, you know, the, the, the nice thing is that GPT-5 will think a lot less even if you select thinking manually. But more importantly, they have the sort of auto functionality, the router, which lets them decide whether or not, you know, "Hey, do I route to the regular model? Do I route to maybe mini," if you've hit your rate limits, "or do I route to thinking?" Right? Um, and, "How much do I think?" Uh, but in general, the thinking model will think less. So, so there's less compute going into, uh, a power user's average query than before.
- GAGuido Appenzeller
But isn't it even more interesting? OpenAI can now control how much compute it wants to allocate to you, right? If, if, if we're in a high load situation, maybe tune the router a little bit so it thinks less, right? Maybe, maybe we skip thinking. I have no idea what they're doing, um, you know, behind the curtain. But there's this meme out there at the moment that basically all they did, which is, which is a meme, right? It's not true. But, but all they did is, you know, take o3 plus a couple of smaller models, put a router in front, and, you know, offer that at a, a lower blended price, essentially, right? I think there's a little bit of that. Like, cost suddenly matters and, and they figured out a way how they can steer that.
- 4:19 – 7:34
The Business of AI Models: Cost, Monetization, and the Router
- DPDylan Patel
I think, yeah. The, I mean, and they talked about how they've been able to dramatically increase their infrastructure capacity because I myself was just regularly using o3 or 4.5, right? And now I'm forced to use Auto, which sometimes gives me, you know, the o3 equivalent thinking model, but sometimes gives me, you know, just the regular base model-
- GAGuido Appenzeller
That's what they are
- DPDylan Patel
... uh, which, which sucks. But, like, um, I think for the free user, it's actually quite interesting, right? Uh, the free user was not getting thinking models pretty much ever, or not using them. Um, or in many cases, they just open the website and ask their query, and now sometimes their query gets routed there. So sometimes they get a way better model. But now sometimes, you know, the, the OpenAI can gracefully degra-degrade them-
- ETErik Torenberg
Exactly
- DPDylan Patel
... uh, if they need to, right? And it's like, I think, I think the router points to the future of OpenAI as a business, right? Like, you can look at sort of the model companies, right? Anthropic is fully focused on B2B, right? Um, API, code, et cetera, right? Where- or, or Claude Code, whatever it is, right? OpenAI, yes, they have that business, Codex and, and API business, but really the majority of their revenue is consumer, right? Um, and it's consumer subscriptions. But they have no way to upsell, like, you know, how to make money off of all the free users, right? Uh, in any other application, consumer app, the free user still pays via ads. Um, but this is not compatible with AI, right? Like, it's a helpful assistant. You can't just make the r- users' result worse by injecting ads. Banner ads don't really work, uh, in AI either. So it's like, how do you now monetize them? And I think, I think they're getting-- with the router, they're getting really close to figuring out how to monetize that user, right? Um, with the new CEO of Applications, if you saw her product that she launched at Shopify, uh, I think it was Shopify, was an agent for shopping. Right? And now this, like, immediately clicks. Like, oh, if the user asks a low-value query like, "Hey, why is the sky blue?" Just route them to Mini, right? Like, you know, the model can answer perfectly fine, and that is a chunk of queries, right? But if they ask, like, "What's the best DUI lawyer near me?" Right? All of a sudden this is like, you know, you're in jail, you have one shot. You know, you're like, "Screw it. Let me ask ChatGPT what the best U- DUI lawyer is," and now all of a sudden... You know, the model's not capable of it today, but soon enough it'll be able to, you know, contact all the lawyers in the area and, um, you know, figure out what their results are and maybe search their, like, court filings and whatever, right? And, and, and book the best lawyer for you or, or an airplane ticket.
- GAGuido Appenzeller
They negotiate a cut, [laughs] you know, as part of that.
- DPDylan Patel
Yeah, of c- of course they're gonna take a cut, right?
- GAGuido Appenzeller
Yeah.
- DPDylan Patel
But this is a, this is a much better way of, like, monetizing the free user is, like... You know, it, it's like Etsy, 10% of their traffic now comes from chat, and OpenAI makes nothing off of that. But they very... They really, really will soon, right? Um, and partially that's because Amazon blocks chat but, like, you know, there's a way to make money from shopping decisions, whether it's booking flights or looking for items. Um, and those you now say, "Free user, I don't care. I'm gonna send you to my best model. I'm gonna send you to agents. I'm gonna spend ungodly amounts of compute on you because I can make money off of this. But if it's a query that's like, 'Help me with my homework,' I'll send you, like, a decent model, right? I don't, I don't need to spend money on you." Um, and so this is how I think, like, OpenAI can finally make money off of the free user, and I think that's the biggest, like, thing about the router, right? Like-
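The value-based routing Dylan sketches here can be illustrated with a toy dispatcher: cheap model for low-value queries, the expensive thinking model only where the compute might pay for itself, and graceful degradation under load. The tier names, scores, and thresholds below are all illustrative assumptions, not anything OpenAI has published:

```python
# Toy sketch of a cost-aware query router. A real system would score
# difficulty with a classifier model; here it is a hand-set number in [0, 1].

def route(difficulty: float, load: float) -> str:
    """Pick a model tier given query difficulty and current system load."""
    if load > 0.9:              # graceful degradation under heavy load
        return "mini"
    if difficulty < 0.3:        # low-value query: cheapest model is fine
        return "mini"
    if difficulty < 0.7:
        return "base"
    return "thinking"           # spend compute only where it might pay off

print(route(0.2, 0.5))   # "mini"
print(route(0.8, 0.5))   # "thinking"
print(route(0.8, 0.95))  # "mini" — degraded because the system is saturated
```

The point is only that a few branches let the provider trade cost against quality per request, rather than spending the same compute on every query.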
- 7:34 – 10:10
The Economics of AI: Cost vs. Performance
- GAGuido Appenzeller
Th- th- this is super interesting. I think this is the first time that we've seen that there's a, a, a launch of a new model where to some degree cost is the headline item, right? I mean, so, so far it was always like, who has the smartest model? Who has the highest MMLU score? Now we suddenly have people who use models for coding for eight hours a day and, you know, are surprised that, you know, if you take a, a large context window and the best model, it creates, you know, thousands of dollars of cost a month. So cost matters. And, and so to some degree, where are you on the Pareto frontier between cost and performance is the new benchmark for model competitiveness, and no longer capability alone. Is, is, is that what we're seeing here or?
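Guido's "Pareto frontier between cost and performance" framing can be made concrete with a small sketch: a model is on the frontier if no other model is both cheaper and at least as good. The model names and numbers below are invented for illustration:

```python
# Hypothetical entries: (dollars per 1M output tokens, benchmark score).
models = {
    "frontier-large": (15.00, 92),
    "frontier-mini":  (0.60, 80),
    "mid-tier":       (3.00, 78),   # costlier AND worse than frontier-mini
    "open-weights":   (0.20, 70),
}

def pareto_frontier(models):
    """Keep only models not dominated on both price and quality."""
    front = {}
    for name, (cost, score) in models.items():
        dominated = any(
            c <= cost and s >= score and (c, s) != (cost, score)
            for c, s in models.values()
        )
        if not dominated:
            front[name] = (cost, score)
    return front

print(sorted(pareto_frontier(models)))
# ['frontier-large', 'frontier-mini', 'open-weights']
```

"mid-tier" drops out because "frontier-mini" is both cheaper and better; everything that survives represents a genuine cost/quality trade-off.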
- DPDylan Patel
I mean, I think definitely, right? Like, OpenAI said they doubled their rate limits for, uh, you know, cer- for big amounts of, uh, users. They've, they've dramatically increased the number of tokens they're serving from this launch-
- GAGuido Appenzeller
Mm
- DPDylan Patel
... which effectively says this is an economic release. Um, and I think, like-
- GAGuido Appenzeller
And, and probably also means the tokens are now cheaper, right? Otherwise-
- DPDylan Patel
Yeah, yeah. For sure, for sure.
- GAGuido Appenzeller
Yeah.
- DPDylan Patel
I, I think the funniest thing is, like, you know, this whole cost thing you mention is like we've seen this in the code space, right? Like, Cursor had to pull away the unlimited plan. Um, Claude Code, you know, initially they had, like, this super expensive plan, and it had, like, unlimited rates, and then they moved to only, like, a weekly rate limit. Now they have, like, hour-based rate limits. Um, and I saw the craziest, like, thread on Twitter where this guy said he changed his sleep schedule, right? Uh, modeled after, like, how sailors in the bay-
- GAGuido Appenzeller
[laughs]
- DPDylan Patel
Like, you know, like, if you're sailing you can't sleep, right? Like, s- solo sailing, uh, they'll take, like, power naps, uh, when they get to the right spot so that they can, like, still be safe.
- GAGuido Appenzeller
In the morning when it's not very windy. [laughs]
- DPDylan Patel
Well, but, like, they can't sleep uninterrupted, right? And so because Anthropic had to put rate limits that are, like, not just week based but, like, number of hours based, he's like in, like... He, like, basically sleeps, you know, multiple times a day but-
- GAGuido Appenzeller
That's amazing
- DPDylan Patel
... small chunks just so he can maximize the usage. And there's also a leaderboard on Reddit-
- GAGuido Appenzeller
Oh, my goodness
- DPDylan Patel
... where people are, like, competing to see how many tokens-
- GAGuido Appenzeller
That is amazing
- DPDylan Patel
... they're using through their subscription, and there's, like, a dude spending, like, $30,000 a month.
- GAGuido Appenzeller
So I'm gonna find some developer in India that I can do pair programming with so I can get the day cycle, he can get the night cycle, [laughs] you know, and we both can, can maximize the, together the quota for the account. Is that the future then? [laughs] It's-
- DPDylan Patel
Um, I mean, but it's, it's clear, like, people are taking advantage of the negative gross margin, like, sort of-
- GAGuido Appenzeller
Yeah
- DPDylan Patel
... uh, you know, subscriptions that are offered. Um, you know, I think, I think Anthropic probably makes a positive gross margin off of my subscription. Like, I don't code enough, but, like, there's plenty of people that are definitely losing money.
- GAGuido Appenzeller
Yeah.
- DPDylan Patel
Um, and so as you said, it's an economic-
- GAGuido Appenzeller
It, it, it'll push more and more to, I think, just usage-based pricing, right?
- DPDylan Patel
Yeah.
- GAGuido Appenzeller
I think if, if you have a, if you have an underlying commodity that you're reselling to some degree that has, that is that large a part of your cost of goods, right, you need to go to usage-based pricing.
- 10:10 – 12:30
Usage-Based Pricing & Product Stickiness
- DPDylan Patel
How much, how, how much do you think the, like, um, customer capture and stickiness for these code products is? Like, um, yeah, I'm, I'm curious what you think on that, right? Like, y- you know, once you use an IDE, once you integrate, you know, one of the CLI products in, like, how sticky is it or is-
- GAGuido Appenzeller
That is a billion-
- DPDylan Patel
... people just switch after? [laughs]
- GAGuido Appenzeller
That is a billion-dollar question. That's a very conservative estimate. Look, uh, A- A- Andrej Karpathy has this, this great slide where he basically says if you're building an agentic system today, fundamentally what it is is sort of this loop, right, where, where half of the loop is the model thinking, right, and, and then trying to do something. The other half is then the, the, the user verifying what did the agent do, is it the right thing, providing feedback and trying to steer it in the right direction because, you know, it can't run forever. Eventually you need to, you need to, to, to steer it back. One half of that is, is the model provider, right? They're trying to build the, the best models. The other half is really about, I think, designing the best possible UI to enable the user to give feedback, and I think there's value in that. So I think there's a certain amount of stickiness in there, right? So what are all the different tools, like, in terms of vis- like, say, take code editing, right? How can I most easily visualize what the code changes are? How can I, I most easily visualize, you know, what they impact, which files? You know, how can I for small changes get very quick feedback versus for complex ones, you know, get complex feedback? And there's now some tools that actually draw diagrams for you of what they do, right? [laughs] So I think, so I think this will be the battle, is I think there's stickiness in that, right? Uh, you know, how much exactly? This is a great question.
- DPDylan Patel
So in that sense, like, people should be doing subscriptions to get people locked in, right? Instead of moving to usage-based pricing.
- GAGuido Appenzeller
Well, I think it's the customers that don't wanna do usage-based pricing-
- DPDylan Patel
Yeah
- GAGuido Appenzeller
... because it's so hard to predict, you know... It, it's so easy for the spend to get, y- to get, to get away from them, and you, you actually want guarantees, and you're, you're willing to, like, say... You're, you're willing to commit to pretty high spend in order to not have usage-based pricing. I think it's the model companies that want usage-based pricing.
- DPDylan Patel
I, I think with consumers, [coughs] it's frankly very hard to h- not have usage-based pricing just because the, the variability is so massive, right? If it's, like, you know, us coding versus somebody who does this as their full-time job, right? Th- the... You just have a factor of 20 or so difference in usage, and if that is... Th- that costs a lot of money, right?
- GAGuido Appenzeller
Mm-hmm.
- DPDylan Patel
The variance is just huge. I think for enterprises we could, we could see, like, more flat-fee pricing because you can average it out more.
- GAGuido Appenzeller
You have a, a developer that's using it all day, you kinda know in a general sense.
- SPSpeaker
Like, h-h-how many hours a day they're programming and, and what that sort of looks like.
- GAGuido Appenzeller
Yeah.
- DPDylan Patel
The, the vibe quotas are harder.
- SPSpeaker
[laughs] Yeah.
- 12:30 – 14:18
Advice for Sam Altman: Monetizing OpenAI
- GAGuido Appenzeller
Before we leave OpenAI, I, I wanna ask a broad question, which is if Sam Altman was, was sitting here and saying, "Hey, Dylan, I'll listen to anything you, you tell me to do, any advice you have, as long as it makes OpenAI more valuable," what would you, what would you tell him?
- DPDylan Patel
I would say, like, immediately launch a, uh, a method for you to input your credit card into ChatGPT and agree that for anything it, like, agentically does for you, it'll take X cut, and then launch that product because, uh, where, where it does shopping, right? 'Cause, like, we know-- everyone knows that, like, Anthropic and OpenAI and all the other labs are buying RL environments of Amazon, and of, like, Shopify, and of Etsy, and of all the different ways to shop on the internet. Of, of airline websites, right? Now, just like, "Hey, integrate my calendar. I wanna, I wanna fly to there on Thursday. Make sure I don't miss a meeting." Cool. Booked, right? Like, do, do that integration, like, super well. Know my preferences on wha-wha-whether I like aisle or window. All this stuff, right? And just take a take rate. Like, I think, like, this will make them so much money the moment they launch it, and I, I think they're working on it already, but, like, I'd like to hear how he thinks about it, uh, 'cause he's shifted his tone massively on, like, ads over the last six months, right? He used to be like, "No way," and now he's like, "Uh, maybe," you know? Uh, there's a way to do it without, you know, harming the user, and I think this is, this is how you monetize the free user, right? So I think, uh, that's probably what I'd tell him/ask him about. Like a whole line of questions-
- GAGuido Appenzeller
Yeah
- DPDylan Patel
... around this.
- GAGuido Appenzeller
I, I wanna shift to NVIDIA. Uh, NVIDIA's having a monster year. They're up almo-almost seventy percent. W-what are the possible paths fro-from here? How do you see, how do you see it playing out?
- DPDylan Patel
Um, depends, like, how, how pilled you are on, like, the continued growth, but, like, I think, I think you guys have a good vantage point, and we have a good vantage point of how fast revenue's growing for a lot of these companies-
- SPSpeaker
Mm-hmm
- DPDylan Patel
... especially the code companies,
- 14:18 – 21:27
NVIDIA’s Growth & The Future of AI Compute
- DPDylan Patel
but, but even many other applications. Um, I think we can clearly see, uh, the demand side is, is accelerating, right? Um, and, and then if you look at the training side, I think the, the race is on. Um, you know, Meta's, Meta's upping hugely. Uh, Google's upping hugely. Um, you know, if you, if you just look at, like, again, just OpenAI and Anthropic and the compute that they have and are getting this year, uh, from Google and Amazon for Anthropic and from Microsoft, CoreWeave, Oracle, uh, for OpenAI, thirty percent of the chips are going to them, just those two companies. Um, but then it's, like, okay, well, the other seventy percent of the stuff, like, who's making money off of it? Well, one third of it is, like, ads, right? Um, whether it be ByteDance or, uh, Meta or many of the other people who are doing ads. So then it's still like, okay, well, where is the last third of the chips going? Well, they're, like, mostly uneconomic providers who I don't think it's, like, an obvious bet that they're going to keep growing, um, you know, keep raising bigger and bigger rounds. So what happens there? Um, I think, I think with the... You know, we, we talked about, like, coding, right, like earlier. Actually, the Qwen3 Coder model is actually super cheap if you're running it on-prem or if you're running it in the cloud with all these inference libraries. Um, and so, like, there's stuff like that as well. So I think the question is, like, how much does it keep growing? Um, because clearly, you know, I think the first third is definitely skyrocketing, right, of, of OpenAI, Anthropic, you know, lab spend. The second third of, like, ads is gonna grow. It's not gonna grow like crazy, but, like, I think there's definitely an inflection point that could be hit with gen A- gen AI ads. I know Meta's been experimenting with it a lot.
Um, but I could totally be convinced that there's gonna be a huge inflection in, like, uh, take rate there, right, where you start showing me personalized ads. Um, like every person that's an ad is like-- looks like me, and I'll be like, "Okay, yes." Except, like, slightly better, so I, like, feel better, right?
- SPSpeaker
Inspiration.
- DPDylan Patel
And I'm like, "I wanna buy it." Yeah.
- GAGuido Appenzeller
Um, it's, it's super interesting, the, the... I have no idea how this is gonna, gonna scale, right? But, but if you ask the question, how much could it scale, right? Like, how much value are we creating here? Is this-- Can we create enough value to actually, you know, keep growing for, for a long time? If you just take AI software development, right?
- DPDylan Patel
Yeah.
- GAGuido Appenzeller
We know we can easily get about fifteen percent more productivity out of a developer.
- DPDylan Patel
I don't think that's right. I think it's way higher.
- GAGuido Appenzeller
No, no, the-- W-with the, with the straight... Like, I talk to a lot of enterprises. Like a, a classical enterprise, straight up, you know, GitHub Copilot deployment, that gives you about fifteen percent. We can do much more than that.
- DPDylan Patel
But bro, like [chuckles] you know how bad GitHub Copilot is?
- GAGuido Appenzeller
[laughs]
- DPDylan Patel
Like, like, like how did they-
- GAGuido Appenzeller
We-
- DPDylan Patel
How-- Th-they-- Look at the revenue ARR chart.
- GAGuido Appenzeller
This was the lower bar.
- DPDylan Patel
It's so funny. It's so funny. If you look at the revenue ARR chart and it's like, it's, like, Claude Code in like three months has surpassed them, and, like, Cursor co- you know, easily surpassed them, and then, like, even, like, companies like Replit are, like... and Windsurf/Cognition are, like, gonna pass them. Like, it's like-
- GAGuido Appenzeller
You're preaching to the choir
- DPDylan Patel
... what's going on? [laughs]
- GAGuido Appenzeller
So, so look, let's assume we can get this to a hundred percent.
- DPDylan Patel
Yeah.
- GAGuido Appenzeller
Right? So it's we can double the productivity of a developer, right? About thirty million developers worldwide, give or take.
- DPDylan Patel
Yeah.
- GAGuido Appenzeller
Right? Let's say 100K value add per developer. This might be a little high worldwide. For the US it's low, but, but worldwide it's high. So that's three trillion dollars.
- DPDylan Patel
Yeah, yeah.
- GAGuido Appenzeller
Right? So, so w-we're probably building technology here which adds three trillion vol- dollars of GDP value.
- DPDylan Patel
Yeah.
- GAGuido Appenzeller
In theory, we could put that into, into GPUs [chuckles] because that's the main cost factor here, right?
- SPSpeaker
Just, just from a coding model, like not-
- GAGuido Appenzeller
Just from a coding model. This is one use case, right?
- SPSpeaker
Ignoring every other use case.
- GAGuido Appenzeller
So, so at least in theory, the value generation is here to keep growing, right? Now, how that translates to the industry is much more complicated.
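Guido's back-of-the-envelope math is simple enough to write out; all inputs are the round numbers quoted in the conversation, not measured figures:

```python
# Back-of-the-envelope sizing of the AI-coding value pool, as discussed above.
developers = 30_000_000        # ~30 million developers worldwide
value_per_dev = 100_000        # assumed $100K of value add per developer/year
productivity_gain = 1.0        # "double the productivity" = +100%

total_value = developers * value_per_dev * productivity_gain
print(f"${total_value / 1e12:.0f} trillion per year")  # $3 trillion per year
```

This is a ceiling on value created, not revenue captured; as Guido notes, how it translates to the industry is much more complicated.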
- 21:27 – 26:09
Custom Silicon: Threats to NVIDIA
- GAGuido Appenzeller
What about custom silicon?
- DPDylan Patel
I think that's the biggest thing, right? Is, um, when we look at orders from Google and from, um, Amazon, right, and, and Meta, for their custom silicon... Not, not Microsoft, their custom silicon kinda sucks. Uh, but the other three, they're really upping their orders massively over the last year. Um, you know, Amazon is making millions of Trainiums. Google's making millions of TPUs.
- SPSpeaker
Mm.
- DPDylan Patel
Um, TPUs clearly are like a hundred percent utilized, right?
- GAGuido Appenzeller
Yeah. They're doing very well.
- DPDylan Patel
Um, Trainium's not there, but I think Amazon will figure out how to do that, um, and Anthropic will. Um, so, so I think, I think that's the biggest threat to NVIDIA, is that people figure out how to use custom silicon more broadly. Um, and this sort of becomes this sort of like if AI is concentrated, then custom silicon will do better. Um, and that's not even talking about like OpenAI's silicon team and stuff, right? Like if AI is really concentrated, uh, then, then they'll do better, uh, custom silicon. But if it gets dispersed broadly because there's all these open source models from China, um, and there's all these, um, open source software libraries from, you know, NVIDIA and, and China, and it makes the deployment costs like rock bottom, then potentially-
- GAGuido Appenzeller
Hear me out here. If, if Google's TPU is, is able to compete with NVIDIA, in theory it could do it on the, on the open market. NVIDIA is worth more than Google these days. Shouldn't Google start selling their chips to everyone? I mean, in theory, they should be able to achieve a higher market cap that way, right?
- DPDylan Patel
I, I, um, I absolutely think so. I think Google's even discussing it, um, internally. I think it would require a big reorg of culture, um, and a big reorg of like-
- SPSpeaker
Mm
- DPDylan Patel
... how Google Cloud works, um, and how the TPU team works, and how the JAX and XLA software teams work. Um, I totally think they could. Um, it would just take them like shaking themselves pretty hard to be able to do it. Um, yeah, uh, but I, I, I, I totally think Google should sell TPUs externally. Not just renting, but like physically with racks.
- GAGuido Appenzeller
It's, it's kind of funny if a side hobby, in theory, has a higher c- uh, company value potential than your main product now. [laughs]
- SPSpeaker
Than your entire business. Especially as you think about the degradation of search-
- DPDylan Patel
So I think-
- SPSpeaker
... as a core business. I mean...
- DPDylan Patel
Yeah, I think... But I think like if you were to ask like Sergey, right, like, "Hey, do you think selling chips and in- and racks is more valuable or cloud or, or Gemini?" Um, he'd be like, "No, no, no, no, no. Like Gemini is gonna be worth way, way, way more. It's just not yet today," right? Um, and so I think like, like today you say NVIDIA's the most... Again, it's like a whole concentration thing, right? If the world is super concentrated in terms of customers, then NVIDIA will not be the most valuable company in the world, right? Um, but if it gets dispersed more and more, um, which arguably we're starting to see with a lot of these open source models getting better and better and better, um, and with the ease of deploying them getting better, then you would see... I think you could argue NVIDIA will remain the most valuable company in the world for a long period of time. Um-
- GAGuido Appenzeller
And hi- historically, no pun intended, software has eaten the world in most markets-
- DPDylan Patel
Yeah
- GAGuido Appenzeller
... right? I mean, like if, if you look at, uh, early networking days, Cisco was the most valuable company on the planet, right, for a while. It's no longer, right? The, the guys that build services on top like, like Google or Amazon or, or Meta eventually eclipsed that.
- DPDylan Patel
Well, which is why NVIDIA's, like, making all these software libraries, right?
- GAGuido Appenzeller
Mm.
- DPDylan Patel
Like, that's, that's-- And they're trying to commoditize inference, right? Like, um, I-- You guys don't, I think, even have an inference API provider investment, do you? Um...
- GAGuido Appenzeller
Well, we have, we have all kinds of model providers.
- DPDylan Patel
Model providers.
- GAGuido Appenzeller
But, but that's a different-- Yeah.
- DPDylan Patel
But I'm talking about a pure API provider investment, I think, right? Is that correct? I think I talked to, um, one of the team members, maybe, maybe Rog'co or someone about, like, why you guys don't-- didn't invest in, like, you know, like a Together or, like, a Fireworks. And sort of the argument was, like, "Well, we think infer- just serving models, uh, alone without making them will sort of be commoditized."
- GAGuido Appenzeller
Yeah.
- DPDylan Patel
Right? Um-
- GAGuido Appenzeller
We, we have some in the Stable Diffusion ecosystem, like with, like, uh, Fal-
- DPDylan Patel
With Fal, yeah, yeah
- GAGuido Appenzeller
... or the Gate.
- 26:09 – 45:28
The Silicon Startup Boom
- GAGuido Appenzeller
Uh, what's, what's your take on those? I mean, there, there's, there's a ton of capital flowing in-into that, right? We've seen, I don't have numbers, but probably billions, uh, being invested in, in chip startups.
- DPDylan Patel
Yeah, for sure. For sure. I mean, like, whether you're looking at, like, um, you know, companies like-- I think it's, like, pretty impressive that a few companies like, um, Etched and Rivos, um, and a number of other companies, you know, MatX and, and others, like, have gotten the amount of funding they've had without even launching a chip, right? Um, you know, in the past, like, yes, silicon companies would make money or raise money, but they would at least launch a chip before they get a, you know, a big round. But, like, uh, Etched and Rivos, like, have raised, you know, a lot of money without ever launching a chip publicly-
- GAGuido Appenzeller
Yeah
- DPDylan Patel
... which I think is... I mean, it speaks to, well, like, yes, silicon is super, uh, capital intensive if you're building a chip, especially an accelerator, which has so many moving pieces. Um, and there's, there's, like, there's, like, ten different AI accelerator companies out there, right? Like, that are new-ish in the last few years.
- GAGuido Appenzeller
I think there's a lot more.
- DPDylan Patel
[laughs] Uh, that, that are like-
- GAGuido Appenzeller
[laughs]
- DPDylan Patel
Yeah, yeah, yeah. That's fair. Um, and then, and then, and then there's the old guard, which continues to raise money-
- GAGuido Appenzeller
Yeah
- DPDylan Patel
... right? Like Groq and Cerebras and-
- SPSpeaker
Mm-hmm
- GAGuido Appenzeller
SambaNova
- DPDylan Patel
... and SambaNova and, and Tenstorrent and so on and so forth, right? Like, um, or Graphcore getting bought out by SoftBank, and SoftBank dumping money into this effort as well, right? There's, there's a lot of capital being invested to disper- dis- uh, displace sort of NVIDIA's, uh, top position. Um, but it becomes challenging, right? It's like, how do you beat NVIDIA, right? Like, the hyperscalers, I think, are, like, kind of lucky in that they can, they can do mostly the same thing as NVIDIA, right?
- GAGuido Appenzeller
They're, they're captive customer, which is themselves, right?
- SPSpeaker
Yeah.
- DPDylan Patel
Right.
- GAGuido Appenzeller
It's a huge asset, yeah.
- DPDylan Patel
And it's-- They can, they can just win on supply chain, right?
- GAGuido Appenzeller
Yeah.
- DPDylan Patel
Like, I'm using cheaper providers.
- GAGuido Appenzeller
It's a, it's a margin compression exercise, essentially, right?
- DPDylan Patel
Yeah, yeah. And maybe for certain workloads, like Meta for recommendation systems, they can specialize more. Um, but for the most part, it's like, "No, we're targeting the same workloads. We can just simplify the supply chain, or in-house a lot of it and compress margin, and it'll be fine." But in the case of these other companies, well, they don't have a captive customer. So now you have to contend with, "Well, I'm using the same ecosystem, and either I can use some custom silicon provider who's gonna take a margin anyways on top, which compresses what I can sell for. Or I can try and in-house everything." But then it's like, this is really hard, right? I'm gonna do all the software design, all the silicon design. I'm gonna build all this different IP. I'm gonna manage the supply chain on chips, on racks, on everything. It ends up being a huge effort in terms of team size. And at the end of all that, it's like, hey, NVIDIA makes a seventy-five percent gross margin. AMD sells their GPUs for fifty percent gross margin, and they have a hard time out-engineering NVIDIA, and they're great at engineering, right? But yet they still take more silicon area, more memory to achieve the same performance, and they have to sell for less, so their margin gets compressed, right?
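The margin squeeze Dylan describes can be put into a rough sketch. The seventy-five and fifty percent gross margins are from the conversation; the unit costs and the 1.3x silicon/memory overhead are purely illustrative assumptions:

```python
# Rough sketch of the margin-compression math for an NVIDIA challenger.
# Only the 75%/50% gross margins come from the discussion; every dollar
# figure and the 1.3x overhead are hypothetical, for illustration only.

def street_price(unit_cost, gross_margin):
    """Selling price implied by a unit cost and a target gross margin."""
    return unit_cost / (1 - gross_margin)

# Assume the incumbent builds a part for $10k and sells at ~75% gross margin.
nvidia_cost = 10_000
nvidia_price = street_price(nvidia_cost, 0.75)        # $40k

# A challenger needing ~1.3x the silicon/memory for the same performance
# (so ~1.3x the unit cost), who can only command ~50% gross margin:
challenger_cost = nvidia_cost * 1.3
challenger_price = street_price(challenger_cost, 0.50)  # $26k

# The challenger undercuts on price, but compare gross profit per unit:
nvidia_profit = nvidia_price - nvidia_cost              # $30k/unit
challenger_profit = challenger_price - challenger_cost  # $13k/unit
```

Even while selling cheaper, the challenger in this sketch clears far less gross profit per unit to fund its next design, which is the compression being pointed at.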
- GAGuido Appenzeller
That makes sense. But look... I think historically, if you look at it, new entrants in markets typically didn't win by marginally improving on something existing. That happens sometimes, but more likely they jumped on some kind of disruptive technology leap, right? Where it's like, we have a different approach, we have different technology. Is that possible here? I mean, to some degree, and maybe this is oversimplifying it a little bit, but I think part of the reason why the transformer model won was because it runs so incredibly great on GPUs, right?
- DPDylan Patel
Yeah.
- GAGuido Appenzeller
Like a recurrent neural network is similarly performant, it looks like, but it, [laughs] it runs terribly on a GPU. So did we sort of pick the model for an [laughs] architecture, and now it's hard to come up with an architecture that, uh, you know, really-
- DPDylan Patel
Well, it, it's, it's hardware-software co-design, right?
- GAGuido Appenzeller
Exactly.
- DPDylan Patel
Like, there is like-
- GAGuido Appenzeller
Yeah, exactly. [laughs]
- DPDylan Patel
There's all this hype about, um, neuromorphic computing, right? Like, theoretically, it's amazing and super efficient. But it's like, okay, great, there's no ecosystem of hardware, there's no ecosystem of software. It would take, like, tens of thousands of people who are the best at AI today focusing on that to even prove out if it's worthwhile or not, right? On the hardware side, on the software side, on the model side. And so you look at, like, Groq, Cerebras, SambaNova: they all sort of over-indexed to the models that were leading at the time when they designed their chips.
- 45:28 – 50:46
Data Center Power & Cooling: The Next Bottleneck
- DPDylan Patel
center are like... You know, there's this whole narrative about like, oh, AI uses so much power, and it's like, not really. Um, you know, farming alfalfa uses like 100X the water of AI data centers. Even by the end of the decade it'll be the same, and alfalfa's worth very little. And cooling is, like, not that big a deal either. You know, people have experimented with, like, undersea data centers to reduce the cooling cost, but it's like-
- GAGuido Appenzeller
Yeah, that doesn't make sense [laughs]
- DPDylan Patel
It's like five, 10% savings-
- GAGuido Appenzeller
It's-
- DPDylan Patel
But then like if you wanna-
- GAGuido Appenzeller
It's easier to get the water out of the ocean than, than to put the data center into the ocean I think [laughs]
- DPDylan Patel
It's like, if you wanna service it, you're screwed, right? And it's the same with power. We talk a lot about it, but the power is not actually that expensive. It's just hard to build, right?
- GAGuido Appenzeller
And hard to get to the right place, right? I think you said.
- DPDylan Patel
And hard to get to the right place and convert it down to the voltages and all this stuff that chips need. But like-
- SPSpeaker
So it's less the magnitude of power and more where it is and how it moves.
- DPDylan Patel
Well, the magnitude too, right? Like it's gonna be-
- GAGuido Appenzeller
In, in terms of total worldwide energy consumption, AI data doesn't, is still a, a-
- DPDylan Patel
It's like-
- SPSpeaker
Very small. Right. Nothing.
- GAGuido Appenzeller
It's like, yeah.
- DPDylan Patel
Even-
- GAGuido Appenzeller
It's a fraction of a percent. It's not-
- DPDylan Patel
Yeah, yeah.
- GAGuido Appenzeller
Yeah.
- DPDylan Patel
Even by the end of the decade, uh, you know, AI data centers will be like 10% of US power, which is still like-
- GAGuido Appenzeller
Of electricity.
- DPDylan Patel
Of our electricity.
- GAGuido Appenzeller
Yeah. And in terms of energy, that's even a smaller fraction, right?
- DPDylan Patel
Oh, yeah, yeah. 'Cause you think about vehicles-
- GAGuido Appenzeller
By shifting to electric vehicles also, you can probably make a bigger swing than, than, uh, you know, with all the AI data centers we can build in the first place.
- DPDylan Patel
But outside the US, like in Europe, that number's not moving up that fast, and same with all these other countries. I, I think we need to build a lot more power, but it's not like some crazy, crazy amount. It's just that doing it properly is the hard thing.
- GAGuido Appenzeller
Yeah, I agree.
- DPDylan Patel
Um, and again, the cost of power: you go look at these deals people are signing, they're still signing even though the price has skyrocketed for these massive, massive purchases, from like a few cents a kilowatt hour to like 10 cents. It's still, you know, when you think about the full TCO of a cluster, the GPU cost, the networking-
- GAGuido Appenzeller
It's a couple percent, yeah
- DPDylan Patel
... all of that stuff far outstrips the power.
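That TCO point can be sketched back-of-envelope. Only the few-cents-to-ten-cents price move comes from the conversation; the power draw, capex, and amortization figures below are illustrative assumptions:

```python
# Back-of-envelope: electricity's share of annual GPU-cluster TCO.
# All inputs are illustrative assumptions, not figures from the episode,
# except the ~$0.10/kWh post-spike power price mentioned above.

HOURS_PER_YEAR = 8_760

def annual_power_cost(kw_per_gpu, price_per_kwh):
    """Yearly electricity bill for one GPU slot running continuously."""
    return kw_per_gpu * HOURS_PER_YEAR * price_per_kwh

# Assume a GPU slot draws ~1.4 kW all-in (GPU plus its share of
# cooling and networking), bought at the elevated ~$0.10/kWh.
power_per_gpu_year = annual_power_cost(1.4, 0.10)   # ~$1.2k/year

# Assume ~$40k of capex per GPU (GPU, networking, rack share),
# amortized over 4 years -> ~$10k/year.
capex_per_gpu_year = 40_000 / 4

power_share = power_per_gpu_year / (power_per_gpu_year + capex_per_gpu_year)
```

Under these assumptions, even at the post-spike ten cents, power lands around a tenth of annual cluster TCO; the capex amortization dominates.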
- 50:46 – 57:56
Intel’s Role in the AI Era
- DPDylan Patel
right? He's invested in so many different companies first. Um, you know, he was on the board of SMIC, which is effectively China's TSMC, which is now, like, a big drama. And some of the biggest tool companies in China, he was the first investor in them because, you know, it's a multipolar world and he was making good investments. Now people are getting mad about that, but it's like, no, he understands the supply chain. He needs to not spend his time on splitting the company, because then he never actually fixes the company, right? Intel's problem is that it takes them five to six years to go from design to shipping the product, in some cases more. And when they tape out a chip, right, you send the design to the fab, the fab brings back the chip. They go through fourteen revisions in some cases, whereas the rest of the industry goes through like one to three revisions if they're good: send the design in, get the chip back, test it, send the design in again. And for a public launch, a good company will launch a chip in three years.
- GAGuido Appenzeller
So but i-i-if you look at Intel today, right, they still don't have a competitive entry on the, uh, on the AI side.
- DPDylan Patel
And they won't.
- GAGuido Appenzeller
Uh, uh, right. C-can you... So what... But what does it mean for their offering? I mean, they, they, they're still doing great on, on, on CPUs. They don't have a good AI, uh, AI chip product. Is it long-term sustainable positioning, right, I mean, as, as, as a, as a standalone chip company?
- DPDylan Patel
I mean, IBM still makes money off of mainframes with every launch cycle. So it's not like x86 is dead. You don't get the growth rates, but, like, you could totally run this as a very profitable enterprise. Um, and I think the same with PCs, right? There's, you know, some turmoil, some Arm entry, some AMD competition and Apple-
- GAGuido Appenzeller
They don't care about most of them, yeah
- DPDylan Patel
... but, like, I think it's a very, it can be a very profitable business if it had, like, one-third the people or half the people working on it. And so, like, Lip-Bu Tan to fix Intel needs to go into both the design company and lay off a shitload of people, but, like, keep all the good people, um, and make sure that they're designing fast and they're launching from design conception to launch is two to three years, not five to six. Um, and, and that's on the design side and make that profitable. And then on the fabs, he has to do the same thing. There's all these people, like, um, one of the heads of, of, uh, fab automation at Intel-
- GAGuido Appenzeller
Yeah
- DPDylan Patel
... um, I explicitly told Lip-Bu Tan about, because, you know, we have a couple ex-Intel people in the company who are actually good, who worked on the fab side. We asked them, "Who are the worst people there?" And it's like, "Oh, this guy sucks." So I explicitly told Lip-Bu Tan. He had never talked to the guy because it was, like, four layers down; the company has, like, absurd amounts of hierarchy. He goes and talks to the guy, and the guy is out, right? He figures out who's bad and who's good. And he has to go in recognizing, "Hey, the vast majority of the team at Intel is the team that led the world in production and process technology for twenty years."
- GAGuido Appenzeller
Yeah.
- DPDylan Patel
Right? But there's a lot of, like, built-up crap. So he has to go figure this out, right? He can't waste his time on, like, all this structuring to split. I think it would be better if the company split; I just don't think he can spend the time to do that. Um, and on the design side of the company, you're not really gonna get into AI, but you have to make some money there. The fabs, I think, could truly become a competitor. But they're gonna go bankrupt before anything can happen. So he has to figure out how to get capital. He has to figure out how to clean up all the crap-
- GAGuido Appenzeller
Yeah
- DPDylan Patel
... uh, make the, um, you know, yields go up, right? Make the product ship way faster. Like, all of these things are basic problems.
- GAGuido Appenzeller
I think the goals are completely correct. I think the big challenge, just reflecting back on my time there, is that if you look at Intel today, they essentially have the software side, the chip design side, and then the core manufacturing part, right? And those are three very different cultures, and it's very hard to get everything under one umbrella, right? And so I think that is the big challenge, right?
- DPDylan Patel
I think you c- you should even run-
- GAGuido Appenzeller
But he has a lot of money
- DPDylan Patel
... the company separately, right? But you can't physically separate them entity-wise, because it's gonna take so long to sever all these things, and he doesn't have time, right? Like, Intel is literally gonna go bankrupt if they don't get a big cash infusion or lay off like half the company, right? And some could argue you need to lay off like thirty percent of the company anyways. But there's a lot of bad things that happen if that happens, right? And they need to spend a lot more on building the next-generation fab, even if they fix the fab, and they don't have money for that. So there are a lot more important problems than physically separating the company, even though I think long term, yes, the fab has to be separate from the chip design part of the company. That's gonna make each company much more accountable, able to service their customers better, et cetera. It's just that's gonna take too long, and they're gonna go bankrupt by then.
- ETErik Torenberg
Awesome.
- DPDylan Patel
But I think, I hope, I pray someone does something, right? Like, you get a big capital infusion. I don't know, maybe the big hyperscalers muscle in, like, oh, okay, wait: if TSMC eventually grows their margin to seventy-five percent because they're the monopoly, plus they intake all this stuff like co-packaged optics and power delivery, all of a sudden the cost is gonna spike. So we should actually just throw five billion dollars at Intel each, right? Screw it. And that could actually give Intel enough of a lifeline to potentially get to something and maybe be competitive. That's, that's the hope.
- ETErik Torenberg
Can we, uh, can we finish by finishing th-this game that we started when we gave Sam Ad-- uh, Altman advice? If, if Jensen was here, what, what advice would, would you have for him?
- DPDylan Patel
Hmm. If Jensen was here...
- ETErik Torenberg
Ha.
- DPDylan Patel
Uh, you know, I think he has a massive, massive balance sheet, right? Jensen does. His free cash flow is, like, you know, ridiculous. Um, and the new Trump tax bill institutes something really incredible, which is that you can depreciate all of the GPU cluster cost in year one. We put out a note about how the tax implications to, like, Meta are like ten billion dollars a year, and across each of the major hyperscalers it's massive. So it's like, well, NVIDIA's gonna spend tons and tons of cash, uh, they're gonna pay, like, tens of billions of dollars in taxes. Why don't you get into the infrastructure game somehow? Now, this is obviously gonna be, like, crazy, because now they're buying their own GPUs and putting them in data centers and doing stuff, and they're competing with their own customers. But they're already doing that anyways, because their customers are trying to make chips. Um, but they should accelerate the data center ecosystem with investments, right? Because really, we think we can predict with a very high degree of accuracy what they're gonna do next year in terms of revenue, because it's just the number of data center watts that are being built, right? That's a harder thing to shift up and down. Now, there's a little bit of share difference between how much is TPU versus GPU, but it's like, you have to accelerate the infrastructure, and you need to spend all of this capital that you're generating, right? Like, okay, do you wanna go the route of doing buybacks and dividends? Great, like, you're a loser if you do that, right? You can make more money by reinvesting and building a bigger company that's not just chips into the ecosystem or servers into the ecosystem, but actually controlling the infrastructure end to end somehow.
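The year-one depreciation effect can be sketched quickly. The twenty-one percent rate is the current US federal corporate rate; the capex figure and the five-year baseline schedule are illustrative assumptions, not numbers from the episode:

```python
# Sketch of the year-one ("bonus") depreciation effect described above.
# Illustrative only; real corporate tax math is far more involved.

TAX_RATE = 0.21  # current US federal corporate rate

def year_one_tax_shield(capex, tax_rate=TAX_RATE):
    """Cash taxes deferred in year one by expensing capex immediately
    instead of straight-lining it over an assumed 5-year schedule."""
    straight_line_deduction = capex / 5
    extra_deduction = capex - straight_line_deduction
    return extra_deduction * tax_rate

# A hyperscaler putting an assumed ~$60B/year into GPU clusters:
shield = year_one_tax_shield(60e9)  # ~$10B of year-one cash-tax deferral
```

Under these assumed inputs, the year-one cash-tax deferral comes out near the ten-billion-dollars-a-year figure cited for Meta; it is a deferral rather than a permanent saving, since later years lose the deduction.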
Um, so I think there's something he could do there, uh, with this massive war chest. And there's a reason, like, NVIDIA's done some buybacks and some dividend increases,
- 57:56 – 1:08:17
Advice for Tech Giants: NVIDIA, Google, Meta, Apple, Microsoft
- DPDylan Patel
but the cash on their balance sheet keeps growing, and they're gonna have north of a hundred billion dollars of cash on their balance sheet by the end of this year, I think. Um, so it's like, what are you gonna do with that? Um, I think, I think there's something moving into the infrastructure layer much more, um, that they, they could do if he really wants to be the king of the world, right? Uh, which I think he does.
- ETErik Torenberg
Uh, Sergey and Sundar?
- DPDylan Patel
Um, ooh. Um-
- ETErik Torenberg
[laughs]
- DPDylan Patel
I think they should open up the kimono on TPUs, right? Like, start selling them. Open up the software. Open-source a lot more of the XLA software, 'cause there's OpenXLA and there's XLA, but the vast majority is closed source. Really open up the kimono on that, and be a lot more aggressive, right? They're still pretty not aggressive on data centers, pretty not aggressive on a lot of elements of the company. The TPU team's next-gen designs are pretty not aggressive, partially because a lot of the TPU team has left to go to OpenAI, the best people that I knew. Uh, it was actually really annoying. I knew, like, four or five people, and they all went to OpenAI [chuckles] and it's like, fuck, like, now I don't get as much insight-
- ETErik Torenberg
It's alright.
- DPDylan Patel
I met some other people, right? But it's like, you know... Um, I think they could be a lot more aggressive in many ways across the company. They don't have to be, right? But they could.
- ETErik Torenberg
Right.
- DPDylan Patel
Um, because, you know, this ChatGPT take rate, the shift of search queries, especially the monetizable ones, to purchasing agents, is gonna really screw Google long term if they don't, you know, get their act together. I think they've gotten their act together on DeepMind. There's still some inefficiencies, but Sergey works within DeepMind a lot and they're driving hard. They're still a little bit behind, but, like, I think, like, physical infrastructure, TPU, um, and how much money they could make and how much they could take the wind out of everyone else's sails-
- ETErik Torenberg
Yeah
- DPDylan Patel
... if they start selling TPUs externally, um, and reorg around like building data centers much faster so that they do have the most compute in the world, 'cause they did. Uh, but now there's certain companies that are gonna surpass them, uh, potentially over the next few years if they don't really get their act together. So I think that's what I would say for them. Um, yeah. And also like, like learn how to ship product better, right? [laughs]
- ETErik Torenberg
[laughs] Um, Zuck.
- DPDylan Patel
Um, I think, I think Zuck... You know, it remains to be seen what goes on with superintelligence, but they're trying to move super fast with the data centers. You know, like, screw it, we'll build tents instead of, like, physical data centers, 'cause we only need these for five years anyways. Um, and the superintelligence moves, you could say whatever you want, but, you know, trying to buy, like, Thinking Machines for, like, thirty billion, or SSI for thirty billion, didn't work out, so then they spent not even that much, not thirty billion, on hiring all these people. Um, so I think he recognizes the urgency with the models, with the infrastructure. And if you read his website post about AI, I think, you know, he sees the vision, right? There's the wearables, there's integrating AI into that. There's being your AI assistant to do all this purchasing and stuff. I think he sees the vision, but I think he also needs to focus on, like, actually releasing that faster. Um, but also, like, the products that they do outside of their core IP, every time they launch something, it's kinda mid, right? Um, you know, Meta Reality Labs is doing well, but I think they should go more explicit, like, have a ChatGPT competitor, have a Claude Code competitor. Like, just start releasing way more products, um, because they're really just focused on their individual gardens rather than, like, branching outside of it.
- ETErik Torenberg
D-do you think Apple should have that same sense of urgency? Or if Tim Cook was here, what, what would you tell him?
- DPDylan Patel
The funny thing is, like, some of their best AI people are now, like, at Meta Superintelligence. [chuckles] Uh, they're building an AI accelerator. They have AI models, but they're just, like, way slower. Um, they did mention on the last earnings call they're gonna allocate more capital to this, but it's like, guys, Apple, you guys are gonna miss the boat if you do not spend, like, $50, $100 billion on infrastructure.
- ETErik Torenberg
You don't think the current Siri will, will cut it? [laughs] Sorry.
- DPDylan Patel
[laughs] Um, I think, like, more and more you'll see people... You know, great, Apple has this walled garden, but they can only do so much to protect it, right? With IDFA, they shut down data sharing to Meta. But Meta made better models, and now they have way more data and way more power over the user than they ever did before. In a way it was good that Apple kicked the crutch out from under them. Uh, but the same applies to AI. Like, yes, Apple has access to the texts, and they have access to this, but I think other people are gonna be able to integrate user data, um, and agents will be able to integrate all this user data, and they'll start to lose control of what the user experience is as more and more gets disintermediated by AI being the interface, rather than, you know, touchpad and keyboard. Um, and I don't think they've truly realized what happens when the interface to computing is AI. Like, they market it, but that's gonna shift computing really heavily. They have great hardware, and their hardware teams are working on awesome stuff and form factors. But I just don't know if they, like, get what is actually gonna happen to the world in the next five years, uh, truly well enough, and they're not building fast enough for it.
- ETErik Torenberg
What about Microsoft to, to that end?
- DPDylan Patel
Um, Microsoft has the same problem, I think. Um, they were super aggressive in '23 and '24.
- ETErik Torenberg
Yeah.
- DPDylan Patel
Um, and then they pulled back heavily, right? Uh, now, like, OpenAI's slipping through their grasp. Uh, there's that whole thing there. Uh, they cut back on data center investments heavily.
- ETErik Torenberg
Yeah.
- DPDylan Patel
They were gonna be the largest infrastructure company in the world by, like, a factor of 2X.
- ETErik Torenberg
Yeah.
- DPDylan Patel
Uh, which would've been... You know, you could argue maybe that was, like, too much, and maybe it wouldn't have been economical. Um, but they're losing their grasp on OpenAI. Their internal model efforts are failing, uh, spectacularly. Like, they're on LMArena right now, and they're pretty decent there, but that's just, like, a sycophantic model. Like, um, it's a code name, but, like, whatever. Like, MAI is, like, failing. Azure is losing a lot of share to Oracle and CoreWeave and Google, and so on and so forth, right? Their internal chip effort is by far the worst of any hyperscaler. Like, they've just misexecuted. Like, GitHub... How is GitHub not the highest-ARR, uh, coding product?
- ETErik Torenberg
I mean, they only had the best IDE, the best source code repository, the best enterprise sales force, the best model company as a relationship, and they were the first to market, right? [laughs] It's like they had everything going for them.
- DPDylan Patel
And, like, there's just nothing, right? It's like-
- ETErik Torenberg
I'm-
- DPDylan Patel
... like GitHub Copilot is failing. Microsoft Copilot is, like, still crap, right? Like, um-
- ETErik Torenberg
Yeah, it's unusable.
Episode duration: 1:06:16
Transcript of episode xWRPXY8vLY4