- 0:00 – 1:25
Intro
- Sarah Guo
(music plays) Hey, listeners. You are here for another episode of No Priors, just with me and Elad. And there's been a lot going on in earnings and in the technical world, so I think we will start with maybe just one fun thing that seems to have taken flight in terms of music generation. Elad, what do you make of the popularity that Suno and Udio have found?
- Elad Gil
When you said fun things, I thought you were going to talk about my hats. I have two hats.
- Sarah Guo
(laughs) We can talk about your hats.
- Elad Gil
I have my Bitcoin halving hat that I got from Coinbase. As you see, the Bitcoin halving. And then, um, Zaid has these Make AI Great Again hats that he's actually selling. I think it's going to fund his data labeling habit.
- Sarah Guo
That's a lot of hats.
- Elad Gil
I know, yeah. That's all I got. That's all I got left here.
- Sarah Guo
If anybody has read Jared Kushner's book, there's a great bit about how much money they're making from all the, uh, MAGA swag, and so-
- Elad Gil
Yeah.
- Sarah Guo
Listeners, Elad and I are making hats and tequila for the guests.
- Elad Gil
Yeah.
- Sarah Guo
But for the low, low price of one H100 GPU, we will give, uh, each of those to you too.
- Elad Gil
Or a Bitcoin. Either way.
- Sarah Guo
Okay, I'll take the Bitcoin instead. Yeah.
- Elad Gil
Me too. I know, yeah. (laughs) I think we all would at this point. We can check in again in a couple months when the B100s come out, or whatever time period.
- 1:25 – 4:02
Music AI generation
- Elad Gil
So, um, yeah, so, you know, as you know, there's been some really interesting things happening on the music generation side. And so, um, there's both Suno and Udio, and both seem to be kind of taking people by storm in terms of really interesting music-based models and, um, it feels like one of those things where it's early, but it's really giving a glimpse of what's coming in terms of the ability to create other types of content. Obviously, the very first content wave in some sense was simple text-based things on GPT-3 like Jasper, and then we hit an image gen wave, and that was Midjourney and Stable Diffusion and things like that. And then we had obviously chat come out as sort of a new type of format and interaction modality, and then we had video with things like Pika, and so it just feels like sequentially we're hitting these different formats. And then obviously Sora from, um, from OpenAI, and now we have these really interesting music models where you can specify, um, the type of music that you want. You can write the lyrics. It'll add vocals. And so, you know, these really seem to be the two models initially at least that people are really adopting. And so it just seems like an interesting moment in time from the perspective of look at all these different creative things that people are now empowered to do, and look at the different ways to engage. And of course you could imagine going forward in time and saying, "Okay, at some point there'll be voice cloning where..." And I think, um, I think it was Drake who put out a song, right, where he had two or three other rappers that he just voice cloned in. And you can imagine a world where you could use anybody's voice, assuming there's permissions and everything else, to, to generate your own songs and content and the whole thing. So it just seems like a very exciting future world between Udio, Suno, and some of those other companies.
- Sarah Guo
Yeah, I, I think one of the things that's not obvious here is in, um, like, media platforms in general, they're... Like, the ratio varies, right? But there are a lot more, uh, readers on X, or consumers, like people who scroll a feed, on TikTok, or not TikTok anymore I suppose, but w- whatever it is, than, um, than creators. And so I think, like, one thing that is just, like, unknown is how many people actually want to create music if you make it a lot easier to, you know, create something that's any good, right? Uh, and if, uh, if the music we get changes. And so, I was talking to one of these founders, and he was like, "Everybody should have a personalized soundtrack for their life, but in the voice of Taylor Swift, in the style of Taylor Swift." Um, so.
- Elad Gil
Yeah, I already have one of those, but yeah.
- Sarah Guo
So what is yours?
- Elad Gil
I, I can't really share publicly, but we can talk about it later.
- Sarah Guo
Okay. Okay, yeah. It's gonna be a little bit... It's gonna be... I think it's gonna be a little bit of a personal thing.
- Elad Gil
Yeah. It's fine.
- Sarah Guo
What else is going on?
- 4:02 – 11:39
Apple’s LLM
- Elad Gil
I don't know. I mean, what about local LLMs and the Apple release? Do you want to talk about that?
- Sarah Guo
Yeah, so it's, it's interesting. Um, Apple has entered the chat with a release of, uh, you know, relatively small models that are now on Hugging Face and such. And I, I think it's just worth, uh, talking about the fact that what some of the initial open source releases, um, Mistral, LLaMA, did, uh, was create, you know, pretty impressive reasoning capabilities that were open, that developers could use. But there's been huge demand for m- models that actually have, you know, that level of capability, or at least useful capability, at a one and three billion parameter size that'll fit on edge devices. And of course, like, if you, if you enable that, you have a very different latency paradigm in terms of, um, what experiences for a consumer are possible, and then not paying for the compute of inference all the time means you can do things much more easily that are ongoing and passive and proactive. And so I think there's a lot of developer demand, and I think it, like, foreshadows... We should see Apple creating interfaces, um, for running models locally, uh, as part of their ecosystem. That's my general prediction.
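For readers who want to poke at this themselves, here's a minimal sketch of local inference on a small open model via the Hugging Face transformers library. The model id below is a hypothetical placeholder; substitute whichever small release you want to try (Apple's OpenELM weights on Hugging Face, for instance), and expect a 1-3B model to need a few gigabytes of memory depending on precision.

```python
# Minimal sketch: running a small (1-3B parameter) model entirely on-device.
# Assumes `pip install transformers torch`; the model id is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "some-org/small-model-1b"  # hypothetical id; substitute a real small model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)  # float32 by default, ~4 GB per 1B params

prompt = "Summarize today's calendar in one sentence:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The latency point Sarah makes is visible here: nothing leaves the device, so there's no network round trip and no per-token inference bill for ongoing, passive use.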
- Elad Gil
Yeah, I think it's kind of interesting because, uh, I've, I've seen a number of, um, people building apps recently that have been these Mac or iPhone OS resident sort of exposure to LLMs, and in some cases it's let me index everything on your computer and let you interact with it or search on it or, you know, build an embedding on top of it or set of embeddings. Or, um, let me just integrate ChatGPT or other things into, into your desktop. And there used to be, like, search bar apps and things like that in sort of prior generations. And so I think a lot of that is gonna be really interesting from a user experience perspective, and one can imagine a couple years from now that's just gonna be something that comes standard on your device, either direct from Apple or through some sort of partnership or something else, yeah.
- Sarah Guo
Do you believe any of those things can persist independently, right? 'Cause I'm reminded of, like... the launcher platforms, like, you know, if you think about Android as a more permissive platform-
- Elad Gil
Yeah.
- Sarah Guo
... people trying to change core user experiences like that, uh, at what felt like an operating system level, was, was kind of tough.
- Elad Gil
I think it depends on whether or not, um, you truly take advantage of the browser and you're also accessing browser-related, uh, third-party applications that you're just getting through the web and indexing them or not. And so really, it comes down to, like, what's the footprint of content that you're searching over in some sense, and that to me is the biggest pivot point in terms of whether you'll end up with something that's gonna be specific to the OS company or s- or broader. I think in general, uh, platforms tend to integrate the most valuable things into themselves, and so it's always risky to do that as a strategy. And every once in a while you see a company that has breakout success even though it's built on top of a platform, so I think the oddest example of that is Veeva. Do you know Veeva? They're like a $40 billion vertical SaaS company focused on pharma.
- Sarah Guo
Oh, yes. Yeah.
- Elad Gil
Yeah. It was built on top of, um, Salesforce. It was just a Salesforce app, and they were just selling this into pharma and then it started working, and they eventually swapped out Salesforce in the backend. But they were literally just some, like, thin layer (laughs) on top of Salesforce for years. And so that- that's- that's the only example I can really think of of something that's gotten truly massive, um, on top of a platform that wasn't then subsumed by that platform. But I'm sure there's other examples too.
- Sarah Guo
Yeah, I think they're, like, all kinda apples and oranges. Yeah. Veeva, sorry. So out of context. I'm like, "What does that have to do with AI, man?" And, like, uh, o- operating systems. Yeah, I think choosing the, the vertical thing makes sense to me there, right? You've got compliance around workflows for selling to doctors for life sciences that a Salesforce, like, didn't really have the, um, expertise, or maybe they just didn't see how big it was, to, um, attack.
- Elad Gil
Well, they verticalized. I mean, um, Salesforce always had these verticals, right?
- Sarah Guo
Mm-hmm.
- Elad Gil
Let's see, what's Salesforce market cap? It'll be interesting to see, like, relative to Veeva, how much value. So Salesforce is 266 billion, so yeah, Veeva is like 20% of Salesforce or something like that.
- Sarah Guo
Yeah, that was worth investing in.
- Elad Gil
Yeah. (laughs)
- Sarah Guo
Yeah.
- Elad Gil
I think people also forget that all of Microsoft Office at some point-
- Sarah Guo
Right.
- Elad Gil
... were third-party applications. So in the '80s, there were separate companies like Lotus and others that were providing what, uh, turned into Excel. There was a PowerPoint company that was very popular. You know, before Word, there were document applications, uh, (laughs) that people would buy separately, and then Microsoft just ended up subsuming them into its, its own distribution platform, and so again, it's a very standard kind of traditional thing to have happen. The default case would be that those things eventually become part of the operating system, um, or, you know, the operating system company, but again, you never know, so I always think it's interesting to see what people do, and can they make it cross-platform, does that matter, and all the rest of it. The one other thing that I think is interesting about small models is the question of how s- how smart can small models get and how much can you actually pack into them? And to some extent, if you look at an LLM, there's like three or four pieces of capability that people care about: there's sort of the reasoning part of it, um, there's a set of capabilities in terms of what it can, um, do from a synthesis or other perspective, there's the multimodality, and there's the knowledge base or knowledge set that is actually resident in the model. And you can only stuff so much into, you know, a 3B model or a 7B model or whatever size you end up eventually running on device. And so that's the other question is, what are the capabilities that you can actually have that are, um, device resident versus which you always have to go to the cloud for? And obviously devices always expand their capabilities over time because of the microprocessors they have and everything else, but fundamentally, there are gonna be limitations, and so the question is where is that line? And you could probably, um, define that analytically and just say, "Okay, today this is all the stuff that's gonna get cut off if you just do it on device and here are all the things you can do," and as we move up in terms of device capabilities and their ability to, um, provide access to larger and larger, uh, LLMs that are resident, what are the capabilities that come with that? And so I, I haven't really seen that analysis, but that seems like something that should be reasonably straightforward to do in terms of just trying to understand what can you really do on device versus not, and therefore what apps or products can you build that are gonna be, you know, device resident.
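One crude version of the analysis Elad describes is pure arithmetic: the weights have to fit in device memory, and decode speed is roughly bounded by memory bandwidth divided by model size, since each generated token reads all the weights once. A hedged sketch, where the 4-bit quantization and the ~100 GB/s phone-class bandwidth figure are illustrative assumptions, and KV-cache and compute costs are ignored:

```python
# Back-of-envelope: which model sizes plausibly run on a given device.
# The 4-bit quantization and ~100 GB/s bandwidth are assumptions, roughly
# phone-class; swap in real device specs to move the on-device line.

def model_gb(params_billion: float, bits_per_weight: int = 4) -> float:
    """Approximate weight footprint in gigabytes."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

def decode_tok_per_s(weights_gb: float, bandwidth_gb_s: float = 100) -> float:
    """Decode is roughly bandwidth-bound: each new token reads all weights once."""
    return bandwidth_gb_s / weights_gb

for b in (1, 3, 7, 13):
    gb = model_gb(b)
    print(f"{b:>2}B @ 4-bit: ~{gb:.1f} GB of weights, ~{decode_tok_per_s(gb):.0f} tokens/sec")
```

Under those assumptions a 7B model at 4 bits takes about 3.5 GB and decodes at a few tens of tokens per second; anything much larger starts crowding out the rest of the device, which is roughly where the on-device versus cloud line falls today.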
- Sarah Guo
Yeah, well, I, I think there's, like, an obvious analogy to draw too, just, like, the question of how much computing is done, like, on client or in cloud. Client-server, if you're really old, right? And I, I don't think it's, like, obvious that there is any sort of principled answer here, except it's going to vary by application and how quickly, as you said, like, capabilities actually fit into, uh, whatever hardware people already have, because now there are companies doing AI-specific hardware already, right? And i- you know, if you're trying to make something that feels very AI native work, there is a question of, like, okay, well, you know, do we try to make the model work? What hardware do we... What, what, um, compute do we want in the device itself? Uh, how much pre-processing do we do, um, to, you know, send less data over the network to, um, a large model in the cloud? Uh, and, and so I, I definitely think there's going to be, like, a distribution of that compute, just like there are religious wars about, like, you know, what gets done, you know, on client, on server, in your CDN, whatever.
- Elad Gil
But on the hardware side, do you think that's not just your phone and your watch or something? Like, what, where do you think the new AI hardware will matter, at least from a consumer perspective?
- Sarah Guo
One of the most
- 11:39 – 15:25
The role of AI-specific hardware
- Sarah Guo
interesting theories, and we can actually talk about like the Meta AI launch as well, is, um, whether or not you want some sort of passive device that is like continually collecting data about the world from a vision environment perspective, which is why you get like Ray-Ban Meta glasses instead of just like a phone or a watch that's sitting in your pocket, right?
- Elad Gil
Sure. Um, I guess you could also just, uh, fix small cameras to other devices as well, so it just comes down to, do you need a new form factor or not? Although I think it's an interesting area and interesting direction, and I guess relatedly, like, when do you need that extra data or that information... um, and under what circumstances you'll use it. I mean, it's super interesting, right? I think in general, uh, if I look at the early mobile device products that emerged, a lot of them just took advantage of the accelerometer if you were doing fitness or, um, GPS if you were doing everything from Uber to a variety of other applications. So I, I do think those sorts of new capabilities, uh, from a primitives perspective are super interesting. But again, it just comes down to, okay, what are you going to use it for and how and when? You know, there are also really interesting unexpected applications that extend to other areas right now, so I'm moving off it. Um, you know, the Meta thing I think is super interesting. Have you tried the product?
- Sarah Guo
Yeah.
- Elad Gil
Yeah, I was very impressed. Yeah.
- Sarah Guo
Yeah. They, they entered the race, um, with, like, multiple different products, like, different modalities upfront. It's interesting that they haven't actually, um, they haven't pushed it that aggressively into their existing surfaces yet, because they have, like, unlimited distribution, right? And it's just meta.ai, like, an independent product. Um, but I imagine they're going to, I imagine they're going to phase into it.
- Elad Gil
Yeah, it seems like you would, um, potentially, and who knows what their plans are, have, like, you know, uh, a channel or bot on WhatsApp or on, you know, Messenger or multiple different services that you could just start interacting with to do things for you. And that could effectively be a way to encapsulate meta.ai as, like, just another line item or another account that you're interacting with in chat on other sorts of properties they own. But I thought it was really well done. Um, I've been making different images with my kids, um, on the different services, you know, Playground and, um, OpenAI and Meta and everything else, and, um, they really enjoyed the one-click animation as, like, a feature.
- Sarah Guo
Yeah. Uh, I think it's, it's an impressive product launch, and then on the, um, open source side they've also made a splash, uh, where I think one of the things that surprised people is... uh, there's an argument generally in research about, like, how much do you want to try to change, like, the efficient frontier of model training, um, or, like, do you just, you know, ride the curve and scale up? And, and if you are Meta and you have somewhere between, you know, 22,000-GPU clusters and 350,000 GPUs, um, available, uh, then continuing to train past, like, supposedly optimal points, like, does improve performance apparently, um, and doesn't just fully asymptote as soon as maybe, uh, a lot of the research community predicted. Um, and so I think it just is a, is actually a point in favor of the very large firms who are really willing to invest against this, and, like, it, it begs the question of how important is efficiency? I think efficiency and, like, creativity in architectural approach are going to end up being really important for lots of different use cases. Like, over time applications are gonna want efficient inference, and, like, really large models are impossible today to serve for, uh, the vast majority of use cases from a cost and speed perspective. But if all you're trying to do is a technical demonstration, like, this is very impressive.
- Elad Gil
Yeah, they did really nice work.
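A rough way to quantify "training past the supposedly optimal point": the Chinchilla scaling work suggests roughly 20 training tokens per parameter at compute-optimality, and training compute is commonly approximated as C = 6 * N * D FLOPs for N parameters and D tokens. The sketch below uses an 8B-parameter model and a ~15T-token run (the figure Meta reported for Llama 3, used here purely for illustration):

```python
# Back-of-envelope: compute-optimal vs. "overtrained" training runs.
# Rule of thumb from the Chinchilla paper: optimal tokens ~ 20x parameters.
# Common approximation for training compute: C ~ 6 * N * D FLOPs.

N = 8e9    # parameters (an 8B-class model)
D = 15e12  # training tokens (~15T, the reported Llama 3 figure, for illustration)

optimal_tokens = 20 * N                # ~1.6e11 (160B tokens)
overtrain_factor = D / optimal_tokens  # ~94x past the "optimal" point
train_flops = 6 * N * D                # ~7.2e23 FLOPs

print(f"Chinchilla-optimal tokens: {optimal_tokens:.2e}")
print(f"Overtraining factor:       {overtrain_factor:.0f}x")
print(f"Approx. training compute:  {train_flops:.2e} FLOPs")
```

The point Sarah is making is that loss keeps improving, slowly, well past that 20:1 ratio, and only a handful of firms can afford to exploit that.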
- 15:25 – 18:01
AI platform updates
- Elad Gil
I guess the other thing on the sort of data compute platform side while we're talking about all these different platforms is, um, Snowflake and Databricks and sort of that layer of companies, do you think you need to own the model as a data or compute platform? Or how do you think about these various open source models that these folks are starting to launch now?
- Sarah Guo
One thing that's tricky as an investor, as an operator in the space, is, like, it begins to look like all one blended, uh, landscape of competition. So the Snowflake platform hosts a bunch of different models from other players including, for example, Mistral. Uh, and they're also training their own, and their models are available on, for example, like, Microsoft Azure, right? And so you've got a huge number of players all in coopetition somewhere between models and inference, and it's like, I can't tell you how I think it is going to work. I do think that in the immediate term, like, it is less obvious to me that, um, they necessarily, like, these, these compute platforms necessarily need to own the models, especially if there's a, um, a big landscape of open source out there, uh, versus need to demonstrate to customers that... and actually have the expertise of, uh, let's say, like, training models and fine-tuning them and deploying them and customizing them to different use cases. So I think there's a piece of it that is actually, um, uh, like, learning, uh, something that they can deliver to their customers, and marketing too.
- Elad Gil
Yeah, it definitely feels like in the long run there's a bit of a capital scale question which is if, if you're focused on the frontier models at least, you need more and more, um, scale in terms of compute or other things which means you need more and more money to invest behind it, and that's why a lot of the models seem to have sponsors in the hyperscalers or other sort of large corporations. And so the question is how long do some of these other players want to keep up in terms of trying to build things that are bigger and bigger or do they just focus on a specific subclass of models that are more kind of medium to small that have more specialized use cases? And so one could argue that in the long run that's where those types of models should go, is the inference platforms probably provide things that are more in that range and then the hyperscalers and their partners, you know, a handful of them will be at the frontier because th- those are the ones that have business models that can actually afford the subsidization of these massive frontier models with the idea that the ROI pays for itself dramatically later, um, as you scale into more and more intelligence and more and more applications and things like that. So it's kind of investing far ahead and the question is how far ahead can certain companies invest?
- Sarah Guo
I'm sure you've been getting this question
- 18:01 – 20:33
Forward thinking in investing in AI
- Sarah Guo
but I get, like, a stressed out question from investors of essentially, like, how much money should the world spend on this, and, like, is it, is it rational? Um, and I, I, I think this is an impossible question because "should" is... And, like, this is a, this is a question of people's business decisions. Like, a very small number of individuals, actually, right? And something that feels unique in AI to me is you, you have these folks who, um, like, they really believe in technology and, you know, strong intelligence. They can, uh, reason about exponential growth. That's something, like, Silicon Valley people are very proud of, but also all of these people are very capable of. And they have, like, very high risk appetite, and then, you know, um, they can control and marshal a huge amount of resources. 10 billion dollars plus spend, 20 billion dollars plus spend if you're Meta every year on GPUs. Uh, 30, I think, um, 30 to 35 this year. They may have changed the estimate there, against these bets. And, um, I actually think it's interesting to put in context. Like, this isn't completely unheard of in terms of investment, uh, toward some future goal. Like, you know, you and I have talked about chip fabs before and how much they cost, but if you think in aggregate, a handful of players, um, in terms of the hyperscalers are spending almost 200 billion dollars this year on, um, on, uh, compute for AI. Uh, and then you, like, try to think about the, the other, um, types of investment in a new ecosystem. Like, oil majors spend 80 billion dollars a year. Um, broadband providers, you know, the last few years, like, it's about 100 billion dollars a year of spend in infrastructure, right? Like, CapEx for high-speed internet. This is, like, going back a decade or two, but if you look at, uh, railroad freight, like, there are years where you're spending a huge number, uh, on railroad CapEx. Uh, and, and so I, um, I think, like, the different dimensions you can think about spend are, like, well, what is it in overall, overall context? And then, you know, what does, what does that get you over time? I think both these questions are pretty, pretty hard to answer. But it's not unheard of in terms of scale.
- Elad Gil
Yeah. 100%. And again, it depends on your belief in terms of where all this goes, but I think it's notable that the hyperscalers are doing it, and then the one other potential source of immense scale in the long run is maybe sovereigns, as, as people want to customize models that are specific to their region or customs or language or culture, whatever it may be.
- Sarah Guo
So, there are a few areas of research
- 20:33 – 23:03
Unlimited context
- Sarah Guo
that have been, like, debated, um, hotly recently, and one is this idea increasingly of, like, unlimited context, um, which I think is, like, mostly a term for, like, actively managing context, but you- you're an investor in Magic. Magic is one of the key players here. Can you- can you talk about this?
- Elad Gil
Oh, sure. I mean, Magic, um, I think was one of the first companies to launch a really long context window. I think they launched a 5 million token context window, you know, six or 12 months ago now, so it's been a while. Um, and then when Google came out with Gemini 1.5, I think it was a million token context window, and then... It seems likely that a lot of people will end up in the 10 million plus range in the next, uh, year or two. Um, or, you know, some reasonable timeframe ahead. And what's happening is, with, um, longer, longer and longer context windows, uh, the way that you think about what you put into a prompt changes pretty dramatically, because you can start, start dropping in everything from entire code repos to all sorts of documents if you're dealing with legal, on through to, you know, you kind of name it. You can also drop in all the context of a giant customer support queue, you know? (laughs) Because, um, you know, some of these things eventually get big enough that you can, um, do really significant things with them. I think one of the most striking examples of long context windows being important is actually some biology models that have come out recently, where just increasing the context window for things like protein folding seems to really make a big difference in terms of your end results, in terms of the fidelity of that folding. So I think this is gonna end up being one of those really significant things, and it reminds me a little bit of microprocessors or bandwidth in the '90s, where each generation of, uh, microprocessors that came out had a big step up in things that you could do with it, um, or bandwidth, you know? Instead of a dial-up modem, suddenly you had a better connection, and then eventually you had broadband, and now, you know, people have fiber in some cases. Um, and so it feels like a similar thing, where there will be a really long period, and "long period" in, um, AI means like two weeks or whatever. There'll be a really long... I mean, it's gonna be much longer than that. There's a really long period where, um, bigger and bigger context windows will matter, and then there'll be some shift where, okay, now we've kind of maxed out what we're really gonna be able to do with it. But it seems like a, a significant shift in thinking around, um, how important this is gonna be. And again, in areas that I didn't expect necessarily, like biology. I guess the other thing that people have been looking at... You know, we talked a little about, uh, context windows, we talked a little about compute platforms, we talked a little about the capabilities of small models and whether they're constrained. The other potential area of constraint is energy. So,
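To make the "drop an entire code repo into the prompt" idea concrete before the conversation turns to energy, here's a hedged sketch of the bookkeeping a long-context application does: walk a directory, approximate token counts (crudely, at ~4 characters per token), and stop at a budget. A real system would use the target model's actual tokenizer and a smarter selection policy.

```python
# Sketch: pack a code repo into one long-context prompt under a token budget.
# The ~4-characters-per-token count is a crude approximation; real systems
# would use the target model's tokenizer.
from pathlib import Path

def approx_tokens(text: str) -> int:
    return len(text) // 4  # rough heuristic for English text and code

def pack_repo(root: str, budget_tokens: int = 1_000_000) -> str:
    parts, used = [], 0
    for path in sorted(Path(root).rglob("*.py")):
        chunk = f"\n# === {path} ===\n{path.read_text(errors='ignore')}"
        cost = approx_tokens(chunk)
        if used + cost > budget_tokens:
            break  # budget exhausted; a smarter policy might summarize the rest
        parts.append(chunk)
        used += cost
    return "".join(parts)

context = pack_repo(".", budget_tokens=1_000_000)
print(f"Packed roughly {approx_tokens(context):,} tokens of source")
```

The shift Elad describes is that as budgets move from thousands to millions of tokens, this kind of selection logic gets simpler: you stop curating and just include everything.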
- 23:03 – 29:09
Energy constraints
- Elad Gil
I'm just curious how you think about, you know, if you have 500 megawatt or gigawatt data centers, like, how... does the way we think about energy shift? Like, there have been things like job ads posted on the Microsoft website for, like, nuclear engineering and things like that. And so what, what do you think happens from an energy constraint perspective? 'Cause you kind of look at it, the first constraint was, like, chips, and then it was packaging for the chips, and then there's probably gonna be some data center constraint where everybody mis- miscalculates how many data centers we actually need, and then energy (laughs) is sort of related to that. So I'm just curious how you think about future energy needs.
- Sarah Guo
I think, um, like, one supposed coming, uh, limit to scale is gonna be, um, energy. Nobody's built, like, a 500 megawatt or gigawatt data center yet, and if you think of it as the equivalent of, like, a nuclear power plant's worth of energy going toward a single data center, it is quite large. And I, I, I mean, the, um, the basic understanding here is, today, to train these large models, you, you need all of the GPUs co-located, because there is so much, um, uh, data transfer between different chips, right? Between your nodes. And there's a physical constraint on that, in that you need to get that much power, uh, to a data, data center. And I, I think it's kind of, it's kind of interesting because, um, Sam made the point recently that when you have, uh, constraints that require permitting and physical world changes, like, you'll see a slowdown, uh, in terms of how quickly you can go deploy this, right? It's, it's no longer a software engineering problem only. And I think the fear, um, I think we're likely to work through a lot of these things, um, but the fear is that some of the, uh, potential limits to progress are going to be, like, we can't just throw more compute at the problem, because it's, like, physically hard to throw more compute at the problem. Energy, data centers, right? And then also, you know, the concept of the data wall, right? Like, the, the number of cheap available tokens on the internet, uh, we have used, and now we have to go figure out, um, how to go get more, collect more, um, in the world, or more likely, like, you know, uh, in combination with generating synthetic data. That still feels like a bits, not atoms, problem, and you can solve it pretty fast. But you can even see it in, um, the, uh, designees for the new DHS Safety Board, uh, the AI Safety Board that just got announced, right? Um, some of the players are very obvious, so, um, the, you know, Sams and Satyas of the world, but it also includes, um, people who work on energy, to this point, uh, and infrastructure security.
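For a sense of scale on those megawatt figures, the arithmetic is simple: divide facility power by per-accelerator power, inflated by a PUE factor for cooling and distribution overhead. The wattage and PUE values below are illustrative assumptions, not vendor specs.

```python
# Back-of-envelope: how many accelerators a data center's power budget supports.
# GPU wattage (~700 W, H100-class) and PUE (~1.3) are illustrative assumptions.

def max_gpus(facility_watts: float, gpu_watts: float = 700, pue: float = 1.3) -> int:
    """PUE (power usage effectiveness) folds in cooling/distribution overhead."""
    return int(facility_watts / (gpu_watts * pue))

for mw in (100, 500, 1000):
    print(f"{mw:>4} MW -> ~{max_gpus(mw * 1e6):,} GPUs")
# Under these assumptions a 500 MW site pencils out to roughly half a million GPUs.
```

That co-location constraint Sarah mentions is why the whole budget has to land at one site: splitting the cluster across grids would put the inter-chip traffic over much slower links.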
- Elad Gil
Yeah, it makes a lot of sense. I mean, I actually found the news around Microsoft investing 1.5 billion in Abu Dhabi's G42 super interesting, from the perspective of you're effectively setting up a big, uh, AI data center. A, you're, you're starting to sort of, um, democratize or broaden global access to it, um, but B, you're doing it in an energy center. And so, you know, I think that's really interesting. I think, um, you know, in general, if you view the last 50 or 70 or 100 years as, um, two sort of competing, uh, philosophies around progress and anti-progress, or abundance and scarcity, you know, one of the biggest wins on the scarcity side was really shutting down nuclear power in the '70s, at least in the US, or the ability to add new nuclear power since then. You know, 17 or 18% of US power generation today is still nuclear. It's like 70% in France, it's 30-ish percent in Japan, so it's still quite significant in some places, but it could have been 70% in the US and a bunch of other countries. And if you think of that large, abundant, cheap resource and how that would now tie into things like compute and other areas, it's a real game changer, right, in terms of capabilities. And so there are solutions. The question is, are we gonna adopt those solutions at any point? Um, but these are actually very solvable problems if we choose to solve them, which is why I thought that job posting on the Microsoft website was so interesting.
- Sarah Guo
Yeah, well, and I'd love to, I'd love to see that reversed, um, even beyond just, like, general, um, general energy needs. If you, if you do think of AI as a strategic issue and a national security issue, not using every energy resource we have and every energy technology we have, uh, like, like you and I have talked about, you know, generations of nuclear technology that, um, have existed but not, not deployed in the United States for, um, for really, like, policy reasons... it is yet another dependency that we're creating for ourselves on other countries, um, without needing to, right? It makes the, it makes the sort of energy dependency even worse to have, um, AI rely on it.
- Elad Gil
Yeah, all of geopolitics would be dramatically different if we just had a lot of widespread nuclear power. And it's, it's interesting to think about that. If you think of some of the biggest energy producers being Russia and Venezuela, obviously the Gulf, uh, you know, Iran, et cetera, and you think of geopolitical policy, it's kind of interesting to think about how the world would be different if we weren't dependent, or as dependent, on some of these sources. So anything else we should talk about?
- Sarah Guo
I think we're good.
- Elad Gil
Do you wanna see my hats again?
- Sarah Guo
Yeah, let's see the hats.
- Elad Gil
That's the other piece of infrastructure where people have spent a lot of money in the past, this sort of hat-making, you know?
- Sarah Guo
Yeah.
- Elad Gil
Millions of dollars a year go into that infrastructure.
- Sarah Guo
The trade is good for anybody listening. Uh, we will offer you one branded bottle of tequila, or mockquila or whatever you'd call it, and a, uh, a cool No Priors hat for H100s.
- Elad Gil
Just let us know.
- Sarah Guo
Okay, that's all we got. Find us on Twitter @nopriorspod. Subscribe to our YouTube channel if you wanna see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.