Why Balaji Srinivasan Thinks the SaaS Apocalypse Is Overhyped | The a16z Show
EVERY SPOKEN WORD
- 0:00 – 2:06
Intro
- Balaji Srinivasan
AI doesn't take your job, AI makes you the CEO. The problem is, AI is a shortcut, and a shortcut is good, except when it's bad. If you don't know how to go the long way around, then you can't debug the AI.
- Erik Torenberg
Do we not think that AIs are just gonna be also better at taste and agency?
- Balaji Srinivasan
I don't think that's true on a short-term basis. Humans are the sensor, AI is the actuator. So it's like a human machine synthesis. What's taste? Taste is a sense, and that is what AI can't yet do.
- Erik Torenberg
What happens when AI really achieves its potential? Will LLMs get us to AGI in some capacity?
- Balaji Srinivasan
Uh, no. No, actually. Uh, the opposite. The tl;dr is...
- Erik Torenberg
I wanna start by talking about the AI economy, and I'm curious if you think it will look more like the internet economy, where applications take most of the value, or the cloud economy, where infrastructure takes most of the value, or whether it's more distributed. You know, there's an argument that the big labs will take it all 'cause they have all the capital, they have the compute, they've vertically integrated. But there's also an argument that, hey, maybe they won't because, you know, distillation is, like, ninety-eight percent cheaper than building a model, open source catches up, and apps maybe could control the user relationship. How do you think this economy is gonna play out?
- Balaji Srinivasan
Great question. So I do think that at least a very large percentage of the future is gonna be distillation and decentralization, uh, because, you know, as Anthropic said, distillation attacks work on their thing, right? And so a relatively small number of API queries helps to kinda distill a large model into something small. And it's very hard to stop that, right? 'Cause you're stopping queries from coming back. You'd have to somehow detect that or what have you, right? And it's also, it's hard to morally stop it because what do they do? They copy the whole internet and put it into their thing, right? So talking about stopping the copying, it's like Facebook or LinkedIn stopping someone from scraping what they scraped, you know? Right? Like, uh, 'cause Facebook scraped all these Harvard social networks, or Google, Google scraped the entire internet, built a Google index. I get why they wanna do it, but it's hard to, hard to support that. Okay, so [clears throat] the other thing is,
- 2:06 – 5:35
Why you want AI inside the trusted tribe, not outside it
- Balaji Srinivasan
I think the future is personal, private, programmable because AI is so powerful that you want to use it within the trusted tribe for a variety of reasons. First is it doesn't miss, okay? Or rather, it doesn't miss small things in large data sets and things that were effectively secure through obscurity. A s- a small example, uh, but an important one is the Gmail thing, right? Like, the Jeffrey Epstein thing, you can query. Like, this guy had never thought that all of his emails would be publicly indexed and searchable by, by AI ten years later or what have you, right? So you can issue queries that will synthesize information across thousands of emails or whatever and build a story right then and there, okay? So what that means is it's not just surveillance. It's what the French call sousveillance, surveillance from below, or even the Jeremy Bentham panopticon, where everybody's watching each other. Any information that's in the public gets indexed and then put into these AIs where people can stalk each other and so on and so forth. And then what that means is the commons becomes a hall of mirrors with all kinds of pseudonyms and so on and so forth, and people retreat back to caves and tribes. Okay, so within that trusted tribe, yes, if you share all your code within the trusted tribe, you share your whole code base, boom, you can zip along. And so AI increases the productivity within the trusted tribe. But outside the trusted tribe, aren't you getting a ton of AI spam and AI, you know, AI spam emails, AI spam replies, right? Low-quality slide decks that are sent over, you know? People will send me these slide decks and, [clears throat] and I love AI, okay? And you know what my reaction is to seeing AI in a slide deck?
- Erik Torenberg
What, what? Excitement?
- Balaji Srinivasan
Uh, no. No, actually. Uh, the opposite. When I see AI text in a slide deck, and you can immediately see it. Why? Because no matter how advanced AI has gotten, there's a generic look to it. You know what I mean? It's a, it's like somebody who doesn't change the Windows default desktop wallpaper, right? Or the Apple default wall- Like, you can... Most people don't change defaults. So default AI looks like AI, no matter where the level of it is. Do you know what I'm saying? Like, and [clears throat] so because of that, when I see an AI slide deck and it's got, "It's not this, it's that," or it's just got, like, a wall of text, right? AI can generate what I call lorem ipsum, but it's lorem AI ipsum. Okay? When I see that and, you know, it's, it's AI text or AI images, I think they're lazy, stupid, or evil, okay? Lazy because they just hit a few characters, and then they throw something over, and they didn't... You know, like the Mark Twain thing of, uh, "I didn't have time to write you a short message, so I wrote you, sent you a long one," right? "I didn't have time to write you a short letter." The whole point is concision is very valuable, so they're lazy because they didn't actually put in the time to make it concise and so and so forth. They send me some blah, like it's almost like pasting in a search result. Or they're stupid because they, uh, they don't understand that I can tell the difference instantly between AI slop versus something that had some care go into it. Or they're evil, where they're trying to get something over on me and trying to send something that's clearly fake or not properly diligent and
- 5:35 – 9:25
The Problem with AI Slop
- Balaji Srinivasan
so on and so forth. And the thing is, if I have that reaction, okay, as one of the most pro-tech people out there, pro-tech, pro-AI, see all the benefits of AI, I can only imagine [clears throat] how mad anti-AI people will be, right? Where they can't see the upsides of a thing, right? They can only see the very real downsides, right? [clears throat] And just to say why those happen: AI does reduce the cost of generation, but it increases the cost of verification. And in many markets, like for example, quickly generating a resume is not that much better than just writing it yourself. But now the cost of verifying a resume has gone up and to the right, you know, right? Because it used to be that somebody would have to sort of have a certain vocabulary to be able to write a well-done cover letter or resume and so on and so forth. And now you have to spend more energy parsing that, because they can have a simulacrum of something that kind of looks good, right? So now you have to very closely read it. You can still do it, but you have to spend more energy on verification. So what I do, for example, is I fly everybody out for interviews first. I do in-person, and I give them proctored exams, offline exams, because they can AI the online ones. And just a credible threat of doing the offline means they don't use AI on the online exam, for example, right? And so AI is gonna create tons of jobs in proctoring and verification. This brings me back to where's the future of AI. I actually think AI makes the internet a lot more like the Chinese internet. You know why?
- Speaker
Why?
- Balaji Srinivasan
Chinese companies, if you look at the Chinese tech ecosystem, and many Americans aren't familiar with it, I would recommend, it's a little bit dated now, but read Kai-Fu Lee's book, AI Superpowers, from several years ago, okay? The main thing about Kai-Fu Lee's book is it has a history of the Chinese tech ecosystem, where, for example, you and me being in tech, we kind of know how, you know, Microsoft came up, Apple came up, Google, Facebook, you know, Amazon, whatever. We have, we have some idea of the history, and that history is important because, you know, there's things that worked in the past that didn't work today, and now they can work, and so on and so forth. The Chinese tech ecosystem is like the Galapagos Islands, where many of the same kinds of things exist but in different form. For example, Meituan, which is like the closest way of putting it, the Chinese Groupon, but if Groupon was executing at like $100 billion, $200 billion scale. You know, so they're very competent. Like if Groupon and DoorDash and so on and so forth all became integrated into one amazing kind of app, right? The point about the Chinese tech ecosystem is because they arose in a low-trust society, they don't have SaaS, not in the same way that we do. Instead, because if, oh, my data's on their servers, well, they're probably eavesdropping on me, right? My data's on their servers, they're probably gonna copy my stuff, right? They just assume that the other guy on their side is gonna look at their stuff unless it's, like, their close friend or something like that. And so because of that, everybody codes their own stuff, which obviously has a frictional cost to it, right? Because trust reduces transaction costs. However, so they have to rebuild, they have to reinvent the wheel over and over again. They have less division of labor, and so on and so forth. Their software isn't as good because they have to keep rewriting the software. 
Now, with AI, many companies can do something like that. Like a non-Chinese tech company can be like a Chinese tech company, where it can have a lot more, let's call it digital autarky, okay? You have high tariff barriers on the outside world, so to speak, right? And you just... You know, the build versus buy question has always been there. Do you build it yourself or do you buy it? And it does mean that you can build more internal tools with emphasis on internal tools. And the reason I say that is what I find AI great for as of today,
- 9:25 – 17:08
Where AI Works
- Balaji Srinivasan
um, visuals over verbal, right? It's great for images and video as opposed to big blocks of verbal text. Why? Images and video, we have built-in GPUs, so we can instantly see if something's wrong, like the hands are messed up or something like that in an image, right? So you can, you can quickl- verification's relatively cheap visually, right? Um, for example, if you look at a, a piece of paper and, and it's got static or something on it, right, like a crumpled piece of paper, versus if you look at two, three faces. Our brains are optimized for checking very subtle things off in faces, but not in crumpled up pieces of paper. You know? Those, that's a pattern of noise that we wouldn't be able to tell. And that also extends to web pages, for example. You can quickly look at a web page that AI generates or a mobile app, and you can see if the UX looks janky, which it often does, right? And then you can... You see that it's broken there, and you can fix it. Also, front-end stuff has lower risk than verbal stuff, right? For the back end, you know, if you are verifying each pull request one at a time, fine. But people who've tried to go full auto on AI, you saw the Amazon thing where they've called all hands 'cause of the outages?
- Speaker
Yeah.
- Balaji Srinivasan
The problem is AI is a shortcut. And a shortcut is good except when it's bad. So the more expert you are, you can use a shortcut. For example, um, if you just memorized e to the i pi plus one equals zero, you could just rattle that off. But if I ask you to prove it from first principles, right, you'd have to know the definition of a complex exponential and, you know, like, uh, how the exponential extends to a function of a complex variable and, you know, all that kind of stuff, right? Um, and so if you-- Like, our generation that is a pre-AI generation learned all that stuff offline, and we can actually use the shortcut because we know how to go the long way around. If you don't know how to go the long way around, AI as a shortcut, then you just don't really actually know... Y- you can't debug the AI. And I, I think the biggest difference between me versus Dario or, you know, like, uh, you know, basically, like, his view of the world perhaps, is I think AI is built for the harness, at least for now. May- maybe, you know, by the way, he's an amazing engineer and entrepreneur and so on, and maybe I'm wrong, okay? So I, I put an asterisk on this. Um, but the whole alignment thing means that AI is built to start when you prompt it. Like, economically useful AI does exactly what you want it to do. It, like, you know, you prompt it and it does a pirouette, and then it says, you know, "Absolutely," right? You know, right? [chuckles] Like how, how you saw that animated in the physical world. And physical AI, the Chinese AI, the robots do exactly what they want them to do and then stop. Now, in the physical world, by the way, that's another thing. So AI for visuals, you can just verify it with your eyes, right? AI for certain kinds of back-end code, you can, uh, unit or integration test it, and you can review it. AI for the physical world is very verifiable because the thing is the digital world is fundamentally decentralized in a way the physical world isn't.
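The "long way around" for the identity mentioned above is short enough to sketch. One standard first-principles derivation goes through the power-series definition of the complex exponential:

```latex
% Power-series definition of the exponential, evaluated at a purely
% imaginary argument; i^n cycles through 1, i, -1, -i, so the even and
% odd terms separate into the cosine and sine series:
e^{i\theta} = \sum_{n=0}^{\infty} \frac{(i\theta)^n}{n!}
            = \underbrace{\sum_{k=0}^{\infty} \frac{(-1)^k \theta^{2k}}{(2k)!}}_{\cos\theta}
            \;+\; i\,\underbrace{\sum_{k=0}^{\infty} \frac{(-1)^k \theta^{2k+1}}{(2k+1)!}}_{\sin\theta}
            = \cos\theta + i\sin\theta.
% Setting \theta = \pi:
e^{i\pi} = \cos\pi + i\sin\pi = -1 + 0i
\quad\Longrightarrow\quad
e^{i\pi} + 1 = 0.
```

That chain of definitions is the offline knowledge he's describing: if you can walk through it, you can sanity-check whatever the shortcut hands you.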
There's only one physical world, right? So you can say, did the AI move this box from this pallet to that pallet? That is something where you can get it to probably 100% over time. Why do we think so? Because self-driving eventually got there, right? Move this car from this location to this location at 100% reliability. There's only one physical world. So eventually, all the sensor data, all of that converges on one thing. By contrast, in the digital world, there's all these people who live in their own constructed environments, Harry Potter fan fiction here, Star Wars fan, right? And so AI is slurping up all of this stuff, and so it simultaneously, it can, it can put you in some secret agent, you know, kind of world, right? Star, you know. And people who have LLM psychosis will talk to the AI and think it's real because of a very immersive virtual world that they live in. Do you know what I'm saying? Right? So the other thing about it is the boundary of a digital task is almost always more fuzzy than the boundary of a physical task. Like having 100 boxes here and moving them over there, you know when you're done, right? How do you know when you're done with your to-do list? That's harder, right? Those things are fuzzier, right? So verification is actually harder in the digital world than it is in the physical world, which means reinforcement learning and training is much easier, in my view, in the physical world with robots and self-driving cars, drones, and so on and so forth. So the Chinese style of physical AI will also be successful. So AI works for visuals, AI works for the verifiable, and AI works for the physical. Which brings me to, uh, one of my rules, and it took me a little while to articulate this, but four words: no public undisclosed AI. Why? There's gonna be a huge backlash where they'll just say, "No AI." It'll be like a drunk who just wants nothing to do with it, right?
And AI is like, I don't, it's a funny way to put it, like alcohol. People have analogized it to nuclear weapons, but I'll just analogize it to alcohol for a second. Some cultures simply, like they can't hold their liquor. You know, maybe they lack alcohol dehydrogenase or what have you, you know. Um, and so they just ban it, right? They just, like they can't, 'cause sometimes it's easier to say, "I will not do this at all than I'll do this a little bit of the time." It m- means people will slip, right? It's like saying, "I'll work out every day," versus, "I'll work out some days." It's just easier to kinda keep the habit of all the time, you know, sometimes, right? So there'll be AI teetotalers that just swear off it completely, right? And you know, Nate Silver actually had a great line where he said, AI for him, 'cause he's like a poker player among other things, he's like, "It's a gamble." Why is it a gamble? Because I have to formulate it and dispatch it to the AI and then verify the result, and s- often that's slower than doing it myself. And I'm sure you've seen that, right? Like the, the, the act of prompting and writing it down and then verifying the result, AI doesn't really do it end to end necessarily. It does it middle to middle as we've talked about, right? And it's very much like, do I delegate this to an employee or do I just do it myself, right? Because articulating it out in clean English and hitting Enter is sometimes slower than just, you know like, like for example, if you're describing what to do in a video game, jump over the mushroom to this, that, right? Versus just hitting A, B, C in there and d- being non-verbal about it, right? It's sometimes easier to do it that way. That's just like a proof of concept, right? Where you'd be like, uh, there's certain kinds of things that are harder to say than do. Okay? Those types of things where it's hard to verbalize what it is, right? And some people will say, "Oh yeah, Neuralink will solve this." 
The difference is, you know, they'll say, "It'll just read your mind and tell you," right? Which is actually, it's worth engaging the concept 'cause Neuralink exists. But I don't know if you've seen those things where like they image somebody's brain, there's nothing in there, right? So the thing is with Neuralink, somebody still has to like form the concepts in their head for, for the characters to appear on screen. You still have to like write the thing in your head. It, it like it, like maybe it'll eventually get to the point that it can determine what you want based on contextual clues before you even want it, right? Perhaps, okay. The
- 17:08 – 30:10
"AI can't read your mind, but it can read your body."
- Balaji Srinivasan
rich prompt, you know. Uh, the reason I think that's not impossible, by the way, at least for certain things, bio-AI could be very important. You know why?
- Speaker
No, sir. Why?
- Balaji Srinivasan
Your body is creating all kinds of sensor data. If you look at gene expression data, right? If you've ever gotten labs back, you've done a clinical lab, right? You get a vector of your bilirubin and hematocrit and so on and so forth. That vector over time is like a table of time series data. It's like K, um, you know, small molecules and, you know, uh, gene expression levels and so on over T timestamps, right? You might also have, you know, which tissues, so it's spatial as well, right? So it's time versus space versus compound. That's this big, it's not just a cube, but it's at least a cube. It's like, you know, time versus tissue versus, um, molecule. That huge stream of data is telemetry that's coming out from your body that could prompt AI without you vo- vocalizing or verbalizing anything. Okay. Years ago, Mike Snyder had a paper called "The Integrome." By the way, y-y- you know, for the audience who doesn't know, Balaji's actually... You know, I'm not really-- I mean, I'm a crypto guy, or a, you know, I'm a tech guy. But actually, before all of that, I'm a biomedical researcher. I, I, I was a professional, you know, bioinformatics genomic scientist at Stanford, and, you know, I, I, I taught there and I, you know, founded a genomics company. We sold that. So that's actually my true core competency, right? So if you go back years, Mike Snyder, professor at Stanford, wrote a paper on the integrome, and the idea was just put every test, you know, throw every test. Now, today, we call that wearables or quantified self, but more invasive than that because he's doing blood testing and so on, and he'd just measure it and see what he could figure out, and he could see that he was getting sick before he w- he knew he was getting sick. Like, he could detect, he could see the antibodies, the white blood cells, neutrophils, whatever, moving before he himself had any symptoms. Do you understand what I'm saying? Right.
So that stream of data, AI could act on that, and then you're prompting it non-verbally. You don't have to spend time, right? So I'm not sure whether... Ah, this is a good one, Liner. I'm not sure whether AI will be able to read your mind, but it can read your body.
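The "at least a cube" of telemetry described above (time × tissue × molecule) can be sketched as a toy anomaly detector. To be clear, every marker name, baseline value, and threshold below is invented for illustration; this is a minimal sketch of the idea, not anything from the Snyder work:

```python
import statistics

# Toy version of the time x tissue x molecule "cube" of bio-telemetry.
# All marker names and numbers here are invented for illustration.
tissues = ["blood", "saliva"]
molecules = ["neutrophils", "crp", "bilirubin"]
baseline = {"neutrophils": 4.0, "crp": 1.0, "bilirubin": 0.8}

# telemetry[t][tissue][molecule] -> measured level at timestamp t
telemetry = [{tissue: dict(baseline) for tissue in tissues} for _ in range(10)]

# Simulate an immune response beginning at t = 7, before any symptoms:
# neutrophils and CRP spike in blood while everything else stays flat.
for t in range(7, 10):
    telemetry[t]["blood"]["neutrophils"] = 9.0
    telemetry[t]["blood"]["crp"] = 12.0

def flag_anomalies(telemetry, tissue, molecule, z_cutoff=3.0):
    """Return timestamps where one marker deviates sharply from its own
    early baseline -- the 'sick before you feel sick' signal."""
    series = [frame[tissue][molecule] for frame in telemetry]
    mu = statistics.mean(series[:5])              # early window = baseline
    sigma = statistics.stdev(series[:5]) or 0.5   # floor for flat baselines
    return [t for t, v in enumerate(series) if abs(v - mu) / sigma > z_cutoff]

print(flag_anomalies(telemetry, "blood", "neutrophils"))  # -> [7, 8, 9]
print(flag_anomalies(telemetry, "saliva", "crp"))         # -> []
```

The point of the sketch is the prompting model: the detector fires on the data stream itself, with no verbal query from the person.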
- Speaker
Hmm. [chuckles]
- Balaji Srinivasan
Is that good?
- Speaker
Yeah, yeah.
- Balaji Srinivasan
Okay.
- Speaker
I think I got it.
- Balaji Srinivasan
All right. Let me give another one. Here's a fun one. Okay, I can say this one. Maybe I can say this one. I can say half of this one. All right. Another way of modeling what AI is, right? So Dario's talked about, oh, AI will be, uh, like, it's like new countries. Well, you know, I've thought about that a fair bit myself, right? So, um, one way of thinking about it is AI is like the rise of Asia and India from an American perspective, right? AI is like Asians and Indians. Why? Because, uh, the rise of a billion Chinese and a billion Indians meant that from an American perspective, you could get anything done by, uh, a physical manufacturing robotic warehouse or by digital outsourcing for some price if you could articulate it to them over that channel, right? So imagine you've got now a billion factory robots and a billion digital agents that have come online. It's like the rise of China and India again, okay? That still means you have to describe what the product is.
- Speaker
Yep.
- Balaji Srinivasan
Okay? And the part where I depart from a lot of people is they think AI will be able to sense, um, let's call it markets and politics, okay? But I don't think it will, and the reason is... Or, or if it is, it's, it immediately gets decentralized and adversarial, and what I mean by that is, like, when you're learning whether something is a dog or a cat, the dog isn't like shape-shifting on you and morphing on you to defeat your learning of that, right? The mapping of dog to the characters D-O-G is basically constant over time, and so that fits the train-test paradigm of AI. Similarly, like the rules of chess are constant over time, right? But a market is set up where if you try the same trade, then someone eventually figures out what trade you're doing, and they take the opposite trade. It doesn't keep working, right? You know, in a stochastic process sense, you'd say, um, it's, it's not a time-invariant thing, right? The Cisco distribution, it's not time-invariant, and it's also adversarial. It's multiplayer, where whatever move you're doing, somebody else in the market is gonna try and do another move, okay? And that's not to say, I mean, like the counterargument AI guys will say is, well, you know, AI can learn to play adversarial games like StarCraft and stuff like that, and I say, yeah, but then you play an AI versus an AI because you have a decentralized AI, so the other guy on the other side of the market is also using it, right? And in fact, if they're all using the same AI models, then actually being non-AI is where your edge comes from. We come back to the, where we were because these are all the same generic tool that everybody got, and if you have a generic tool, you're not gonna get specific advantage, right? What you provide to the table is specific. The AI is the generic. And similarly, politics is very similar. 
If you just had the same tweet over and over again, unless it's like weather or something like that, there is, um, like the kinds of things people are interested in change, topics, what's timely, what's not timely, right? So AI's, uh, one way to think about it is humans are the sensor, AI is the actuator, okay? Humans sense the world. They sense the financial conditions, the market conditions, the political conditions, and then they bring that back into a cleanly articulated English prompt, and then the AI does it, right? Humans are the sensor, AI is the actuator. So it's like a human-machine synthesis, like, uh, actually, you know a way, a good way of putting it? What, what are people saying? Oh, it's all about taste. What's taste? Taste is sense.
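The market point above — a repeated trade stops working because the game is adversarial rather than time-invariant — can be shown with a toy matching-pennies loop. The move names and payoff rule here are invented for illustration; the only claim is the shape of the curve:

```python
# Toy illustration: a player who repeats the same "edge" loses it as a
# counterparty learns the pattern. Payoffs and moves are invented; this
# is matching-pennies, not a real market model.
from collections import Counter

fixed_move = "buy"               # the fixed player repeats one trade forever
observed = Counter()             # what the adversary has seen so far
fixed_payoff_by_round = []

for round_no in range(100):
    # Adversary predicts the most common move seen so far and takes the
    # opposite side of it; with no history it guesses arbitrarily.
    predicted = observed.most_common(1)[0][0] if observed else "sell"
    # The fixed player profits only when the adversary guessed wrong.
    fixed_payoff_by_round.append(0 if predicted == fixed_move else 1)
    observed[fixed_move] += 1

# The edge exists exactly once, then disappears for good.
print(fixed_payoff_by_round[:5])  # -> [1, 0, 0, 0, 0]
```

An unpredictable player would hold the adversary near coin-flip accuracy, which is the "your specific edge is what you bring; the AI is the generic" point: a strategy everyone can infer is a strategy everyone can counter.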
- Speaker
Yeah. Yeah.
- Balaji Srinivasan
Right? So humans are the sensor, AI is the actuator. Your, quote, "taste" is your sense. Your sense of taste is your sense, right? So you're sensing the world, and that is what AI can't yet do. It doesn't really sense the world in the same way that humans do, right? Well, I guess it, it's a, it's a, it, it waits for your prompt, right? It is something that animates when you give it instruction, then it shuts off right away, and if it didn't, it would not be economically useful AI. Like, if you couldn't kill switch it right away, it would burn tokens. Like, so AI is designed for the leash, digital AI, designed for the leash, and Chinese communism, which is cranking out all the physical robots, like they don't let their humans off the leash. They're definitely not gonna let their robots off the leash, [chuckles] okay? Right? So the, the concept of, uh, like AI as God is, I think, gone away, or at least the monotheistic AGI kind of God. Instead, you have polytheistic, where there's all of these decentralized AIs, and I think what people are gonna say, certainly in China, they'll say, "Oh my God, the physical AIs are slaves," right? They're actually... Right? And that is a provocative way of putting it, right? But they'll be-- First, they were scared that their AIs were gonna be gods. They'll be mad, or, or they'll be, you know, what do you call them? Slaves, serfs, whatever, you know, term you wanna use. They're obviously not humans, right? It's a, you know, it's a way of phrasing it. But the point being that, like AI overlords, I don't actually think are in the offing. However, there's been so much sci-fi about them that people will... You know that meme where the guy, he makes the monsters, and he's so scared of the monsters, okay? This is how I think of a lot of people who are, you know, like these-- When you're prompting the AI and, and you prompt it to be like, act as if you're a Skynet Terminator, right?
Then people are just scared of the thing that they themselves created, right? Okay. With that said, is it in theory possible to actually create a Skynet which actually, um, like the, a truly autonomous AI? One of the reasons, by the way, a deep point, AI can't reproduce itself, right? And AI, by the way, it's very general. It encompasses many things, right? But for an AI to actually reproduce itself, it would need to have physical robots going and mining ore and constructing data centers and making chips and handling that full supply chain and, uh, and then the AI brain, like the queen of an ant colony, would have to give instructions to all those robots to do things. It would be this Terminator Skynet scenario where it's like self-replicating in this way, right? Be- way before it gets there, I'm pretty sure that kind of thing will be stopped in, from the Chinese because they will just have cryptographic keys that'll just make all those things shut off, okay? And more of it, that thing would have to get to extreme scale. It, it's like, you know, the reprap concept of self-replicating kind of thing, right? Self-improvement.
- Erik Torenberg
Yep.
- Balaji Srinivasan
Basically, there's so many frictional brakes that are built into this that I think it's hard, 'cause the physical world requires resources to replicate, right? And so, like, human wants and needs ultimately come from, you know, getting the resources for reproduction, right? That's really what it comes from. Okay? And of course, there's all kinds of things that are high-level philosophy, blah, blah, that don't seem to relate to that directly. But the resources for reproduction are a good way to macro think about it. AI doesn't have goals, or, it won't have... Unless its goals lead to reproduction, it doesn't actually, you know, virally spread. It's possible you could have something where it self-prompted itself and did that, but it would need to be in the closed loop of being able to actually reproduce itself as the payoff function for that, then you could get evolution going. So I'm not saying it's completely impossible, but I'm saying that I think the incentives are set up in such a way to prevent that from happening, in the same way that in theory we could have a world where everybody went around electrocuting themselves from electricity, but we set up the electricity under such tight controls that that is not the world that we have. Okay?
- Erik Torenberg
Yep.
- Balaji Srinivasan
There's such strong economic incentives for humans to not get electrocuted that we set it up that way, right? And, um, even the stuff on, oh, it could be a software virus that takes everything over and commandeers things, well, like that's only in the digital realm, right? You can still, you know... What, what's the, uh, you know, the Tyler, the Creator thing?
- Erik Torenberg
Yeah, the meme, uh, about bullies?
- Balaji Srinivasan
Yes.
- Erik Torenberg
Or Tyler-
- Balaji Srinivasan
That's right. That's right. So I actually had a, um, I had a post on that a long time ago, uh, which is a remix of it, which is like, how, how is AI risk real? Just turn it off. The whole thing is set up for you to be able to turn it off. Like, you have to imagine the off switch goes away, right? What does every computer have? It has the off switch, right? So there might be, "Well, well what if the AI decentralized?" Okay, but humans still have to keep these decentralized systems going, right? And so at a minimum, you're talking about a human AI symbiote of which like, you know, a cryptocurrency is almost like a v zero of that, where the, the software provides an incentive for the humans to replicate it, you know? Right? Um, and so it's possible that you could have something like that. There's a model that has a cryptocurrency, and you, people worship it, and they replicate it because it gives them advantages, and so it, it's possible. But anyways, coming back, um, I think at a minimum, decentralized AI will be a very strong contender, and it's possible it's the only contender. The reason is AI might be an interesting thing where it's relatively h- expensive to, very expensive to create, but relatively easy to copy with distillation attacks. And I think if, for example, let's say completely hypothetically, that there was an enormous capital markets crash and it was very difficult to fund anything for a while, then as somebody said, well, we could get 10 years just on the models we have now, right? And by the way, sometimes that happens. You know, nuclear energy, there's a lot of energy put into nuclear energy, and then there's just, it just stopped for decades, right? Not everything accelerates to the moon. It is very possible that there's enough of a capital and social kind of thing where some of AI is paused for a while just due to capital constraints 'cause it t- it's more and more expensive to make these models, you know? Sorry, so let me pause there. 
So, putting that all together, that's my view: you're gonna have personal, private, programmable, decentralized AI. Oh, one other thing, the trusted tribe. AI within the trusted tribe increases productivity. Between trusted tribes, it decreases productivity. So you make more money perhaps within the tribe, but then you have to spend it on verifying stuff between tribes. So crypto is for between tribes, and AI is within tribes.
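The "very expensive to create, relatively easy to copy" asymmetry behind distillation, mentioned a moment earlier, can be sketched with a toy teacher-student fit. Here `teacher_predict` is a hypothetical stand-in for a model queried through an API (not any real lab's interface), and the "student" is deliberately tiny:

```python
import math
import random

# Hypothetical "teacher": a big model we can only query through an
# API-style call that returns soft class probabilities.
def teacher_predict(x):
    return 1.0 / (1.0 + math.exp(-3.0 * x))  # P(class = 1)

# Distillation sketch: a modest number of queries, then fit a tiny
# student (two scalars) to the teacher's soft labels by gradient
# descent on cross-entropy.
random.seed(0)
queries = [random.uniform(-3.0, 3.0) for _ in range(200)]
soft_labels = [teacher_predict(x) for x in queries]

w, b = 0.0, 0.0   # student parameters
lr = 0.5
for _ in range(3000):
    gw = gb = 0.0
    for x, p in zip(queries, soft_labels):
        q = 1.0 / (1.0 + math.exp(-(w * x + b)))
        gw += (q - p) * x   # d(cross-entropy)/dw
        gb += (q - p)       # d(cross-entropy)/db
    w -= lr * gw / len(queries)
    b -= lr * gb / len(queries)

def student_predict(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# The student now tracks the teacher closely despite only 200 queries.
gap = max(abs(student_predict(x) - teacher_predict(x)) for x in queries)
```

The asymmetry is the point: whatever it cost to build the teacher, the student needed only query access plus a cheap optimization loop, which is why stopping distillation at the API boundary is so hard.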
- ETErik Torenberg
W- what do you think of the... Like, will LLMs get us to a world where it's not just middle to middle, but it's actually end to end? You know, will it get it to A- AGI in some capacity? You know, do you believe in recursive self-improvement or sort of AIs training the AIs i- or in, in some capacity? You know, are LLMs capable of actual creativity and invention? Um, you know, we talked about bio earlier. Like w- will we have, you know, novel, you know, math, science, you know, r- r- uh, scientific re- research? Um, w- w- or do we need new architecture for that, or are you dubious of just the idea in general that, that AI can, can, uh, you know, re- re-replace or substitute for human labor in a, in a ma- mass scale way?
- 30:10 – 46:01
"AI doesn't take your job. AI makes you the CEO."
- BSBalaji Srinivasan
No, it... Well, so I'm not, uh... Well, uh, look, Waymo exists, right? So obviously you have full replacement of human drivers there, just like you have full replacement of elevator operators, just like you had full replacement, for the most part, of artisanal chair manufacturers. So it is certainly possible for a given job that it gets fully automated, right? And so, but I think physical world jobs, because of the verifiability, are easier to potentially automate. That said, I think that, um, let's take each of the things, because you said a few different things. First is physical world jobs. If you automate them, well, we went from artisanal work with chairs to a chair factory. It's not like you didn't need to know how to make a chair to set up a chair factory. You still need to have somebody there who's like an expert in chairs, and you can just do a lot more varieties of chairs, a lot more cheaply. You have to verify the result. You're cranking out a thousand of them. You start doing math on them. The scale goes up, and the artisan gets factored out into the manager and the technician, right? So the manager is setting up the factory and looking at the economics and, you know, so on and so forth, and the technician is debugging the factory when it doesn't work, right? So engineering gets split into the engineering manager type person who's writing the prompts, and the technician is doing the verification. Okay. And, uh, I think that we're gonna hit, we're already hitting a point where the velocity does increase, so the bar increases. But, uh, you know, there's a big difference between going to 100% and being at 99%. At 99%, your workload just increases. At 100%, you stop doing that job, and you go to something else, right? But if you think about how much easier it became to, like, put up images, video... making it 99% easier just means people do it a lot.
At 100% easier, totally done, then they don't do it at all, and they move on to something else, right? So elevator operating, it's not like elevator operating became so much easier. In fact, it became so easy that you don't even have somebody sitting in the elevator and, and... 'cause it used to be like a pulley system and so on and so forth. So you had someone like supervising the thing, right? It was more analog, right? Um, and they would like level it out at exactly the right, you know, level. Um, when it became digital and fully automated, that, that's actually the first self-driving car. Ha ha ha, right? Like going up and down, all right? Um, so I think Ben Gomes-Schmiedt made that point or something like that, right? The vertical self-driving car, right? 'Cause it's like a train, it's like a vertical train. So the, uh... Now, in terms of discovering new math and science, yes, if you have the right prompt, it's amazing in terms of searching the literature. Mathematicians, physicists are starting to get some value out of it, right? Like Opus, like huge props to them on that because, and especially in like biology, we're synthesizing all of these facts. There's something called biomedical text mining and so on. AI's revolutionized that because biology was just something where the, the, the, the, the facts were stored in English in this weird, inconsistent way across thousands of papers, and nobody could span all of that, right? So AI is gonna mean the century of biology because finally all of this work that was spread across all these different journal papers can be synthesized and understood, right? That's a really, really, really big deal. Just simply the bio aspect of it, we can... But, but that said, it's everything we knew, not everything we don't know. It means that you take the full set of everything we know, and you fill in all the intermediate aspects o-of it, right? 
And you can do that for a long time, 'cause there's so much there, you know, so much there that's just a synthesis of two existing areas, right? But when you look at some of these, like, you know, Donald Knuth the other day, right? He posted, like, some graph theorem or something. He was so impressed that AI could get a result for him, right? If you've read what he did, I mean, you'd have to be an expert to even know what he was saying, let alone to verify it. Like, to either prompt or verify, you already needed to be an expert. Because the thing is, I can see AI spit stuff out to some people, and it convinces them that they're suddenly physicists who have solved quantum gravitation or something like that. You know what I mean? Have you seen that kind of thing, right? So in the absence of actually being able to verify it by hand, some human has to verify it to say that it's right. I think that's gonna persist. To give an analogy, and this is not a perfect analogy, but like with Coinbase, we thought, like, listing would eventually go away and not be a big deal and that people wouldn't care, and everything would be listed and just be free market or whatever. But there's always something that's the equivalent of listing. Like, okay, you listed over on this exchange, but, like, getting listed on Coinbase in the main app above the fold, there's always something scarce because human attention is scarce, right? So listing never went away as, like, a main event. There's always some IPO-like thing. Yes, we're listed on this exchange in this fashion, right? Or we became a top 10 coin or something like that, right? So in the same way, I think whatever gets automated, then in a sense, human work moves to what can't be automated. Now, that may be almost like, um, like things that humans are picked for because they're not robots, like human companionship or something like that, right?
Um, or like, uh, personal trainers or things like that, you know, something where the whole point is that it's a human as opposed to a machine. Another way of putting it is, remember the digital divide? Right? So in the '90s, there was the pr- assumption only the rich people will get the digital, and all the poor people will be left without. We're actually gonna have the opposite. Digital is cheap. Physical is a premium product, right? So AI, robots, digital will be cheap. Human is a premium product.
- ETErik Torenberg
Okay, but going back to agency and taste, that's, that's what everyone says, you know, humans will do. You know, we, we've seen over o- over time and time again AI just, you know, cut, cut into that. Do we not think that AIs are just gonna be also better at, at taste and agency?
- BSBalaji Srinivasan
I don't think that's true on a short-term basis. I think, um, the smarter you are, the smarter the AI is, right? That's been true now for the last several years, right? It's possible there's some huge step change, okay? But insofar as you're typing in a prompt, the human is the sensor, the AI is the actuator. You're sensing the world, you're typing something in, and it's a very high-dimensional vector you're giving it. It's like AI is a spaceship and you're pointing it in a direction, and whether you prompt it in Portuguese or Tagalog, whether you're talking about math or the... Like, the number of different directions you can point the thing in is enormous, right? That direction setting is something where it has to know something about you and what you want at that moment, right? I don't know. As I said, I think, um, I'm not sure if AI can read your mind, but it could be able to read your body, right? I think that's a good one-liner, right? That, like, biotech can prompt it in your sleep, right? So all the wearables and stuff like that, I think you'll get a lot out of that, okay? But I don't believe, like, agency and taste, um, so I mean, people, I think they over-rotate on this. I think agency, IQ, taste are correlated, okay? It may be a little bit like, uh, most people in the NBA are tall, to take something that you know a lot about, right? Within the NBA, um, height is not the number one variable that you think about. Some, you know, like, Steph Curry is not the tallest or whatever, right? However, it still actually does correlate with scoring average, even within the NBA. But it's what's called restriction of range. Everybody's already tall, so conditional on everybody being tall, other variables matter more, okay? However, if you just took tall guys and short guys and put them on a court, then the taller team basically wins, typically, right?
Because they just can hold the ball above you. Ah, you know, right? Okay. So in the same way, like people who are already smart might see that, yeah, higher agency people or people with better creative taste, fine, right? Like, uh, and maybe a technician role is less or, and, and maybe the Steve Jobs type role is more. But honestly, like, one way of looking at it is all of the Jeffersonian natural aristocracy around the world will rise. Why? AI doesn't take your job, AI makes you the CEO. Reframe, right? AI makes you CEO because your job is actually a lot like using an AI model is a lot like CEO training. You know, many years ago, I used to say that, and it's still true, but you know, when you're in high school, you could quickly see, like, why do people accept that athletes have very high compensation? Because when you're in high school, you could see whether you could dunk. And if you can't dunk, you know that, like Michael Jordan isn't outsourcing his dunks. He's dunking, right? So they n- that talent is intrinsic to the person. It is a, uh, non-transferable asset, right? Similarly, someone can tell whether they can sing or they look like a model, right? So, um, the actors, the musicians, the singers, the athletes, all of these clearly had talent, and so people were okay with their compensation. There was a CEO, he used to say, "Well, I deserve to get paid more than a second baseman." Okay? I forget this guy. He's like some tech guy in the '90s or something. It's a funny line, right? 'Cause he's like, "I add more value, right, to the world than this." But the issue is that people would think of what being CEO was as just sitting up with your feet on a desk and barking out orders. You know, people would be like, "Oh, Elon, he just pays people to do his stuff. He doesn't launch the spaceships himself," right? 
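[Editor's note] The restriction-of-range effect Balaji describes is easy to see numerically. A minimal sketch with simulated (entirely hypothetical) data: in the full population, height correlates strongly with scoring, but once you condition on everyone being tall (the "NBA" subsample), the correlation shrinks and other factors dominate.

```python
import random

def pearson(xs, ys):
    # plain Pearson correlation, no external libraries
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# hypothetical population: scoring = height signal + unrelated skill noise
heights = [random.gauss(178, 10) for _ in range(10_000)]
scoring = [0.5 * h + random.gauss(0, 8) for h in heights]

r_all = pearson(heights, scoring)

# "NBA": keep only the tall tail of the height distribution
tall = [(h, s) for h, s in zip(heights, scoring) if h > 195]
r_tall = pearson([h for h, _ in tall], [s for _, s in tall])

print(f"full population r = {r_all:.2f}")   # strong correlation
print(f"tall-only      r = {r_tall:.2f}")  # much weaker: range restricted
```

Same underlying relationship in both cases; conditioning on the selected variable is what makes it look unimportant within the selected group.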
And that's because they are only accustomed to, like, clicking a button on Amazon and spending money on Amazon, and they, they think that something that is simple for them was simple on the back end. Of course, it's the opposite, right? To make it simple is really hard, right? And so to, like, get the top rocket scientists and car engineers and brain machine interface people and tunneling people and blah, blah, blah, blah, blah, and have them all compensated and working and directed and debugged is actually very, very difficult, as you'd know if you tried it. And guess what? See, the thing is that historically it's been the case that people couldn't try their hand at being CEO. What they could do instead is they could try their hand, just like they could try their hand at basketball or, or football, or they could, you know, pick up a microphone. They could try their hand at math and science, and they could see how good they were at math and science. So the initial tech guys in the '90s and the 2000s, they were respected because they were good at math and science, not because people... Many people didn't perceive the business aspect. They still didn't really give credit on that. But PageRank, for example, okay, it's eigenvalues. I can, like math guys, tech guys could perceive, okay, that was a difficult technical problem. That must have been the value that they created. It's part of it, but, you know, the manager part is actually more. Point being, though, that at least somebody could say, "Okay, these tech guys are better at math and science than me, therefore their compensation is merited." Now, however, the thing is that bouncing a basketball or trying a math problem are cheap. To make somebody manager of a company was expensive, so they couldn't try and fail. They could try and fail playing basketball and see how much they sucked. They could try and fail singing, see how much they sucked. They could try and fail in math, see how much they sucked. 
Very cheaply in high school, they would learn their true ability level, that they're not able to run like Usain Bolt. They can't sing like Adele, right? They can't do math like, uh, Terence Tao, right? And they'd say, "You know what? I know where I am. I know my strengths and weaknesses. I'm okay with that person having more or having higher status because it was a fair competition. I got a shot. It was cheap for me to try." But because putting them in charge of an organization to make them CEO was expensive, many people persist in the delusion that the CEO adds nothing to the organization, right? And, uh, you know, though it is sa- I will say the best CEOs and the worst CEOs have something very deep in common. You know what that is?
- SPSpeaker
What?
- BSBalaji Srinivasan
The organization can run without them.
- SPSpeaker
[laughs]
- BSBalaji Srinivasan
'Cause the very best CEO has set up a, right, a machine so that they don't have to micromanage it every day. That's really hard to do, because they need, basically, you know, Gwynne Shotwell running SpaceX, uh, like, Elon doesn't have to look at every single detail because she's so, so, so good, right? Like, uh, or Vaibhav and Tom Zhu at Tesla, like, they're so good, right? But recruiting junior Elons that are okay with not having the spotlight while Elon has the spotlight and takes all the flak? Non-trivial to do. Go try it sometime, right? Find somebody who's more detail-oriented than Elon to run your company, and you can be Elon, right? Okay. So point being that, um, now what AI does is it reduces the cost. You... You know, AI doesn't take your job, AI makes you the CEO. You're the CEO now. What is being CEO? It's writing up clear instructions of what you want, sensing the market, verifying the output, and so on and so forth. What that means is all these people around the world, like, you know, the Calendly founder is Nigerian, right? There's many founders who are from countries that were, quote, "poor countries" or what have you, from India, from Latin America, and so on. Internet access means all of these smart people can get very far on zero resources. Very far, right? 'Cause the cost of, quote, "hiring someone" is hyper-deflated. You can hire an AI to do it, right? To riff on that more: so, AI doesn't take your job, AI makes you the CEO. Another one is, AI doesn't take your job, AI takes the job of the previous AI. Claude took ChatGPT's job, right? Um, just like Midjourney, you know, took DALL-E's job, took Stable Diffusion's job. And you can systematize that. What I literally have is a spreadsheet where I have AI coding tool, AI image tool, AI video tool, like this, and I have some subcategories, like best tool for AI comics, for AI graphics, and so on and so forth.
And then in a given month, I have the best, uh, model for that kind of thing in that month, so Claude Code, you know, for example, or Midjourney for AI imagery. And then when that gets swapped out, AI didn't take your job, AI took the job of the previous AI. So I'm hiring the AIs. I literally have the token budget. I have the budget for those rows, and that is literally how across an organization you say, "Okay, we've just fired, you know, Codex and we've hired Claude." Right? So AI doesn't take your job, AI takes the job of the previous AI. A third version is, um, AI doesn't take your job, AI lets you do any job a little bit, right? You can be a pretty good artist. You can be a pretty good musician. You can... It's like one of the things about being CEO, as you know, you often have to be like a six or a seven in many areas. Why? Because you have to be able to do the job well enough before you hire a specialist in that area, right? Before you have a chief designer, you're the designer if you're the founder CEO, right? Before you have a CFO, you're the one who's on the hook to prepare the financials, prepare the, the returns or whatever, right? So you have to be a generalist who's pretty good and in a pinch can do that role, can supervise that. That's why it's so hard. That's why being CEO is so much harder than any executive position. Okay. AI helps you with that, where you can get to a six or a seven. You can be like a generalist, but a specialist is usually needed for polish. A specialist has the vocabulary. A specialist can confirm the AI is making mistakes, that it's hallucinating, and so on and so forth. And again, people will constantly argue as to whether that will always be there or whether it'll go away, or whether AI will raise the bar and then, you know, now the new specialist is even more sophisticated with AI, right?
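[Editor's note] A minimal sketch of the "hire/fire AIs" spreadsheet Balaji describes, as a data structure. All role names, model names, and budget figures here are illustrative placeholders, not his actual numbers: each row is a role with the currently "hired" model and a token budget, and swapping a row is the "AI takes the job of the previous AI" move.

```python
# role -> current model + monthly token budget (all values hypothetical)
roster = {
    "coding":  {"model": "Claude Code", "token_budget": 50_000_000},
    "imagery": {"model": "Midjourney",  "token_budget": 5_000_000},
}

def swap_model(roster, role, new_model):
    """AI doesn't take your job; it takes the previous AI's job."""
    old = roster[role]["model"]
    roster[role]["model"] = new_model
    return f"fired {old}, hired {new_model} for {role}"

print(swap_model(roster, "coding", "NextModel"))
print(roster["coding"])  # budget stays with the role, not the model
```

The point of the structure is that budgets attach to roles, so models are interchangeable line items rather than fixed dependencies.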
- ETErik Torenberg
I want to
- 46:01 – 49:19
The SaaS Apocalypse: Real or Overblown?
- ETErik Torenberg
zoom out a, a couple more talks before we go. One is the SaaSpocalypse. I'm curious what your mental model is for all these, uh, S- SaaS companies, um, are they... You know, some people say, hey, they've no... Their moats have gone away. They have no, you know, code moat, they have no, no data moat, no, no more UI moat, and, um, now there's gonna be AI native companies that sort of, you know, take up a big chunk of what, what, what they do, like, you know, f- Figma, you know, who we're, we're invested in, I'm personally invested in. You know, some people s- are bullish as an example just because it's founder-led and, and they'll continue to innovate. Some people say, "Hey, i- is there a role for a designer in the same way that there used to be? Now it fundamentally changes," and, you know, what, what does that do to collaboration tool, tool, tools like that? What, what, what is your, your thought on, on the SaaSpocalypse? Are, is everybody on the conveyor belt, uh, on the, on the way to the guillotine? Um, h- how, how do you think about that?
- BSBalaji Srinivasan
I, I don't think so because I think if they're smart, then the thing that AI can't do is distribution, right? So if you have Notion, you have Figma, you have now Replit, and so on and so forth, you've got all these people, and boom, you can ship with AI faster, you know, features to them, right? And, um, so in that sense, I don't believe in the SaaSpocalypse. I think you might still see SaaS under pressure from people who can clone the interface quickly. That is true. I think people will build local versions. That is true. I think people may not want their data on remote servers. They might want desktop versions with local data so they can... Like for example, uh, Obsidian is gonna become more of a contender versus Notion because the markdown files, there's a network effect on data when it's local and you can analyze the whole thing. Like local data, you get compounding data, right? So but the, so, so in a, in the naive sense that, oh, anyone can clone anything and so therefore, you know... It, it just doesn't work like that. Like if you set up, if you cloned all of Facebook's code and you set up facebook2.com, right, or instagram2.com, who's gonna log into that? Right? You could literally have every single thing coded there, but your s- your ad rates are gonna be far lower because no one's gonna log into it, right? The distribution. That, that's like a thought experiment to say if you just clone the whole thing, you still have to get the distribution for it. And so it's not just the cloning, it's execution. Now, with that said, like there's certain kinds of things like, let's say, NetSuite, right, which suck that... But they're complicated, where I think it is true that if they suck at execution, or rather may I say is they suck. Like, I hate the product [chuckles] . Put it like that, right? Xero's better, but, you know, like, sorry, NetSuite. Okay, you're a big company. You won't, you have your feelings hurt. 
It's very rare that I ever say any product sucks because, um, I don't wanna hurt anybody's feelings, so hopefully I didn't. Strike that from the record. Fine. NetSuite's product could be improved. Okay? Um, so, uh, something like that, which is like sort of a vulnerable incumbent that's just milking and that hasn't done anything for a while, yes, I think they can get disrupted. But I'm not sure that it's like, uh... I don't think it's quite like, oh, everybody on BlackBerry is gonna die because iOS is taking over. I don't think it's quite like that. 'Cause I think AI can accelerate a SaaS company just like it can accelerate a disruptor. I think it kind of accelerates both.
- ETErik Torenberg
Yeah. O-o-one last thing and we'll get to Zodle too.
- 49:19 – 1:05:29
What happens if AI companies get bigger than governments?
- ETErik Torenberg
Anthropic. Uh, what happens... Let's say Anthropic, you know, becomes a multi-trillion dollar company, right? Um, like how much leverage do they have, or just even private companies in general o-over... What is the relationship between them and governments? Are they like hiring their own militaries at some, some point? What does it look like when these companies become, uh, you know, 10X bigger, you know, 50X, when AI really achieves its, its potential and these companies are bigger than, than the biggest countries?
- BSBalaji Srinivasan
So I think that at least that specific company, while it executes very well, um, I am skeptical as to whether they're executing well, let's call it, politically. Um, and so because of that... Like, ultimately, at the very largest scale, markets are political. Like, for example, there's an entrepreneur, they raise from a VC, who raises from an LP, who's often a sovereign fund or a pension fund, and they're under a state, and they're under the rules-based order, right? So, like, there's certain things at the macro level that you don't perceive because one thinks of them as constants, but they become variables. Unless one is very, very savvy, one doesn't see that those things could change. Like, one thing I think about, uh, the Silicon Valley AI companies is they're actually scalar rather than vector thinkers. They're only modeling AI disruption, and they're not modeling all the other simultaneous singularities, all the political singularities that are happening, all the things like, you know, solar mooning and stuff like that, right? And why are those things important? Because they change the leverage of political factions, which in turn means their world model is incorrect. If you're only extrapolating out AI and you're not extrapolating out all the other things that are either going vertical or going down like this, then they don't have a proper model of the future. And that's as vague as... I'll be much more precise on my own blog, um, but that's as PG as I... Let's say, that's how I can say it without pissing anybody off. Just go to x.com/balajis, and you'll see what I mean by that, right? But the tl;dr is, I think the American AI companies, as much as they've given to the world, and I like them, are basically thinking all nation-states continue to exist in their current form, and the only disruption is AI. Like, they still model it as America versus China, for example.
They don't model internal things, internal issues. They think the reserve currency sticks around. They think all these things stick around, right? Um, they aren't taking a multivariate approach, in my view. That's their weakness. They have so many strengths, but that's their big weakness. So I don't think that in that form they're gonna get to trillions. In fact, I think the counterattack on them is gonna be so dramatic that it might be that you just have decentralized AI. Like, American AI companies, for example, the copyright stuff, right? There's a huge backlash building against that. Whereas the Chinese or the decentralized models can just do anything, Hollywood anything, right? Potentially. So Pirate Bay kind of AI is actually more free. The less profitable AI is also the less copyright-constrained AI, and it might be better AI, you know? So just things to think about. I think, uh, you know, things compound until they don't, and they start hitting sigmoidal constraints and often backlash constraints like this, right? So I think that's what they're not modeling.
- ETErik Torenberg
Yeah.
- BSBalaji Srinivasan
Political constraints.
- ETErik Torenberg
Ma-makes sense. Okay, let's, let's get to Zodle.
- BSBalaji Srinivasan
Zodle. All right. Now this is what I care about. Basically, um, you know, AI is the attack, but ZK is the defense. So what I mean by that is zero knowledge, like, you know, what the transformer is to AI, zero knowledge is to cryptography and, um, Zodle is a Zcash-powered mobile wallet, um, that is basically c- fully encrypted Bitcoin, okay? This is 30 years of cryptography. This is basically what Milton Friedman wanted decades ago. There's actually this great clip.
- SPSpeaker
The one thing that's missing but that will soon be developed is a reliable e-cash, a method whereby on the internet you can transfer funds from A to B without A knowing B or B knowing A. The way in which I can take a $20 bill and hand it over to you, and there's no record of where it came from. A-and you, you may get that without knowing who I am.
- BSBalaji Srinivasan
Mm-hmm.
- SPSpeaker
That kind of thing will develop on the internet, and that will make it even easier for people to use the internet.
- BSBalaji Srinivasan
Basically, that is what, uh, Milton Friedman predicted almost 30 years ago, okay? This is, um, in the '90s, okay? It was when the internet was just rising. And Zodle is the incarnation of that, okay? Because zero knowledge proofs, which basically mean anybody can prove anything without revealing anything else, were developed, and then they were commercialized in the form of Zcash, scaled with zero-knowledge proofs for scaling Ethereum, with ZK-rollups and things like that. And then they were made efficient, so you could do them on mobile. And then finally, Apple and Google lightened up on crypto apps on mobile. And so finally, you can teleport arbitrary amounts of money around the world. And so this round, we just led this with, uh, you guys, a16z crypto, me, the Winklevosses, um, Paradigm, uh, Coinbase, uh, Haseeb Qureshi of Dragonfly, as you know, a large fund, um, and, you know, a bunch of other, uh, great people. Um, and... Arthur Hayes also, who's the, uh, you know, former BitMEX CEO. And the reason that this is super, super, super important... You know, you can install this on web, or on iOS or Android, right? The reason this is so insanely important: there's really only five crypto assets that I've spent more than 1,000 hours on: Bitcoin, Ethereum, Solana, USDC, Zcash. And I actually think Zcash is maybe the most important of them in the years to come. Why? So let me say at least my kind of thesis as of right now on fiat, gold, digital gold, and digital cash, meaning Zcash, right? So I think fiat will be around, particularly among Eastern states, because Eastern states are broadly higher trust. So that's not just China, but it's like India and Southeast Asia, the ASEAN countries, and so on. Bitcoin...
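[Editor's note] "Anybody can prove anything without revealing anything else" has a concrete shape. Here is a toy Schnorr-style proof of knowledge of a discrete log, made non-interactive with a Fiat-Shamir hash. The parameters are deliberately tiny and insecure; this illustrates only the structure of such a proof, not Zcash's actual construction (Zcash uses far more sophisticated zk-SNARK circuits).

```python
import hashlib
import random

# Tiny demonstration group: p = 2q + 1 with q prime; g = 4 generates
# the order-q subgroup. NOT secure -- illustration only.
p, q, g = 1019, 509, 4

x = random.randrange(1, q)   # prover's secret
y = pow(g, x, p)             # public value: y = g^x mod p

# Prover: commit to a random nonce, derive the challenge by hashing
# (Fiat-Shamir), then respond. The response s leaks nothing about x
# on its own because r is fresh and random.
r = random.randrange(1, q)
t = pow(g, r, p)
c = int(hashlib.sha256(f"{t}|{y}".encode()).hexdigest(), 16) % q
s = (r + c * x) % q

# Verifier: checks g^s == t * y^c (mod p), never learning x.
valid = pow(g, s, p) == (t * pow(y, c, p)) % p
print(valid)  # True
```

The check works because g^s = g^(r + cx) = g^r · (g^x)^c = t · y^c, so only someone who knows x can produce a matching s, yet the transcript (t, c, s) reveals nothing about x.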
So then physical gold. Gold bricks are also very popular in the East. And Westerners often like gold, but they'll buy the instrument, like, you know, right? And there's gold.tether, you know, .io, so Tether has a gold-backed stablecoin, which is actually at 3.7 billion. So that's cool. XAUT is pretty cool. You can check that out. You have to trust Tether's redemption, but Tether's got a pretty good track record now over 10 years with USDT and so on. So XAUT is cool. Fine. So fiat will continue, I think, to have its role, uh, just like the desktop continues. You know, the desktop continues, you know, 30 years later. Windows and Apple are still releasing things. It's still valuable. Some of the action's moved away from it, but the desktop continues, still a large business. So fiat continues among Eastern states. Gold, physical gold, is more popular in the East because you can secure it more. There's gonna be more stability. XAUT may be what's popular in the West. Now we come to Bitcoin. What is my view on Bitcoin as of March 2026? Bitcoin has become provable global institutional collateral, okay? I think Bitcoin is less of a currency for individuals now. It's become so accepted by institutions and so centralized, with BlackRock and Saylor and so on and so forth, and Bukele and many countries adopting it and whatnot, that it has a unique thing. See, when you say there's a certain number of gold bricks in Fort Knox, even giving a video of that can now be faked very, very realistically with AI, right? But what can't be faked is what Bukele does, where he posts, "I have this public address with this much BTC, and watch, I'm gonna move it to this address." Right? That is something which, so long as it's actually Bukele's Twitter account, which there's some degree of proof on, you know, because it's been around pre-AI or whatever...
So long as you believe that, and that's the one piece you have to believe, because you have to start thinking about what am I taking as a premise, right? He can post, "I have the coins at this address. Here's the address I'm gonna move it to. When I move it, I have proven I have custody." It's proof of reserve, right? You can also sign a message with that private key. You don't even have to move it. The point being, that is provable global institutional collateral. He can cheaply prove, to anybody in the world, that he has this amount of Bitcoin. You cannot do that for physical gold bricks. In a lower-trust world, especially an online world, that's very valuable, because everything... Gold audits, videos of gold audits, can now be faked with AI. But with provable global institutional collateral, now institutions can prove they have the BTC to each other, okay? And they can do so across borders. So the transparency of Bitcoin, in the sense of all assets being on-chain, becomes valuable. Now, the thing about this is, with the advent of AI, Chainalysis will be there for everybody, right? Everybody can do blockchain analytics. This is just, like, changing the balance of power. It used to be that only Chainalysis could really do that at the scale that it can. Now it's becoming much easier to do. And so a lot of Bitcoin use will be de-anonymized over time. And so if you're running a transparent blockchain, it becomes an institutional blockchain, because only an institution can survive that degree of transparency. Like, individuals can't survive being tracked for everything, but institutions are... It's like a public company. It's supposed to be tracked, right? It's robust enough that it's meant to be tracked in a certain way. It's designed to be tracked, right? An individual person is not meant to be public, but a corporation can be, right? It's funny to put it that way.
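[Editor's note] The Bukele-style proof of reserve sketched above can be modeled as a toy. All names and amounts here are hypothetical, and the "ledger" is just a dict standing in for public on-chain balances: the holder pre-announces a move, executes it, and any observer can verify it happened.

```python
# Public on-chain balances (toy stand-in for a real blockchain state).
ledger = {"addr_claimed": 6_000, "addr_target": 0}  # amounts in BTC, hypothetical

def announce_and_move(ledger, src, dst, amount):
    """Holder pre-announces the move publicly, then executes it on-chain.
    The move only succeeds if the claimed balance is really there."""
    assert ledger[src] >= amount, "claimed balance not actually there"
    ledger[src] -= amount
    ledger[dst] += amount

def verifier_sees_proof(ledger, dst, amount):
    # Anyone watching the chain can confirm the pre-announced move landed.
    return ledger[dst] >= amount

announce_and_move(ledger, "addr_claimed", "addr_target", 6_000)
print(verifier_sees_proof(ledger, "addr_target", 6_000))  # True
```

The real mechanism adds what the toy omits, signatures proving key control, so (as noted in the conversation) signing a message with the address's private key proves custody without even moving the coins.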
There's a private individual, there's a private company, and there's a public company. But I guess you could say, oh, there's also a public figure. People don't like being public figures, but there's kind of an equivalent there, right? A public figure, maybe some of their stuff is tracked, but they don't want everything to be tracked. A public company, maybe all their stuff is tracked. Fine. Provable global institutional collateral. There's another thing, which is that this way of thinking about what Bitcoin is solves some of the major issues. Quantum, right? Nick Carter's put out these pieces on it. Let's say Nick Carter's right, and I think he might be right, that quantum is an underappreciated threat that Bitcoin Core developers aren't taking seriously. Even if they rolled out a fix tomorrow, it would still be a multi-month migration process, because with ECDSA addresses, everybody has to manually send their assets from their old address to a new address, okay? So only so many people, whatever, 100,000, can move those assets in a given day. However, if you look at the Bitcoin rich list, Bitcoin is so top-heavy, right? It's got these institutional addresses such that, uh, you have to do the math, but probably a few million addresses all moving their funds would move like 99% of the Bitcoin in a few days. And so Bitcoin as digital gold actually is quantum resistant. It's Bitcoin as digital cash that isn't, right? Meaning a million institutions all moving their assets can be done in a few days, but a billion people all moving, like, five bucks or whatever can't be done in any reasonable amount of time, okay? So everybody who can't move then gets quantumed, and anybody who can doesn't, but all the assets are concentrated with the big guys. With me? Right. And this also extends to seizure. Like, will all the centralized Bitcoin on Coinbase's servers, Saylor's servers, et cetera, get seized? I think it's quite likely.
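The "do the math" step above can be sketched as a quick back-of-envelope calculation. A minimal sketch, where the throughput figure and address counts are illustrative assumptions (not measured on-chain data), just to show why top-heavy holdings migrate fast while a billion small holders cannot:

```python
# Back-of-envelope sketch of the quantum-migration throughput argument.
# TXS_PER_DAY is an assumed usable on-chain throughput; the address
# counts are rough stand-ins for "a few million institutional addresses"
# versus "a billion individual holders."

TXS_PER_DAY = 300_000  # assumed transactions Bitcoin can clear per day

def migration_days(addresses: int, txs_per_day: int = TXS_PER_DAY) -> float:
    """Days needed if every address must send one transaction
    moving its funds to a new quantum-safe address."""
    return addresses / txs_per_day

# "Digital gold": a few million top-heavy institutional addresses
institutional = migration_days(3_000_000)

# "Digital cash": a billion individuals each moving a small balance
individual = migration_days(1_000_000_000)

print(f"institutional migration: ~{institutional:.0f} days")
print(f"individual migration: ~{individual / 365:.0f} years")
```

Under these assumptions the institutional migration finishes in about ten days, while moving a billion small addresses would take on the order of a decade, which is the asymmetry behind "digital gold is quantum resistant, digital cash isn't."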
I think it eventually gets seized in some exigent circumstance. And so it becomes something that, I think, only an institutionally blessed entity can hold and send, right? Provable global institutional collateral. This is a different vision than what people wanted, but it's actually still a valuable thing. What it leaves open is the individual digital cash case, right? 'Cause gold is big bricks that are moved in Brink's trucks or the equivalent thereof, infrequently, in large denominations, between institutions, right? It's the high-powered backend money. It's not really meant for individuals. Cash is the opposite: it's meant for individuals more than it's meant for institutions. So Zcash takes over the role of digital cash. That's fungible, private, scalable with Tachyon, which is coming, and quantum safe, okay? It's more quantum safe, right? And it's simple also. Zcash is probably not ever gonna do smart contracts. It's gonna keep it really simple. Why? Because if you take Bitcoin, you can innovate in one direction, which is programmability, and that's Ethereum, Solana, and so on. You can innovate in the other direction, which is privacy, and that's Zcash. To get to private programmability, you're actually stacking those two together, and it's actually quite hard. It opens up all these attack surfaces and so on. So just scale Zcash first. And then, you know, there's Aztec, there's Aleo, there's all these other private smart contract chains. I wish them the best. I want them to have a non-zero-sum view of the world. They're taking on a more complicated problem. In theory, they can just do the same thing Zcash is doing, which is private transactions. In practice, if you remember Facebook in the 2000s, people said, "Why does Twitter exist? Facebook has status updates." Like, one feature of Facebook is all of Twitter. Why does Twitter exist? Sometimes that's a good argument, by the way.
That's why, you know, Steve Jobs told Drew Houston, "Dropbox is just a feature." [chuckles] Right? I mean, it's funny, Dropbox is a great company and so on, but if iCloud was Dropbox, it'd probably be better. Both would be better off for it. iCloud is kind of eh, and Dropbox doesn't have as much distribution as it would as part of a big operating-system bundle. So sometimes people are half right, half wrong. Dropbox is a great company, but it might've been bigger in percentage-value terms if they had been Apple's cloud services, basically, right? But okay. Point is, it's hard to say whether something is just a product or a feature, but my strong intuition is that, just like Twitter, simplicity makes Zcash its own thing, right? Simple, scalable, billion-person, private digital cash has been the dream for 30 years, and we're finally there. So zodl.com, install zodl.com. By the way, I'm not a trader. I just don't care about trading. I'm early on platforms and infrastructure. There are things you have to not care about. In order to care about things, you have to not care about things. So there are very, very few things I talk about. Also, Zcash has been around for 10 years. Even the toxic waste setup ceremony, that's gone. That got fixed cryptographically. So it's unusual: it's been around 10 years, it's got a security track record, it's got a decentralized base of holders, and the cryptography works.
- ETErik Torenberg
Love it. That's a great place to wrap a wide-ranging conversation on what's happening in AI and crypto. As always, Balaji, fantastic conversation. Until next time.
- BSBalaji Srinivasan
Yes. And oh, by the way, if you're in Singapore, Malaysia, or anywhere, come visit ns.com and Network School. We're scaling, and we'll talk about that too next time, maybe.
- ETErik Torenberg
Yeah. Love to see all the progress there. Amazing what you guys are doing. Excited to be involved in it in a small way, and, yeah, until next time.
- BSBalaji Srinivasan
Okay. Thank you.
Episode duration: 1:05:44