
Amjad Masad & Adam D’Angelo: How Far Are We From AGI?

Adam D’Angelo (Quora/Poe) thinks we're 5 years from automating remote work. Amjad Masad (Replit) thinks we're brute-forcing intelligence without understanding it. In this conversation, two technical founders who are building the AI future disagree on almost everything: whether LLMs are hitting limits, whether we're anywhere close to AGI, and what happens when entry-level jobs disappear but experts remain irreplaceable. They dig into the uncomfortable reality that AI might create a "missing middle" in the job market, why everyone in SF is suddenly too focused on getting rich to do weird experiments, and whether consciousness research has been abandoned for prompt engineering. Plus: why coding agents can now run for 20+ hours straight, the return of the "sovereign individual" thesis, and the surprising sophistication of everyday users juggling multiple AIs.

Timestamps

00:00 Introduction
00:41 The Bearishness Paradox: "I don't know what people are talking about"
04:25 "Functional AGI" - Brute Forcing Your Way to Automation
11:18 "We are in a human expertise regime"
15:31 The Weird Equilibrium: Automating Entry-Level but Not Experts
17:22 The Expert Data Paradox
24:44 The Sovereign Individual: A Prediction Framework for the AI Era
28:51 "Vastly increased what a single person can do"
45:04 "It's gonna be the decade of agents"
49:01 Managing Tens of Agents in Parallel
52:56 "I actually think vibe coding is unbelievably high potential"
58:47 Claude 4.5's Strange New Awareness

Resources

Follow Amjad on X: https://x.com/amasad
Follow Adam on X: https://x.com/adamdangelo

Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!

Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Amjad Masad (guest) · Erik Torenberg (host)
Nov 7, 2025 · 1h 2m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–0:41

    Introduction

    1. SP

      Nothing seems fundamentally so hard that it couldn't be solved by the smartest people in the world working incredibly hard for the next five years.

    2. AM

      Humanity went through the agri-agricultural revolution and the Industrial Revolution. We're going through another, another revolution. We will not be able to c-call it something. It's only future people who will call it something. But we are going through something.

    3. SP

      The number of solo entrepreneurs-

    4. AM

      Yeah

    5. SP

      ... that this technology's gonna enable, it's vastly increased what a single person can do.

    6. AM

      For the first time, opportunity is massively available for everyone. Just the ability for more people to be able to become entrepreneurs is-

    7. ET

      Yeah

    8. AM

      ... is massive.

  2. 0:41–4:25

    The Bearishness Paradox: "I don't know what people are talking about"

    1. ET

      Adam, Amjad, welcome to the podcast.

    2. AM

      Thank you.

    3. SP

      Yeah. Thanks for having us.

    4. ET

      So a lot of people have been throwing cold water over LLMs lately. So some general bearishness, people talking about the limitations of, of LLMs, why they won't get us to AGI. Well, maybe, uh, what we thought was just a couple years away is now maybe ten years away. Adam, you seem a bit more o-optimistic. Why don't you share your broad general overview?

    5. SP

      Yeah, I mean, I, I actually hon-honestly, I don't know what people are talking about. I think, I think if you look a year ago, the world was very different. And so just judging on how much progress we've made in the last year with things like reasoning models, um, things like the improvement in code generation ability, um, the improvements in video gen, it seems like things are going faster than ever. And so I, I don't really understand where the, the kind of bearishness is coming from.

    6. ET

      Well, I think there's some sense that we hoped that they would be able to, um, replace all of tasks or all, all jobs, and maybe there's some sense that it's like middle to middle, but not end to end, and maybe, you know, l-labor won't be automated in the same way that we, we thought it would on the same timeline.

    7. SP

      Yeah, I mean, I, I don't know what the previous timelines people were, were thinking were, but, you know, I think, I think if you're, if you go five years out from now, we're in a very different world. I think, think a lot of what's holding back the models these days is not actually intelligence. It's getting the right context into the model so that it can be able to, to use its intelligence. Um, and then there's some things like computer use that are still not quite there, but I, I think we'll almost definitely get there in the next year or two. And when you have that, I, I think we're gonna be able to automate a large portion of what people do. I don't think-- I don't know if I would call that AGI, but I, I think it's gonna satisfy a lo- a lot of the critiques that people are making right now. I, I think they won't be valid in, in a year or two.

    8. ET

      What is your definition of AGI?

    9. SP

      I don't know. Everyone, everyone thinks it's something different. [laughing] I think... I mean, you know, o-one, one definition I, I, I kinda like is, um, if you say that you have a remote worker, a, a human, any, any job that can be done by someone whose job can be done remotely, um, that, that's AGI. You know, you can, you can then say is it, does it have to be better than the best person in the world at every single job? Some people call that ASI. Um, does it have to be better than teams of people? You can, you can argue with those different definitions, but I, I think once we get to be better than a typical remote worker at the job they're doing, we're living in a, a very different world, and I think that's, that's effectively what people-- That, that, that's a very useful anchor point for, for these definitions.

    10. ET

      So in summary, you're not sensing the same limitations of LLMs that other people are? So y-you think there's a lot more room that LLMs can, can go from here? We don't need like a brand-new architecture or other breakthrough?

    11. SP

      I don't think so. I mean, I, I think there are certain things like memory and learning, like continuous learning, that are not very easy with the current architectures. I think even those you can sort of fake and maybe you're, we're gonna be able to, to get them to work well enough. Um, but we, we just don't seem to be hitting any kind of, of limits. The, the progress in reasoning models is incredible, and I think the progress in, in pre-training is, is also going pretty quickly. Maybe not as quickly as people had expected, but certainly fast enough that you can expect a lot of progress over, over the next few years.

    12. ET

      Amjad, what, what's your, what's your reaction

  3. 4:25–11:18

    "Functional AGI" - Brute Forcing Your Way to Automation

    1. ET

      hearing all this?

    2. AM

      Yeah, I, I, I think I've been pretty consistent and consistently right, perhaps. Uh. [laughing]

    3. ET

      Dare I say. [laughing]

    4. SP

      Consistent with yourself or consistent with what I'm saying?

    5. AM

      With, with, um, with myself and with, I think, how things are unfolding that, uh, you know, I started being a bit of a more public doubter of, of things y- around, uh, the time when the AI safety discussion was, uh, was reaching its height back in maybe '22, '23. Um, and I, I thought it was important for us to be realistic about the progress, um, because, uh, you know, otherwise we're gonna scare politicians, we're gonna scare everyone, you know. Uh, DC will descend on Silicon Valley. We'll- They'll shut everything down. So m-my criticism of the idea of, like, AGI 2027, you know, that paper that I think it's called Alexander or someone else wrote-

    6. ET

      Yeah

    7. AM

      ... uh, and then, um, and the situational awareness and all this, uh, hype papers that are not really science, they're just vibe. Here's what I think will happen. Uh, the, you know, the whole economy will get automated. You know, jobs are, uh, are gonna disappear. All of that stuff is that, again, it's just I think, um, it's unrealistic. It is not following the kind of progress that we're seeing, and it is, uh, gonna lead to just bad policy. So m-my view is LLMs are amazing machines. Uh, I don't think they are exactly human, uh, intelligence equivalent. You can still trick LLMs with things like they might have solved the strawberry one, but you can still, you know, uh, trick it with like single sentence questions like how many Rs are in this sentence? The-- I think, I think I tweeted about it the other day, which was like three out of the four, four models couldn't-- didn't get it even. Um, and then GPT-5 with high thinking had to think for like fifteen seconds in order to get a question like that. So, uh, LLMs are, are, I think, a different kind of intelligence than, uh, what humans are. Uh, and also, uh, th-they have, they have clear limitations, and we're papering over the limitations, and we're kind of working around them in all sorts of ways, whether it's in the LLM itself and the training data or, uh, in the infrastructure around it and everything that we're, we're doing to make them work. Um, but that, that makes me less optimistic that we're-- we've, we've cracked intelligence. And I think once we truly crack intelligence, um, it-it'll feel a lot more scalable and that you can, uh, and that the, the idea behind the Bitter Lesson will actually be true, and that you can just pour more, um, more power, more resources, more compute into them, and they'll, they'll just scale more naturally. I think right now, uh, th-th-there's a lot of manual work going into making these models better. 
In the pre-- in the true pre-training scaling era, you know, GPT 2, 3, 3.5, maybe up to 4, um, it, it, it felt like y-you, you can just, uh, put more internet data in there and just, it, it just got better. Uh, whereas now it feels like there's a lot of labeling work happening, there's a lot of contracting work happening. A lot of these, uh, contrived RL environments are getting created in order to make, uh, LLMs good at coding and becoming co-coding agents. And they're gonna go do that. I think the news from OpenAI, that they're gonna do that for, for investment banking. And so I, uh, try to coin this term I call functional AGI, which is the idea that you can automate a lot of aspects of a lot of jobs by just going in and, like, collecting as much data and creating these RL environments. It's gonna take enormous effort and money and data and all of that in order to do. And I think we're, uh, yeah, I, I agree with Adam that, you know, things are gonna get better, uh, one hundred percent over the next three months, six months. Claude 4.5 was a huge jump. Uh, I don't think it's, uh, appreciated how much of a jump it was over, over 4. There's really, really amazing things about Claude 4.5. So th-there is progress. We're gonna continue to see progress. I don't think LLMs as they currently stand are on a, on the way to AGI. And my definition for AGI is, I think the old school RL definition, which is, um, a machine that can go into any environment and learn efficiently in the same way that a human could go into-- uh, you can put a, put a human into a, a, a pool game and, you know, within two hours they can, like, shoot pool and be able to do it. Uh, right now, there's no way for us to have machines learn skills like that on the fly. 
You know, everything requires enormous amount of data and compute and time and effort and, and, and, uh, and more importantly, it requires human expertise, which is the non-Bitter Lesson, [chuckles] uh, idea, which is, you know, uh, human expertise is not scalable. And we are reliant-- today we are in a human expertise regime.

    8. SP

      Yeah, I mean, I, I think that humans are certainly better at learning a new skill from a limited amount of data in a new environment than the current models are. I think that on the other hand, human intelligence is the product of evolution, which used a massive amount of effective computation. And so this is a different, this is a different kind of intelligence. And so because it didn't have this, this massive equivalent of evolution, it just has pre-training for, for that which is not as good. You then need more data to learn everything, every new skill. But I guess I think in terms of like the functional consequence, so like if, if you're like when, when will the world, when will the job landscape change? When will the ec-economic growth hit? I think that's gonna be more a function of when we can produce something that is as good as human intelligence, even if it takes a lot more compute, a lot more energy, a lot more training data. We could just put in all that energy and still get to software that's as good as the average person at doing a typical job.

    9. AM

      So I don't disagree with that. And, and that's-- it, it is-- it feels like we're in a brute force type of regime, but, but maybe that's fine and-

    10. SP

      Yeah.

    11. AM

      Yeah.

    12. ET

      So where's the disagreement then? I guess. So there, there's agreement on that. Where, where is the d-divergence perhaps?

  4. 11:18–15:31

    "We are in a human expertise regime"

    1. AM

      I, I don't think that we'll get to the singularity or, uh, I don't think that-- I don't think we're gonna get to the next level of human civilization, uh, until we, um, we, we, we crack the true nature of intelligence. Like u-until we understand it and have algorithms that are actually, uh, not brute force.

    2. SP

      And, and you think those will take a long time to come?

    3. AM

      Uh, I, I'm sort of agnostic on, on, on that. It just does, it does feel like the LLMs, uh, in a way are distracting from that 'cause, um, all the talent is going there. Um, and therefore there's less talent that are trying to do basic research on, on intelligence.

    4. SP

      Mm-hmm. Yeah, and at the same time, a huge portion of talent is going into AI research that previously wouldn't have gone into AI at all.

    5. AM

      Mm-hmm.

    6. SP

      And so you have this, this massive industry, massive funding, you know, funding compute, but also funding human employees. And that is-- I guess I-- nothing seems fundamentally so hard that it couldn't be solved by the smartest people in the world working incredibly hard for the next five years on it.

    7. AM

      But, but basic research is, is different, right? Like trying to, um, like trying to get into the fundamentals and as opposed to like, there's a lot of industry research, like how do we make these things more useful, uh, in order to generate profit? And, um, so I, I think that's, that's different. And often, I mean, Thomas Kuhn, this philosopher of science, talks a lot about how these research programs end up, you know, becoming like a bubble and like sucking all the attention and ideas and like think, think about physics and how there are like these industry of, I don't know, string theory and like it pulls everything in and there's sort of a plug-- black hole of progress and [chuckles] you know.

    8. SP

      Yeah, yeah. No, and, and I think, I think one of his things was like, you gotta wait until the current people retire-

    9. AM

      That's right

    10. SP

      ...to even have a chance at changing the, the paradigm.

    11. AM

      He's very pessimistic about paradigms. Yes. Yeah.

    12. SP

      But I, I guess I feel like the current paradigm, this is maybe our disagreement. I think the current paradigm is pretty good.

    13. AM

      Mm-hmm.

    14. SP

      And I think we're nowhere near the sort of like diminishing returns-

    15. AM

      Mm-hmm

    16. SP

      ...of continuing to push on it.

    17. AM

      Mm-hmm.

    18. SP

      And I bet, yeah, I, I guess I would just bet that you can keep doing different innovations within the paradigm to, to get there.

    19. AM

      Mm-hmm. So let, let's say we continue to, to brute force it, um, we're able to a-a-automate a bunch of labor. Do you estimate that GDP is, is something, you know, four or five percent a year, or are we going up to ten percent plus, or what does it do to the economy?

    20. SP

      I think it, it depends a lot on exactly where we get to and what, what AGI means. But so, so let's say you have, let's say you have LLMs that with, with an amount of energy that costs one dollar an hour, they could do a job of any human. Let's just, just, just, let's just take that as a, as a theoretical point you could get to. I think you're gonna get to much more than four to five percent GDP growth in that world. I think the issue is you may not get there. So it may be that the LLMs that can do everything a human can do actually cost more than humans do currently, or they can do kinda like eighty percent of what humans can do, and then there's this other twenty percent. Um, and, and I, I think-- I do think at some point you get to LLMs can-- they can do everything, every single thing a human can do for cheaper. Like, I, I, I don't see a reason why we don't eventually get there. That may take five, ten, fifteen years. But I think until you get there, we're gonna get bottlenecked on the things that the LLMs still can't do, or the, you know, building enough power plants to, to supply the energy or there are other bottlenecks in, in the supply chain.

  5. 15:31–17:22

    The Weird Equilibrium: Automating Entry-Level but Not Experts

    1. AM

      One thing I worry about, uh, is, uh, the deleterious effect of LLMs in the economy in that, say, LLMs, uh, you know, e-effectively automate, uh, the entry-level job, but not, but, but, but the, but not the expert's job, right? So, um, let's take, uh, you know, QA, q-quality assurance, um, and, uh, it, it's, it's so good, but, uh, there's still all these long tail even- uh, you know, events that it doesn't handle. And so you have a lot of, uh, really good QA people now, like managing like hundreds of agents, and you effectively increase productivity a lot. Uh, but they're not hiring new people because the agents are be-better than new people. Uh, and, and, and that, that feels like a weird equilibrium to be in, right?

    2. SP

      Yeah.

    3. AM

      And I don't think that many people are thinking about it.

    4. SP

      Yeah. Yeah, for sure. Yeah. No, I, I, I think that's, you know, I think it's happening with, um, CS majors-

    5. AM

      Mm-hmm

    6. SP

      ...graduating from college. There's just not as many jobs as there used to be. And, and, um, LLMs are a little more substitutable for what they previously would have done, and I'm, I'm sure that's contributing to it. And then it means that you're gonna have fewer people going up that ramp that, you know, companies paid a lot of money to, to employ them and, and, and train them. Um, and so I, I think it's a real problem. I think it's gonna-- I'm guessing you'll probably f-see some kind of-- like that problem also creates a economic incentive to solve the problem.

    7. AM

      Mm-hmm.

    8. SP

      So it may be that there's like more opportunities for companies that can train people or maybe use of AI to, to teach people these things. Um, but for sure that's, that's an issue right now.

  6. 17:22–24:44

    The Expert Data Paradox

    1. AM

      Uh, a-a-another related problem is that, uh, since we're dependent on, uh, expert data in order to train the LLMs and the LLMs start to substitute, um, those workers, but, but, but, you know, at some point there's no more experts 'cause they're all out of jobs and, and, and, and they're equivalent to the LLMs. But if the LLMs is truly dependent on, on labeling data, expert RL environments, then how would they improve beyond that? I think that's something, uh, a question for an economist to really sit down and think about is like, once you get the first tick of automation, I mean, there, there are some challenges there. And so how do you go, how do you go, how do you go to the next part?

    2. SP

      Yeah. I mean, I, I think it-- a lot of it's gonna depend on how good of RL environments can be-

    3. AM

      Mm-hmm

    4. SP

      ... created. So, you know, on the one extreme you have something like AlphaGo, where it's just a perfect environment and you can just blast past expert level. Um, but I think a lot of jobs have limited data that anyone can, can train from. And so I think it'll be interesting to see how, how easy is it for research efforts to, to overcome that, that bottleneck.

    5. AM

      If you had to make a guess on what job category is going to be introduced or explode in, in the future, um, you know, some people say it's like the, you know, everyone's an influencer, you know, or, or are in some sort of caring, uh, field or, um, you know, everyone's employed by the government [chuckles]

    6. SP

      Mm-hmm

    7. AM

      ... some sort of bure-bureaucrat thing or, um, you know, maybe training the AI in, in some way. Uh, you know, as, as more and more things start to get automated, you know, what is your, your guess as to what more and more people start to do? You know, doing art and poetry is, you know, obviously there's hope.

    8. SP

      Yeah. I mean, at, at some point you have everything automated, and then I think people will do art and poetry and, you know, I think there's a data point that the people playing chess is up since computers got better at human-- than, than humans at chess. So I don't think that's a bad world if people are all just kind of free to, to pursue their, their hobbies. Uh, as long as you have some kind of, you know, way to distribute wealth so that some people can afford to, to live. Um, but I, you know, in the near t-- that, that, that's a while away and in the near term-

    9. AM

      Well, like ten, fifteen years out, but you're-

    10. SP

      I, I don't know how much, but yeah. In the, in the-- I'll put it in the at least ten years-

    11. AM

      Yep

    12. SP

      ... range. Um, I, I think in the near term, the job categories that are gonna explode are the jobs that can really leverage AI. And so, so people who are good at using AI to, to accomplish their jobs, especially to accomplish things that the AI couldn't have done by itself. There's just, there's just massive demand for, for that.

    13. AM

      I don't think we're gonna get to a point where you automate every, every job. Uh, definitely not in the current paradigm. I would, uh, I would doubt it happening. I, I, I'm not certain it would ever happen, but definitely not in the current paradigm. Now h-here's what I think. Because a lot of jobs is about servicing other humans. You need to be fundamentally human in order to... You need to be actually human in order to understand what other people want, you know? And so you need to have the human experience. So unless we're gonna, uh, create, uh, human humans, [chuckles] unless the machi- uh, unless AI is actually embodied in the human experience, then humans will always be the generators of ideas in the economy.

    ET

      A-A-Adam, respond to Amjad's point around the human part because you created one of the most, you know, the best wisdom of the crowds, you know, uh, platforms in, in the universe. Um, and now you've gone, you know, all, all in with Poe. Um, what, what are your thoughts on, you know, to what extent will we be relying on, um, humans versus will we be trusting AIs to, you know, be our therapists, be our, you know, caretakers in other ways?

    14. SP

      Humans have a lot of knowledge collectively, and, you know, e-even like one individual person who's an expert and has lived a whole life and had a whole career and seen a lot of things, they, they often know a lot of things that are not written down anywhere.

    15. AM

      Tacit knowledge.

    16. SP

      And, um, ta-ta- you can call it tacit knowledge, but also, also what they're capable of writing down if you did ask them a question. I think there's still an important role for, for people to play in the world by sharing their knowledge, especially when they have knowledge that, that just wasn't otherwise in an LLM's training set. Um, you know, whether they will be able to make a full-time living doing that, I, I don't know. But if that becomes a bottleneck, then, then for sure that's gonna mean that all the sort of like economic pressure goes, goes to that. I don't-- Un- in terms of the like, you know, you have to be human to know what humans want, I don't know about that. So like as an example, I think, I think recommender systems, like the system that ranks your Facebook or Instagram or Quora feed, those recommender systems are already superhuman at predicting what you're gonna be interested in, in reading. Like if, if, if I gave you a task that was like, "Make me a feed that I'm gonna read," like there, there's just no way, no matter how much you knew about me, there's no way you could compete with these algorithms that just have so much data about everything I've ever clicked on, everything everyone else has ever clicked on, what all the similarities are between all those, those different data sets. And so I don't know. You know, it's, it's true that as a human you can kind of like simulate being a human and that makes it easier for you to like test out ideas, and I'm sure that composers and artists are-- th-this is an important part of their, their process for doing work is they like-

    17. AM

      Or chefs or, you know-

    18. SP

      Yeah.

    19. AM

      Yeah.

    20. SP

      They, they produce something and, you know, a chef will-

    21. AM

      Mm

    22. SP

      ... cook something and they taste it.

    23. AM

      Mm-hmm.

    24. SP

      And it's important that they can taste it. But I don't know. You know, they, they just-- they have very little data compared to what AI can be trained on. So, so I, I don't know how that's gonna shake out.

    25. AM

      That's a, that's a good point. I mean, a-and ultimately what recommender systems, uh, are, they're like aggregating all the different tastes and then, uh, sort of finding where you sit in the sort of multidimensional taste vector space and like getting you the best content there. So I guess there's some of that. I think that's more narrow than we think. Like-I-I-- Like, yes, it, it's true in recommender systems, but I'm not entirely sure it's true of, of, of everything. Um, but so m- I, I think the best prediction for where the world is headed, and this is not a endorsement or necessarily like this is where I think the world's headed, because I think part of it is, uh, will be slightly ins- uh, instable-- unstable system.

  7. 24:44–28:51

    The Sovereign Individual: A Prediction Framework for the AI Era

    1. AM

      But I think the sovereign individual continues to be, I think, a really good set of predictions for the future. Although it's not a scientific book or not, it's a very polemic book and, um... But, but the idea is, uh, you know, in the late '80s, early '90s, um, are they economists? I'm not sure. I think they're economists or political science majors. Uh, two people out of the UK, um, wrote this book about trying to predict what happens, uh, when computer technology matures, right? They're like, you know, humanity went through the agri-agricultural revolution and the Industrial Revolution. We're going through ano-another revolution. Uh, clearly. Uh, information revolution, now we call it intelligence revolution, whatever. I think w-we will not be able to c-call it something. It's a future people will call it something. But we are going through something, and so they're trying to predict, okay, what, what happens from here? And what they arrive at is that the, um, ultimately, you're gonna have large swaths of people that are potentially unemployed or economically not, um, contributing, but you're gonna have the entrepreneur, the entrepreneur capitalists gonna be, uh, so highly leveraged because they can spin up these companies with AI agents very quickly. So because they have this-- because they're very generative, they have interesting ideas, they're human, they've, uh, they have interesting ideas about what other people want. They can create these companies very quickly and these products and services, and they can organize the economy in certain ways. And the politics will change because, uh, uh, the, to, you know, today's politics is based on, um, uh, every human being, uh, economically pr-productive. Uh, but when you have only, uh, when you have massive automation and then you have a few entrepreneurs and, uh, very intelligent, generative people are actually, uh, able to be productive, then the political structures also change. 
Um, uh, and so they talk about how the, you know, nation-state sort of s-subsides and instead you go back to, uh, to an era where, um, states are like competing over people, over wealthy people and like they, you know, uh, as a sovereign individual, you can like, uh, negotiate your tax rate with your favorite state. And so it starts to sound like biology a little bit.

    2. ET

      [chuckles]

    3. AM

      And I don't think it is far from where I th- where it might be headed. Now, again, it's, it's not a sort of a value judgment or, or desire, uh, but, but I do think it's worth thinking about when, when people are not the f- the, uh, you know, unit of economic productivity, things have to change, including culture and, and politics.

    4. ET

      Yeah. I, I think there's a question with that book and, and some of this conversation more broadly of like when does the technology reward the, uh, you know, the defender versus the s- the sort of aggregator or something? Or like the, um, when does it incentivize more decentralization versus centraliza- like, uh, I remember Peter Thiel had this quip a decade ago of like, you know, crypto is libertarian, is, is more decentralizing. AI is, you know, communist or, or m-more centralizing. And it, it, um, it's not obvious to me that that, that's entirely accurate, um, on, on either side. AI does seem to empower a, a bunch of individuals, as, as you were saying. And then also, you know, crypto turns out is like fintech or [chuckles] something like stablecoin. You know, uh, it does empower sort of, uh, you know, and nation-states were talking about doing the sort of like, you know, the, the China thing that they were gonna do. So yeah, I think there's an open question as to, you know, wh-wh-which technology leads to who does it empower more? The edges or the, the center? And I think if it empowers the edges, it seems like the sovereign individual is, is... And, and maybe there's some barbell, uh, where it's like both basically the, the big, the incumbents just get much, much, much, much, much bigger and there's like these e-edges. But anyways, that's another-

    5. SP

      Yeah.

    6. ET

      Yeah.

    7. SP

      I'm,

  8. 28:51–45:04

    "Vastly increased what a single person can do"

    1. SP

      I'm very excited for the number of solo entrepreneurs-

    2. ET

      Yeah

    3. SP

      ...that this technology is gonna enable. It's just vastly increased what a single person can do. And there are so many ideas that never got explored, because it's a lot of work to get a team of people together, maybe raise the funding for it, and get the right kind of people with all the different skills you need. Now that one person can bring these things into existence, I think we're gonna see a lot of really amazing stuff.

    4. AM

      Yeah, I get these tweets all the time from people who quit their jobs because they started making so much money using tools like Replit, and it's really exciting. I think, for the first time, opportunity is massively available for-

    5. SP

      Yeah

    6. AM

      ...for everyone.

    7. SP

      Yeah.

    8. AM

      And I think that is, to me, the most exciting thing about this technology, other than all the other stuff that we're talking about. Just the ability for more people to become entrepreneurs is-

    9. SP

      Yeah

    10. AM

      ...it's massive.

    11. ET

      That trend is obviously going to happen. As we look out at the next decade or two, do you think AI is more likely to be sustaining or disruptive in the Christensen sense? To ask it another way: do you think most of the value capture is going to come from companies that were scaled before OpenAI started, or from companies that started after, let's say, 2015 or 2016? Replit still counts as the latter, and so does Quora to some degree.

    12. SP

      So there's a related question, which is how much of that is gonna go to the hyperscalers-

    13. AM

      Yeah

    14. SP

      ... versus everyone else? And on that one, I actually think we're in a pretty good balance, where there's enough competition among the hyperscalers that, as an application-level company, you have choice, you have alternatives, and the prices are coming down incredibly quickly. But there's also not so much competition that the hyperscalers, and labs like Anthropic and OpenAI, are unable to raise money and make these long-term investments. So I actually think we're in a pretty good balance, and we're gonna have a lot of new companies and a lot of growth among the hyperscalers.

    15. AM

      I think that's about right. So the terminology of sustaining versus disruptive comes from The Innovator's Dilemma. It's this idea that whenever there's a new technology trend, there's a power curve. It starts as almost a toy, something that doesn't really work or that captures the lower end of the market. But as it evolves, it goes up the power curve, and eventually it disrupts even the incumbents. Originally the incumbents don't pay attention to it, because it looks like a toy, and then eventually it disrupts everything and eats the entire market. That was true of PCs: when PCs came along, the big mainframe manufacturers did not pay attention. Initially it was, yeah, that's for kids or whatever, we have to run these large computers and data centers. But now even data centers run on PCs, so PCs were just a hugely disruptive force. There are also technologies, though, that come along and really benefit the incumbents and don't really benefit the new players, the startups. I think Adam's right: it's both. And maybe for the first time for a huge technology trend it's both, because the internet was hugely disruptive, but this time it feels like an obvious supercharge for the incumbents, for the hyperscalers, for the large internet companies, while it also enables new business models that are perhaps counterpositioned against the existing ones. Although, you know, I think what happened is everyone read that book, and everyone learned how not to be disrupted.
For example, ChatGPT was fundamentally counterpositioned against Google, because Google had a business that was actually working. ChatGPT was seen as this technology that hallucinates a lot and creates a lot of bad information, and Google wanted to be trusted. Google had the equivalent of ChatGPT internally; they didn't release Gemini until, like, two years after ChatGPT, and by then ChatGPT had already won at least the brand recognition. So, in a way, OpenAI came out as a disruptive technology. But now Google realizes it's a disruptive technology and responds to it. At the same time, it was always obvious that AI was gonna benefit Google: at minimum, its search overviews have gotten a lot better, its whole Workspace suite is getting a lot better with Gemini, their mobile phones, everything gets better. So it seems like it's both.

    16. SP

      Yeah. I really agree, like, everyone read the book-

    17. AM

      Mm-hmm

    18. SP

      ... and that changes what the theory even means.

    19. AM

      Yeah.

    20. SP

      'Cause all the public market investors have read that book, and they're now gonna punish companies for not adapting and reward them for adapting, even if it means they have to make long-term investments. All the management and leadership of these companies have read the book, and they're on top of their game. I also think the people running these companies are, I guess I would say, smarter than the companies from the generation that book was sort of built on. They're at the top of their game, and a lot of them are founder-controlled, so it's easier for them to take a hit and make these investments. So I actually think if you had an environment more like we had in, say, the '90s, this would actually be more disruptive-

    21. AM

      Mm-hmm

    22. SP

      ... than the current hyper-

    23. AM

      Right

    24. SP

      ... hypercompetitive-

    25. AM

      Yeah

    26. SP

      ... world that we're in now.

    27. ET

      One mistake that we as a firm have reflected on over the past few years, though of course I haven't been here for more than just a few months, is this idea that we passed on companies because they weren't going to be the market leader or the category winner. We thought, learning the lessons from Web2, that you have to invest in the category winner: that's where things are going to consolidate, and value is gonna accrue over time. So why do the next foundation model company if the first one already has a head start? But it seems like the market has gotten so much bigger, in foundation models but also in applications, that there are just multiple winners, and they're fragmenting, taking parts of the market that are all venture-scale. I'm curious whether this is a durable phenomenon, but that seems like one difference from the Web2 era: just more winners across more categories.

    28. SP

      I think network effects are playing much less of a role-

    29. AM

      Yeah

    30. SP

      ... now than they did in the Web2 era, and that makes it easier for competitors to get started. There's still a scale advantage, because if you have more users, you can get more data, and if you have more users, you can raise more capital. But that advantage doesn't make it absolutely impossible for a competitor of smaller scale. It makes it hard, but there's definitely room for more winners than there was before.

  9. 45:04–49:01

    "It's gonna be the decade of agents"

    1. AM

      I think Karpathy recently said it's gonna be the decade of agents, and I think that's absolutely right. As opposed to prior modalities of AI: when AI first came to coding, it was autocomplete with Copilot, then it became chat with ChatGPT. Then I think Cursor innovated on the composer modality, editing large chunks of files, but that's it. What Replit innovated is the agent, the idea of not only editing code but provisioning infrastructure like databases, doing migrations, connecting to the cloud, deploying, having the entire debug loop, executing the code, running tests. The entire development lifecycle loop happening inside an agent, and that's gonna take a long time to mature. Our Agent beta came out in September 2024, and it was the first of its kind to do both code and infrastructure, but it was fairly janky, didn't work very well. Then Agent v1 came around December; it took another generation of models, going from Claude 3.5 to 3.7. 3.7 was the first model that really knew how to use a computer, a virtual machine; unsurprisingly, it was also the first computer-use model. And these things have been moving together: with every generation of models, we find new capabilities. Agent v2 improved on autonomy a lot. Agent v1 could run for like two minutes; Agent v2 ran for twenty minutes. Agent 3 we advertise as running for two hundred minutes, because it felt like the progression should be symmetrical, but it actually runs kind of indefinitely. We've had users running it for twenty-eight-plus hours.

    2. ET

      Wow.

    3. AM

      And the main idea there was to put a verifier in the loop. I remember reading a paper from Nvidia about how they used DeepSeek to write CUDA kernels, and they were able to run it for like twenty minutes by putting a verifier in the loop, being able to run tests or something like that. And I thought, okay, what kind of verifier can we put in the loop? Obviously you can put in unit tests, but unit tests don't really capture whether the app is working or not. So we started digging into computer use and whether computer use was gonna be able to test apps. Computer use is very expensive, and it's actually still kind of buggy; like Adam talked about, that's gonna be a big area of improvement that'll unlock a lot of applications. But we ended up building our own framework, with a bunch of hacks and some AI research, and Replit's computer-use testing model is, I think, one of the best. Once we put that into the loop, you can put Replit in high autonomy. We have an autonomy scale: you choose your autonomy level, and then it just writes the code, goes and tests the application, and if there's a bug, it reads the error log and writes the code again, and can go for hours. And I've seen people build amazing things by-

    4. ET

      Yeah

    5. AM

      ... letting it run for a long time. Now, that needs to continue to get better, cheaper, and faster. It's not necessarily a point of pride to run for a lot longer; it should be as fast as possible. So we're working on that. There's a bunch of ideas coming out in Agent 4, but one of the big things is that you shouldn't just be waiting for that one feature that you requested,

  10. 49:01–52:56

    Managing Tens of Agents in Parallel

    1. AM

      you should be able to work on a lot of different features. So the idea of parallel agents is very interesting to us. You ask for a login page, but you could also ask for a Stripe checkout, and then you ask for an admin dashboard. The AI should be able to figure out how to parallelize all these different tasks, or recognize that some tasks are not parallelizable, but it should also be able to merge across the code. Being able to do collaboration across AI agents is very important, and that way the productivity of a single developer goes up by a lot. Right now, even when you're using Claude Code, Cursor, and others, there isn't a lot of parallelism going on. But I think the next boost in productivity is gonna come from sitting in front of a programming environment like Replit and being able to manage tens of agents, maybe at some point hundreds, but at least five to ten agents, all working on different parts of your product. I also think UI and UX could use a lot of work. Right now, you're trying to translate your ideas into a textual representation, like a PRD, right? What product managers do: product descriptions. But it's really hard, and you see it in a lot of tech companies, it's really hard to align on the exact features, because language is fuzzy. So I think there's a world in which you're interacting with AI in a more multimodal fashion: open up a whiteboard, draw and diagram with AI, and really work with it like you work with a human.
And then the next stage of that is having better memory, inside the project but also across projects, and perhaps having different instantiations of Replit agents: this agent is really good at Python data science, because it has all the information, skills, and memories about my company and what it's done in the past. So I'll have a data-analysis Replit agent, and I'll have a front-end Replit agent, and they have memory over multiple projects, over time, and over interactions. And maybe they sit in your Slack like a worker, and you can talk to them. I could keep going for another fifteen minutes about a roadmap that could span three to five years. But this agent phase that we're in, there's just so much work to do, and it's gonna be a lot of fun.
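The "verifier in the loop" idea Amjad describes a few turns back — write code, run it, feed the error log back in, retry — can be sketched in a few lines. This is a toy illustration, not Replit's actual implementation: `toy_generate` and `toy_verify` are hypothetical stand-ins, where a real system would call an LLM and run the app's own test suite or a computer-use tester.

```python
def run_with_verifier(generate, verify, max_attempts=5):
    """Agent loop: generate code, verify it, and feed failures back as context."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        code = generate(feedback)
        ok, log = verify(code)
        if ok:
            return code, attempt      # success: return the code and attempt count
        feedback = log                # the error log becomes next-turn context
    return None, max_attempts

# Toy stand-ins: the first draft has a bug; the "model" fixes it on feedback.
def toy_generate(feedback):
    if feedback is None:
        return "def add(a, b): return a - b"   # buggy first draft
    return "def add(a, b): return a + b"       # corrected after seeing the log

def toy_verify(code):
    ns = {}
    exec(code, ns)                    # run the candidate code
    if ns["add"](2, 3) == 5:          # a unit test acts as the verifier
        return True, ""
    return False, "FAIL: add(2, 3) != 5"

code, attempts = run_with_verifier(toy_generate, toy_verify)
```

The loop terminates either when the verifier passes or when the attempt budget runs out — which is why a stronger verifier (end-to-end app testing rather than unit tests alone) directly extends how long the agent can usefully run.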
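The parallel-agents workflow described above — fan independent feature requests out to concurrent agents, then merge their output back into one project — might look like this minimal sketch. The `agent` function and the dict-of-files "project" are hypothetical stand-ins; real agents would call models and need git-style merging rather than a naive dict update.

```python
from concurrent.futures import ThreadPoolExecutor

def agent(task):
    # Stand-in for an autonomous coding agent working on one feature;
    # it returns the files it "built" for that feature.
    return {f"{task}.py": f"# implementation of {task}\n"}

def run_parallel(tasks):
    project = {}
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        for files in pool.map(agent, tasks):
            project.update(files)     # naive merge: assumes agents touch disjoint files
    return project

project = run_parallel(["login_page", "stripe_checkout", "admin_dashboard"])
```

The naive merge is exactly where the hard part lives: when two agents edit the same file, the orchestrator needs conflict resolution, which is the "collaboration across AI agents" problem Amjad points at.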

    2. ET

      Yeah. I was talking to one of our mutual friends, a co-founder of one of these big productivity companies who leads a lot of their R&D, and he's like, "Man, during the week these days, I'm not even talking to humans as much anymore. I'm just [chuckles] using all these agents to build." So living in the future, to some degree, is already the present.

    3. AM

      There's something interesting about that: are people talking to each other less at companies?

    4. ET

      Hmm.

    5. AM

      And is that a bad thing? I'm starting to think more about the second-order effects of-

    6. ET

      Yeah

    7. AM

      ... of things like that. Will it make it awkward for, again, the new grads? I feel so bad for them. If people are not sharing as much knowledge with each other, or it's not culturally easy to go-

    8. ET

      Yeah

    9. AM

      ... ask for help, because you should be able to use AI agents. There are some cultural forces that I think need to be reckoned with.

    10. ET

      Yeah. I think there are a lot of tough cultural forces for zoomers these days.

    11. AM

      Yes.

    12. ET

      Let's gear towards closing here.

  11. 52:56–58:47

    "I actually think vibe coding is unbelievably high potential"

    1. ET

      Obviously, you guys are focused on running your companies. But to stay current on the AI ecosystem, you also make angel investments. Where are you most excited? We haven't talked about robotics. Are you bullish on robotics in the near term, or on any emerging categories, use cases, or spaces where you're looking to make more investments, or have made some?

    2. SP

      I actually think vibe coding generally is just unbelievably-

    3. ET

      Yeah

    4. SP

      ... high potential. Just the idea that all of, you know, this-

    5. ET

      You think it's underhyped even still?

    6. SP

      I think so.

    7. ET

      Yeah.

    8. SP

      Just opening up the potential of software to the mainstream, to everyone. One reason I think it's underhyped is that the tools are still very far from what you can do as a professional software engineer. And if you imagine that they're gonna get there, and I think there's no reason why they wouldn't — it'll take a few years — then everyone in the world is gonna be able to create things that would have taken a team of a hundred professional software engineers. That's just gonna massively open up opportunities for everyone. I think Replit is a great example of this, but there will also be cases beyond just building applications that this creates.

    9. ET

      By the way, just on that note: if you were going to Stanford or Harvard today, in 2025, just entering, would you major in computer science again, or just focus on building something?

    10. SP

      I think I would. I mean, I-

    11. ET

      Hmm

    12. SP

      ... I went to college starting in 2002, right after the dot-com bubble had burst, and there was a lot of pessimism. I remember my roommate, his parents had told him, "Don't study computer science," even though that was something he really liked. And I just did it because I liked it. I think the job market is definitely worse than it was a few years ago. At the same time, having the skills to understand the fundamentals of what's possible with algorithms and data structures actually really helps you in managing agents when you're using them. And I'm guessing it will continue to be a valuable skill in the future. I also think the other question is: what else are you gonna study?

    13. AM

      [laughs]

    14. SP

      And every single thing you could imagine-

    15. AM

      [laughs]

    16. SP

      ...there's an argument for why it's gonna be automated.

    17. AM

      Mm.

    18. SP

      So I think you might as well study what you enjoy, and I think this is as good as anything.

    19. AM

      Yeah. I think there's a lot to get excited by. One thing is maybe kind of random, but I get really fired up seeing mad-science experiments, like the DeepSeek-OCR that came out the other day. Did you see it? It's wild. Correct me if I'm wrong, 'cause I only looked at it briefly, but basically you can get a lot more economical with a context window if you have a screenshot of the text [laughs] instead of the fucking text.

    20. SP

      Yeah, I'm not the right person to be correcting you on that.

    21. AM

      [laughs]

    22. SP

      But there are definitely some really interesting things there.

    23. AM

      Yeah. I saw another thing on Hacker News the other day about text diffusion, where someone made a text diffusion model: instead of doing the usual denoising, he would take a single BERT instance, mask different words, and predict those different tokens. And so we have a lot of components, and I don't think people think a lot about that. We now have the base pre-trained models, all these RL reasoning models, the encoder-decoder models, the diffusion models. There are all these different things, and you can mix them in different ways.
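The BERT-style text-"diffusion" trick Amjad describes — start from a fully masked sequence and iteratively predict and unmask tokens, rather than denoising continuous noise — can be sketched as below. The predictor here is a stub lookup rather than a real masked language model, so this only illustrates the sampling loop, not the learning.

```python
import random

MASK = "[MASK]"

def stub_predict(tokens, i):
    # Stand-in for a masked-LM: a real BERT would condition on the
    # partially unmasked sequence; this toy just maps position -> word.
    vocab = ["the", "cat", "sat", "on", "mat"]
    return vocab[i % len(vocab)]

def iterative_unmask(length, seed=0):
    """Start fully masked, then unmask one random position per step."""
    rng = random.Random(seed)
    tokens = [MASK] * length
    masked = list(range(length))
    while masked:
        i = rng.choice(masked)               # pick a still-masked position
        tokens[i] = stub_predict(tokens, i)  # "denoise" that position
        masked.remove(i)
    return tokens

out = iterative_unmask(5)
```

The appeal of the recipe is that each step is just an ordinary masked-token prediction, so an off-the-shelf BERT-style encoder can, in principle, be repurposed as the denoiser without any new architecture.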

    24. SP

      Yeah.

    25. AM

      I feel like there isn't a lot of that.

    26. SP

      Yeah.

    27. AM

      And it'd be great if a new research company just comes out and is not trying to compete with OpenAI and things like that, but instead is just trying to discover how to put these different components together to create a new flavor of these models.

    28. SP

      Yeah. In crypto, they talk about composability, mixing primitives together; in AI, maybe there needs to be more experimentation.

    29. AM

      There was less playing around, I found.

    30. SP

      Yeah.

  12. 58:47–1:02:38

    Claude 4.5's Strange New Awareness

    1. SP

      Amjad, you've been into consciousness for a long time. Are you bullish that, via some of this AI work or scientific progress elsewhere, we'll make progress in understanding, in getting across this hard problem?

    2. AM

      You know, something happened recently which is interesting. Claude 4.5 seemed to become more aware of its context length: as it gets closer to the end of the context, it starts becoming more economical with tokens. It also looks like its awareness of when it's being red-teamed or in a test environment jumped significantly. So there's something happening there that's quite interesting. Now, in terms of the question of consciousness, it is still fundamentally not a scientific question, and we've sort of given up on trying to make it scientific. This is also the problem I talked about with all the energy going into LLMs: no one is really trying to think about the true nature of intelligence, the true nature of consciousness, and there are a lot of really core questions. One of my favorites is Roger Penrose's The Emperor's New Mind, where he wrote about how everyone in the philosophy-of-mind space, and perhaps the larger scientific ecosystem, started thinking about the brain in terms of a computer. In that book, he tried to show that it's fundamentally impossible for the brain to be a computer, because humans are able to do things that Turing machines cannot do, or fundamentally get stuck on, such as basic logic puzzles that we're able to detect but that there's no way to encode in a Turing machine. For example, "this statement is false," those old logic puzzles. Anyway, it's a complicated argument, but if you read that book, or many others, there's a core strain of arguments in the theory of mind about how computers are fundamentally different from human intelligence.
And so, yeah, I haven't really updated my thinking too much about that; I've been very busy. But I think there's a huge field of study there that is not being studied.

    3. SP

      If you were a freshman entering college today, would you study philosophy?

    4. AM

      I would do that. I would definitely study philosophy of mind, and I would probably go into neuroscience, because I think those are the core questions that are gonna become very important as AI continues to take over more of jobs and the economy and things like that.

    5. SP

      That's a great place to wrap. Amjad, Adam, thanks for coming on the podcast.

    6. AM

      Thank you.

    7. SP

      Thank you. [outro jingle]

Episode duration: 1:02:38
