
Marc Andreessen & Amjad Masad on “Good Enough” AI, AGI, and the End of Coding

Amjad Masad, founder and CEO of Replit, joins a16z’s Marc Andreessen and Erik Torenberg to discuss the new world of AI agents, the future of programming, and how software itself is beginning to build software. They trace the history of computing to the rise of AI agents that can now plan, reason, and code for hours without breaking, and explore how Replit is making it possible for anyone to create complex applications in natural language. Amjad explains how RL unlocked reasoning for modern models, why verification loops changed everything, whether LLMs are hitting diminishing returns, and if “good enough” AI might actually block progress toward true general intelligence.

00:00 Intro
00:37 Programming in Plain English
03:00 The Vision Behind Replit
05:15 From Machine Code to English Code
07:00 Building Apps with AI Agents
09:30 When the Agent Becomes the Programmer
11:00 Long-Horizon Reasoning and Coherence
13:45 Reinforcement Learning and Problem Solving
17:30 The Verification Loop and Multi-Agent Systems
21:15 Watching AI Work Like a Human Programmer
23:45 From Stochastic Parrots to Real Reasoning
26:00 Why Coding Is Advancing Faster Than Other Fields
30:15 Verifiable Domains: Math, Code, and Physics
33:45 The AGI Debate: Are We on Track?
37:45 Transfer Learning and the Limits of Human Intelligence
41:15 Functional AGI and Automating Labor
45:20 GPT-5, Diminishing Returns, and Lost “Humanity”
53:10 Creativity, Reasoning, and Finding Truth in AI
57:30 The Origins of Replit and Early Coding Days
01:03:00 Hacking His University and Getting Caught
01:08:00 The Redemption and Lessons Learned for the AI Age

Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Resources:
Follow Amjad on X: https://x.com/amasad
Follow Marc on X: https://x.com/pmarca
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Marc Andreessen (host) · Amjad Masad (guest) · Erik Torenberg (host)
Oct 23, 2025 · 1h 11m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–0:37

    Intro

    1. MA

      We're dealing with magic here that we, I think probably all would have thought was impossible five years ago-

    2. AM

      Yeah

    3. MA

      ... or certainly ten years ago. This is the most amazing technology ever, and it's moving really fast, and yet we're still like really disappointed. Like it's not moving fast enough-

    4. AM

      Mm-hmm

    5. MA

      ... and like, it's like maybe right on the verge of stalling out. We should both be like hyper excited, but also on the verge of like slitting our wrists-

    6. AM

      [laughs]

    7. MA

      ... because like, you know, it's-

    8. AM

      Yeah

    9. MA

      ... the gravy train is coming to an end.

    10. AM

      Right. It is faster, but it's not at computer speed, right?

    11. MA

      Right.

    12. AM

      You know, what, what we expect computer speed to be. It's sort of like watching a person work.

    13. MA

      It's like watching John Carmack on cocaine. [laughs]

    14. AM

      The world-- Okay. The world's, the, the world's best programmer on a stimulant.

    15. MA

      Yeah, on a stimulant. Yeah, that's right. [upbeat music]

  2. 0:37–3:00

    Programming in Plain English

    1. MA

      So let's start with, um, let's assume that I'm a sort of a novice programmer, so maybe I'm a student, um, uh, or maybe I'm just somebody, you know, I took a few coding classes and I've hacked around a little bit or like, I don't know, I do Excel macros- [laughs]

    2. AM

      Mm-hmm

    3. MA

      ... or something like that. But I'm like, not as, as well as I'm not like a master craftsman of coding. Um, and you know, people, somebody tells me about Replit and, and specifically AI, um, uh, AI and Replit. Like, what, what's my, what's my experience, uh, when, when I launch in with, with what Replit is today with AI?

    4. AM

      Yeah, I, I would, um, I, I think the experience of someone with no coding experience or some coding experience is largely the same when you go into Replit.

    5. MA

      Okay.

    6. AM

      The first thing we try to do is get all the nonsense away from like setting up development environment and all of that stuff, and just have you focus on your idea. So what do you wanna build? Do you wanna build a product? Do you wanna solve a problem? Do you wanna do a data vis-visualization? So the prompt box is really open for you. You can put in anything there. So let's say you wanna, you know, build a startup. Let-- You have an idea for a startup. I would s- I would start with like a paragraph-long kind of description of what I wanna build. Uh, the agent will read that. It will punch it out.

    7. MA

      You just, you t- you just type the, the-

    8. AM

      Just type it

    9. MA

      ... standard English.

    10. AM

      Standard English.

    11. MA

      In standard English, you just type it in.

    12. AM

      Yeah.

    13. MA

      "I wanna build a... I wanna sell, I wanna sell crepes." Uh, [laughs] "I wanna sell crepes online."

    14. AM

      Mm-hmm.

    15. MA

      So you just like-

    16. AM

      You can do that

    17. MA

      ... type in, "I wanna sell crepes online."

    18. AM

      You can, you can... It literally could be that-

    19. MA

      Okay

    20. AM

      ... four words or five words.

    21. MA

      Okay.

    22. AM

      Or it could be if you're, if you have a programming language you prefer or a stack you prefer, you could do that. But, uh, we actually prefer not, for you not to do that, because we're gonna pick the best thing for... We're gonna classify the best stack for that request.

    23. MA

      Right.

    24. AM

      So it's a... If it's a data app, we'll pick Python or Streamlit, whatever. If it's like a web app, we'll pick JavaScript and Postgres and things like that.

    25. MA

      Right.

    26. AM

      So you just type that.

    27. MA

      Or you can decide, you can decide, you can say, "And I wanna do it in-"

    28. AM

      Yeah.

    29. MA

      "I know Python," or, "I'm learning Python at school, and I wanna do it in Python."

    30. AM

      That's right. The, the-

  3. 3:00–5:15

    The Vision Behind Replit

    1. AM

      uh, everyone would want to build software.

    2. MA

      Right.

    3. AM

      And the thing that's kind of getting in, in people's way is this, all the, uh, what Fred Brooks called the accidental complexity of programming, right?

    4. MA

      Right.

    5. AM

      There's, like, essential complexity, which is like, how do I bring my startup to market, and how do I build a business, and all of that. Accidental complexity is what package manager do I use, and all of that stuff. We've been abstracting away that for so many years, so you can just, um... A- and the last thing we had to abstract away is code.

    6. MA

      Right.

    7. AM

      I had this realization last year, which is, I think we, you know, built an amazing platform, but the business is not performing, and the reason the business is not performing is that code is the bottleneck.

    8. MA

      Right.

    9. AM

      Like, yes, all the other stuff is important to solve, but syntax is still an issue. Like-

    10. MA

      Right

    11. AM

      ... you know, syntax is just an unnatural thing for people. So ultimately, English is the programming language.

    12. MA

      Right.

    13. AM

      I, I-

    14. MA

      But so just to... Does it work with other, other, uh, world languages other than English at this point?

    15. AM

      Yes.

    16. MA

      Does it-

    17. AM

      You can, you can write in Japanese, and we have a lot of users, especially Japanese users.

    18. MA

      Amazing. Okay. [laughs] That tends to be very popular. So does it support these days, like for the, does, does A- does AI support every language, or is it still, do you still have to do like custom work to craft a new, new language?

    19. AM

      No, most, most, uh, you know, uh, mainstream languages that have, like, 100 million plus people who speak it, AI is pretty good at it.

    20. MA

      Okay. Yeah.

    21. AM

      Yeah.

    22. MA

      Wow.

    23. AM

      So, uh, I, I, I did a bit of, a bit of historical research recently for so- for some reason. I just wanna just understand the moment we're in, and because it's such a special moment, it's, as I think it's important to contextualize it. And I, I, I read this quote from Grace Hopper. So Grace Hopper invented the compiler, as you know. Uh, at the time, people were, uh, you know, programming in machine code, and that's what programmers do.

    24. MA

      Yeah, of course.

    25. AM

      That's what the specialists do.

    26. MA

      Yes.

    27. AM

      And she said, "You know, specialists will always be the specialists. They have to learn the underlying machinery of computers. But I want to get to a world where people are programming English."

    28. MA

      Right.

    29. AM

      That's what she said. That's before Karpathy, right? That's like, you know, 75 years ago. Uh, and, and, and that's why she invented the compiler. And in her mind, like, C programming is English.

    30. MA

      Right.

  4. 5:15–7:00

    From Machine Code to English Code

    1. AM

      step.

    2. MA

      Right.

    3. AM

      Instead of typing syntax, you're actually typing thoughts.

    4. MA

      Right.

    5. AM

      You know, which is what we ultimately want.

    6. MA

      And the machine writes the code.

    7. AM

      And the machine writes the code.

    8. MA

      Right. Right.

    9. AM

      Yeah.

    10. MA

      Um, yeah, I remember it, too. You're, you're probably not old enough, uh, to remember-

    11. AM

      Mm-hmm

    12. MA

      ... but I re- I remember when, uh, when I was a kid, it was, um, you know, there, there were, were higher level languages, you know, uh, by the '70s, like, like Basic and so forth, and Fortran, and C.

    13. AM

      Mm-hmm.

    14. MA

      But, um, uh, there was still... You know, you still would run into people who were doing assembly programming-

    15. AM

      Right

    16. MA

      ... or assembly language. Which by the way, you still do, you know, like game companies-

    17. AM

      Yeah

    18. MA

      ... or whatever, still do assembly to, to, to get-

    19. AM

      And they were hating on the kids that were doing Basic. [laughs]

    20. MA

      Oh, well, so the, so the assembly people were hating on kids doing Basic, but there were also older coders who hated on the assembly programmers for doing assembly and not, and not, and not-

    21. AM

      Not punch cards? [laughs]

    22. MA

      ... and not do-- Oh, no, no, no, not doing direct machine code.

    23. AM

      Right. Right.

    24. MA

      Not doing direct zero and one machine code.

    25. AM

      Yeah.

    26. MA

      'Cause, 'cause asse- 'cause assembly la- assem-

    27. AM

      Yeah

    28. MA

      ... so if people don't know, assembly language is sort of this very low level programming language that sort of compiles to actual, actual machine code.

    29. AM

      Yeah.

    30. MA

      And if you, and if it's, it's, it's incomprehensible gibberish to most programmers-

  5. 7:00–9:30

    Building Apps with AI Agents

    1. MA

      It's, like, just, uh, you know, people never change.

    2. AM

      Okay. Got it.

    3. MA

      Okay, so you s- you're, you're ty- you're typing English, "I wanna sell crepes online, I wanna do this, I wanna have a t-shirt," whatever the, the business is.

    4. AM

      Yeah.

    5. MA

      Okay, what, what happens then?

    6. AM

      Yeah, and then, uh, uh, Replit agent will show you what it understood.

    7. MA

      Mm-hmm.

    8. AM

      So it's trying to build, um, a common understanding be-between you and it, and I think there's a lot of things we can do better there in terms of UI, but for now it'll show you a list of tasks.

    9. MA

      Mm-hmm.

    10. AM

      It'll tell you, "I'm gonna go set up a database," because you need to store your data somewhere. Uh, we need to set up Shopify or Stripe because we need to accept payments. Uh, and then it shows you this list, and gives you two options initially. Do you wanna start with a design so that we can iterate back and forth to get locked d- design down, or do you wanna build a full thing?

    11. MA

      Mm-hmm.

    12. AM

      Hey, if you wanna build a full thing, we'll go for 20, 30, 40 minutes-

    13. MA

      Mm-hmm

    14. AM

      ... uh, and a- and i- it be like, the agent will tell you, "Go-- Here, install the app."

    15. MA

      Mm-hmm.

    16. AM

      Uh, "I'm gonna go set up the database, do the migrations, write the SQL, you know, build the site. I'm gonna also test it." So this is a recent innovation we did with, um, Agent 3, is that after it writes the software, it spins up a browser and goes around and tests in the browser, and then any issue, it kind of iterates, kind of goes and fix the code. So I'll spend 20, 30 minutes building that. I'll send you a notification. It'll tell you the app is ready.

    17. MA

      Mm-hmm.

    18. AM

      So you can test it on your phone. You can ba- go back to your computer. You'll see, maybe you'll find an, a, a bug or an issue. You'll describe it to the agent. It'll say, "Hey, it's not exactly doing what I expected." Uh, or if it's perfect and, and you're ready to go, and that's it, you know, 20 minutes la-- By the way, there's a lot of examples where people just get their idea in 20, 30 minutes, which is amazing. Um, you just hit Publish.

    19. MA

      Mm-hmm.

    20. AM

      You hit, uh, you hit Publish. Um, it, uh, couple clicks, you'll be up in the cloud. We'll set up a, a virtual machine in the cloud. The database is deployed. Everything's done, and now you have a production database.

    21. MA

      Mm-hmm.

    22. AM

      So think about the steps needed just two or three years ago in order to get to that step. You have to set up your local development environment. You have to sign up for an AWS account. You have to provision the databases, the virtual machines. You have to create the entire pipe, deployment pipeline. All of that is done for you, and it just, you know, a kid can do it, a layperson can do it. If you're a programmer and, uh, you're curious about what the agent did, the cool thing about Replit, because we have this history of being an IDE, you can peel the layers.

    23. MA

      Mm-hmm.

    24. AM

      You can open the file tree and you c- look at the files. You can open Git. You can push to GitHub. You can connect it to your editor if you want. You can open it in Emacs. So the cool thing about Replit,

  6. 9:30–11:00

    When the Agent Becomes the Programmer

    1. AM

      yes, it is a vibe coding platform that abstracts away all the complexities, but all the layers are there for you to look at.

    2. MA

      Right. So l- let's go back, let's go back to, um, th- that was great. L- but l- let's go back to you said it, it, it gives you, the, the a- the agent gives you, you s- you say, "I've got my idea."

    3. AM

      Mm-hmm.

    4. MA

      You plug it in, and it says, it gives you this list of things.

    5. AM

      Mm-hmm.

    6. MA

      And then you s- and then when you describe it, you said, "I'm gonna do this, I'm gonna do that."

    7. AM

      Mm-hmm.

    8. MA

      The, the I there in that case was the, the agent-

    9. AM

      Agent. That's right

    10. MA

      ... as opposed to the user.

    11. AM

      Yes.

    12. MA

      And so the, the agent lists the set of things that it's going to do, and then the agent actually does those things.

    13. AM

      Agent does those things.

    14. MA

      Okay.

    15. AM

      So yeah, that, that, that's a-

    16. MA

      Yeah

    17. AM

      ... that's a, that's a very important point.

    18. MA

      Right.

    19. AM

      When we did this shift, we hadn't realized internally at Replit how much the actual user stopped being the human user, and it's actually the agent programmer.

    20. MA

      Right.

    21. AM

      So one, one really, uh, funny thing happened is we had servers in Asia, uh, and we-- The reason we had servers in Asia, because we wanted our Indian or, you know, Japanese users to be, to have a, you know, a shorter, uh, time to the servers. Uh, when we launched Agent, their experience got significantly worse, and we're like, "What happened? Like, it's supposed to be faster." Well, it turns out it's worse. It's because the AIs are sitting in, uh, in the United States, and so the, the programmer's actually in the [chuckles] United States. It's, you're sending the request to the programmer, and the programmer is interfacing with a machine across the world. And so yes, suddenly the agent is the programmer.

    22. MA

      Okay. So like the, the term- term, you know, n- n- new terminology, agent is a software program that is basically using the rest of the system-

    23. AM

      That's right

    24. MA

      ... a- as if it were a, as if it were a human user, but it's not. It's a, it's a bot.

    25. AM

      That's right.

    26. MA

      Right.

    27. AM

      It has

  7. 11:00–13:45

    Long-Horizon Reasoning and Coherence

    1. AM

      access to tools such as write a file, edit a file, delete a file, uh, uh, search the package index, install a package-

    2. MA

      Mm-hmm

    3. AM

      ... uh, provision a database, provision object sto- object storage. It is a programmer that has the tools and interface. It has its sort of an interface-

    4. MA

      Right

    5. AM

      ... that, that is very similar to a human programmer.
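A tool set like the one Amjad lists (write a file, install a package, provision a database) is typically exposed to the model as a registry of named handlers, in the style of LLM function-calling. A minimal sketch; the tool names echo the ones he mentions, but this registry and dispatcher are illustrative, not Replit's actual interface:

```python
# Illustrative agent tool registry: each tool is a name plus a handler,
# and the model emits calls like {"tool": ..., "args": {...}}.
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register a handler under the name the agent uses to call it."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("write_file")
def write_file(path: str, contents: str) -> str:
    # A real system would touch the workspace filesystem here.
    return f"wrote {len(contents)} bytes to {path}"

@tool("install_package")
def install_package(name: str) -> str:
    return f"installed {name}"

@tool("provision_database")
def provision_database(engine: str = "postgres") -> str:
    return f"provisioned {engine} database"

def dispatch(call: dict) -> str:
    """Route a model-emitted tool call to its handler and return the
    observation string that goes back into the agent's context."""
    return TOOLS[call["tool"]](**call.get("args", {}))
```

The string each handler returns is exactly the "environment input" that lands back in the agent's context for its next reasoning step.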

    6. MA

      And then, um, you know, the, uh, uh, we'll talk more about how this all works, but a, a debate inside the AI industry, um, is with the, these, was kind of this, the, you know, this idea now of having agents that do things on your behalf and then go out, you know, go, go out and kind of accomplish missions. Um, th- there's this, you know, kind of debate, which is, okay, how, like obviously, you know, it's a big deal even to have an AI agent that can do relatively simple things. To do complex things, of course, is, you know, one of the great technical challenges of the last 80 years, you know, to, to, to do that. And then there's this sort of this question of like, can the agent go out and run and operate on its own for five minutes, you know, for, for 15 minutes, for an hour, for eight hours? And, and w- meaning like how, sort of like how long does it maintain coherence? Like how long does it actually like stay in full control of its, [chuckles] of its faculties and not kind of spin out? Uh, 'cause at least the earl- early agents or the, the early AIs, if, if you set them off to do this, they might be able to run for two or three minutes, and then they would, they would start to get confused and go down rabbit holes and-

    7. AM

      Yeah

    8. MA

      ... you know, kind of s- kind of spin out. Um, more, more recently, uh, more recently, um, uh, you know, we've seen that, that, that, that agents can run a lot longer and, and, and do more complex tasks. Like where are we on the curve of agents being able to run for how long and for what complexity of tasks before-

    9. AM

      Yeah

    10. MA

      ... before they break?

    11. AM

      That's, uh, that's absolutely the, the, I think the main metric we're looking at, even back in 2023, you know, I've had the idea for software agents, you know, four or five years ago now. The problem every time we attempt them, the, the problem of coherence.

    12. MA

      Mm-hmm.

    13. AM

      You know, they'll, they'll, they'll go on for a minute or two, and then they'll just, you know, they just compound in errors in a way that they just can't recover. Um-

    14. MA

      And you can actually see it, right? 'Cause they actually, they actually, if you wa- watch them operate, th- they get increasingly confused and then-

    15. AM

      Yes

    16. MA

      ... you know, [chuckles] maybe even deranged.

    17. AM

      Yeah, they get very deranged.

    18. MA

      Yeah.

    19. AM

      And they g- they go into very weird areas, and sometimes they start speaking Chinese and do-

    20. MA

      Right, right. [chuckles]

    21. AM

      ... doing really weird things.

    22. MA

      Right.

    23. AM

      And um, but, uh, I would say sometime around last year, we maybe crossed a three, four, five-minute mark.

    24. MA

      [chuckles]

    25. AM

      And it felt to us that, okay, we're on a path where long re- you know, long-horizon reasoning is getting solved.

    26. MA

      Mm-hmm.

    27. AM

      Uh, and so we made, we made a bet, and I te- I tell my team-

    28. MA

      So, so sorry, long-horizon reasoning meaning, reasoning meaning like dealing in like facts and logic-

    29. AM

      Mm-hmm

    30. MA

      ... um, i-in a, in a sort of complex way, and then long horizon being over a long period of time-

  8. 13:45–17:30

    Reinforcement Learning and Problem Solving

    1. AM

      talking to itself. It's like, "Oh, now I need to go set up a database. Well, what, what kind of tool do I have? Oh, there's a tool here that says Postgres. Okay, let me try using that. Okay, I used that, I got feedback. Let me look at the feedback and read it," and it'll read the feedback. And so the, the, that prompt box or context is where both the user input, the environment input, and the internal thoughts of the machine are all within. It's sort of like a program memory in, in memory space. And so reasoning over that was the challenge for a long time. That's when AIs just like went off track, and now they're able to kind of think through this entire thing and s- and maintain coherence. And there's, uh, there's now techniques around, uh, compression of context.

    2. MA

      Mm-hmm.

    3. AM

      So there's still, uh, the context length is still a problem, right? So I would say LLMs today, you know, they're marketed as a million, uh, token, uh, length, which is like a million words almost. Uh, in reality it's about two hundred thousand, and then they start to struggle. So we do a lot of, uh, you know, we stop, we compress the memory. So if a memory-- if, if a portion of the memory is saying that I'm getting all the logs from the database, you can summarize, you know, paragraphs of logs with one statement: the database is set up.

    4. MA

      Mm-hmm.

    5. AM

      That's it, right?

    6. MA

      Right.

    7. AM

      And so every once in a while we'll compress the context so that we make sure we maintain coherence. So there's a lot of innovation that happened outside of the foundation models as well in order to, to enable that long context coherence.
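The compression step he describes, collapsing paragraphs of logs into a single statement so the context stays under budget, can be sketched as below. The budget, the word-count tokenizer, and the precomputed summaries are all simplifying assumptions; a production agent would use a real tokenizer and have a model write the summaries:

```python
# Illustrative context compression for a long-running agent: when the
# transcript of user input, environment output, and internal thoughts
# exceeds a token budget, verbose environment entries are replaced by
# their one-line summaries (e.g. pages of Postgres logs -> "database set up").
def rough_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

def compress(context: list[dict], budget: int = 50) -> list[dict]:
    total = sum(rough_tokens(entry["text"]) for entry in context)
    out = []
    for entry in context:
        if total <= budget:
            out.append(entry)  # already under budget; keep verbatim
            continue
        if entry["role"] == "env" and "summary" in entry:
            # Swap verbose environment output for its summary.
            total -= rough_tokens(entry["text"]) - rough_tokens(entry["summary"])
            out.append({"role": "env", "text": entry["summary"]})
        else:
            out.append(entry)  # user requests and agent thoughts survive
    return out
```

Running `compress` on a transcript where the environment entry holds a hundred words of database logs leaves the user request and the agent's thought intact and keeps only "database set up" in their place.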

    8. MA

      And what was the, what was the key technical breakthrough at the, in the foundation models that made this possible, do you think?

    9. AM

      I think it's RL.

    10. MA

      Okay.

    11. AM

      I think it's, uh, reinforcement learning. So the way pre-training works is, you know, uh, the, uh, pre-training is a, uh, the first step of training a large language model. It reads a piece of text, it covers the last word and tries to guess it. That's how it's trained. That doesn't really imply long context reasoning. It, it, you know, it, it, it, it turns out to be very, very effective. It can learn language that way. But the reason we weren't able to move past that limitation is that that modality of training just wasn't good enough. And what you want is you want a type of problem-solving over a, uh, over long context. So what reinforcement learning, uh, uh, especially from code execution gave us is the ability to, for the machine to-- f-for the LLM to roll out what we call trajectories in AI. So a trajectory is a, uh, step-by-step reasoning chain in order to reach a solution. So, uh, the way, uh, as I understand reinforcement learning works is they put the LLM in a programming environment like Replit and say, "Hey, here's a pr- here's a, a code base, here's a bug in the code base, and we want you to solve it." Um, now the human trainer already knows what the solution would look like. So we have a pull request that we have on GitHub, so we know exactly, or we have a unit test that we can run and verify the solution.

    12. MA

      Right.

    13. AM

      So what it does is it rolls out a lot of different trajectories, those-- they sample the model, and maybe one of those trajectories will reach, and a lot of them will just go, go off, off track, but one of them will reach the solution-

    14. MA

      Right

    15. AM

      ... by solving the bug.

    16. MA

      Right.

    17. AM

      And it reinforces on that. So that, that gets a reward, and the model gets trained that, okay, you know, this is how you solve these type of problems.

    18. MA

      Right.

    19. AM

      So that's how we were able to extend these reasoning chains.
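That rollout-and-reinforce scheme can be caricatured in a few lines: sample several candidate trajectories, score each with a verifier (here, a unit test that encodes the known-good solution), and reward the ones that pass. The toy bug and the fixed list of candidate patches are invented for illustration; a real setup samples full reasoning chains from the model and uses the rewards to update its weights:

```python
import random

def verifier(patch: str) -> bool:
    """Unit test standing in for the known-good solution check:
    the codebase's add() is supposed to add."""
    namespace: dict = {}
    exec(patch, namespace)
    return namespace["add"](2, 3) == 5

# Stand-in "policy": in reality these would be trajectories sampled
# from the model, most of which go off track.
CANDIDATES = [
    "def add(a, b):\n    return a - b",  # still buggy
    "def add(a, b):\n    return a * b",  # wrong fix
    "def add(a, b):\n    return a + b",  # correct fix
]

def rollout_and_reward(n: int = 8, seed: int = 0) -> list[tuple[str, int]]:
    """Sample n trajectories; reward 1 if the verifier passes, else 0."""
    rng = random.Random(seed)
    return [(patch, int(verifier(patch))) for patch in rng.choices(CANDIDATES, k=n)]
```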

    20. MA

      Got it. And, and how, uh, so two-part question is how, how, how good, how good are the models now at long, long, long reasoning? Uh, and h- and I would say, and how do we know? Like how, how is that established?

    21. AM

      Um, there is a nonprofit called METR,

  9. 17:30–21:15

    The Verification Loop and Multi-Agent Systems

    1. AM

      um, that is, um, measuring, uh, u-useful t- It has a benchmark to measure, uh, how long a model runs w-while maintaining coherence and doing useful, useful things, whether-

    2. MA

      Mm-hmm

    3. AM

      ... it's programming or other ta- benchmark tasks that they've done. Uh, and they put up a paper, I think, uh, late last year that said every seven months-

    4. MA

      Mm-hmm

    5. AM

      ... uh, the n- the minutes that a model can run is doubling.

    6. MA

      Mm-hmm.

    7. AM

      So you go from two minutes to, you know, four minutes in seven months. I think they vastly underestimated that.

    8. MA

      Is that right?

    9. AM

      Vastly.

    10. MA

      So it's doubling, it's doubling more often than seven months.

    11. AM

      We, so Agent 3, we measure that, uh, you know, very closely. Uh, and we measure that in real tasks from real users, so we're not doing benchmarking. We're actually doing A/B tests, and we're looking at the data that how users are successful or not.

    12. MA

      Right.

    13. AM

      For us the, the absolute sign of success is you made an app and you published it, because when you publish it, you're paying extra money. You're saying, "This app is economically useful, I'm gonna publish it."

    14. MA

      Right.

    15. AM

      So that's as clear cut as possible.

    16. MA

      Right.

    17. AM

      And so what we're seeing is in Agent 1, the agent can run for two minutes-

    18. MA

      Mm-hmm

    19. AM

      ... and then, and then perhaps struggle. Agent 2 came out in February. It ran for twenty minutes.

    20. MA

      Mm-hmm.

    21. AM

      Agent 3, two hundred minutes.

    22. MA

      Okay. Right.
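As a rough check on those numbers: going from 2 minutes (Agent 1) to 200 minutes (Agent 3) is log2(100), about 6.6 doublings. At the seven-months-per-doubling rate reported in the paper Amjad cites, that jump would take nearly four years, which is the sense in which he calls that estimate vastly low. The arithmetic, as a sketch:

```python
import math

# Agent 1 ran ~2 minutes; Agent 3 runs ~200 minutes (figures from above).
doublings = math.log2(200 / 2)   # how many doublings that growth represents
metr_months = doublings * 7      # time implied by a 7-month doubling period
print(round(doublings, 2), round(metr_months, 1))  # prints: 6.64 46.5
```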

    23. AM

      Two hundred min-- And so some users are pushing it to like twelve hours and things like that. I'm less confident that it is as good in, when it goes to these stratospheres, but at like two, three hours timeline, it is really, it's, it's, it's, it's insanely good. And, and the main innovation outside of the models is a verification loop.

    24. MA

      Mm-hmm.

    25. AM

      Uh, I remember reading, um, a research paper from NVIDIA. So what NVIDIA did is they're trying to, uh, write, um, GPU kernels, uh, using DeepSeek, and that was, like, perhaps seven months ago-

    26. MA

      Mm-hmm

    27. AM

      ... when DeepSeek came out. And what they found is that if we add a verifier in the loop, if we can run the kernel and verify it's working, we're able to run DeepSeek for, like, twenty minutes.

    28. MA

      Mm-hmm.

    29. AM

      And it, it was generating actually optimized kernels.

    30. MA

      Mm-hmm. Right.

  10. 21:15–23:45

    Watching AI Work Like a Human Programmer

    1. AM

      It's not at computer speed, right?

    2. MA

      Right.

    3. AM

      At w- what we expect computer speed to be.

    4. MA

      It's like watching a per-- like if you watch the, if you, if it's describing what it's doing, it's sort of like watching a person work.

    5. AM

      It's like watching John Carmack on cocaine work. [laughing]

    6. MA

      The world-- Okay. The world's, so, so say that the, the world's best programmer.

    7. AM

      [laughs] Yeah.

    8. MA

      The world's best programmer on a stim-

    9. AM

      Yeah

    10. MA

      ... on a stimulant.

    11. AM

      On a stimulant, yeah, that's right.

    12. MA

      Okay.

    13. AM

      Uh, and so-

    14. MA

      Working, working for you.

    15. AM

      [laughs]

    16. MA

      Working for you.

    17. AM

      Yeah. So the-

    18. MA

      Yeah

    19. AM

      ... it's very fast.

    20. MA

      Yeah.

    21. AM

      And you can see the, uh, file diffs running through, but every once in a while it'll stop, and it'll start thinking. It'll show you the reasoning.

    22. MA

      Yeah.

    23. AM

      It's like, "Oh, I did this, and I did this. Am I on the right track?" It kind of really tries to reflect.

    24. MA

      Right.

    25. AM

      Uh, and then it might review its work and decide the next step, or it might kick into the testing agent or, you know. So, so you're seeing it do all of that, and every once in a while it calls a tool. For example, it stops and says, "Well, we ran into an issue. You know, Postgres, um, fifteen is not, um, uh, compatible with this, you know, database ORM package that I, that I have."

    26. MA

      Mm-hmm.

    27. AM

      "Um, okay, this is a problem I haven't seen before. I'm gonna go search the web."

    28. MA

      Mm-hmm.

    29. AM

      So it has a web search tool.

    30. MA

      Right.

  11. 23:45–26:00

    From Stochastic Parrots to Real Reasoning

    1. MA

      there are three. Um, so, um, so it, it, it was this thing, and so people were-- and there was even this term that was being used, kind of the, the, the slur that was being used at the time was stochastic parrot.

    2. AM

      Yeah.

    3. MA

      Right.

    4. AM

      I was thinking clanker. [laughing]

    5. MA

      Well, well, clanker is the, is the new slur. Clank-clanker, clanker is just the full-on racial slur aga-

    6. AM

      [laughs]

    7. MA

      ... against AI as a species. Um, but the, the technical critique was the so-called stochastic parrot. Stochastic means random.

    8. AM

      Yeah.

    9. MA

      Uh, so sort of random parrot, mean-meaning basically that this thing was sort of a, the large language models were like a mira-- they were like a mirage-

    10. AM

      That's right

    11. MA

      ... where they were, like, repeating back to you things that they thought that you wanted to hear, but they didn't-

    12. AM

      And in a way, it's true in, uh, in the pure pre-training LLM world.

    13. MA

      Right, for the v- for the very basic layer. And, but then what happened is, as, as you said, over the last year or something, there, there was this layering in of, of r- of reinforcement learning, and then, and, but the key to-

    14. AM

      It's not new, crucially. It's like-

    15. MA

      Okay, go ahead

    16. AM

      ... it's AlphaGo, right?

    17. MA

      Right.

    18. AM

      So, uh-

    19. MA

      Describe, so describe that for a second.

    20. AM

      Yeah. So we, we had this breakthrough before in, uh, twenty fifteen was the AlphaGo breakthrough, I think, twenty fifteen, twenty sixteen, where it is a merging of sort of, uh, you know, the, the, the, you would know a lot better than me, the old AI debate between the connectionists, uh, the, the, the people who, who thinks neural networks are the true sort of way of doing AI, and the s-symbolic systems, I think-

    21. MA

      Right

    22. AM

      ... or, like, the people that think that, you know, discrete reasonings or if statements and knowledge bases, whatever-

    23. MA

      Right

    24. AM

      ... this is the way to go. And so there was, there was a merging of these two worlds, where the way AlphaGo worked is it had a neural network, but it had a Monte Carlo tree search algorithm on top of that. So the neural network would generate, uh, would, would, like, uh, generate a list of potential moves. Uh, and then you had a more discrete algorithm sort those moves and find the best based on just, uh, tree search, based on just, uh, trying to verify. Again, this is sort of a verifier in the loop, trying to verify which move might yield the best based on more classical way of doing algorithms. Um, and so that, that's a resurgence of, of that movement, where we have this amazing generative, uh, neural network, that is the, uh, the LLM, and now let's layer on more discrete ways of trying to verify whether it's doing the right thing or not, and let's put that in a training loop. And once you do that, the LLM will start gaining new capabilities,
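[Editor's note: The generate-then-verify pattern described here (a neural network proposes moves, a discrete search verifies them) can be illustrated on a toy game. In this deliberately simplified sketch the "policy" is just a legal-move generator and the verifier is exhaustive lookahead on a small Nim-like game (take 1-3 stones; whoever takes the last stone wins), standing in for the neural network and Monte Carlo tree search respectively.]

```python
# Propose-then-verify, AlphaGo-style, on a toy Nim-like game.
def propose_moves(stones):
    # Stand-in for the neural network's policy: list candidate moves.
    return [m for m in (1, 2, 3) if m <= stones]

def wins(stones):
    # Discrete verifier: exhaustive tree search. True if the player to
    # move can force a win from this position.
    return any(not wins(stones - m) for m in propose_moves(stones))

def best_move(stones):
    # Keep only the proposals the verifier confirms leave the opponent
    # in a losing position; fall back to any legal move otherwise.
    verified = [m for m in propose_moves(stones) if not wins(stones - m)]
    return verified[0] if verified else propose_moves(stones)[0]
```

In AlphaGo the verifier is Monte Carlo tree search rather than exhaustive search, and the verified outcomes are fed back into training, which is the "verifier in the training loop" point made above.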

  12. 26:00–30:15

    Why Coding Is Advancing Faster Than Other Fields

    1. AM

      such as, uh, reasoning o-over math and code and things like that.

    2. MA

      Exactly, right. Okay, and then that's great. And then, and then the, the key thing there, though, for, for RL to work, for LLMs to reason, the, the key is that it be a, a problem statement that there is a def-defined and verifiable answer.

    3. AM

      That's right.

    4. MA

      Is that right? And so, and, and, and you might think about this as, like, uh, let's give a bunch of examples. Like in medicine, this might be, like, um, you know, a diagnosis that, like, a panel of human doctors agrees with.

    5. AM

      Mm-hmm.

    6. MA

      Um, or, or, or by the way, or a diagnosis that actually, you know, solves the condition. Um, in law, this would be a, um, you know, cor-- a, a, a argument that in front of a jury actually results in an acquittal.

    7. AM

      Mm-hmm.

    8. MA

      Uh, or, or something like that. Um, in, um, math, it's an equation that actually solves properly. Uh, in physics, it's a result that actually works in the real world.

    9. AM

      Mm-hmm.

    10. MA

      I don't know, in civil engineering, it's a bridge that doesn't collapse, right? So, so, so there, there, there's always some, some test of correctness-

    11. AM

      Caveat is that-

    12. MA

      Okay, go ahead

    13. AM

      ... the first two do not work very well-

    14. MA

      Okay

    15. AM

      ... just yet.

    16. MA

      Okay.

    17. AM

      Like the, the, like the, the, I would say, uh, law and healthcare, they're still a little too squishy, a little too soft.

    18. MA

      Okay.

    19. AM

      It's unlike math-

    20. MA

      Okay

    21. AM

      ... or code. Like the way that they're training on math, they're using this, uh, sort of like a programming language, uh, a provable language called Lean for proofs, right? So you can run a Lean statement. You can run computer code. Uh, perhaps you can run a physics simulation or civil engineering, uh, sort of physics simulation, but, uh, you can't run a diagnosis.
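[Editor's note: The Lean workflow referenced here is worth making concrete: a theorem statement only compiles if the proof is valid, so the checker itself supplies the binary pass/fail signal RL training needs. A minimal example in core Lean 4 (no Mathlib assumed):]

```lean
-- The checker either accepts this proof or rejects it; there is no
-- "squishy" middle ground, which is what makes math RL-friendly.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```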

    22. MA

      Okay.

    23. AM

      So, uh, I would say that-

    24. MA

      But you could verify it with human answers or n- or not.

    25. AM

      Yeah. So that, that's a more-

    26. MA

      Or-

    27. AM

      ... RLHF in a way.

    28. MA

      Okay.

    29. AM

      So it is not the like sort of autonomous RL train-

    30. MA

      Okay

  13. 30:15–33:45

    Verifiable Domains: Math, Code, and Physics

    1. MA

      So squ-squ-- So softer domains, meaning like domains in which it's har- it's harder, harder or even impossible to actually verify correctness of, of result-

    2. AM

      Yeah, like-

    3. MA

      ... in a sort of a deterministic, factual, grounded-

    4. AM

      Yeah

    5. MA

      ... non-controversial way.

    6. AM

      Like if you have a, a chronic disease, you could, you could have, you know, you have, uh, POTS or, uh, you know, whatever, EDS syndrome or, y- And, and they're all, they're all clusters, and it's-

    7. MA

      Right

    8. AM

      ... because it, it is the domain of abstraction. It is not as concrete as code and math and things like that.

    9. MA

      Right.

    10. AM

      So I think there's still a long ways to go there.

    11. MA

      Right. So sort of the more concrete the problem, like it's the concreteness of the problem that is the key variable, not the difficulty of the problem. Would that be a way to think about it?

    12. AM

      Ye-yeah, I think the, the, uh, concreteness i-in a sense of can you get a true or false ver-verifiable output.
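[Editor's note: The "true or false verifiable output" being discussed is easiest to see for code: execute a candidate solution against tests and the reward is binary. A minimal sketch; the `solve` function convention is invented for illustration, and real RL-on-code pipelines sandbox execution rather than calling `exec` directly.]

```python
# Binary verifier for a candidate program: run it against unit tests
# and reduce the outcome to True/False, the reward signal RL can use.
def verify(candidate_src, test_cases):
    namespace = {}
    try:
        exec(candidate_src, namespace)       # load the candidate solution
        solve = namespace["solve"]           # assumed entry point (illustrative)
        return all(solve(x) == y for x, y in test_cases)
    except Exception:
        return False                         # any crash or bad output is a failure

reward = verify("def solve(x):\n    return x * x", [(2, 4), (3, 9)])
```

A diagnosis or a legal argument has no executable check like this, which is the "too squishy, too soft" point above.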

    13. MA

      Right. But like in any domain, in any domain of, of human effort in which there's a verifiable answer, we should expect extremely rapid progress.

    14. AM

      Yes.

    15. MA

      Right. Okay.

    16. AM

      Yes, absolutely, and I, I think that's what we're seeing.

    17. MA

      Right. And that, and that for sure includes math. That for sure includes physics, for sure includes chemistry, for sure includes large areas of code.

    18. AM

      That's right.

    19. MA

      Right. What, what else does that include, do you think?

    20. AM

      Bio, like-

    21. MA

      Yeah. Yeah

    22. AM

      ... we're seeing with protein folding.

    23. MA

      Like genomic, genomic, yeah, yeah. Okay.

    24. AM

      Genomic.

    25. MA

      Protein folding, right.

    26. AM

      Yeah, yeah.

    27. MA

      Okay.

    28. AM

      Things like that. I think some, some, uh, uh, areas of robotics.

    29. MA

      Right.

    30. AM

      Um, uh, there's a clear outcome.

  14. 33:45–37:45

    The AGI Debate: Are We on Track?

    1. MA

      And like, you know, we should both be like hyper excited, but also on the verge of like slitting our wrists-

    2. AM

      [laughs]

    3. MA

      ... 'cause like, you know-

    4. AM

      Yeah

    5. MA

      ... the gravy train is coming to an end.

    6. AM

      Right.

    7. MA

      And, and I always wonder, it's like, you know, on the one hand, it's like, okay, like, you know, not all, I don't know, ladders go to the moon. Like, just 'cause something, you know, looks like it works or, you know, doesn't mean it's gonna, you know, be able to sc- you're gonna be able to scale it up and have it work, you know, to the fullest extent. Um, uh, you know, so like, it, it's important to like recognize practical limits and to not just extrapolate everything to infinity. Um, on the other hand, like, you know, we're dealing with magic here that we, I think probably all would've thought was impossible five years ago-

    8. AM

      Yeah

    9. MA

      ... or certainly ten years ago. Like I, I didn't, you know, look, I, I, you know, I got my CS degree in the late '80s, early '90s. I, I never th-- I didn't think I would live to see any of this.

    10. AM

      Yes.

    11. MA

      Right? Like, this is just amazing that this is actually happening in, in, in my lifetime. Um-

    12. AM

      But, but, but there's a huge bet on AGI, right? Like, whether it's the foundation models, uh, I think, you know, now the entire US, uh, economy is sort of a [laughs] a bet on AGI. And, and there are crucial questions to ask whether are we on track to AGI or not.

    13. MA

      Right.

    14. AM

      Because there are some ways that I can tell you it doesn't seem like we're on track to AGI because we, uh, because there doesn't seem to be transfer learning across these domains that are, that are, you know, significant, right?

    15. MA

      Right.

    16. AM

      So if we get a lot better at code, uh, we're not immediately getting better at like generalized reasoning. We need to go also fi-- you know, get training data and create RL environment for bio or chemistry or physics or math-

    17. MA

      Right

    18. AM

      ... or law or... So, so, and, and this, this has been the sort of point of discussion now in the AI community after the, uh, Dwarkesh and Richard Sutton, uh, interview where, uh, you know, Richard Sutton kind of poured this cold water on the, um, uh, on the bitter lesson. So everyone was using this, uh, essay that he wrote called "The Bitter Lesson." The idea is that there are, um, infinitely scalable ways of, uh, doing, uh, uh, AI research. And, a-and, and, and, and any time you can pour more compute and more data and get more performance out, you're just, y-you know, that's the ultimate way of getting to AGI. And some people inter-- you know, interpreted that interview that perhaps he's doubtful that even, we're even on a, on a bitter, uh, lesson path here.

    19. MA

      Right.

    20. AM

      And perhaps the current training regime is actually very much the opposite in which we, we are so dependent on human data and human annotation and, and all of that stuff.

    21. MA

      Right.

    22. AM

      So I think the, the... I, I agree with you. I mean, as a company, we're, we're [laughs] excited about where things are headed, but, but there's, there's a question of like, are we on track to AGI or not? And-

    23. MA

      Right. Right

    24. AM

      ... be curious what you think.

    25. MA

      So, uh, so a-and, you know, Ilya, I think, you know, Ilya Sutskever makes a, makes a specific form of this argument, which is basically like we're, we're just literally running out of training data.

    26. AM

      It's the fossil fuel argument.

    27. MA

      Right.

    28. AM

      Yeah.

    29. MA

      Like we've, we've, we've slurped all the training da- We-- Fundamentally, we've slurped all the data off the internet. That is where almost all the data is at this point. There's a little bit more data that's in like, you know, private and dark pools somewhere that we're gonna go get, but like-

    30. AM

      Right

  15. 37:45–41:15

    Transfer Learning and the Limits of Human Intelligence

    1. MA

      It, well-

    2. AM

      [laughs]

    3. MA

      At one, at one point. At one point. At one point. It, it-- Let's get, yeah, let's say e-even if, even if he's a brilliant-- Well, this is, so this is the thing, like what does that mean? Like, uh, should a brilliant economist be able to extrapolate, you know-

    4. AM

      Yeah

    5. MA

      ... the, the, the internet is a, is a good question. But, um, th- but the point being like even if he is a br- you know, or take anyb- take, take anybody... Oh, by the way, or like Einst- like Ein- like Einstein's like actually my favorite example. Like Br- I think you'd agree Einstein was a brilliant physicist.

    6. AM

      Definitely, yeah.

    7. MA

      He was like a, he was a, he was a Stalinist. Like he was this, he was a-

    8. AM

      Was he?

    9. MA

      Yeah, he was a socialist and he was a Stalinist.

    10. AM

      Oh.

    11. MA

      And he was like, well, he thought like Stalin was fantastic.

    12. AM

      Well, he did order his out, so [laughs]

    13. MA

      Yeah. Yeah, okay. All right.

    14. AM

      True socialism, I mean.

    15. MA

      All right, Ei- all right, Einstein. You know, I'll, I'll, uh, I'll, I'll, I'll, I'll, I'll, I'll, I'll, I'll take your word for it.

    16. AM

      [laughs]

    17. MA

      But like once he got into politics, he was just like totally loopy. Or, or, you know, or even right or wrong, it's just, you know, he just sounded like all of a sudden like an undergraduate lunatic, like s- somebody in a dorm room. Like he-- There, there was no transfer learning from physics into politics. Uh, like he, he, uh, was it right or wrong, he didn't-- There was no g-

    18. AM

      Yeah

    19. MA

      ... there was clearly, there was nothing new in his political analysis.

    20. AM

      Yeah. Yeah.

    21. MA

      It was the same rote, routine bullshit you get out of, you know-

    22. AM

      Yeah. So, so in a way, your, the argument you're making is like w-we may, may be already at human-level AI. I mean, perhaps the definition of AGI is, is, is something totally different. It's like above human level that something that truly generalizes across domains. It's, it's not something that we've seen.

    23. MA

      Yeah, like we've ideal-- Yeah, as, as I said, we, we've, we've... And, and, you know, look, we should, we should shoot big, but we've, we've idealized a, a, we've idealized a goal, um, that may be idealized in a way that like it, it-- Number one, it's just, it, it's, it's like so far beyond what people can do-

    24. AM

      Right

    25. MA

      ... that it's, it's no, you're no longer, it's no longer a relevant comparison to people.

    26. AM

      Right.

    27. MA

      And, and usually AGI is defined as, you know, able to do everything better than a person can.

    28. AM

      Right.

    29. MA

      And it's like, well, okay, so if doing everything better than a person can, it's like if a person can't do any transfer learning at all-

    30. AM

      Right

  16. 41:15–45:20

    Functional AGI and Automating Labor

    1. AM

      on top of that or train the same foundation model on top of that, and, and we'll, we'll go, we'll target every sector of economy, and, and you can automate a big part of labor that way.

    2. MA

      Right.

    3. AM

      So I think, I think, yeah, w- I think we're on that track-

    4. MA

      Right

    5. AM

      ... for sure.

    6. MA

      Right. Um, you tweeted after GPT-5 came out that you were feeling the diminishing returns.

    7. AM

      Yeah.

    8. MA

      What, what were you expecting and, but, and, and what needs to be done? Do we need another breakthrough to get back to the pace of growth? Or what, what are your thoughts there?

    9. AM

      I mean, this, this whole discussion is, is sort of about that, and, and my feeling is that, uh, you know, GPT-5, uh, got good at verifiable domains. It didn't feel that much better at anything else. The more human angle of it, it felt like it regressed, and like you had this, uh, sort of, uh, Reddit pitchfork, uh, sort of, uh, movement against, against Sam and OpenAI because they felt like they lost a friend. GPT-4o felt a lot more human and closer, uh, whereas GPT-5 felt a lot more robotic, you know, very in its head, kind of trying to think through, through everything. And, um, and so I, I, I would've just expected like when we went from GPT-2 to 3, it was clear it was getting a lot more human. It was, uh, a lot closer to our experience. It can, you can feel like it's actually, oh, it gets me. Like, it, there's something about it that understands the world better. Similarly, 3 to 4. 4 to 5 didn't feel like it was a better overall being, as it were.

    10. MA

      But, but is that, is that, is, is that, is that a, the, is the question there, like, is it emotionality? I-is it-

    11. AM

      Par-partly emotionality, but, but again, partly, like, I like to ask models, like, very controversial-

    12. MA

      Mm-hmm

    13. AM

      ... uh, things. Um, can it reason through, uh, I don't know how deep you wanna go here, but like, um-What happened with World Trade Seven?

    14. MA

      Right, right. [laughs] Sure.

    15. AM

      It's, it's an interesting question, right? Like, I'm not, I'm not putting out a theory.

    16. MA

      Right.

    17. AM

      But like, it's interesting, like how did it-

    18. MA

      Right

    19. AM

      ... you know, and, and can it, can it-

    20. MA

      Right

    21. AM

      ... think through controversial questions-

    22. MA

      Right

    23. AM

      ... in the same way that it can go think through a coding problem?

    24. MA

      Right.

    25. AM

      And, uh, there, there hasn't been any movement there, like the, all the reasoning and all of that stuff. Heaven sa-- And not just that, you know, that's a cute example, but like, um, COVID, right?

    26. MA

      Sure.

    27. AM

      Like, you know, w- the origins of COVID.

    28. MA

      Right.

    29. AM

      You know, go, uh, you know, dig up GPT-4 or other models and go to GPT-5. You're not gonna find that much difference of, okay, let's reason together, let's try to figure out what was the origins of COVID, because it's still an un-

    30. MA

      Right

  17. 45:20–53:10

    GPT-5, Diminishing Returns, and Lost “Humanity”

    1. MA

      like, went out and did that work, like it would maybe be that good.

    2. AM

      Yeah.

    3. MA

      Um, but then, but then of course the significance is it's like, it's like, you know, at least for, it's, it's, this is true for many domains, you know-

    4. AM

      Yeah

    5. MA

      ... kind of PhD and everything. And so-

    6. AM

      But, but this is synthesizing knowledge, not trying to create new knowledge.

    7. MA

      Well, but this, this, this gets to the, the sort of, you know, of course, you, you get into the angels dancing on the head of a pin thing-

    8. AM

      Right

    9. MA

      ... which is like, what, what, you know, what's the difference? How many, how much new knowledge e-ever actually is there anyway?

    10. AM

      Yeah.

    11. MA

      What do you actually expect from people when you ask them questions? Um, and so what, what I'm looking for is like, yes, explain this to me in like the, the clearest, most sophisticated, most complex-

    12. AM

      Yeah

    13. MA

      ... most like complete way that is possible for somebody to ex-

    14. AM

      Yeah

    15. MA

      ... you know, for a real expert to be able to, to, to explain things to me. Um, and that's what I use it for. And at le- and again, as far as I can tell from the cross-checking, like I'm getting, you know, like almost, like basically a hundred out of a hundred. Like I don't even think I've had an issue in months-

    16. AM

      Yeah, yeah

    17. MA

      ... um, where it's like-

    18. AM

      For sure

    19. MA

      ... had, had a, had a, had a problem in it.

    20. AM

      Yeah.

    21. MA

      And it's like, yeah, you can say, yeah, synthesizing is supposed to create new information, but like it's, it's generating a forty page-- It's basically generating a forty page book-

    22. AM

      That's amazing. Yeah

    23. MA

      ... that's like incredibly like fluid. It's, you know, it's-

    24. AM

      Yeah

    25. MA

      ... it's, it's, it's, it's, you know, the, the lo- the logical coherence of the entire-

    26. AM

      Yeah

    27. MA

      ... like it's a, it's a great write. Like if, if you, if you evaluated an, an, a, a human author on it, you would say-

    28. AM

      Yeah

    29. MA

      ... "Wow, that's a great author."

    30. AM

      Yeah.

  18. 53:10–57:30

    Creativity, Reasoning, and Finding Truth in AI

    1. AM

      lot of progress or outcome there. But I, I watch it kind of from far.

    2. MA

      Although, you know, for all we know, it's already, there's already a bot on X somewhere.

    3. AM

      What's that? Maybe.

    4. MA

      You know? You know?

    5. AM

      Perhaps.

    6. MA

      You never know. It might not be a big announcement. It might just be a, you know, one day there's just, like, a bot on X that starts winning all the arguments.

    7. AM

      Yeah. It could be. It could be.

    8. MA

      Or, or a co- as I was saying, or a co- a coder, uh, a user at Reddit all of a sudden that is-

    9. AM

      Yeah

    10. MA

      ... generating incredible software. Um, okay, let's, uh, let's spend our remaining minutes. Let's, let's, let's talk about you. So, uh, so, uh, so how-- So yeah, take us, start from the beginning-

    11. AM

      Mm-hmm

    12. MA

      ... with your, uh, with your life, and how, how did you get, how did you get from being born and being in Silicon Valley?

    13. AM

      Okay. [laughs] Um-

    14. MA

      In two minutes.

    15. AM

      Yeah.

    16. MA

      I'm just, I'm joking, but...

    17. AM

      Yeah. I, I got introduced to computers, uh, very, very early on. And so for whatever reason, so I was born in Amman, Jordan.

    18. MA

      Mm-hmm.

    19. AM

      And for whatever reason, my, my dad, who was just a government engineer at the time, uh, decided that computers were important. And he didn't have a lot of money, took out a debt, bought a computer. It was the first computer in our, in our neighborhood, first computer o-of anyone I know. And I just-- one of my earliest memories, I was six years old, just watching my dad unpack this machine and sort of open up this huge manual and kind of finger-type cd, ls, mkdir. And like, I would, you know, be behind his shoulder and just, like, watching him, you know, type these commands and seeing the sort of machine kind of respond and do exactly what he's asked it to do. Um-

    20. MA

      Popping, popping Tylenol as you-

    21. AM

      [laughs] Popping Tylenol. Exactly. [laughs]

    22. MA

      Autism activated.

    23. AM

      Of course. [laughs] You have to. [laughs] You have to. Exactly.

    24. MA

      What kind of, um, what kind of computer was it?

    25. AM

      Uh, it was, uh, an IBM, as far as I remember.

    26. MA

      IBM PC.

    27. AM

      It was IBM PC-

    28. MA

      So what year was this about?

    29. AM

      ... MS-DOS, uh, 1993.

    30. MA

      1993?

  19. 57:30–1:03:00

    The Origins of Replit and Early Coding Days

    1. AM

      computer engineering and-

    2. MA

      Oh, okay

    3. AM

      ... and, and did that for a while, uh, but then rediscovered my love for, for programming, uh, reading Paul Graham essays on LISP and things like that and, uh, started messing around with Scheme and programming languages like that. Um, but then I found it incredibly difficult to just, like, learn different programming languages. I didn't have a laptop at the time, and so every time I'd go to, like, wanting to learn Python or Java, I would go to the computer lab, download gigabytes of software, try to set it up, type a little bit of code, try to run it, you know, run into missing DLL issue or... And I was like, "Man, this is so primitive." Like, at the time, it was 2008, something like that, you know, we had, uh, Google Docs. We had Gmail. You could, like, open the browser, uh, and probably thanks to you, and be able to kind of, uh, use software on the internet, and I thought the web is the ultimate software platform.

    4. MA

      Right.

    5. AM

      Like, everything should go on the web. Okay, who's building an online development environment?

    6. MA

      Right.

    7. AM

      And, and no one.

    8. MA

      Right.

    9. AM

      And it felt like I w- I found, like, a hundred dollar bill on the, you know-

    10. MA

      Right

    11. AM

      ... on the floor of Grand Central Station.

    12. MA

      Right.

    13. AM

      Like, surely someone should be building this.

    14. MA

      Right.

    15. AM

      But no, no one was building this. And so it's like, okay, I'll, I'll try to build it. And I got something done in, like, a couple hours, uh, which was a text box. You type in some JavaScript. We-- And there's a, there's a button that says Eval. You click Eval, and it evaluates the Java-- It shows you in a, in an alert box. [laughs]
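[Editor's note: That first prototype, a text box and an Eval button, is the smallest possible REPL: read source, evaluate it, show the result. The original was JavaScript's `eval` in a browser alert box; the same idea as a minimal Python sketch:]

```python
# A one-step REPL: read a source string, evaluate it, return the result.
# (eval on untrusted input is unsafe in production; this mirrors the
# throwaway prototype described above, nothing more.)
def tiny_repl_step(source):
    return eval(source)

print(tiny_repl_step("1 + 1"))   # -> 2, the "one plus one, two" demo
```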

    16. MA

      Right. Right.

    17. AM

      So one plus one, two.

    18. MA

      Right.

    19. AM

      I was like, "Oh, I have a programming environment."

    20. MA

      Yeah.

    21. AM

      I showed it to my friends. People started using it. I added a few additional things like saving the program. I was like, "Okay. All right. This is-- There's, there's a real idea here. People love it." And then again, it took me two, two to three years to actually be able to build anything because, you know, the browser can only run JavaScript, and it took a breakthrough at the time. Um, Mozilla had a research project called Emscripten that allowed you to, uh, compile different, uh, programming languages like C, C++ into JavaScript.

    22. MA

      Mm.

    23. AM

      And for the browser to be able to run something like Python, I needed to compile CPython to JavaScript, so I was the first to do it in the world.

    24. MA

      Mm-hmm.

    25. AM

      Uh, so built, uh, contributed to that project and built a lot of the scaffolding around it, and we c- uh, my friends and I compiled Python into JavaScript, and it was like, okay, we did it for Python. Let's do it for Ruby. Let's do it for Lua. Let's do it... And that's how the emergence of the idea for Replit came, is that when you need a REPL, you should get it. You should REPL it.

    26. MA

      Mm-hmm.

    27. AM

      And so a REPL is, is the most primitive programming environment possible. So I added all these programming languages, and again, all this time, my friends were using it and excited about it. And I was on GitHub at the time, and just my standard thing is, like, when I make a piece of software, is to open source it. And so I was open sourcing all the things. I was, you know, years building just, like, this underlying infrastructure to be able to just run code in the browser, and then it went viral.

    28. MA

      Right.

    29. AM

      Uh, it went viral on Hacker News, and it coincided with the MOOC era.

    30. MA

      Right.

  20. 1:03:00–1:08:00

    Hacking His University and Getting Caught

    1. AM

      I went into my parents', uh, uh, basement, uh, and, uh, implemented, uh, the polyphasic sleep. Are you familiar with that?

    2. ET

      I, I, I, I am.

    3. AM

      Uh, Leonardo da Vinci's, uh, polyphasic sleep. I didn't h-hear it from Leonardo da Vinci. I heard it from Seinfeld, 'cause, uh, there's an episode where Kramer goes on, [laughs] on polyphasic sleep.

    4. ET

      So you sleep, what, 20 minutes every four hours?

    5. AM

      Yes, 20 minutes every four hours.

    6. ET

      Every four hours.

    7. AM

      Yeah.

    8. ET

      And, and yes, and this, this somehow is gonna work well, and it, it-

    9. AM

      Yeah. A- and, and hacking-

    10. ET

      It's, it's definitely-

    11. AM

      ... if you've ever done any-

    12. ET

      As, as the meme goes, this, this has never worked for anybody else, but it might work, but, but-

    13. AM

      But it might work for me? Yes. [laughs] And a lot of what hacking is, is that you're, you're coming up with ideas for, like, finding certain security holes and, like, writing a script and then running that script, and that script will take you, like, uh, 20, 30 minutes to run. And so you'll take that, you know, 20, 30 minutes to sleep and go on. And so I spent two weeks just going mad, like, trying to hack into the university database. And, uh, finally, I found, um, a way, I found a SQL injection somewhere on the site, uh, and I found a way to, like, be able to edit the, the records. But I didn't wanna risk it, so I went to my neighbor who was going to the same school. Uh, I think till this day no one caught him. But I went to him and I said, um: "Hey, uh, I have this way to change grades. Like, would you want to be my guinea pig?" And I was honest about it. I was like, "I'm not gonna do it. [laughs] Are you open to do it?" [laughs] He's like, "Yeah, yeah, yeah."

    14. ET

      They call this human trials.

    15. AM

      [laughs]

    16. ET

      This is how medicine works.

    17. AM

      [laughs] So, so we, we went and, and, and, uh, we went in and changed his grades, and he, he went and pulled his transcript, and the, you know, the update wasn't, wasn't there, and went back to the basement. Well, it turned out that I had access to the, uh, slave database.

    18. ET

      Mm-hmm.

    19. AM

      I didn't have access to the master database. So found a way through the network, privilege escalation. It was an Oracle database that had a vulnerability, and then found the real database, and then I just, you know, did it for myself, uh, changed the grades, and went and pulled my transcripts. And sure enough, it actually changed. Went and p-bought the, the, the, the gown, went to all the graduation parties.

    20. ET

      [laughs]

    21. AM

      Uh, did all that, and we're graduating. Um, and then one day I'm at home. It's, like, maybe 6:00 or 7:00 PM. I get a, you know, the, the telephone at home rings. You know the-

    22. ET

      [laughs] Ominous, ominous, ominous ring sound.

    23. AM

      Yes. [laughs] Uh, well, um, hello, and he's like: "Hey, this is the university registration system." And I knew the guy that ran it. Uh, he's like: "Look, you know, we, we, we're having this problem. The system's been down all day, and it, it keeps coming back to your record. There's an anomaly in your record where you're both pass-- you have a passing grade, but you're also banned from that, uh, final exam or subject." I was like, "Oh, shit." Well, it turns out the database is not normalized, so typically they, when they ban you from an exam, the, the grades reset to 35 out of 100. But apparently there's a Boolean flag, and by the way, all the column names in the database are single, are single letters. So [laughs] that was the hardest thing, security by obscurity.

    24. ET

      Right.

    25. AM

      And it turns out there's a flag that I didn't check. So when, when, when you go over attendance, um, uh, wh-when you don't attend and they, they, they wanna fail you, they, they ban you from the final exam. So I changed the grades, and that, that, that created, uh, a-an issue and brought down the system. So they were calling me, and I thought at the time, I was like, you know, I could, I could potentially lie and it'll, it'll be a huge issue, or I just like, I'll just, I'll just fess up. I'll just fess up. Yeah.

    26. ET

      [laughs]

    27. AM

      So I said, "Hey, listen, look, um, yeah, I might know something about it. Hey, let me, let me come, uh, tomorrow and kind of talk to you about what happened." So I go in, and I open the door, and it's the deans of all the un- all the schools. It's like computer science, computer... They, they were all working on it for, like, days because it's like, it's like, it's a very computer heavy-

    28. ET

      Mm

    29. AM

      ... you know, university, and it was a real problem. And they were all really intrigued about what happened. So I pull up a whiteboard and start explaining what I did.

    30. ET

      [laughs]

  21. 1:08:00-1:11:55

    The Redemption and Lessons Learned for the AI Age

    1. AM

      gonna let you go, but you're gonna have to help the system administrators secure the system-

    2. MA

      There we go

    3. AM

      ... uh, for the summer." I was like, "Yeah, happy to do it." And I show up, and all the programmers there hate me.

    4. MA

      Yeah, I'll bet.

    5. AM

      Hate my guts.

    6. MA

      Yes, yes, 100%.

    7. AM

      And they would lock me out. I would see them outside; I would knock on the door, and no one would listen. [chuckles] It's like they didn't wanna let me in. I tried to help them a little bit, but they weren't collaborative. So I was like, "All right, whatever." And then it came time for me to actually graduate. It was the final project, and one of the computer science deans came to me and said, "Look, I need to call in a favor. I was a big part of the reason we let you go and didn't prosecute you. So I want you to work with me on the final project.

    8. MA

      [chuckles]

    9. AM

      And it's gonna be around security and hacking." I was like, "No, I'm done with that shit."

    10. MA

      [laughs]

    11. AM

      Like, I just wanna build programming environments and things like that. And he's like, "No, you have to do it." I was like, "Okay." So I thought I'd do something more productive, and I wrote a security scanner that I was very proud of. It crawls the different pages of a site and tries SQL injection and all sorts of things. And actually, my security scanner found another vulnerability in the system.
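For readers curious what such a scanner looks like, here is a minimal sketch of one probe it might run: append a classic error-based SQL-injection payload to each query parameter and look for database error text in the response. The error signatures and the example URL are placeholder assumptions, not details from the episode:

```python
import urllib.request
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

PAYLOAD = "'"  # a lone quote often breaks naively concatenated SQL
ERROR_SIGNATURES = ("sql syntax", "odbc", "unclosed quotation mark")

def probe(url: str) -> list[str]:
    """Return the names of query parameters that look injectable."""
    scheme, netloc, path, query, frag = urlsplit(url)
    params = parse_qsl(query)
    suspicious = []
    for i, (name, value) in enumerate(params):
        mutated = list(params)
        mutated[i] = (name, value + PAYLOAD)  # inject into one parameter
        test_url = urlunsplit((scheme, netloc, path, urlencode(mutated), frag))
        try:
            body = urllib.request.urlopen(test_url, timeout=5).read()
        except Exception:
            continue  # unreachable page: nothing to conclude
        text = body.decode("utf-8", "replace").lower()
        if any(sig in text for sig in ERROR_SIGNATURES):
            suspicious.append(name)
    return suspicious
```

A full scanner would also crawl links to discover URLs to probe; a URL with no query parameters is trivially reported clean, e.g. `probe("http://example.com/page")` returns `[]`.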

    12. MA

      Amazing. Amazing.

    13. AM

      And so I went to the defense, and he's like, "You need to run this security scan live and show that there's a vulnerability." I didn't understand what was going on at the time, but I just said, "Okay." So I give the presentation about how the system works, and I was like, "Oh, let's run it," and it showed that there's a security vulnerability. "Okay, let's try to get a shell." So the system automatically runs all the security stuff and gets you a shell. And then, well, it turned out the other dean had been given the mandate to secure the system, and I started to realize I'm a pawn in some kind of rivalry here. And his face turned red, and he's like, "No, it's impossible. We secured the system. You're lying." I was like, "You're accusing me of lying? All right, what should we look up? Your salary or your password? What do you want me to look up?" And he was like, "Yeah, look up my password." So I look up his password, and it was gibberish; it was encrypted. And he was like, "Oh, that's not my password. See? You're lying." I was like, "Well, there's a decrypt function [chuckles] that the programmers put in there." So I run decrypt, and it shows his password, and it was something embarrassing. I forgot, [laughs] I forgot what it was.

    14. MA

      Yeah.

    15. AM

      And so he gets up, really angry, shakes my hand, and leaves to change his password. So that was the second time I was able to hack into the university. Luckily, I was able to graduate and give them the software, and they secured the system. But yeah, later on I would realize that he wanted to embarrass the other guy, [laughs] which was why I was in the middle.

    16. MA

      Academic politics. Well, I think the moral of the story is: if you can successfully hack into your school system and change your grade, you deserve the grade and you deserve to graduate.

    17. AM

      I, I think so.

    18. MA

      And just for any parents out there, or children out there-

    19. AM

      [laughs]

    20. MA

      ... you can cite me as the-

    21. AM

      Marc Andreessen

    22. MA

      ... you can cite Amjad and me as the moral authority on this. Yeah.

    23. AM

      One lesson I think is very relevant for the AI age: the traditional, more conformist path is paying fewer and fewer dividends.

    24. MA

      Right.

    25. AM

      And I think kids coming up today should use all the tools available to discover and chart their own paths. Because just listening to the traditional advice and doing the same things that-

    26. MA

      Yeah

    27. AM

      ... people have always done is just not working out as well as we'd like.

    28. MA

      Yeah. That's right.

    29. MA

      Amjad.

    30. AM

      Yeah.

Episode duration: 1:11:56

Transcript of episode g-WeCOUYBrk
