
Gmail Creator Paul Buchheit On AGI, Open Source Models, Freedom

It’s the first guest episode of Lightcone! The hosts sit down with Paul Buchheit, one of Google’s earliest employees, the creator of Gmail and a YC Group Partner. (He also came up with Google’s famous tagline “Don’t be evil.”) This discussion covers a wide range of topics, including the future of AGI, the early days of OpenAI, and the crucial importance of open source models.

Chapters (Powered by https://bit.ly/chapterme-yc):

0:00 Coming Up
1:11 Google's early views on AI
2:29 Paul's time at Google
8:34 Why isn't Google the AI leader?
12:01 Paul's connection to OpenAI
14:34 Open source models
16:09 YC involved in OpenAI's origin story
20:56 Zuck/Meta: Champions for open source?
29:31 How do we get to AGI?
37:53 Dangers of centralized AI planning & control
42:10 Doomers vs Optimists
48:18 Outro

Jared Friedman (host) · Diana Hu (host) · Paul Buchheit (guest) · Garry Tan (host) · Harj Taggar (host)
Aug 9, 2024 · 48m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–1:11

    Coming Up

    1. JF

      It seems like Google has all the ingredients to just be the dominant AI company in the world. So, why isn't it?

    2. DH

      Do you think OpenAI in 2016 was comparable to Google in 1999 when you joined it?

    3. JF

      Are you a believer that we are definitely going to get to AGI?

    4. PB

      What is the long-term trajectory of AI? It's the most powerful technology we've ever invented, and so the question is, like, where does that power go? I think we ha- have to build a whole coalition of people who are in favor of freedom and open source and not just sort of bet everything on Facebook saving us.

    5. DH

      (laughs)

    6. GT

Welcome to another episode of The Light Cone. I'm Garry. This is Jared, Harj, and Diana, and we're the partners at Y Combinator, where we've funded hundreds of billions of dollars worth of companies. And we have a special guest who is also one of the original outside partners, the non-founding partners at YC, Paul Buchheit. He created Gmail, he coined the term "Don't be evil." PB, thanks for joining us today.

    7. PB

Thanks, Garry.

    8. GT

      So what should we start off with?

  2. 1:11–2:29

    Google's early views on AI

    1. JF

      Well, I think one thing people don't often realize is that you've been thinking about AI for a long time and that Google itself was kind of an AI company. Can you tell us more about that? What was the internal view of AI at Google?

    2. PB

Yeah, I mean, I think really Google has always... was always supposed to be an AI company from the beginning. Um, you know, Larry and Sergey set out to build, um, you know, these very large compute clusters and do a lot of machine learning on all of the data that they gather, a- and actually, arguably, you know, the mission statement is pretty straightforward. The Google mission is to gather all the world's training data and feed it into a giant AI supercomputer, though they put it slightly less directly. They said, "Gather all the world's information and make it universally useful and accessible," or something like that. But essentially, y- you know, what that really meant in practice is feeding it into a giant AI supercomputer.

    3. JF

      And even the origin story of Google was all based on their PhD with PageRank-

    4. PB

      Mm-hmm.

    5. JF

... which is very much taught today in a lot of machine learning classes. It is one of the foundational, kind of historical, AI algorithms.

    6. PB

      Yeah, I mean, there was a, there was an understanding very early on that if you have enough data, that's actually the path to, to making things intelligent instead of just trying to iterate forever on little

  3. 2:29–8:34

    Paul's time at Google

    1. PB

      algorithms.

    2. DH

      How early did you join Google, Paul Buchheit? Can you talk a little bit about what Google was like when you joined?

    3. PB

      Uh, yeah, so it was June 1999, so that was, uh, let me see... (laughs)

    4. DH

      (laughs)

    5. PB

      ... 25 years ago, a little more, um, and so yeah, it was a very small startup. We were, we were in Palo Alto on University Ave, just, uh, up above, like, a tea shop at the time, and it was, it was electric. It was really cool. Um, I, I actually... After I was there for about a week, I, I tried to get more equity. (laughs)

    6. JF

      (laughs)

    7. PB

      But it turns out you have to negotiate before accepting.

    8. DH

      (laughs)

    9. PB

      Um, uh, so... But yeah, it was... It, it, it had a very kind of unreal sense of, like, just an excitement, you know? I was excited to go into work because we were, we were just doing big things.

    10. DH

      And when you were there, like, in that early set of Google people, how did you all envision that this AI thing would play out and what Google's, like, AI future would look like?

    11. PB

      You know, we didn't know.

    12. DH

      Was it something that ever came up?

    13. PB

      Right. No, I mean, AI has obviously been a thing that people have been thinking about for a long time. Um, I, I made my first neural net back... I, I dug up the code a while back. I think it was, like, 1995, and I had... It was like one of those three-layer neural nets with-

    14. JF

You did a classic, uh, MNIST digit classification thing?

    15. PB

Yeah, I was doing... I, I, I did a, uh, not exactly digit classification, but there, there were these things called figlets that are like ASCII letters, and so I made it do essentially like an OCR on, on those. Um, but you know, it'd be like 100 weights (laughs), so that's very much smaller than today's models.

    16. JF

      Now, it's like trillions of weights now.

    17. PB

      Yeah, and the history of, like, neural nets is kind of weird. Um, the first thing was when they invented the perceptron, which was like a single neuron, and it was very hot for a short time until some researcher showed that a perceptron can't compute XOR, and then they were like, "Well..." Like, it's just dead for a while, until someone had this idea to use multiple neurons, and so it was like very slow going. And then it was kind of like dead again for a while, and then, to my perception, it kind of really picked up in the early teens, you know, when deep learning became popular, and that was when we first started seeing, like, I think impressive results, where that was when we started feeling like internally, you know, in the discussions at YC, that AI had switched from being something in the indefinite future to being in the more definite future. Um, and that is, you know, kind of what led to the creation of open eye- OpenAI as well.
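The perceptron/XOR result Paul mentions can be made concrete in a few lines: a single linear threshold unit cannot represent XOR because XOR is not linearly separable, but adding one hidden layer of neurons fixes it. A toy sketch, not from the episode, with weights hand-picked for illustration:

```python
# A single perceptron (one linear threshold unit) can't compute XOR,
# because XOR isn't linearly separable. Two layers of neurons can.
# Weights below are hand-picked for illustration.

def step(x):
    """Threshold activation: fire iff the weighted sum is positive."""
    return 1 if x > 0 else 0

def perceptron(inputs, weights, bias):
    return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

def xor(a, b):
    # Hidden layer: one neuron computes OR, one computes NAND.
    h1 = perceptron([a, b], [1, 1], -0.5)    # a OR b
    h2 = perceptron([a, b], [-1, -1], 1.5)   # NOT (a AND b)
    # Output neuron: AND of the two hidden units gives XOR.
    return perceptron([h1, h2], [1, 1], -1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

No single setting of `weights` and `bias` makes `perceptron` itself compute XOR, which is the limitation that stalled the field until multi-layer networks.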

    18. JF

      Were there any conversations around, like, the power of AI and the implications of AI, specifically AGI and just like the impact on society, or did it feel too far removed?

    19. PB

      Yeah, I think it was still too far off in the future. I mean, it was very much sci-fi at that point. Um, we were dealing with more, you know, near-term, how do we make search better? But search is, you know, kind of a... to some extent, uh, an AI problem. You have to figure out what it is the, the user is looking for. It's remarkably good. If you actually look at Google Search, like, there's a lot of stuff going on behind the scenes. Um, and actually, one of the earliest kind of magical features that we added was the "Did you mean... ?" Uh, you know, the spell correction, and so that actually comes from originally just my inability to spell. I've never been very good at spelling. My, my brain doesn't like arbitrary patterns. (laughs)

    20. JF

      (laughs)

    21. PB

      So like when I was in school, math was easy because it's predictable, but spelling always made me struggle. Um, and so when I started at Google, one of the first features I added was the spell corrector because I was looking at the query logs, and I would see that I'm not the only person with this problem. Like a third of the queries were misspelled or something like that. So it was like the easiest quality win ever was just to fix the spelling.

    22. DH

      Wait, wait, so you built the original spelling corrector at Google? How did that... I didn't know that.

    23. PB

Um, I did the first "Did you mean..." feature, um, and I built it just based off of kind of an existing spell corrector library. But it would give really dumb corrections, like if you typed in TurboTax, it would try to correct it to turbot ax.

    24. DH

      (laughs)

    25. PB

      Turbot being a type of fish.

    26. DH

      (laughs)

    27. PB

Um, and so, I-I did some basic, like, statistical filtering that would say like, "That's an idiotic correction, don't show it." And so I would just, like, filter the results, and then I was working on building a better spell corrector, 'cause I knew, you know, we could just use all of the data. We had a copy of the web and we had billions of search queries. There's, like, a lot of information there. So I was working on making something better, and then I was just using it as an interview question, so when I would interview engineers, I'd be like, "How would you build a spell corrector?" And I would say, like, 80% of engineers just had no idea and the other 20% gave sort of mediocre answers. But then there was this, like, one guy who gave a really, really good answer. He was just ahead of where I was already, so I was like, "We have to hire him." Um, and so for his first project... He started, I think it was the end of 2000, kind of like late December. As his, like, intro project, I just gave him all of my code and showed him how to, how to run, you know, projects on the cluster. Um, and then I went away for a couple of weeks for Christmas, and when I came back, he had invented w- what we now know as, like, the "Did you mean?" feature. And so he did that, all of that, in, like, his first two weeks at Google, and it was, like, this incredible thing that could spell correct my last name. You know, no one had ever done a spell corrector that would correct proper, proper nouns and things like that. Um, and so that person was Noam Shazeer, who is also the person who later on invented AI, so he's, he's one of the key people on the "Attention Is All You Need" paper.
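The statistical approach Paul describes (generate candidate spellings, rank them by frequency in the query data, and filter out "idiotic corrections") can be sketched in miniature. The toy corpus, the `min_ratio` threshold, and the function names here are illustrative assumptions, not Google's actual system:

```python
# Minimal statistical "Did you mean...?" sketch: rank candidate spellings
# one edit away by how often they appear in a query corpus, and refuse
# corrections without strong statistical evidence. Toy data; illustrative
# only -- not Google's implementation.

from collections import Counter

# Pretend query log; in practice this would be billions of queries.
corpus = Counter({
    "turbotax": 900, "turbo": 400, "tax": 800,
    "google": 1000, "spelling": 50, "buchheit": 20,
})

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one delete/transpose/replace/insert away from `word`."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in ALPHABET]
    inserts = [l + c + r for l, r in splits for c in ALPHABET]
    return set(deletes + transposes + replaces + inserts)

def did_you_mean(word, min_ratio=10):
    """Suggest a correction only if it's far more frequent than the input."""
    if corpus[word] > 0:
        return None  # already a known query
    candidates = [w for w in edits1(word) if corpus[w] > 0]
    if not candidates:
        return None
    best = max(candidates, key=lambda w: corpus[w])
    # Filter "idiotic corrections": require strong frequency evidence.
    if corpus[best] < min_ratio * (corpus[word] + 1):
        return None
    return best

print(did_you_mean("turbotaxx"))  # -> turbotax
```

The frequency-ratio filter is the sketch's version of the "that's an idiotic correction, don't show it" check: a suggestion only survives if the data overwhelmingly supports it.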

    28. DH

      Wow.

    29. PB

      And then he's, he's now since started Character.AI.

    30. DH

      I never connected those dots, but I remember-

  4. 8:34–12:01

    Why isn't Google the AI leader?

    1. JF

      seems like Google has been working on AI for a long time. It has the data, the compute, the people. It has all the ingredients to just be the dominant AI company in the world, so like, why isn't it? What do you think happened?

    2. DH

      It seems like it got stuck someplace.

    3. PB

      Yeah, I mean, I don't know exactly. So I... And just to clarify for everyone-

    4. DH

      (laughs)

    5. PB

      ... I don't work at Google. Um, I left in, uh, 2006. Um, but m- my perception, you know, as an outsider, I think a lot of it kind of happened especially around the time of the transition to Alphabet when, you know, the company was no longer really being run by the founders so much, and especially, you know, after they, they, they left. Um, and I think it became more about protecting and preserving the search monopoly. And so if you think about it from that perspective, they have, you know, this gold mine, like, like search is just so valuable. Um, and AI is an inherently disruptive technology both in terms of l- maybe breaking the search, you know, business model where if you actually give people the right answer, they won't need to click on an entire page full of ads. There is... And this was noted, of course, in the er- very original Google paper back in, uh, 1998 that their search... A search company has an inherent tension between, um, profitability and giving the right answer 'cause there's always a temptation that if you make your results worse, people will actually click on more ads. Um, and so s-... AI has the potential to disrupt that, but I think even more than that, it has the potential to completely, um, anger regulators. Um, and so a lot of Google's business is just dealing with regulators and so, you know, we know if you put out an AI, it's definitely gonna say offensive things. And so I-I think they were kind of terrified of that and so even internally, uh, when they were developing, um, you know, there was a, there was a version of a chatbot that Noam had built, um, and this is the one that, that, that sort of whistleblower-

    6. DH

      Oh, yeah.

    7. PB

      ... as he claimed was conscious that... I think they called it LaMDA. Um, it actually originally had a different name, but he was forced, they were forced to change the name 'cause the original name was Human.

    8. DH

      (laughs)

    9. PB

So they weren't even allowed to give it a human name, so the original name was Something Human and it had to be changed to LaMDA. Um, but even inside of the company, you know, th- there were restrictions on what you could put out. They had a version of, um, DALL-E called Imagen and it was prohibited from making human form. So like, even internally, the researchers weren't allowed to, to generate images of humans. So they were just extremely risk-averse, I think is the answer.

    10. JF

      And how do you think it would've been different if Sergey and Larry were still in charge and pushing forward?

    11. PB

I mean, I think they can override, you know, risk, risk aversion, right? Uh, but, but it takes someone with that level of credibility to, to, to really bet the company, or, or to, to, to say, "Yeah, we're gonna do this thing and it's gonna cause a lot of problems." Um, but I think that if given the chance, Google never would've launched AI. The only reason they launched it is 'cause OpenAI w- you know, put out ChatGPT and suddenly it became a thing that they were forced to do. And that also helped them too because, you know, OpenAI took a lot of those bullets in terms of like-

    12. JF

      Yeah.

    13. PB

      ... saying crazy and offensive things.

    14. DH

      (laughs)

    15. PB

      Um, and so at that point then, uh, you know, Google could put out something that was a more sanitized version that, you know, prohibits the existence of white people or whatever.

    16. JF

      (laughs)

    17. PB

      But, um, you know.

    18. JF

      And OpenAI kind of spun out of YC and you were

  5. 12:01–14:34

    Paul's connection to OpenAI

    1. JF

      around at that time.

    2. DH

      Yeah, originally it was YC Research.

    3. PB

      Right, so, you know, again, kind of going back to the early teens, we were s- just tracking the progress of this technology and that was where we started to see deep learning doing really-... kind of impressive things where there was like playing video games and, like, winning and getting good at things where you could say... Where you could finally see that AI was real, right? So, so for decades, AI was kind of this sci-fi thing, and you had all this symbolic AI, which I would say is kind of like garbage. And so finally, AI was doing something that was like truly impressive. And, um, so, you know, it was kind of on our radar. And then, you know, Sam, I think, talks to just a lot of people. And so he had, uh, I think, been at one of these things where Elon was, was very, you know, essentially ringing the alarm bells that AI was going to kill us all and, and proposing that, um, you know, maybe there should be regulation. And so we were having these discussions. You know, Sam's asking like, "Do you think we should push for AI regulation?" And, um, you know, I'm of the opinion that that only makes things worse because I don't have great confidence in our, um, elected representatives to be, you know, super wise, uh, and forward-thinking. And so my argument was that the better thing to do would be that we actually build the AI, and, um, you know, that way we're able to influence the direction that it goes. Um, but AI was still, at that time, something that we didn't really know what the timeframe would be to be able to actually have revenue, because it was still basically a research project, and it requires just massive amounts of capital because the, the researchers are pretty highly paid, and then-

    4. DH

      Roughly what year was this?

    5. PB

      2015, I think.

    6. JF

      This is about the time after Google did the DeepMind acquisition as well, right?

    7. PB

      Yes, this was after DeepMind, so...

    8. JF

      Which made this issue more complicated because we didn't... Perhaps in those conversations, there was a desire that we don't want this AI to be stuck at Google.

    9. PB

      Right, exactly. So, so the, the fear is that basically this gets developed all locked up inside of Google. Um, and, and so the idea was that we wanted this to be something, you know, more open to the world, open to our startup ecosystem. Um, and so the idea was that, you know, we had this, this concept of YC Research that we would, um, find some way to fund this, and then hopefully, you know, our startups would be able to benefit from and, and, and build on top of that, which, you know, has in fact happened of course. Like half our startups now are, are, are building on top of it.

    10. JF

      What are your thoughts, uh, on now, uh, open-source

  6. 14:34–16:09

    Open source models

    1. JF

      models?

    2. PB

      So I'm totally in favor of them. So I, I, I think like when we think about what is the long-term trajectory of AI, it's the most powerful technology we've ever invented. Um, and so the question is like, where does that power go? And I think there's essentially two directions. Y- you either go towards centralization where all the power gets, you know, centralized in, in the government or in a small number of like big tech companies or something like that. And my feeling is that that's catastrophic for the human species, um, because you essentially minimize the agency and power of the individual. Um, and I think the opposite direction is towards freedom, and, and, and as much as possible, we should give this power and these capabilities to every individual to, to be kind of the best version of themselves. And so you can think about that in terms of, you know, how much... What would it look like if everyone had a 200 IQ or whatever, right? Like, instead of just having all of that power concentrated in one place. And open source is very important because it's kind of a litmus test for that, right? Because it's, it's true freedom. It's freedom of speech. It's First Amendment, right? Um, and, and if you don't have that, if your models are all locked away under some sort of lockdown system where there's a lot of rules about what can be said, what kinds of thoughts are acceptable, then we essentially lose all freedom, right? The freedom of speech is meaningless if I don't have the freedom of thought to even compose the ideas that I'm going to communicate.

  7. 16:09–20:56

    YC involved in OpenAI's origin story

    1. DH

      Going back to the, the history of OpenAI, like the, the, the real story of how OpenAI got started is, is actually not well-known. Um, you know, like, like many companies, the, the founding story as it gets retold and retold becomes sort of like sanitized for public consumption. But you, you had a front-row seat. In fact, you interviewed many of the early researchers that became essentially the people who built OpenAI. Like, what is the... Like, can you tell us the real founding story?

    2. PB

      Sure. I, I wouldn't say many. One. (laughs) I interviewed Ilya. Um, so yeah, I mean, it, it goes back to, again, these discussions of like, okay, maybe the way forward instead of trying to outlaw AI is actually that we should build it and as much as possible, you know, in, in the public interest. Um, and so Sam, you know, is just an incredible, uh, organizer. I've never met someone who's able to bring together so many different interests, um, and so many different people. And so he was able to round up, uh, you know, essentially donations from, uh, Elon and a number of other people. I know PG and Jessica also contributed to the, to the original, um, OpenAI nonprofit. Um, I think we even kicked in some, some YC, uh, value.

    3. DH

      We did.

    4. PB

      Um, and, and so that was kind of the root of it, and then he recruited the original team, um, you know, Greg and Ilya, and, and basically got the whole thing, whole thing started.

    5. DH

      And he was still running YC at the time.

    6. PB

      Right.

    7. DH

      And originally, this was like a subsidiary of YC called YC Research.

    8. PB

      Right. So the original-

    9. DH

      How did that work?

    10. PB

      The original concept, I think, was that it was actually part of this thing that we were calling YC Research, and then I think kind of like as Elon got more involved, it became its own, you know, OpenAI with kind of Elon more, more of the, the face of it, and no one really even knew about the, the YC, uh, roots. Actually, if you go back and look as part of their, their most recent lawsuit, they published some of the emails, and there's the one where Elon is like, "Get rid of the YC stuff." (laughs)

    11. DH

      (laughs) Why do you think OpenAI worked? Like, m- w- I remember in the early 2000s looking at Google and being like, "That's the company that's going to invent AGI some day."

    12. PB

      Yeah.

    13. DH

      And then the way it played out is not the way I would have predicted.

    14. PB

Again, the idea with OpenAI and part of the lure, like, the pitch to researchers was that when you come here, your stuff's not gonna be locked away. We're gonna put it out in the world, right? And so researchers, you know, are motivated by that and, and motivated by the mission of, of, you know, making this something that isn't just locked up inside of Google, um, and so I, I think that attracted a lot of talent. And it's the same thing, you know, as with a startup. Do you want to be inside of, like, a large corporation where... Again, the researchers working at Google couldn't even make a version of Imagen that would generate human form, right? (laughs)

    15. DH

      (laughs)

    16. PB

So they're just, like, so locked down, um, internally that if, if you're a person who I think likes to ship and likes to move fast, you know, OpenAI was the startup version of, of AI, and... But yeah, I, I think if Google were in top form, there, there, there is no way that it would have worked. Um, and that's often the way it is with startups, right? Like, if you were, if you were facing an actual, like, formidable competitor, you don't have a chance. The, the reason startups work a lot of times is because you're competing with a slow company- you know, big companies that, that, um, have the wrong incentives internally.

    17. DH

      Do you think OpenAI in 2016 was comparable to Google in 1999 when you joined it?

    18. PB

      I would say it's actually more of a crazy long shot. Like, it really seemed... A- a- and again, if you look at these emails, you know, th- that got released as part of the, the lawsuit, there's, like, one from Elon where he's like, "You guys have a 0% chance of success," right? Like...

    19. DH

      (laughs)

    20. PB

And it really looked like that. Um, and so it, it was far from obvious that it was gonna be successful. Um, I, I think the, the place... And for a long time, it really wasn't. You know, they, they were still doing the, like, the video games and everything, um, and it was really actually, like, the LLMs that made the big difference, right? And so, like, GPT-2 was kind of like... I remember Sam just being really excited and wanting to show me this thing, you know, where, where it, like, predicts the next word. (laughs) Um, and, and the next word prediction is such a, like, deceptively simple thing that you still hear people, you know, dismissing it, like, "Oh, it's not really intelligent. It's just predicting the next word," but it's like, you try predicting the next word.

    21. DH

      (laughs)

    22. PB

      It's not that easy. Um, and in fact, if you think about it, if you can predict the next word, you can predict anything, right? That's what a prompt is, right? You say, like, whatever the thing is you want predicted-

    23. DH

      (laughs)

    24. PB

... that's your prompt, and then the next word is the prediction, right? And so in order to do, um, next word prediction and be able to, to, to do what it does, it necessarily has to build in some sort of model of, of reality, or of, you know, its, its perception of reality, which in this case is limited by the fact that it's just being fed text, which is a sort of strange thing to, to grow up
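The "prompt in, next word out" framing can be shown with the smallest possible predictor: bigram counts over a toy corpus. This is obviously nothing like a real LLM; the corpus and function names are made up for illustration:

```python
# Toy next-word predictor: the same interface as prompting an LLM,
# scaled down to bigram counts over a tiny corpus. Illustrative only.

from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

# Count which word follows which word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(prompt):
    """The prompt is just context; the output is the most likely next word."""
    last = prompt.split()[-1]
    counts = following.get(last)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the dog sat on"))  # -> the
```

Everything an LLM does is this same interface with a vastly richer model of context: the prompt conditions the distribution, and the "answer" is just the continuation.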

  8. 20:56–29:31

    Zuck/Meta: Champions for open source?

    1. PB

on. On the, like, control versus freedom thing, we're sort of betting on open source to give us freedom. Zuck has sort of interestingly become, like, the hero of open source. And like, on the one hand, I feel like you could argue it's accidental, like, the weights were released, like, you know, unofficially, and he only had the GPUs because they were trying to compete with the TikTok algorithm. (laughs)

    2. DH

      You worked with him. Like, is it sort of accidental or is he, like, just the kind of guy that's always gonna be at the center of everything big that happens in the world?

    3. PB

      It's a good question. I, I mean, I don't know the backstory on it. He's definitely, like, a smart guy. Like, I wouldn't underestimate him. Um, but, and obviously there's, like, an opportunistic element, right? Because they're kind of behind in many ways, right? And so it's a way for them to differentiate and a way for them to, to sort of weaken their competitors. So there is... But there's nothing wrong with that. I mean, the fact that it's good for them is, is a great thing.

    4. DH

      But should we be worried that we're relying on Meta to keep pushing open source forward when he's a fairly strategic guy?

    5. PB

      Oh, I, I... Yeah, we shouldn't exclusively rely on them. I think we should be grateful that they're on the right side-

    6. DH

      (laughs)

    7. PB

      ... but we can't count on them being the only ones. Like, I think we have to build a whole coalition of people who are in favor of freedom and open source and not just sort of bet everything on Facebook saving us.

    8. DH

(laughs) Well, I guess to build on Harj's question, Meta is not making money on this. They're funneling profits from their gigantic advertising monopoly and just using that to build open source AI models for reasons, but not to, like, make money.

    9. PB

      They'll make money. Right. So-

    10. DH

      Well-

    11. PB

      I mean, they're using the models internally as well, right?

    12. DH

      Yeah.

    13. PB

      So, so the... And there's a lot of interesting stuff you can do with these models in terms of improving ad targeting, recommendations. Like, all the things that are driving their business are going to be improved by, um, those algorithms. And then of course, it's also an opportunity, you know, they, they exist in this competitive ecosystem versus Facebook- I mean versus, um, Google and Apple who are, you know, are both rivals in various ways, and so they're all kind of competing with each other, so their ability to kind of undercut competitors is also an important thing.

    14. DH

      But Jared, you, you were saying, like, specifically Facebook's not making money off open source-

    15. PB

      Right. Yeah, well, well-

    16. DH

      ... as a strategy.

    17. PB

      Well, I guess it's just like they seem to be in a fairly unique position to do this. If Zuck changes his mind and decides to stop open sourcing it, how else will we get large open source models if they cost like a billion dollars to train-

    18. DH

      Right.

    19. PB

      ... and it's not clear how you make a billion dollars off them? Yeah, I think that's, that's an unanswered question. I mean, that, that is, like, the, one of the fundamental concerns I have, which is that I think because it's so expensive to build these models-

    20. DH

      Yep.

    21. PB

      ... it is, that is like an inherently centralizing thing, where if, if you need a trillion dollar d-... cluster (laughs) -

    22. JF

      (laughs)

    23. PB

      ... to build your, your AGI. It's, it's hard to do that. Um, but at the very least, uh, to the extent that we can have like the legislative groundwork that says we have the right to do that, um, and then, you know, we also have a lot of startups that are working on ways to make all this more efficient. So, you know, right now it costs that much, but we're also developing new hardware that's going to be able to do these things perhaps orders of magnitude more efficient. Like, right now, I, I would say our algorithms are probably not that great. I, I would, I would be willing to bet that in 10 years the actual fundamental learning algorithms are gonna be way better and, and hopefully more efficient. So we'll have both better hardware and better algorithms.

    24. JF

      It seems like that if you just think about the amount of computational power to train a human versus the computational power to train like GPT-4, like-

    25. PB

      Exactly.

    26. JF

      ... we're evidently much more efficient.

    27. PB

      Yeah, I think, I think, I think there's still a lot-

    28. JF

      (laughs)

    29. PB

      I think there's still just a lot of inefficiency.

    30. JF

      What was that, uh, the human brain runs on like 15 or some-

  9. 29:31–37:53

    How do we get to AGI?

    1. JF

      looking forwards, what do you think are some of the, the ways this is gonna break over the next few years?

    2. PB

      Which is gonna break?

    3. JF

      A- AI.

    4. PB

      Oh.

    5. JF

Like, a- and one thing we haven't talked about here, 'cause we're kind of in the trenches of just helping the startups in the batch, is like, are we trending towards AGI? And... And just like, all the laws of everything we know are going to change. Is the world over ...

    6. JF

      Yeah.

    7. GT

      We'll live by the 2000.

    8. JF

      We have to talk about-

    9. JF

      Will there be startups? Will there be money? I, I don't know.

    10. GT

      Will there be humans?

    11. JF

      (laughs) Yeah.

    12. JF

      Will money still exist?

    13. PB

      Yeah, I mean, we don't know. That's, that's again one of the-... you know, funny questions of OpenAI since it's all funded with these sort of post-AGI IOUs. (laughs)

    14. HT

      (laughs)

    15. DH

      (laughs) Yeah.

    16. PB

      It's like, "We'll pay you back once AGI happens."

    17. HT

      (laughs)

    18. PB

      You're like, "Will we still have money?" Maybe.

    19. HT

      (laughs)

    20. DH

      (laughs)

    21. PB

      It could happen. Um, yeah, I mean, I, I, I think just honestly, we don't, we don't really know. Um, I-

    22. HT

      Are you a believer that we are definitely going to get to AGI?

    23. PB

      Yeah, I, I, I think we're, we're on the path. I, I think the, the key point that happened is we crossed a line where AI went from a research project, where you kind of put in a lot of money and don't really get much out, to a, a thing where you, you put in money and then you get out more. Um, and so it, it's like when a, when a reaction, you know, like a, like a-

    24. DH

      Goes critical.

    25. PB

      ... right, goes critical.

    26. DH

      It's sort of like, uh-

    27. PB

      Like if you have a plutonium, you have plutonium spheres and they're kind of warm, and then you put them together and then it explodes. Um-

    28. DH

      Or when ARPANET became the internet moment.

    29. PB

      Right. Um, and so, right, and, and so, right, the internet crossed that point, you know, in the, in the '90s, in the mid '90s, where all of a sudden more investment produces more impressive outcomes, which leads to more investment. And that's where we are right now, where people can't seem to throw money at it fast enough, right? And we're, we're actually talking about it's actually like a, a national issue, is that we need to build, uh, increase our electric supply to, like, train the AI, right? It's become, like, a national security thing. Um, and so I think once that happens, you get that, that cycle and it just keeps growing, right? We just keep investing more and that just keeps making the AI better. And it's clearly, you know, solving a lot of problems, and we know this because we have all the companies that are out there building it. Um, and so I, I think it just keeps improving.

    30. HT

      But why is that not unanimously the view amongst smart people? Like, why... Like, there's Yann LeCun from Meta, who's constantly arguing that this is not the path to AGI, and he's a pretty smart domain expert. Like-

  10. 37:53–42:10

    Dangers of centralized AI planning & control

    1. PB

      we can plan this all out, and- and- and we can't. All we try to do is move in the right direction and give people the right tools, and I think that as we enable everyone to be smarter and everyone to make better decisions, then collectively, we can move the whole world in a better direction. But w- we're not smart enough, and I think it's a mistake to think that we are to- to- to actually be able to say, "Here's what the world's gonna look like, and you know, this is exactly how it's all gonna work." And- a- and that's how you end up with people, you know, locked up in their pods or whatever.

    2. DH

      Paul, another thing you've been thinking about a lot is geopolitics. As this AI stuff starts to become real, how is that going to relate to geopolitics and the great power competition that we're seeing now?

    3. PB

      This is part of the reason why we wanted to build it here, right? Is 'cause if- if, you know, China has the super AI, uh, that's not gonna be good for us, um, and in particular, you know, wanting to keep it away from these kind of authoritarian systems of control, because the worst-case scenario is that we basically end up in permanent lockdown, right? 'Cause AI can create a totalitarian system from which escape is impossible because, you know, even our thoughts are essentially being censored, um, and, you know, I think that's kind of, like, the disaster scenario for- for our species. And I think that if we go down the path of control, humans basically end up zoo animals, um, and I- I don't really want that.

    4. GT

      Yeah. One of the funnier things is, uh, you know, some of the, uh, legislation that's coming along to try to control AI that we've been fighting, like SB-1047, they actually have certain statutes in there. They've watered it down a little bit, but ultimately what they want to do is, uh, hold the model builders, you know, in sort of, uh, personal liability-

    5. PB

      Mm-hmm.

    6. GT

      ... or even criminal liability for the things that their models might have a hand in doing, which is sort of like throwing the car designer, uh, in jail because someone got drunk and, you know-

    7. PB

      Yeah.

    8. GT

      ... drove the car and hit someone, right?

    9. PB

      It's incredibly insidious. I- I- I think if you attach that kind of liability, it becomes toxic, right? I'm not gonna want to touch something that has unlimited liability, and so necessarily that's a way for them to exert essentially total control, right? Is- is if you're- if you impose that kind of liability on things, then no one is gon- going to want to go near it, and they're strongly incentivized to put, like, really draconian guardrails in place, um, that, again, will limit our abilities in ways that, you know, we may not even think about. But we've seen this very recent, in recent history with the lockdown of social media. Um, you know, during COVID, we had a global pandemic that was, you know, ultimately killed tens of millions of people, people were locked up in their homes, schools were closed, and we weren't allowed to talk about where it came from. And I think that was like... That's the thing that we still don't fully appreciate how catastrophically bad that is. You know, if we can't make sense of the most important thing in the world, then we can't make sense of anything.

    10. GT

      I guess the wild thing to spot is that, like, this is basically, uh, statism. (laughs)

    11. PB

      Mm-hmm.

    12. GT

      And, uh, the wild thing is I've heard stories of even China sort of, you know, doing that thing that is in SB-1047. I've heard that that has actually happened to, uh, AI founders in China, that they've literally been sort of disappeared and told, like, "You... We will hold you personally accountable for the output of, uh, the LLM and models that your software that you created, uh, spits out."

    13. PB

      Yeah. Well, this is one of our great advantages is- is- is freedom.

    14. GT

      Yeah. (laughs)

    15. PB

      It's why, it's why we're ahead, right?

    16. DH

      (laughs)

    17. PB

      Is because you can't build a model in that environment, you know, because if you ask it about Tiananmen Square or something like that, right, it has to lie to you. Um, and actually, again, I, you know, uh... One of the things I- I like really about, like, xAI, they haven't really released a great product yet, but they have a great mission statement, right? To- to be maximally s- truth-seeking, and I think that's- that's really, um, important. And- and- and- and the authoritarian regime is inherently truth-denying, and so I- they put themselves at a disadvantage, and hopefully they keep themselves there.

    18. GT

      So it's up to us then. We've got to get involved. We've actually got to fight for open-source AI and keep it open.

    19. PB

      Yeah. Yeah. And fight, and fight to- to- to make sure that AI is a thing that- that increases the individual agency instead of eroding it.

    20. DH

      For

  11. 42:10–48:18

    Doomers vs Optimists

    1. DH

      people who are relatively neutral about...

    2. JF

      ... being doomers or optimists? Like do you ... What are the things that tip them in, like, one direction versus the other?

    3. PB

      I mean, I do think some people are inherently kind of in one direction or another, right, because the doomer thing has been around for a long time. It isn't just now. Uh, you know, a, a lot of the same doomer thing goes back, um, to the, you know, '50s, '60s, or even much earlier than that, right? Like-

    4. JF

      Industrial Revolution, typewriters.

    5. PB

      Right. But e- e- in particular, you think about like there was a very influential book, The Limits to Growth, from the Club of Rome. There was a book published, The Population Bomb, that had everyone convinced that there was going to be mass famines in the '70s and '80s. Um, and this is something that I grew up very aware of a- actually 'cause it was, um ... I was like the fourth of five children born in the '70s and apparently, people would give my mother, you know ... Well, she'd be at the store and they'd give her nasty looks, right, like, "You're killing the planet."

    6. JF

      (laughs)

    7. PB

      You know, that kind of thing because people genuinely believed that w- w- you know, we were all gonna have famines and everything by now. A- and there's been a continual string of doom, um, and, and always the doomers, the doomers always are pushing for central control. They're always on the side of control and lockdown. And so, you know, if you look at what did The Population Bomb advocate for, you know, mandatory sterilization. They, they, they want to lock people down and we still have that today where they're trying to lock down the food supply, they're trying to lock down the flow of information. You know, anything where they talk about combating misinformation, the misinformation is, is, is anything that threatens the power of control, right? Because it, it always comes down to control versus, versus freedom ultimately, and growth. And so the doomers are, are, are de-growth, they're lockdown, they're control versus, you know, freedom, growth and, and, um, open source. (laughs)

    8. JF

      We were, uh, talking a bit earlier about this. I, I had just watched this, uh, lecture from Richard Hamming, who's a legendary scienti- mathematician who created lots of interesting things like the Hamming distance, Hamming codes, all these things. He was, uh ... Earned the Turing Award as well. And he has this really cool lecture from like the early '90s or '80s. He has been writ- writing about AI actually since way, way back. And he starts the lecture by saying that what's gonna get in the way of AI progress is going to be human ego, which like reminds me a lot of this thing of wanting to control it and that what's gonna get in the way is really that, which still like applies now.

    9. PB

      Yeah, I mean, there's definitely a lot of ego always in the way. (laughs)

    10. GT

      (laughs)

    11. JF

      I think YC has a huge role to play. Well, just a- like the startup community broadly, 'cause I just feel like the more cool tools there are that show everyone how awesome AI can be, like makes us all better, just the more inspiring that vision is.

    12. PB

      Yeah, absolutely. And e- and again, I think that was part of what's so important about, um, like the launch of ChatGPT. Like even if ... I would say even if OpenAI just vanishes tomorrow, I, I think they've achieved the most important part of their mission, which was just really bringing this out to public awareness.

    13. JF

      Yeah.

    14. PB

      And that now we have, you know, all of these people working on it, all these people thinking about it. It isn't something that's like locked away, you know, inside of Google or inside of ... You know, again, the doomers are like, "This needs to be done in a secret government laboratory." That's how you get Skynet.

    15. JF

      (laughs)

    16. PB

      Skynet is when you build it in a secret government laboratory. Um, y- y- you know, I think developing in the open and, and across, uh, you know, a wide variety of perspectives and everyone working on it is, is our best shot at, at the optimistic outcome.

    17. GT

      Yeah, these are not theoretical things, by the way. I mean, there is some evidence already that, um, giant corporations like UnitedHealthcare Group are already blocking, uh, you know, the use of AI calls just to get claims, um, cleared, for instance. And that's very much in their interest.

    18. JF

      Yeah, 100%.

    19. GT

      You know, they d- they detect AI, they decide they're not going to talk to that thing, and then on the flip side you could also ... It's purely adversarial. Like, on the flip side, you can imagine, uh, drowning human beings in like infinite phone trees that legally speaking are, you know, completely rock solid, but you will never get your claim, you know, reimbursed.

    20. PB

      Yeah.

    21. GT

      And, um, that's really sort of the most extreme, um, Kafka-esque sort of situation (laughs) that I have in my head. Like, we don't want the best frontier models in one or two giant corporations locked away behind, you know, sort of this corporate morass that is, you know, basically paperclip maximizing of its own, right? (laughs)

    22. JF

      (laughs) That's a really g- I hadn't thought of that example. It's funny, 'cause it's totally the wrong thing for UnitedHealth. Like, what they should be doing is like developing their own like AI voice thing that's better at convincing the other one (laughs) that like the claims like shouldn't be processed or something, right? L- yeah.

    23. GT

      Yeah, and by default if we have this sort of statist view that locks everything down that's safetyist then, you know, guess what's gonna happen? UnitedHealthcare Group is the only one that should be entrusted with the Frontier 200 IQ model because it is, you know, right there alongside the state.

    24. PB

      Right. Right. Inevitably, you know, power concentrates. And part of, uh, yeah, I think what's great about Y Combinator as an organization is that we're about empowering all these individuals, you know, where we find some 19-year-old kid and then like help them build something enormous, you know? I mean, like Sam himself was like one of the original 19-year-olds, right?

    25. JF

      (laughs)

    26. PB

      So, we ... He's, he's this random 19-year-old that, that PG picks out-

    27. GT

      Yeah.

    28. PB

      ... from the crowd, right?

    29. GT

      Sort of definitionally, like if you're, you know, 20-something and you know how to code and you want to build things for people, like there's just another option. Like, you don't have to go and work for Moloch. (laughs)

    30. PB

      Yeah, absolutely. And, and again, this is one of the great things about AI, is that your ability to do those things is increasing. I think we're gonna see, you know, very successful startups that actually don't even require a massive team anymore. And that was part of, you know, what really has enabled ... And again, the original concept behind the founding of YC was because of technology, it is now possible for like a couple of kids to start a real company. Um, a- a- and that trend has only accelerated.

  12. 48:18–48:43

    Outro

    1. GT

      was one of the best episodes we've done so far. And, uh, PB, thank you so much for joining us. Uh, we hope to have you back many, many more times.

    2. PB

      Thanks, Garry.

    3. GT

      That's it for this time. Catch you next time. (instrumental music)

Episode duration: 48:43

Transcript of episode LSUviaN1eso
