Y Combinator: Gmail Creator Paul Buchheit On AGI, Open Source Models, Freedom
EVERY SPOKEN WORD
50 min read · 9,918 words
- 0:00 – 1:11
Coming Up
- JFJared Friedman
It seems like Google has all the ingredients to just be the dominant AI company in the world. So, why isn't it?
- DHDiana Hu
Do you think OpenAI in 2016 was comparable to Google in 1999 when you joined it?
- JFJared Friedman
Are you a believer that we are definitely going to get to AGI?
- PBPaul Buchheit
What is the long-term trajectory of AI? It's the most powerful technology we've ever invented, and so the question is, like, where does that power go? I think we ha- have to build a whole coalition of people who are in favor of freedom and open source and not just sort of bet everything on Facebook saving us.
- DHDiana Hu
(laughs)
- GTGarry Tan
Welcome to another episode of The Light Cone. I'm Garry. This is Jared, Harj, and Diana, and we're the partners at Y Combinator, where we've funded hundreds of billions of dollars' worth of companies. And we have a special guest who is also one of the original outside partners, the non-founding partners at YC, Paul Buchheit. He created Gmail, he coined the term "Don't be evil." PB, thanks for joining us today.
- PBPaul Buchheit
Thanks, Gary.
- GTGarry Tan
So what should we start off with?
- 1:11 – 2:29
Google's early views on AI
- JFJared Friedman
Well, I think one thing people don't often realize is that you've been thinking about AI for a long time and that Google itself was kind of an AI company. Can you tell us more about that? What was the internal view of AI at Google?
- PBPaul Buchheit
Yeah, I mean, I think really Google has always... was always supposed to be an AI company from the beginning. Um, you know, Larry and Sergey set out to build, um, you know, these very large compute clusters and do a lot of machine learning on all of the data that they gather, a- and actually, arguably, you know, the mission statement is pretty straightforward. The Google mission is to gather all the world's training data and feed it into a giant AI supercomputer, and they put it slightly less directly. They said, "Gather all the world's information and make it universally useful and accessible," or something like that. But essentially, y- you know, what that really meant in practice is feeding it into a giant AI supercomputer.
- JFJared Friedman
And even the origin story of Google was all based on their PhD with PageRank-
- PBPaul Buchheit
Mm-hmm.
- JFJared Friedman
... which very much gets taught today in a lot of machine learning classes. It is one of the foundational, kind of historical AI algorithms.
- PBPaul Buchheit
Yeah, I mean, there was a, there was an understanding very early on that if you have enough data, that's actually the path to, to making things intelligent instead of just trying to iterate forever on little
- 2:29 – 8:34
Paul's time at Google
- PBPaul Buchheit
algorithms.
- DHDiana Hu
How early did you join Google, Paul Buchheit? Can you talk a little bit about what Google was like when you joined?
- PBPaul Buchheit
Uh, yeah, so it was June 1999, so that was, uh, let me see... (laughs)
- DHDiana Hu
(laughs)
- PBPaul Buchheit
... 25 years ago, a little more, um, and so yeah, it was a very small startup. We were, we were in Palo Alto on University Ave, just, uh, up above, like, a tea shop at the time, and it was, it was electric. It was really cool. Um, I, I actually... After I was there for about a week, I, I tried to get more equity. (laughs)
- JFJared Friedman
(laughs)
- PBPaul Buchheit
But it turns out you have to negotiate before accepting.
- DHDiana Hu
(laughs)
- PBPaul Buchheit
Um, uh, so... But yeah, it was... It, it, it had a very kind of unreal sense of, like, just an excitement, you know? I was excited to go into work because we were, we were just doing big things.
- DHDiana Hu
And when you were there, like, in that early set of Google people, how did you all envision that this AI thing would play out and what Google's, like, AI future would look like?
- PBPaul Buchheit
You know, we didn't know.
- DHDiana Hu
Was it something that ever came up?
- PBPaul Buchheit
Right. No, I mean, AI has obviously been a thing that people have been thinking about for a long time. Um, I, I made my first neural net back... I, I dug up the code a while back. I think it was, like, 1995, and I had... It was like one of those three-layer neural nets with-
- JFJared Friedman
You did a classic, uh, MNIST digit classification thing?
- PBPaul Buchheit
Yeah, I was doing... I, I, I did a, uh, not exactly digit classification, but there, there were these things called FIGlets that are like ASCII letters, and so I made it do essentially like an OCR on, on those. Um, but you know, it'd be like 100 weights (laughs), so that's very much smaller than today's models.
- JFJared Friedman
Now it's like trillions of weights.
- PBPaul Buchheit
Yeah, and the history of, like, neural nets is kind of weird. Um, the first thing was when they invented the perceptron, which was like a single neuron, and it was very hot for a short time until some researcher showed that a perceptron can't compute XOR, and then they were like, "Well..." Like, it's just dead for a while, until someone had this idea to use multiple neurons, and so it was like very slow going. And then it was kind of like dead again for a while, and then, to my perception, it kind of really picked up in the early teens, you know, when deep learning became popular, and that was when we first started seeing, like, I think impressive results, where that was when we started feeling like internally, you know, in the discussions at YC, that AI had switched from being something in the indefinite future to being in the more definite future. Um, and that is, you know, kind of what led to the creation of open eye- OpenAI as well.
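[Editor's aside: the XOR limitation Paul mentions is easy to see in code. A single threshold neuron draws one line through the input plane, and no single line separates XOR's outputs, while wiring two hidden neurons (an OR and a NAND) into an AND does solve it. A minimal Python sketch, purely illustrative and not from the episode:]

```python
# XOR truth table: inputs and targets
points = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def perceptron(x, w1, w2, b):
    """Single neuron: fires iff w1*x1 + w2*x2 + b > 0."""
    return 1 if w1 * x[0] + w2 * x[1] + b > 0 else 0

# Exhaustively search a coarse grid of weights: no single neuron gets XOR right,
# because XOR is not linearly separable.
grid = [i / 2 for i in range(-8, 9)]
solvable = any(
    all(perceptron(x, w1, w2, b) == y for x, y in points)
    for w1 in grid for w2 in grid for b in grid
)
print("single perceptron solves XOR:", solvable)  # False

# Two hidden neurons (OR and NAND) feeding an AND neuron do solve it.
def two_layer(x):
    h1 = perceptron(x, 1, 1, -0.5)    # OR
    h2 = perceptron(x, -1, -1, 1.5)   # NAND
    return perceptron((h1, h2), 1, 1, -1.5)  # AND

print([two_layer(x) for x, _ in points])  # [0, 1, 1, 0]
```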
- JFJared Friedman
Were there any conversations around, like, the power of AI and the implications of AI, specifically AGI and just like the impact on society, or did it feel too far removed?
- PBPaul Buchheit
Yeah, I think it was still too far off in the future. I mean, it was very much sci-fi at that point. Um, we were dealing with more, you know, near-term, how do we make search better? But search is, you know, kind of a... to some extent, uh, an AI problem. You have to figure out what it is the, the user is looking for. It's remarkably good. If you actually look at Google Search, like, there's a lot of stuff going on behind the scenes. Um, and actually, one of the earliest kind of magical features that we added was the "Did you mean... ?" Uh, you know, the spell correction, and so that actually comes from originally just my inability to spell. I've never been very good at spelling. My, my brain doesn't like arbitrary patterns. (laughs)
- JFJared Friedman
(laughs)
- PBPaul Buchheit
So like when I was in school, math was easy because it's predictable, but spelling always made me struggle. Um, and so when I started at Google, one of the first features I added was the spell corrector because I was looking at the query logs, and I would see that I'm not the only person with this problem. Like a third of the queries were misspelled or something like that. So it was like the easiest quality win ever was just to fix the spelling.
- DHDiana Hu
Wait, wait, so you built the original spelling corrector at Google? How did that... I didn't know that.
- PBPaul Buchheit
Um, I did the first "Did you mean..." feature, um, and so it... but I built it just based off of kind of an existing spell corrector library and then... But it would give really dumb corrections, like if you typed in TurboTax, it would try to correct it to turbot ax.
- DHDiana Hu
(laughs)
- PBPaul Buchheit
Turbot being a type of fish.
- DHDiana Hu
(laughs)
- PBPaul Buchheit
Um, and so, I-I did some basic, like, statistical filtering that would say like, "That's an idiotic correction, don't show it." And so I would just, like, filter the results and then I was working on building a better spell corrector, 'cause I knew, you know, we could just use all of the data. We had a copy of the web and we had billions of search queries. There's like, a lot of information there. So I was working on making something better and then I was just using it as an interview question, so when I would interview engineers, I'd be like, "How would you build a spell corrector?" And I would say like, 80% of engineers just had no idea and the other 20% gave sort of mediocre answers. But then there was this like, one guy who gave a really, really good answer. Um, it's just like, he was ahead of where I was already, so I was like, "We have to hire him." Um, and so his first project, he started, I think it was end of 2000, kind of like late December. His... I gave him as his like intro project, I just gave him all of my code and showed him how to, how to run, you know, projects on the cluster. Um, and then I went away for a couple of weeks for Christmas and when I came back, he had invented w- what we now know as like the "Did you mean?" feature. And so he did that, all of that, in like his first two weeks at Google and it was like, this incredible thing that could spell correct my last name. You know, no one had ever done a spell corrector that would correct proper, proper nouns and things like that. Um, and so that person was Noam Shazeer who then, is also the person who later on invented AI, so he's, he's one of the key people on the "Attention Is All You Need" paper.
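[Editor's aside: the approach Paul describes, ranking candidate corrections by how often each word appears in a corpus, can be sketched in a few lines of Python. This is a toy in the spirit of Peter Norvig's well-known spell corrector essay, not Google's actual code; the corpus here is made up for illustration:]

```python
from collections import Counter

# A stand-in corpus; at Google the statistics came from the web and query logs.
corpus = "the quick brown fox jumps over the lazy dog the dog barks spelling spelling".split()
freq = Counter(corpus)

def edits1(word):
    """All strings one edit away: deletes, swaps, replaces, inserts."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def did_you_mean(word):
    """Suggest the most frequent known word within one edit, if any."""
    if word in freq:
        return None  # already a known word, no suggestion
    candidates = [w for w in edits1(word) if w in freq]
    return max(candidates, key=freq.get) if candidates else None

print(did_you_mean("speling"))  # spelling
print(did_you_mean("dog"))      # None
```

Filtering out "idiotic corrections" like turbot ax falls out of the same statistics: a candidate is only suggested if it is actually frequent in the corpus.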
- DHDiana Hu
Wow.
- PBPaul Buchheit
And then he's, he's now since started Character.AI.
- DHDiana Hu
I never connected those dots, but I remember-
- 8:34 – 12:01
Why isn't Google the AI leader?
- JFJared Friedman
seems like Google has been working on AI for a long time. It has the data, the compute, the people. It has all the ingredients to just be the dominant AI company in the world, so like, why isn't it? What do you think happened?
- DHDiana Hu
It seems like it got stuck someplace.
- PBPaul Buchheit
Yeah, I mean, I don't know exactly. So I... And just to clarify for everyone-
- DHDiana Hu
(laughs)
- PBPaul Buchheit
... I don't work at Google. Um, I left in, uh, 2006. Um, but m- my perception, you know, as an outsider, I think a lot of it kind of happened especially around the time of the transition to Alphabet when, you know, the company was no longer really being run by the founders so much, and especially, you know, after they, they, they left. Um, and I think it became more about protecting and preserving the search monopoly. And so if you think about it from that perspective, they have, you know, this gold mine, like, like search is just so valuable. Um, and AI is an inherently disruptive technology both in terms of l- maybe breaking the search, you know, business model where if you actually give people the right answer, they won't need to click on an entire page full of ads. There is... And this was noted, of course, in the er- very original Google paper back in, uh, 1998 that their search... A search company has an inherent tension between, um, profitability and giving the right answer 'cause there's always a temptation that if you make your results worse, people will actually click on more ads. Um, and so s-... AI has the potential to disrupt that, but I think even more than that, it has the potential to completely, um, anger regulators. Um, and so a lot of Google's business is just dealing with regulators and so, you know, we know if you put out an AI, it's definitely gonna say offensive things. And so I-I think they were kind of terrified of that and so even internally, uh, when they were developing, um, you know, there was a, there was a version of a chatbot that Noam had built, um, and this is the one that, that, that sort of whistleblower-
- DHDiana Hu
Oh, yeah.
- PBPaul Buchheit
... claimed was conscious. I think they called it LaMDA. Um, it actually originally had a different name, but they were forced to change the name 'cause the original name was Human.
- DHDiana Hu
(laughs)
- PBPaul Buchheit
So they weren't even allowed to give it a human name, so the original name was Something Human and it had to be changed to LaMDA. Um, but even inside of the company, you know, th- there were restrictions on what you could put out. They had a version of, um, DALL-E called Imagen and it was prohibited from making human form. So like, even internally, the researchers weren't allowed to, to generate images of humans. So they were just extremely risk-averse, I think is the answer.
- JFJared Friedman
And how do you think it would've been different if Sergey and Larry were still in charge and pushing forward?
- PBPaul Buchheit
I mean, I think they can override, you know, risk, risk aversion, right? Uh, but, but it takes someone with that level of credibility to, to, to really bet the company or, or to, to, to say, "Yeah, we're gonna do this thing and it's gonna cause a lot of problems." Um, but I think that if given the chance, Google never would've launched AI. The only reason they launched it is 'cause OpenAI w- you know, put out ChatGPT and suddenly it became a thing that they were forced to do. And that also helped them too because, you know, OpenAI took a lot of those bullets in terms of like-
- JFJared Friedman
Yeah.
- PBPaul Buchheit
... saying crazy and offensive things.
- DHDiana Hu
(laughs)
- PBPaul Buchheit
Um, and so at that point then, uh, you know, Google could put out something that was a more sanitized version that, you know, prohibits the existence of white people or whatever.
- JFJared Friedman
(laughs)
- PBPaul Buchheit
But, um, you know.
- JFJared Friedman
And OpenAI kind of spun out of YC and you were
- 12:01 – 14:34
Paul's connection to OpenAI
- JFJared Friedman
around at that time.
- DHDiana Hu
Yeah, originally it was YC Research.
- PBPaul Buchheit
Right, so, you know, again, kind of going back to the early teens, we were s- just tracking the progress of this technology and that was where we started to see deep learning doing really-... kind of impressive things where there was like playing video games and, like, winning and getting good at things where you could say... Where you could finally see that AI was real, right? So, so for decades, AI was kind of this sci-fi thing, and you had all this symbolic AI, which I would say is kind of like garbage. And so finally, AI was doing something that was like truly impressive. And, um, so, you know, it was kind of on our radar. And then, you know, Sam, I think, talks to just a lot of people. And so he had, uh, I think, been at one of these things where Elon was, was very, you know, essentially ringing the alarm bells that AI was going to kill us all and, and proposing that, um, you know, maybe there should be regulation. And so we were having these discussions. You know, Sam's asking like, "Do you think we should push for AI regulation?" And, um, you know, I'm of the opinion that that only makes things worse because I don't have great confidence in our, um, elected representatives to be, you know, super wise, uh, and forward-thinking. And so my argument was that the better thing to do would be that we actually build the AI, and, um, you know, that way we're able to influence the direction that it goes. Um, but AI was still, at that time, something that we didn't really know what the timeframe would be to be able to actually have revenue, because it was still basically a research project, and it requires just massive amounts of capital because the, the researchers are pretty highly paid, and then-
- DHDiana Hu
Roughly what year was this?
- PBPaul Buchheit
2015, I think.
- JFJared Friedman
This is about the time after Google did the DeepMind acquisition as well, right?
- PBPaul Buchheit
Yes, this was after DeepMind, so...
- JFJared Friedman
Which made this issue more complicated because we didn't... Perhaps in those conversations, there was a desire that we don't want this AI to be stuck at Google.
- PBPaul Buchheit
Right, exactly. So, so the, the fear is that basically this gets developed all locked up inside of Google. Um, and, and so the idea was that we wanted this to be something, you know, more open to the world, open to our startup ecosystem. Um, and so the idea was that, you know, we had this, this concept of YC Research that we would, um, find some way to fund this, and then hopefully, you know, our startups would be able to benefit from and, and, and build on top of that, which, you know, has in fact happened of course. Like half our startups now are, are, are building on top of it.
- JFJared Friedman
What are your thoughts, uh, on now, uh, open-source
- 14:34 – 16:09
Open source models
- JFJared Friedman
models?
- PBPaul Buchheit
So I'm totally in favor of them. So I, I, I think like when we think about what is the long-term trajectory of AI, it's the most powerful technology we've ever invented. Um, and so the question is like, where does that power go? And I think there's essentially two directions. Y- you either go towards centralization where all the power gets, you know, centralized in, in the government or in a small number of like big tech companies or something like that. And my feeling is that that's catastrophic for the human species, um, because you essentially minimize the agency and power of the individual. Um, and I think the opposite direction is towards freedom, and, and, and as much as possible, we should give this power and these capabilities to every individual to, to be kind of the best version of themselves. And so you can think about that in terms of, you know, how much... What would it look like if everyone had a 200 IQ or whatever, right? Like, instead of just having all of that power concentrated in one place. And open source is very important because it's kind of a litmus test for that, right? Because it's, it's true freedom. It's freedom of speech. It's First Amendment, right? Um, and, and if you don't have that, if your models are all locked away under some sort of lockdown system where there's a lot of rules about what can be said, what kinds of thoughts are acceptable, then we essentially lose all freedom, right? The freedom of speech is meaningless if I don't have the freedom of thought to even compose the ideas that I'm going to communicate.
- 16:09 – 20:56
YC involved in OpenAI's origin story
- DHDiana Hu
Going back to the, the history of OpenAI, like the, the, the real story of how OpenAI got started is, is actually not well-known. Um, you know, like, like many companies, the, the founding story as it gets retold and retold becomes sort of like sanitized for public consumption. But you, you had a front-row seat. In fact, you interviewed many of the early researchers that became essentially the people who built OpenAI. Like, what is the... Like, can you tell us the real founding story?
- PBPaul Buchheit
Sure. I, I wouldn't say many. One. (laughs) I interviewed Ilya. Um, so yeah, I mean, it, it goes back to, again, these discussions of like, okay, maybe the way forward instead of trying to outlaw AI is actually that we should build it and as much as possible, you know, in, in the public interest. Um, and so Sam, you know, is just an incredible, uh, organizer. I've never met someone who's able to bring together so many different interests, um, and so many different people. And so he was able to round up, uh, you know, essentially donations from, uh, Elon and a number of other people. I know PG and Jessica also contributed to the, to the original, um, OpenAI nonprofit. Um, I think we even kicked in some, some YC, uh, value.
- DHDiana Hu
We did.
- PBPaul Buchheit
Um, and, and so that was kind of the root of it, and then he recruited the original team, um, you know, Greg and Ilya, and, and basically got the whole thing, whole thing started.
- DHDiana Hu
And he was still running YC at the time.
- PBPaul Buchheit
Right.
- DHDiana Hu
And originally, this was like a subsidiary of YC called YC Research.
- PBPaul Buchheit
Right. So the original-
- DHDiana Hu
How did that work?
- PBPaul Buchheit
The original concept, I think, was that it was actually part of this thing that we were calling YC Research, and then I think kind of like as Elon got more involved, it became its own, you know, OpenAI with kind of Elon more, more of the, the face of it, and no one really even knew about the, the YC, uh, roots. Actually, if you go back and look as part of their, their most recent lawsuit, they published some of the emails, and there's the one where Elon is like, "Get rid of the YC stuff." (laughs)
- DHDiana Hu
(laughs) Why do you think OpenAI worked? Like, m- w- I remember in the early 2000s looking at Google and being like, "That's the company that's going to invent AGI some day."
- PBPaul Buchheit
Yeah.
- DHDiana Hu
And then the way it played out is not the way I would have predicted.
- PBPaul Buchheit
Again, the idea with OpenAI and part of the lure, like, the pitch to researchers was that when you come here, your stuff's not gonna be locked away. We're gonna put it out in the world, right? And so researchers, you know, are motivated by that and, and motivated by the mission of, of, you know, making this something that isn't just locked up inside of Google, um, and so I, I think that attracted a lot of talent. And it's the same thing, you know, as with a startup. Do you want to be inside of, like, a large corporation where... Again, Google, the people working at, the researchers working at Google couldn't even make a version of Imagen that would generate human form, right? (laughs)
- DHDiana Hu
(laughs)
- PBPaul Buchheit
So they're just, like, so locked down, um, internally that if, if you're a person who I think likes to ship and likes to move fast, you know, OpenAI was the startup version of, of AI, and... But yeah, I, I think if Google were in top form, there, there, there is no way that it would have worked. Um, and that's often the way it is with startups, right? Like, if you were, if you were facing an actual, like, formidable competitor, you don't have a chance. The, the reason startups work a lot of times is because you're competing with a slow company- you know, big companies that, that, um, have the wrong incentives internally.
- DHDiana Hu
Do you think OpenAI in 2016 was comparable to Google in 1999 when you joined it?
- PBPaul Buchheit
I would say it's actually more of a crazy long shot. Like, it really seemed... A- a- and again, if you look at these emails, you know, th- that got released as part of the, the lawsuit, there's, like, one from Elon where he's like, "You guys have a 0% chance of success," right? Like...
- DHDiana Hu
(laughs)
- PBPaul Buchheit
And it really looked like that. Um, and so it, it was far from obvious that it was gonna be successful. Um, I, I think the, the place... And for a long time, it really wasn't. You know, they, they were still doing the, like, the video games and everything, um, and it was really actually, like, the LLMs that, that made the big difference, right? And so, like, GPT-2 was kind of like... I remember Sam just being really excited and wanting to show me this thing, you know, where, where it, like, predicts the next word. (laughs) Um, and, and the next word prediction is such a, like, deceptively simple thing that you still hear people, you know, dismissing it, like, "Oh, it's not really intelligent. It's just predicting the next word," but it's like, you try predicting the next word.
- DHDiana Hu
(laughs)
- PBPaul Buchheit
It's not that easy. Um, and in fact, if you think about it, if you can predict the next word, you can predict anything, right? That's what a prompt is, right? You say, like, whatever the thing is you want predicted-
- DHDiana Hu
(laughs)
- PBPaul Buchheit
... that's your prompt, and then the next word is the prediction, right? And so in order to do, um, next word prediction and be able to, to, to do what it does, it necessarily has to build in some sort of model of, of reality or of, you know, its, its perception of reality, which in this case is limited by the fact that it's just being fed text, which is a sort of strange thing to, to grow up
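[Editor's aside: Paul's point that "the prompt is just context and the next word is the prediction" holds even for the crudest possible predictor, a bigram counter. A toy Python sketch, nothing like a real transformer, purely illustrative:]

```python
from collections import Counter, defaultdict

# Toy training text; real models learn from vastly more data.
text = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which word follows which: the simplest next-word model there is.
following = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`."""
    return following[word].most_common(1)[0][0]

# The "prompt" is just context; the prediction completes it.
prompt = "the"
print(predict_next(prompt))  # cat
```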
- 20:56 – 29:31
Zuck/Meta: Champions for open source?
- PBPaul Buchheit
on. On the, like, control versus freedom thing, we're sort of betting on open source to give us freedom. Zuck has sort of interestingly become, like, the hero of open source. And like, on the one hand, I feel like you could argue it's accidental, like, the weights were released, like, you know, unofficially and he only had the GPUs because they were trying to compete with the TikTok algorithm. (laughs)
- DHDiana Hu
You worked with him. Like, is it sort of accidental or is he, like, just the kind of guy that's always gonna be at the center of everything big that happens in the world?
- PBPaul Buchheit
It's a good question. I, I mean, I don't know the backstory on it. He's definitely, like, a smart guy. Like, I wouldn't underestimate him. Um, but, and obviously there's, like, an opportunistic element, right? Because they're kind of behind in many ways, right? And so it's a way for them to differentiate and a way for them to, to sort of weaken their competitors. So there is... But there's nothing wrong with that. I mean, the fact that it's good for them is, is a great thing.
- DHDiana Hu
But should we be worried that we're relying on Meta to keep pushing open source forward when he's a fairly strategic guy?
- PBPaul Buchheit
Oh, I, I... Yeah, we shouldn't exclusively rely on them. I think we should be grateful that they're on the right side-
- DHDiana Hu
(laughs)
- PBPaul Buchheit
... but we can't count on them being the only ones. Like, I think we have to build a whole coalition of people who are in favor of freedom and open source and not just sort of bet everything on Facebook saving us.
- DHDiana Hu
(laughs) Well, I guess to build on Harj's question, Meta is not making money on this. They're funneling profits from their gigantic advertising monopoly and just using that to build open source AI models for reasons, but not to, like, make money.
- PBPaul Buchheit
They'll make money. Right. So-
- DHDiana Hu
Well-
- PBPaul Buchheit
I mean, they're using the models internally as well, right?
- DHDiana Hu
Yeah.
- PBPaul Buchheit
So, so the... And there's a lot of interesting stuff you can do with these models in terms of improving ad targeting, recommendations. Like, all the things that are driving their business are going to be improved by, um, those algorithms. And then of course, it's also an opportunity, you know, they, they exist in this competitive ecosystem versus Facebook- I mean versus, um, Google and Apple who are, you know, are both rivals in various ways, and so they're all kind of competing with each other, so their ability to kind of undercut competitors is also an important thing.
- DHDiana Hu
But Jared, you, you were saying, like, specifically Facebook's not making money off open source-
- PBPaul Buchheit
Right. Yeah, well, well-
- DHDiana Hu
... as a strategy.
- JFJared Friedman
Well, I guess it's just like they seem to be in a fairly unique position to do this. If Zuck changes his mind and decides to stop open sourcing it, how else will we get large open source models if they cost like a billion dollars to train-
- DHDiana Hu
Right.
- JFJared Friedman
... and it's not clear how you make a billion dollars off them?
- PBPaul Buchheit
Yeah, I think that's, that's an unanswered question. I mean, that, that is, like, the, one of the fundamental concerns I have, which is that I think because it's so expensive to build these models-
- DHDiana Hu
Yep.
- PBPaul Buchheit
... it is, that is like an inherently centralizing thing, where if, if you need a trillion dollar d-... cluster (laughs) -
- JFJared Friedman
(laughs)
- PBPaul Buchheit
... to build your, your AGI. It's, it's hard to do that. Um, but at the very least, uh, to the extent that we can have like the legislative groundwork that says we have the right to do that, um, and then, you know, we also have a lot of startups that are working on ways to make all this more efficient. So, you know, right now it costs that much, but we're also developing new hardware that's going to be able to do these things perhaps orders of magnitude more efficiently. Like, right now, I, I would say our algorithms are probably not that great. I, I would, I would be willing to bet that in 10 years the actual fundamental learning algorithms are gonna be way better and, and hopefully more efficient. So we'll have both better hardware and better algorithms.
- JFJared Friedman
It seems like that if you just think about the amount of computational power to train a human versus the computational power to train like GPT-4, like-
- PBPaul Buchheit
Exactly.
- JFJared Friedman
... we're evidently much more efficient.
- PBPaul Buchheit
Yeah, I think, I think, I think there's still a lot-
- JFJared Friedman
(laughs)
- PBPaul Buchheit
I think there's still just a lot of inefficiency.
- JFJared Friedman
What was that, uh, the human brain runs on like 15 or some-
- 29:31 – 37:53
How do we get to AGI?
- JFJared Friedman
looking forwards, what do you think are some of the, the ways this is gonna break over the next few years?
- PBPaul Buchheit
Which is gonna break?
- JFJared Friedman
A- AI.
- PBPaul Buchheit
Oh.
- JFJared Friedman
Like, a- and one thing we haven't talked about here, 'cause we're kind of in the trenches of just helping the startups in the batch, is like, are we trending towards AGI and... And just like all the laws of everything we know are going to change. Is the world over ...
- JFJared Friedman
Yeah.
- GTGarry Tan
We'll live by the 2000.
- JFJared Friedman
We have to talk about-
- JFJared Friedman
Will there be startups? Will there be money? I, I don't know.
- GTGarry Tan
Will there be humans?
- JFJared Friedman
(laughs) Yeah.
- JFJared Friedman
Will money still exist?
- PBPaul Buchheit
Yeah, I mean, we don't know. That's, that's again one of the-... you know, funny questions of OpenAI since it's all funded with these sort of post-AGI IOUs. (laughs)
- HTHarj Taggar
(laughs)
- DHDiana Hu
(laughs) Yeah.
- PBPaul Buchheit
It's like, "We'll pay you back once AGI happens."
- HTHarj Taggar
(laughs)
- PBPaul Buchheit
You're like, "Will we still have money?" Maybe.
- HTHarj Taggar
(laughs)
- DHDiana Hu
(laughs)
- PBPaul Buchheit
It could happen. Um, yeah, I mean, I, I, I think just honestly, we don't, we don't really know. Um, I-
- HTHarj Taggar
Are you a believer that we are definitely going to get to AGI?
- PBPaul Buchheit
Yeah, I, I, I think we're, we're on the path. I, I think the, the key point that happened is we crossed a line where AI went from a research project, where you kind of put in a lot of money and don't really get much out, to a, a thing where you, you put in money and then you get out more. Um, and so it, it's like when a, when a reaction, you know, like a, like a-
- DHDiana Hu
Goes critical.
- PBPaul Buchheit
... right, goes critical.
- DHDiana Hu
It's sort of like, uh-
- PBPaul Buchheit
Like if you have plutonium spheres and they're kind of warm, and then you put them together and then it explodes. Um-
- DHDiana Hu
Or when ARPANET became the internet moment.
- PBPaul Buchheit
Right. Um, and so, right, and, and so, right, the internet crossed that point, you know, in the, in the '90s, in the mid '90s, where all of a sudden more investment produces more impressive outcomes, which leads to more investment. And that's where we are right now, where people can't seem to throw money at it fast enough, right? And we're, we're actually talking about it's actually like a, a national issue, is that we need to build, uh, increase our electric supply to, like, train the AI, right? It's become, like, a national security thing. Um, and so I think once that happens, you get that, that cycle and it just keeps growing, right? We just keep investing more and that just keeps making the AI better. And it's clearly, you know, solving a lot of problems, and we know this because we have all the companies that are out there building it. Um, and so I, I think it just keeps improving.
- HTHarj Taggar
But why is that not unanimously the view amongst smart people? Like, why... Like, there's Yann LeCun from Meta, who's constantly arguing that this is not the path to AGI, and he's a pretty smart domain expert. Like-
- 37:53 – 42:10
Dangers of centralized AI planning & control
- PBPaul Buchheit
we can plan this all out, and- and- and we can't. All we try to do is move in the right direction and give people the right tools, and I think that as we enable everyone to be smarter and everyone to make better decisions, then collectively, we can move the whole world in a better direction. But w- we're not smart enough, and I think it's a mistake to think that we are to- to- to actually be able to say, "Here's what the world's gonna look like, and you know, this is exactly how it's all gonna work." And- a- and that's how you end up with people, you know, locked up in their pods or whatever.
- DHDiana Hu
Paul, another thing you've been thinking about a lot is geopolitics. As this AI stuff starts to become real, how is that going to relate to geopolitics and the great power competition that we're seeing now?
- PBPaul Buchheit
This is part of the reason why we wanted to build it here, right? Is 'cause if- if, you know, China has the super AI, uh, that's not gonna be good for us, um, and in particular, you know, wanting to keep it away from these kind of authoritarian systems of control, because the worst-case scenario is that we basically end up in permanent lockdown, right? 'Cause AI can create a totalitarian system from which escape is impossible because, you know, even our thoughts are essentially being censored, um, and, you know, I think that's kind of, like, the disaster scenario for- for our species. And I think that if we go down the path of control, humans basically end up zoo animals, um, and I- I don't really want that.
- GTGarry Tan
Yeah. One of the funnier things is, uh, you know, some of the, uh, legislation that's coming along to try to control AI that we've been fighting, like SB-1047, they actually have certain statutes in there. They've watered it down a little bit, but ultimately what they want to do is, uh, hold the model builders, you know, in sort of, uh, personal liability-
- PBPaul Buchheit
Mm-hmm.
- GTGarry Tan
... or even criminal liability for the things that their models might have a hand in doing, which is sort of like throwing the car designer, uh, in jail because someone got drunk and, you know-
- PBPaul Buchheit
Yeah.
- GTGarry Tan
... drove the car and hit someone, right?
- PBPaul Buchheit
It's incredibly insidious. I- I- I think if you attach that kind of liability, it becomes toxic, right? I'm not gonna want to touch something that has unlimited liability, and so necessarily that's a way for them to exert essentially total control, right? Is- is if you're- if you impose that kind of liability on things, then no one is gon- going to want to go near it, and they're strongly incentivized to put, like, really draconian guardrails in place, um, that, again, will limit our abilities in ways that, you know, we may not even think about. But we've seen this very recent, in recent history with the lockdown of social media. Um, you know, during COVID, we had a global pandemic that was, you know, ultimately killed tens of millions of people, people were locked up in their homes, schools were closed, and we weren't allowed to talk about where it came from. And I think that was like... That's the thing that we still don't fully appreciate how catastrophically bad that is. You know, if we can't make sense of the most important thing in the world, then we can't make sense of anything.
- GTGarry Tan
I guess the wild thing to spot is that, like, this is basically, uh, statism. (laughs)
- PBPaul Buchheit
Mm-hmm.
- GTGarry Tan
And, uh, the wild thing is I've heard stories of even China sort of, you know, doing that thing that is in SB-1047. I've heard that that has actually happened to, uh, AI founders in China, that they've literally been sort of disappeared and told, like, "You... We will hold you personally accountable for the output of, uh, the LLM and models that your software that you created, uh, spits out."
- PBPaul Buchheit
Yeah. Well, this is one of our great advantages is- is- is freedom.
- GTGarry Tan
Yeah. (laughs)
- PBPaul Buchheit
It's why, it's why we're ahead, right?
- DHDiana Hu
(laughs)
- PBPaul Buchheit
Is because you can't build a model in that environment, you know, because if you ask it about Tiananmen Square or something like that, right, it has to lie to you. Um, and actually, again, I, you know, uh... One of the things I- I like really about, like, xAI, they haven't really released a great product yet, but they have a great mission statement, right? To- to be maximally s- truth-seeking, and I think that's- that's really, um, important. And- and- and- and the authoritarian regime is inherently truth-denying, and so I- they put themselves at a disadvantage, and hopefully they keep themselves there.
- GTGarry Tan
So it's up to us then. We've got to get involved. We've actually got to fight for open-source AI and keep it open.
- PBPaul Buchheit
Yeah. Yeah. And fight, and fight to- to- to make sure that AI is a thing that- that increases the individual agency instead of eroding it.
- DHDiana Hu
For
- 42:10 – 48:18
Doomers vs Optimists
- DHDiana Hu
people who are relatively neutral about...
- JFJared Friedman
... being doomers or optimists? Like do you ... What are the things that tip them in, like, one direction versus the other?
- PBPaul Buchheit
I mean, I do think some people are inherently kind of in one direction or another, right, because the doomer thing has been around for a long time. It isn't just now. Uh, you know, a, a lot of the same doomer thing goes back, um, to the, you know, '50s, '60s, or even much earlier than that, right? Like-
- JFJared Friedman
Industrial Revolution, typewriters.
- PBPaul Buchheit
Right. But e- e- in particular, you think about like there was a very influential book, The Limits to Growth, from the Club of Rome. There was a book published, The Population Bomb, that had everyone convinced that there were going to be mass famines in the '70s and '80s. Um, and this is something that I grew up very aware of a- actually 'cause it was, um ... I was like the fourth of five children born in the '70s and apparently, people would give my mother, you know ... Well, she'd be at the store and they'd give her nasty looks, right, like, "You're killing the planet."
- JFJared Friedman
(laughs)
- PBPaul Buchheit
You know, that kind of thing because people genuinely believed that w- w- you know, we were all gonna have famines and everything by now. A- and there's been a continual string of doom, um, and, and always the doomers, the doomers always are pushing for central control. They're always on the side of control and lockdown. And so, you know, if you look at what did The Population Bomb advocate for, you know, mandatory sterilization. They, they, they want to lock people down and we still have that today where they're trying to lock down the food supply, they're trying to lock down the flow of information. You know, anything where they talk about combating misinformation, the misinformation is, is, is anything that threatens the power of control, right? Because it, it always comes down to control versus, versus freedom ultimately, and growth. And so the doomers are, are, are de-growth, they're lockdown, they're control versus, you know, freedom, growth and, and, um, open source. (laughs)
- JFJared Friedman
We were, uh, talking a bit earlier about this. I, I had just watched this, uh, lecture from Richard Hamming, who's a legendary scienti- mathematician who created lots of interesting things like Hamming codes and the Hamming distance and all these things. He was, uh ... Earned the Turing Award as well. And he has this really cool lecture from like the early '90s or '80s. He had been writ- writing about AI actually since way, way back. And he starts the lecture with saying that what's gonna get in the way of AI progress is going to be human ego, which like reminds me a lot of this thing of wanting to control it and that what's gonna get in the way is really that, which still like applies now.
- PBPaul Buchheit
Yeah, I mean, there's definitely a lot of ego always in the way. (laughs)
- GTGarry Tan
(laughs)
- JFJared Friedman
I think YC has a huge role to play. Well, just a- like the startup community broadly, 'cause I just feel like the more cool tools there are that show everyone how awesome AI can be, like makes us all better, just the more inspiring that vision is.
- PBPaul Buchheit
Yeah, absolutely. And e- and again, I think that was part of what's so important about, um, like the launch of ChatGPT. Like even if ... I would say even if OpenAI just vanishes tomorrow, I, I think they've achieved the most important part of their mission, which was just really bringing this out to public awareness.
- JFJared Friedman
Yeah.
- PBPaul Buchheit
And that now we have, you know, all of these people working on it, all these people thinking about it. It isn't something that's like locked away, you know, inside of Google or inside of ... You know, again, the doomers are like, "This needs to be done in a secret government laboratory." That's how you get Skynet.
- JFJared Friedman
(laughs)
- PBPaul Buchheit
Skynet is when you build it in a secret government laboratory. Um, y- y- you know, I think developing in the open and, and across, uh, you know, a wide variety of perspectives and everyone working on it is, is our best shot at, at the optimistic outcome.
- GTGarry Tan
Yeah, these are not theoretical things, by the way. I mean, there is some evidence already that, um, giant corporations like UnitedHealthcare Group are already blocking, uh, you know, the use of AI calls just to get claims, um, cleared, for instance. And that's very much in their interest.
- JFJared Friedman
Yeah, 100%.
- GTGarry Tan
You know, they d- they detect AI, they decide they're not going to talk to that thing, and then on the flip side you could also ... It's purely adversarial. Like, on the flip side, you can imagine, uh, drowning human beings in like infinite phone trees that legally speaking are, you know, completely rock solid, but you will never get your claim, you know, reimbursed.
- PBPaul Buchheit
Yeah.
- GTGarry Tan
And, um, that's really sort of the most extreme, um, Kafka-esque sort of situation (laughs) that I have in my head. Like, we don't want the best frontier models in one or two giant corporations locked away behind, you know, sort of this corporate morass that is, you know, basically paperclip maximizing of its own, right? (laughs)
- JFJared Friedman
(laughs) That's a really g- I hadn't thought of that example. It's funny, 'cause it's totally the wrong thing for UnitedHealth. Like, what they should be doing is like developing their own like AI voice thing that's better at convincing the other one (laughs) that like the claims like shouldn't be processed or something, right? L- yeah.
- GTGarry Tan
Yeah, and by default if we have this sort of statist view that locks everything down that's safetyist then, you know, guess what's gonna happen? UnitedHealthcare Group is the only one that should be entrusted with the Frontier 200 IQ model because it is, you know, right there alongside the state.
- PBPaul Buchheit
Right. Right. Inevitably, you know, power concentrates. And part of, uh, yeah, I think what's great about Y Combinator as an organization is that we're about empowering all these individuals, you know, where we find some 19-year-old kid and then like help them build something enormous, you know? I mean, like Sam himself was like one of the original 19-year-olds, right?
- JFJared Friedman
(laughs)
- PBPaul Buchheit
So, we ... He's, he's this random 19-year-old that, that PG picks out-
- GTGarry Tan
Yeah.
- PBPaul Buchheit
... from the crowd, right?
- GTGarry Tan
Sort of definitionally, like if you're, you know, 20-something and you know how to code and you want to build things for people, like there's just another option. Like, you don't have to go and work for Moloch. (laughs)
- PBPaul Buchheit
Yeah, absolutely. And, and again, this is one of the great things about AI, is that your ability to do those things is increasing. I think we're gonna see, you know, very successful startups that actually don't even require a massive team anymore. And that was part of, you know, what really has enabled ... And again, the original concept behind the founding of YC was because of technology, it is now possible for like a couple of kids to start a real company. Um, a- a- and that trend has only accelerated.
- 48:18 – 48:43
Outro
- GTGarry Tan
was one of the best episodes we've done so far. And, uh, PB, thank you so much for joining us. Uh, we hope to have you back many, many more times.
- PBPaul Buchheit
Thanks, Garry.
- GTGarry Tan
That's it for this time. Catch you next time. (instrumental music)
Episode duration: 48:43
Transcript of episode LSUviaN1eso