Lex Fridman Podcast

Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI | Lex Fridman Podcast #419

Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and many other state-of-the-art AI technologies. TRANSCRIPT: https://lexfridman.com/sam-altman-2-transcript EPISODE LINKS: Sam's X: https://x.com/sama Sam's Blog: https://blog.samaltman.com/ OpenAI's X: https://x.com/OpenAI OpenAI's Website: https://openai.com ChatGPT Website: https://chat.openai.com/ Sora Website: https://openai.com/sora GPT-4 Website: https://openai.com/research/gpt-4

Sam Altman (guest) · Lex Fridman (host)
Mar 18, 2024 · 1h 55m

EVERY SPOKEN WORD

  1. 0:00–1:05

    Introduction

    1. SA

      I think compute is gonna be the currency of the future. I think it will be maybe the most precious commodity in the world. I expect that by the end of this decade, and possibly somewhat sooner than that, we will have quite capable systems that we look at and say, "Wow, that's really remarkable." The road to AGI should be a giant power struggle. I expect that to be the case.

    2. LF

      Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power? The following is a conversation with Sam Altman, his second time on the podcast. He is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and perhaps one day the very company that will build AGI. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Sam Altman.

  2. 1:05–18:31

    OpenAI board saga

    1. LF

      Take me through the OpenAI board saga that started on Thursday, November 16th, maybe Friday, November 17th for you.

    2. SA

      That was definitely the most painful professional experience of my life, and... chaotic, and shameful, and upsetting, and a bunch of other negative things. Uh, there were great things about it too, and I wish, I wish it had not been in such an adrenaline rush that I wasn't able to stop and appreciate them at the time, but, um... I came across this old tweet of mine, or this tweet of mine from that time period, which was like, it was like, you know, kind of going to your own eulogy, watching people say all these great things about you, and, uh, just, like, unbelievable support from people I- I love and care about. Uh, that was really nice. Um, that whole weekend I- I kind of like felt, with one big exception, I- I felt, like, a great deal of love. And very little hate. Um... Even though it felt like it just, I have no idea what's happening and what's gonna happen here, and this feels really bad, and... There were definitely times I thought it was gonna be like, one of the worst things to ever happen for AI safety. But I also think I'm happy that it happened relatively early. I thought at some point between when OpenAI started and when we created AGI, there was gonna be something crazy and explosive that happened. But there may be more crazy and explosive things still to happen. Um, it still, I think, helped us build up some resilience and be ready for... more challenges in the future.

    3. LF

      But the thing y- you had a sense that you would experience is some kind of power struggle.

    4. SA

      The road to AGI should be a giant power struggle. Like, the world should... I- like, well, not should. I expect that to be the case.

    5. LF

      And so you have to go through that as, like you said, iterate as often as possible, uh, in figuring out how to have a board structure, how to have organization, how to have, um, the kind of people that you're working with, how to communicate, all that, i- in order to, uh, deescalate the power struggle as much as possible.

    6. SA

      Yeah.

    7. LF

      Pacify it.

    8. SA

      But at this point, it feels... You know, like something that was in the past that was really unpleasant and really difficult and painful. But we're back to work, and things are so busy and so intense that I don't spend a lot of time thinking about it. There was a time after, uh, there was like this fugue state, um, for kind of like the month after, maybe 45 days after, that was... I was just sort of like drifting through the days. I was so out of it. Um, I was feeling so down. Uh-

    9. LF

      Just on a personal, psychological level.

    10. SA

      Yeah. Really painful. Um, and hard to like have to keep running OpenAI in the middle of that. I just wanted to like crawl into a cave and kinda recover for a while. But, you know, now it's like we're just back to working on a mission.

    11. LF

      Well, it's still useful to go back there and reflect on board structures, on power dynamics, on how companies are run, the tension between research and product development and money and all this kind of stuff, so that you, who have a very high potential of building AGI, would do so in a slightly more organized, less dramatic way-

    12. SA

      Yeah.

    13. LF

      ... in the future. So, there's value there to go both the personal psychological aspects of you as a leader and also just the- the board structure and all this kind of messy stuff.

    14. SA

      Definitely learned a lot about, um, structure and incentives and, um, what we need out of a- a board. Um, and I think that is... It is valuable that this happened now in some sense. Um, I think this is probably not, like, the last high stress moment of OpenAI, but it was quite a high stress moment. My company very nearly got destroyed. And we think a lot about many of the other things we've gotta get right for AGI, but thinking about, uh, how to build a resilient org and how to build a structure that will stand up to, like, a lot of pressure in the world, which I expect more and more as we get closer, I think that's super important.

    15. LF

      Do you have a sense of how deep and rigorous the deliberation process by the board was? Like, can you shine some light on just the human dynamics involved in situations like this? Was it just a few conversations and all of a sudden it escalates and, "Why don't we fire Sam?" kind of thing?

    16. SA

      I think... I think the board members were, are well-meaning people on the whole. Um, and I believe that in stressful situations, um, where people feel time pressure, whatever, uh, people understandably make suboptimal decisions, and I think one of the challenges for OpenAI will be, we're gonna have to have a board and a team, uh, that are good at operating under, under pressure.

    17. LF

      Do you think the board had too much power?

    18. SA

      I think boards are supposed to have a lot of power. Um, but one of the things that we did see is, in, in most corporate structures, boards are usually answerable to shareholders, so, you know, there's, sometimes people have, like, super voting shares or whatever. Um, in this case, and I think one of the things with our structure that we maybe should have thought about more than we did, is that the board of a nonprofit has, unless you put other rules in place, like, quite a, quite a lot of power. They don't really answer to anyone but themselves and there's ways in which that's good, but what we'd really like is for the board of OpenAI to, like, answer to the world as a whole as much as that's a practical thing.

    19. LF

      So, there's a new board announced.

    20. SA

      Yeah.

    21. LF

      There's, I guess, uh, a new smaller board at first, and now there's a new final board.

    22. SA

      Not a final board yet. We've added some, we'll add more.

    23. LF

      Added some, okay. What is fixed in the new one that was perhaps broken in the previous one?

    24. SA

      The old board sort of got smaller, uh, over the course of about a year. It was nine and then it went down to six. And then we couldn't agree on who to add. And the board also, uh, I think didn't have a lot of experienced board members, uh, and a lot of the new board members at OpenAI just have more experience as board members. Um, I think that'll help.

    25. LF

      There's been criticism of some of the people that were added to the board. I heard a lot of people criticizing the addition of Larry Summers, for example. What, what's the process of selecting the board like? What's involved in that?

    26. SA

      So Bret and Larry were kind of, uh, decided in the heat of the moment over this, like, very tense weekend, and that was a, I mean, that weekend was, like, a real rollercoaster. It was like a lot of-

    27. LF

      Mm-hmm.

    28. SA

      ... a lot of ups and downs. Um, and we were trying to agree on new board members that both sort of the executive team here and the old board members felt would be reasonable. Um, Larry was actually one of their suggestions, the old board members'. Um, Bret, I think I had even suggested previous to that weekend, but he was, you know, busy and didn't wanna do it, and then we really needed help, and he would. Um, we talked about a lot of other people too, uh, but that was... I felt like if I was going to come back, uh, I needed new board members. Um, I didn't think I could work with the old board again in the same configuration, although we then decided, uh, and I'm grateful that Adam would stay, um, but we wanted to get to, uh, we considered various configurations, decided we wanted to get to a board of three, and, uh, h- had to find two new board members over the course of sort of a short period of time. So those were decided honestly without, you know... that's like, you kinda do that on the battlefield. You don't have time to design a rigorous process then. Um, for new board members since, and new board members we'll add going forward, um, we have some criteria, uh, that we think are important for the board to have, different expertise that we want the board to have. Um, unlike hiring an executive where you need them to do one role well, the board needs to do a whole role of kind of governance and thoughtfulness, uh, well. And so one thing that Bret says, which I really like, is that, you know, we wanna hire board members in slates, not as individuals one at a time. And, uh, you know, thinking about a group of people that will bring nonprofit expertise, expertise in running companies, sort of good legal and governance expertise. Uh, that's kind of what we've tried to optimize for.

    29. LF

      So is technical savvy important for the individual board members?

    30. SA

      Not for every board member, but for s- certainly some you need that. That's part of what the board needs to do.

  3. 18:31–24:40

    Ilya Sutskever

    1. LF

      Let me ask you about Ilya. Is he being held hostage in a secret nuclear facility?

    2. SA

      No.

    3. LF

      What about a regular secret facility?

    4. SA

      No.

    5. LF

      What about a nuclear non-secret facility?

    6. SA

      Neither of that.

    7. LF

      Okay. (laughs)

    8. SA

      Not, not that either.

    9. LF

      I mean, this is becoming a meme at some point. You've known Ilya for a, for a long time. He was, obviously he's in part, part of this drama with the board and all that kind of stuff. What's your relationship with him, uh, now?

    10. SA

      I love Ilya. I have tremendous respect for Ilya. I, uh, I don't have anything I can, like, say about his plans right now. That's, that's a question for him. Um, but I really hope we work together for, you know, certainly the rest of my career. He's a little bit younger than me, maybe he works a little bit longer. (laughs)

    11. LF

      (laughs) You know, there's a, there's a meme that he saw something. Like, he maybe saw AGI and that gave him a lot of worry internally. Uh, what did Ilya see?

    12. SA

      Uh, well, he has not seen AGI. None of us have seen AGI.

    13. LF

      Mm-hmm.

    14. SA

      We have not built AGI. Uh, I do think, uh, one of the many things that I really love about Ilya is he takes AGI and the safety concerns broadly speaking, you know, including things like the impact this is gonna have on society very seriously. And we, as we continue to make significant progress, um, Ilya is one of the people that I've spent the most time over the last couple of years talking about what this is going to mean, what we need to do to ensure we get it right, to ensure that we succeed at the mission. Um, so Ilya did not see AGI, um, but Ilya is a credit to humanity in terms of how much he thinks and worries about making sure we get this right.

    15. LF

      I've had a bunch of conversations with him in the past. I think when he talks about technology, he's always, like, doing this long-term thinking type of thing. So he's not thinking about what this is gonna be in a year, he's thinking about it in 10 years.

    16. SA

      Yeah.

    17. LF

      Just thinking from first principles, like, okay, if this scales, what are the fundamentals here? Where's this going? And so that, that's a foundation for then thinking about, like, all the other safety concerns and all that kind of stuff, um, which makes him a really fascinating human, uh, to talk with. Do you have, uh, any idea why he's been kind of quiet? Is it he's just doing some soul searching?

    18. SA

      Again, I don't want to, like, speak for Ilya.

    19. LF

      (laughs) Mm-hmm.

    20. SA

      I think that y- you should ask him that. Um... He's definitely a thoughtful guy. Uh, I think I kind of think of Ilya as, like, always on a soul search in a really good way.

    21. LF

      Yes. Yeah. Also, he appreciates the power of silence. Also, I'm told he can be a silly guy, which I've never-

    22. SA

      Totally.

    23. LF

      I've never seen that side of him.

    24. SA

      It's very, it's very sweet when that happens. (laughs)

    25. LF

      (laughs) I've never witnessed a silly Ilya, but, um, I look forward to, to that as well.

    26. SA

      I was at a dinner party with him recently and he was playing with a puppy, and I re- and he was, like, in a very silly mood, very endearing. And I was thinking like, "Oh man, this is, like, not the side of Ilya that the world sees the most."

    27. LF

      So just to wrap up this whole saga, are you feeling good about the board structure-

    28. SA

      Yes.

    29. LF

      ... about all of this and, like, where it's moving?

    30. SA

      I feel great about the new board. In terms of the structure of OpenAI, I, you know, one of the board's tasks is to look at that and see where we can make it more robust. Um, we wanted to get new board members in place first, uh, but, you know, we clearly learned a lesson about structure throughout this process. I don't have, I think, super deep things to say. It was a crazy, very painful experience. I think it was like a perfect storm of weirdness. It was like a preview for me of what's gonna happen as the stakes get higher and higher, and the need that we have for, like, robust governance structures and processes and people. Um, I am kind of happy it happened when it did, but it was a shockingly painful thing to go through.

  4. 24:40–34:32

    Elon Musk lawsuit

    1. LF

      Our mutual friend Elon sued OpenAI. What do you think is the essence of what he's criticizing? To what degree does he have a point? To what degree is he wrong?

    2. SA

      I don't know what it's really about. We started off just thinking we were gonna be a research lab, and having no idea about how this technology was gonna go. It's hard to... because it was only, you know, seven or eight years ago, it's hard to go back and really remember what it was like then. But this was before language models were a big deal. This was before we had any idea about an API, or selling access to a chatbot. This was before we had any idea we were going to productize at all. So we're like, "Eh, we're just like gonna try to do research and, you know, we don't really know what we're going to do with that." I think with like many new, fundamentally new things, you start fumbling through the dark and you make some assumptions, most of which turn out to be wrong. And then it became clear that we were going to need to do different things, and also have huge amounts more capital. So we said, "Okay, well the structure doesn't quite work for that. How do we patch the structure?" Um, and then patch it again and patch it again, and you end up with something that does look kind of eyebrow raising to say the least. But we got here gradually with, I think, reasonable decisions at each point along the way, and doesn't mean I wouldn't do it totally differently if we could go back now with an oracle, but you don't get the oracle at the time. But anyway, in terms of what Elon's real motivations here are, I don't know.

    3. LF

      To the degree you remember, what was the response that OpenAI gave in the blog post? Can you summarize it?

    4. SA

      Oh, we just said like, you know, "Elon said this set of things. Here's our characterization, or here's the sort of... not our characterization, here's like the characterization of how this went down." Uh, we tried to like not make it emotional, and just sort of say, like... "Here's the history."

    5. LF

      I do think there's a degree of mischaracterization from Elon here about one of the points you just made, which is the degree of uncertainty you had at the time. You guys are a bunch of, like, a small group of researchers crazily talking about AGI when everybody's laughing at that thought.

    6. SA

      Wasn't that long ago Elon was crazily talking about launching rockets-

    7. LF

      Yeah.

    8. SA

      ... when people were laughing at that thought. Uh, so you'd think he'd have more empathy for this.

    9. LF

      I mean, I, I do think that there's personal stuff here. That there was a split. That OpenAI and a lot of amazing people here chose to part ways with Elon. So there's a personal thing.

    10. SA

      Elon chose to part ways.

    11. LF

      Can you describe that exactly? The, the choosing to part ways.

    12. SA

      He thought OpenAI was gonna fail. Um, he wanted total control to sort of turn it around. We wanted to keep going in the direction that now has become OpenAI. He also wanted Tesla to be able to build an AGI effort. At various times he wanted to make OpenAI into a for-profit company that he could have control of, or have it merge with Tesla. Um, we didn't want to do that and he decided to leave, which that's fine.

    13. LF

      So you're saying, and that's one of the things that the blog post says is that he wanted OpenAI to be basically acquired by Tesla?

    14. SA

      Yeah.

    15. LF

      In the same way that... or maybe something similar, or maybe something more dramatic than the partnership with Microsoft?

    16. SA

      My memory is the proposal was just like, yeah, like get acquired by Tesla and have Tesla have full control over it. I'm pretty sure that's what it was.

    17. LF

      So what is the word "open" in OpenAI mean to Elon at the time? Ilya has talked about this in, in the email exchanges and all this kind of stuff. What does it mean to you at the time? What does it mean to you now?

    18. SA

      I would definitely pick a diff- Speaking of going back with an oracle, I'd pick a different name. Um, one of the things that I think OpenAI is doing that is the most important of everything that we're doing is putting powerful technology in the hands of people for free as a public good. Not, we're not... You know, we don't run ads on our free version. We don't monetize it in other ways. We just say, "As part of our mission, we wanna put increasingly powerful tools in the hands of people for free, and get them to use them." And I think that kind of open is really important to our mission. I think if you give people great tools and teach them to use them, or don't even teach them, they'll figure it out, and let them go build an incredible future for each other with that, uh, that's a big deal. So if we can keep putting like free or low cost, or free and low cost powerful AI tools out in the world, uh, I think that's a huge deal for how we fulfill the mission. Um, open source or not? Yeah, I think we should open source some stuff and not other stuff. Uh, the... It does become this like religious battle line where nuance is hard to have, but I think nuance is the right answer.

    19. LF

      So he said, "Change your name to ClosedAI and I'll drop the lawsuit." I mean, is it going to become this battleground in, in the land of memes about the name?

    20. SA

      I, I think that speaks to the seriousness with which Elon means the lawsuit. And, uh, yeah, I mean, that's like an astonishing thing to say, I think. Like-

    21. LF

      Well, I don't think the lawsuit ma- maybe correct me if I'm wrong, but I don't think the lawsuit is legally serious. It's more to make a point about the future of AGI and the company that's currently leading the way. So...

    22. SA

      Look, I mean Grok had not open sourced anything until people pointed out it was a little bit hypocritical, and then he announced that Grok will open source things this week. Uh, I don't think open source versus not is what this is really about for him.

    23. LF

      Well, we'll talk about open source and not. I do think maybe criticizing the competition is great, just talking a little shit is great, but friendly competition versus like... I personally hate lawsuits.

    24. SA

      Look, I think this whole thing is like unbecoming of a builder, and I respect Elon as one of the great builders of our time, and, um, I know he knows what it's like to have like haters attack him and it makes me extra sad he's doing it to us.

    25. LF

      Yeah, he's one of the greatest builders of all time. Potentially the greatest builder of all time.

    26. SA

      It makes me sad. And I think it makes a lot of people sad. Like, there's a lot of people who've really looked up to him for a long time. I said, you know, in some interview or something that I missed the old Elon, and the number of messages I got saying that exactly encapsulates how I feel.

    27. LF

      I think he should just win. He should just, um, make X Grok beat GPT, and then GPT beats Grok, and it's just a competition, and that's just beautiful for everybody. But on the question of open source, do you think... There's a lot of companies playing with this idea, it's quite interesting. I would say Meta, surprisingly-

    28. SA

      Mm-hmm.

    29. LF

      ... has led the way on this, or like, uh, at least took the first step in the game of chess of, like, really open sourcing the model. Of course it's not the, uh, state-of-the-art model, but open sourcing LLaMA, um, you know Google is flirting with the idea of open sourcing a smaller version. Have you... What are the pros and cons of open sourcing? Have you played around with this idea?

    30. SA

      Yeah, I think there- there is definitely a place for open source models, particularly smaller models that people can run locally, I think there's huge demand for. Um, I think there will be some open source models, there will be some closed source models. Uh, this, it won't be unlike other ecosystems in that way.

  5. 34:32–44:23

    Sora

    1. LF

      So speaking of cool shit, uh, Sora. There's like a million questions that I could ask. First of all, it's amazing. It truly is amazing on a product level, but also just on a philosophical level. So let me just, uh, technical/philosophical ask, what do you think it understands about the world more or less than GPT-4, for example? The world model when you train on these patches versus language tokens.

    2. SA

      I think all of these models understand something more about the world model than most of us give them credit for. And because there are also very clear things they just don't understand or don't get right, it's easy to, like, look at the weaknesses, see through the veil and say, "Ah, this is just, this is all fake." But it's not all fake, it's just some of it works and some of it doesn't work. Like, I remember when I first started watching Sora videos and I would see, like, a person walk in front of something for a few seconds and occlude it and then walk away and the same thing was still there. I was like, "Ah."

    3. LF

      Mm-hmm.

    4. SA

      "That's pretty good."

    5. LF

      Mm-hmm.

    6. SA

      Or there's examples where, like, the underlying physics looks so well represented over, uh, you know, a lot of steps in a sequence. It's like, "Oh, this is, this is, like, quite impressive." But, like, fundamentally, these models are just getting better, and that will keep happening. If you look at the trajectory from DALL-E 1 to 2 to 3 to Sora, you know, there are a lot of people that dunked on each version, saying, "It can't do this, it can't do that," and, like, look at it now.

    7. LF

      Well, the thing you just mentioned is kind of with occlusions is basically modeling the physics, the three-dimensional physics of the world sufficiently well to capture those kinds of things.

    8. SA

      Well-

    9. LF

      Or like under... Or yeah, maybe you can tell me. In order to deal with occlusions, what does the world model need to-

    10. SA

      Yeah, so what I would say is it's doing something to deal with occlusions really well. Would I represent that it has, like, a great underlying 3D model of the world? It's a little bit more of a stretch.

    11. LF

      But can you get there through just these kinds of two-dimensional training data approaches?

    12. SA

      Uh, it looks like this approach is gonna go surprisingly far. I don't wanna speculate too much about what limits it will surmount and which it won't, but...

    13. LF

      What are some interesting limitations of the system that you've seen? I mean, there's been some fun ones you've posted.

    14. SA

      There's all kinds of fun... I mean, like, you know, cats sprouting an extra limb at random points in a video. Uh, like, pick what you want, but there's still a lot of problem... uh, still a lot of weaknesses.

    15. LF

      Do you think that's a fundamental flaw of the approach, or is it just, mm, you know, bigger model or better, like, technical details or better data, more data is going to solve those, uh, the cats sprouting extra limbs?

    16. SA

      I would say yes to both. Like, I think there is something about the approach which just seems to feel different from how we think and learn and whatever, and then also I think it'll get better with scale.

    17. LF

      Like I mentioned, LLMs have tokens, text tokens, and Sora has visual patches, so it converts all visual data, uh, diverse kinds of visual data, videos and images into patches. Is the training to the degree you can say fully self-supervised, or is there some manual labeling going on? Like, what's the involvement of humans in all this?

    18. SA

      I mean, without saying anything specific about the Sora approach, we, we use lots of human data in our work.

    19. LF

      But not internet-scale data. So lots of humans. Lots is a complicated word, Sam. (laughs)

    20. SA

      Well, (laughs) I think lots is a fair word in this case.

    21. LF

      But it doesn't... 'cause to me, lots... like, listen, I'm an introvert, and when I hang out with, like, three people, that's a lot of people.

    22. SA

      Yeah.

    23. LF

      Four people, that's a lot. But I suppose you mean more than...

    24. SA

      More than three people work on labeling the data for these models, yeah.

    25. LF

      Okay. All right. But fundamentally, there's a lot of self-supervised learning, 'cause, uh, th- what you mentioned in the technical report is internet-scale data. That's another beautiful... it's like poetry. Uh, so it's a lot of data that's not human labeled. It's like-

    26. SA

      Yeah.

    27. LF

      It's self-supervised in that way.

    28. SA

      Yeah.

    29. LF

      And then the question is how much int- how much data is there on the internet that could be used in this c- that, uh, is conducive to this kind of s- self-supervised way? If only we knew th- the details of the self-supervised. Do you, have you considered opening it up a little more, the details?

    30. SA

      We have. For... you mean for Sora specifically?

  6. 44:23–55:32

    GPT-4

    1. LF

      Let me ask you about GPT-4. There's so many questions. Uh, first of all, also amazing. It's- looking back, it'll probably be this kinda historic, pivotal moment, with 3.5 and 4, with ChatGPT.

    2. SA

      Maybe five will be the pivotal moment. I don't know.

    3. LF

      (laughs)

    4. SA

      Hard to say that looking forwards.

    5. LF

      We never know. That's the annoying thing about the future, it's hard to predict. But for me, looking back, GPT-4, ChatGPT is pretty damn impressive, like historically impressive. So allow me, uh, to ask, what's been the most impressive capabilities of GPT-4 to you, and GPT-4 Turbo?

    6. SA

      I think it kinda sucks.

    7. LF

      Hmm. Typical human also.

    8. SA

      It's-

    9. LF

      Gotten used to an awesome thing.

    10. SA

      No, I think it is an amazing thing, um, but relative to where we need to get to and where I believe we will get to, uh, you know, at the time of, like, GPT-3, people were like, "Oh, this is amazing. This is this, like, marvel of technology," and it is, it was. Uh, but, you know, now we have GPT-4 and you look at GPT-3 and you're like, "That's unimaginably horrible." Um, I expect that the delta between five and four will be the same as between four and three, and I think it is our job to live a few years in the future and remember that the tools we have now are gonna kinda suck looking backwards at them, and that's how we make sure the future is better.

    11. LF

      What are the most glorious ways in which GPT-4 sucks? Meaning, uh-

    12. SA

      What are the best things it can do?

    13. LF

      What are the best things it can do, and the- the limits of those best things that allow you to say it sucks, therefore gives you an inspiration, a hope for the future?

    14. SA

      You know, one way I've been using it for more recently is sort of a- like a brainstorming partner.

    15. LF

      Yep.

    16. SA

      And-

    17. LF

      I'll use it for that.

    18. SA

      ... there's a glimmer of something amazing in there. I don't think it gets... You know, when people talk about it, what it does, they're like, "Oh, it helps me code more productively. It helps me write more, faster and better. It helps me, you know, translate from this language to another," all these, like, amazing things. But there's something about the, like, kind of creative brainstorming partner, I need to come up with a name for this thing. I need to, like, think about this problem in a different way. I'm not sure what to do here. Uh, that I think, like, gives a glimpse of something I hope to see more of. Um, one of the other things that you can see, like, a very small glimpse of is when it can help on longer horizon tasks, you know, break down something in multiple steps, maybe, like, execute some of those steps, search the internet, write code, whatever, put that together. Uh, when that works, which is not very often, it's, like, very magical.

    19. LF

      The iterative back and forth with a human. Well, it works a lot for me. What do you mean it won't work?

    20. SA

      Uh, w- iterative back and forth with a human, it can get right more often. When it can go do, like, a 10-step problem on its own-

    21. LF

      Oh.

    22. SA

      Doesn't work for that too often. Sometimes.

    23. LF

      Add multiple layers of abstraction, or do you mean just sequential?

    24. SA

      Uh, both. Like, you know, to break it down, and then do things at different layers of abstraction and put them together. Look, I don't wanna- I don't wanna, like, downplay the accomplishment of GPT-4.

    25. LF

      Mm-hmm.

    26. SA

      Um, but I don't wanna overstate it either, and I think this point that we are on an exponential curve, we will look back relatively soon at GPT-4 like we look back at GPT-3 now.

    27. LF

      That said, I mean, ChatGPT was a transition to where people, like, started to believe again. There was a kinda- there was an uptick of believing.

    28. SA

      Sure. Sure, sure, sure.

    29. LF

      Not internally at OpenAI, perhaps. There's believers here, but, uh, when you think about

    30. SA

      ... And in that sense, I do think it'll be a moment where a lot of the world went from not believing to believing. Um, that was more about the ChatGPT interface than the... A- and- and by the interface-

  7. 55:32–1:02:36

    Memory & privacy

    1. LF

      You've given ChatGPT the ability to have memories of previous conversations. You've been playing with that, and also the ability to turn off memory, which is, I wish I could do that sometimes, just turn on and off depending ... I guess sometimes alcohol can do that, but-

    2. SA

      (laughs)

    3. LF

      ... not- not in, uh, not optimally, I suppose. Uh, what- what have you seen through that? Like, playing around with that idea of remembering conversations or not?

    4. SA

      We're very early in our exp- explorations here, but I think what people want, or at least what I want for myself, is a model that gets to know me and gets more useful to me over time. This is an early exploration. Um, I think there's, like, a lot of other things to do, but that's where we'd like to head. You know, you'd like to use a model and over the course of your life ... Or use a system. There'll be many models. And over the course of your life, it gets- it gets better and better.

    5. LF

      Yeah. How hard is that problem? 'Cause right now, it's more like remembering little factoids and preferences and s- so on. What about remembering, like, don't you want GPT to remember all the shit you went through in November and all-

    6. SA

      Yeah.

    7. LF

      ... all the drama, and then you can-

    8. SA

      Yeah, yeah, yeah.

    9. LF

      'Cause right now, you're clearly blocking it out a little bit.

    10. SA

      It's not just that I want it to remember that. I want it to integrate the lessons of that-

    11. LF

      Yes.

    12. SA

      ... and remind me in the future what to do differently or what to watch out for.

    13. LF

      Mm-hmm.

    14. SA

      And, you know, we all gain from experience over the course of our lives, varying degrees, and I'd like my AI agent to gain with that experience, too. Um, so if we, if we go back and let ourselves imagine that, you know, trillions and trillions of context length, if I can put every conversation I've ever had with anybody in my life in there, if I can have all of my emails, input/output, like all of my input/output in the context window every time I ask a question, that'd be pretty cool, I think.

    15. LF

      Yeah, I think that would be very cool. Um, people sometimes will hear that and be concerned about privacy. Is there ... What- what- what do you think about that aspect of it? The more effective the AI becomes at really integrating all the experiences and all the data that happened to you and give you advice ...

    16. SA

      I think the right answer there is just user choice. You know, anything I want stricken from the record for my AI agent, I wanna be able to, like, take out. If I don't want it to remember anything, I want that, too. You and I may have different opinions about where on that privacy utility trade-off for our own AI we wanna be, which is totally fine, but I think the answer is just, like, really easy user choice.

    17. LF

      But there should be a- some high level of transparency from a company about the user choice, 'cause sometimes, companies in the past have been kind of-

    18. SA

      Absolutely.

    19. LF

      ... shady about, like, eh, we're ... It's kind of presumed that we're collecting all your data, and we're using it for a good reason, for advertisement and so on. But there's not a transparency about the details of that.

    20. SA

      That's totally true. You- you know, you mentioned earlier that I'm, like, blocking out the November stuff. I-

    21. LF

      Oh, just teasing you.

    22. SA

      Well, I mean, I think it was a very traumatic thing, and it did immobilize me for a long period of time. Like, definitely the hardest, like the hardest work thing I've had to do was just, like, keep working that period, 'cause I had to, like, you know, try to come back in here and put the pieces together while I was just, like, in sort of shock and pain, and you know, nobody really cares about that. I mean, I- the team gave me a pass, and I was not working at my normal level, but there was a period where I was just, like ... It was really hard to have to do both, but I kinda woke up one morning, and then I was like, "This was a horrible thing to happen to me. I think I could just feel like a victim forever, uh, or I can say this is, like, the most important work I'll ever touch in my life, and I need to get back to it." And it doesn't mean that I've repressed it, because sometimes I, like wake up in the middle of the night thinking about it, but I do feel, like, an obligation to keep moving forward.

    23. LF

      Well, that- that's beautifully said, but there could be some lingering stuff in there. Like, what I would be concerned about is that trust thing that you mentioned, that being paranoid about people, uh, as opposed to just trusting everybody or most people, like using your gut. It's a tricky dance.

    24. SA

      For sure.

    25. LF

      I mean, because I've seen in- in my part-time explorations, I've been diving deeply into the Zelensky administration and the Putin administration and the dynamics there in wartime, in a very highly stressful environment, and what happens is distrust, and you isolate yourself, and you start to not see the world clearly, and that's a concern. That's a human concern. You seem to have taken it in stride and kinda learned the good lessons and felt the love and let the love energize you, which is great, but it can still linger in there. There's just some questions I would love to ask, your intuition about what GPT is able to do and not. So it's allocating approximately the same amount of compute for each token it generates. Is there room there in this kind of approach for slower thinking, sequential thinking?

    26. SA

      I think there will be a new paradigm for that kind of thinking.

    27. LF

      Will it be similar, like architecturally, as what we're seeing now with LLMs? Is it a layer on top of the LLMs?

    28. SA

      Uh, I can imagine many ways to implement that. I think that's less important than the question you were getting at, which is, do we need a way to do a slower kind of thinking where the answer doesn't have to get, like... You know, it's li- like, I guess like spiritually you could say that you want an AI to be able to think harder about a harder problem-

    29. LF

      Right.

    30. SA

      ... and answer more quickly about an easier problem. And I think that will be important.

  8. 1:02:36–1:06:12

    Q*

    2. LF

      This does make me think of the mysterious, the lore behind Q*.

    3. SA

      (laughs)

    4. LF

      What's this mysterious Q* project? Is it also in the same nuclear facility?

    5. SA

      Uh, there is no nuclear facility.

    6. LF

      Mm-hmm. That's what a person with a nuclear facility always says.

    7. SA

      I would love to have a secret nuclear facility.

    8. LF

      (laughs)

    9. SA

      There isn't one.

    10. LF

      All right. Uh-

    11. SA

      Maybe someday.

    12. LF

      Someday? All right. (laughs) One can dream.

    13. SA

      OpenAI is not a good company at keeping secrets. It would be nice. You know, we've, like, been plagued by a lot of leaks, and it would be nice if we were able to have something like that.

    14. LF

      Can you speak to what Q* is?

    15. SA

      We are not ready to talk about that.

    16. LF

      See, but an answer like that means there's something to talk about. It's very mysterious, Sam.

    17. SA

      I mean, we work on all kinds of research.

    18. LF

      Yeah.

    19. SA

      Uh, we have said for a while that we think better reasoning in these systems is an important direction that we'd like to pursue. We haven't cracked the code yet. Uh, we're in- we're very interested in it.

    20. LF

      Are there gonna be moments, Q* or otherwise, where there's going to be leaps similar to ChatGPT, where you're like...

    21. SA

      That's a good question. Um, what do I think about that? It's interesting. To me, it all feels pretty continuous.

    22. LF

      Right. This is kind of a theme that you're saying, that you're basically gradually going up an exponential slope. But from an outsider perspective, from me just watching it, it does feel like there's leaps. But to you, there isn't.

    23. SA

      I do wonder if we should have... So, you know, part of the reason that we deploy the way we do is that we think, um, we call it iterative deployment. We, uh, rather than go build in secret until we got all the way to GPT-5, we decided to talk about GPT-1, two, three, and four. And part of the reason there is I think AI and surprise don't go together. And also, the world, people, institutions, whatever you wanna call it, need time to adapt and think about these things. And I think one of the best things that OpenAI has done is this strategy, and we, we get the world to pay attention to the progress, to take AGI seriously, to think about what systems and structures and governance we want in place before we're like under the gun and have to make a rush decision. I think that's really good. But the fact that people like you and others say, um, you still feel like there are these leaps makes me think that maybe we should be doing our releasing even more iteratively. I don't know what that would mean. I don't have an answer ready to go. But like our goal is not to have shock updates to the world. The opposite.

    24. LF

      Yeah, for sure. More iter- iterative would be amazing. I think that's just beautiful for everybody.

    25. SA

      But that's what we're trying to do. That's, like, our stated strategy, and I think we're somehow missing the mark. So maybe we should think about, you know, releasing GPT-5 in a different way or something like that.

    26. LF

      Yeah, 4.71, 4.72, but people tend to like to celebrate, people celebrate birthdays. I don't know if you know humans, but they kinda have these, uh, milestones and-

    27. SA

      I do know some humans. Um, people do like milestones. I, uh, I totally get that. I think we like milestones too. It's like fun to, you know, say, declare victory on this one and go start the next thing. But, but yeah, I feel like we're somehow getting this a little

  9. 1:06:12–1:09:27

    GPT-5

    1. SA

      bit wrong.

    2. LF

      So, uh, when is GPT-5 coming out again?

    3. SA

      I don't know. That's the honest answer.

    4. LF

      Oh, that's the honest answer. Is it blink twice if it's this year?

    5. SA

      (laughs) I also... We will release an amazing new model this year. I don't know what we'll call it.

    6. LF

      So that goes to the question of like what, what's the way we release this thing?

    7. SA

      We'll release, in the coming months, many different things. Uh, I think they'll be very cool. Uh, I think before we talk about, like, a GPT-5-like model called that, or not called that, or a little bit worse or a little bit better than what you'd expect from a GPT-5, I think we have a lot of other important things to release first.

    8. LF

      I don't know what to expect from GPT-5. (laughs) You're making me nervous and excited. Uh, what, what are some of the biggest challenges and bottlenecks to overcome for whatever it ends up being called? But let's call it GPT-5. Just interesting to ask, what are... Is it on the compute side? Is it on the technical side?

    9. SA

      It's always all of these. You know, what's the one big unlock? Is it a bigger computer? Is it, like, a new secret? Is it something else? Um, it's all of these things together. Like, the thing that OpenAI, I think, does really well... This is actually an original Ilya quote that I'm gonna butcher, but it's something like, "We multiply 200 medium-sized things together into one giant thing."

    10. LF

      So there's this, uh, distributed constant innovation happening.

    11. SA

      Yeah.

    12. LF

      So even on the tactical side, like, uh-

    13. SA

      Especially on the tactical side.

    14. LF

      So even, like, detailed approaches, like, d- detailed aspects of every... How does that work with different disparate teams and so on? Like, how, how do they, how do (laughs) how do the medium sized things become one whole giant transformer? How does this...

    15. SA

      There's a few people who have to, like, think about putting the whole thing together, but a lot of people try to keep most of the picture in their head.

    16. LF

      Oh, like the individual teams, individual contributors try to keep the big picture?

    17. SA

      At a high level, yeah. You don't know exactly how every piece works, of course. But one thing I generally believe is that it's sometimes useful to zoom out and look at the entire map, and... And I think this is true for, like, a technical problem, I think this is true for, like, innovating in business. Uh, but things come together in surprising ways, and having an understanding of that whole picture, even if most of the time you're operating in the weeds in one area, pays off with surprising insights. In fact, one of the things that I used to have and I thought was super valuable was, I used to have, like, a, a, a good map of that... All of the front- or most of the frontiers in the tech industry. And I could sometimes see these connections or new things that were possible that if I were only, you know, deep in one area, I wouldn't, I wouldn't, I wouldn't be able to, like, have the idea for it because I wouldn't have all the data. And I don't really have that much anymore. I'm, like, super deep now. Um, but I know that it's a valuable thing.

    18. LF

      You're not the man you used to be, Sam.

    19. SA

      Very different job now than what I used to have.

  10. 1:09:27–1:17:35

    $7 trillion of compute

    2. LF

      Speaking of zooming out, let's zoom out to, uh, another cheeky thing, but profound thing perhaps that you said. You tweeted, uh, about needing seven trillion dollars.

    3. SA

      I did not tweet about that. I never said, like, "We're raising seven trillion dollars, blah, blah, blah."

    4. LF

      Oh, that's somebody else?

    5. SA

      Yeah.

    6. LF

      Oh, but you said, "Uh, fuck it, maybe eight," I think.

    7. SA

      Okay, I meme, like, once there's, like, misinformation out in the world.

    8. LF

      Oh, you meme? But, sort of, misinformation may have a foundation of, like, insight there.

    9. SA

      Look, I think compute is gonna be the currency of the future. I think it will be maybe the most precious commodity in the world, and I think we-

    10. LF

      Interesting.

    11. SA

      ... should be investing heavily to make a lot more compute. Uh, compute is... It's an unusual... I think it's gonna be an unusual market. Um, you know, people think about the market for, like, chips for mobile phones or something like that, and you can say that, okay, there's eight billion people in the world, maybe seven billion of them have phones, maybe there- or six billion, let's say. They upgrade every two years, so the market per year is three billion system on chip for smartphones. And if you make 30 billion, you will not sell 10 times as many phones because most people have one phone. Um, but compute is different. Like, intelligence is gonna be more like energy or something like that where the only thing that I think makes sense to talk about is at price X, the world will use this much compute, and at price Y, the world will use this much compute. Um, because if it's really cheap, I'll have it, like, reading my email all day, like, giving me suggestions about what I maybe should think about or work on, and trying to cure cancer, and if it's really expensive, maybe I'll only use it, or will only use it to try to cure cancer. So, I think the world is gonna want a tremendous amount of compute, and there's a lot of parts of that that are hard. Uh, energy is the hardest part. Building data centers is also hard, the supply chain is hard, and then of course fabricating enough chips is hard. Um, but this seems to me where things are going, like, we're gonna want an amount of compute that's just hard to reason about right now.

    12. LF

      How do you solve the energy puzzle? Nuclear-

    13. SA

      That's what I believe.

    14. LF

      Fusion?

    15. SA

      That's what I believe.

    16. LF

      Nuclear fusion?

    17. SA

      Yeah.

    18. LF

      Who's gonna solve that?

    19. SA

      I think Helion's doing the best work, but I'm happy there's, like, a race for fusion right now. Nuclear fission I think is also, like, quite amazing, and I hope as a world we can re-embrace that. It's really sad to me wh- how the history of that went, and I hope we get back to it in a meaningful way.

    20. LF

      So to you, part of the puzzle is nuclear fission, like, nuclear reactors as we currently have them. And a lot of people are terrified because of Chernobyl and so on.

    21. SA

      Well, I think we should make new reactors. I, I, I think it's just, like, a shame that that industry kind of ground to a halt.

    22. LF

      And what, just mass hysteria is how you explain the halt?

    23. SA

      Yeah.

    24. LF

      I don't know if you know humans, but that's one of the dangers, that's one of the security threats for, for, for, uh, nuclear fission is humans seem to be really afraid of it, and that's something we have to incorporate into the calculus of it. So we have to kinda win people over and to show how safe it is.

    25. SA

      I worry about that for AI.

    26. LF

      Mm-hmm.

    27. SA

      I think some things are gonna go theatrically wrong with AI. I don't know what the percent chance is that I eventually get shot, but it's not zero.

    28. LF

      Oh, like, we wanna stop this from-

    29. SA

      Maybe.

    30. LF

      How do you decrease the theatrical nature of it? You know, I'm already starting to- to hear rumblings, because I do talk to people on both sides of the political spectrum, hear rumblings where it's going to be politicized. AI, it's going to be politicized. That really, really worries me, because then it's like maybe the right is against AI and the left is for AI 'cause it's going to help the people, or whatever the narrative and fo- formulation is. That really worries me. And then the theatrical nature of it can be leveraged fully. How do you fight that?

  11. 1:17:35–1:28:40

    Google and Gemini

    1. LF

      um, Google, with the help of search, has been dominating the past 20 years. I think it's fair to say in terms of the access, the world's access to information, how we interact and so on, and one of the nerve-racking things for Google, but for the entirety of people in this space is thinking about how are people going to access information?

    2. SA

      Yeah.

    3. LF

      Like, like you said, people show up to GPT as a-

    4. SA

      Yeah.

    5. LF

      ... as a starting point. So is OpenAI going to really take on this thing that Google started 20 years ago, which is, how do we a- get-

    6. SA

      I find that boring.

    7. LF

      (laughs)

    8. SA

      I- I mean, if the question is, like, if we can build a better search engine than Google or whatever, then sure, we should, like, go, you know, like, people should use a better product. But I think that would so understate what this can be. You know, Google shows you, like, 10 blue links, well, like, 13 ads and then 10 blue links, and that's, like, one way to find information. But the thing that's exciting to me is not that we can go build a better copy of Google Search, but that maybe there's just some much better way to help people find and act on and synthesize information. Actually, I think ChatGPT is that for some use cases, and hopefully it will make it be like that for a lot more use cases. ... But I don't think it's that interesting to say, like, how do we go do a better job of giving you, like, 10 ranked webpages to look at than what Google does?

    9. LF

      Mm.

    10. SA

      Maybe it's really interesting to go say, "How do we help you get the answer or the information you need?"

    11. LF

      Mm.

    12. SA

      How do we help create that, in some cases, synthesize that in others, or point you to it in, in yet others? Um, but a lot of people have tried to just make a better search engine than Google, and it's, it is a hard technical problem, it is a hard branding problem, it is a hard ecosystem problem. I don't think the world needs another copy of Google.

    13. LF

      And integrating a chat client, like a ChatGPT, with a search engine-

    14. SA

      That's cooler.

    15. LF

      It's cool, but it's tricky. It's awk- Uh, it's like if you just do it simply, it's awkward, because like if you just shove it in there-

    16. SA

      Yeah.

    17. LF

      ... it's all, it can be awkward.

    18. SA

      As you might guess, we are interested in how to do that well.

    19. LF

      Hmm.

    20. SA

      That would be an example of a cool thing, that's not just like-

    21. LF

      How to do that well. Like a heterogeneous, like integrating-

    22. SA

      The intersection of LLMs plus search, I don't think anyone has cracked the code on yet. I would love to go do that. I think that would be cool.

    23. LF

      Yeah. What about the ad side? Have you ever considered monetization of that?

    24. SA

      You know, I kind of hate ads, just as like an aesthetic choice. Uh, I think ads needed to happen on the internet for a bunch of reasons to get it going, but it's a more mature industry. The world is richer now. I like that people pay for ChatGPT and know that the answers they're getting are not influenced by advertisers. There is, I'm sure, there's an ad unit that makes sense for LLMs, and I'm sure there's a way to like participate in the transaction stream in an unbiased way that is okay to do. But it's also easy to think about like the dystopic visions of the future where you ask ChatGPT something and it says, "Oh, here's, you know, you should think about buying this product."

    25. LF

      Yeah.

    26. SA

      "Or you should think about, you know, this, going here for va- your vacation or whatever." And I don't know, like, we have a very simple business model, and I like it, and I know that I'm not the product. Like, I know I'm paying, and that's how the business model works. And when I go use like Twitter or Facebook or Google or any other great product, but ad-supported great product, I don't love that, and I think it gets worse, not better, in a world with AI.

    27. LF

      Yeah, I mean, I can imagine AI would be better at showing the best kind of version of ads, not in a dystopic future, but where the ads are for things you actually need. But then does that system always result in the ads driving the kind of stuff that's shown? All that c- It's... Um, yeah, I think it was a really bold move of Wikipedia not to do advertisements, but then it makes it very challenging as a business model. So you're saying the current thing with OpenAI is sustainable from a business perspective?

    28. SA

      Well, we have to figure out how to grow, but looks like we're gonna figure that out. If the question is do I think we can have a great business that pays for our compute needs without ads, that, I think the answer is yes.

    29. LF

      Hmm. Well, that's promising. I also just don't want to completely throw out ads as a-

    30. SA

      I'm not saying that. I, I, I'm, I guess I'm saying I have a bias against them.

Episode duration: 1:55:09
