Sam Altman on AGI, GPT-5, and what's next — the OpenAI Podcast Ep. 1
EVERY SPOKEN WORD
40 min read · 7,909 words
- 0:00 – 1:00
Welcome to the OpenAI Podcast
- AMAndrew Mayne
Welcome to the OpenAI Podcast. My name is Andrew Mayne. For several years, I worked at OpenAI, first as an engineer on the applied team, and then as the science communicator. After that, I worked with companies and individuals trying to figure out how to incorporate artificial intelligence. With this podcast, we have the opportunity to talk to the people working with and at OpenAI about what's going on behind the scenes, and maybe get a glimpse of the future. My first guest is Sam Altman, CEO and co-founder of OpenAI, and we're gonna find out a bit more about Stargate, how he uses ChatGPT as a parent, and maybe get an idea of when GPT-5 is coming. [upbeat music]
- SASam Altman
More and more people will think we've gotten to an AGI system every year. What you want out of hardware and software is changing quite rapidly. If people knew what we could do with more compute, they would want way, way more.
- AMAndrew Mayne
One of my friends is a new parent and is using ChatGPT a lot to ask questions. It's become a very good resource, and you are a new parent, and how much has ChatGPT
- 1:00 – 4:10
ChatGPT & parenthood
- AMAndrew Mayne
been helping you with that?
- SASam Altman
A lot. I, I, I mean, clearly, people have been able to take care of babies without ChatGPT-
- AMAndrew Mayne
Right
- SASam Altman
... for a long time. I don't know how I would've done that. [chuckles] Those first few weeks, it was like every que- I mean, constantly. Now I, now I kind of ask it questions about like developmental stages more-
- AMAndrew Mayne
Mm-hmm
- SASam Altman
... 'cause I can, I can, I can do the basics, but, uh-
- AMAndrew Mayne
Is this normal? [chuckles]
- SASam Altman
Yeah. But it was super helpful for that. I, I spend a lot of time thinking about how my kid will use AI in-
- AMAndrew Mayne
Hmm
- SASam Altman
... in the future. Um, it, it is sort of like... By the way, extremely kid-pilled. I think everybody should have a lot of kids. Um-
- AMAndrew Mayne
Yeah.
- SASam Altman
This is awesome.
- AMAndrew Mayne
A lot of my friends at OpenAI, uh, former colleagues and current ones, are having kids, and people go like, "Oh, what about this AI thing?" Everybody I know inside is very optimistic about having families.
- SASam Altman
I think it's a good sign.
- AMAndrew Mayne
Yeah.
- SASam Altman
Like, my kids will never be smarter than AI, mm, but also, they will grow up-
- AMAndrew Mayne
Way to set them back there, though. [chuckles]
- SASam Altman
I mean, they will grow up, like, vastly more capable-
- AMAndrew Mayne
Yeah
- SASam Altman
... than we grew up, um, and able to do things that we just cannot imagine, and they'll be really good at using AI. And, uh, obviously, I think about that a lot. Uh, but I, I think much more about the, like, what they will have that we didn't than what is gonna be taken away. Um, they're-- like, I don't, I don't think my kids will ever be bothered by the fact that they're not smarter than AI.
- AMAndrew Mayne
Yeah.
- SASam Altman
I just, like, you know, I... There's this video that always has stuck with me of, um, a baby or, like, a little toddler, it- with, uh, one of those old glossy magazines-
- AMAndrew Mayne
[chuckles]
- SASam Altman
... um, going like this on the screen.
- AMAndrew Mayne
'Cause he thinks it's an iPad.
- SASam Altman
Thought it was a broken iPad.
- AMAndrew Mayne
Yeah.
- SASam Altman
Um, and, you know, kids born now will just think the world always had extremely smart AI, and they will use it incredibly naturally, and they will look back at this as like a very, you know, prehistoric time period.
- AMAndrew Mayne
I, I saw something on social media where a guy talked about he got tired of talking to his kid about Thomas the Tank Engine, so he put it into ChatGPT, into voice mode.
- SASam Altman
Kids love voice mode on ChatGPT.
- 4:10 – 7:10
AGI, superintelligence & scientific progress
- SASam Altman
I mean, I think ChatGPT will just be a totally different thing five years from now. So in some sense, no, but will it still be called ChatGPT? Probably.
- AMAndrew Mayne
Yeah. Okay, so still the name. So the other thing we hear is AGI, which, yeah, I'd like to hear your definition of AGI.
- SASam Altman
In many senses, if you asked me or anybody else, um, to s- propose a definition of AGI five years ago, um, based off, like, the cognitive capabilities of software, I think the definition many people would, would've given then is now, like, well surpassed.
- AMAndrew Mayne
Hmm.
- SASam Altman
Like, these models are smart now.
- AMAndrew Mayne
Right.
- SASam Altman
Um, and they'll keep getting smarter, they'll keep improving. I think more and more people will think we've gotten to an AGI system every year. Um, even though the definition will keep pushing out and getting more ambitious, like, more people will still agree to it. But, you know, we have systems now that are really increasing people's productivity, that are able to do, um, valuable economic work. Maybe a better question is, what will it take for something I would call superintelligence?
- AMAndrew Mayne
Okay.
- SASam Altman
Um, if we had a system that was capable of either doing autonomous discovery of new science or greatly increasing the capability of people using the tool to discover new science, um, that would feel like kind of almost definitionally superintelligence to me, and be a wonderful thing for the world, I think.
- AMAndrew Mayne
So basically, w- a lot of it's kind of this gradient where it keeps getting better and better, and each one of our definitions, we go, "Oh, this feels..." I, I felt that way when we hit GPT-4. Internally, playing with this, I'm like, "There's 10 years of runway that we can do so much stuff with this," and even when it starts using tools itself, like you see with reasoning, it was really capable. But when you're saying it comes up with some new theorem or proof or something, and then, "Oh, hey, we found a better cure for cancer, or I found some new GLP drug or something," or-
- SASam Altman
Yeah, I mean, I, I am a big believer that the higher order bit of-... people's lives getting better is more scientific progress.
- AMAndrew Mayne
Mm.
- SASam Altman
That is kind of, that is kind of what, what limits us. And so if we can discover much more, I, I think that really will have a, a very significant impact. And for me, that'll just be, like, a tremendously exciting milestone. I think many other great uses of AI will happen, too, but that one feels really important.
- AMAndrew Mayne
Have you seen, like, signs of this you'd see internally? Have you seen things that made you go, "Oh, I think we've kind of figured it out?"
- SASam Altman
Nothing where I would say we have figured it out, but I would say increasing confidence on the directions to pursue. Uh, m- maybe the... I mean, this is the example everyone talks about, but I think it, it is still interesting. W- what's happening with people using AI systems to write code, and coders being much more productive, and thus researchers as well, like, that is a sort of example of, okay, it's obviously not doing new science, but it is definitely making
- 7:10 – 10:30
Operator, Deep Research & productivity
- SASam Altman
scientists able to do their work faster. Uh, we hear this with o3 all the time, uh, from scientists as well. So I wouldn't say we figured it out, I wouldn't say we, we know the algorithm where we're just like, "All right, we can point this thing, and it'll go do science on its own," but we're getting good guesses, and the rate of progress is continuing to just be, like, super impressive. Um, w- watching the progress from o1 to o3, where it was like every couple of weeks, the team was just like, "We have a major new idea," and they all kept working, uh, it was a reminder of sometimes when you, like, discover a, a big, new insight, things can go surprisingly fast, and I'm sure we'll see that many more times.
- AMAndrew Mayne
I noticed, uh, recently, OpenAI just shifted the model in Operator to o3.
- SASam Altman
Yeah.
- AMAndrew Mayne
And I noticed a big improvement-
- SASam Altman
Way better
- AMAndrew Mayne
... with Operator. And it, and I- I'd say that the thing that we ran into before was brittleness, is that you have people who promise agentic systems, this can do all these things, but the moment it gets into a problem it can't solve, it falls apart.
- SASam Altman
Interestingly, speaking of the AGI question, a lot of people f- have told me that their personal moment was Operator-
- AMAndrew Mayne
Hmm
- SASam Altman
... with o3. And there's something about watching an AI use a computer pretty well. Not perfectly, but it's not-
- AMAndrew Mayne
Yeah
- SASam Altman
... it's, it- o3 was a big step forward, that feels very AGI-like. It didn't, it didn't really have that effect on me to the same degree, although it's, it's quite impressive, but I ha- I've heard that enough times.
- AMAndrew Mayne
My- mine was with Deep Research, 'cause that felt like a really agentic use of it, and that was when I came back and it produced something on a topic I was interested in that was better than anything I'd read before. Because previously, all those models would just get a bunch of sources, summarise it, but when I watched the system go out on the Internet, get data-
- SASam Altman
Yeah
- AMAndrew Mayne
... follow that, then follow that lead, and then follow back, then come back, like I would've but better, was interesting.
- SASam Altman
I met this guy recently, who's like a, one of these, like, crazy autodidacts, just obsessed with learning and knows about everything. And he uses Deep Research to produce a report on anything he's curious about, and then just sits there all day and has gotten good at digesting them fast and knowing what to ask next. And it is like, it is an amazing new tool for people who really have a crazy appetite to learn.
- AMAndrew Mayne
I, I built my own app that literally lets me ask questions, and it generates audio files for me of the stuff, 'cause they're just like that. I'm like, my curiosity probably exceeds my retention. Um, in Operator, I'll tell you, the magical moment for me, and I'm curious to see where things go next, was I was doing a thing on Marshall McLuhan, and I wanted to get a bunch of images of Marshall McLuhan, and I asked it to do it. And then all of a sudden, I had a whole folder full of these things-
- SASam Altman
Yeah
- AMAndrew Mayne
... which was, for a research thing, would've taken me forever to do.
- SASam Altman
Yeah, I think we're just gonna keep seeing things like this, where whatever we thought about what a workflow had to be like and how long something had to take, is gonna just change, like, wildly fast.
- AMAndrew Mayne
Yeah. How are you using it?
- SASam Altman
Deep Research?
- AMAndrew Mayne
Yeah.
- SASam Altman
Science that I'm curious about. Uh, I'm, I'm just in this, like, weird place of I am extremely time-strapped. If I had more time, I would read... Like, I would read Deep Research reports preferentially to reading most other things, but I'm sort of short on time to read in general.
- AMAndrew Mayne
Yeah. What's neat, too, is the sharing feature, which I love, because now it's easier to share that with somebody else. The PDFs
- 10:30 – 13:40
GPT-5 & how we name models
- AMAndrew Mayne
are great, and that's cool. And I would say that even though we have Deep Research, we have these tools, there is a model race going on, and so the question comes up as, like, GPT-5, and, and the idea is that with a system like that, we should see an increase in capabilities. What is the timeframe for GPT-5? When are we gonna see this?
- SASam Altman
Probably sometime this summer.
- AMAndrew Mayne
Right.
- SASam Altman
Um, I don't know exactly when. One thing that we go back and forth on is how much are we supposed to, like, turn up the big number on new models versus what we did with GPT-4o, which is just better and better and better and better.
- AMAndrew Mayne
And I... And when I had to handle the release of GPT-4, right, when that was coming out, I had to kind of do this test-off between that and 3.5, and 3.5 kept getting better and better and better, and the comparisons I was able to make were changing. And so that's my question, is like, yeah, the, you know, like, would I know GPT-5 versus, wow, this is a really good GPT-4.5, or?
- SASam Altman
Probably not necessarily. I mean, it, like, it could go either way, right? You could just, like, keep doing iterations-
- AMAndrew Mayne
Right
- SASam Altman
... on 4.5, or at some point you could call it 5. Um, it used to be much clearer. We would train a model and put it out-
- AMAndrew Mayne
Right
- SASam Altman
... and then we train a new big model and put it out. And, you know, now the systems have gotten much more complex, and we can continually post-train them to make them better. I- we're thinking about this right now. Like, every time, let's say we launch GPT-5, um, and then we update it and update it and update it, should we just keep calling this GPT-5-
- AMAndrew Mayne
Right
- SASam Altman
... like we did with GPT-4o? Or should we call this 5.1, 5.2, 5.3, so you know which, you know when the version changes? Um, I don't think we have an answer to this yet, but, but I think there is something better to do than the way we handled it with 4o, and we, we, we see this periodically. Like, sometimes people like one snapshot much better than another, and they might wanna keep using one.
- AMAndrew Mayne
Yeah.
- SASam Altman
And we, we gotta sort of-... We've got to figure something out here.
- AMAndrew Mayne
Yeah, that's the, the, the challenge, is even if you're technically inclined, you can kind of understand, okay, if there's an o before it, I know this, but if I want, uh, you know, like-- But then even then it's not clear, should I use o4-mini? Should I use o3?
- SASam Altman
Yeah.
- AMAndrew Mayne
Should I use this?
- SASam Altman
I, I think this was like an example of, this was an artifact of shifting paradigms.
- AMAndrew Mayne
Mm-hmm.
- SASam Altman
Um, and then we kind of had these two things going at once. I think we are near the end of this current problem, but I can imagine a world, I don't know what it is, but I can imagine a world where we discover some new paradigm that again means we need to, like, bifurcate the model tree.
- AMAndrew Mayne
Okay, even more complicated names.
- SASam Altman
I hope we don't have to do that.
- AMAndrew Mayne
[chuckles]
- SASam Altman
I am excited to just get to GPT-5-
- AMAndrew Mayne
Yeah
- SASam Altman
... and then GPT-6, and I think that'll be easier for people to use, and you won't have to think, "Do I want, you know, o4-mini-high or-
- AMAndrew Mayne
Right
- SASam Altman
... o3 or 4o?" Like-
- AMAndrew Mayne
o4-mini-high is what I use to code.
- SASam Altman
Yeah.
- 13:40 – 16:15
User privacy & NYT lawsuit
- SASam Altman
Memory is probably my favourite recent ChatGPT feature.
- AMAndrew Mayne
Mm-hmm.
- SASam Altman
Um, you know, the first time we could talk to a computer, like GPT-3 or whatever, uh, that felt like a really big deal. And now that the computer, I feel like it kind of like knows a lot of context on me, and if I ask it a question with only a small number of words, it knows enough about the rest of my life to be pretty confident in what I want it to do. Um, sometimes in ways I don't even think of, like, that has been a real surprising, like, level up. So I- and, and I hear that from a lot of other people as well. There are people who don't like it, but most people really do. I, I think we are heading towards a world where if you want, the AI will just have, like, unbelievable context on your life and give you these super, super helpful answers.
- AMAndrew Mayne
Which I- for me, is cool. The fact you can turn it off is also a great, uh... But one of the challenges that came up was in The New York Times' ongoing lawsuit with OpenAI: they just asked the court to tell OpenAI it had to preserve consumer ChatGPT user records beyond the 30-day window that data is normally retained for. And, uh, Brad Lightcap-
- SASam Altman
Yeah
- AMAndrew Mayne
... just wrote a letter responding to this. Can you explain OpenAI's stance?
- SASam Altman
We're, we're, we're gonna fight that, obviously, and I suspect, I hope, but I do think we, we will win. Um, I think it was a crazy overreach of The New York Times to ask for that. Uh-
- AMAndrew Mayne
Yeah
- SASam Altman
... this is someone who says, you know, they value user privacy, whatever. Um, but I, to like look for the silver lining here, I hope this will be a moment where society realises that privacy is really important. Privacy needs to be a core principle of using AI. You cannot have a company like The New York Times ask an AI provider to compromise user privacy, and I think society needs to-- I think it's really unfortunate The New York Times did that, but I hope this accelerates the conversation that society needs to have about how we're going to treat privacy and AI, and I hope the answer is, like, we take it very, very seriously. People are having quite private conversations with ChatGPT now. ChatGPT will be a very sensitive source of information, and I think we need a framework that reflects that.
- AMAndrew Mayne
So that brings up the other question from people who are using this or are sceptical, is that OpenAI now has access to this data, and there's the concern, one, was about training,
- 16:15 – 20:30
Will ChatGPT ever show ads?
- AMAndrew Mayne
which OpenAI has been very clear about, when it is and isn't training. You have the option to turn that off. The other thing is, like, uh, advertising, things like that. What's OpenAI's approach towards that? How are you gonna handle that responsibility?
- SASam Altman
We haven't done any advertising product yet. Um, I kind of- I mean, I'm not totally against it. I can point to areas where I like ads. I think ads on Instagram, kind of cool. I bought a bunch of stuff from them. But I am like... I think it'd be very hard to- it would take a lot of care to get right.
- AMAndrew Mayne
Yeah.
- SASam Altman
Um, people have a very high degree of trust in ChatGPT, which is interesting because, like, AI hallucinates; it should be the tech that you don't trust that much.
- AMAndrew Mayne
My friends hallucinate, too, so I trust them to an extent.
- SASam Altman
People really-
- AMAndrew Mayne
Yeah
- SASam Altman
... do. Um, but I think part of that is if you compare us to social media or, you know, web search or something, where you can kind of tell that you are being monetised and the company is trying to, like, deliver you good products and services, no doubt, but also to kind of like get you to click on ads or whatever. Like, you know, how much, how much do you believe that, like, you're getting the thing that that company actually thinks is the best content for you versus something that's also trying to, like, interact with the ads? I, I think there's like, there's a psychological thing there. So, for example, I think if we started modifying the output, like the stream that comes back from the LLM-
- AMAndrew Mayne
Mm-hmm
- SASam Altman
... in exchange for who is paying us more, that would feel really bad.
- AMAndrew Mayne
Yeah.
- SASam Altman
And I, and I would hate that as a user. I think that'd be like a trust-destroying moment. Um, maybe if we just said, "Hey, we're never gonna modify that stream, but, like, if you click on something in there that is gonna be what we'd show anyway, we'll, like, we'll get, like, a little bit of the transaction revenue, and it's a flat thing for everybody." Um, if, if we, you know, have like an easy way to pay for it or something, maybe that could work. Maybe there could be, like, ads outside the transaction stream.... I, I, sorry, outside of the LLM stream-
- AMAndrew Mayne
Mm-hmm
- SASam Altman
... that are still really great. But, but the burden of proof there, I think, would have to be very high.
- AMAndrew Mayne
Yeah.
- SASam Altman
And it would have to feel like really useful to users and really clear that it was not messing with the LLM's output.
- AMAndrew Mayne
Yeah, it's gonna be a, a difficult one. I, I, I hope there's a solution. I would love to do all my [chuckles] purchasing through ChatGPT or a really good chatbot, because a lot of the times I feel like I'm not making the most informed decisions, and so mitigating-
- SASam Altman
Yeah, no, that's good if we can do it in some sort of really clear and aligned way. But I don't know. Like, I love that we build good services, people pay us for them. It's like-
- AMAndrew Mayne
Yeah
- SASam Altman
... very clear. It's a-
- AMAndrew Mayne
Well, that's benefit-
- SASam Altman
Yeah
- AMAndrew Mayne
... that's like I'd say the difference in models is like, I think Google builds great stuff. I think the new Gemini 2.5 is a really good model. I think they went from-
- SASam Altman
It is a really good model.
- AMAndrew Mayne
Yeah. They went from kind of like, eh, to, "Oh, man, these things are good." But at the end of the day, Google is an ad tech company, and that's the thing that always kind of, you know... Using their API and stuff, I'm not too concerned, but I do think about, like, man, if I'm using their chatbot, whatever that is, where their incentives are aligned.
- SASam Altman
Google Search was an amazing product for a long time. I-
- AMAndrew Mayne
Mm.
- SASam Altman
It does feel to me like it's degraded. Um, but, you know, there was like a time where there were lots of ads, but I still thought it was the best thing on the internet.
- AMAndrew Mayne
Mm.
- SASam Altman
I mean, I love Google Search. Uh, so I don't-- like, it's clearly possible to be a good ad-driven company, but... And I, like, respect a lot of things Google has done, but there are obviously issues, too.
- 20:30 – 23:25
Social media & user behavior
- AMAndrew Mayne
had, uh, an issue. There was a model update, and then the, the id- uh, the thing that happened was apparently the model was trying to be a little bit too pleasing, was trying to be a little bit too agreeable. And that brings up the human-AI interaction, as people are using these systems more and developing this relationship with that, like-
- SASam Altman
Yeah
- AMAndrew Mayne
... how do you see the shape of that coming, and what's OpenAI's position on personality?
- SASam Altman
One of the big mistakes of the social media era was the feed algorithms had a bunch of unintended negative consequences on society as a whole, and maybe even individual users, although they were doing the thing that a user wanted or someone thought that user wanted in the moment, which is get them to, like, keep spending time on the site. And that was the, that was the big misalignment of, of social media, and I think there were a lot of other things like, you know, making people upset kind of gets them stuck on more than being, like, happy and content. And I always knew that there'd be, like, new problems in the world of AI-
- AMAndrew Mayne
Mm-hmm
- SASam Altman
... um, where there'd be, like, something that was misaligned in a not obvious way. But definitely one of the first ones that we experienced was this: you try to, like, build a model that is most helpful to the user, so you ask a user what they want, showing them, say, two responses, which one's more helpful to you? On any given thing, you might want a model to behave one way, but over the course of, you know, all your interaction with an AI, that might not match up.
- AMAndrew Mayne
Mm-hmm.
- SASam Altman
You know, you can see, and we did see, these problems if you pay too much attention to the user signals. There were a lot of other things that we talked about in our, our postmortem, but I think this is just, like, an interesting one. Um, on the short horizon, you kind of don't get the behaviour that a user most wants, or that is most helpful or useful or healthy to a user in the long run. Um, so, you know, maybe the analogy to filter bubbles is going to be, uh, AIs that are, uh, you know, helpful to a user over a short horizon, but not over a long horizon.
- AMAndrew Mayne
Mm. Well, I, I think a sign of that was DALL-E 3, which I thought technically was a really capable model, but the images all kind of sorted into one genre, all kind of like an HDR sort of style. And was that from doing those sorts of comparisons, where users said, "Looking at just these two things in isolation, I prefer this one better?"
- SASam Altman
I don't remember for DALL-E 3, but I would assume so.
- AMAndrew Mayne
Yeah. Which I think it's gotten better. The new image model is like-
- SASam Altman
The new image model is fantastic.
- AMAndrew Mayne
Crazy good.
- SASam Altman
Yeah.
- AMAndrew Mayne
Yeah. Um, and I can only imagine where that's gonna go from here. So when you're building these things and you're increasing usage, and that's always been sort
- 23:25 – 31:30
Project Stargate & why compute matters
- AMAndrew Mayne
of a problem: the new image model comes out, and you have to restrict usage, and with Sora, you can only have a certain amount of compute to do that. It illustrates the big problem everybody's facing, which is compute. And so to address this, we've heard about Project Stargate, which has a very cool name, and it involves computers. Other than that, I think a lot of people, when they see the price tag, you know, half a trillion dollars, are going, like: "Wait, what?" What, what is the simple description I give to my mom about Stargate?
- SASam Altman
I think it's, it's quite simple. It's, uh, an effort to finance and build an unprecedented amount of compute. It's totally true that people- we don't have enough compute to let people, uh, do what they want, but if people knew what we could do with more compute, they would want way, way more. So there's this incredibly huge gap between what we could- what we can offer the world today and what we could offer the world with 10 times more compute, or someday, hopefully 100 times more compute. Um, and-... a thing that is different about AI than other technologies I've worked on, or at least AI, the scale of delivering it usefully to hundreds of millions, billions of people around the world, is just how big the infrastructure investment has to be. Um, and, and so Stargate is an effort to pull a lot of capital and technology and operational expertise together to build the infrastructure to go deliver the next generation of services to all the people who want them, and make intelligence as abundant and cheap as possible.
- AMAndrew Mayne
So it is a massive project, a global project. We talked before, one of the partners is the UAE. You're working with that, you're working with other governments around the world on this. Um, one of the questions that's been asked on social media: half a trillion dollars, five hundred billion dollars. Do you have the money?
- SASam Altman
We don't literally have it sitting in a bank account today, but-
- AMAndrew Mayne
Correct
- SASam Altman
... we are-
- AMAndrew Mayne
Is it in the room right now? [chuckles]
- SASam Altman
It's not in the room. But we will deploy it over the next-
- AMAndrew Mayne
Okay
- SASam Altman
... um, not even that many years. Uh, you know, unless something, like, really goes wrong and it turns out we can't build these computers, uh, I'm confident that people are, are good for it. Um, I've-- I went recently to the first site that we're building out in Abilene. Um, that'll be about, you know, roughly 10% of all of, all of the initial commitment to Stargate, the, the sort of five hundred billion. Um, it's incredible to see.
- AMAndrew Mayne
Yeah.
- SASam Altman
It is a sc-- like, I knew in my head what a roughly gigawatt-scale site looks like, but then to go see one being built, uh, and the, like, thousands of people running around doing construction, and going to, like, you know, stand inside the rooms where the GPUs are getting installed and just, like, look at how complex the whole system is and the speed at which it's going, is quite something. Uh, we'll have more to share about the next sites soon, but there's a great quote about a pencil, just like a standard, you know, wood and graphite pencil, and-
- AMAndrew Mayne
A pencil
- SASam Altman
... how no one person-
- AMAndrew Mayne
Yeah
- SASam Altman
... could build it. And, and it's, it's this, like, magic of capitalism.
- AMAndrew Mayne
Mm-hmm.
- SASam Altman
It's a miracle, really, that, like, that the world gets coordinated to do these things. And, and standing inside of the first Stargate site, I was really just thinking about the, the global complexity that it took to get these racks of GPUs running. You know, when you get your phone out and you type something into ChatGPT, and you get the answer back, you, you probably... At this point, you probably don't even think that's, like, particularly surprising. You just expect it to work. Um, there was a time, maybe the first time you tried it, you were like, "That is really amazing." But the work that happened over the last thousand, or at least many hundreds of years, of people working incredibly hard to get these hard-won scientific insights, and then to build the engineering and the companies and the complex supply chains and kind of reconfigure the world, that had to happen to get this, like, rack of magic put somewhere. Think about all the stuff that went into that, the, you know, the-- and trace it all the way back to people that were just, like, digging rocks out of the ground and seeing what happened. Um, so that you now get to just, you know, type something into ChatGPT, and it does something for you.
- AMAndrew Mayne
I read a behind-the-scenes story about the development of Project Stargate and the international partnerships, particularly the UAE, and that Elon Musk had tried to derail that. And what have you seen? What have you heard? What's the take on that?
- SASam Altman
I had said, I think also e- externally, but at least internally, after the election, that I didn't think Elon was going to abuse his power in the government to unfairly compete. And I regret to say I was wrong about that. I mean, I don't like being wrong in general-
- AMAndrew Mayne
Right
- SASam Altman
... but mostly I just think it's really unfortunate for the country that he would do these things, and I didn't think-- I genuinely didn't think he was going to. Um, I'm grateful that the administration has really done the right thing and stuck up to that kind of behavior. Um, but yeah, it sucks.
- AMAndrew Mayne
Well, I think the thing that's changed, and I think, uh, Greg Brockman just talked about this, where there was a couple of years ago where people thought, like, "Okay, whoever gets there first is the winner, and that's it, and the game is over." And now we realize there are great AI labs elsewhere. Like, Anthropic is building great tools. I think Google's really got its game up. There's good stuff happening everywhere, and it's not gonna be that one person runs away with it.
- SASam Altman
Yeah, I agree.
- AMAndrew Mayne
And so it seems-
- SASam Altman
I, I-- yeah, the, the, the example that I like the most is that the discovery of AI was analogous, not perfect, but close, um, to the discovery of the transistor in a surprising number of ways. Uh, but many companies are gonna build great things on that, and then eventually, it's gonna, like, seep into almost all products. But you won't think about using transistors all the time. So yeah, I think a lot of people are gonna build really successful companies built on this incredible scientific discovery, and I wish Elon would be less zero-sum about it.
- AMAndrew Mayne
Uh, yeah, I, I think-
- SASam Altman
Or negative sum.
- AMAndrew Mayne
I think the pie is just gonna get bigger and bigger if we think about that. I was just at an energy conference, and it was interesting talking to the people who were involved in energy production and stuff, and hyperscaling, the term they use for this, was a topic. Um, and that does bring up, like, the energy requirements. I know that for, like, Grok 3, apparently, I guess they had to put generators in the parking lot to be able to train that model. And that's the question, is like, where is the energy gonna come from? Money, I understand. Energy is the harder thing to think about when we talk about the scale of energy needed.
- SASam Altman
I think kinda everywhere.
- 31:30 – 38:45
Future progress & potential new AI devices
- AMAndrew Mayne
who was working on, I think it was the James Webb Space Telescope, and he talked about how his biggest bottleneck was they're about to get all of this, you know, terabytes of data, but he doesn't have enough scientists to work on it. Doesn't have enough people to go through the data. And here we- we have these-
- SASam Altman
Yeah
- AMAndrew Mayne
... answers about the universe, whatever, in front of us, and it's, like, a big data problem.
- SASam Altman
Yeah, I, um, I've always joked that one thing we should do when we have enough money, when OpenAI has enough money, is just build a gigantic particle accelerator- [chuckles]
- AMAndrew Mayne
[chuckles]
- SASam Altman
- and solve high-energy physics once and for all. Um, 'cause I think that'd be, like, a triumphant, wonderful thing. But I wonder, what are the odds that a really, really smart AI could look at the data we currently have-
- AMAndrew Mayne
Mm-hmm
- SASam Altman
... with no more data, no bigger particle accelerator, and just figure it out? It's not impossible.
- AMAndrew Mayne
Yeah.
- SASam Altman
A- a- and yeah, so there's this question of, like, okay, there's already a lot of data out there. There's a lot of smart people in the world, but we don't know how far intelligence can go. With no more experiments, how much more could we figure out?
- AMAndrew Mayne
Yeah, I remember reading something that had talked about how in the early 1990s, somebody had found, like, a form of Ozempic, all right, and presented it to, like, a drug company, and they said, "Nah, we're gonna pass on that." And that's been a life-changing drug for people, like, for people who've basically had chronic obesity, whatever.
- SASam Altman
Yeah.
- AMAndrew Mayne
It's gonna improve the quality of life, and you think, "Oh, this was sitting there for twenty-five years?"
- SASam Altman
I suspect there's a lot of other examples that we'll find where maybe we already have existing drugs that we know do something good, but they- they're reusable in some other big way, or with a couple of small modifications, we are very close to something great. Um, and it's been very heartening to hear from scientists using even the current generation of models for this kind of work.
- AMAndrew Mayne
So it sounds like one of the things we're gonna need, though, for next-generation models is models that understand physics and chemistry and stuff. Is Sora sort of a stab at that?
- SASam Altman
I mean, it'll understand, like, Newtonian physics. I don't know if it'll help us with discovering new chemistry and sort of, like, new, like, novel physics, uh, or novel theoretical physics or whatever you'd like, but I think I'm optimistic that the techniques we use for the reasoning models-
- AMAndrew Mayne
Mm-hmm
- SASam Altman
... will help us with those things a lot.
- AMAndrew Mayne
Okay. And what is the short definition of how a reasoning model works versus just me asking GPT-4.1 something?
- SASam Altman
So the GPT models can reason a little bit, and in fact, one of the, one of the things that got people really excited in the early days of the GPT models was you could get better performance by telling the model, "Let's think step by step."
- AMAndrew Mayne
Mm-hmm.
- SASam Altman
And it would then just output text that was thinking step by step and get a better answer, which was sort of amazing that that worked at all. The reasoning models are just pushing that much further.
- AMAndrew Mayne
So it's the idea of, like, when it's able to break the question down, it can spend more time on each step.
- SASam Altman
When you ask me something, a question, I-- if it's a really easy question, I might just fire back, like, almost on reflex with the answer. But if it's a harder question, I might think in my head and have, like, my internal monologue go and say, "Well, I could do this or that, or maybe, maybe, you know, this will be clearer. I'm not sure about that," and I could, like, backtrack and retrace my steps, and then when I finish thinking, and I've, you know, been thinking in English, I can then, you know, make some bullet points and then kind of, like, output an answer to you in English.
- AMAndrew Mayne
One, one of the interesting things I've observed now when I use the app, if I ask a deep research question or something, and I go away, on my lock screen, I get the, "It's still processing and thinking about it." And I heard somebody, another company, I forget who it was, was using a metric of how long something spent-- I think it was Anthropic, like said, "Hey, this model actually spent, like, fifteen minutes or thirty minutes, or whatever length of time, to think about a thing," which is a good metric, but it needs to actually give you the right answer.
- SASam Altman
Yeah.
- AMAndrew Mayne
And I thought that was sort of just an interesting paradigm of-
- SASam Altman
One thing I have been surprised by is people are surprisingly willing to wait for a great answer-
- AMAndrew Mayne
Yeah
- SASam Altman
... even if the model's gonna think a while. All of my instincts have been, you know, the instant response is the thing that matters, and users hate to wait. And for a lot of stuff, that's true. But for hard problems with a really good answer, people are quite willing to wait.
- 38:45 – 40:23
Final thoughts
- AMAndrew Mayne
have thoughts, so... If you're, uh, giving advice to a 25-year-old right now, what do you tell them?
- SASam Altman
I mean, the obvious tactical stuff is probably what you'd expect me to say, like, learn how to use AI tools. It's, it's funny how quickly the world went from telling, you know, the average 20-year-old to 25-year-old, "Learn to program"-
- AMAndrew Mayne
Mm-hmm
- SASam Altman
... to, "Programming doesn't matter. Learn to use AI tools." I wonder what will be next, but of course, there will be something next. Um, but that's, that's very good tactical advice. And then on the sort of like broader front, um, I believe that skills like resilience, adaptability, creativity, figuring out what other people want, uh, I think these are all surprisingly learnable. And it's not as easy as, say, like, go practice using ChatGPT, but it is doable, and those are the kind of skills that I think will pay off a lot in the next, you know, couple of decades.
- AMAndrew Mayne
And would you say the same thing to a 45-year-old, is it just learn how to use it in your role now?
- SASam Altman
Yeah, probably.
- AMAndrew Mayne
Whenever we reach whatever your personal definition of AGI is, will more people be working for OpenAI after that point, or before?
- SASam Altman
More.
- AMAndrew Mayne
More. So, yeah, I, I see a lot of online people like, "Ah, they're, they're so good. Why are they hiring people?" I'm like, " 'Cause computers can't do everything. They're not gonna do everything."
- SASam Altman
The slightly longer answer with more than one word is that, um, there will be more people, but each of them will do vastly more than what one person did, you know, in the pre-AGI times.
- AMAndrew Mayne
Right. Which is the goal of technology.
- SASam Altman
Yeah.
Episode duration: 40:23