a16z Podcast: Sam Altman on Sora, Energy, and Building an AI Empire
EVERY SPOKEN WORD
50 min read · 9,624 words
- 0:00 – 0:41
Introduction
- Sam Altman
Sort of thought we had like stumbled on this one giant secret that we had these scaling laws for language models, and that felt like such an incredible triumph. I was like, "We're probably never gonna get that lucky again." And deep learning has been this miracle that keeps on giving, and we have kept finding breakthrough after breakthrough. A-again, when we got the, the reasoning model breakthrough, like I, I also thought that was like we're never gonna get another one like that. And it just seems so improbable that this one technology works so well. But maybe this is always what it feels like when you discover like one of the big, you know, scientific breakthroughs. Is it-- if it, if it's like really big, it's pretty fundamental, and it just, it keeps working.
- 0:41 – 2:37
OpenAI’s Vision and Infrastructure
- Erik Torenberg
Sam, welcome to the a16z Podcast.
- Sam Altman
Thanks for having me. All right.
- Erik Torenberg
You've, uh, in another interview, described OpenAI as a combination of four companies: a consumer technology business, a mega-scale infrastructure operation, a research lab, and all the new stuff, including planned hardware devices. From hardware to app integrations, job marketplace to commerce, what do all these bets add up to? What's OpenAI's vision?
- Sam Altman
Yeah, I mean, maybe you should count it as three, maybe as, as four for kind of our own version of the, what traditionally would've been the research lab at this scale. But three core ones. Uh, w-we want to be people's personal AI subscription. I think most people will have one. Some people will have several. And you'll use it in some first-party consumer stuff with us, but you'll also log into a bunch of other services, and you'll just-- you'll use it from dedicated devices at some point. You'll have this AI that gets to know you and be really useful to you, and you'll-- that's what we wanna do. Um, it turns out that to support that, we also have to build out this massive amount of infrastructure. But the goal there, the, the, the mission is really like build this AGI and make it very useful to people.
- Ben Horowitz
And, and does the infrastructure, uh, do you think it will end up... You know, it's necessary for the main goal. Will it also separately end up being a-another business, or is it just really gonna be in service to the personal AI, or unknown?
- Sam Altman
You mean like would we sell it to other companies, this raw infrastructure?
- Ben Horowitz
Yeah. W-would you sell it to other companies? Um, yeah. Or, or, or, or, you know, it's such a massive thing. Would it, would it do something else? [chuckles]
- Sam Altman
It feels to me like there will emerge some other thing to do like that. But I don't know. We don't have a current plan.
- Ben Horowitz
Yeah, no, it is. Yeah.
- Sam Altman
It's currently just meant to like support the service we wanna deliver and the research.
- Ben Horowitz
Yeah. No, that makes sense.
- Erik Torenberg
Yeah. The, um-
- Ben Horowitz
And that, that, that's a good focus, too.
- Sam Altman
... but the scale is sort of like-
- Ben Horowitz
Ridiculous.
- Sam Altman
... terrifying enough that you've gotta be open to doing something else with it.
- Ben Horowitz
[laughs] Yeah, if you're building the biggest data center, uh, in the history of humankind.
- Sam Altman
The biggest infrastructure project in history.
- Ben Horowitz
Yeah.
- Sam Altman
Yeah.
- Ben Horowitz
Yeah.
- 2:37 – 5:08
Business Model and Vertical Integration
- Erik Torenberg
The, um... There was a great interview you did many years ago in Strictly VC, um, in sort of early OpenAI, well before ChatGPT, and they're asking, "What's the business model?" And you said, "Oh, well, we'll ask the AI. It'll, it'll figure it out for us."
- Sam Altman
[laughs]
- Erik Torenberg
Everybody laughs, but-
- Sam Altman
There have been multiple times, and there was just another one recently, where we have asked a then-current model for, you know, what should we do, and it has had an insightful answer we missed. So I, I think when we say stuff like that, people don't take us seriously or literally.
- Erik Torenberg
Yeah.
- Sam Altman
But maybe the answer is you should take, take us both.
- Erik Torenberg
Yeah.
- Ben Horowitz
Yeah. Yeah, well, no, as, as somebody who runs an organization, [laughs] I ask, uh, the AI a lot of questions about what I should do.
- Sam Altman
Yeah.
- Ben Horowitz
It comes up with some pretty interesting answers.
- Sam Altman
Sometimes.
- Ben Horowitz
Yeah. Yeah, yeah.
- Sam Altman
Sometimes not.
- Ben Horowitz
It does. You know, you have to g- you have to give it enough context, but...
- Erik Torenberg
What is, what is the thesis that, that connects these bets beyond more distribution, m-more compute? How do, how do we think about that?
- Sam Altman
I mean, the research enables us to make the great products, and the infrastructure enables us to do the research, so it is kind of like a vertical stack of things. Like, you can use ChatGPT or some other service to get advice about what you should do running an organization, but for that to work, it requires great research and requires a lot of infrastructure. So it is kind of just this one, this-
- Erik Torenberg
Yeah
- Sam Altman
... one thing. It's-
- Ben Horowitz
A-and do you think that there will be a point where that becomes completely horizontal, or will it stay vertically integrated for the foreseeable future?
- Sam Altman
I was always against vertical integration, and I now think I was just wrong about that.
- Ben Horowitz
Yeah. Interesting.
- Sam Altman
And there's kind of-- 'cause y-you'd like, you'd like to think that the economy is efficient, and the theory is that companies can do one thing, and then that's supposed to work.
- Ben Horowitz
Like to think that, yeah.
- Sam Altman
And in our case at least, it hasn't really.
- Ben Horowitz
Yeah.
- Sam Altman
I mean, it has in some ways, for sure.
- Ben Horowitz
Right. Right.
- Sam Altman
Like, there's people that make a m- like, you know, NVIDIA makes an amazing chip or whatever that a lot of people can use. But the, the story of OpenAI has certainly been towards, we have to do more things than we thought to be able to deliver on the mission.
- Ben Horowitz
Right. You know, although the, you know, the, the history of the computing industry has kind of been a story of kind of a back and forth in that, you know, there was the Wang Word Processor and then the personal computer, and the, the BlackBerry before the smartphone. Um, so there, you know, there has been this kind of vertical integration and then not, but then the iPhone is also vertically integrated. [chuckles]
- Sam Altman
The i- the iPhone, I think, is the most incredible product the tech industry has ever produced, and it is extraordinarily vertically integrated.
- 5:08 – 8:01
AGI, Sora, and Societal Co-evolution
- Erik Torenberg
W-which bets would you say are enablers of AGI versus which are sort of hedges against uncertainty?
- Sam Altman
I think you could say that on the surface, Sora, for example, does not look like it's AGI relevant. But I would bet that if we can build really great world models, that'll be much more important to AGI than people think. There were a lot of people who thought ChatGPT was not a very AGI-relevant thing, and it's been very helpful to us, not only in building better models and understanding how society wants to use this, but also in, like, bringing society along to actually figure out, man, we gotta contend with this thing now. We-- For a long time before ChatGPT, we would talk about AGI, and people were like, "This is not happening," or, "We don't care."
- Erik Torenberg
Yeah.
- Sam Altman
And then all of a sudden, they really cared. And, and I, I think that... So research benefits aside, I'm a big believer that society and technology have to co-evolve. It's-
- Erik Torenberg
Yeah
- Sam Altman
... you can't just drop the thing at the end. It doesn't work that way. It is, it is a sort of ongoing back and forth.
- Erik Torenberg
Yeah.
- Erik Torenberg
Say more about how Sora fits into your strategy, because there was some hullabaloo on, on X around, hey, um, you know, why devote precious GPUs to, to Sora? But i- is it a short-term, long-term trade-off, or are we so agent-
- Ben Horowitz
Well, and then the new one had like very interesting twists with the, uh, social networking. [laughs] Uh, be very interested in kinda how you're thinking about that and like, um, did, uh, Meta call you up and get mad? Or like, hey, what, what do you expect the reaction to be? [laughs]
- Sam Altman
Um, I think if one company of the two of us has- [laughs] ... feels like more like the other one has gone after them, it wouldn't-- They, they shouldn't be calling us. [laughs]
- Ben Horowitz
Well, I, I do know the history too, though.
- Sam Altman
But, but, uh, look, we're not gonna thr- like... First of all, I think it's cool to make great products, and people love the new Sora. And I also think it is important to give society a taste of what's coming, on this co-evolution point. So like very soon, the world is gonna have to contend with incredible video models that can deepfake anyone or kind of show anything you want, and that will mostly be great. There will be some adjustment that society has to go through. And just like with ChatGPT, we were like, the world kinda needs to understand where this is. I think it's very important the world understands where video is going very quickly, 'cause that's gonna be-- Video has much more, like, emotional resonance than text, and very soon we're gonna be in a world where, like, this is gonna be everywhere. So I think there's something there. Uh, as I mentioned, I think this will help our research program and is on the AGI path. But yeah, something like, you know, it can't all be about just making people, like, ruthlessly efficient and the AI, like, solving all our problems. There's gotta be, like, some fun and joy and delight along the way. But we won't throw, like, tons of compute at it-- or, not more than a fraction of our compute. [laughs]
- Ben Horowitz
Yeah.
- Sam Altman
It, it, it, it's tons in the absolute sense-
- Ben Horowitz
Yeah
- Sam Altman
... but not in the relative sense. [laughs]
- 8:01 – 9:12
The Future of AI Interfaces
- Erik Torenberg
Yeah. Yeah. I wanna talk about the future of AI human interfaces, 'cause back in August you said the models have already saturated the chat use case. W- so what do future AI human interfaces look like, both in terms of hardware and software? Is the vision for kind of a, a WeChat-like super app?
- Sam Altman
So I- I'm solving the chat thing in, like, a very narrow sense, which is if you're trying to, like, you know, have the m- most basic kind of chat-style conversation, it's very good. But what a chat interface can do for you, it's, like, nowhere near saturated. 'Cause you could ask a chat interface, like, "Please cure cancer." A model certainly can't do that yet. So I think the text interface style can go very far, even if for the chitchat use case, the models are already very good. Um, but, but of course there's better interfaces to have. Uh, actually, that's another thing that I think is cool about Sora. Like, you can imagine a world where the interface is just constantly real-time rendered video-
- Erik Torenberg
Yeah
- Sam Altman
... and what that would enable, and that's pretty cool. You can imagine new kinds of hardware devices that are sort of always ambiently aware of what's going on. And rather than your phone, like, blasting you with text message notifications whenever it wants, like it really understands your context and when to show you what, and there's a long way to go on all that stuff. Yeah.
- 9:12 – 11:44
AI Scientists and Scientific Progress
- Erik Torenberg
W- within the next couple years, what will models be able to do that they're not able to do today? Will it be sort of white collar, um, you know, re-replacement at a much deeper level, AI scientist, uh, human, humanoids, um, persons-
- Sam Altman
I mean, a l- a lot of things, but you touched on the one that I am most excited about, which is the s- the AI scientist.
- Ben Horowitz
Yeah.
- Sam Altman
This is crazy that we're sitting here seriously talking about this. The-- I know there's like a quibble on what the Turing test literally is, but, but the popular conception of the Turing test sort of went whooshing by.
- Ben Horowitz
Yeah. [laughs] That was fast, yeah.
- Sam Altman
You know, it was just like we talked about it-
- Ben Horowitz
[laughs]
- Sam Altman
... as this most important test of AI for a long time. It seemed impossibly far away. Then all of a sudden it was passed. The world freaked out for like a week, two weeks, and then it's like, "All right, I guess computers, like, can do that now."
- Ben Horowitz
Yeah.
- Sam Altman
And everything just went on. And I think that's happening again with science. Uh, my own personal, like, equivalent of the Turing test has always been when AI can do science. Like, that is always, like that is a real change to the world. And for the first time with GPT-5, we are seeing these little, little examples where it's happening. You see these things on Twitter. It did this-- It made this novel math discovery and did this small thing in my, you know, my physics research and my biology research. And everything we see is that that's gonna go much further. So in two years, I think the models will be doing bigger chunks of science and making important discoveries, and that is a crazy thing. Like, that will have a significant impact on the world. I am, I am a believer that to a first order, scientific progress is what makes the world better over time. And if we're about to have a lot more of that, that's a big change.
- Ben Horowitz
It's interesting 'cause that's a positive change that people don't talk about. Y- y- th- it, it's gotten so, um, much into the realm of the negative changes if AI gets extremely smart. But, uh-
- Sam Altman
But curing every disease is like- We, we could use a lot more science.
- Ben Horowitz
Yeah. Yeah. Yeah. That, that, that, that, that's a really good point. I think Alan Turing said this. Somebody asked him, they said, "Well, you really think the, uh, computer's gonna be, you know, smarter than the brilliant minds?" He said, "It doesn't have to be smarter than a brilliant mind, just smarter than a mediocre mind like the president of AT&T." [laughs] And, uh, [laughs] we could use more of that too, probably.
- Erik Torenberg
We, um, we just saw Periodic launch last week.
- Sam Altman
Yeah.
- Erik Torenberg
You know, OpenAI alums. And, uh, yeah, to, to, to that point, it- it's amazing to see both the innovation that you guys are doing, but also the, the teams that, you know, come out of OpenAI just feels like are, are, you know, creating tremendous capable things.
- Sam Altman
We certainly hope so.
- Erik Torenberg
Yeah.
- 11:44 – 16:17
Reflections on Progress and Model Capabilities
- Erik Torenberg
Um-- I wanna ask you about just broader reflections in terms of what sort of about diffusion or, uh, development in 2025 has surprised you, or what has sort of updated your worldview since ChatGPT came out?
- Sam Altman
A lot of things again, but maybe the m- most interesting one is how much new stuff we found. Sort of thought we had, like, stumbled on this one giant secret that we had these scaling laws for language models, and that felt like such an incredible triumph that I was like, "We're probably never gonna get that lucky again." And deep learning has been this miracle that keeps on giving, and we have kept finding, like, breakthrough after breakthrough. A-a-again, when we got the, the reasoning model breakthrough, like I, I also thought that was like, we're never gonna get another one like that. Uh, and it just seems so improbable that this one technology works so well. But maybe this is always what it feels like when you discover like one of the big, you know, scientific breakthroughs. Is if, if it's like really big, it's pretty fundamental, and it just- it keeps working. But the amount of progress-- Like if you went back and used GPT-3.5 from the ChatGPT launch, you'd be like, "I cannot believe anyone used this thing."
- Erik Torenberg
Yeah. [chuckles]
- Sam Altman
And, and now we're in this world where the capability overhang is so immense. Like most of the world still just thinks about what ChatGPT can do, and then you have like some nerds in Silicon Valley that are using Codex, and they're like, "Wow, those people have no idea what's going on." And then you have like a few scientists who say, "Those people using Codex have no idea what's going on." But the, the overhang of capability has come-- is, is, is so big now, and we've just come so far on the-- what the models can do.
- Erik Torenberg
And in terms of further development, how far can we get with, with LLMs? At, at what point do we need a new architecture? H-how do you think about what breakthroughs are needed?
- Sam Altman
I think far enough that we can make something that will figure out the next breakthrough with the current technology. Like, I-I-- It's a very self-referential answer, but if, if LLMs can get-- If LLM-based stuff can get far enough that it can do, like, better research than all of OpenAI put together, maybe that's, like, good enough.
- Ben Horowitz
Yeah, that would be a big breakthrough. [chuckles] A very big breakthrough. So o-on, um, on the more mundane, you know, one of the things that, uh, people have kind of started to complain about, I think South Park did a whole episode on it, is kind of the obsequiousness of, uh, of kind of AI and ChatGPT in particular. And how hard a problem is that to deal with? Is it not that hard, or is it like kind of a fundamentally hard problem?
- Sam Altman
Oh, it's not at all hard to-
- Ben Horowitz
Yeah
- Sam Altman
...deal with. A lot of users really want it.
- Ben Horowitz
Yeah. [chuckles] Okay.
- Sam Altman
Like, if you go look at what people-
- Ben Horowitz
Yeah
- Sam Altman
...say about ChatGPT online-
- Ben Horowitz
Yeah
- Sam Altman
...there's a lot of people who, like, really want that back.
- Ben Horowitz
Yeah.
- Sam Altman
Um, and it is, you know.
- Ben Horowitz
Yeah.
- Sam Altman
So it's not-- technically, it's not hard to deal with at all. Um, one thing, and this is not surprising in any way, but the, the incredibly wide distribution of what users want-
- Ben Horowitz
Yeah
- Sam Altman
...out of how-- of, like, how they'd like a chatbot to behave in big and small ways.
- Ben Horowitz
Does that-- Do you end up having to configure the personality then, you think? Is that gonna be the answer?
- Sam Altman
I think so. Uh, I mean, ideally, like you just talk to ChatGPT for a little while, and it kinda interviews you and also sort of sees what you like-
- Ben Horowitz
Yeah
- Sam Altman
...and don't like, and-
- Ben Horowitz
A-and ChatGPT just figures it out-
- Sam Altman
And just figures it out
- Ben Horowitz
...figures itself out.
- Sam Altman
But in the short term, you'll probably just pick one.
- Ben Horowitz
Got it. Yeah, no, that makes sense. Very interesting. And, um, actually, so, so one thing I wanted to ask about is, uh-
- 16:17 – 17:34
Sam's Experience as CEO & Leadership Lessons
- Ben Horowitz
did this deal with AMD. Um, and you know, of course, the company's in a different position, and you have more leverage and these kinds of things, but, like, how has your kind of thinking changed over the years since you did that, that initial deal, if at all?
- Sam Altman
I, I had very little operating experience then.
- Ben Horowitz
Yeah.
- Sam Altman
I had very little experience-
- Ben Horowitz
Right
- Sam Altman
...running a-- Like I, I am, I am not naturally someone to run a com-- I, I'm a great fit to be an investor.
- Ben Horowitz
[chuckles] Yeah.
- Sam Altman
And I kinda felt that was gonna be-- That was what I did before this, and I thought that was gonna be my career.
- Ben Horowitz
Yeah, yeah.
- Sam Altman
And-
- Ben Horowitz
Although you were a CEO before that.
- Sam Altman
I-- Not a good one. Um-
- Ben Horowitz
[chuckles]
- Sam Altman
And, and so I think I had the mindset of, like, an investor advising a company-
- Ben Horowitz
Oh, interesting. Right
- Sam Altman
...when we did the thing. And now I understand what it's like to actually have to run a company.
- Ben Horowitz
[laughs] Yeah. Right. Right, right. So-
- Sam Altman
So they're, they're different
- Ben Horowitz
...there, there's more than just the numbers. Yeah.
- Sam Altman
I, I've learned a lot about-
- Ben Horowitz
Yeah
- Sam Altman
...how to, you know-
- Ben Horowitz
Yeah
- Sam Altman
...like, how you have to like-- what, what operational-- how you-- like what it takes to operationalize deals over time and-
- Ben Horowitz
Right. All, all, all the implications of the agreement-
- Sam Altman
Yeah
- Ben Horowitz
...as opposed to just, "Oh, we're gonna get distribution and money."
- Sam Altman
Yeah.
- Ben Horowitz
Yeah. That makes sense. Yeah, no, 'cause it, it's really... I, I'll just say I was very impressed at the deal structure improvement.
- Sam Altman
Yeah. Right.
- 17:34 – 25:05
Strategic Partnerships and Scaling Infrastructure
- Erik Torenberg
You know, in the last few weeks alone, you mentioned AMD, but also Oracle, NVIDIA. You've chosen to, you know, strike these deals and partnerships with, with companies that you collaborate with but could also potentially compete with in, in, in certain areas. H-how do you decide, you know, when to collaborate versus when, when not to, or h-how do you just think about it?
- Sam Altman
Um, we have decided that it is time to go make a very aggressive infrastructure bet, and we're like-- I've never been more confident in the research roadmap in front of us and also the economic value that will come from using those models. But to make the bet at this scale, we kinda need the whole industry to-
- Erik Torenberg
Yeah
- Sam Altman
...or a big chunk of the industry, to support it. And this is like, you know, from the level of like electrons to model distribution and all the stuff in between, which is a lot. And so we're gonna partner with a, a lot, a lot of people. Uh, you should expect, like, much more from us in the coming months.
- Ben Horowitz
Actually, expand on that 'cause y-y... when you talk about the scale, it does feel like in your mind the, the limit on it is unlimited. Like, you would scale it as, as, you know, as big as you possibly could.
- Sam Altman
I mean, there's, like, some-- There's totally a limit. Like, there's some amount of global GDP, uh-
- Ben Horowitz
Yeah. [laughs] Well, yes, yes
- Sam Altman
... and, you know, there's some fraction of it that is knowledge work, and we don't do robots yet.
- Ben Horowitz
Yes.
- Sam Altman
But-
- Ben Horowitz
But, but the limits are out there
- Sam Altman
... it, it feels like the limits are very far from where we are today.
- Ben Horowitz
Yeah.
- Sam Altman
If we are right about-
- Ben Horowitz
Mm-hmm
- Sam Altman
... So, so, I shouldn't say from where we are. Uh, like, if we are right that the model capability is gonna go where we think it's gonna go, then the economic value that sits there can, can go very, very far.
- Ben Horowitz
Right. So you wouldn't do it, like, if all you ever had-
- Sam Altman
I wouldn't have-
- Ben Horowitz
... was today's model, you wouldn't go there. But-
- Sam Altman
No, definitely not
- Ben Horowitz
... so it's a combination.
- Sam Altman
I mean, uh, we would still expand because-
- Ben Horowitz
Mm-hmm
- Sam Altman
... we can see how much, uh, demand there is we can't serve with today's model, but we would not be going this aggressive if all we had was today's model.
- Ben Horowitz
Right.
- Sam Altman
Yeah.
- Ben Horowitz
Right.
- Sam Altman
We get to see a year or two in advance, though, so-
- Ben Horowitz
Yeah
- Sam Altman
... like...
- 25:05 – 28:33
Regulation, Safety, and Societal Impact
- Ben Horowitz
Um, well, but to that end, how, how have you sort of evolved your thinking? You mentioned you evolved your thinking on sort of, uh, you know, vertical integration. How have you evolved your thinking, or what's the latest thinking on sort of AI stewardship, you know, safety? Uh, what, what's the latest thinking on that?
- Sam Altman
I do still think there are gonna be some really strange or scary moments. Uh, the fact that, like, so far the technology has not produced a really scary, giant risk doesn't mean it never will. It also, like, there's-- We're talking about it's kinda weird to have, like, billions of people talking to the same brain. Like-
- Ben Horowitz
Wow
- Sam Altman
... there may be these weird societal-scale things that are already happening, and we-- that aren't scary in the big way but are just sort of different. Um, but I expect, like... I expect some really bad stuff to happen because of the technology, which also has happened with previous technologies. And-
- Ben Horowitz
Mm-hmm
- Sam Altman
... I think-
- Ben Horowitz
All the way back to fire.
- Sam Altman
Yeah.
- Ben Horowitz
Yeah.
- Sam Altman
And I think we'll, like, develop some guardrails around it as a, as a society.
- Ben Horowitz
Yeah. What, what is sort of your latest thinking on the, the right mental models we should have around the, the right regulatory f-frameworks to, to think about or, or the ones we shouldn't be thinking about?
- Sam Altman
Um, I think most... I think the right thing to f-- I, I think most regulation, uh, probably has a lot of downside. The one thing I would like is as the models get-- The thing I would most like is as the models get truly, like, extremely superhuman capable, um, I think those models and only those models are probably worth some sort of, like, very careful safety testing, uh, as, as the frontier pushes back. Um, I don't want a big bang either.
- Ben Horowitz
Mm-hmm.
- Sam Altman
And you can see a bunch of ways that could go very seriously wrong. But I hope we'll only focus the regulatory burden on that stuff and not all of the wonderful stuff that less capable models can do, that you could just have, like, a European-style complete clampdown on, and that would be very bad.
- Ben Horowitz
Yeah, it seems like the, the thought experiment that, okay, there's going to be a model down the line that is a super, superhuman intelligence that could, you know, do some kind of takeoff-like thing, w-we really do need to wait till we get there, uh, um, or, like, at least we get to a much bigger scale, or we get close to it. Um, because nothing is gonna pop out of your lab in the next week that's gonna do that. And I, I think that's where we as an industry kind of confuse the regulators-
- Sam Altman
Yeah
- Ben Horowitz
... uh, because I think you, you, you really could-- One, y-you damage America in particular in that, um, like, China's not gonna have that kind of restriction. And, a-and you getting behind, um, in AI, I think would be very dangerous for the world.
- Sam Altman
Extremely dangerous.
- Ben Horowitz
Yeah.
- Sam Altman
Extremely dangerous.
- Ben Horowitz
Much more dangerous than not regulating something we don't know how to do yet.
- Sam Altman
Yeah.
- Ben Horowitz
Yeah.
- Sam Altman
You also wanna talk about copyright?
- Ben Horowitz
Um,
- 28:33 – 33:15
Copyright, Open Source, and Content Creation
- Ben Horowitz
yeah, so [laughs] well, th-th-that, that's a segue. But, um, wh-when you think about-- Well, I guess, how do you see copyright unfolding? 'Cause you've done some very interesting things, um, with the opt-out. [laughs] Uh, and y-you know, as you see people selling rights, do you think they'll be bought exclusively? Or will it be just like, um, I could sell it to everybody who wants to pay me? Or h-how do you think that's gonna unfold?
- Sam Altman
This is my current guess. It, it... Speaking of that, like, society and technology co-evolve-
- Ben Horowitz
Mm-hmm
- Sam Altman
... as the technology goes in different directions. And we saw an example of that different-- like, video models got a very different response from rights holders than image gen does.
- Ben Horowitz
Yeah. Yes.
- Sam Altman
So, like, you'll see this continue to move. But forced to guess from the position we're in today, I would say that society decides training is fair use.
- Ben Horowitz
Mm-hmm.
- Sam Altman
But there's a new model for generating content in the style of, or with the IP of, or something else.
- Ben Horowitz
Mm-hmm.
- Sam Altman
So, you know, anyone can read, like a human author can. Anybody can read a novel and get some inspiration, but you can't reproduce the novel on your own.
- Ben Horowitz
Right.
- Sam Altman
And-
- Ben Horowitz
You can talk about Harry Potter, but you can't re- spit it out.
- Sam Altman
Yes.
- Ben Horowitz
Yeah.
- Sam Altman
Although, another thing that I think will change, um, i-in the case of Sora, we've heard from a lot of concerned rights holders and also a lot of-
- Ben Horowitz
Name and likeness. [laughs]
- Sam Altman
And a, and a lot of rights holders who are like, "My concern is you won't put my character in enough."
- Ben Horowitz
Yeah. [laughs]
- Sam Altman
[laughs]
- Ben Horowitz
Yeah, yeah.
- Sam Altman
I want restrictions, for sure, but, like, if I'm, you know, whatever, and I have this character, like, I don't want the character to say some crazy offensive thing, but, like-
- Ben Horowitz
Yeah
- Sam Altman
... I want people to interact.
- Ben Horowitz
Right.
- Sam Altman
Like, that's how they develop the relationship-
- Ben Horowitz
Right
- Sam Altman
... and that's how, like, my franchise gets more valuable. And if you become really-- If you're picking, like, his character over my character all the time, like, I don't like that.
- Ben Horowitz
Yeah.
- Sam Altman
So I can completely see a world where, subject to the decisions that a rights holder has made, they get more upset with us for not generating their character often enough-
- 33:15 – 37:07
Energy, Policy, and AI’s Resource Needs
- BHBen Horowitz
of the interpretation of everything to somebody-
- SASam Altman
Yeah
- BHBen Horowitz
... who may be or may not be influenced heavily by the Chinese government. Yeah.
- ETErik Torenberg
What about-
- BHBen Horowitz
And by the way, we see, I mean, you know, just to give you-- and, and we really thank you for, um, putting out a really good open source model because what we're seeing now is in all the universities, they're all using-
- SASam Altman
Yeah
- BHBen Horowitz
... the Chinese models.
- SASam Altman
Yep.
- BHBen Horowitz
Yeah. Which feels very dangerous.
- ETErik Torenberg
You, you've said that the, the things you care most about professionally are AI and energy.
- SASam Altman
I did not know they were gonna end up being the same thing.
- ETErik Torenberg
[laughs]
- SASam Altman
They were two independent interests that really converged.
- BHBen Horowitz
Yeah. [chuckles]
- SASam Altman
Um.
- BHBen Horowitz
Yeah.
- ETErik Torenberg
Ta-talk more about how your interest in energy, uh, sort of began, how you sort of chosen to, to play in it, and then we could talk about, you know, how, how they converge.
- BHBen Horowitz
Uh, 'cause you started your career in physics, yeah.
- SASam Altman
CS, CS and physics. Yeah. Uh, well, I never really had a career. I studied physics. [chuckles]
- BHBen Horowitz
Yeah. You studied physics, yeah.
- SASam Altman
My, my first job was like a CS job. Like I-
- ETErik Torenberg
Yeah.
- SASam Altman
This is an oversimplification, but roughly speaking, I, I think if you look at history, the best, the highest impact thing to improve people's quality of life has been cheaper and more abundant energy. And so it seems like pushing that much further is a good idea. And I, I don't know, I just like-- People have these different lenses they look at the world by. I see energy everywhere.
- ETErik Torenberg
Yeah.
- BHBen Horowitz
Yeah. And so [chuckles] getting to c- 'cause we've kind of, uh, i-in the West, I think we've, uh, painted ourselves into a little bit of a corner on energy, um, by both outlawing nuclear for a very long time.
- SASam Altman
That was an incredibly dumb decision.
- BHBen Horowitz
Yeah. And then, k- you know, like also a lot of policy restrictions on energy. Um, and, you know, worse in Europe than in the US, but also dangerous here. And now with AI here, it feels like we're gonna need all the energy from every possible source. And how do you see that developing kind of policy-wise and technologically? Like, what are gonna be the big sources, and how will those kind of curves cross? Um, and then what's the right policy posture around, you know, drilling, fracking, all these kinds of things?
- SASam Altman
I expect in the short term it will be most of the net new in the US will be natural gas-
- BHBen Horowitz
Mm-hmm
- SASam Altman
... for, relative to at least sort of base load energy. In the long term, I expect it'll be a... I don't know what the ratio, but the two dominant sources will be, uh, solar plus storage and nuclear.
- 37:07 – 43:03
Monetization and User Behavior
- BHBen Horowitz
Yeah.
- ETErik Torenberg
On OpenAI, what, what's, what's the latest thinking in terms of monetization, in terms of either certain experiments or cer-certain things that you could see yourself, uh, spending more time or less, less time on, you know, different models that you're excited about or?
- SASam Altman
The thing that's top of mind for me, like right now, just 'cause it just launched and there's so much usage, is h- what we're gonna do for Sora.
- ETErik Torenberg
Yeah.
- SASam Altman
Um, uh, another thing you learn once you launch one of these things is how people use them versus how you think they're gonna use them.
- ETErik Torenberg
Yeah.
- SASam Altman
And people are certainly using Sora the ways we thought they were going to use it, but they're also using it in these ways that are very different. Like, people are generating funny memes of them and their friends and sending them in a group chat, and that will require a very different-
- ETErik Torenberg
Yeah.
- SASam Altman
Like, Sora videos are expensive to make.
- ETErik Torenberg
Yeah.
- SASam Altman
Uh, or so that will require a very different-- You know, for people that are doing that like hundreds of times a day, it's gonna require a very different monetization method than the kinds of things we were, we were thinking about.
- ETErik Torenberg
Yeah.
- SASam Altman
I think it's very cool that the thesis of Sora, which is people actually wanna create a lot of content, it's, it's not that, you know, the traditional naive thing that it's like one percent of users create content, ten percent leave comments, and a hundred percent view. Maybe a lot more wanna create content, but it's just been harder to do, and I think that's a very cool change. But it does mean that we gotta figure out a very different monetization model for this than we were thinking about if people wanna create that much. I assume it's like some version of you have to charge people per generation, per generation when, when, when it's this expensive. Um, but that's like a new thing we haven't had to really think about before.
- ETErik Torenberg
What's your thinking on ads for the long tail?
- SASam Altman
Open to it. I, like many other people, I find ads somewhat distasteful, but not, not a non-starter. Um, and there's some ads that I like. Like, one thing I give Meta a lot of credit for is Instagram ads are like a net value add to me. Um-
- ETErik Torenberg
Hmm.
- SASam Altman
I like Instagram ads.
- ETErik Torenberg
Yeah.
- SASam Altman
I've never felt that. Like, you know, on, on Google, I feel like I know what I'm looking for. The first result is probably better. The ad is an annoyance to me. On Instagram, it's like, "I didn't know I want this thing. It's very cool. I'd never heard of it. I never would have thought to search for it. I want the thing."
- ETErik Torenberg
Hmm.
- SASam Altman
So that's like, there's kinds of things like that, but people have a very high trust relationship with ChatGPT. Even if it screws up, even if it hallucinates, even if it gets it wrong, people feel like it is trying to help them and that it's trying to do the right thing. And as-- if we broke that trust, it's like you say, "What coffee machine should I buy?"
- ETErik Torenberg
Yeah.
- SASam Altman
And we recommended one, and it was not the best thing we could do, but the one we were getting paid for, that trust would vanish. So, like, that kind of ad does not, does not work. There are others that I imagine that could work totally fine, um, but that would require, like, a lot of care to avoid the obvious traps.
- ETErik Torenberg
Yeah.
- BHBen Horowitz
Hmm. And then how, how big a problem is, you know, just you-- extending the Google example, is like, um, you know, f-fake, uh, content that then gets slurped in by the model, and then they recommend the wrong coffee maker 'cause somebody just blasted a thousand great reviews of their horrible coffee maker?
- SASam Altman
You know, this is-- So there's all of these things that have changed very quickly for us.
- BHBen Horowitz
Yeah. [chuckles]
- SASam Altman
Um, this is one of those examples that people are doing these-
- BHBen Horowitz
Yeah
- SASam Altman
... crazy things to-- maybe not even fake reviews, but just paying a bunch of, like, human re-
- 43:03 – 45:20
The Talent War and Personal Reflections
- ETErik Torenberg
We've, uh, we've given Meta their flowers, so right now I feel like I can ask you this question, which is the great talent war of 2025 has, has, has taken place and OpenAI remains intact. Uh, te-team as strong as ever, sh-shipping in-in-incredible products. What can you say about what, what it-- what's been like this year in, in terms of just everything that's, that's been going on?
- SASam Altman
I mean, every year has been exhausting-
- ETErik Torenberg
Yeah. [chuckles]
- SASam Altman
... since we like, uh, I... I remember when the first few years of running OpenAI were like the most fun professional years of my life by far. It was like unbelievable.
- SPSpeaker
Yeah.
- SASam Altman
You know, the running-
- SPSpeaker
Tell them before you release the product. [chuckles]
- SASam Altman
Yeah, yeah, yeah. Running a research lab-
- ETErik Torenberg
Yeah.
- SASam Altman
... with the smartest people doing this, like amazing-
- ETErik Torenberg
Yeah
- SASam Altman
... like historical work, and I got to watch it, and that was very cool. And then we launched ChatGPT, and everybody was like congratulating me, and I was like, m-m-my life is about to get completely ransacked. And of course it has. Uh, and but it, it, it feels like it's just been crazy all the way through. It's been almost three years now, and I think it does get a little bit crazier over time, but I'm like more used to it, so it feels about the same.
- ETErik Torenberg
Yeah. We've talked a lot about OpenAI, but you also have a few other companies, Retro Biosciences in longevity, and energy companies like Helion and, and Oklo. Did you have a, a master plan, you know, a decade ago to sort of make some big bets across these major spaces? Or h-how, how do we think about the Sam Altman arc in this way?
- SASam Altman
No, I just wanted to like use my capital to fund stuff I believed in. Like, I, I didn't-- it, it felt, yeah, it felt like a good use of capital.
- ETErik Torenberg
Yeah.
- SASam Altman
Like, and more fun or more interesting to me, and certainly like a better return than like buying a bunch of art or something.
- ETErik Torenberg
Yeah. W-what about the quote-unquote human algorithm do you think AIs of the future will find most fascinating?
- SASam Altman
I mean, kind of the whole-- I would bet the whole thing, like the whole-- My intuition is that like AI will be fascinated by all of these things to study and observe and-
- ETErik Torenberg
Mm.
- SASam Altman
... you know, like-
- ETErik Torenberg
Yeah.
- SASam Altman
Yeah.
- 45:20 – 49:25
Advice for Founders
- ETErik Torenberg
In, in closing, I, I love this insight you, you had, um, where you talked about how, you know, the, the next opening-- this-- a mistake investors make is pattern matching off previous breakthroughs and just trying to find, oh, what's the, what's the next Facebook or what, what's the next OpenAI? And, and that the next, you know, potential trillion-dollar company won't look exactly like O-OpenAI. It will be built off of the breakthrough that OpenAI has helped, you know, emerge, which is, you know, near free A-A-AGI at scale in the same way that OpenAI-
- SASam Altman
Yeah
- ETErik Torenberg
... leveraged pre-previous breakthroughs. And so for founders and investors and people trying to ascertain the future listening to this, how, how do you think about a world in which there is-- OpenAI achieves its mission, there is near, near free AGI? What types of opportunities m-might emerge for, for company building or, or investing that you're potentially excited about as you put your investor hat on or company building hat on?
- SASam Altman
I, I, I have no idea. I mean, I have like guesses-
- ETErik Torenberg
[chuckles]
- SASam Altman
... but they're like, they're, they're-
- ETErik Torenberg
Really?
- SASam Altman
I, I have learned-
- SPSpeaker
You're always wrong. [chuckles]
- SASam Altman
I-- you've learned you're always wrong. I've learned deep humility on this point. Um, I think the, the on-- like... I think if you try to like armchair quarterback it, you sort of say these things that sound smart, but they're pretty much what everybody else is saying, and it's like really hard to get the right kind of conviction. The only way I know how to do this is to like be deeply in the trenches exploring ideas, like talking to a lot of people, and I don't have time to do that anymore.
- ETErik Torenberg
Yeah.
- SASam Altman
Like, I only get to think about one thing now.
- ETErik Torenberg
Yeah.
- SASam Altman
So I w- I would just be like repeating other people's or saying the obvious things. But I think it's a very important-- like, if you are an investor or a founder, I think this is the most important question, and you don't-- you, you figure it out by like building stuff and playing with technology and talking to people and being out in the world. I have been always enormously disappointed by the willingness of investors to back this kind of stuff, even though it's always the thing that works. You all have done a lot of it, but most firms just kinda chase whatever the current-
- ETErik Torenberg
Yeah. [chuckles]
- SASam Altman
... thing is, and so do most founders.
- ETErik Torenberg
Yeah.
- SASam Altman
Uh, so I hope people will try to go.
- ETErik Torenberg
Yeah. We, we talk about how, you know, silly, you know, five-year plans can be in a world that's constantly changing. It feels like when I was asking you about your master plan, you know, your, your career arc has been following your curiosity, staying, you know, super close to the, the s- the smartest people, uh, the, uh-- super close to technology, and just identifying opportunities in just kind of an organic and incremental way from there.
- SASam Altman
Uh, yes, but AI was always the thing I wanted to do.
- ETErik Torenberg
Yeah.
- SASam Altman
I went to co-
- ETErik Torenberg
Yeah.
- SASam Altman
I, I, I-
- ETErik Torenberg
Right
- SASam Altman
... studied AI. I worked in the AI lab between my freshman and sophomore year of college.
- ETErik Torenberg
Yeah.
- SASam Altman
It wasn't working at the time, so I'm like not, I'm not like enough of a-- I, I don't wanna like work on something that's totally not working. It was clear to me at the time AI was totally not working. Um, but I've been an AI nerd since I was a kid. Like this-
- ETErik Torenberg
Yeah.
- SPSpeaker
It's so amazing how it, you know, got enough GPUs, got enough data, and the lights came on. [chuckles]
Episode duration: 49:26
Transcript of episode JfE1Wun9xkk