The Twenty Minute VC | Sam Altman: What Startups Will Be Steamrolled by OpenAI & Where Is Opportunity | E1223
EVERY SPOKEN WORD
75 min read · 14,601 words
- 0:00 – 1:01
Intro
- Sam Altman
We are gonna try our hardest and believe we will succeed at making our models better and better and better. And if you are building a business that patches some current small shortcomings, if we do our job right, then that will not be as important in the future. We believe that we are on a pretty, a quite steep trajectory of improvement, and that the current shortcomings of the models today will just be taken care of by future generations. And I would encourage people to be in line with that.
- Harry Stebbings
Ready to go? (instrumental music plays) (mouse clicking) Hello everyone. Welcome to OpenAI DevDay. I am Harry Stebbings of 20VC, and I am very, very excited to interview Sam Altman. Welcome, Sam. Sam, thank you for letting me do this today with you.
- Sam Altman
Thanks for being here, Harry.
- Harry Stebbings
Now, we have many, many questions from the audience, and so I wanted to start with one.
- 1:01 – 1:47
Will OpenAI's Future Focus Be on Smaller or Larger Models?
- Harry Stebbings
When we look forward, is the future of OpenAI more models like o1, or is it more larger models that we would maybe have expected of old? How do we think about that?
- Sam Altman
I mean, we want to make things better across the board, but this direction of reasoning models is of particular importance to us. I think reasoning will unlock... I hope reasoning will unlock a lot of the things that we've been waiting years to do. And the- the ability for models like this to, for example, contribute to new science, uh, help write a lot more very difficult code, uh, that I think can drive things forward to a significant degree. So you should expect rapid improvement in the o-series of models, and it's of great strategic importance to us.
- 1:47 – 4:05
Will No-Code Tools Empower Non-Technical Founders?
- Harry Stebbings
Another one that I thought was really important for us to touch on was when we look forward to OpenAI's future plans, how do you think about developing no-code tools for non-technical founders to build and scale AI apps? How do you think about that?
- Sam Altman
It'll get there for sure. Uh, I- I think the- the first step will be tools that make people who know how to code well more productive. But eventually, I think we can offer really high-quality no-code tools. And already there's some out there that make sense. But you can't- you can't sort of in a no-code way say, "I have like a full startup I wanna build." Um, that's gonna take a while.
- Harry Stebbings
So, when we look at where we are in the stack today, OpenAI sits in a certain place. How far up the stack is OpenAI going to go? I think it's a brilliant question but, uh, if you're spending a lot of time tuning your RAG system, is this a waste of time because OpenAI ultimately thinks they'll own this part of the application layer, or is it not? And how do you answer a founder who has that question?
- Sam Altman
The general answer we try to give is we are gonna try our hardest and believe we will succeed at making our models better and better and better. And if you are building a business that patches some current small shortcomings, if we do our job right, then that will not be as important in the future. If, on the other hand, you build a company that benefits from the model getting better and better, if- If, you know, an oracle told you today that o4 was gonna be just absolutely incredible and do all of these things that right now feel impossible and you were happy about that, then, you know, maybe we're wrong but at least that's what we're going for. And if instead you say, "Okay, there's this area where..." There are many, but you pick one of the many areas, "where o1-preview underperforms, and so I'm gonna patch this and just barely get it to work," then you're sort of assuming that the next turn of the model crank won't be as good as we think it will be. And that is the general philosophical message we try to get out to startups. Like, we- we believe that we are on a pretty, a quite steep trajectory of improvement and that the current shortcomings of the models today, um, will just be taken care of by future generations. And, you know, I would encourage people to be in line
- 4:05 – 6:42
Where Will OpenAI Dominate & How Should Founders and Investors Prepare?
- Sam Altman
with that.
- Harry Stebbings
We did an interview before with Brad, and sorry, it's not quite on schedule, but I think this show has always been successful when we kind of go a little bit off schedule. There was this brilliant-
- Sam Altman
You can both fall off.
- Harry Stebbings
Yeah, sorry for that. Uh, but there was this brilliant kind of meme that came out of it, and I felt a little bit guilty. But you- you said wearing this 20VC jumper, which is an incredibly proud moment for me, uh, for certain segments like the one you mentioned there, there would be the potential to steamroll. If you're thinking as a founder today building, where is OpenAI gonna potentially come and steamroll versus where they're not? Also for me as an investor trying to invest in opportunities that aren't going to get damaged, how should founders and me as an investor think about that?
- Sam Altman
There will be many trillions of dollars of market cap that gets created, new market cap that gets created, by using AI to build products and services that were either impossible or quite impractical before. And there is this one set of areas where we're gonna try to make it relevant which is, you know, we just want the models to be really, really good such that you don't have to, like, fight so hard to get them to do what you wanna do. But all of this other stuff which is building these incredible products and services on top of this new technology, we think that just gets better and better. Um, one of the surprises to me early on was, and this is no longer the case, but in like the GPT-3.5 days, it felt like 95% of startups, something like that, wanted to bet against the models getting way better. And so, and they were doing these things where we could already see GPT-4 coming and we were like, "Man, it's gonna be so good. It's not gonna have these problems." Uh, if you're building a tool just to get around this one shortcoming of the model, that's gonna become less and less relevant. And we forget how bad the models were a couple of years ago. It hasn't been that long on the calendar. But there were, there were just a lot of things and so it seemed like these good areas to build a thing, uh, to like-... to plug a hole rather than to build something to go deliver, like, the great AI tutor or the great AI medical advisor or whatever. And so I feel like 95% of people that were, were, like, betting against the models getting better, 5% of people betting for the models getting better. I think that's now reversed. I think people have, like, internalized the rate of improvement and have heard us on what we intend to do. So it's, it no longer seems to be such an issue, but it was something we used to fret about a lot because we kind of, we saw what was gonna happen to all of these very
- 6:42 – 8:43
Is Masa Son's $9 Trillion AI Value Prediction Realistic?
- Sam Altman
hardworking people.
- Harry Stebbings
Y- you said about the trillions of dollars of value to be created then, and I promise we will return to these brilliant questions. I'm sure you saw, I'm not sure if you saw, but Masa sat on stage and said, "We will have..." No, I'm not gonna do an accent 'cause (laughs) my accents are terrible. Um, but that there'll be $9 trillion of value created every single year, which will offset the $9 trillion CapEx that he thought would be needed. I'm just intrigued. How did you think about that when you saw that? How do you reflect on that?
- Sam Altman
I can't put it down to, like, any sig- I think, like, if we can get it right within orders of magnitude, that's, that's good enough for now. There's clearly gonna be a lot of CapEx spent and clearly a lot of value created. This happens with every other mega technological revolution, of which this is clearly one. Um, but, uh, you know, like, next year will be a big push for us into these next generation systems. You talked about when there could be, like, a no-code software agent. I don't know how long that's gonna take, but if we use that as an example and imagine forward to- towards it, think about what... Think about how much economic value gets unlocked for the world if anybody can just describe, like, a whole company's worth of software that they want. This is a ways away, obviously, but when we get there and have it happen, um, think about how difficult and how expensive that is now. Think about how much value it creates if you keep the same amount of value, but make it wildly more accessible and less expensive. That- that's really powerful, and I think we'll see many other examples like that. We, I mentioned earlier, like, healthcare and education, but those are two that are both, like, trillions of dollars of value to the world to get right. And if AI can really, really truly enable this to happen in a different way than it has before, then only the big numbers are the point. And there's also the debate about whether it's 9 trillion or 1 trillion or whatever. Like, you know, I don't... It takes smarter people than me to figure that out, but- but the value creation does seem just unbelievable here.
- 8:43 – 12:52
What Role Will Open Source Play in AI & Should Models Be Open-Sourced?
- Harry Stebbings
We're gonna get to agents in terms of kind of how that value is delivered. In terms of, like, the delivery mechanism through which value is delivered, open source is an incredibly prominent method through which it could be. How do you think about the role of open source in the future of AI, and what do internal discussions look like for you when the question comes, "Should we open source any models or some models?"
- Sam Altman
There, there's clearly a really important place in the ecosystem for open source models. There's also really good open source models that now exist. Um, I think there's also a place for, like, nicely offered, well-integrated services and APIs, and, you know, I think it's... I think it makes sense that all of this stuff is on offer and people will take what, what works for them.
- Harry Stebbings
As a delivery mechanism, we have open source as a kind of end product to customers, and as a way to deliver that, we can have agents. I think there's a lot of, uh, kind of semantic confusion around what an agent is. How do you think about the definition of agents today? What is an agent to you and what is it not?
- Sam Altman
This is, like, my off-the-cuff answer, it's not well-considered, but: something that I can give a long-duration task to and provide minimal supervision to during execution.
- Harry Stebbings
What do you think people think about agents that actually they get wrong?
- Sam Altman
Well, it's more like I don't... I don't think any of us yet have an intuition for what this is going to be like in a world gesturing at something that seems important. Maybe I can give the following example. When people talk about an AI agent acting on their behalf, uh, the, the main example they seem to give fairly consistently is, "Oh, you can, like, ask the agent to go book you a restaurant reservation. Um, and either it can, like, use OpenTable or it can, like, call the restaurant." Okay, sure. That's, that's, like, a mildly annoying thing to have to do and it maybe, like, saves you some work. One of the things that I think is interesting is a world where, uh, you can just do things that you wouldn't or couldn't do as a human. So what if, what if instead of calling, uh, one restaurant to make a reservation, my agent would call me, like, 300 and figure out which one had the best food for me or some special thing available or whatever? And then you would say, "Well, that's like really annoying if your agent is calling 300 restaurants." But if, if it's an agent answering each of those 300, 300 places, then no problem, and it can be this, like, massively parallel thing that a human can't do. So that's like a trivial example, but there are these, like, limitations to human bandwidth that maybe these agents won't have. The category I think that was more interesting is not the one that people normally talk about where you have this thing calling restaurants for you, but something that's more like a really smart senior coworker, um, where you can, like, collaborate on a project with and the agent can go do, like, a two-day task or two-week task really well and, you know, ping you uh, when it asks questions, but come back to you with, like, a great work product.
- Harry Stebbings
Does this fundamentally change the way that SaaS is priced when you think about extraction of value bluntly? And normally it's on a per-seat basis, but now you're actually kind of replacing labor so to speak. How do you think about the future of pricing with that in mind when you are such a core part of an enterprise workforce?
- Sam Altman
I'll speculate here for fun, but we really have-... no idea. I mean, I could imagine a world where you can say, like, "I want one GPU or 10 GPUs or 100 GPUs to just be, like, churning on my problems all the time." And it's not like... You're not, like, paying per seat or even per agent. But you're, like the... It's priced based off the amount of compute that's, like, working on a... you know, on your problems all the time.
- Harry Stebbings
Do we need to build specific models for agentic use, or do we not? How do you think about that?
- Sam Altman
There's a huge amount of infrastructure and scaffolding to build, for sure. But I think o1 points the way to a model that is capable of doing great agentic tasks.
- 12:52 – 14:10
Are AI Models Depreciating Assets or Becoming Exclusive Due to High Costs?
- Harry Stebbings
Uh, on the model side, Sam, everyone says that, uh, models are depreciating assets. The commoditization of models is so rife. How do you respond and think about that? And when you think about the increasing capital intensity to train models, are we actually seeing a reversion of that where it requires so much money that actually very few people can do it?
- Sam Altman
The... It's definitely true that they are depreciating assets. Um, but this idea that they're not worth as much as they cost to train, that seems totally wrong. Um, to say nothing of the fact that there's, like, a... there's a positive compounding effect as you learn to train these models. You get better at training the next one. But the actual, like, revenue we can make from a model, I think, justifies the investment. To be fair, uh, I don't think that's true for everyone. And there's a lot of... There are probably too many people training very similar models. And if you're a little behind, or if you don't have a product with the sort of normal rules of business that make that product sticky and valuable, then yeah, maybe you can't... May- (clears throat) Maybe it's harder to get a return on the investment. We're very fortunate to have ChatGPT and hundreds of millions of people that use our models. And so, even if it cost a lot, we get to, like, amortize that cost across a lot of people.
- 14:10 – 15:23
How Will OpenAI Continue Differentiating Its Models?
- Harry Stebbings
How do you think about how OpenAI models continue to differentiate over time and where you most wanna focus to expand that differentiation?
- Sam Altman
Reasoning is our current most important area of focus. I think this is what unlocks the next, like, massive leap forward in, in value created. So, that's... We'll improve them in lots of ways. Uh, we will do multimodal work. Uh, we will do other features in the models that we think are super important to the ways that people wanna use these things.
- Harry Stebbings
How do you think about reasoning in multimodal work like there, the challenges, what you want to achieve? Would love to understand that.
- Sam Altman
Reasoning in multimodality specifically?
- Harry Stebbings
Yeah.
- Sam Altman
I hope it's just gonna work. I mean, it obviously takes some doing to get done. But, uh, you know, like, people, like when they're babies and toddlers before they're good at language, can still do quite complex visual reasoning. So, clearly this is possible.
- Harry Stebbings
(laughs) T- totally is. Um, how will vision capabilities scale with the new inference-time paradigm set by o1?
- Sam Altman
Without spoiling anything, I would expect rapid progress in image-based
- 15:23 – 17:53
How Does OpenAI Advance Core Reasoning?
- Sam Altman
models.
- Harry Stebbings
Going off schedule is one thing. Trying to tease that out might get me in real trouble. How does OpenAI make breakthroughs in terms of, like, core reasoning? Do we need to start pushing into reinforcement learning as a pathway or other new techniques aside from the transformer?
- Sam Altman
I mean, there's two questions in there. There's how we do it, and then, you know, there's everyone's favorite question, which is what comes beyond the transformer. How we do it is like our special sauce. Uh, it's easy. It's really easy to copy something you know works. Uh, and one of the reasons people don't talk about why it's so easy is that you have the conviction to know it's possible. And so, after, after a research lab does something, even if you don't know exactly how they did it, it's, I wouldn't say easy, but it's doable to go off and copy it. And you can see this in the replications of GPT-4. And I'm sure you'll see this in replications of o1. What is really hard, and the thing that I'm most proud of about our culture, is the repeated ability to go off and do something new and totally unproven. And a lot of organizations... No, I'm not talking about AI research, just generally. A lot of organizations talk about the ability to do this. There are very few that do, um, across any field. And in some sense, I think this is one of the most important inputs to human progress. So, one of the, like, retirement things I fantasize about doing is writing a book of everything I've learned about how to build an organization and a culture that does this thing, not the organization that just copies what everybody else has done. Because I think this is something that the world could have a lot more of. It's limited by human talent. But there's a huge amount of wasted human talent because this is not an organization style or culture, whatever you wanna call it, that we are all good at building. So, I'd love way more of that. And that is, I think, the thing most special about us.
- Harry Stebbings
Sam, how is human talent wasted?
- Sam Altman
Oh, there's just a lot of really talented people in the world that are not working to their full potential, um, because they work at a bad company or they live in a country that doesn't support any good companies, uh, or a long list of other things. I mean, the... One of the things I'm most excited about with AI is I hope it'll get us much better than we are now at helping get everyone to their max potential, which we are nowhere, nowhere near. There's a lot of people in the world that I'm sure would be phenomenal AI researchers, had their life paths just gone a little
- 17:53 – 20:52
How Has Sam’s Leadership Changed Over the Last Decade?
- Sam Altman
bit differently.
- Harry Stebbings
You've had an incredible journey over the last few years through, you know, unbelievable hypergrowth. You say about writing a book there in retirement. If you reflect back on the 10 years of leadership change that you've undergone, how have you changed your leadership most significantly?
- Sam Altman
I think the thing that has been most unusual for me about these last couple of years is just the rate at which... things have changed. At a normal company, you get time to go from zero to 100 million in revenue, 100 million to a billion, a billion to 10 billion. You don't have to do that in, like, a two-year period. And you don't have to, like, build the company. We had the research, but we really didn't have a company in the sense of a traditional Silicon Valley startup that's, you know, scaling and serving lots of customers and whatever. Um, having to do that so quickly, there was just, like, a lot of stuff that I was supposed to get more time to learn than I got.
- Harry Stebbings
What did you not know that you would have liked more time to learn?
- Sam Altman
I mean, I'd say, like, what did I know? One of the things that just came to mind out of, like, a rolling list of 100 is how hard it is or how much active work it takes to get the company to focus not on how you grow the next 10% but the next 10X. And growing the next 10%, it's the same things that worked before will work again. But to go from a company doing, say, like, a billion to $10 billion in revenue requires a whole lot of change. And it is not the sort of, like, "Let's do last week what we did this week" mindset. And in a world where people don't get time to even get caught up on the basics because growth is just so rapid, uh, I- I badly underappreciated the amount of work it took to be able to, like, keep charging at the next big step forward while still not neglecting everything else that we have to do. There's a big piece of internal communication around that and how you sort of share information, how you build the structures to, like, get the company to get good at thinking about 10X more stuff or bigger stuff or more complex stuff every eight months, 12 months, whatever. Um, there's a big piece in there about planning, about how you balance what has to happen today and next month with the- the long lead pieces you need in place for... to be able to execute in a year or two years with, you know, build-out of compute or even s- you know, things that are more normal, like planning ahead enough for, like, office space in a city like San Francisco is surprisingly hard at this kinda rate. So I- I think there was either no playbook for this or someone had a secret playbook they didn't give me, um, or all of us, like, we've all just sort of fumbled our way through this. But there's been a lot to learn on the fly.
- Harry Stebbings
God, I don't know if I'm gonna get into trouble for this, but f- sod it, I'll ask it anyway and if so, I'll deal with it later.
- Sam Altman
Let's go. (laughs)
- 20:52 – 23:34
Is Hiring Under 30s the Best Way to Build Companies?
- Harry Stebbings
Um, Keith Rabois, uh, did a talk and he said you should hire incredibly young people under 30, and that is what Peter Thiel taught him, and that is the secret to building great companies. I'm intrigued. When you think about this book that you write in retirement and that advice, that you build great companies by hiring incredibly young, hungry, ambitious people who are under 30, that that is the mechanism, how do you feel about that?
- Sam Altman
I think I was 30 when we started OpenAI, or at least thereabouts, so you know, I wasn't that young. Seemed to work okay so far.
- Harry Stebbings
(laughs) Worth a try. Uh, uh, t- (laughs) going back to something-
- Sam Altman
So I- I, uh, is the question like...
- Harry Stebbings
The question is, how do you think about hiring incredibly young, under 30s-
- Sam Altman
Oh.
- Harry Stebbings
... as this, like, Trojan horse of youth, energy, ambition, but less experience, or the much more experienced, "I know how to do this, I've done it before"?
- Sam Altman
I mean, the obvious answer is you can succeed with hiring both classes of people. Like, we have... I was just, like right before this, I was sending someone a Slack message about, there was a guy that we recently hired on one of the teams, I don't know how old he is but low 20s probably, doing just insanely amazing work. And I was like, "Can we find a lot more people like this? This is just, like, off the charts brilliant. I don't get how these people can be so good so young." But it clearly happens, and when you can find those people, they bring amazing, fresh perspective, energy, whatever else. On the other hand, uh, when you're, like, designing some of the most complex and massively expensive computer systems that humanity has ever built, actually, like, pieces of infrastructure of any sort, then I would not be comfortable taking a bet on someone who is just sort of, like, starting out, uh, where the stakes are higher. So you want both, uh, and I think what you really want is just, like, an extremely high talent bar of people at any age, and a strategy that said "I'm only gonna hire younger people" or "I'm only gonna hire older people" I believe would be misguided. I think it's, like, somehow just not, it's not quite the framing that resonates with me. But the part of it that does is, and one of the things that I feel most grateful about Y Combinator for, is inexperienced does not inherently mean not valuable, and there are incredibly high-potential people at the very beginning of their career that can create huge amounts of value. And, uh, we as a society should bet on those people and it's a great
- 23:34 – 24:30
Are Anthropic Models Better for Coding? When to Choose OpenAI vs. Others?
- Sam Altman
thing.
- Harry Stebbings
I am gonna return to some segments of the schedule or else I'm- I'm really gonna get told off. But Anthropic's models have been sometimes cited as being better for coding tasks. Why is that? Do you think that's fair? And how should developers think about when to pick OpenAI versus a different provider?
- Sam Altman
Yeah, they have a model that is great at coding for sure, uh, and it's impressive work. I- I think developers use multiple models most of the time, and I'm not sure how that's all gonna evolve as we head towards this more agentified world. Um, but I sort of think there's just gonna be a lot of AI everywhere and something about the way that we currently talk about it or think about it feels wrong. Uh-May- maybe if I had to describe it, we will s- shift from talking about models to talking about systems, but that'll take a while.
- 24:30 – 26:34
How Much Longer Will Scaling Laws Hold for Model Iterations?
- Harry Stebbings
When we think about scaling models, how many more model iterations do you think scaling laws will hold true for? It was the kind of common refrain that it won't last for long, and it seems to be proving to last longer than people think.
- Sam Altman
Without going into detail about how it's going to happen, the- the- the core of the question that you're getting at is, is the trajectory of model capability improvement going to keep going like it has been going? And the answer that I believe is yes, for a long time.
- Harry Stebbings
Have you ever doubted that?
- Sam Altman
Totally.
- Harry Stebbings
Why?
- Sam Altman
Uh, we have had (laughs) ... Well, we've had, like, behavior we don't understand. We've had failed training runs. We've... All sorts of things. We've had to figure out new paradigms when we kind of get t- towards the end of one and have to figure out the next.
- Harry Stebbings
What was the hardest one to navigate?
- Sam Altman
Well, when we started working on GPT-4, there were some issues that caused us a lot of consternation that we really didn't know how to solve. We figured it out, but there was... there was definitely a time period where we just didn't know how we were gonna do that model. And then, in this shift to o1 and the idea of reasoning models, uh, that was something we had been excited about for a long time. But it was like a long and winding road of research to get here.
- Harry Stebbings
Is it difficult to maintain morale when it is long and winding roads, when training runs can fail? How do you maintain morale in those times?
- Sam Altman
You know, we have a lot of people here who are excited to build AGI, and that- that's a very motivating thing. And no one expects that to be easy and a straight line to success. But there's a famous quote from history. It's something like, I- I'm gonna get this totally wrong, but the spirit of it is like, "I never pray and ask for God to be on my side. You know, I pray and hope to be on God's side." And there is something about betting on deep learning that feels like being on the side of the angels, and you kinda just... It eventually seems to work out, even though you hit some big stumbling blocks along the way. And so, like, a deep belief in that has been good for us.
- 26:34 – 28:03
What Unmade Decision Weighs on Sam’s Mind Most Often?
- Harry Stebbings
Can I ask you a really weird one? I had a great quote the other day, and it was, "The heaviest things in life are not iron or gold, but unmade decisions." What unmade decision weighs on your mind most?
- Sam Altman
It's different every day. Like I don't... There's not one big one. I mean, I guess there are some big ones that... Like about, are we gonna bet on this next product or that next product? Uh, or are we gonna, like, build our next computer this way or that way? The- that are kind of like really high stakes, one-way door-ish, that like everybody else I probably delay for too long. But- but mostly, the hard part is every day it feels like there are a few new 51/49 decisions that come up that kinda make it to me because they were 51/49 in the first place. And then I don't feel like particularly likely I can do better than somebody else would have done, but I kinda have to make them anyway. And it's- it's the volume of them. It is not any one.
- Harry Stebbings
Is there a commonality in the person that you call when it's 51/49?
- Sam Altman
No. Um, I think the wrong way to do that is to have one person you lean on for everything, and the right way to... At least for me, the right way to do it is to have, like, 15 or 20 people, each of which you have come to believe has good instincts and good context in a particular way. And you get to, like, phone-a-friend to the best expert rather than try to have just one across the board.
- 28:03 – 29:46
Is Sam Worried About Semiconductor Supply Chains & Global Tensions?
- Harry Stebbings
In terms of hard decisions, I do wanna touch on semiconductor supply chains. How worried are you about semiconductor supply chains and international tensions today?
- Sam Altman
I don't know how to quantify that. Worried, of course, is the answer. Uh, it's probably n- It's... Well, I guess I could quantify it this way. It is not my top worry, but it is in, like, the top 10% of all worries.
- Harry Stebbings
Am I allowed to ask what's your top worry?
- Narrator
(laughs)
- Harry Stebbings
I'm- I'm in so much... I've got past the stage of being in trouble for this one. (laughs)
- Sam Altman
It's sort of generalized complexity of all we, as a whole field, are trying to do, and it feels like a... I think it's all gonna work out fine, but it feels like a very complex system. Now, this kind of, like, works fractally at every level. So you can say that's also true, like, inside of OpenAI itself. Uh, that's also true inside of any one team. Um, but, you know, an example of this, since we were just talking about semiconductors, is you gotta balance the power availability with the right networking decisions, with being able to, like, get enough chips in time and whatever risk there's going to be there, um, with the ability to have the research ready to intersect that so you don't either, like, be caught totally flat-footed or have a system that you can't utilize, um, with the right product that is going to use that research to be able to, like, pay the eye-watering cost of that system. So it's... "Supply chain" makes it sign- sound too much like a pipeline, but- but yeah, the overall ecosystem complexity, at every level of, like, the fractal scale, is unlike anything I have seen in any industry before. Uh, and some version of that is probably my top worry.
- 29:46 – 32:35
Is $100 Billion a Realistic Entry Cost for Foundation Models?
- HSHarry Stebbings
You said, "Unlike anything we've seen before." A lot of people, I think, compare this, you know, wave to the internet bubble, uh, in terms of, you know, the excitement and the exuberance, and I think the thing that's different is the amount that people are spending. Larry Ellison said that it will cost $100 billion to enter the foundation model race as a starting point. Do you agree with that statement? And when you saw that, were you like, "Yeah, that makes sense"?
- SASam Altman
Uh, no, I think it will cost less than that. But there's an interesting point here, um, which is everybody likes to use previous examples of a technology revolution to talk about... to put a new one into more familiar context. And A, I think that's a bad habit on the whole, but I understand why people do it. And B, I think the ones people pick for analogizing to AI are particularly bad. So the internet was obviously quite different than AI. And you brought up this one thing about cost, and whether it costs like 10 billion or 100 billion or whatever to be competitive, it was very... Like, one of the defining things about the internet revolution was it was actually really easy to get started. Now, another thing that cuts more towards the internet is mostly, for many companies, this will just be like a continuation of the internet. It's just like someone else makes these AI models, and you get to use them to build all sorts of great stuff, and it's like a new primitive for building technology. But if you're trying to build the AI itself, that's pretty different. Another example people use is electricity, um, which I think doesn't make sense for a ton of reasons. The one I like the most, caveated by my earlier comment that I don't think people should be doing this, or trying to, like, use these analogies too seriously, is the transistor. It was a new discovery of physics. It had incredible scaling properties. It seeped everywhere pretty quickly. You know, we had things like Moore's Law, in a way that we could now imagine, like, a bunch of laws for AI that tell us something about how quickly it's going to get better. Um, and everyone kind of ben- Like, the whole tech industry kind of benefited from it. And there's a lot of transistors involved in the products and delivery of services that you use, but you don't really think of them as transistor companies. Um, there's a very complex, very expensive industrial process around it with a massive supply chain.
And incredible progress based off of this very simple discovery of physics led to this gigantic uplift of the whole economy for a long time, even though most of the time you didn't think about it. And you don't say, "Oh, this is a transistor product." It's just like, "Oh, all right, this thing can, like, process information for me." You don't even really think about that. It's just expected.
- 32:35 – 39:10
Quick-Fire Round
- HSHarry Stebbings
Sam, I'd love to do a quick fire round with you. So I'm gonna say-
- SASam Altman
All right.
- HSHarry Stebbings
So I'm gonna say a short statement, you give me your immediate thoughts, okay?
- SASam Altman
Okay.
- HSHarry Stebbings
So you are building today as a, whatever, 23-, 24-year-old with the infrastructure that we have today. What do you choose to build if you started today?
- SASam Altman
Uh, some AI-enabled vertical. I'll, I'll st- I'll use tutors as an example, but, like, the, the, the best AI tutoring product or the... you know, that I could possibly imagine to teach people to learn. Any category like that. Could be the AI lawyer, could be the sort of, like, AI CAD engineer or whatever.
- HSHarry Stebbings
You mentioned your book. If you were to write a book, what would you call it?
- SASam Altman
I don't have a title ready. I haven't thought about this book other than, like, I wish something existed 'cause I think it could unlock a lot of human potential. So maybe, I think it would be something about human potential.
- HSHarry Stebbings
What in AI does no one focus on that everyone should spend more time on?
- SASam Altman
What I would love to see... There's a lot of different ways to solve this problem, but something about an AI that can understand your whole life. Doesn't have to, like, literally be infinite context, but some way that y- you can have an AI agent that, like, knows everything there is to know about you, has access to all of your data, things like that.
- HSHarry Stebbings
What was one thing that surprised you in the last month, Sam?
- SASam Altman
It's a research result I can't talk about.
- HSHarry Stebbings
(laughs)
- SASam Altman
But it is breathtakingly good.
- HSHarry Stebbings
(laughs) Which competitor do you most respect, and why them?
- SASam Altman
I mean, I kind of respect everybody in the space right now. I think there's, like, really amazing work coming from the whole field and incredibly talented, incredibly hardworking people. I don't mean this to be a question dodge. It's like I can point to super talented people doing super great work everywhere in the field.
- HSHarry Stebbings
Is there one?
- NANarrator
(laughs)
- SASam Altman
Not really.
- HSHarry Stebbings
(laughs) Uh, tell me, what's your favorite OpenAI API?
- SASam Altman
I think the new real-time API is pretty awesome. But we have a lot of... I mean, we have a, we have a big API business at this point. So there's a lot of good stuff in there.
- HSHarry Stebbings
Who do you most respect in AI today, Sam?
- SASam Altman
Uh, let me give a shout-out to the Cursor team. I mean, the- there's a lot of people doing incredible work in AI. But I think to really have, do what they've done and built... I thought about, like, a bunch of researchers I could name. Um, but in terms of using AI to deliver a really magical experience that creates a lot of value in a way that people just didn't quite manage to put the pieces together, I think that's, it's really quite remarkable. And I s- specifically left e- anybody at OpenAI out as I was thinking through it. Otherwise, it would have been a long list of OpenAI people first.
- HSHarry Stebbings
How do you think about the trade-off between latency and accuracy?
- SASam Altman
You need a dial to change between them. Like, in the same way that you wanna do a rapid fire thing now, and I'm not even going that quick, but I'm, you know, trying not to think for multiple minutes. Uh, in this context, latency is what you want. If you... But if you were like, "Hey, Sam, I want you to go, like, make a new important discovery in physics," you'd probably be happy to wait a couple of years, and the answer is it should be user controllable.
- HSHarry Stebbings
Can I ask, when you think about insecurity in leadership, I think it's something that everyone has. Uh, it's something we don't often talk about. Um, when you think about maybe an insecurity in leadership, an area of your leadership that you'd like to improve, where would you most like to improve as a leader and a CEO today?
- SASam Altman
The thing I'm struggling with most this week is I feel more uncertain than I have in the past about what our, like, the details of what our product strategy should be. Um, I think that product is a weakness of mine in general. Um, and it's something that right now the company, like, needs stronger and clearer vision on from me. Like, we have a wonderful head of product and a great product team, but it's an area that I wish I were a lot stronger on and I'm acutely feeling the, the miss right now.
- HSHarry Stebbings
You hired Kevin. Uh, I've known-
- SASam Altman
Yeah.
Episode duration: 39:20