
Sam Altman: What Startups Will be Steamrolled by OpenAI & Where is Opportunity | E1223
Sam Altman (guest), Harry Stebbings (host)
In this episode of The Twenty Minute VC (20VC), recorded at OpenAI DevDay, Harry Stebbings interviews Sam Altman about which startups OpenAI's roadmap may steamroll and where durable opportunity remains.
Sam Altman Warns Founders: Bet With AI Progress, Not Against It
Sam Altman discusses OpenAI’s strategic focus on reasoning-heavy models (like the O-series), agents, and multimodal capabilities, emphasizing a steep trajectory of improvement that will eliminate many current model shortcomings.
He warns startups and investors not to build thin “patch” businesses around today’s limitations, but to target massive, durable opportunities that benefit as models get better, such as AI tutors, healthcare, and other high-value verticals.
Altman explores the economics and infrastructure of AI—capital intensity, semiconductors, scaling laws—arguing that despite model depreciation, the value created and compounding know‑how justify large investments for a few players.
He also reflects on organizational challenges: building a culture that repeatedly does unproven work, managing hypergrowth, combining young and experienced talent, and his own current struggle with providing clear product vision in a rapidly shifting landscape.
Key Takeaways
Build startups that benefit from models getting better, not from current gaps.
Altman repeatedly urges founders to avoid businesses that merely patch today's model weaknesses (like narrow RAG hacks), because the core models are improving fast and will erase those advantages; instead, target products whose value *increases* as models advance (e.g., …)
Reasoning is OpenAI’s primary differentiation focus for the next leap in value.
OpenAI is prioritizing reasoning-heavy models (O1 and successors) that can tackle complex code, science, and long-horizon tasks, and Altman believes this ‘reasoning’ capability will unlock the next major wave of economic and scientific breakthroughs.
Agents will move beyond trivial tasks to act as senior collaborators.
Altman thinks people under-imagine agents: instead of just booking a restaurant, future agents could run massively parallel tasks (e.g., …)
Expect significant AI value creation despite model depreciation and high training costs.
He agrees models are depreciating assets and many players will struggle to justify training costs, but argues that for platforms with scale and sticky products (like ChatGPT), revenue and accumulated training know‑how more than justify large CapEx.
OpenAI will not own everything: massive vertical opportunities remain.
OpenAI aims to make base models ‘really, really good’ along with some higher-level services, but Altman expects “many trillions” in new market cap from others building vertical products and services (education, healthcare, legal, engineering, etc.) …
The winning organizations are those that repeatedly do unproven, hard things.
Altman attributes OpenAI’s success less to specific techniques and more to a culture that can repeatedly pursue uncertain, novel research directions (like GPT-4 and reasoning models) rather than just copying known results; he sees this skill as rare and crucial for human progress.
Hypergrowth demands constant 10X thinking, not incrementalism.
Reflecting on OpenAI’s rapid scaling, he highlights how difficult it is to push a company’s mindset from ‘next 10%’ to ‘next 10X’ growth, requiring new internal communication, planning, and structures to handle ever-larger complexity (compute, products, offices) on very short timelines.
Notable Quotes
“If you are building a business that patches some current small shortcomings, if we do our job right, then that will not be as important in the future.”
— Sam Altman
“Reasoning is our current most important area of focus. I think this is what unlocks the next, like, massive leap forward in value created.”
— Sam Altman
“There will be many trillions of dollars of market cap that gets created by using AI to build products and services that were either impossible or quite impractical before.”
— Sam Altman
“What is really hard… is the repeated ability to go off and do something new and totally unproven. And in some sense, I think this is one of the most important inputs to human progress.”
— Sam Altman
“I think in five years, it looks like we have an unbelievably rapid rate of improvement in technology itself… and society itself actually changes surprisingly little.”
— Sam Altman
Questions Answered in This Episode
If you are a founder today, how do you practically distinguish between a ‘patch’ business that OpenAI may steamroll and a durable product that benefits from model improvement?
What concrete technical or product features will differentiate OpenAI’s reasoning-focused models from competitors as more players adopt similar techniques?
How might AI agents that know “your whole life” balance usefulness with privacy, consent, and user control over data?
Given Altman’s belief in massive value creation, which specific verticals beyond education and healthcare are most underexplored yet clearly defensible against OpenAI’s direct entry?
How should startups and enterprises think about pricing and business models in a world where work may be measured more in GPUs and agent cycles than human seats?
Transcript Preview
We are gonna try our hardest and believe we will succeed at making our models better and better and better. And if you are building a business that patches some current small shortcomings, if we do our job right, then that will not be as important in the future. We believe that we are on a pretty, a quite steep trajectory of improvement, and that the current shortcomings of the models today will just be taken care of by future generations. And I would encourage people to be in line with that.
Ready to go? (instrumental music plays) (mouse clicking) Hello everyone. Welcome to OpenAI DevDay. I am Harry Stebbings of 20VC, and I am very, very excited to interview Sam Altman. Welcome, Sam. Sam, thank you for letting me do this today with you.
Thanks for being here, Harry.
Now, we have many, many questions from the audience, and so I wanted to start with one. When we look forward, is the future of OpenAI more models like o1, or is it more larger models that we would maybe have expected of old? How do we think about that?
I mean, we want to make things better across the board, but this direction of reasoning models is of particular importance to us. I think reasoning will unlock... I hope reasoning will unlock a lot of the things that we've been waiting years to do. And the- the ability for models like this to, for example, contribute to new science, uh, help write a lot more very difficult code, uh, that I think can drive things forward to a significant degree. So you should expect rapid improvement in the O series of models, and it's of great strategic importance to us.
Another one that I thought was really important for us to touch on was when we look forward to OpenAI's future plans, how do you think about developing no-code tools for non-technical founders to build and scale AI apps? How do you think about that?
It'll get there for sure. Uh, I- I think the- the first step will be tools that make people who know how to code well more productive. But eventually, I think we can offer really high-quality no-code tools. And already there's some out there that make sense. But you can't- you can't sort of in a no-code way say, "I have like a full startup I wanna build." Um, that's gonna take a while.
So, when we look at where we are in the stack today, OpenAI sits in a certain place. How far up the stack is OpenAI going to go? I think it's a brilliant question but, uh, if you're spending a lot of time tuning your RAG system, is this a waste of time because OpenAI ultimately thinks they'll own this part of the application layer, or is it not? And how do you answer a founder who has that question?
The general answer we try to give is we are gonna try our hardest and believe we will succeed at making our models better and better and better. And if you are building a business that patches some current small shortcomings, if we do our job right, then that will not be as important in the future. If, on the other hand, you build a company that benefits from the model getting better and better, if- If, you know, an oracle told you today that o4 was gonna be just absolutely incredible and do all of these things that right now feel impossible and you were happy about that, then, you know, maybe we're wrong but at least that's what we're going for. And if instead you say, "Okay, there's this area where..." There are many, but you pick one of the many areas, "where o1-preview underperforms, and so I'm gonna patch this and just barely get it to work," then you're sort of assuming that the next turn of the model crank won't be as good as we think it will be. And that is the general philosophical message we try to get out to startups. Like, we- we believe that we are on a pretty, a quite steep trajectory of improvement and that the current shortcomings of the models today, um, will just be taken care of by future generations. And, you know, I would encourage people to be in line with that.