
Sam Altman on AGI, GPT-5, and what’s next — the OpenAI Podcast Ep. 1
Andrew Mayne (host), Sam Altman (guest)
In this episode of the OpenAI Podcast, host Andrew Mayne sits down with Sam Altman to discuss where AI is heading: how to define AGI and superintelligence, the road to GPT-5, privacy, compute constraints, and new hardware.
Altman maps AI’s next era: GPT-5, privacy, compute, devices
Altman frames “AGI” as a moving target: systems already exceed many older definitions, and more people will claim “we have AGI” each year as capabilities climb.
He defines “superintelligence” more concretely as AI that can autonomously discover new science or massively amplify human scientific discovery—seeing scientific progress as the main lever for improving lives.
The conversation highlights practical capability leaps from agentic tools like Operator (with o3) and Deep Research, alongside challenges like model naming/versioning, user trust, and risks of overly agreeable personalities.
Altman emphasizes compute and energy as core constraints, describing Project Stargate as a large-scale effort to finance and build unprecedented AI infrastructure, while also teasing longer-term bets like new AI-centric hardware with Jony Ive.
Key Takeaways
“AGI” is increasingly a perception milestone, not a fixed threshold.
Altman argues prior AGI definitions based on cognitive capability have already been surpassed, and that public consensus will shift annually as systems improve—even as the goalposts move.
Altman’s practical bar for “superintelligence” is scientific discovery.
He says a system that can autonomously generate new science—or dramatically accelerate human discovery—would feel “definitionally” like superintelligence and could unlock broad quality-of-life gains.
Agentic products create visceral ‘AGI-like’ moments for users.
Altman notes many people cite Operator using a computer (especially with o3) as their personal “AGI moment,” while Mayne points to Deep Research’s iterative web-following workflow as a step-change from summarization.
Reasoning models trade speed for solution quality—and users accept waiting.
Altman is surprised users will wait for better answers on hard problems, signaling a shift from “instant response” UX toward “deliberate computation” when value is clear.
Model releases are becoming continuous services, complicating naming and trust.
With ongoing post-training and frequent updates (as with GPT-4o), OpenAI is debating whether to keep one name (GPT-5) or introduce explicit versioning (5. ...
Personalized memory is a major capability unlock—but privacy frameworks must mature.
Altman calls Memory his favorite recent feature because it enables high-context help with fewer words; he also argues ChatGPT will become a uniquely sensitive data source requiring strong societal norms and legal protections.
Compute is the limiting reagent; Stargate is about making intelligence abundant.
Altman describes Stargate as a capital-and-operations push to build “unprecedented” compute, claiming the gap between today’s offerings and what’s possible with 10–100x compute is enormous.
Notable Quotes
“More and more people will think we've gotten to an AGI system every year.”
— Sam Altman
“If we had a system that was capable of either doing autonomous discovery of new science or greatly increasing the capability of people using the tool to discover new science, that would feel like kind of almost definitionally superintelligence to me.”
— Sam Altman
“Memory is probably my favourite recent ChatGPT feature.”
— Sam Altman
“You cannot have a company like The New York Times ask an AI provider to compromise user privacy.”
— Sam Altman
“If people knew what we could do with more compute, they would want way, way more.”
— Sam Altman
Questions Answered in This Episode
On AGI definitions: What specific capabilities (benchmarks or real-world tasks) would make OpenAI publicly say “this is AGI,” even if the definition keeps shifting?
Altman frames “AGI” as a moving target: systems already exceed many older definitions, and more people will claim “we have AGI” each year as capabilities climb.
On superintelligence and science: What would count as credible ‘autonomous discovery’—a new theorem, a reproducible lab result, a drug candidate that succeeds in trials?
He defines “superintelligence” more concretely as AI that can autonomously discover new science or massively amplify human scientific discovery—seeing scientific progress as the main lever for improving lives.
On agents: What are the biggest remaining failure modes in Operator (e.g., brittleness, web UX variance, permissions), and what’s the roadmap to make it reliably autonomous?
The conversation highlights practical capability leaps from agentic tools like Operator (with o3) and Deep Research, alongside challenges like model naming/versioning, user trust, and risks of overly agreeable personalities.
On Deep Research quality: How does OpenAI evaluate when Deep Research is doing genuine ‘following leads’ well versus constructing a plausible narrative from sources?
On GPT-5 timing: You said “sometime this summer”—what are the gating factors: training completion, safety evals, product integration, or compute availability?
Transcript Preview
Welcome to the OpenAI Podcast. My name is Andrew Mayne. For several years, I worked at OpenAI, first as an engineer on the applied team, and then as the science communicator. After that, I worked with companies and individuals trying to figure out how to incorporate artificial intelligence. With this podcast, we have the opportunity to talk to the people working with and at OpenAI about what's going on behind the scenes, and maybe get a glimpse of the future. My first guest is Sam Altman, CEO and co-founder of OpenAI, and we're gonna find out a bit more about Stargate, how he uses ChatGPT as a parent, and maybe get an idea of when GPT-5 is coming. [upbeat music]
More and more people will think we've gotten to an AGI system every year. What you want out of hardware and software is changing quite rapidly. If people knew what we could do with more compute, they would want way, way more.
One of my friends is a new parent and is using ChatGPT a lot to ask questions. It's become a very good resource, and you are a new parent, and how much has ChatGPT been helping you with that?
A lot. I, I, I mean, clearly, people have been able to take care of babies without ChatGPT-
Right
... for a long time. I don't know how I would've done that. [chuckles] Those first few weeks, it was like every que- I mean, constantly. Now I, now I kind of ask it questions about like developmental stages more-
Mm-hmm
... 'cause I can, I can, I can do the basics, but, uh-
Is this normal? [chuckles]
Yeah. But it was super helpful for that. I, I spend a lot of time thinking about how my kid will use AI in-
Hmm
... in the future. Um, it, it is sort of like... By the way, extremely kid-pilled. I think everybody should have a lot of kids. Um-
Yeah.
This is awesome.
A lot of my friends at OpenAI, uh, former colleagues and current ones, are having kids, and people go like, "Oh, what about this AI thing?" Everybody I know inside is very optimistic in having families.
I think it's a good sign.
Yeah.
Like, my kids will never be smarter than AI, mm, but also, they will grow up-
Way to set them back there, though. [chuckles]
I mean, they will grow up, like, vastly more capable-
Yeah
... than we grew up, um, and able to do things that we just cannot imagine, and they'll be really good at using AI. And, uh, obviously, I think about that a lot. Uh, but I, I think much more about the, like, what they will have that we didn't than what is gonna be taken away. Um, they're-- like, I don't, I don't think my kids will ever be bothered by the fact that they're not smarter than AI.
Yeah.
I just, like, you know, I... There's this video that always has stuck with me of, um, a baby or, like, a little toddler, it- with, uh, one of those old glossy magazines-