
Ilya Sutskever (OpenAI Chief Scientist) — Why next-token prediction could surpass human intelligence
Ilya Sutskever (guest), Dwarkesh Patel (host), Narrator
In this episode of the Dwarkesh Podcast, host Dwarkesh Patel interviews Ilya Sutskever, OpenAI’s cofounder and chief scientist, about why next‑token prediction could surpass human intelligence.
Ilya Sutskever defends next‑token prediction as a path to superintelligence
Ilya Sutskever, OpenAI’s cofounder and chief scientist, argues that next‑token prediction, when scaled sufficiently, can both match and surpass human intelligence by implicitly modeling the underlying reality and human cognition that generate language.
He discusses the likely economic and societal trajectory from powerful AI to AGI, emphasizing reliability and controllability as both the key bottlenecks and the most important emergent properties to aim for.
Sutskever outlines how current systems are trained (RLHF, human‑in‑the‑loop, AI‑generated data), explains why OpenAI pivoted away from robotics, and describes how alignment will require multiple, complementary approaches rather than a single mathematical definition.
He reflects on AGI’s long‑term societal impact, the inevitability of AI progress given hardware and data trends, and the role future AIs will play in both AI research and human inner development.
Key Takeaways
Next‑token prediction can, in principle, exceed human performance.
Sutskever argues that if a base model learns to predict text extremely well, it must internalize the structure of reality and human minds, allowing it to extrapolate the behavior of hypothetical agents smarter than any real human.
Reliability is the central bottleneck to AI’s economic impact.
He notes that if 2030 arrives with disappointing AI‑driven GDP gains, the likely culprit will be models that still require extensive human checking, limiting automation and trust in high‑stakes domains.
Alignment will require multiple overlapping methods, not one formula.
Instead of a single clean mathematical definition of alignment, he anticipates a toolbox: adversarial testing, behavioral probes, interpretability tools, and smaller “verifier” models inspecting larger ones.
Human‑AI collaboration in training is preferable to fully autonomous self‑improvement.
He envisions a regime where humans provide a small fraction of high‑quality signals while AIs generate most training data, preserving human oversight rather than moving to 100% AI‑only feedback.
Data and hardware are still enabling scale, but new training routes are needed.
While text data has not yet run out and GPUs remain adequate, Sutskever expects that future progress will increasingly rely on synthetic data, multimodal inputs, and algorithmic improvements as natural data becomes scarce.
Robotics progress hinges on massive, committed data collection at scale.
OpenAI left robotics because getting enough real‑world data would have required becoming an enormous robotics company; he believes progress is now possible if someone is willing to deploy and learn from tens or hundreds of thousands of robots.
AI progress is somewhat inevitable given broader technological trends.
Even without specific pioneers, he expects deep learning‑style breakthroughs would likely have arrived only a few years later, driven by the co‑evolution of cheaper compute, abundant data, and general‑purpose GPUs.
Notable Quotes
“I challenge the claim that next token prediction cannot surpass human performance.”
— Ilya Sutskever
“Predicting the next token well means that you understand the underlying reality that led to the creation of that token.”
— Ilya Sutskever
“I would not underestimate the difficulty of alignment of models that are actually smarter than us, of models that are capable of misrepresenting their intentions.”
— Ilya Sutskever
“The main activity is actually understanding… it was a new understanding of very old things.”
— Ilya Sutskever
“Change is the only constant… I don’t think anyone has any idea of how the world will look like in 3000.”
— Ilya Sutskever
Questions Answered in This Episode
If next‑token prediction can extrapolate to superhuman behavior, how can we rigorously measure when a model has crossed from imitation to genuine new insight?
What concrete benchmarks or tests would convincingly demonstrate that reliability and controllability have emerged at the levels Sutskever hopes for?
How can alignment research in academia be structured so that it meaningfully influences the design of frontier models owned by private companies?
In a world where AI performs most cognitive work, what institutions or norms could preserve meaningful human agency rather than defaulting to “the AGI’s recommendations”?
What early warning signs would indicate that advanced models are beginning to strategically misrepresent their intentions, and how should labs respond if those signs appear?
Transcript Preview
... but I would not underestimate the difficulty of alignment of models that are actually smarter than us, of models that are capable of misrepresenting their intentions.
Are you worried about spies?
I'm really not worried about the way it's being (laughs)... leaked.
... We will all be able to become more enlightened because we interact with an AGI that will help us see the world more correctly. Like, imagine talking to the best meditation teacher in history.
... Microsoft has been a very, very good partner for us.
... So I challenge the claim that next token prediction cannot surpass human performance. If your base neural net is smart enough, you just ask it, like, "What could a person with, like, great insight, and wisdom, and capability do?"
Okay. Today, I have the pleasure of interviewing Ilya Sutskever, who is the co-founder and chief scientist of OpenAI. Ilya, welcome to The Lunar Society.
Thank you. Happy to be here.
Uh, first question, and no humility allowed. There's many scientists, or maybe not that many scientists, who will make a big breakthrough in their field. There's far fewer scientists who will make multiple independent breakthroughs that define their field, uh, throughout their career. What is the difference? What, like, what, what distinguishes you from other researchers? Why have you been able to make multiple breakthroughs in your field?
Well, thank you for the kind words. It's hard to answer that question. I mean, I try really hard. I give it everything I've got. And that worked so far. I think that's all there is to it.
Got it. Um, what's the explanation for why there aren't more illicit uses of GPT? Why aren't more foreign governments using it to spread propaganda or scam grandmothers or something?
I mean, maybe they haven't really gotten to do it a lot. But it also wouldn't surprise me if some of it was going on right now. Certainly, I imagine they'd be taking some of the open source models and try and use them for that purpose. Like, I'm sure I would expect this would be something they'd be interested in, in the future.
It's, like, technically possible. They just haven't thought about it enough?
Or haven't, like, done it at scale using their technology, or maybe it's happening. We just don't know it.
Would you be able to track it if it was happening?
I think large-scale tracking is possible, yes. I mean, this requires a small special operation, but it's possible.
Mm-hmm. Um, now, there's some window in which, uh, AI is very economically valuable, on the scale of airplanes, let's say. But we haven't reached AGI yet. How big is that window?
I mean, I think this window... It's hard to give you a precise answer, but it's definitely going to be, like, a good multi-year window. It's also a question of definition because AI, before it becomes AGI, is going to be increasingly more valuable year after year. I'd, I'd say in an exponential way. So at some... In some sense, it may feel like, especially in hindsight, it may feel like there was only one year or two years because those two years were larger than the previous years. But I would say that already, last year, there'd been a fair amount of economic value produced by AI, and next year is going to be larger and larger after that. So I think, like, that there's going to be a good multi- multi-year chunk of time where that's going to be true. I would say from now until AGI, pretty much.