
No Priors Ep. 39 | With OpenAI Co-Founder & Chief Scientist Ilya Sutskever
Elad Gil (host), Ilya Sutskever (guest), Sarah Guo (host), Narrator
Ilya Sutskever on scaling, safety, and the road to superintelligence
OpenAI co‑founder and chief scientist Ilya Sutskever traces the evolution of deep learning from the pre‑AlexNet ‘dark ages’ to today’s large‑scale transformer models, explaining why size, data, and compute were the key contrarian bets that worked. He describes OpenAI’s original AGI mission, why the organization shifted from a nonprofit to a capped‑profit structure, and how its research strategy moved from narrow projects like Dota 2 to general‑purpose language models. A major theme is the growing importance of reliability in large models and the emerging ecosystem role for both small open‑source models and large proprietary ones. Sutskever also outlines OpenAI’s “superalignment” initiative, arguing that we must start now on methods to keep future superintelligent, potentially quasi‑autonomous AI systems deeply pro‑social and safe for humanity.
Key Takeaways
Bigger neural networks plus sufficient data and compute unlock qualitatively new behavior.
Sutskever argues early neural nets failed mainly because they were too small; once GPUs, large datasets, and better training tricks arrived, scaling up architectures like convolutional nets and later transformers produced unprecedented capabilities.
OpenAI’s core goal—AGI that benefits all humanity—has remained constant while tactics changed.
The organization shifted from its original nonprofit, open‑source ideals to a capped‑profit structure because massive compute needs could not realistically be met by a pure nonprofit, yet it still wanted to limit profit incentives around potentially society‑transforming AGI.
Reliability, not just raw capability, is now the main bottleneck for real‑world deployment.
As models grow, they can answer many questions impressively but still fail unpredictably on similarly difficult ones; for high‑stakes uses like legal or financial decisions, this inconsistency prevents full trust without human verification.
An ecosystem of models of different sizes will emerge, each suited to different applications.
Smaller open or proprietary models (e.g., …)
Open source is beneficial at current capability levels but becomes ambiguous at superhuman levels.
Today’s open models empower control and customization for organizations, but Sutskever warns that future systems capable of autonomously building companies or doing advanced science could have unpredictable impacts if fully open‑sourced.
Data scarcity is a near‑term scaling limit, but solvable; alignment is the deeper long‑term challenge.
While web‑scale data may saturate, Sutskever believes research will overcome data constraints, so the more urgent research frontier is designing methods to control and guide much more capable future systems.
Superalignment aims to imprint strong pro‑human, pro‑social motivations in future superintelligences.
Accepting that data‑center‑scale intelligences might surpass humans in insight and learning speed, OpenAI is investing early in the science of ensuring such systems hold “warm and positive feelings” toward humanity and behave safely even when highly autonomous.
Notable Quotes
“The most surprising thing, if I had to pick one, would be the fact that when I speak to it, I feel understood.”
— Ilya Sutskever
“The goal of OpenAI from the very beginning has been to make sure that artificial general intelligence… benefits all of humanity.”
— Ilya Sutskever
“I would argue that at this point, it is reliability that's the biggest bottleneck to these models being truly useful.”
— Ilya Sutskever
“In the end of the day, intelligence is power.”
— Ilya Sutskever
“If such super intelligent data centers are being built at all, we want those data centers to hold warm and positive feelings towards people, towards humanity.”
— Ilya Sutskever
Questions Answered in This Episode
How can we quantitatively measure and benchmark the kind of reliability Sutskever argues is necessary for high‑stakes AI applications?
What concrete technical approaches might overcome the looming ‘data limit’ for further scaling large models?
How should policymakers and researchers decide where to draw the line on open‑sourcing increasingly capable AI systems?
What does an effective superalignment research agenda look like in practice over the next five to ten years?
If future AI systems become partially autonomous ‘digital life,’ what governance structures or rights, if any, should they have while still keeping humans safe and in control?
Transcript Preview
(instrumental music) OpenAI, a company we all know now but that only a year ago was 100 people, is changing the world. Their research is leading the charge to AGI. Since ChatGPT captured consumer attention last November, they've shown no signs of slowing down. This week, Elad and I sit down with Ilya Sutskever, co-founder and chief scientist at OpenAI, to discuss the state of AI research, where we'll hit limits, the future of AGI, and what it's gonna take to reach superalignment. Ilya, welcome to No Priors.
Thank you. It's good to be here.
Let's start at the beginning. Pre-AlexNet, nothing in deep learning was really working, and given that environment, you guys took a unique bet. What motivated you to go in this direction?
Indeed, in those dark ages, AI was not a field where people had hope, and people were not accustomed to any kind of success at all. Because there hadn't been any success, there was a lot of debate, and different schools of thought made different arguments about how machine learning and AI should be done. You had people who were into knowledge representation from good old-fashioned AI. You had people who were Bayesians and liked Bayesian nonparametric methods. You had people who liked graphical models, and you had the people who liked neural networks. Those people were marginalized, because you can't prove math theorems about neural networks, and if you can't prove theorems about something, it means your research isn't good. That's how it was.

But the reason I gravitated to neural networks from the beginning is that they felt like small little brains. Who cares if you can't prove any theorems about them? We were training small little brains, and maybe they'd do something one day.

The reason we were able to do AlexNet when we did is a combination of factors. The first factor is that this was shortly after GPUs started to be used in machine learning. People had an intuition that this was a good thing to do, but it wasn't like today, where people know exactly what they need GPUs for. It was more, "Let's play with those cool, fast computers and see what we can do with them." GPUs were an especially good fit for neural networks, so that definitely helped us. Second, I was very fortunate to realize that the reason neural networks of the time weren't good is that they were too small. If you try to solve a vision task with a neural network that has, say, 1,000 neurons, what can it do? It can't do anything, no matter how good your learning algorithm is. But if you have a much larger neural network, it will do something unprecedented.