No Priors Ep. 39 | With OpenAI Co-Founder & Chief Scientist Ilya Sutskever
Episode Details
EPISODE INFO
- Released
- November 2, 2023
- Duration
- 41m
- Channel
- No Priors
- Watch on YouTube
EPISODE DESCRIPTION
Each iteration of ChatGPT has demonstrated remarkable step-function capabilities. But what's next? Ilya Sutskever, Co-Founder & Chief Scientist at OpenAI, joins Sarah Guo and Elad Gil to discuss the origins of OpenAI as a capped-profit company, early emergent behaviors of GPT models, the token scarcity issue, the next frontiers of AI research, his argument for working on AI safety now, and the premise of Superalignment. Plus, how do we define digital life? Ilya Sutskever is Co-Founder and Chief Scientist of OpenAI. He leads research at OpenAI and is one of the architects behind the GPT models. He co-leads OpenAI's new "Superalignment" project, which aims to solve the alignment of superintelligence within four years. Prior to OpenAI, Ilya was a co-inventor of AlexNet and Sequence to Sequence Learning. He earned his Ph.D. in Computer Science from the University of Toronto.
- 00:00 - Early Days of AI Research
- 06:49 - Origins of OpenAI & Capped-Profit Structure
- 13:54 - Emergent Behaviors of GPT Models
- 18:05 - Model Scale Over Time & Reliability
- 23:51 - Roles & Boundaries of Open Source in the AI Ecosystem
- 28:38 - Comparing AI Systems to Biological & Human Intelligence
- 32:56 - Definition of Digital Life
- 35:11 - Superalignment & Creating Pro-Human AI
- 41:20 - Accelerating & Decelerating Forces
SPEAKERS
- Elad Gil (host)
- Ilya Sutskever (guest)
- Sarah Guo (host)
- Narrator (other)
EPISODE SUMMARY
In this episode of No Priors, OpenAI co-founder and chief scientist Ilya Sutskever traces the evolution of deep learning from the pre-AlexNet "dark ages" to today's large-scale transformer models, explaining why size, data, and compute were the key contrarian bets that worked. He describes OpenAI's original AGI mission, why the organization shifted from a nonprofit to a capped-profit structure, and how its research strategy moved from narrow projects like Dota 2 to general-purpose language models. A major theme is the growing importance of reliability in large models and the emerging ecosystem roles for both small open-source models and large proprietary ones. Sutskever also outlines OpenAI's "Superalignment" initiative, arguing that work must start now on methods to keep future superintelligent, potentially quasi-autonomous AI systems deeply pro-social and safe for humanity.