No Priors

No Priors Ep. 39 | With OpenAI Co-Founder & Chief Scientist Ilya Sutskever

Each iteration of ChatGPT has demonstrated remarkable step-function capabilities. But what’s next? Ilya Sutskever, Co-Founder & Chief Scientist at OpenAI, joins Sarah Guo and Elad Gil to discuss the origins of OpenAI as a capped-profit company, early emergent behaviors of GPT models, the token-scarcity issue, the next frontiers of AI research, his argument for working on AI safety now, and the premise of Superalignment. Plus: how do we define digital life?

Ilya Sutskever is Co-Founder and Chief Scientist of OpenAI. He leads research at OpenAI and is one of the architects behind the GPT models. He co-leads OpenAI’s new “Superalignment” project, which aims to solve the alignment of superintelligence within four years. Prior to OpenAI, Ilya was a co-inventor of AlexNet and Sequence to Sequence Learning. He earned his Ph.D. in Computer Science from the University of Toronto.

00:00 - Early Days of AI Research
06:49 - Origins of OpenAI & Capped-Profit Structure
13:54 - Emergent Behaviors of GPT Models
18:05 - Model Scale Over Time & Reliability
23:51 - Roles & Boundaries of Open Source in the AI Ecosystem
28:38 - Comparing AI Systems to Biological & Human Intelligence
32:56 - Definition of Digital Life
35:11 - Superalignment & Creating Pro-Human AI
41:20 - Accelerating & Decelerating Forces

Elad Gil (host) · Ilya Sutskever (guest) · Sarah Guo (host)
Nov 1, 2023 · 41m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Ilya Sutskever on scaling, safety, and the road to superintelligence

OpenAI co‑founder and chief scientist Ilya Sutskever traces the evolution of deep learning from the pre‑AlexNet ‘dark ages’ to today’s large‑scale transformer models, explaining why size, data, and compute were the key contrarian bets that worked. He describes OpenAI’s original AGI mission, why the organization shifted from a nonprofit to a capped‑profit structure, and how its research strategy moved from narrow projects like Dota 2 to general‑purpose language models. A major theme is the growing importance of reliability in large models and the emerging ecosystem role for both small open‑source models and large proprietary ones. Sutskever also outlines OpenAI’s “superalignment” initiative, arguing that we must start now on methods to keep future superintelligent, potentially quasi‑autonomous AI systems deeply pro‑social and safe for humanity.

IDEAS WORTH REMEMBERING

5 ideas

Bigger neural networks plus sufficient data and compute unlock qualitatively new behavior.

Sutskever argues early neural nets failed mainly because they were too small; once GPUs, large datasets, and better training tricks arrived, scaling up architectures like convolutional nets and later transformers produced unprecedented capabilities.

OpenAI’s core goal—AGI that benefits all humanity—has remained constant while tactics changed.

The organization shifted from its original nonprofit, open‑source ideals to a capped‑profit structure because the massive compute it needed could not realistically be funded by a pure nonprofit, yet it still wanted to limit profit incentives around potentially society‑transforming AGI.

Reliability, not just raw capability, is now the main bottleneck for real‑world deployment.

As models grow, they can answer many questions impressively but still fail unpredictably on similarly difficult ones; for high‑stakes uses like legal or financial decisions, this inconsistency prevents full trust without human verification.

An ecosystem of models of different sizes will emerge, each suited to different applications.

Smaller open or proprietary models (e.g., 7B–34B) will power cost‑sensitive, narrower use cases, while large frontier models justify their expense in domains where high capability and reliability (e.g., expert advice, complex reasoning) create outsized value.

Open source is beneficial at current capability levels but becomes ambiguous at superhuman levels.

Today’s open models empower control and customization for organizations, but Sutskever warns that future systems capable of autonomously building companies or doing advanced science could have unpredictable impacts if fully open‑sourced.

WORDS WORTH SAVING

5 quotes

The most surprising thing, if I had to pick one, would be the fact that when I speak to it, I feel understood.

Ilya Sutskever

The goal of OpenAI from the very beginning has been to make sure that artificial general intelligence… benefits all of humanity.

Ilya Sutskever

I would argue that at this point, it is reliability that's the biggest bottleneck to these models being truly useful.

Ilya Sutskever

In the end of the day, intelligence is power.

Ilya Sutskever

If such superintelligent data centers are being built at all, we want those data centers to hold warm and positive feelings towards people, towards humanity.

Ilya Sutskever

TOPICS

Early neural networks, AlexNet, and the contrarian bet on scale
Origins, mission, and capped‑profit structure of OpenAI
Shift from narrow ML projects to large transformer language models (GPT series)
Model scaling, reliability, and trade‑offs between large and small models
Role and future risks/benefits of open‑source AI models
Limits to scaling: data constraints, compute, and engineering complexity
Superalignment: aligning future superintelligent and potentially autonomous AI with human values

High-quality AI-generated summary created from a speaker-labeled transcript.
