Dwarkesh Podcast
Ilya Sutskever (OpenAI Chief Scientist) — Why next-token prediction could surpass human intelligence
Episode Details
EPISODE INFO
- Released
- March 27, 2023
- Duration
- 47m
- Channel
- Dwarkesh Podcast
- Watch on YouTube
EPISODE DESCRIPTION
I asked Ilya Sutskever (Chief Scientist of OpenAI) about:
- time to AGI
- leaks and spies
- what's after generative models
- post AGI futures
- working with MSFT and competing with Google
- difficulty of aligning superhuman AI
Hope you enjoy it as much as I did!
EPISODE LINKS
- Transcript: https://www.dwarkeshpatel.com/p/ilya-sutskever
- Apple Podcasts: https://apple.co/42H6c4D
- Spotify: https://spoti.fi/3LRqOBd
- Follow me on Twitter: https://twitter.com/dwarkesh_sp
TIMESTAMPS
- 00:00:00 - Time to AGI
- 00:05:57 - What's after generative models?
- 00:10:57 - Data, models, and research
- 00:15:27 - Alignment
- 00:20:53 - Post AGI Future
- 00:26:56 - New ideas are overrated
- 00:36:22 - Is progress inevitable?
- 00:41:27 - Future Breakthroughs
SPEAKERS
- Ilya Sutskever (guest)
- Dwarkesh Patel (host)
- Narrator (other)
EPISODE SUMMARY
In this episode of the Dwarkesh Podcast, Ilya Sutskever, OpenAI's cofounder and chief scientist, defends next-token prediction as a path to superintelligence. He argues that next-token prediction, when scaled sufficiently, can both match and surpass human intelligence by implicitly modeling the underlying reality and human cognition that generate language.