Dwarkesh Podcast
Shane Legg (DeepMind Founder) — 2028 AGI, superhuman alignment, new architectures
Episode Details
EPISODE INFO
- Released: October 26, 2023
- Duration: 44m
- Channel: Dwarkesh Podcast
EPISODE DESCRIPTION
I had a lot of fun chatting with Shane Legg - Founder & Chief AGI Scientist, Google DeepMind! We discuss:
- Why he expects AGI around 2028
- How to align superhuman models
- What new architectures are needed for AGI
- Has DeepMind sped up capabilities or safety more?
- Why multimodality will be the next big landmark
- & much more
EPISODE LINKS
- Transcript: https://www.dwarkeshpatel.com/p/shane-legg
- Apple Podcasts: https://podcasts.apple.com/us/podcast/shane-legg-deepmind-founder-2028-agi-new-architectures/id1516093381?i=1000632720307
- Spotify: https://open.spotify.com/episode/0Ru2CtaJqsQ5mpA5dqHWAK?si=4AsglwIZQpqht7p9Wpc_CA
- Twitter: https://twitter.com/dwarkesh_sp/status/1717566262472237134
TIMESTAMPS
- 00:00:00 - Measuring AGI
- 00:11:41 - Do we need new architectures?
- 00:16:26 - Is search needed for creativity?
- 00:19:19 - Superhuman alignment
- 00:29:58 - Impact of DeepMind on safety vs capabilities
- 00:34:03 - Timelines
- 00:41:24 - Multimodality
SPEAKERS
- Dwarkesh Patel (host)
- Shane Legg (guest)
- Narrator (other)
EPISODE SUMMARY
Shane Legg, co‑founder and Chief AGI Scientist at Google DeepMind, discusses how to define and measure AGI as human‑level, broadly general cognitive capability across many domains rather than as performance on a single benchmark. He argues that current large language models have unlocked a scalable form of understanding but still lack key ingredients such as episodic memory, robust system‑2 reasoning, and grounded multimodal perception. Legg is cautiously optimistic that architectural advances, better search and agency built on top of foundation models, and improved factuality and memory will remove most remaining technical blockers, putting AGI by around 2028 at roughly a 50% probability. On safety, he believes containment will fail for very capable systems, so alignment must come from deeply embedding explicit ethical reasoning and oversight into powerful, well‑informed world models, alongside institutional safeguards and evolving safety benchmarks.