Dwarkesh Podcast

Shane Legg (DeepMind Founder) — 2028 AGI, superhuman alignment, new architectures

I had a lot of fun chatting with Shane Legg, Founder & Chief AGI Scientist at Google DeepMind! We discuss:

* Why he expects AGI around 2028
* How to align superhuman models
* What new architectures are needed for AGI
* Whether DeepMind has sped up capabilities or safety more
* Why multimodality will be the next big landmark
* & much more

EPISODE LINKS

* Transcript: https://www.dwarkeshpatel.com/p/shane-legg
* Apple Podcasts: https://podcasts.apple.com/us/podcast/shane-legg-deepmind-founder-2028-agi-new-architectures/id1516093381?i=1000632720307
* Spotify: https://open.spotify.com/episode/0Ru2CtaJqsQ5mpA5dqHWAK?si=4AsglwIZQpqht7p9Wpc_CA
* Twitter: https://twitter.com/dwarkesh_sp/status/1717566262472237134

TIMESTAMPS

* 00:00:00 - Measuring AGI
* 00:11:41 - Do we need new architectures?
* 00:16:26 - Is search needed for creativity?
* 00:19:19 - Superhuman alignment
* 00:29:58 - Impact of DeepMind on safety vs. capabilities
* 00:34:03 - Timelines
* 00:41:24 - Multimodality

Dwarkesh Patel (host) · Shane Legg (guest)
Oct 26, 2023 · 44m

At a glance

WHAT IT’S REALLY ABOUT

DeepMind’s Shane Legg Predicts 2028 AGI, Maps Path and Risks

  1. Shane Legg, co‑founder and Chief AGI Scientist at Google DeepMind, discusses how to define and measure AGI as human‑level, broadly general cognitive capability across many domains rather than a single benchmark. He argues current large language models have unlocked a scalable form of understanding but still lack key ingredients like episodic memory, robust system‑2 reasoning, and grounded multimodal perception. Legg is cautiously optimistic that architectural advances, better search/agency on top of foundation models, and improved factuality/memory will remove most remaining technical blockers, making AGI by around 2028 roughly a 50% probability. On safety, he believes containment will fail for very capable systems, so alignment must come from deeply embedding explicit ethical reasoning and oversight into powerful, well‑informed world models, alongside institutional safeguards and evolving safety benchmarks.

IDEAS WORTH REMEMBERING

5 ideas

AGI should be judged by breadth across many human‑like cognitive tasks, not a single benchmark.

Legg defines AGI as a machine that can do the typical range of human cognitive activities at roughly human level; you only call it AGI when extensive, adversarial testing fails to uncover domains where humans clearly outperform it.

Current LLMs lack key cognitive systems like rapid episodic memory and robust system‑2 reasoning.

Transformers mainly have a short‑term 'context window' and slow weight updates, analogous to working and long‑term cortical memory, but miss the brain’s dedicated, fast‑learning episodic memory and explicit deliberative reasoning needed for reliability and sample efficiency.

Architectural innovations, not just more data and compute, are needed to close remaining gaps.

Legg expects targeted changes—new memory systems, better factuality controls, multimodal perception, and integrated search/agent frameworks—to address most shortcomings, rather than fundamental limits of deep learning.

True creativity beyond training data will require integrated search over possibilities, not just pattern mimicry.

Using AlphaGo’s famous move 37 as an example, he argues that real innovation comes from searching large spaces and evaluating unlikely but powerful options, something current LLMs don’t yet do in a robust, agentic way.

Alignment will depend on embedding an explicit, well‑understood ethical reasoning process in powerful models.

Legg suggests capable AIs must have strong world models, deep knowledge of human ethical theories, and a 'system‑2' process that explicitly evaluates possible actions against specified ethical principles, with humans auditing both reasoning and outcomes.

WORDS WORTH SAVING

5 quotes

If you can't find, with some effort, a cognitive task where humans clearly outperform it, then for all practical purposes you now have an AGI.

Shane Legg

I don't see big walls in front of us. I just see there's more research and work, and these things will improve and probably be adequately solved.

Shane Legg

To get real creativity, you need to search through spaces of possibilities and find these hidden gems. Current language models don't really do that kind of a thing.

Shane Legg

If the system is really capable, really intelligent, really powerful, trying to somehow contain it or limit it is probably not a winning strategy.

Shane Legg

We actually need better reasoning, better understanding of the world, and better understanding of ethics in our systems if we want them to be profoundly ethical.

Shane Legg

TOPICS

* Defining and operationally measuring AGI and general intelligence
* Limits of current benchmarks and missing cognitive capabilities (episodic memory, video understanding)
* Architectural gaps in today’s LLMs and the role of new memory and system‑2 components
* Foundation models as sequence predictors, world models, and bases for agents/search
* Alignment strategies for human‑level and superhuman AI, including ethics and reasoning
* DeepMind’s impact on AI capabilities vs. safety, and historical perspective on timelines
* Forecasts for AGI timing, near‑term trends, and the importance of multimodal models

High quality AI-generated summary created from speaker-labeled transcript.
