Dwarkesh Podcast — Shane Legg (DeepMind Founder): 2028 AGI, superhuman alignment, new architectures
At a glance
WHAT IT’S REALLY ABOUT
DeepMind’s Shane Legg Predicts 2028 AGI, Maps Path and Risks
- Shane Legg, co‑founder and Chief AGI Scientist at Google DeepMind, discusses how to define and measure AGI as human‑level, broadly general cognitive capability across many domains rather than a single benchmark. He argues current large language models have unlocked a scalable form of understanding but still lack key ingredients like episodic memory, robust system‑2 reasoning, and grounded multimodal perception. Legg is cautiously optimistic that architectural advances, better search/agency on top of foundation models, and improved factuality/memory will remove most remaining technical blockers, making AGI by around 2028 roughly a 50% probability. On safety, he believes containment will fail for very capable systems, so alignment must come from deeply embedding explicit ethical reasoning and oversight into powerful, well‑informed world models, alongside institutional safeguards and evolving safety benchmarks.
IDEAS WORTH REMEMBERING
AGI should be judged by breadth across many human‑like cognitive tasks, not a single benchmark.
Legg defines AGI as a machine that can do the typical range of human cognitive activities at roughly human level; you only call it AGI when extensive, adversarial testing fails to uncover domains where humans clearly outperform it.
Current LLMs lack key cognitive systems like rapid episodic memory and robust system‑2 reasoning.
Transformers mainly have a short‑term context window and slow weight updates, roughly analogous to working memory and long‑term cortical memory, but they lack the brain’s dedicated fast‑learning episodic memory and the explicit deliberative reasoning needed for reliability and sample efficiency.
Architectural innovations, not just more data and compute, are needed to close remaining gaps.
Legg expects targeted changes—new memory systems, better factuality controls, multimodal perception, and integrated search/agent frameworks—to address most shortcomings, rather than fundamental limits of deep learning.
True creativity beyond training data will require integrated search over possibilities, not just pattern mimicry.
Using AlphaGo’s famous move 37 as an example, he argues that real innovation comes from searching large spaces and evaluating unlikely but powerful options, something current LLMs don’t yet do in a robust, agentic way.
Alignment will depend on embedding an explicit, well‑understood ethical reasoning process in powerful models.
Legg suggests capable AIs must have strong world models, deep knowledge of human ethical theories, and a 'system‑2' process that explicitly evaluates possible actions against specified ethical principles, with humans auditing both reasoning and outcomes.
WORDS WORTH SAVING
If you can't find, with some effort, a cognitive task where humans clearly outperform it, then for all practical purposes you now have an AGI.
— Shane Legg
I don't see big walls in front of us. I just see there's more research and work, and these things will improve and probably be adequately solved.
— Shane Legg
To get real creativity, you need to search through spaces of possibilities and find these hidden gems. Current language models don't really do that kind of thing.
— Shane Legg
If the system is really capable, really intelligent, really powerful, trying to somehow contain it or limit it is probably not a winning strategy.
— Shane Legg
We actually need better reasoning, better understanding of the world, and better understanding of ethics in our systems if we want them to be profoundly ethical.
— Shane Legg
High quality AI-generated summary created from speaker-labeled transcript.