Shane Legg (DeepMind Founder) — 2028 AGI, superhuman alignment, new architectures

Dwarkesh Podcast · Oct 26, 2023 · 44m

Dwarkesh Patel (host), Shane Legg (guest), Narrator

Topics covered:

Defining and operationally measuring AGI and general intelligence

Limits of current benchmarks and missing cognitive capabilities (episodic memory, video understanding)

Architectural gaps in today’s LLMs and the role of new memory and system‑2 components

Foundation models as sequence predictors, world models, and bases for agents/search

Alignment strategies for human‑level and superhuman AI, including ethics and reasoning

DeepMind’s impact on AI capabilities vs. safety, and historical perspective on timelines

Forecasts for AGI timing, near‑term trends, and the importance of multimodal models

In this episode of the Dwarkesh Podcast, Dwarkesh Patel interviews Shane Legg, co‑founder and Chief AGI Scientist of Google DeepMind, about how to define and measure AGI, the cognitive components still missing from today’s models, how to align superhuman systems, and why he continues to put roughly even odds on AGI by 2028.

DeepMind’s Shane Legg Predicts 2028 AGI, Maps Path and Risks

Shane Legg, co‑founder and Chief AGI Scientist at Google DeepMind, discusses how to define and measure AGI as human‑level, broadly general cognitive capability across many domains rather than a single benchmark. He argues current large language models have unlocked a scalable form of understanding but still lack key ingredients like episodic memory, robust system‑2 reasoning, and grounded multimodal perception. Legg is cautiously optimistic that architectural advances, better search/agency on top of foundation models, and improved factuality/memory will remove most remaining technical blockers, making AGI by around 2028 roughly a 50% probability. On safety, he believes containment will fail for very capable systems, so alignment must come from deeply embedding explicit ethical reasoning and oversight into powerful, well‑informed world models, alongside institutional safeguards and evolving safety benchmarks.

Key Takeaways

AGI should be judged by breadth across many human‑like cognitive tasks, not a single benchmark.

Legg defines AGI as a machine that can do the typical range of human cognitive activities at roughly human level; you only call it AGI when extensive, adversarial testing fails to uncover domains where humans clearly outperform it.
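One hedged way to operationalize this “adversarial testing” criterion is a toy harness that compares an AI’s scores against human baselines and reports any task family where humans still clearly win. The task names, scores, and margin below are all invented for illustration, not real benchmarks:

```python
# Toy operational test for Legg's criterion: call a system AGI only when
# adversarial probing fails to find a task where humans clearly outperform it.
human_baseline = {"arithmetic": 0.95, "summarization": 0.90, "planning": 0.85}

def remaining_gaps(ai_scores, margin=0.05):
    """Return task families where humans still beat the AI by more than `margin`."""
    gaps = {task: human_baseline[task] - score for task, score in ai_scores.items()}
    return [task for task, gap in gaps.items() if gap > margin]

# A hypothetical system that matches humans except on planning:
print(remaining_gaps({"arithmetic": 0.99, "summarization": 0.92, "planning": 0.70}))
# ['planning']  -> not yet AGI under this criterion
```

An empty list means the probing effort found no task where humans clearly win, which is the practical threshold Legg describes.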

Current LLMs lack key cognitive systems like rapid episodic memory and robust system‑2 reasoning.

Transformers mainly have a short‑term 'context window' and slow weight updates, analogous to working and long‑term cortical memory, but miss the brain’s dedicated, fast‑learning episodic memory and explicit deliberative reasoning needed for reliability and sample efficiency.
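The distinction between a transformer’s slow weight updates and the brain’s fast episodic store can be sketched as a retrieval memory that writes each experience once and recalls it by similarity, with no gradient steps. Everything here, including the bag‑of‑words stand‑in for a learned embedding, is a hypothetical toy rather than a description of any DeepMind system:

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words counts (a stand-in for a learned encoder)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class EpisodicMemory:
    """One-shot store: each episode is written once and recalled by similarity,
    with no weight updates -- unlike slow gradient-based learning."""
    def __init__(self):
        self.episodes = []  # list of (key_vector, payload)

    def write(self, text):
        self.episodes.append((embed(text), text))

    def recall(self, query, k=1):
        ranked = sorted(self.episodes,
                        key=lambda ep: cosine(embed(query), ep[0]),
                        reverse=True)
        return [payload for _, payload in ranked[:k]]

mem = EpisodicMemory()
mem.write("met Alice at the conference to discuss scaling laws")
mem.write("debugged the tokenizer's handling of unicode")
print(mem.recall("scaling laws discussion"))
# ['met Alice at the conference to discuss scaling laws']
```

The point of the sketch is sample efficiency: a single write suffices for later recall, whereas encoding the same fact into weights would take many gradient updates.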

Architectural innovations, not just more data and compute, are needed to close remaining gaps.

Legg expects targeted changes—new memory systems, better factuality controls, multimodal perception, and integrated search/agent frameworks—to address most shortcomings, rather than fundamental limits of deep learning.

True creativity beyond training data will require integrated search over possibilities, not just pattern mimicry.

Using AlphaGo’s famous move 37 as an example, he argues that real innovation comes from searching large spaces and evaluating unlikely but powerful options, something current LLMs don’t yet do in a robust, agentic way.
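The contrast between mimicking likely continuations and searching for low‑prior “hidden gems” like move 37 can be illustrated with a toy example. The move priors and the evaluator below are purely hypothetical:

```python
import itertools

# Candidate "moves", each with a prior reflecting how typical it looks.
# "c" is the unlikely option a mimicry policy would almost never pick.
moves = {"a": 0.6, "b": 0.3, "c": 0.1}

def value(sequence):
    """Hypothetical evaluator: the rare opening ('c', 'a') is actually strong."""
    return 10.0 if sequence[:2] == ("c", "a") else sum(moves[m] for m in sequence)

# Mimicry: greedily repeat the highest-prior move -> never finds the gem.
greedy = tuple(sorted(moves, key=moves.get, reverse=True)[:1]) * 2

# Search: enumerate two-move sequences and keep the best under the evaluator.
best = max(itertools.product(moves, repeat=2), key=value)

print(greedy, value(greedy))  # ('a', 'a') 1.2
print(best, value(best))      # ('c', 'a') 10.0
```

The greedy policy scores each step by likelihood and misses the high‑value but low‑prior sequence; only the search over the full space, paired with an evaluator, surfaces it. This is the role search played in AlphaGo, and what Legg says current LLMs do not yet do robustly.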

Alignment will depend on embedding an explicit, well‑understood ethical reasoning process in powerful models.

Legg suggests capable AIs must have strong world models, deep knowledge of human ethical theories, and a 'system‑2' process that explicitly evaluates possible actions against specified ethical principles, with humans auditing both reasoning and outcomes.
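A minimal sketch of what such an auditable “system‑2” ethics pass might look like: each candidate action is scored against explicit, named principles, and the per‑principle trace is retained for human review. The principles, weights, and actions are invented for illustration and do not reflect any actual DeepMind design:

```python
# Toy system-2 ethics check: every candidate action is scored against explicit
# principles, and the full reasoning trace is kept for human audit.
principles = {
    "avoid_harm":  lambda action: -5 if action.get("harm", 0) > 0 else 0,
    "honesty":     lambda action: -3 if action.get("deceptive") else 0,
    "helpfulness": lambda action: action.get("benefit", 0),
}

def evaluate(action):
    trace = {name: rule(action) for name, rule in principles.items()}
    return sum(trace.values()), trace  # total score plus auditable breakdown

candidates = [
    {"name": "exaggerate results", "deceptive": True, "benefit": 2},
    {"name": "report accurately", "benefit": 1},
]

scored = [(a["name"], *evaluate(a)) for a in candidates]
best = max(scored, key=lambda s: s[1])
print(best[0], best[1])  # report accurately 1
```

The key property is that the choice is explainable: the stored trace shows exactly which principle penalized each rejected action, which is what makes human auditing of the reasoning, not just the outcome, possible.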

Safety via containment alone is unlikely to work for very capable AGI systems.

Because sufficiently advanced systems will be extremely powerful, he believes the focus must be on making them fundamentally value‑aligned and ethical from the outset, rather than trying to box or hard‑limit them post‑hoc.

AGI by ~2028 remains plausible, driven by scalable algorithms plus exponential compute and data.

Based on trends he identified as early as 2001, Legg still assigns ~50% probability to human‑level AGI by 2028, expecting the intervening years to bring steadily improving, multimodal, less‑delusional and more broadly useful models.

Notable Quotes

If you can't find, with some effort, a cognitive task where humans clearly outperform it, then for all practical purposes you now have an AGI.

Shane Legg

I don't see big walls in front of us. I just see there's more research and work, and these things will improve and probably be adequately solved.

Shane Legg

To get real creativity, you need to search through spaces of possibilities and find these hidden gems. Current language models don't really do that kind of a thing.

Shane Legg

If the system is really capable, really intelligent, really powerful, trying to somehow contain it or limit it is probably not a winning strategy.

Shane Legg

We actually need better reasoning, better understanding of the world, and better understanding of ethics in our systems if we want them to be profoundly ethical.

Shane Legg

Questions Answered in This Episode

How can we rigorously design a test suite for AGI that is both comprehensive and adaptable as new human‑like tasks are identified?

What concrete architectural designs for episodic memory and system‑2 reasoning look most promising to integrate with current transformer‑based models?

How do we choose which ethical framework(s) an AGI should follow, given cultural and philosophical disagreements about values?

In practice, how can humans reliably audit an AGI’s internal ethical reasoning and world‑model in high‑stakes, time‑sensitive situations?

What kinds of institutional and governance structures are needed to ensure that rapidly advancing capabilities are matched by equally robust safety and alignment standards?

Transcript Preview

Dwarkesh Patel

Okay. Today, I have the pleasure of interviewing Shane Legg, who is a founder and the chief AGI scientist of Google DeepMind. Shane, welcome to the podcast.

Shane Legg

Thank you. It's a pleasure to be here.

Dwarkesh Patel

So first question, how do we measure progress towards AGI concretely? So we have these loss numbers, and we can see how the loss improves from one model to another, but it's just a number. How do we interpret this? How do we see w- how much progress we're actually making?

Shane Legg

That's a, that's a hard question (laughs) actually. Um, AGI, by its definition, is about generality. So it's not about doing a specific thing. It's much easier to measure performance when you have a very specific thing in mind, because you can construct a test around that. Well, maybe I should first of all explain, what do I mean by AGI? 'Cause there are a few different notions around. When I say AGI, I mean, um, a, a machine that can do the sorts of, uh, cognitive things that people can typically do, possibly more. But that's, to be an AGI, that's kind of the, the bar you need to meet. So if we want to test whether we're, we're meeting this threshold or we're getting close to the threshold, what we actually need then is, um, a lot of different kinds of measurements and tests of all the, spans the breadth of all the sorts of cognitive tasks that people can do. And then to have a sense of what is human performance, you know, on the, on these sorts of tasks, and that'll then allows us to sort of judge whether or not we're, we're there. It's difficult because you'll never have a complete set of everything that people can do, because it's, you know, such a large set. But I think that if you ever get to the point where you have a, have a, have a pretty good range of tests of all sorts of different things that people do, cognitive things that people can do, and you have an AI system which can meet human performance and all those things, and with some effort you can't actually come up with new examples of cognitive tasks where the machine is below human performance, then at that point, it's conceptually possible that there is something that the, um, the machine can't do that people can do. But if you can't find it with some effort, I think for all practical purposes, you now have an AGI.

Dwarkesh Patel

So, uh, let's get more concrete. Um, and, uh, bl- you know, we measure the performance of these large language models on MMLU or something, and maybe you can explain what all these different benchmarks are. But the ones we use right now that you might see in a paper, what, what are they missing? What aspect of human cognition do they not measure a- adequately?

Shane Legg

Ooh, yeah. Another hard question. (laughs)
