
Demis Hassabis: Why LLMs Will Not Commoditize & Why We Have Not Hit Scaling Laws
Demis Hassabis (guest), Harry Stebbings (host)
In this episode of The Twenty Minute VC, Harry Stebbings sits down with Demis Hassabis of DeepMind to discuss AGI timelines, scaling laws, AI safety, and AI-driven science breakthroughs.
Demis Hassabis on AGI timeline, scaling, safety, and science breakthroughs
Hassabis defines AGI as matching the full set of human cognitive capabilities and estimates a strong chance of reaching it within five years based on compute and algorithmic progress trends.
He argues scaling laws have not “hit a wall,” but returns are naturally less explosive than early generations, with compute remaining the main bottleneck both for training and for running meaningful experiments.
DeepMind’s recent acceleration is attributed to consolidating talent and compute across Google and operating with startup-like focus to build larger frontier systems faster.
Key missing capabilities include continual learning, better memory architectures, long-horizon planning, and improved consistency to reduce today’s “jagged intelligence.”
He advocates international minimum safety standards and independent auditing (akin to an atomic-agency model) to mitigate misuse and ensure increasingly agentic systems remain controllable.
Key Takeaways
AGI is benchmarked to the human mind, not narrow test scores.
Hassabis uses humans as the only proven example of general intelligence, so AGI must exhibit the full range of cognitive capabilities rather than excelling at a subset of tasks.
Compute limits progress twice: scaling models and validating ideas.
Beyond training bigger systems, labs need massive compute to test new algorithmic ideas at realistic scale; otherwise promising concepts often fail when integrated into frontier models.
Scaling returns are moderating, but not exhausted.
He rejects the “plateau” framing: performance gains are no longer near-doubling each generation, yet remain substantial enough that frontier labs still see strong ROI from scaling.
Continual learning is a major unsolved capability gap.
Current models struggle to incorporate new knowledge post-training without degrading prior capabilities; Hassabis points to brain-like "consolidation" as a possible direction.
Long context windows are a brute-force stand-in for real memory.
He expects new architectures for memory—beyond stuffing everything into context—to improve efficiency and reliability, especially for agentic systems operating over time.
Frontier advantage will increasingly come from new algorithms, not just bigger models.
As “the juice” from current methods gets wrung out, labs that can invent novel algorithmic ideas should pull away, reducing the likelihood of near-term commoditization at the frontier.
AI drug discovery will likely improve in two phases: design first, then faster trials via trust.
Isomorphic aims to build a general drug design engine (chemistry, toxicity, properties) within 5–10 years, while regulatory acceleration may follow only after multiple AI-designed drugs validate model predictions end-to-end.
Notable Quotes
“We’ve always defined AGI as basically a system that exhibits all the cognitive capabilities the human mind has.”
— Demis Hassabis
“There’s a very good chance of it being within the next five years.”
— Demis Hassabis
“No, I don’t think so… the returns are still very substantial, although they’re a bit less than they were.”
— Demis Hassabis
“I sometimes call these systems jagged intelligences.”
— Demis Hassabis
“Those labs that have capability to invent new algorithmic ideas are gonna start having bigger advantage… as the last set of ideas… all the juice has been wrung out of them.”
— Demis Hassabis
Questions Answered in This Episode
On your definition of AGI, which specific human capabilities (e.g., theory of mind, causal reasoning, transfer) are the hardest remaining pieces to demonstrate convincingly?
Hassabis defines AGI as matching the full set of human cognitive capabilities and estimates a strong chance of reaching it within five years based on compute and algorithmic progress trends.
What concrete technical approaches do you think are most promising for continual learning without catastrophic forgetting—replay buffers, modular networks, sleep-like phases, or something else?
Current models struggle to incorporate new knowledge post-training without degrading prior capabilities; Hassabis points to brain-like "consolidation" as a possible direction rather than committing to a specific technique.
You describe long context as brute force; what would a “proper” memory system look like in a production agent, and how would it be trained and audited?
He expects new memory architectures, beyond stuffing everything into context, to improve efficiency and reliability, especially for agentic systems operating over time.
When people claim scaling laws are plateauing, what metrics do you think they’re misreading—benchmarks, real-world task performance, or cost/performance curves?
He rejects the "plateau" framing: performance gains are no longer near-doubling each generation, yet remain substantial enough that frontier labs still see strong returns from scaling.
What would be the clearest sign that frontier models are starting to commoditize—or conversely, that the gap between top labs is structurally widening?
As the gains from current methods are wrung out, labs that can invent novel algorithmic ideas should pull away, which reduces the likelihood of near-term commoditization at the frontier.
Transcript Preview
I would say about 90% of the breakthroughs that underpin the modern AI industry were done either by Google Brain or Google Research or DeepMind, so one of our groups. The returns are kind of still very substantial, although they're a bit less than they were obviously at the start of all of this scaling.
We have amazing guests on the show, but very few honestly will be considered in the same realm as Newton, Turing, Einstein. Our guest today is one of the greatest minds on the planet, and I consider myself incredibly lucky to have had the chance to sit down with him.
Those labs that have capability to invent new algorithmic ideas are gonna start having bigger advantage over the next few years, as the last set of ideas, all the juice has been wrung out of them.
This is a truly special one, and one that I'll remember for a very long time.
I think we could probably get 30, 40% more efficiency out of our national grids.
Enjoy the episode, and I so appreciate the time we had with a very special human being.
I sometimes quantify the coming of AGI as 10 times the Industrial Revolution at 10 times the speed.
I'm thrilled to welcome Demis Hassabis at DeepMind. [clapperboard clicks] Ready to go? [upbeat music] Demis, I'm so excited to be doing this. Thank you so much for joining me today.
Great to be here.
Now, there are many places that we could've started, but-
Yeah
... I was watching actually the documentary that you did, which was fantastic, and I actually wanted to start on AGI.
Mm-hmm.
Definitions are very varying. You've been very thoughtful about what it means to you.
Mm.
And so I wanted to start, can you explain to me how you think about it today so we get that as a kind of ground center?
Yeah. Uh, well, we've, we've always defined... We've been very consistent how we define AGI as basically a system that exhibits all the cognitive capabilities the human mind has. And that's important because the brain is the only existence proof we have that we know of in, maybe in the universe, uh, that general intelligence is possible. So that for me is the bar for what AGI should be.
It's the worst question. How close are we? [laughs]
[laughs]
Everyone, everyone s- has different things, and it's very difficult when you have, you know, very prominent figures saying it could be as early as 2026, 2027.
Yeah. I mean, I think... Look, I've got a probability distribution around, um, the timings, but I, I would say there's a very good chance of it being within the next five years. So that's not long at all.
Is that closer than you thought? Has that changed over time?
Not really. I mean, actually, when you, when you... Uh, it's funny, um, my co-founder Shane Legg, who's chief scientist here, um, uh, when we started out DeepMind back in 2010, he used to write blog posts sort of predicting about, uh, when AGI would happen. And bearing in mind, in 2010 when we started, almost nobody was working in AI, and everyone thought AI, um, basically didn't work.