The Twenty Minute VC

Demis Hassabis: Why LLMs Will Not Commoditize & Why We Have Not Hit Scaling Laws

Demis Hassabis is the Co-Founder & CEO of Google DeepMind, working on AGI and responsible for AI breakthroughs such as AlphaGo, the first program to beat the world champion at the game of Go, and AlphaFold, which cracked the 50-year grand challenge of protein structure prediction and was recognised with the 2024 Nobel Prize in Chemistry. Through Isomorphic Labs, he is revolutionising drug discovery; ultimately, he is trying to understand the fundamental nature of reality.

TIMESTAMPS

00:00 Intro
01:21 What Actually Counts as AGI & Where Are We Today?
02:58 What Are the Biggest Bottlenecks Holding AI Back Today?
03:48 Have We Hit the Limits of Scaling Laws?
04:40 Where Is AI Ahead of Expectations & What's Still Missing?
05:24 Why Can't AI Systems Learn Continuously Like Humans?
06:10 How Did DeepMind Go from Behind to Leading the Pack?
09:10 Are We Heading Toward Model Commoditization?
09:59 What Does the Future of Open Source Really Look Like?
11:25 What Does a Post-LLM World Look Like?
13:03 Can AI Really Fix Drug Discovery?
15:01 What Does "Good" AI Regulation Actually Look Like?
17:31 Who Should Be the Ultimate Arbiter of Truth in an AI World?
18:36 If Demis Had One Shot to Fix AI Safety, What Would He Do?
19:58 Is This Time Different for Jobs or Will History Repeat Itself?
24:06 How Do We Solve the Energy Crisis Created by AI?
25:34 Why Stay in the UK Instead of Moving to Silicon Valley?
27:38 Will Europe Ever Build a Trillion-Dollar Tech Giant?
29:20 Meeting Elon Musk for the First Time?
31:03 What Big Questions About AI Is No One Talking About?
31:42 What Does Demis Want His Legacy to Be?

Guest: Demis Hassabis · Host: Harry Stebbings
Apr 7, 2026 · 32m

CHAPTERS

  1. Demis’s AGI definition and why the human mind is the benchmark

    Demis sets a consistent bar for AGI: a system that can exhibit the full range of human cognitive capabilities. He explains why the brain is the only “existence proof” we have for general intelligence and therefore the right reference point for progress.

  2. AGI timelines: probability, compute/algorithm extrapolations, and “within five years”

    Demis discusses timing as a probability distribution, but indicates a strong chance AGI arrives within ~5 years. He notes DeepMind’s early (2010-era) extrapolations of compute and algorithmic progress that implied roughly a 20-year path, which he believes remains on track.

  3. What’s holding AI back: compute as scaling fuel and the “experiment workbench”

    Compute is described as the primary bottleneck, not only to scale larger models but also to run enough serious experiments to validate new ideas. Demis emphasizes that research velocity depends on having ample compute to test algorithmic innovations at realistic scale.

  4. Have scaling laws plateaued? Slower jumps, but still strong returns

    Demis rejects the idea that scaling has “hit a wall,” arguing the situation is more nuanced. While early generations saw enormous near-doubling leaps that naturally slowed, he says frontier labs still see substantial (though diminishing) returns from scaling.

  5. Where AI is ahead—and the missing pieces: continual learning, memory, planning, consistency

    He highlights areas where progress surprised him positively (video models and interactive world models), while outlining what remains missing for robust general intelligence. The gaps include continual learning after deployment, better memory architectures, long-horizon planning, and more consistent (less “jagged”) behavior.

  6. Why continual learning is hard: consolidation analogies from the brain

    Demis explains that labs haven’t yet solved how to integrate new learning into systems trained for months without degrading prior capabilities. He points to brain mechanisms like replay, reinforcement, and sleep-like consolidation as potential inspiration for integrating new memories into existing knowledge.

  7. DeepMind’s resurgence: organizational consolidation, focus, and unified compute

    Demis attributes recent acceleration to organizational changes that unified talent and resources across Google Brain/Research/DeepMind. He argues the company’s research bench produced many foundational breakthroughs and that combining compute/model efforts enabled building the largest frontier systems with startup-like focus.

  8. Why models won’t fully commoditize: algorithmic invention becomes the edge

Demis argues that as existing ideas get exhausted, the advantage shifts toward labs that can invent new algorithmic breakthroughs. He expects the leading labs to pull away, because better tools (e.g., for coding and math) accelerate building the next generation, while scaling alone becomes less decisive.

  9. Open source’s role: frontier is ahead, but small/edge models matter (Gemma)

Demis supports open science while predicting that open models will typically lag the frontier by a matter of months as ideas are reimplemented. He describes a strategy of releasing strong small models (Gemma) for developers, academics, startups, and edge use cases, while continuing openness in applied science domains.

  10. A “post-LLM” world: foundation models stay central, but may need world models on top

    Demis disagrees with claims that LLM-style foundation models will be replaced, emphasizing their demonstrated power and continued scaling returns. The real question, he says, is whether the foundation model is just a component in a larger AGI system (e.g., with world models) rather than the entire solution.

  11. AI for science and medicine: drug discovery progress and the clinical-trials bottleneck

    Demis paints AGI as a transformative tool for science, then focuses on drug discovery via Isomorphic Labs. He expects major progress in drug design within 5–10 years, but notes clinical trials remain slow; AI may help via simulation and patient stratification, with regulatory trust building after multiple AI-designed drugs succeed.

  12. AI safety and regulation: misuse, agentic autonomy, and global minimum standards

    Demis distinguishes two main risks: malicious use by bad actors and loss of control as systems become more autonomous and agentic. He argues for international minimum standards, benchmarks for dangerous traits (like deception), and certification so companies and consumers can trust audited safeguards.

  13. Who verifies truth and safety: government oversight, safety institutes, and an “IAEA-like” body

Demis proposes that governments should be the ultimate authority, supported by technically strong AI safety institutes that can audit models against agreed benchmarks. With a "magic wand," he advocates an international agency akin to the International Atomic Energy Agency (IAEA), plus constraints to reduce vulnerabilities (e.g., avoiding non-human-readable model outputs).

  14. Jobs, inequality, and the pace of change: “10x the Industrial Revolution at 10x speed”

    Demis expects major disruption: old jobs will disappear, but new ones will emerge—though he cautions this wave may be far larger than prior tech shifts. He argues AI is overhyped in the next year yet underappreciated over a decade, and he explores ways to share gains (pension/sovereign funds, redistribution, infrastructure investment).

  15. Energy and geopolitics of innovation: grids, fusion, the UK/Europe, and Demis’s personal closing themes

Demis argues AI can offset its own energy costs by optimizing infrastructure (notably grid efficiency) and accelerating breakthroughs like fusion, batteries, and superconductors. He explains why he built in the UK (talent density, fewer distractions than Silicon Valley) and what Europe needs to produce trillion-dollar firms (growth capital and pension-fund flexibility). He also shares the story of an early meeting with Elon Musk, and closes on under-discussed philosophical questions and the legacy he wants: advancing science and curing disease.
