No Priors Ep. 74 | With Google DeepMind VP of Research Oriol Vinyals
At a glance
WHAT IT’S REALLY ABOUT
Google DeepMind’s Oriol Vinyals on Gemini, AGI, and Infinite Context
- Oriol Vinyals, VP of Research at Google DeepMind and Gemini co-lead, explains how Google Brain and DeepMind were unified into Google DeepMind and how the Gemini project emerged as Google’s core, multimodal foundation model. He outlines how Gemini powers products from Search and Ads to Cloud, developer tooling, and the Gemini chatbot, and why Google remains agnostic between chat-first and search-first interfaces. Vinyals highlights long and “infinite” context windows, hybrid retrieval-plus-neural architectures, and improved reasoning/reward models as the next major frontiers for LLMs. He is optimistic about AGI arriving around the 2028–2030 timeframe but argues the focus should be on practical impact, scientific progress, and how humans adapt to and collaborate with these systems.
IDEAS WORTH REMEMBERING
5 ideas

Long context windows unlock qualitatively new use cases, but product-market fit is still emerging.
Gemini’s ability to handle millions of tokens allows users to query hour-long videos or large document corpora directly, yet truly mainstream, high-value applications for extreme context length are still being discovered.
Chat and search will likely coexist, each enhanced by LLMs rather than replaced.
Vinyals views chatbots as LLM-first experiences that can call search as a tool, while traditional search will incorporate AI summaries and reasoning; different query types will naturally gravitate toward different interfaces.
Future LLM progress hinges on making reasoning more reliable, not just bigger models.
Current models can solve very hard problems yet still make trivial mistakes; improving "crisp and accurate" reasoning likely requires better search-like procedures, redundancy across sampled answers, and explicit reasoning steps layered on top of base models.
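One concrete form of the redundancy Vinyals alludes to is consensus over multiple sampled reasoning paths: ask the model the same question several times and keep the majority answer, so that a sporadic trivial mistake is outvoted. A minimal sketch, with a hard-coded list of answers standing in for repeated LLM calls (the sampler and the answers are illustrative assumptions, not from the episode):

```python
from collections import Counter

def majority_vote(samples):
    """Return the most common final answer among sampled model outputs.

    Redundancy: several independently sampled reasoning paths are
    reduced to their consensus answer, filtering out one-off slips.
    """
    counts = Counter(samples)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical stand-in for repeated LLM calls: most reasoning
# paths reach the same answer, one makes a trivial mistake.
sampled_answers = ["42", "42", "41", "42", "42"]
print(majority_vote(sampled_answers))  # → 42
```

The same idea scales to weighting votes by a verifier or reward-model score instead of counting them equally.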
Reward modeling beyond games is both critical and unsolved at scale.
Unlike Go or chess, real-world tasks lack perfect, binary rewards; Vinyals expects progress from better reward models, RL with human feedback, and models that can increasingly judge and self-correct their own outputs.
Hybrid systems combining retrieval with long context models are here to stay.
While infinite context reduces the need to compress documents into single vectors, retrieval and hierarchical memory are still essential for efficiency and will likely be integrated tightly with neural models.
WORDS WORTH SAVING
5 quotes

The goal of Gemini is to create an awesome core model to power the technology that LLMs are enabling all around the world.
— Oriol Vinyals
It just feels like that search experience will be tremendously enhanced by these models.
— Oriol Vinyals
You can put a whole one-hour video in and just ask anything and it feels superhuman.
— Oriol Vinyals
We now have very powerful general models that, from an AGI definition standpoint, start to tick many boxes.
— Oriol Vinyals
I’m not sure it matters that we achieve AGI; it’s going to be a distribution of capabilities rather than a single moment of parity with humans.
— Oriol Vinyals
High quality AI-generated summary created from speaker-labeled transcript.