
No Priors Ep. 74 | With Google DeepMind VP of Research Oriol Vinyals

In this episode of No Priors, hosts Sarah and Elad are joined by Oriol Vinyals, VP of Research and Deep Learning Team Lead at Google DeepMind and Technical Co-lead of the Gemini project. Oriol shares insights from his career in machine learning, including leading the AlphaStar team and building competitive StarCraft agents. We talk about Google DeepMind, the formation of the Gemini project, and the integration of AI technology throughout Google products. Oriol also discusses the advancements and challenges in long-context LLMs, the reasoning capabilities of models, and the future direction of AI research and applications. The episode concludes with reflections on AGI timelines, the importance of specialized research, and advice for future generations navigating the evolving landscape of AI.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @oriolvinyalsml

Show Notes:
00:00 Introduction to Oriol Vinyals
00:55 The Gemini Project and Its Impact
02:04 AI in Google Search and Chat Models
08:29 Infinite Context Length and Its Applications
14:42 Scaling AI and Reward Functions
31:55 The Future of General Models and Specialization
38:14 Reflections on AGI and Personal Insights
43:09 Will the Next Generation Study Computer Science?
45:37 Closing Thoughts

Sarah Guo (host) · Oriol Vinyals (guest) · Elad Gil (host)
Jul 31, 2024 · 46m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Google DeepMind’s Oriol Vinyals on Gemini, AGI, and Infinite Context

Oriol Vinyals, VP of Research at Google DeepMind and Gemini co-lead, explains how Google Brain and DeepMind were unified into Google DeepMind and how the Gemini project emerged as Google’s core, multimodal foundation model. He outlines how Gemini powers products from Search and Ads to Cloud, developer tooling, and the Gemini chatbot, and why Google remains agnostic between chat-first and search-first interfaces. Vinyals highlights long and “infinite” context windows, hybrid retrieval-plus-neural architectures, and improved reasoning/reward models as the next major frontiers for LLMs. He is optimistic about AGI arriving around the 2028–2030 timeframe but argues the focus should be on practical impact, scientific progress, and how humans adapt to and collaborate with these systems.

IDEAS WORTH REMEMBERING

5 ideas

Long context windows unlock qualitatively new use cases, but product-market fit is still emerging.

Gemini’s ability to handle millions of tokens allows users to query hour-long videos or large document corpora directly, yet truly mainstream, high-value applications for extreme context length are still being discovered.

Chat and search will likely coexist, each enhanced by LLMs rather than replaced.

Vinyals views chatbots as LLM-first experiences that can call search as a tool, while traditional search will incorporate AI summaries and reasoning; different query types will naturally gravitate toward different interfaces.

Future LLM progress hinges on making reasoning more reliable, not just bigger models.

Current models can solve very hard problems yet still make trivial mistakes; improving “crisp and accurate” reasoning likely requires better search-like procedures, redundancy, and explicit reasoning steps layered on top of base models.

Reward modeling beyond games is both critical and unsolved at scale.

Unlike Go or chess, real-world tasks lack perfect, binary rewards; Vinyals expects progress from better reward models, RL with human feedback, and models that can increasingly judge and self-correct their own outputs.

Hybrid systems combining retrieval with long context models are here to stay.

While infinite context reduces the need to compress documents into single vectors, retrieval and hierarchical memory are still essential for efficiency and will likely be integrated tightly with neural models.

WORDS WORTH SAVING

5 quotes

The goal of Gemini is to create an awesome core model to power the technology that LLMs are enabling all around the world.

Oriol Vinyals

It just feels like that search experience will be tremendously enhanced by these models.

Oriol Vinyals

You can put a whole one-hour video in and just ask anything and it feels superhuman.

Oriol Vinyals

We now have very powerful general models that, from an AGI definition standpoint, start to tick many boxes.

Oriol Vinyals

I’m not sure it matters that we achieve AGI; it’s going to be a distribution of capabilities rather than a single moment of parity with humans.

Oriol Vinyals

Formation of Google DeepMind and the Gemini project
Chat-based interfaces versus traditional search and product integration
Long and “infinite” context windows and multimodal capabilities
Hybrid architectures: retrieval, hierarchical memory, and efficiency
LLM limitations: hallucinations, reasoning, and reward modeling
Specialized models versus general-purpose AGI systems
Societal and personal implications of AGI timelines and education

High quality AI-generated summary created from speaker-labeled transcript.
