The Twenty Minute VC
David Luan: Why Nvidia Will Enter the Model Space & Models Will Enter the Chip Space | E1169
At a glance
WHAT IT’S REALLY ABOUT
AI’s Next Era: Vertical Integration, Smarter Agents, and Chip Wars Ahead
- David Luan, CEO of Adept and former leader at Google Brain and OpenAI, outlines how AI has shifted from bottom‑up academic research to large, mission‑driven teams solving concrete problems with Transformers as the universal model architecture.
- He argues that model progress will continue despite talk of diminishing returns, driven first by scaling base models and now increasingly by reinforcement-style loops where models act in environments, generate their own data, and improve reasoning.
- Luan predicts a tightly concentrated layer of 5–7 frontier model providers, deep vertical integration between chips and models (with Nvidia moving up-stack and clouds moving down-stack), and a clear separation between creative chatbots and reliable work-focused agents.
- He sees the biggest long-term value not in raw models or services, but in vertically integrated agent products that can learn arbitrary enterprise workflows, while warning about regulatory capture, overhyped short‑term expectations, and underappreciated human–computer interaction challenges.
IDEAS WORTH REMEMBERING
5 ideas
AI progress has moved from curiosity-driven papers to Apollo-style, goal-driven projects.
Luan contrasts the 2012–2018 Google Brain era of bottom-up research with OpenAI’s shift to large teams focused on specific big goals (e.g., robotics, game-playing, GPT scaling), and he structures Adept similarly around solving concrete, high-impact problems rather than publishing papers.
Diminishing returns to compute are overstated; new training paradigms will soak up vast compute.
Traditional scaling delivers predictable gains each time compute doubles, and a second frontier is now emerging: giving models environments (math tools, theorem provers, notebooks) in which to explore, fail, and self-generate training data via RL-style loops. This both improves reasoning and demands even more compute.
Reasoning will be solved at the model-provider layer through environment-based training, not just more data.
Simply scaling unsupervised internet training can’t teach composition of ideas; Luan expects leading LLM providers to enhance reasoning by training models to act in rich problem-solving environments with feedback, which requires changing the models themselves rather than just fine-tuning on proprietary corpora.
Models and chips are on a collision course toward vertical integration.
Clouds need in-house chips for margin and scale advantages (e.g., Google TPUs), while chipmakers like Nvidia risk commoditization unless they move up into the model layer; Luan anticipates both sides “eating each other’s lunch” and tight coupling between hardware costs and model competitiveness.
Agents and chatbots are diverging into distinct product categories with different requirements.
Hallucinations are acceptable or even useful in creative chatbots, but intolerable for agents running real workflows (taxes, logistics); Luan predicts reliable, tool-using, goal-driven agents that operate software on your behalf will develop separately from conversational systems designed for information and companionship.
WORDS WORTH SAVING
5 quotes
The next phase of AI after Transformer was not going to be about research paper writing. It was going to be about, ‘Let's choose a major unsolved scientific problem and just try to solve it.’
— David Luan
The second way of improving model performance is just starting to be tapped now, and that's also going to absorb a boatload of compute.
— David Luan
I actually think agents and chatbots are gonna speciate and turn into two different products.
— David Luan
Every enterprise workflow is an edge case.
— David Luan (relaying a comment from Parag Agrawal)
I view open really as a way for the rest of the field to keep up with the biggest incumbents, and therefore I think it's actually pretty darn important.
— David Luan
High quality AI-generated summary created from speaker-labeled transcript.