The Twenty Minute VC

David Luan: Why Nvidia Will Enter the Model Space & Models Will Enter the Chip Space | E1169

David Luan is the CEO and Co-Founder at Adept, a company building AI agents for knowledge workers. To date, David has raised over $400M for the company from Greylock, Andrej Karpathy, Scott Belsky, Nvidia, ServiceNow, and Workday. Previously, he was VP of Engineering at OpenAI, overseeing research on language, supercomputing, RL, safety, and policy; his teams shipped GPT, CLIP, and DALL-E. Before that, he led Google’s giant-model efforts as a co-lead of Google Brain.

-----------------------------------------------

Timestamps:

(00:00) Intro
(01:03) Lessons from Google Brain & Their Influence on Building Adept
(05:06) Why It Took 6 Years for ChatGPT to Emerge After Transformers
(06:49) Takeaways from OpenAI
(09:57) The Key Bottleneck in AI Model Performance
(16:06) Understanding Minimum Viable Capability Levels & Model Scale
(20:17) The Future of the Foundational Model Layer
(33:26) Adept’s Focus on Vertical Integration for AI Agents
(35:53) The Distinction Between RPA & Agents
(40:24) The Co-pilot Approach: Incumbent Strategy or Innovation Catalyst
(42:46) Enterprise AI Adoption Budgets: Experimental vs. Core
(46:53) AI Services Providers vs. Actual Providers
(49:32) Open vs. Closed AI Systems for Crucial Decision Making
(54:18) Quick-Fire Round

-----------------------------------------------

In Today’s Episode with David Luan We Discuss:

1. The Biggest Lessons from OpenAI and Google Brain:

What did OpenAI realise that no one else did that allowed them to steal the show with ChatGPT?
Why did it take six years after the introduction of transformers for ChatGPT to be released?
What are one or two of David’s biggest lessons from his time leading teams at OpenAI and Google Brain?

2. Foundation Models: The Hard Truths:

Why does David strongly disagree that foundation model performance has reached diminishing returns?
Why does David believe there will only be 5–7 foundation model providers? What will separate the winners from the rest?
Does David believe we are seeing the commoditization of foundation models?
How and when will we solve the core problems of reasoning and memory for foundation models?

3. Bundling vs Unbundling: Why Chips Are Coming for Models:

Why does David believe that Jensen and Nvidia have to move into the model layer to sustain their competitive advantage?
Why does David believe that the largest model providers have to make their own chips to make their business model sustainable?
What does David believe is the future of the chip and infrastructure layer?

4. The Application Layer: Why Everyone Will Have an Agent:

What is the difference between traditional RPA and agents? Why are agents a 1,000x larger business than RPA?
In a world where everyone has an agent, what does the future of work look like?
Why does David disagree with the notion of “selling the work” and not the tool?
What is the business model for the next generation of application layer AI companies?

-----------------------------------------------

Subscribe on Spotify: https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465
Follow Harry Stebbings on Twitter: https://twitter.com/HarryStebbings
Follow David Luan on Twitter: https://twitter.com/jluan
Follow 20VC on Instagram: https://www.instagram.com/20vchq
Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok
Visit our Website: https://www.20vc.com
Subscribe to our Newsletter: https://www.thetwentyminutevc.com/contact

#20vc #harrystebbings #davidluan #adeptai #venturecapital #ai #openai #nvidia #deepmind #chatgpt #apple

David Luan (guest) · Harry Stebbings (host)
Jun 23, 2024 · 58m

At a glance

WHAT IT’S REALLY ABOUT

AI’s Next Era: Vertical Integration, Smarter Agents, and Chip Wars Ahead

  1. David Luan, CEO of Adept and former leader at Google Brain and OpenAI, outlines how AI has shifted from bottom‑up academic research to large, mission‑driven teams solving concrete problems with Transformers as the universal model architecture.
  2. He argues that model progress will continue despite talk of diminishing returns, driven first by scaling base models and now increasingly by reinforcement-style loops where models act in environments, generate their own data, and improve reasoning.
  3. Luan predicts a tightly concentrated layer of 5–7 frontier model providers, deep vertical integration between chips and models (with Nvidia moving up-stack and clouds moving down-stack), and a clear separation between creative chatbots and reliable work-focused agents.
  4. He sees the biggest long-term value not in raw models or services, but in vertically integrated agent products that can learn arbitrary enterprise workflows, while warning about regulatory capture, overhyped short‑term expectations, and underappreciated human–computer interaction challenges.

IDEAS WORTH REMEMBERING


AI progress has moved from curiosity-driven papers to Apollo-style, goal-driven projects.

Luan contrasts the 2012–2018 Google Brain era of bottom-up research with OpenAI’s shift to large teams focused on specific big goals (e.g., robotics, game-playing, GPT scaling), and he structures Adept similarly around solving concrete, high-impact problems rather than publishing papers.

Diminishing returns to compute are overstated; new training paradigms will soak up vast compute.

Traditional scaling shows predictable gains when you double compute, and now a second frontier is emerging: giving models environments (math tools, theorem provers, notebooks) to explore, fail, and self-generate training data via RL-style loops, which both improves reasoning and demands even more compute.

Reasoning will be solved at the model-provider layer through environment-based training, not just more data.

Simply scaling unsupervised internet training can’t teach composition of ideas; Luan expects leading LLM providers to enhance reasoning by training models to act in rich problem-solving environments with feedback, which requires changing the models themselves rather than just fine-tuning on proprietary corpora.

Models and chips are on a collision course toward vertical integration.

Clouds need in-house chips for margin and scale advantages (e.g., Google TPUs), while chipmakers like Nvidia risk commoditization unless they move up into the model layer; Luan anticipates both sides “eating each other’s lunch” and tight coupling between hardware costs and model competitiveness.

Agents and chatbots are diverging into distinct product categories with different requirements.

Hallucinations are acceptable or even useful in creative chatbots, but intolerable for agents running real workflows (taxes, logistics); Luan predicts reliable, tool-using, goal-driven agents that operate software on your behalf will develop separately from conversational systems designed for information and companionship.

WORDS WORTH SAVING


The next phase of AI after Transformer was not going to be about research paper writing. It was going to be about, ‘Let's choose a major unsolved scientific problem and just try to solve it.’

David Luan

The second way of improving model performance is just starting to be tapped now, and that's also going to absorb a boatload of compute.

David Luan

I actually think agents and chatbots are gonna speciate and turn into two different products.

David Luan

Every enterprise workflow is an edge case.

David Luan (relaying a comment from Parag Agrawal)

I view open really as a way for the rest of the field to keep up with the biggest incumbents, and therefore I think it's actually pretty darn important.

David Luan

Historical phases of AI progress: from bottom-up research to mission-driven scaling
Transformers, scaling laws, data vs compute, and new paths to model improvement
Reasoning, memory, and the emerging split between chatbots and agents
Economic structure of the model layer: concentration, commoditization, and business models
Vertical integration between chips and models (Nvidia, TPUs, Apple, cloud providers)
Enterprise adoption dynamics, RPA vs agents, and workflow automation
Regulation, open vs closed models, and human–computer interaction as a missing piece

High quality AI-generated summary created from speaker-labeled transcript.
