All-In Podcast

Jensen Huang: Why cheap chips still produce expensive tokens

Via Groq, Nvidia routes each inference step to the right chip. Vera Rubin targets agentic racks; Huang's key metric is token cost, not the datacenter price tag.

Jason Calacanis (host) · Jensen Huang (guest) · David Friedberg (host) · Chamath Palihapitiya (host)
Mar 19, 2026 · 1h 6m · Watch on YouTube ↗

CHAPTERS

  1. Jensen Huang at GTC: Nvidia’s shift from GPU maker to “AI factory” company

    The hosts welcome Jensen Huang and frame the conversation around Nvidia’s rapid expansion and influence across the AI ecosystem. Jensen sets the tone: Nvidia’s roadmap is laid out years ahead at GTC, and the company now thinks in terms of end-to-end AI computing factories rather than standalone GPUs.

  2. Groq acquisition and “disaggregated inference” as the driver of the inference explosion

    Jensen explains why inference has become the most complex computing problem today and why disaggregating the inference pipeline matters. He describes the “Dynamo” operating system concept for AI factories and how adding Groq enables placing the right workload on the right silicon.

  3. From LLMs to agents: why data centers now need heterogeneous racks (Vera Rubin era)

    The discussion moves from plain LLM serving to agentic workloads that hammer memory, storage, tools, and multi-model orchestration. Jensen argues this diversity is why next-gen systems like Vera Rubin are designed for heterogeneous compute, networking, and storage processing at scale.

  4. Three-computer architecture: training, simulation/Omniverse, and edge robotics

    Jensen outlines a simple mental model: AI requires separate systems for training, physics-based simulation/evaluation, and edge deployment. He highlights Omniverse as the “virtual gym” for robots and stresses the breadth of edge form factors—from cars to teddy bears to telecom base stations.

  5. Token economics vs. factory sticker price: why expensive factories can yield cheaper tokens

    Responding to concerns about Nvidia systems costing more than alternatives, Jensen argues price comparisons confuse capex with unit economics. The key metric is token cost, and Nvidia claims system-level efficiency and throughput can make a bigger factory the cheapest way to produce inference.

  6. How Jensen makes decisions: choosing problems that are “insanely hard” and uniquely Nvidia

    Jensen describes his CEO framework: pursue only problems that are extremely difficult, not yet solved, and aligned with Nvidia’s special capabilities. He emphasizes that meaningful invention requires enduring “pain and suffering,” and that difficulty is a competitive moat.

  7. Physical AI and digital biology: Nvidia’s long-tail bets hitting inflection points

    Jensen argues physical AI is the tech industry’s first real shot at a $50T market and says Nvidia’s decade-long investment is now compounding into a multi-billion-dollar business. He also predicts a near-term “ChatGPT moment” for digital biology as AI begins to represent genes, proteins, and cells.

  8. OpenClaw and the rise of personal AI on the desktop: agents as the new operating system

    The panel discusses hobbyist momentum and local workstations, then Jensen frames OpenClaw as a cultural and architectural breakthrough that popularized useful agents. He describes agent systems as a reinvention of computing—memory, scheduling, I/O, and “skills” forming a new OS-like blueprint that can run everywhere.

  9. AI’s PR crisis and policymaking: countering doomer narratives and Anthropic comms lessons

    Jensen urges proactive education of policymakers: AI is software, not a biological or conscious entity, and the industry understands its systems better than its public messaging sometimes suggests. He praises Anthropic’s technical rigor and safety culture but warns that fear-based narratives can backfire, slowing adoption and weakening national competitiveness.

  10. Enterprise ROI and the compute curve: generative → reasoning → agentic as 10,000× demand

    Jensen and the hosts argue that compute demand has surged dramatically as AI moved from token generation to reasoning and then to agents that do real work. Jensen claims agentic systems will drive willingness to pay, creating the conditions for a massive inference buildout and sustained demand growth.

  11. Tokens as labor leverage: budgeting AI usage like CAD tools for elite engineers

    Jensen proposes a striking internal benchmark: if a highly paid engineer isn’t consuming substantial token spend, the company should worry. The panel explores how agents remove perceived constraints (‘too hard,’ ‘too slow,’ ‘need more people’) and shift engineering toward specifying, evaluating, and orchestrating AI collaborators.

  12. Open source, decentralized training, and global diffusion: why both closed and open models matter

    The conversation covers distributed/community training examples and the strategic role of open models. Jensen argues models are a technology layer, not merely a service: proprietary frontier models and open-weight models will coexist, with open models enabling domain control and specialization across industries.

  13. Geopolitics and supply chain resilience: China access, Taiwan manufacturing, Iran/Israel families

    Jensen addresses export controls and argues US national security benefits from widespread adoption of the American tech stack globally. He discusses efforts to reindustrialize in the US with Taiwanese partners, diversify manufacturing beyond Taiwan, and manage regional risks—from Middle East instability to helium supply concerns.

  14. Autonomy, robotics, space and healthcare: platform strategy, timelines, and what comes next

    Jensen outlines Nvidia’s autonomy strategy: enable every automaker with a full-stack platform (training, simulation, in-car compute) without building cars themselves, while competing with customers who also build chips. He then covers data centers in space, healthcare agentics and surgical robotics, and predicts humanoid robotics will become real products within roughly 3–5 years, with China’s hardware supply chain strength as a key factor.

  15. AI revenue potential, moats, and advice for the next generation in an agentic world

    Jensen predicts model-company revenues could exceed even aggressive forecasts because enterprise software firms may become resellers of model tokens, expanding distribution. He argues application-layer moats come from deep vertical specialization and customer-connected flywheels, then closes with guidance for young people: master AI tools, build deep math/science foundations, and treat language as the new programming interface—while expecting jobs to transform rather than simply vanish.
