Acquired

Nvidia Part III: The Dawn of the AI Era (2022-2023) (Audio)

It’s a(nother) new era for Nvidia. We thought we’d closed the Acquired book on Nvidia back in April 2022. The story was all wrapped up: Jensen & crew had set out on an amazing journey to accelerate the world’s computing workloads. Along the way they’d discovered a wondrous opportunity (machine-learning-powered social media feed recommendations). They forged incredible Power in the CUDA platform, and used it to triumph over seemingly insurmountable adversity: the stock market penalty box. But it turned out that was only the precursor to an even wilder journey. Over the past 18 months Nvidia has weathered one of the steepest stock crashes in history ($500B+ of market cap wiped away peak-to-trough!). And it has of course also experienced an even more fantastical rise, becoming the platform powering the emergence of perhaps a new form of intelligence itself, and in the process becoming a trillion-dollar company. Today we tell another chapter in the amazing Nvidia saga: the dawn of the AI era. Tune in!

Sponsors: Thanks to our fantastic partners, any member of the Acquired community can now get:

  - Scalable, clean, and low-cost cloud AI compute from Crusoe, and our recent ACQ2 interview with CEO Chase Lochmiller: https://bit.ly/acquiredcrusoe and https://bit.ly/CrusoeACQ2
  - Your product growth powered by Statsig: https://bit.ly/statsigacquired
  - Free access to our episode research on Blinkist, plus our favorite books on Ben & David’s Bookshelf: https://bit.ly/BlinkistNvidia and https://bit.ly/BlinkistBookshelf

More Acquired!:

  - Get email updates with hints on the next episode and follow-ups from recent episodes: https://www.acquired.fm/email
  - Join the Slack: http://acquired.fm/slack
  - Subscribe to ACQ2: https://pod.link/acquiredlp
  - Become an LP and support the show; help us pick episodes, join Zoom calls, and more: https://acquired.fm/lp
  - ACQ hats are back in stock in the ACQ Merch Store! https://www.acquired.fm/store

Links:

  - Asianometry on AI Hardware: https://youtu.be/5tmGKTNW8DQ?si=m4PJpgnrERddk99E
  - Episode sources: https://docs.google.com/document/d/1d5vDY0pFcLGeZEg9t0vNMVnMvFTwMwcv9QhQHca4Aus/edit?usp=sharing

Carve Outs:

  - Alias: https://www.imdb.com/title/tt0285333/
  - Moana: https://movies.disney.com/moana

Note: Acquired hosts and guests may hold assets discussed in this episode. This podcast is not investment advice and is intended for informational and entertainment purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.

© Copyright ACQ, LLC

Ben Gilbert (host) · David Rosenthal (host)
Sep 5, 2023 · 2h 54m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

How Nvidia’s data-center platform powered the generative AI boom

  1. Ben Gilbert and David Rosenthal revisit Nvidia because the 2022–2023 generative AI breakout (ChatGPT and LLMs) created a step-function shift in data-center demand for GPU compute.
  2. They trace the technical lineage from AlexNet (2012) to transformers (2017), then to OpenAI’s pivot to for-profit funding to afford massive training runs—culminating in ChatGPT’s “Netscape/iPhone moment.”
  3. The episode argues Nvidia was uniquely prepared: CUDA’s developer ecosystem, data-center “full stack” systems (DGX, Grace CPU, Hopper GPUs), and the Mellanox/InfiniBand networking acquisition enabled multi-GPU, multi-rack training at scale.
  4. They close with business analysis: Nvidia’s platform-like moats (software, switching costs, constrained advanced packaging capacity) and the key uncertainty—whether AI value creation is durable enough to justify the current capital spending wave.

IDEAS WORTH REMEMBERING

5 ideas

Transformers made language modeling massively parallel—and GPUs made it economically feasible.

Attention lets models draw on wide context, but it is computationally heavy: its cost grows quadratically with context length. Because attention operations parallelize well, however, GPUs made large-scale training practical in a way that earlier, inherently sequential RNN approaches could not match.
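The quadratic-but-parallel tradeoff described above can be seen in a minimal NumPy sketch of scaled dot-product attention (the names and toy dimensions here are illustrative, not from the episode): the score matrix is n × n for context length n, which is where the quadratic cost comes from, yet every entry is an independent dot product, which is why the whole computation maps cleanly onto GPU matrix-multiply hardware.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention.

    Q, K, V: arrays of shape (n, d) for context length n.
    The (n, n) score matrix is the quadratic-cost term, but each
    entry is an independent dot product, so it parallelizes well.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # shape (n, n): quadratic in n
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # shape (n, d)

n, d = 8, 4                                          # toy context length and head dim
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, n, d))
out = attention(Q, K, V)
print(out.shape)                                     # (8, 4); the score matrix was (8, 8)
```

Doubling the context length quadruples the size of the score matrix, which is why long-context models are so compute-hungry; but unlike an RNN, no step has to wait for the previous token’s result.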

OpenAI’s success depended as much on capital structure and cloud access as on research.

To compete with Google’s resources, OpenAI created a capped-profit entity (2019) and partnered with Microsoft for funding and exclusive cloud compute—enabling GPT-3 through ChatGPT and GPT-4.

Nvidia’s advantage is “platform + system,” not just a fast chip.

CUDA (compiler/runtime/libraries) plus integrated hardware (DGX, NVLink, Grace CPU) and networking (InfiniBand) creates an end-to-end solution that’s hard to replicate with piecemeal components.

The bottleneck in modern AI shifted from compute to memory and interconnect.

LLMs require huge on-package HBM and extremely fast chip-to-chip and rack-to-rack communication. This makes advanced packaging (TSMC CoWoS) and networking (Mellanox) central to performance and supply.

Mellanox/InfiniBand became a strategic masterstroke once “the data center is the computer.”

Training frontier models requires treating hundreds/thousands of GPUs as one machine. InfiniBand’s bandwidth/latency outclasses Ethernet for these clusters, making Nvidia’s 2020 acquisition disproportionately valuable in the LLM era.

WORDS WORTH SAVING

5 quotes

In our April 2022 episodes, we never once said the word 'generative.' That is how fast things have changed.

Ben Gilbert

In November of 2022, AI definitely had its Netscape moment… it may have even been its iPhone moment.

Ben Gilbert

The more accurately an LLM predicts that next word… ipso facto, the greater its understanding.

David Rosenthal (summarizing Ilya Sutskever)

The data center is the computer.

Ben Gilbert (describing Jensen Huang’s framing)

Starting price for DGX Cloud is $37,000 a month… that’s three-month payback on the CapEx.

David Rosenthal

TOPICS COVERED

  - AlexNet and the 2012 AI “Big Bang”
  - OpenAI’s founding motivation and Microsoft partnership
  - Transformers and the attention mechanism (parallelism, O(n^2))
  - Von Neumann bottleneck and GPU acceleration
  - Scaling laws: parameters, data, and emergent capabilities
  - Nvidia’s data-center stack: Hopper, Grace, DGX, NVLink
  - Mellanox/InfiniBand and AI cluster networking
  - TSMC CoWoS + HBM packaging constraints
  - DGX Cloud and shifting customer relationships
  - Nvidia moats: CUDA ecosystem, switching costs, cornered capacity
  - 2023 earnings explosion and the reframed “$1T TAM”
  - Bull vs. bear: durability of AI demand and competitive responses

High-quality AI-generated summary created from a speaker-labeled transcript.
