
Nvidia Part III: The Dawn of the AI Era (2022-2023) (Audio)

It’s a(nother) new era for Nvidia. We thought we’d closed the Acquired book on Nvidia back in April 2022. The story was all wrapped up: Jensen & crew had set out on an amazing journey to accelerate the world’s computing workloads. Along the way they’d discovered a wondrous opportunity (machine learning powered social media feed recommendations). They forged incredible Power in the CUDA platform, and used it to triumph over seemingly insurmountable adversity — the stock market penalty-box. But, it turned out that was only the precursor to an even wilder journey. Over the past 18 months Nvidia has weathered one of the steepest stock crashes in history ($500B+ market cap wiped away peak-to-trough!). And, it has of course also experienced an even more fantastical rise — becoming the platform that’s powering the emergence of perhaps a new form of intelligence itself… and in the process becoming a trillion-dollar company. Today we tell another chapter in the amazing Nvidia saga: the dawn of the AI era. Tune in!

Sponsors:

Thanks to our fantastic partners, any member of the Acquired community can now get:

  - Scalable, clean and low-cost cloud AI compute from Crusoe, and listen to our recent ACQ2 interview with CEO Chase Lochmiller: https://bit.ly/acquiredcrusoe / https://bit.ly/CrusoeACQ2
  - Your product growth powered by Statsig: https://bit.ly/statsigacquired
  - Free access to our episode research on Blinkist, plus our favorite books on Ben & David’s Bookshelf: https://bit.ly/BlinkistNvidia / https://bit.ly/BlinkistBookshelf

More Acquired!:

  - Get email updates with hints on next episode and follow-ups from recent episodes: https://www.acquired.fm/email
  - Join the Slack: http://acquired.fm/slack
  - Subscribe to ACQ2: https://pod.link/acquiredlp
  - Become an LP and support the show. Help us pick episodes, Zoom calls and more: https://acquired.fm/lp
  - ACQ hats are back in stock in the ACQ Merch Store! https://www.acquired.fm/store

Links:

  - Asianometry on AI Hardware: https://youtu.be/5tmGKTNW8DQ?si=m4PJpgnrERddk99E
  - Episode sources: https://docs.google.com/document/d/1d5vDY0pFcLGeZEg9t0vNMVnMvFTwMwcv9QhQHca4Aus/edit?usp=sharing

Carve Outs:

  - Alias: https://www.imdb.com/title/tt0285333/
  - Moana: https://movies.disney.com/moana

Note: Acquired hosts and guests may hold assets discussed in this episode. This podcast is not investment advice, and is intended for informational and entertainment purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.

© Copyright ACQ, LLC

Hosts: Ben Gilbert, David Rosenthal
Sep 6, 2023 · 2h 54m · Watch on YouTube ↗

CHAPTERS

  1. Why Nvidia needs a Part III: AI’s “Netscape/iPhone moment” (2022-2023)

    Ben and David set the context for why the last 18 months required a new Nvidia episode: the sudden mainstream breakthrough of large language models and generative AI. They contrast the bleak 2022 macro backdrop with the explosive adoption of ChatGPT and the broader AI race it triggered.

  2. The trillion-dollar TAM debate: what actually had to be true (and what changed)

    They revisit the 2021 Nvidia slide claiming a $1T TAM and how it seemed to require robotics/autonomy/Omniverse to materialize quickly. Then they connect an offhand “what if the digital world gets a new foundational layer?” comment to what generative AI ended up becoming.

  3. AlexNet (2012): the AI Big Bang powered by consumer Nvidia GPUs

    They rewind to AlexNet’s ImageNet breakthrough and explain why it mattered: neural networks had been known for decades but were too computationally expensive to train on CPUs. Training on two consumer GeForce GTX 580s using CUDA demonstrated the GPU-parallelism unlock that would reshape ML and Nvidia (see the short CUDA sketch after the chapter list).

  4. The researcher pipeline: Google/Facebook AI duopoly forms (and why it alarmed people)

    They track how leading AI talent consolidated inside Google and Facebook after AlexNet, producing major business wins like the modern YouTube feed and ad targeting. This concentration created strategic concerns for competitors, startups, and society—setting up the OpenAI origin story.

  5. OpenAI’s founding dinner (2015): the one defector who mattered—Sutskever

    Elon Musk and Sam Altman convene top researchers to break the Google/Facebook lock on AI talent. Nearly everyone declines, but Ilya Sutskever leaves Google to co-found OpenAI, a pivotal decision that later connects directly to the LLM boom.

  6. From early language-model vision to the transformer breakthrough (2015-2017)

    They highlight how language models were envisioned before they were feasible, including Andrej Karpathy’s early articulation of chatbot-like systems trained on internet text. The chapter culminates in Google Brain’s 2017 “Attention Is All You Need” transformer paper and why attention changes everything.

  7. OpenAI’s pivot: expensive scaling, for-profit structure, and Microsoft partnership (2018-2023)

    OpenAI lags initially as Google accelerates, then pivots decisively to transformers. Compute costs force a structural change: OpenAI creates a capped-profit entity, raises capital, and partners deeply with Microsoft—leading to GPT-3, Copilot, ChatGPT, and GPT-4.

  8. Why generative AI is a “perfect storm” for Nvidia’s data-center strategy

    They argue generative AI’s rise coincided with Nvidia’s multi-year push to re-architect the data center around GPU-accelerated computing. The opportunity (LLMs) met Nvidia’s preparation: building a full-stack platform to make the “data center as the computer” real.

  9. The hardware fundamentals: von Neumann bottleneck and why memory/networking dominate now

    Ben explains classic CPU architecture and the von Neumann bottleneck: too many cycles are spent moving data to and from memory rather than computing. Modern AI shifts the constraint to on-chip memory capacity and interconnect speed, making large multi-GPU, multi-rack “single computers” essential (see the roofline sketch after the chapter list).

  10. Nvidia’s three-legged data-center stool: Mellanox, Grace CPU, and Hopper + CoWoS/HBM

    They detail the core strategic build: Mellanox InfiniBand networking, Nvidia’s Grace CPU for orchestration, and the Hopper GPU architecture optimized for AI. Advanced packaging (CoWoS/2.5D) plus high-bandwidth memory becomes a key supply constraint and competitive wedge.

  11. How Nvidia sells the AI era: chips, DGX boxes, SuperPOD “AI walls,” and DGX Cloud

    They break down Nvidia’s go-to-market from selling raw H100s to hyperscalers, to turnkey DGX systems for enterprises, to massive SuperPOD installations. Nvidia also introduces DGX Cloud—DGX-as-a-service hosted inside other clouds—capturing margin and direct customer relationships.

  12. 2023 financial shockwave: the data-center business explodes and the TAM reframes

    They recount Nvidia’s historic 2023 earnings acceleration, with data center revenue jumping from roughly $4B to $10B+ in a single quarter. Jensen’s TAM story evolves from speculative industry capture to a grounded “$1T installed data-center base + $250B annual spend” narrative.

  13. CUDA as the enduring moat: Nvidia as a platform (not “just hardware”)

    They reintroduce CUDA as the foundational software stack—compiler, runtime, tools, language, and libraries—powering most AI workloads. CUDA’s rapid developer growth and ecosystem depth position Nvidia more like Microsoft/IBM than Cisco/Intel, underpinning margins and lock-in.

  14. Durable powers, plus bull vs. bear: competition, hype risk, and the path to unseating Nvidia

    They apply the “7 Powers” framework to Nvidia: scale economies (CUDA investment), switching costs (code + data-center architecture), cornered resources (TSMC CoWoS/HBM capacity), and strong ecosystem effects. The debate ends with major bear risks (competition, overhype, inference commoditization, China) versus bull cases (accelerated computing everywhere, AI value compounding, Nvidia’s speed/culture, data-center replatforming).
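
To make chapter 3’s GPU-parallelism point concrete, here is a minimal CUDA sketch of our own (illustrative, not material from the episode): a SAXPY kernel in which every array element gets its own GPU thread. It is the same data-parallel pattern, spread across thousands of cores at once, that made training networks like AlexNet feasible on two consumer GeForce GTX 580s.

    // Minimal CUDA sketch (illustrative only): SAXPY, the "hello world" of GPU computing.
    // A CPU loop would process the N elements one after another; on the GPU, each
    // element is handled by its own thread running in parallel.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;  // 1M elements
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Launch enough 256-thread blocks to cover all n elements.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);  // expect 4.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

The kernel, the <<<...>>> launch syntax, and the runtime calls are all pieces of the CUDA platform the hosts return to in chapter 13: the software layer, not the silicon alone, is what developers actually write against.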
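
And to illustrate the von Neumann bottleneck from chapter 9, a back-of-the-envelope roofline calculation (the hardware numbers below are assumptions chosen for illustration, not figures quoted in the episode): SAXPY performs only 2 floating-point operations per element while moving 12 bytes, so its throughput is capped by memory bandwidth long before it reaches the chip’s peak compute.

    // Roofline sketch for a memory-bound kernel like SAXPY (plain C++, illustrative).
    // The hardware figures are assumptions, not vendor specifications.
    #include <algorithm>
    #include <cstdio>

    int main() {
        const double peak_flops    = 60e12;  // assumed peak compute: 60 TFLOP/s
        const double mem_bandwidth = 2e12;   // assumed memory bandwidth: 2 TB/s

        // Per element, SAXPY does 2 FLOPs (multiply + add) and moves 12 bytes
        // (read x, read y, write y), so its arithmetic intensity is tiny.
        const double intensity = 2.0 / 12.0;  // FLOP per byte

        // Attainable throughput is the lesser of raw compute and what the memory
        // system can feed the cores: the von Neumann bottleneck in roofline form.
        const double attainable = std::min(peak_flops, intensity * mem_bandwidth);

        printf("Arithmetic intensity: %.3f FLOP/byte\n", intensity);
        printf("Attainable: %.2f TFLOP/s out of a %.0f TFLOP/s peak\n",
               attainable / 1e12, peak_flops / 1e12);
        return 0;
    }

With these assumed numbers the kernel can sustain only about 0.33 TFLOP/s of a 60 TFLOP/s peak, which is why chapters 9 and 10 dwell on HBM capacity, CoWoS packaging, and interconnect speed rather than raw FLOPS.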
