Nvidia: The Machine Learning Company (2006-2022)

Acquired · Apr 20, 2022 · 2h 15m

Ben Gilbert (host), David Rosenthal (host)

Topics

CUDA as a full-stack platform bet
Early competitive advantages: ship cadence, driver control, programmability
2008–2011 stumbles: earnings whiffs, Tegra mobile detour
ImageNet/AlexNet and the deep learning inflection
Data center monetization and margin expansion
Segmentation tactics: enterprise GPUs, GeForce restrictions, crypto mining
Strategic frontiers: Mellanox/DPUs, Arm attempt, Grace CPU, Omniverse, automotive

In this episode of Acquired, hosts Ben Gilbert and David Rosenthal explore how Nvidia’s bet on CUDA transformed a gaming chipmaker into an AI platform company.

Nvidia’s CUDA bet transformed a gaming chipmaker into an AI platform

The episode traces Nvidia’s post-2006 pivot from a dominant gaming GPU business to a broader mission: general-purpose computing on GPUs, enabled by the CUDA software platform.

That bet looked irrational for years—costly, early, and with an unclear market—leading to major stock drawdowns (2008, 2011) while Nvidia invested heavily in software, drivers, and developer tooling.

The “miracle” catalyst was deep learning’s breakthrough moment (ImageNet/AlexNet in 2012), which used CUDA on Nvidia GPUs and ignited massive enterprise/hyperscaler demand for GPU compute in data centers.

The hosts argue Nvidia’s moat now comes from full-stack integration (hardware + CUDA + libraries/SDKs + systems), enabling high margins and expanding ambitions into networking (Mellanox), CPUs (Grace), automotive, and Omniverse/digital twins.

Key Takeaways

CUDA created lock-in by making Nvidia a platform, not a chip vendor.

Nvidia gave CUDA away for free, but kept it proprietary to Nvidia hardware—an Apple-like play that built switching costs for developers and enterprises that standardized on CUDA libraries and tooling.

Owning drivers and the developer relationship was a hidden early moat.

Unlike peers that outsourced drivers, Nvidia internalized them, building deep low-level software capability and tighter user experience control—foundational for later full-stack ambitions.

The CUDA investment was an iPhone-sized bet made while already successful.

Nvidia pursued general-purpose GPU computing despite unclear market size, high cost, and long time-to-utility—an unusually bold move for a multi-billion-dollar public company.

AlexNet (2012) was the demand shock that made CUDA inevitable.

Deep learning’s computational intensity mapped perfectly to GPU parallelism; AlexNet’s success on CUDA/Nvidia GPUs turned “maybe someday” into immediate, compounding enterprise and hyperscaler demand.
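To make that mapping concrete, here is a minimal CUDA sketch (our illustration, not code from the episode) of the data-parallel pattern deep learning reduces to: the same scalar operation applied to every element independently, so thousands of GPU threads can execute it simultaneously. Matrix multiplies and convolutions decompose into exactly this kind of arithmetic at enormous scale.

#include <cstdio>
#include <cuda_runtime.h>

// One thread per output element: the data-parallel pattern that
// deep learning's matrix multiplies and convolutions decompose into.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];  // y = a*x + y, elementwise
}

int main() {
    const int n = 1 << 20;  // ~1M elements
    float *x, *y;
    // Unified memory keeps the sketch short; production code manages
    // host/device buffers and streams explicitly.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}

CPUs execute a handful of such iterations at a time; a GPU runs thousands of these threads in parallel, which is why deep learning's throughput-bound workloads landed on Nvidia hardware.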

Data center economics changed Nvidia’s business quality dramatically.

Enterprise GPUs and systems sell at far higher price points than consumer cards (tens of thousands vs. ...).

Nvidia increasingly sells ‘solutions,’ not components—expanding its margin envelope.

With systems, interconnect, libraries, and now software licensing, Nvidia bundles more of the stack, increasing differentiation and making it harder for rivals to compete on a single layer.

Segmentation is a core tactic to protect pricing power.

Nvidia restricted GeForce use in data centers, differentiated enterprise cards (no video-out), throttled consumer mining, and introduced dedicated mining products—reducing arbitrage and stabilizing demand.

Notable Quotes

If you don't build it, they can't come.

David Rosenthal (citing Jensen Huang’s framing)

CUDA… is entirely free… [but] closed source and proprietary exclusively to Nvidia's hardware.

David Rosenthal

This was the Big Bang moment for artificial intelligence, and NVIDIA and CUDA were right there.

David Rosenthal

Every single [deep learning startup] effectively comes in building on NVIDIA's platforms… We’d put in all of our money to NVIDIA.

Ben Gilbert (quoting Marc Andreessen)

You say solutions, I hear gross margin.

Ben Gilbert

Questions Answered in This Episode

CUDA was free-but-proprietary: what specific moments (products, libraries, partnerships) most accelerated developer lock-in versus OpenCL?

How much of Nvidia’s current moat is ‘CUDA the API’ versus the higher-level libraries (e.g., cuDNN, CUDA-X, SDKs) and enterprise support?

Was Tegra a strategic mistake, a necessary hedge, or an option that later paid off via Switch/auto infotainment entry?

What concrete evidence supports or refutes the idea that hyperscalers’ custom silicon (TPUs, internal accelerators) will materially cap Nvidia’s data center growth?

Nvidia’s segmentation tactics (GeForce restrictions, mining throttles) improved predictability—but do they risk ecosystem backlash long-term?

Transcript Preview

Ben Gilbert

Still got Swedish House Mafia's "Greyhound" in my head from the pump up.

David Rosenthal

Nice. Nice. [laughing]

Ben Gilbert

It is funny how all like GPU companies, like I was watching a bunch of NVIDIA keynotes and AMD keynotes to get ready for this, and everyone is so like techno, neon lighting.

David Rosenthal

[chuckles]

Ben Gilbert

Like, it's like crypto before crypto.

Speaker

[singing] Who got the truth? Is it you? Is it you? Is it you? Who got the truth now? Is it you? Is it you? Is it you? Sit me down, say it straight. Another story on the way. Who got the truth?

Ben Gilbert

Welcome to Season Ten, Episode Six of Acquired, the podcast about great technology companies and the stories and playbooks behind them. I'm Ben Gilbert, and I am the co-founder and managing director of Seattle-based Pioneer Square Labs, and our venture fund, PSL Ventures.

David Rosenthal

And I'm David Rosenthal, and I am an angel investor based in San Francisco.

Ben Gilbert

And we are your hosts. When I was a kid, David, I used to stare into backyard bonfires and wonder if that fire flickering was doing so in a random way, or if I knew about every input in the world, all the air, exactly the physical construction of the wood, all the variables in the environment, if it was actually predictable. And I don't think I knew the term at the time, but modelable. If I could know what the flame could look like if I knew all those inputs. And we now know, of course, it is indeed predictable, but the data and compute required to actually know that is extremely difficult. But that is what NVIDIA is doing today.

David Rosenthal

Ben, I love that intro. That's great!

Ben Gilbert

[laughing]

David Rosenthal

I was thinking, like, "Where is Ben going with this?"

Ben Gilbert

And this was occurring to me as I was watching Jensen sharing the omniverse vision for NVIDIA, and realizing NVIDIA has really built all the building blocks: the hardware, the software for developers to use that hardware, all the user-facing software now, and services to simulate everything in our physical world with their unbelievably efficient and powerful GPU architecture. And these building blocks, listeners, aren't just for gamers anymore. They are making it possible to recreate the real world in a digital twin, to do things like predict airflow over a wing, or simulate cell interaction to quickly discover new drugs without ever once touching a petri dish, or even model and predict how climate change will play out precisely. And there is so much to unpack here, especially in how NVIDIA went from making commodity graphics cards to now owning the whole stack in industries from gaming to enterprise data centers, to scientific computing, and now even basically off-the-shelf, self-driving car architecture for manufacturers. And at the scale that they're operating at, these improvements that they're making are literally unfathomable to the human mind. And just to illustrate, if you are training one single speech recognition machine learning model these days, one, just one model, the number of math operations, like adds or multiplies, to accomplish it is actually greater than the number of grains of sand on the Earth.
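A rough sanity check on that scale claim, using our own back-of-envelope numbers rather than figures from the episode: a commonly cited estimate puts the number of grains of sand on Earth at roughly 7.5 × 10^18. Training compute for a large dense neural network is approximately C ≈ 6ND floating-point operations for N parameters trained on D tokens, so even a one-billion-parameter model trained on 10^11 tokens requires about

    C ≈ 6 × (10^9) × (10^11) = 6 × 10^20

operations, comfortably more than the grains-of-sand estimate.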
