
NVIDIA’s Jensen Huang on Reasoning Models, Robotics, and Refuting the “AI Bubble” Narrative
Elad Gil (host), Sarah Guo (host), Jensen Huang (guest)
In this episode of No Priors, hosts Elad Gil and Sarah Guo talk with NVIDIA CEO Jensen Huang about reasoning models, robotics, and the "AI bubble" narrative.
Jensen Huang argues AI's next wave is grounded, embodied, and diverse
Huang frames 2025 as a year of major practical improvements—better grounding, reasoning, and “routers” that trigger research—reducing hallucinations and making enterprise tokens economically valuable.
He argues AI is not just software but new infrastructure that requires “AI factories,” creating demand for skilled labor (construction, electricians, technicians) while shifting work from tasks to higher-level job purposes.
He strongly defends open source as essential for startups, legacy industries, and research, rejecting the idea of a single “God AI” or monolithic model that makes vertical apps obsolete.
Looking ahead, he predicts “ChatGPT moments” in digital biology (protein/chemical generation), rapid robotics progress via end-to-end + reasoning models, and sustained growth constrained primarily by energy and capacity rather than demand; he also calls for nuanced US–China policy and rejects the “AI bubble” framing as overly chatbot-centric.
Key Takeaways
Grounding and reasoning shifted AI from impressive demos to trusted tools.
Huang highlights industry-wide advances that reduce hallucinations: stronger grounding, better reasoning, tighter integration with search, and “routers” that decide when to do additional research based on confidence.
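The "router" idea described here can be sketched as a simple confidence gate: answer directly when the model is sure, and trigger a retrieval step when it is not. This is a minimal illustration, not how any particular vendor implements it; `answer_with_confidence` and `web_search` are hypothetical stubs, and the 0.8 threshold is an assumed value.

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; real systems would tune this


def answer_with_confidence(question):
    """Stub for a model call returning (answer, self-reported confidence)."""
    return "Paris", 0.95  # placeholder response


def web_search(question):
    """Stub for the extra 'research' step the router can trigger."""
    return ["Paris is the capital of France."]


def route(question):
    answer, confidence = answer_with_confidence(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer  # confident enough: answer directly
    # Low confidence: do additional research and ground the answer in it.
    evidence = web_search(question)
    return f"{answer} (grounded in {len(evidence)} sources)"


print(route("What is the capital of France?"))
```

In production, the routing policy would also cover tool selection and retrieval triggers, which is exactly what the Q&A section below asks about.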
AI is becoming infrastructure, and infrastructure creates broad-based jobs.
Because AI generates tokens anew each use, it needs continual compute—driving buildouts of chip fabs, supercomputer manufacturing, and data-center-scale “AI factories,” which pull in construction and skilled trades at scale.
Productivity gains change what work is, not whether work exists.
Using radiology as the example, he argues AI automates tasks (reading scans) while expanding the purpose (better diagnosis, more patients, more research), which can increase headcount rather than reduce it.
Robotics is positioned to scale faster than self-driving did.
He describes four eras of autonomy (sensors → modular stacks → end-to-end → end-to-end + reasoning) and claims robotics benefits from lessons learned and modern foundation-model techniques, reducing the “10–15 year slog.”
Open source is a prerequisite for most real-world AI verticalization.
Closed frontier models can coexist with open models, but Huang argues that without open source, startups, higher education, and century-old industrial firms would be “suffocated” because they need adaptable pretrained foundations.
Compute will get dramatically cheaper, widening competition and use cases.
He expects compounded gains from hardware (5–10×/year), algorithms, and model architectures (e.g., …).
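If those gains compound, cost-per-token falls geometrically. The back-of-envelope sketch below uses the low end of the 5–10×/year hardware figure from the episode; the 2× algorithmic factor and the assumption that the factors multiply independently are illustrative, not claims from the source.

```python
hardware_gain = 5    # low end of the 5-10x/year hardware claim
algorithm_gain = 2   # hypothetical annual algorithmic improvement
combined = hardware_gain * algorithm_gain  # 10x/year if independent

cost = 1.0  # normalized cost of a token today
for year in range(3):
    cost /= combined  # compounding: cost shrinks by 'combined' each year

print(f"Relative cost after 3 years: {cost:.6f}")  # roughly 1/1000 of today
```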
The ‘AI bubble’ question is misframed when it fixates on chatbot revenue.
Huang points to multi-industry demand and capacity shortages—autonomous vehicles, finance/quant, robotics, biology—plus a broader shift from CPU-era general computing to accelerated computing as the underlying structural change.
Notable Quotes
“AI is software… but it’s not prerecorded software.”
— Jensen Huang
“A job has tasks and has purpose… the task is to study scans, but the purpose is to diagnose disease.”
— Jensen Huang
“I guess someday we will have God AI… that someday is probably on biblical scales… galactic scales.”
— Jensen Huang
“DeepSeek… [was] probably the single greatest contribution to American AI last year.”
— Jensen Huang
“Without energy, there can be no new industry.”
— Jensen Huang
Questions Answered in This Episode
When you describe “routers in front of models,” what does the routing policy look like in practice (confidence thresholds, retrieval triggers, tool selection)?
In the radiology example, what specific metrics show the “purpose vs task” effect—more scans per radiologist, better outcomes, or simply higher demand?
You claim token generation—especially reasoning tokens—is now highly profitable; what unit economics (compute, pricing, utilization) are required to sustain that?
What are the biggest technical blockers to ‘end-to-end + reasoning’ in self-driving and robotics: data, simulation realism, safety validation, or real-time compute?
You argue open source is essential and shouldn’t be damaged by policy—what concrete export-control or regulatory proposals most threaten open AI ecosystems?
Transcript Preview
[upbeat music] Jensen, thanks so much for joining us today.
So great to have you guys.
Yeah.
What an amazing year!
What a year. Things just happen.
Happy Hanukkah. Merry Christmas.
Happy holidays.
Happy New Year coming up. Yep, happy holidays.
Yeah.
So, uh, with everything that's happened in twenty twenty-five, um, and, you know, being in the middle of the vortex with it, what do you reflect on and say, like, this surprised you most, or this is the biggest change?
Let's see. There, there are some things that didn't surprise me. Like, for example, the scaling laws didn't surprise me, because we already knew about that. The technology advancement didn't surprise me. I was pleased with the improvements of grounding. I was pleased with the improvements of reasoning. I was pleased with im- uh, uh, the connection of all of the models to, to, to search. I'm pleased that it... That, uh, there are now routers that are in front of these models so that it could, depending on the confidence of the answers, go off and do necessary research and, and just generally improve the quality and the accuracy of answers.
Mm-hmm.
I'm hugely proud of that. I think the whole industry addressed one of the biggest skeptical responses of AI, which is hallucination and, um, generating gibberish and all of that stuff. I, I thought that this year, the whole industry, everything from every and every field, from language to vision, to robotics, to self-driving cars, the up- the application of reasoning and the grounding of the, of, of, of, of the answers, um, big, big leaps. Would you guys say this year?
Oh, huge. Yeah. I mean, things like OpenEvidence too, for medical information, where doctors are now really using that as a trusted resource. Like you, uh, Harvey, for legal, you're, you're really starting to see AI emerge as one of these things that's become a trusted tool or counterparty for, you know, experts to actually be able to do what they do much better.
That's, that's right. And so, so in a lot of ways, I was expecting it, but I'm still pleased by it. I'm proud of it. I'm proud of all of the industry's work in this area. I'm really pleased and, and, uh, uh, and probably a little bit surprised, in fact, that token generation rate for inference, especially reasoning tokens, are growing so fast, several exponentials at the same time, it seems. [chuckles] And, uh, and I'm so pleased that, that these tokens are now profitable, that people are generating... I heard somebody, uh, or heard, heard today that, that OpenEvidence, speaking of them-
Mm-hmm
... Ninety percent gross margins.
Mm-hmm.
I mean, those are very profitable tokens.
Yeah.
And so they're obviously doing very profitable work, very valuable work. Cursor, their margins are great. Uh, Claude's margins are great.
Mm.
For the enterprise use of OpenAI, their margins are great. Um, so anyways, it's really terrific to see that, that, um, we're now generating tokens that are sufficiently good, so good in value, that, that people are willing to pay good money for it. And so I, I think these are, are really great grounding for the year. I mean, some of the things that-- the narrative that, that, um, uh, of course, the conversation with China really, really, you know, occupied a lot of my, my time this year. Geopolitics, uh, the importance of technology in each one of the countries. Uh, I spent more time traveling around the world this year than just about any time in hi- all of my life combined. You know, [chuckles] my average elevation this year is probably about seventeen thousand feet-