No Priors

No Priors Ep. 54 | With Sarah Guo & Elad Gil

Host-only episode discussing NVIDIA, Meta, and Google earnings, the Gemini and Mistral model launches, the open-vs-closed-source debate, domain-specific foundation models, whether we'll see real competition in chips, and the state of AI ROI and adoption. Sign up for new podcasts every week. Email feedback to show@no-priors.com. Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil

Show Notes:
0:00 Introduction
0:27 Model news and product launches
5:01 Google enters the competitive space with Gemini 1.5
8:23 Biology and robotics using LLMs
10:22 Agent-centric companies
14:22 NVIDIA earnings
17:29 ROI in AI
20:43 Impact from AI
25:45 Building effective AI tools in house
29:09 What would it take to compete with NVIDIA
33:23 The architectural approach to compute
35:42 The roadblocks to chip production in the US
38:30 The virtuous tech cycles in AI

Hosts: Elad Gil, Sarah Guo
Mar 6, 2024 · 42 min

At a glance

WHAT IT’S REALLY ABOUT

AI Models, Chips, and Agents: Google, Nvidia, and Enterprise Disruption

  1. Sarah Guo and Elad Gil survey the rapid progress in AI models, including Google's Gemini 1.5, OpenAI's Sora, and Mistral's fast-rising large models, and debate the future of retrieval, long context windows, and agentic systems.
  2. They argue Google has reawakened as a serious AI contender, while specialized models in biology, robotics, and other scientific domains are poised to proliferate in 2024–2025.
  3. On the infrastructure side, they discuss Nvidia’s continued dominance, the economics of massive GPU capex, and how hyperscalers drive most AI spend, with Meta as a clear example of ROI from heavy AI investment.
  4. They highlight early but real enterprise impact—like Klarna’s AI assistant replacing the work of 700 agents—as a sign that services, software, and labor markets will be reshaped as AI applications finally catch up with the infrastructure wave.

IDEAS WORTH REMEMBERING

5 ideas

Longer context windows expand, not replace, retrieval-based systems.

Models like Gemini 1.5 with million-token context windows enable new use cases (e.g., full-protein reasoning) but mainly broaden the design space of trade-offs between retrieval, context stuffing, and model reasoning rather than making RAG obsolete.
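The trade-off described here can be made concrete with a minimal sketch. All numbers and names below are hypothetical illustrations, not anything from the episode: the idea is simply that a larger context window shifts where the "stuff everything in" strategy stops working and retrieval has to take over.

```python
# Hypothetical sketch of the context-stuffing vs. retrieval trade-off.
# With a small window you must retrieve aggressively; with a ~1M-token
# window (Gemini 1.5-scale) far more raw context fits directly.

def plan_context(doc_tokens, window=1_000_000, reserve=8_000):
    """Decide how much of a corpus fits directly vs. needs retrieval."""
    budget = window - reserve  # leave room for the prompt and the answer
    total = sum(doc_tokens)
    if total <= budget:
        # Everything fits: no retrieval step needed at all.
        return {"strategy": "stuff", "retrieved_docs": 0}
    # Otherwise fall back to retrieval: greedily keep the largest docs
    # that fit (a stand-in for a real relevance-ranked retriever).
    kept, used = 0, 0
    for t in sorted(doc_tokens, reverse=True):
        if used + t <= budget:
            used += t
            kept += 1
    return {"strategy": "retrieve", "retrieved_docs": kept}

# A 100k-token corpus fits outright in a 1M-token window...
print(plan_context([50_000, 50_000]))
# ...but a 120k-token corpus in a 32k window forces retrieval of a subset.
print(plan_context([10_000] * 12, window=32_000))
```

The point of the sketch is that the window size only moves the threshold; it never eliminates the retrieval branch, which matches the view that long context broadens the design space rather than making RAG obsolete.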

Google has reemerged as a serious AI heavyweight.

Gemini 1.5 and rapid shipping cadence show that once Google applied real organizational will, its compute, data, distribution, and research depth positioned it as a central player again, shifting questions from technical capability to strategic focus and risk appetite.

Agents will work first in narrow, feedback-rich environments.

Rather than generic ‘do-anything’ agents, progress is coming from systems embedded in constrained domains (games, code, specific web apps) where reinforcement learning, sampling, and validation are possible and data for post-training can be economically collected.

AI infrastructure spend is led by hyperscalers, and Nvidia’s moat is deep.

Most of the capital flowing into GPUs comes from cloud and big tech (Microsoft, Meta, Amazon, Google, etc.), driven by strong incentives to upgrade across A100 → H100 → H200 → B100 generations; Nvidia’s chip performance, CUDA ecosystem, and interconnect give it a durable lead, with second sources facing manufacturing and ecosystem hurdles.

Big-tech AI capex can already show outsized enterprise value creation.

Meta’s multibillion-dollar AI infrastructure investments translated into materially better targeting, recommendations, and tools, helping add nearly $200B in market cap in a single earnings reaction, illustrating that large AI bets can pay off at scale.

WORDS WORTH SAVING

5 quotes

The thing that was lacking until recently was the will. And it seems like now, because of the competitive dynamic, the will has been reborn.

Elad Gil (on Google and Gemini)

I’m more of the belief that [large context] just opens up the set of trade-offs you can make between retrieval, more sophisticated retrieval, and model reasoning.

Sarah Guo

We haven’t seen anything yet really in the app wave… many of the apps so far were started by people who were very close to the research community.

Elad Gil

There’s a joke that the foundation model companies are here to replace all the jobs, but they don’t understand what any of the jobs are.

Sarah Guo

The more I learn, the less I know in AI, and it’s the opposite of every other field I’ve ever been in.

Elad Gil

- State of frontier models: Gemini 1.5, Mistral Large, Sora, and context windows
- Retrieval vs. long-context reasoning and the evolving role of RAG
- Specialized models in biology, robotics, and other scientific domains
- Agent architectures, reinforcement learning, and targeted application domains
- Nvidia's dominance, GPU upgrade cycles, and hyperscaler AI capex
- Enterprise AI adoption, automation of services (e.g., customer support), and ROI
- Semiconductor supply chain, second sources to Nvidia, and fab/geopolitics dynamics

High quality AI-generated summary created from speaker-labeled transcript.
