E167: Google's Woke AI disaster, Nvidia smashes earnings (again), Groq's LPU breakthrough & more


All-In Podcast · Feb 23, 2024 · 1h 20m

Jason Calacanis (host), Chamath Palihapitiya (host), David Sacks (host), Narrator, David Friedberg (host)

- Nvidia’s record earnings, data center GPU demand, and long‑term sustainability
- Competition in AI hardware: Groq’s LPU architecture versus Nvidia GPUs
- Deep tech investing: timelines, risk profiles, and portfolio role
- AI economics: training vs. inference, cloud capex, and application-layer value
- Google Gemini’s ‘woke’ misfires, AI bias, and the primacy of truth in models
- Open source and model customization as alternatives to centralized, biased AI
- Geopolitical update: Russia’s advances in Ukraine and tensions in Transnistria/Moldova

In this episode of the All-In Podcast, hosts Jason Calacanis, Chamath Palihapitiya, David Sacks, and David Friedberg dig into Nvidia’s AI gold rush, Groq’s chip challenge, and Google’s Gemini flop.

Nvidia’s AI gold rush, Groq’s chip challenge, and Google’s flop

The hosts dissect Nvidia’s blowout earnings, arguing its GPU dominance is fueling an AI infrastructure boom that may echo Cisco’s dot‑com era rise but with a stronger moat and more grounded valuation.

They highlight Groq’s long‑gestating LPU (Language Processing Unit) breakthrough as a potential disruptor in AI inference, using it to explore the economics and timelines of deep tech versus quick-win software plays.

A major segment critiques Google’s Gemini image and answer bias as the product of an ideologically captured culture, debating whether AI systems should prioritize truth, safety, or value-laden social goals—and how that affects user trust.

The episode closes with a brief geopolitical update on the Russia–Ukraine war, including rising tensions in Moldova’s Transnistria region and the risk of broader escalation.

Key Takeaways

Nvidia’s current growth is extraordinary but partly driven by one‑time AI infrastructure build‑out.

Massive GPU purchases by cash‑rich tech giants are often capitalized as data center capex, enabling huge near‑term Nvidia revenues that may not fully represent steady‑state demand once the initial build‑out normalizes.


The eventual value in AI may accrue more at the application layer than at the hardware layer.

Drawing parallels to Cisco and early internet infrastructure, the hosts argue that while Nvidia will likely remain dominant, the largest long‑term winners may be those who build compelling AI applications that billions of users pay for.


Groq’s LPU chips target the inference problem—speed and cost—rather than training brute force.

By designing smaller, specialized compute units networked together and paired with a custom compiler, Groq aims to deliver far faster and cheaper inference than GPUs, which could sharply change AI serving economics if scaled.


Deep tech ventures require long, capital‑intensive grinds but can create huge moats when they work.

Groq, SpaceX, Tesla, and certain biotech efforts illustrate that projects needing multiple hard technical steps to align over 7–10 years can be unfundable by consensus VC but yield outsized outcomes and defensibility when successful.


AI systems that prioritize ideology or ‘safety’ over factual accuracy risk losing user trust.

The Gemini controversy—hallucinated diverse Founding Fathers, evasive responses, and overt value injections—shows how tuning for social goals can distort obvious facts; the hosts argue ‘tell the truth’ must be the primary design principle.


Customization and transparency are critical paths to reconciling AI bias with diverse user values.

Rather than hard‑coding one moral framework, the group suggests giving users choices (e.g., …)


Geopolitical flashpoints around Ukraine may widen, not freeze, the conflict.

With Russia’s continued territorial gains and a potential annexation request from Transnistria in Moldova, Sacks warns that Western narratives of stalemate are misleading and that new fronts could escalate tensions across Eastern Europe.


Notable Quotes

In capitalism, when you over‑earn for enough time, competitors step up to compete away those profits.

Chamath Palihapitiya

Most of the apps we’re seeing in AI today are toy apps—proofs of concept and demos, not production code.

Chamath Palihapitiya

The Gemini rollout was a joke. The AI isn’t capable of giving you accurate answers because it’s been so programmed with diversity and inclusion.

David Sacks

The first base principle of every AI product should be that it is accurate and right.

Chamath Palihapitiya

An overnight success can take eight years.

David Friedberg

Questions Answered in This Episode

How sustainable is Nvidia’s current growth once the initial AI data center build‑out moderates, and what scenarios could meaningfully reduce its long‑term valuation?

The hosts dissect Nvidia’s blowout earnings, arguing its GPU dominance is fueling an AI infrastructure boom that may echo Cisco’s dot‑com era rise but with a stronger moat and more grounded valuation.


What specific technical and economic advantages must Groq demonstrate at scale to truly disrupt Nvidia’s dominance in AI inference workloads?

They highlight Groq’s long‑gestating LPU (Language Processing Unit) breakthrough as a potential disruptor in AI inference, using it to explore the economics and timelines of deep tech versus quick-win software plays.


How should investors decide when a deep tech opportunity—like new chips, fusion, or biotech—is ‘physics‑bounded but fundable’ versus effectively science fiction?

Pointing to Groq, SpaceX, Tesla, and certain biotech efforts, the hosts argue that deep tech projects requiring multiple hard technical steps to align over 7–10 years can be unfundable by consensus VC, yet yield outsized outcomes and defensibility when they succeed.


What governance or product design mechanisms could ensure that large AI systems prioritize factual truth while still addressing legitimate concerns about harm and bias?

The hosts critique Google’s Gemini image and answer bias as the product of an ideologically captured culture, debating whether AI systems should prioritize truth, safety, or value-laden social goals, and how that choice affects user trust.


If open‑source models and user‑tunable preferences become mainstream, how might that fragment the information ecosystem compared with today’s more centralized search paradigm?


Transcript Preview

Jason Calacanis

All right, everybody. Welcome back to your favorite podcast of all time, the All In Podcast, episode 160 something. With me again, Chamath Palihapitiya. He's a CEO of a company and he invests in startups, and, uh, his firm is called Social Capital. We also have David Friedberg, The Sultan of Science. He's now a CEO as well. And we have David Sacks from Craft Ventures in some undisclosed hotel room somewhere. How we doing, boys?

Chamath Palihapitiya

Good. Thank you. This is an odd intro.

David Sacks

Ah, could your intro be any more low energy and dragged out?

Jason Calacanis

(laughs) I'm sick. What do you want me to do? You want me to drink a-

David Sacks

Geez, try and fake the effort.

Jason Calacanis

... throat lozenge? All right here, give, give me one more shot. Watch this. (clears throat)

David Sacks

(laughs)

Jason Calacanis

Watch this. Watch profession- You want professionalism? Here we go.

David Sacks

Fake the effort, come on.

Jason Calacanis

Here we go. You want professionalism? I'll show you guys professionalism.

Chamath Palihapitiya

Is that Binaca? What was that?

Jason Calacanis

This?

Chamath Palihapitiya

Is that Binaca?

Jason Calacanis

Oh, it is a secret.

Chamath Palihapitiya

(laughs)

Jason Calacanis

(laughs)

Chamath Palihapitiya

(laughs)

Jason Calacanis

Banana boat.

Narrator

We're going all in. Let your winners ride. Rain Man David Sacks. We're going all in. And I said we open sourced it to the fans and they've just gone crazy with it. Love you guys. Queen of Quinoa. I'm going all in.

Jason Calacanis

All right, everybody. Welcome to the All In Podcast, episode 167, 168.

Chamath Palihapitiya

(laughs)

Jason Calacanis

With me, of course, the Rain Man himself, David Sacks, the Dictator Chairman, Chamath Palihapitiya, and our Sultan of Science, David Friedberg. How we doing, boys?

Chamath Palihapitiya

Great.

David Sacks

Oh, great.

Chamath Palihapitiya

How are we doing?

Jason Calacanis

Is that high energy enough for you?

Chamath Palihapitiya

Yeah.

David Sacks

Is it 167 or 168?

Jason Calacanis

I don't know. Who cares?

David Sacks

Can we at least get you to know the episode number?

Jason Calacanis

Who cares? We... Unfortunately or fortunately, we're going to be doing this thing forever. The, the audience demands it.

Chamath Palihapitiya

(laughs)

Jason Calacanis

It doesn't matter. This is like a Twilight Zone episode. We're going to be trapped in these four bubbles forever. You know, like Superman?

Chamath Palihapitiya

Black Mirror.

Jason Calacanis

It's up... It is. It's, this is like the... It is, uh, The Gift and the Curse.

Chamath Palihapitiya

Oh, when they were trapped in that glass, uh-

Jason Calacanis

Yeah, yeah.

Chamath Palihapitiya

... Zed? Was that his name?

Jason Calacanis

Zod.

Chamath Palihapitiya

Zed? Zod, yeah.

Jason Calacanis

Zod. Kneel Before Zod.

Chamath Palihapitiya

And he spun through the universe in the plastic-

Jason Calacanis

Yeah.

Chamath Palihapitiya

... thing forever for, for infinity?

Jason Calacanis

Yeah, until, until Superman took the nuclear bomb out of the, uh, Eiffel Tower and threw it into space and blew it up and freed them.

Chamath Palihapitiya

You know, my background today-

Jason Calacanis

And he fought with him.

Chamath Palihapitiya

... I think I'm gonna have to change now that you've referenced this important scene.
