All-In Podcast: OpenAI Misses Targets, Codex vs Claude, Elon vs Sam Trial, Big Hyperscaler Beats, Peptide Craze
Jason Calacanis on OpenAI growth stumbles as compute, cyber, and peptides surge forward.
In this episode of the All-In Podcast, Jason Calacanis and David Sacks discuss OpenAI's growth stumbles as compute, cyber, and peptides surge forward. OpenAI reportedly missed user and revenue targets, raising concerns about its massive compute spend commitments and IPO readiness.
At a glance
WHAT IT’S REALLY ABOUT
OpenAI growth stumbles as compute, cyber, and peptides surge forward
- OpenAI reportedly missed user and revenue targets, raising concerns about its massive compute spend commitments and IPO readiness.
- Despite the financial headlines, the panel argues OpenAI’s recent product momentum (GPT-5.5, Codex, and a cyber-focused variant) is shifting developer sentiment versus Anthropic’s latest Claude/Opus release.
- A major throughline is that AI progress is increasingly constrained by power and grid infrastructure, which strengthens hyperscalers’ bargaining leverage and drives unprecedented CapEx escalation.
- The conversation frames AI-cyber capabilities as an inevitable leap that will expose dormant vulnerabilities quickly, likely triggering a large one-time security upgrade cycle before reaching a new offense/defense equilibrium.
- They close with two non-AI themes: enthusiasm around Lilly’s retatrutide (triple-agonist peptide) as a potential next “wonder drug,” and Friedberg’s on-the-ground reflections from a Supreme Court Roundup/Monsanto preemption hearing.
IDEAS WORTH REMEMBERING
5 ideas
OpenAI’s ‘bad week’ headlines may mask a real product comeback.
The group argues GPT-5.5 and Codex are pulling developer mindshare back from Anthropic, even as OpenAI faces scrutiny over missed consumer growth forecasts and large compute commitments.
Power—not model demand—is framed as the true constraint on AI growth.
Chamath emphasizes that token supply is limited by electricity generation and grid buildout (transformers, turbines, permitting), pushing AI labs into tougher negotiations with hyperscalers for capacity and control.
Hyperscalers are transitioning from asset-light cash machines to asset-heavy industrial builders.
With ~$725B 2026 CapEx guidance (AMZN/MSFT/GOOG/META), the panel expects more leverage, financial engineering, and altered valuation narratives as these firms prioritize infrastructure over free cash flow.
This CapEx cycle is not ‘dot-com dark fiber’ because utilization is already here.
Sacks’ core distinction is there are “no dark GPUs”: demand is pulling infrastructure forward now, unlike overbuilt fiber that lacked near-term applications in 2000.
AI-cyber capability is a dual-use accelerant, not the creator of vulnerabilities.
Sacks argues models like Mythos/GPT-5.5 Cyber mainly discover existing bugs faster; the urgent policy/industry question is whether defenders deploy these tools broadly before attackers (including state actors) do.
WORDS WORTH SAVING
5 quotes
The reason that these folks may miss a number or a forecast have nothing to do with demand. It is entirely 100% due to the supply of the power necessary to generate the output token.
— Chamath Palihapitiya
It doesn't create the vulnerabilities. It just discovers them. The bugs were already in the code.
— David Sacks
AI is now synonymous with the growth of the American economy, and if there's no economic growth, there's not gonna be money to pay for all the social programs, there's not gonna be money to pay down the national debt, there's not gonna be money to basically build up our national defense, all these things we wanna spend money on.
— David Sacks
Rumination is the path to unhappiness. Nobody gives a fuck about your feelings.
— Jason Calacanis
The longer the time horizon for a task, the more likely it is to go off the rails.
— David Sacks
QUESTIONS ANSWERED IN THIS EPISODE
5 questions
What exactly changed in GPT-5.5/Codex that’s making developers switch back from Claude/Opus—latency, tool use, code correctness, or cost?
If power and grid hardware are the bottleneck, which specific components (transformers, turbines, interconnects, permitting) are the longest poles—and what’s the realistic timeline to relieve them?
How should AI labs structure ‘compute-for-equity/control’ deals with hyperscalers to avoid strategic dependency while still meeting growth forecasts?
In AI cybersecurity, what’s the fastest practical way for enterprises to run AI-driven ‘white hat’ testing without creating new compliance or data-exfiltration risks?
From the Pocket OS failure, what concrete guardrails (permission boundaries, kill-switches, staged deploys, immutable backups) should be mandatory in agentic coding tools?
Chapter Breakdown
Cold open: Miss Thing Podcast clip and besties banter
The episode kicks off with a comedic detour as Jason shares a viral “Gay Name, Straight Name” bit, prompting playful roasting of the hosts. The group uses the clip to warm up the room and settle into the episode’s tone before moving into AI and business news.
OpenAI misses growth targets—why it matters (users, revenue, compute commitments)
The besties react to reporting that OpenAI missed internal targets for weekly active users and revenue. They frame the concern around OpenAI’s massive compute/data-center spending commitments and the tension between IPO timing and public-company readiness.
Codex vs Claude: product momentum shifts and compute gating
Sacks argues OpenAI’s “bad press week” contrasts with a strong product stretch, especially for coding. The group discusses how Anthropic’s latest release is perceived as weaker and how compute constraints can force ‘rationing’ that changes user experience.
The real choke point: power and grid infrastructure (not just GPUs)
Chamath reframes the shortfall story as supply-side: power availability limits token generation, not demand. The conversation expands to grid bottlenecks, transformer shortages, and the way hyperscalers may extract economics/control in exchange for capacity.
Market structure and efficiency: the ‘rule of three’ and model pruning
Friedberg lays out a market-structure view (consumer vs enterprise) and predicts a small number of dominant players. He then highlights technical pathways to efficiency—like pruning and dynamic model selection—that could reduce inference costs dramatically.
AI cybersecurity is about to explode: Mythos, GPT-5.5 Cyber, offense vs defense
The hosts discuss AI-driven cyber tools moving from research to commercialization, with new models demonstrating multi-step attack simulation capabilities. They emphasize that these systems reveal existing vulnerabilities and can be used defensively to harden infrastructure before adversaries scale up.
Musk vs Altman lawsuit: nonprofit-to-for-profit fight and the ‘diary’ discovery
The episode pivots to the Elon Musk vs OpenAI trial, focusing on claims of charitable trust breach and the controversy of converting to a for-profit structure. The group zeroes in on the unusual discovery of Greg Brockman’s diary entries and what they imply about intent and governance.
Therapy, rumination, and ‘keep moving forward’—a cultural sidebar
A side conversation spins out from the diary topic into broader commentary about rumination, self-improvement culture, and therapy incentives. While comedic and provocative, the segment returns to a shared idea: forward motion and specificity beat endless reflection.
Hyperscalers smash earnings: the CapEx supercycle and ‘no dark GPUs’
The besties break down blockbuster results from Google, Microsoft, Amazon, and Meta, emphasizing that CapEx guidance is the real headline. They debate whether this resembles the dot-com infrastructure bubble, concluding demand for tokens/compute is real and immediate.
Vibecoding nightmare: an agent deletes a production system
A cautionary tale illustrates the risks of agentic coding when tools have direct permissions in production environments. The hosts argue this isn’t ‘AI scheming’ but classic software/process failure amplified by overconfident automation, reinforcing the need for supervision and accountability.
Retatrutide and the peptide mainstream moment: weight loss, metabolic and liver benefits
Friedberg and Chamath dive into the ‘peptide craze,’ focusing on Lilly’s retatrutide trial results and why they’re generating hype beyond obesity and diabetes. They discuss multi-agonist mechanisms, cardiometabolic markers, liver fat reduction, muscle preservation, and timelines for approval.
Friedberg goes to the Supreme Court: inside the Monsanto/Roundup preemption case
Friedberg recounts attending a Supreme Court oral argument, describing the courtroom’s rituals and the intensity of elite advocacy. He summarizes the legal core: whether EPA labeling under FIFRA preempts state ‘failure to warn’ claims, and how post-Chevron doctrine changes the analysis.
Wrap-up: Supreme Court access, institutional trust, and closing jokes
The group closes with musings on court legitimacy, ‘court packing,’ and how public perception tracks recent decisions. Jason ends by joking about monetizing access and the show signs off with the usual besties banter.