All-In Podcast

Why Anthropic's best model is locked up despite a $30B ramp

Mythos scored so well on cyber offense that Anthropic delayed its release; a 100-day hardening window did not stop Claude Code from driving a $30B run rate.

Jason Calacanis (host) · Brad Gerstner (guest) · David Sacks (host) · Chamath Palihapitiya (host)
Apr 9, 2026 · 1h 29m · Watch on YouTube ↗

FREQUENTLY ASKED QUESTIONS

Direct answers grounded in the episode transcript. Tap any timestamp to verify against the source.

  1. Why did Anthropic delay Mythos?

    David Sacks said Anthropic's Mythos delay was credible because better coding models also become better cyber tools. He first criticized Anthropic's pattern of fear-heavy launches, including the earlier blackmail study, but said this cyber case made logical sense. As coding models become more capable, they can find bugs, identify vulnerabilities, and chain several vulnerabilities into working exploits. Sacks expected a one-time catch-up period of roughly six months, when AI-driven cyber tools could surface dormant bugs across many systems. The pre-release window gives software companies with existing code bases a chance to use the capability themselves, detect vulnerabilities, and patch before similar tools become widely available. He said CISOs and IT departments should take the next few months seriously. If everyone reacts correctly, he did not expect Anthropic's doomsday scenario, but he thought the fear could drive useful defensive behavior.

    11:34 in transcript
  2. What happened to OpenClaw and the $200 Claude plan?

    Jason Calacanis said OpenClaw users lost the cheap path from Claude subscriptions to heavy agent usage. His explanation was that OpenClaw users had been connecting $200 Anthropic subscriptions to the tool, while those subscriptions were priced on blended usage across many users. Most subscribers used less than they paid for, but OpenClaw power users could consume far more. Jason said some were using what he described as $2,000 or even $20,000 worth of tokens under the $200 plan. Anthropic then told those users they could no longer use the subscription that way and had to move to the API, where they would pay by usage. He framed that as Anthropic "ankling" OpenClaw, especially because Anthropic announced its own agent technology soon after. Brad Gerstner pushed back that companies can set rational prices when power users are effectively buying dollars for ten cents.

    25:41 in transcript
  3. What is David Sacks' coding tokens flywheel?

    David Sacks linked coding-token leadership to a possible data flywheel for agents. He said AI-generated code might be only about five percent of code today, but he expected it to move toward ninety-five percent over the next few years. If one AI model company has fifty to sixty percent share in coding, Sacks argued, it could have the most developers using it and the most access to code bases. That could mean the most training tokens, creating data-scale effects that help the early leader consolidate its lead. He then connected coding to agents: agents often need to write code in order to complete tasks. Because coding could lead into the agent market, Sacks argued that companies should avoid discriminatory tactics, keep pricing fair, and behave cleanly before regulators later review the market with hindsight.

    39:47 in transcript
  4. Why did Brad Gerstner call AI the TAM for intelligence?

    Brad Gerstner said Anthropic's ramp showed demand for intelligence was scaling like an exponential. He argued that the product crossed a capability threshold where companies no longer treated it as a normal IT budget item. They saw it as labor augmentation and labor replacement. He pointed to Anthropic reaching its planned $30 billion year-end exit run rate by the end of March and said the company had over a thousand enterprise customers paying more than $1 million annually. For Brad, the signal was not a sudden go-to-market trick, but millions of self-interested consumers and enterprises demanding a product because it made them better at their work. He said compute was still the limiter, with only one and a half to two gigawatts available, and that the unit cost of intelligence was plummeting as model capability improved.

    44:13 in transcript
  5. What is the gross revenue versus net revenue issue with Anthropic and OpenAI?

    Chamath Palihapitiya said the Anthropic revenue debate was still stuck before profitability metrics. He described a cycle where, when companies cannot talk about free cash flow, they talk about EBITDA, then margins, then revenue, then gross revenue. In his view, the AI discussion sat between gross revenue and net revenue. He cited an article, likely in The Information, that tried to distinguish Anthropic presenting gross revenue from OpenAI presenting net revenue. Chamath said outsiders did not know the take rates, and the companies had not provided enough clarity. That left confusion about recognized revenue, run-rate revenue, and what multiples to apply to those numbers. He stressed that the market was not yet discussing steady-state free cash flow margin or EBITDA. The practical question, he said, was how gross-margin-negative the revenue growth might be, which outsiders did not know.

    50:14 in transcript
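The subscription arithmetic behind the OpenClaw dispute (question 2) can be sketched in a few lines. This is an illustrative toy calculation, not Anthropic's actual pricing model: the cohort mix and the $2,000 power-user figure are assumptions taken from the hosts' examples in the episode.

```python
# Toy model of blended flat-rate subscription pricing. All numbers are
# assumptions drawn from the episode's examples, not real Anthropic data.

PLAN_PRICE = 200  # monthly subscription price in dollars

def blended_margin(usage_costs):
    """Average per-subscriber margin when every subscriber pays the flat plan price."""
    avg_cost = sum(usage_costs) / len(usage_costs)
    return PLAN_PRICE - avg_cost

# Nine light users plus one OpenClaw-style power user burning $2,000 of tokens.
cohort = [50] * 9 + [2_000]
print(blended_margin(cohort))  # -> -45.0: one power user flips the pool negative

# Brad's "buying dollars for ten cents": the fraction of token cost the
# $2,000 power user actually pays under the flat plan.
print(PLAN_PRICE / 2_000)      # -> 0.1
```

The sketch shows why flat-rate plans only work on blended usage: the pool is profitable while average consumption stays below the plan price, and a single heavy agent user can push the whole cohort underwater, which is the dynamic Jason described and the rationale behind moving such users to metered API pricing.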
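The gross-versus-net distinction Chamath raises (question 5) can also be made concrete with a toy calculation. The 20% take rate below is a made-up assumption for illustration; as he noted, outsiders do not know the real take rates.

```python
# Toy illustration of gross vs. net revenue recognition for the same
# underlying activity. The take rate is an assumed figure, not a real one.

def gross_vs_net(platform_volume, take_rate):
    """Gross recognition books the full customer spend; net books only the take."""
    gross = platform_volume
    net = platform_volume * take_rate
    return gross, net

# $100M of customer spend flowing through at an assumed 20% take rate.
gross, net = gross_vs_net(100_000_000, 0.20)
print(gross, net)  # -> 100000000 20000000.0
```

The same activity yields a 5x difference in reported topline depending on the convention, which is why Chamath argued that comparing Anthropic's and OpenAI's headline numbers is unreliable without knowing which convention and which take rates are in play.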

Answers are AI-generated from the transcript and may contain errors. Tap a question to verify against the source.
