Dwarkesh Podcast

Dario Amodei on Dwarkesh Patel: Why the Exponential Ends

Why the big blob of compute predicts log-linear gains through 2025: AIME-tested RL and pre-training confirm the curve; SWE task breadth is the remaining gap.

Dwarkesh Patel (host) · Dario Amodei (guest)
Feb 13, 2026 · 2h 22m · Watch on YouTube ↗

Episode Details

EPISODE INFO

Released
February 13, 2026
Duration
2h 22m
Channel
Dwarkesh Podcast
Watch on YouTube

EPISODE DESCRIPTION

Dario Amodei thinks we are just a few years away from “a country of geniuses in a data center”. In this episode, we discuss what to make of the scaling hypothesis in the current RL regime, how AI will diffuse throughout the economy, whether Anthropic is underinvesting in compute given their timelines, how frontier labs will ever make money, whether regulation will destroy the boons of this technology, US-China competition, and much more.

SPONSORS

  • Labelbox can get you the RL tasks and environments you need. Their massive network of subject-matter experts ensures realism across domains, and their in-house tooling lets them continuously tweak task difficulty to optimize learning. Reach out at https://labelbox.com/dwarkesh
  • Jane Street sent me another puzzle… this time, they’ve trained backdoors into 3 different language models — they want you to find the triggers. Jane Street isn’t even sure this is possible, but they’ve set aside $50,000 for the best attempts and write-ups. They’re accepting submissions until April 1st at https://janestreet.com/dwarkesh
  • Mercury’s personal accounts make it easy to share finances with a partner, a roommate… or OpenClaw. Last week, I wanted to try OpenClaw for myself, so I used Mercury to spin up a virtual debit card with a small spend limit, and then I let my agent loose. No matter your use case, apply at https://mercury.com/personal-banking

To sponsor a future episode, visit https://dwarkesh.com/advertise.

TIMESTAMPS

00:00:00 - What exactly are we scaling?
00:12:36 - Is diffusion cope?
00:29:42 - Is continual learning necessary?
00:46:20 - If AGI is imminent, why not buy more compute?
00:58:49 - How will AI labs actually make profit?
01:31:19 - Will regulations destroy the boons of AGI?
01:47:41 - Why can’t China and America both have a country of geniuses in a datacenter?
02:05:46 - Claude's constitution

SPEAKERS

  • Dwarkesh Patel (host)
  • Dario Amodei (guest)

EPISODE SUMMARY

In this episode of the Dwarkesh Podcast, Dario Amodei argues that scaling continues, but that exponential progress may end soon. He claims the core scaling thesis hasn’t changed since his 2017 “Big Blob of Compute” view: compute, data (quantity and quality), training time, scalable objectives, and stability tricks dominate progress more than clever new algorithms.

RELATED EPISODES

David Reich – Bronze Age shock, the Neanderthal puzzle, & the sudden spread of farming

Jensen Huang – TPU competition, why we should sell chips to China, & Nvidia’s supply chain moat

Andrej Karpathy — “We’re summoning ghosts, not building animals”

Why Leonardo was a saboteur, Gutenberg went broke, and Florence was weird – Ada Palmer

Richard Sutton – Father of RL thinks LLMs are a dead end

Elon Musk – "In 36 months, the cheapest place to put AI will be space”
