The Twenty Minute VC

The Early Days of Anthropic & How 21 of 22 VCs Rejected It | The Four Bottlenecks in AI | Anj Midha

Anjney Midha is the founder of AMP and a founding investor in Anthropic. Most recently, Anj was a General Partner at Andreessen Horowitz, leading frontier AI investments. He serves on the boards of Mistral, Black Forest Labs, Sesame, LMArena, OpenRouter, Luma AI, and Periodic Labs, and is an early angel investor in ElevenLabs, among others. Prior to that, Anj was the co-founder/CEO of Ubiquity6 (acquired by Discord) and a partner at Kleiner Perkins.

Timestamps:
00:00 Intro
01:25 Are Scaling Laws Dead?
02:55 The Four Bottlenecks Holding AI Back
07:36 Why AI for Science Sucked
09:36 Sovereign Data & the CLOUD Act
13:31 The Investment Thesis Behind Mistral
14:27 The Brutal Early Days of Anthropic
20:52 Public Benefit Corporations: Mission vs Profit in the Age of AI
23:06 The AMP Grid: Building the Electricity Grid for Compute
25:21 Co-Founding Companies Like Kleiner Used To
35:30 We're in a GPU Wastage Bubble, Not an AI Bubble
37:49 Why Compute Isn't Fungible
42:16 How China Is Winning the AI Race
45:15 Coordinating Defense Against AI Distillation Attacks
49:07 Perfect Competition Is for Losers
01:01:43 Quick-Fire Round

Links:
Subscribe on Spotify: https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466
Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465
Follow Harry Stebbings on X: https://twitter.com/HarryStebbings
Follow Anjney Midha on X: https://twitter.com/AnjneyMidha
Follow 20VC on Instagram: https://www.instagram.com/20vchq
Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok
Visit our Website: https://www.20vc.com
Subscribe to our Newsletter: https://www.thetwentyminutevc.com/contact

Guest: Anjney Midha | Host: Harry Stebbings
Apr 13, 2026 · 1h 15m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Anj Midha on AI bottlenecks, compute grids, security, and investing

  1. Midha argues scaling laws are not dead; returns look saturated only in heavily explored domains like coding, while underexplored domains like materials science still show outsized gains from more compute and better feedback loops.
  2. He frames AI progress around four core bottlenecks—context/feedback, compute, capital, and culture—claiming culture is the most underrated driver because it attracts the talent that produces algorithmic breakthroughs.
  3. He contends “AI for science” underperformed because the necessary physics/chemistry data is scarce online and locked in labs, requiring vertically integrated data-creation loops (e.g., robots + experiments) to generate proprietary training signals.
  4. Midha describes a “GPU wastage bubble,” not an AI bubble: massive stranded compute exists because compute is not fungible across chip types/generations and lacks standards, motivating his “AMP Grid” concept as a coordinating layer akin to an electricity grid.
  5. On geopolitics, he warns China is advancing via full-stack systems co-design plus adversarial distillation, and calls for a coordinated “Iron Dome” for frontier inference to detect and respond to distillation and insider threats across Western labs.

IDEAS WORTH REMEMBERING

5 ideas

Scaling still works; “diminishing returns” is domain-dependent.

Midha says coding evals look saturated because they’re heavily optimized, but domains like materials and superconductors remain wide open—so more compute plus tight experiment-to-training loops can produce rapid gains.

The biggest capability unlock is often a better context/feedback loop, not a new model architecture.

He treats algorithmic innovation as downstream of having the right team and culture; the real edge comes from deploying models in a domain, collecting high-quality feedback, and feeding it back into training (including real-world verification).

AI-for-science fails without new data creation, not better prompting.

He claims frontier science reasoning is bottlenecked by missing physics/chemistry datasets on the open internet, pushing teams toward lab integration (robots, synthesis, measurement) to generate proprietary ground truth.

Sovereign data constraints are creating openings against hyperscalers.

Using the CLOUD Act example, he argues some European defense/logistics/industrial workloads cannot run on US-managed clouds, driving demand for local compute + local models—central to his Mistral thesis.

We’re in an infrastructure utilization crisis, not a capabilities bubble.

Midha says “stranded” clusters exist because FLOPS aren’t interchangeable across GPU generations and configurations; without standards, the market overbuys in the wrong places while innovators still can’t access what they need.

WORDS WORTH SAVING

5 quotes

AI alignment, don't get me wrong, is hard, but not the hardest problem. Human alignment is really the problem right now.

Anjney Midha

There's no saturation in superconductor discovery at all.

Anjney Midha

We are definitely in a GPU wastage bubble, not an AI bubble.

Anjney Midha

Compute is not fungible today.

Anjney Midha

If we don't secure frontier model inference… behind a coordinated Iron Dome, I don't think we have a sustainable shot at staying at the frontier over the next decade.

Anjney Midha

TOPICS

Are scaling laws dead? (domain saturation vs new frontiers)
The four bottlenecks: context, compute, capital, culture
Vertical integration for scientific data and feedback loops
Sovereign data, the CLOUD Act, and Europe's AI infrastructure
Mistral investment thesis: independent full-stack sovereignty
Anthropic's early fundraising rejections and Amazon partnership
GPU wastage bubble and non-fungible compute
AMP Grid: pooling capacity like an electricity grid
Compute standardization and misaligned incentives
China's systems co-design and adversarial distillation
Coordinated inference defense ("Iron Dome")
Venture returning to the incubation/co-founding model

High-quality AI-generated summary created from a speaker-labeled transcript.
