The Twenty Minute VC: The Early Days of Anthropic & How 21 of 22 VCs Rejected It | The Four Bottlenecks in AI | Anj Midha
CHAPTERS
- 0:00 – 1:25
Why scaling laws still work—diminishing returns depend on the domain
Anj argues scaling laws are not “dead,” but some benchmarks (like coding evals) appear saturated, making gains look expensive. In less-explored domains like materials discovery, he claims additional compute still yields outsized improvements when paired with tight experimentation loops.
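For reference, the "saturation" framing maps onto the standard power-law form from the scaling-law literature (this equation is background context, not something quoted in the episode):

```latex
% Canonical compute scaling law (illustrative, from the literature,
% not from the episode):
%   L(C)      loss achievable at compute budget C
%   L_inf     irreducible loss floor for the domain
%   a, alpha  domain-dependent constants
L(C) \approx L_{\infty} + a\,C^{-\alpha}
```

On a saturated benchmark, L(C) already sits near the floor L_inf, so extra compute buys little; in an under-explored domain the reducible term a·C^(-α) still dominates, and the same spend moves the needle.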
- 1:25 – 2:55
The four bottlenecks holding AI back: context/feedback, compute, capital, and culture
Anj lays out a framework for what limits frontier progress today, emphasizing that algorithms are less of a bottleneck than the systems around them. He highlights culture as the meta-bottleneck that enables top talent and flexible research direction.
- 2:55 – 7:36
Why “AI for science” underperformed: missing data and weak real-world feedback loops
He describes benchmarking frontier models on physics/chemistry and finding them surprisingly weak relative to the hype. The core issue is lack of high-quality scientific data on the public internet and difficulty accessing lab/manufacturing datasets, motivating vertically integrated data generation.
- 7:36 – 9:36
Vertically integrated model companies and the “Claudification” question
Harry probes how to predict which vertical AI companies get commoditized by foundation models. Anj reframes the moat question: unique access to context and feedback makes progress visible and can support superior economics, but it is not automatically a permanent moat.
- 9:36 – 13:31
Sovereign data, the CLOUD Act, and why Europe wants local AI infrastructure
Anj explains how legal and geopolitical constraints make certain workloads impossible to run on US-managed clouds. He uses the CLOUD Act to motivate “sovereign” infrastructure and local providers that can serve sensitive enterprise and government workloads.
- 13:31 – 14:27
The investment thesis behind Mistral: full-stack European independence
He describes Mistral as a bet on European sovereignty across the AI stack—power, facilities, compute, and locally trained models—alongside open deployment. The goal is independence at scale rather than relying on US hyperscalers and labs.
- 14:27 – 20:52
The brutal early days of Anthropic: 21 of 22 VCs said no
Anj recounts helping founders translate a scaling-law research hypothesis into a business plan, then facing broad investor skepticism. He describes how many VCs didn’t understand GPT-3 or compute-driven scaling economics, while strategics like Amazon immediately saw the alignment.
- 20:52 – 23:06
Public Benefit Corporations (PBCs): resolving mission vs profit tensions
Anj defends PBC governance as a mechanism to make long-term decisions that aren’t maximally profit-seeking in the short run. He positions AMP as mission-aligned infrastructure + standards advocacy, including offering compute at cost to support frontier progress.
- 23:06 – 25:21
The AMP Grid: building an electricity-grid analog for compute
Anj describes AMP as an “independent system operator” coordinating compute capacity rather than owning datacenters like a cloud provider. The thesis is that pooling and dispatching capacity improves utilization, reduces overprovisioning, and accelerates frontier output—similar to early electricity markets.
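A toy illustration of the utilization claim (my own sketch; the owner count, demand distribution, and all numbers are made up, not AMP's model): when each owner provisions for its own peak, total capacity far exceeds what a pooled operator needs, because independent peaks rarely coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
N_OWNERS = 20      # hypothetical independent compute owners
N_HOURS = 10_000   # simulated hourly demand samples

# Hypothetical bursty demand per owner (GPU-hours per hour).
demand = rng.gamma(shape=2.0, scale=50.0, size=(N_HOURS, N_OWNERS))

# Siloed world: each owner provisions for its own 99th-percentile peak.
siloed = np.percentile(demand, 99, axis=0).sum()

# Pooled world: one "system operator" provisions for the 99th percentile
# of aggregate demand and dispatches jobs to whichever capacity is idle.
pooled = np.percentile(demand.sum(axis=1), 99)

print(f"siloed capacity needed: {siloed:,.0f} GPUs")
print(f"pooled capacity needed: {pooled:,.0f} GPUs")
print(f"overprovisioning avoided: {1 - pooled / siloed:.0%}")
```

This is the same statistical-multiplexing effect that lets an electricity grid build far less generation than the sum of every customer's peak draw.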
- 25:21 – 35:30
Back-to-the-future venture: co-founding and incubation like the early Valley
He argues frontier companies require deep operational partnership, not just check-writing. Drawing on Intel/Genentech/Apple examples, he suggests value accrues to investors who help build companies hands-on—especially where CapEx, infrastructure, and scientific execution dominate.
- 35:30 – 37:49
GPU wastage bubble, not an AI bubble: stranded compute and poor utilization
Anj claims the core problem is infrastructure inefficiency rather than capability hype. Large pockets of compute sit idle due to fragmentation, mismatched needs, and lack of coordination—creating the appearance of a bubble even as capability demand remains real.
- 37:49 – 42:16
Why compute isn’t fungible: chip heterogeneity and missing standards
He explains that, unlike electricity, compute capacity can't easily be swapped across chip types or clusters, even within Nvidia's own lineup. This keeps workloads from moving to where capacity is free, strands assets, and amplifies the boom/bust dynamics typical of pre-standardization infrastructure eras.
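To make "stranded assets" concrete, here is a toy scheduler of my own (chip names and numbers are illustrative, not from the episode): jobs pinned to a specific chip type queue up even while plenty of incompatible hardware sits idle:

```python
# Hypothetical free capacity by chip type, in GPUs.
free = {"A100": 500, "H100": 50, "TPUv5": 300}

# Hypothetical queued jobs: (required chip type, GPUs needed).
# Without portability standards, each job runs only on the
# hardware it was built for.
jobs = [("H100", 40), ("H100", 30), ("H100", 25), ("A100", 100)]

scheduled, stranded = [], []
for chip, need in jobs:
    if free.get(chip, 0) >= need:
        free[chip] -= need
        scheduled.append((chip, need))
    else:
        stranded.append((chip, need))  # waits despite idle capacity elsewhere

print(f"scheduled: {scheduled}")
print(f"queued with no compatible hardware: {stranded}")
print(f"GPUs idle while jobs queue: {sum(free.values())}")
```

In the fungible, electricity-like world the episode contrasts this with, the queued jobs would simply drain against the idle pools.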
- 42:16 – 45:15
China’s systems co-design advantage and distillation as a catch-up engine
Anj argues China is competing via full-stack optimization rather than leading-edge chips alone—co-designing chips, infrastructure, and training to boost efficiency. He highlights adversarial distillation from Western endpoints as a mechanism to accelerate iteration and close the gap.
- 45:15 – 49:07
Coordinating defense: an “Iron Dome” for inference and frontier security
He warns that distillation and insider threats exploit fragmented defenses across Western labs. His proposal is a shared proxy/coordination layer across inference providers so attacks seen by one actor can trigger rapid collective response.
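A minimal sketch of the coordination idea (my own illustration of the concept; the episode proposes the layer, not an implementation, and every name here is hypothetical): each provider reports attack signatures to a shared registry that every member's inference proxy consults:

```python
import time

# Hypothetical shared registry: attack signature -> time reported.
SHARED_BLOCKLIST: dict[str, float] = {}

def report_attack(signature: str) -> None:
    """Any member lab flags a suspected extraction/distillation campaign."""
    SHARED_BLOCKLIST[signature] = time.time()

def allow_request(signature: str, ttl_seconds: float = 86_400) -> bool:
    """Every member's proxy checks the registry before serving, so an
    attack seen by one provider is blocked by all of them."""
    reported = SHARED_BLOCKLIST.get(signature)
    return reported is None or time.time() - reported > ttl_seconds

# One lab spots a bulk-extraction pattern...
report_attack("fingerprint:bulk-query-campaign-17")
# ...and every other lab's proxy now rejects the same actor.
print(allow_request("fingerprint:bulk-query-campaign-17"))  # False
```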
- 49:07 – 1:01:43
Perfect competition is for losers: aiming for “optimal competition” in AI markets
Updating Thiel’s “competition is for losers,” Anj argues the real enemy is perfect competition that commoditizes everyone and wastes scarce resources. He advocates an “optimal competition” structure with a small number of strong players per layer to maintain innovation without monopoly stagnation.
- 1:01:43 – 1:15:18
Quick-fire: LP advice, building to learn, legacy, and personal reflections
In the closing segment, Anj emphasizes that LPs and investors must do the work—read, build, and understand bottlenecks—rather than outsource judgment. The conversation shifts to leadership qualities, health and time, independence as motivation, and what he wants to be remembered for.