
The Early Days of Anthropic & How 21 of 22 VCs Rejected It | The Four Bottlenecks in AI | Anj Midha
Anjney Midha (guest), Harry Stebbings (host)
In this episode of The Twenty Minute VC, Harry Stebbings talks with Anjney Midha about the early days of Anthropic, how 21 of 22 VCs rejected it, and the four bottlenecks in AI.
Anj Midha on AI bottlenecks, compute grids, security, and investing
Midha argues scaling laws are not dead; returns look saturated only in heavily explored domains like coding, while underexplored domains like materials science still show outsized gains from more compute and better feedback loops.
He frames AI progress around four core bottlenecks—context/feedback, compute, capital, and culture—claiming culture is the most underrated driver because it attracts the talent that produces algorithmic breakthroughs.
He contends “AI for science” underperformed because the necessary physics/chemistry data is scarce online and locked in labs, requiring vertically integrated data-creation loops (e.g., robots + experiments) to generate proprietary training signals.
Midha describes a “GPU wastage bubble,” not an AI bubble: massive stranded compute exists because compute is not fungible across chip types/generations and lacks standards, motivating his “AMP Grid” concept as a coordinating layer akin to an electricity grid.
On geopolitics, he warns China is advancing via full-stack systems co-design plus adversarial distillation, and calls for a coordinated “Iron Dome” for frontier inference to detect and respond to distillation and insider threats across Western labs.
Key Takeaways
Scaling still works; “diminishing returns” is domain-dependent.
Midha says coding evals look saturated because they’re heavily optimized, but domains like materials and superconductors remain wide open—so more compute plus tight experiment-to-training loops can produce rapid gains.
The biggest capability unlock is often a better context/feedback loop, not a new model architecture.
He treats algorithmic innovation as downstream of having the right team and culture; the real edge comes from deploying models in a domain, collecting high-quality feedback, and feeding it back into training (including real-world verification).
AI-for-science fails without new data creation, not better prompting.
He claims frontier science reasoning is bottlenecked by missing physics/chemistry datasets on the open internet, pushing teams toward lab integration (robots, synthesis, measurement) to generate proprietary ground truth.
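As a rough sketch of the closed loop he describes (the model predicts, robots synthesize, instruments measure, and the verified results feed the next training run), the Python below is illustrative only; every function is a hypothetical stub, not a real lab or training API.

```python
import random

# Hedged, illustrative sketch of the experiment-to-training loop described above.
# Every function here is a hypothetical stub, not a real lab or training API.

def propose_candidates(model_state, n):
    """Stand-in for an LLM proposing candidate materials and predicted properties."""
    return [{"formula": f"candidate-{model_state}-{i}",
             "predicted_tc_kelvin": random.uniform(1, 300)} for i in range(n)]

def synthesize_and_measure(candidate):
    """Stand-in for robotic synthesis plus instrument validation (e.g. X-ray diffraction)."""
    measured = candidate["predicted_tc_kelvin"] * random.uniform(0.5, 1.1)
    return {**candidate, "measured_tc_kelvin": measured}

def fine_tune(model_state, verified_results):
    """Stand-in for feeding verified ground truth back into the next training run."""
    return model_state + 1  # pretend the model improved

def discovery_loop(iterations=3, batch=4):
    model_state, dataset = 0, []
    for _ in range(iterations):
        candidates = propose_candidates(model_state, batch)        # 1. predict
        results = [synthesize_and_measure(c) for c in candidates]  # 2. verify in the lab
        dataset.extend(results)                                     # 3. proprietary ground truth
        model_state = fine_tune(model_state, dataset)               # 4. retrain on it
    return model_state, dataset

if __name__ == "__main__":
    final_state, data = discovery_loop()
    print(f"training iterations: {final_state}, verified datapoints: {len(data)}")
```

The point of the sketch is that the training signal is created by the loop itself rather than scraped from the open internet, which is why he argues the work has to be vertically integrated with a physical lab.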
Sovereign data constraints are creating openings against hyperscalers.
Citing the CLOUD Act, he argues that some European defense, logistics, and industrial workloads cannot run on US-managed clouds, driving demand for local compute plus local models; this is central to his Mistral thesis.
We’re in an infrastructure utilization crisis, not a capabilities bubble.
Midha says “stranded” clusters exist because FLOPS aren’t interchangeable across GPU generations and configurations; without standards, the market overbuys in the wrong places while innovators still can’t access what they need.
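A toy way to see why raw FLOPS alone don't make compute fungible: in the sketch below (all fields and numbers are hypothetical), a node with plenty of nominal throughput still can't host a given job because memory and interconnect don't match, which is exactly how capacity ends up stranded.

```python
from dataclasses import dataclass

# Illustrative only: fields and numbers are hypothetical, chosen to show why
# raw FLOPS alone is not enough to match workloads to heterogeneous clusters.

@dataclass
class GPUNode:
    flops_tf: float                 # peak throughput, TFLOPS
    memory_gb: int                  # on-device memory
    interconnect: str               # e.g. "nvlink", "pcie", "ethernet"

@dataclass
class Workload:
    flops_tf: float                 # throughput the job nominally needs
    min_memory_gb: int              # model/activation footprint per device
    needs_fast_interconnect: bool   # e.g. heavy all-reduce traffic

def can_run(job: Workload, node: GPUNode) -> bool:
    """A job that 'fits' on paper by FLOPS can still be stranded
    by memory or interconnect mismatches."""
    if node.memory_gb < job.min_memory_gb:
        return False
    if job.needs_fast_interconnect and node.interconnect != "nvlink":
        return False
    return node.flops_tf >= job.flops_tf

# An older node with ample raw throughput still sits idle for this job:
old_node = GPUNode(flops_tf=120, memory_gb=16, interconnect="pcie")
job = Workload(flops_tf=100, min_memory_gb=40, needs_fast_interconnect=True)
print(can_run(job, old_node))  # False -> "stranded" capacity
```

A standards effort, in this framing, would amount to agreeing on which of these descriptor fields every buyer and seller of compute must publish so a market can actually clear.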
Standardization is blocked by human misalignment more than technical limits.
He argues procurement and regulation still treat statistical AI systems too much like deterministic software, preventing an RFC-like process (analogous to TCP/IP or AC/DC) that would enable interoperable, secure compute markets.
Frontier advantage will increasingly depend on coordinated security at inference-time.
He describes widespread distillation attempts and insider threats, and proposes a shared proxy/telemetry layer so labs can detect attacks collectively—an “Iron Dome” for inference across the Western ecosystem.
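A minimal sketch of what a shared inference-proxy telemetry layer could look like, assuming nothing about how any lab actually does this; the per-client counter and threshold below are hypothetical placeholders for much richer detection signals.

```python
from collections import defaultdict

# Illustrative sketch: a naive per-client counter standing in for the kind of
# shared inference-proxy telemetry described above. Real detection would be far
# more sophisticated; the thresholds and signals here are hypothetical.

class InferenceProxy:
    def __init__(self, volume_threshold: int = 100_000):
        self.volume_threshold = volume_threshold
        self.queries_per_client = defaultdict(int)
        self.flagged = set()

    def record(self, client_id: str, prompt: str) -> None:
        """Log a request; flag clients whose query volume looks like
        systematic output harvesting rather than normal product use."""
        self.queries_per_client[client_id] += 1
        if self.queries_per_client[client_id] > self.volume_threshold:
            self.flagged.add(client_id)

    def shared_telemetry(self) -> dict:
        """Aggregate signals that could be pooled across labs, so an attacker
        splitting traffic among providers is still visible in combination."""
        return {"flagged_clients": sorted(self.flagged),
                "total_clients": len(self.queries_per_client)}

# Usage: a low threshold for demonstration only.
proxy = InferenceProxy(volume_threshold=3)
for i in range(5):
    proxy.record("client-a", f"prompt {i}")
proxy.record("client-b", "normal prompt")
print(proxy.shared_telemetry())  # client-a gets flagged
```

Pooling the flagged-client signal across providers is the point of the "Iron Dome" framing: an attacker who splits distillation traffic among several labs only becomes visible in the aggregate view.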
Notable Quotes
“AI alignment, don't get me wrong, is hard, but not the hardest problem. Human alignment is really the problem right now.”
— Anjney Midha
“There's no saturation in superconductor discovery at all.”
— Anjney Midha
“We are definitely in a GPU wastage bubble, not an AI bubble.”
— Anjney Midha
“Compute is not fungible today.”
— Anjney Midha
“If we don't secure frontier model inference… behind a coordinated Iron Dome, I don't think we have a sustainable shot at staying at the frontier over the next decade.”
— Anjney Midha
Questions Answered in This Episode
On the four bottlenecks: how would you rank context vs compute vs capital vs culture for (a) a frontier lab, (b) an applied enterprise AI company, and (c) an AI-for-science startup?
What concrete technical standards would make compute meaningfully “fungible”—is it workload portability, memory/network baselines, compiler targets, scheduling APIs, or something else?
Your ‘AMP Grid’ analogy implies an independent system operator—what governance model prevents capture by hyperscalers or the largest labs?
In the ‘AI for science sucked’ section, what specific benchmarks or tasks failed most, and what did you observe as the biggest missing primitives (units, error bars, lab protocols, simulators, etc.)?
For Europe’s sovereignty thesis: what’s the minimum viable ‘sovereign stack’ (land/power/shell, cloud layer, model layer, security) that actually satisfies regulators and defense customers?
Transcript Preview
AI alignment, don't get me wrong, is hard, but not the hardest problem. Human alignment is really the problem right now.
Our guest today is the most prominent AI investor in the ecosystem, Anj Midha. Why is he the most prominent? Three reasons. Number one, he's one of the founding investors of Anthropic. Number two, he led AI investments for Andreessen Horowitz, where he made investments in Black Forest Labs, Mistral, Sesame, among others. And then third and finally, today he's the founder of AMP, where he provides compute and invests in the world's best AI companies.
If we don't secure frontier model inference or what I call state-of-the-art inference behind a coordinated Iron Dome, I don't think we have a sustainable shot at staying at the frontier over the next decade. There's no saturation in superconductor discovery at all.
Ready to go? [rock music] Anj, I am so looking forward to this, dude. I have stalked the shit out of you for the last three or four days. I spoke to Bing Gordon. I had a catch-up with Bing before this. Very nice to speak to him. Uh, so thank you so much for joining me today, dude.
Thank, thanks for having me. It's, it... Too long. It only took us what? Eight years, nine years? I forget when it was. [chuckles]
I was twelve when we last did it. [laughing]
[laughing] Well, twelve in startup land is twenty-five, right? So-
Dude, I'm confused. Help me out. I had Demis on the show the other day from DeepMind. He was like, "Yeah, I'm not sure if we're seeing scaling laws, but we are definitely seeing slightly diminishing, like, returns in performance as we scale." So potentially, are we getting to a stage where increased compute is no longer leading to increased performance?
Oh, no, absolutely not. [chuckles] No, that's not true at all. In certain domains that are well explored, like coding, for example, yes, there's an increasing amount of compute required to get an incremental gain on some eval that's super saturated. But if you said, "Anj, what about materials science?" You know, I'm sitting here at the Periodic Labs office. My latest incubation is called Periodic Labs. I spend three days a week here in Menlo Park. We have a thirty-thousand-square-foot facility where we have LLMs that predict new materials, new superconductors. We then have robots synthesize those new materials, and then we have physical machines like X-ray diffraction machines validate whether those materials have the properties that were predicted by the LLMs, and then we pipe that verification data back into our training run, however many times we need. And I can tell you, throwing more compute at the problem is probably having super-exponential gains right now per iteration. So it depends on which domain you're talking about, which modality. There's no saturation in superconductor discovery, for example, at all. The bitter lesson is alive and well.