All-In Podcast
How the Pentagon cut Anthropic over terms-of-service clauses
Anthropic's contract terms barred certain defense uses. The Pentagon called the company a supply chain risk; Emil Michael frames LUCAS drones as the procurement model that replaces it.
At a glance
WHAT IT’S REALLY ABOUT
Iran war update and Pentagon clash with Anthropic over AI
- The hosts frame an “emergency pod” around the escalating U.S.-Israel operation against Iran, asking Under Secretary of Defense for Research & Engineering Emil Michael about objectives, duration, and the likelihood of boots on the ground.
- Michael argues the campaign is intended to be “weeks not months,” focused on degrading Iran’s ability to fund and arm proxies (Hezbollah, Hamas, etc.) and to field drones, missiles, and nuclear capabilities—while rejecting an Afghanistan/Iraq-style occupation.
- The conversation pivots to how modern war is changing—especially drone swarms, autonomy, AI at the edge, and missile defense (e.g., Golden Dome concepts)—and how rules of engagement and operational experience affect outcomes.
- A major segment covers the Pentagon’s break with Anthropic: Michael says Anthropic’s contract terms and governance posture created operational risk, prompting cancellation and a formal “supply chain risk” designation, while the panel debates broader implications for AI vendor power, deplatforming dynamics, and multi-model redundancy.
IDEAS WORTH REMEMBERING
5 ideas
The administration’s stated Iran goal is capability degradation, not occupation.
Michael describes a “weeks not months” effort aimed at disarming Iran’s capacity to supply terror proxies and field drones/ballistic missiles, while dismissing a protracted Iraq/Afghanistan-style ground campaign.
Operational success is attributed to experience, planning, and relaxed rules of engagement.
Michael claims post–Global War on Terror leaders learned hard lessons, and that prior restrictive ROE hindered effectiveness; he argues updated ROE plus long-planned contingency “war games” improve speed and outcomes.
Drone warfare is now the central battlefield innovation—and autonomy is the next step.
The group cites Ukraine as evidence that drones drive a majority of casualties; Michael expects “drone swarms” with AI-enabled discrimination, decoys, and coordination (heterogeneous autonomy).
AI’s acceptability depends on scenario risk—missile defense is the strongest near-term case.
Michael argues humans can’t react fast enough for hypersonic threats (e.g., ~90 seconds), making AI-assisted space/air defense comparatively low-civilian-risk versus autonomous action in dense populations.
The Strait of Hormuz insurance response is as consequential as the fighting.
Friedberg explains maritime insurance markets can freeze shipping; the U.S. offering political/war-risk insurance is framed as stabilizing energy supply, limiting inflation, and potentially seeding a U.S.-based maritime underwriting industry.
WORDS WORTH SAVING
5 quotes
“There’s no scenario where we have some protracted boots on the ground, Afghanistan, Iraq two-like scenario.”
— Emil Michael
“Drone-on-drone warfare, robot-on-robot warfare, those things are the future for sure.”
— Emil Michael
“Chinese hypersonic missile comes up, you’ve got ninety seconds… and a human can’t… have the reaction time.”
— Emil Michael
“All lawful use seems like a good thing… It’s our province to decide how we fight and win wars, so long as they’re lawful.”
— Emil Michael
“Just call me if you need another exception.”
— Emil Michael (describing Anthropic’s proposed approach)
High quality AI-generated summary created from speaker-labeled transcript.