CHAPTERS
Why “Little Tech” needs its own voice in policy
The hosts set up the core problem: policy conversations in DC and state capitals have long been dominated by large, institutional players, leaving startups underrepresented. McCune and Perault explain why startup realities—limited staff, capital, and compliance capacity—require a distinct advocacy lens.
The Little Tech Agenda: origins, pillars, and the 10-year time horizon
McCune outlines how the agenda emerged inside the firm and how it maps to a16z’s “verticalized” policy work across AI, crypto, bio/health, fintech, and defense. Perault adds that venture’s long time horizon demands a stable, trusted ecosystem—meaning smart governance, not a regulatory vacuum.
Clarifying the mantra: “Regulate use, not development”
Perault explains the most-misunderstood element of their framework: focusing regulation on harmful uses rather than restricting model development itself. They argue existing bodies of law—consumer protection, civil rights, and criminal law—provide a robust enforcement baseline, yet the slogan is often misread as "no regulation."
How AI policy debates accelerated: Senate hearings, "doom" narratives, and the Biden EO
McCune traces a key inflection point to fall 2023 Senate hearings, where major CEOs both signaled a desire for regulation and discussed existential-risk scenarios. He argues that this testimony, combined with broader safety-oriented advocacy, spooked policymakers and helped catalyze rapid regulatory pushes through executive action and state bills.
Effective altruism, think-tank influence, and interest-group dynamics
McCune argues that "safetyism" and doomer narratives benefited from a decade-long head start, with well-funded networks shaping think tanks and nonprofits. The hosts describe their role as playing catch-up to rebalance the debate toward innovation, competition, and practical governance.
Big Tech at the table: voluntary commitments and "small number of frontier players" assumptions
Perault points to the White House voluntary commitments negotiated by a small set of large AI companies as evidence that Little Tech wasn't represented. Both warn that a recurring policy assumption—"only 3–7 companies will build frontier AI"—normalizes rules that entrench incumbents and shrink the competitive frontier.
Licensing regimes and open-source restrictions: "nuclear-style" regulation as an anti-competitive trap
They revisit proposals to require licenses to build frontier models and to constrain open source, calling these unprecedented for software and likely to entrench monopolies. McCune uses nuclear regulation as a cautionary tale: well-intended oversight can drastically reduce new entrants and output over decades.
National security, China, and the "lock it down" paradox
McCune emphasizes that overly restrictive domestic regulation can cause the US to lose strategic advantage to China. They argue that attempts to block diffusion—especially of open source—can backfire, pushing global markets toward Chinese alternatives while weakening US soft power.
Parallels to crypto: policy debates as proxies for older unresolved fights
McCune draws an analogy to crypto regulation, where battles over tokens sometimes become a proxy for broader securities-law reform. He suggests AI policymaking similarly attracts attempts to re-litigate older internet governance issues (privacy, content moderation, algorithmic bias) by funneling them through AI frameworks.
State laws, impact assessments, and the Colorado example
Perault critiques state “high-risk/low-risk” AI frameworks (like Colorado’s) as paperwork-heavy and hard for startups without counsel to navigate. He contrasts this with a more direct approach: explicitly prohibiting the use of AI to violate anti-discrimination law, which targets harm without broad administrative regimes.
Why ex-ante control sounds appealing but often fails in practice
They address fears about future catastrophic harms (bioterrorism, cybercrime) and whether regulating use is enough. Perault argues the legal system generally punishes unlawful conduct rather than preemptively policing predicted wrongdoing, and that preemptive surveillance-style regulation is both intrusive and unreliable.
Where things stand now: AI Action Plan, open source support, and workforce measures
Perault and McCune say the policy climate has improved relative to two years ago, with greater support for startup-friendly approaches and open source. They highlight under-discussed elements of the AI Action Plan, especially worker retraining and labor-market monitoring to respond to potential disruptions.
The moratorium fight: perception, coalitions, and political organizing
McCune explains the controversy around the proposed moratorium/preemption concept, arguing it was widely mischaracterized as wiping out all state law for 10 years. He attributes its failure to that perception, strong opposition networks, partisan legislative dynamics, and insufficient pro-preemption coalition organizing—prompting renewed efforts to coordinate advocacy and political strategy.
State vs. federal roles: Dormant Commerce Clause, preemption, and a workable standard
They outline a constitutional division of labor: federal leadership on interstate commerce and AI development standards, states enforcing harmful conduct within borders. Perault adds that some state proposals may run into Dormant Commerce Clause issues by imposing heavy out-of-state burdens with limited local benefit—another reason to focus states on harmful-use enforcement while pursuing federal standards for development.
Next 6–12 months: federal standard, AI literacy, infrastructure, and shifting industry alliances
They predict near-term focus on a federal framework that prevents a patchwork while preserving state enforcement of harms. They also flag workforce training, AI literacy, and infrastructure (energy/data centers), plus potential government resources to lower startup barriers (compute/data access). Finally, they anticipate periods of both convergence and divergence across Big/Medium/Little Tech—and emphasize that their positions follow principles, not party or incumbents.