
Box CEO on the AI Adoption Gap | The a16z Show

Erik Torenberg, Steven Sinofsky, and Martin Casado speak to Aaron Levie, CEO at Box, about what happens to enterprise software when agents become the primary users. They discuss why coding agents succeed where other knowledge work agents struggle, what abstraction layers mean for the workforce, and how data access and systems of record must change in an agent-first world.

Timestamps:
0:00 - Intro
0:51 - Building software for agents vs. humans
2:10 - Can non-technical workers actually use AI agents?
14:31 - CFO/CIO pushback: the real fear of agents doing integration
18:39 - Treating agents like employees and why it breaks down
27:35 - Diffusion gap: startups vs. enterprises
42:53 - Wall Street's economics are off by an order of magnitude

Read the full transcript here: https://www.a16z.news/s/podcast

Resources:
Follow Aaron Levie on X: https://twitter.com/levie
Follow Steve Sinofsky on X: https://twitter.com/stevesi
Follow Martin Casado on X: https://twitter.com/martin_casado
Follow Erik Torenberg on X: https://twitter.com/eriktorenberg

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.

Aaron Levie (guest) · Steven Sinofsky (host) · Martin Casado (host)
Apr 8, 2026 · 58m · Watch on YouTube ↗

CHAPTERS

  1. Why AI adoption will be slower than Silicon Valley expects

    The conversation opens with a core thesis: AI capability is moving fast, but real organizational adoption will lag. The hosts frame the rest of the episode around why enterprise constraints, legacy systems, and operational risk slow diffusion even as agent technology improves.

  2. Designing software for a world with 100–1,000× more agents than humans

    Levie argues that if agents massively outnumber people, software must be designed with agent interaction as a first-class interface. They discuss what “agent-ready” software looks like (APIs, CLIs, MCP-like protocols) and how workflows change when agents can both use tools and generate code.
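
One way to picture "agent-ready" software is a callable that ships with a machine-readable schema, so an agent can discover parameters and types without a UI or a documentation page. The sketch below is a minimal, hypothetical illustration in the spirit of the MCP-like protocols mentioned here; the function, the manifest format, and the canned data are all invented for this example and do not reflect any real Box or MCP API.

```python
# Hypothetical sketch: exposing a human-facing operation as an
# agent-callable "tool" with a machine-readable schema. Names and
# data here are illustrative assumptions, not a real API.

def search_documents(query: str, limit: int = 10) -> list[dict]:
    """Search a content store; stubbed with canned data for the sketch."""
    corpus = [{"id": 1, "title": "Q3 contract"}, {"id": 2, "title": "Q3 budget"}]
    return [d for d in corpus if query.lower() in d["title"].lower()][:limit]

# The manifest is what makes the tool "agent-ready": an agent can read
# this structure and call the function correctly without a human UI.
TOOL_MANIFEST = {
    "name": "search_documents",
    "description": "Full-text search over the document store.",
    "parameters": {
        "query": {"type": "string", "required": True},
        "limit": {"type": "integer", "default": 10},
    },
}
```

The design point is that the schema, not the screen, becomes the primary interface once agents outnumber human users.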

  3. The real bottleneck: most workers can’t specify workflows algorithmically (yet)

    Sinofsky challenges the assumption that non-technical users will easily direct agents to automate their jobs. He argues that asking people to formalize work into flowcharts exposes a deep skills gap, implying a need for new abstraction layers and specialized “builders” inside organizations.

  4. From interns to spreadsheets to agents: how abstraction layers collapse over time

    Using a story about early spreadsheet adoption, Sinofsky explains how organizations initially rely on specialists (or “cadres”) before the tooling becomes mainstream. The group maps this to agents: today it takes elite skill to orchestrate many agents, but the complexity will likely collapse into simpler, domain-native tools.

  5. Computer-use agents vs. code-generation agents: which direction is winning?

    Casado argues the trend is shifting away from pure code-generation toward agents that operate software like humans (computer use). Levie counters with a hybrid model—agents should choose dynamically between using existing skills/tools, calling APIs, or writing code when needed.
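
Levie's hybrid model can be caricatured as a routing rule: prefer an existing skill or tool, fall back to a direct API call, and only generate new code when neither exists. The registries and precedence order below are assumptions made for illustration, not a description of any actual agent framework.

```python
# Illustrative sketch of a hybrid agent dispatcher. The registries and
# the routing precedence are invented for this example.

REGISTERED_TOOLS = {"summarize", "translate"}
AVAILABLE_APIS = {"crm.create_lead", "billing.invoice"}

def choose_strategy(task: str) -> str:
    """Prefer an existing tool/skill, then a direct API call, and only
    fall back to writing fresh code when neither is available."""
    if task in REGISTERED_TOOLS:
        return "use_tool"
    if task in AVAILABLE_APIS:
        return "call_api"
    return "generate_code"
```

Under this rule, a routine task like `summarize` never costs a code-generation round trip, while a novel one-off task still gets handled.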

  6. Integration on demand meets enterprise reality: CFO/CIO fear of breaking systems of record

    They dig into why leadership teams push back: the scary part isn’t just agents, it’s letting humans/agents create new integrations that can corrupt or expose systems of record. Read-only and “consumption layer” use cases feel safer, while write operations and cross-system automation raise governance and blast-radius concerns.

  7. Box CLI as a case study: what happens when agents can operate enterprise content at scale

    Levie describes giving coding agents access to Box via a CLI and how powerful—and chaotic—that becomes. They surface practical issues like runaway loops, massive operation volume, coordination conflicts, and the need for new operational controls when many agents concurrently manipulate shared repositories.
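
One minimal example of the "new operational controls" this chapter alludes to is a per-agent operation budget that trips before a runaway loop can hammer a shared repository. The class, thresholds, and behavior below are invented for illustration; the episode does not describe a specific mechanism.

```python
# Hedged sketch of a runaway-loop guard: each agent gets a bounded
# operation budget against a shared repository. Numbers are invented.

class OperationBudget:
    def __init__(self, max_ops: int):
        self.max_ops = max_ops
        self.used = 0

    def charge(self, n: int = 1) -> bool:
        """Return True if the operation may proceed, False once the
        budget is exhausted (the caller should halt or escalate)."""
        if self.used + n > self.max_ops:
            return False
        self.used += n
        return True

budget = OperationBudget(max_ops=3)
results = [budget.charge() for _ in range(5)]  # last two calls are refused
```

A real deployment would also need rate limits per repository and conflict detection across agents, but the budget illustrates the shape of the control: bound the blast radius before the loop runs away.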

  8. Why “treat agents like employees” breaks down: oversight, liability, and prompt-injection risk

    Casado proposes treating agents like distinct humans with their own accounts, phone numbers, and credit cards, leveraging existing RBAC systems. Levie argues the analogy fails in enterprises because agents are extensions of the owner (no privacy, high oversight needs), and prompt injection/social engineering makes confidential context hard to protect.
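
Levie's point that an agent is an extension of its owner suggests a different permission model than a standalone RBAC identity: the agent's effective rights are capped at the intersection of the owner's grants and an explicit task scope. The sketch below is an assumption-laden illustration of that idea; the permission strings and function are invented.

```python
# Sketch of "agent as extension of owner": effective permissions are
# the intersection of what the owner holds and what the delegated task
# needs. All permission names are illustrative.

def effective_permissions(owner_grants: set[str], task_scope: set[str]) -> set[str]:
    """An agent can never exceed its owner, and the task scope narrows
    further (least privilege for a single delegated job)."""
    return owner_grants & task_scope

owner = {"read:contracts", "write:contracts", "read:hr"}
scope = {"read:contracts", "write:contracts", "admin:billing"}
```

Note the asymmetry with the "agent as employee" model: the scope can request `admin:billing`, but the agent never receives it because the owner does not hold it.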

  9. Security déjà vu and standards debates: open-source parallels and the coming clampdown

    Sinofsky compares today’s agent governance issues to early open-source adoption: enthusiasm preceded policy, then norms and controls emerged. They predict enterprises will temporarily “close everything off,” while threat vectors become more sophisticated than typical human insider risk models.

  10. The diffusion gap: startups sprint while enterprises crawl (and vendors feel the squeeze)

    They zoom out to the macro dynamic: startups and individuals will adopt agent workflows faster, creating a widening capability gap with regulated incumbents. This creates tension for SaaS vendors whose monetization and product assumptions were built around human UIs rather than high-volume agent access to data and operations.

  11. Agents will choose better systems: interface matters less than semantics and system quality

    Casado pushes back on the idea of “marketing to agents” via better interfaces; he claims agents are already good at navigating interfaces. What matters is system semantics—durability, cost, parameters, correctness—and agents may force the market toward better underlying systems, shifting power away from traditional sales-driven procurement.

  12. Wall Street is underestimating the opportunity: new economics, new business models

    Sinofsky argues financial models are off by an order of magnitude because they assume a fixed revenue pie and linear growth. They compare AI’s economics to past platform shifts (PCs, cloud, Salesforce) where reduced friction created massively expanded usage and entirely new monetization models.

  13. The coming “compute budget” shock: tokens, usage-based pricing, and the transistor moment

    They close on the near-term pain of token rationing and the strategic question of how much compute to allocate to engineering work. While Levie stresses this is a real CFO/engineering management problem today, Sinofsky predicts it will be transient—capacity, hardware, and algorithmic breakthroughs will drive costs down, creating a “transistor moment.”
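
The compute-budget question Levie raises is ultimately arithmetic: token volume times unit price. The helper below is a back-of-the-envelope sketch; the prices and volumes are invented round numbers, not figures from the episode.

```python
# Back-of-the-envelope token budgeting. All inputs below are invented
# illustrative numbers, not real pricing or figures from the episode.

def monthly_token_cost(tokens_per_task: int, tasks_per_day: int,
                       usd_per_million_tokens: float, days: int = 30) -> float:
    """Cost = total tokens consumed in the period, priced per million."""
    total_tokens = tokens_per_task * tasks_per_day * days
    return total_tokens / 1_000_000 * usd_per_million_tokens

# e.g. 50k tokens per task, 400 tasks/day, $5 per 1M tokens -> $3,000/mo
cost = monthly_token_cost(50_000, 400, 5.0)
```

The "transistor moment" prediction amounts to the `usd_per_million_tokens` term falling fast enough that this line item stops being a CFO-level concern.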
