
Box CEO: Why Big Companies Are Falling Behind on AI | a16z

Steven Sinofsky, board partner at a16z, Aaron Levie, CEO of Box, and Martin Casado, general partner at a16z, discuss the reality of AI inside enterprises. They cover the gap between Silicon Valley and the rest of the world, why most AI initiatives fail in large organizations, and how agents, infrastructure, and workflows are evolving beyond the hype.

Timestamps:
00:00 - Trailer
01:05 - Introductions & The Silicon Valley vs. Enterprise Gap
04:30 - Why Enterprise AI Efforts Keep Failing
09:16 - The Architectural Shift: Treating AI as a User, Not Software
14:38 - The Integration Wall Agents Can't Climb
20:12 - Should Agents Be Treated Like Humans?
24:40 - Salesforce Goes Headless & What It Means for SaaS
39:16 - Scale, Entropy & Why AI Coding Creates as Many Problems as It Solves
47:53 - Will AI Kill Jobs or Create More of Them?

Resources:
Follow Aaron Levie on X: https://twitter.com/levie
Follow Steve Sinofsky on X: https://twitter.com/stevesi
Follow Martin Casado on X: https://twitter.com/martin_casado

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.

Aaron Levie (guest)
Apr 28, 2026 · 58m · Watch on YouTube ↗

CHAPTERS

  1. Trailer tease: enterprise AI is stalling at integration and complexity

    A quick cold open frames two recurring themes: boards demand “more AI” without operational alignment, and even powerful agents will slam into enterprise integration and security realities. The hosts also preview a later debate about whether AI coding actually increases complexity and ongoing work.

  2. Introductions and the Silicon Valley vs. enterprise workflow gap

    The panel introduces themselves and sets up the core divide: AI feels transformative in startup and engineering contexts but diffuses slowly into mainstream enterprise knowledge work. They attribute the gap to differences in tool freedom and technical aptitude, and to the fragmented, legacy-heavy enterprise stack.

  3. Why enterprise AI programs fail: board pressure, consultants, and centralization

    They dissect the oft-quoted statistic that “most enterprise AI projects fail,” arguing it reflects organizational dynamics rather than lack of utility. Individuals use AI successfully, but centralized initiatives driven by boards and consultants often lack operational alignment and clear ownership.

  4. Paralysis from rapid change: architecture bets, vendor churn, and risk of lock-in

    Because AI paradigms and vendor capabilities change rapidly, enterprises hesitate to commit. CIOs and architecture teams fear being burned by picking the wrong approach, but supporting multiple approaches increases complexity and cost.

  5. Architectural shift: treat AI as a user, not embedded software

    Martin describes a major pivot in how products integrate AI: instead of “fusing” AI into the UI, make software more agent-consumable (CLI/API/tooling) and let the agent operate as a user. The group compares this evolution to the messy transitional phases of early cloud computing.

  6. The integration wall agents can’t climb: permissions, exceptions, and sources of truth

    They zoom in on integration as the practical limiter of enterprise agent adoption. Agents face the same barriers humans do—access controls, siloed data, and undocumented tribal knowledge—except agents can’t easily “tap Sally on the shoulder” to resolve exceptions.

  7. Start with “read/learn” agents before “act” agents

    A pragmatic adoption path emerges: deploy agents first to discover, search, and synthesize information, then graduate to execution with approvals. They argue enterprise search across documents may be AI’s first major internal win, enabling safer early value.

  8. Should agents be treated like humans? onboarding, identity, and process reuse

    They explore the idea that LLM agents resemble non-deterministic humans more than deterministic software, so enterprises should reuse human-oriented processes. This leads to concepts like agent identities, email accounts, onboarding, and orientation—while noting agents still lack social/organizational context.

  9. Salesforce "goes headless": implications for SaaS, pricing, and the "SaaSpocalypse"

    Salesforce’s move toward headless/agent-friendly access is framed as a bellwether for enterprise software. The discussion shifts to licensing, identity, and access control: agents must be first-class users, but pricing models will need to evolve.

  10. APIs vs. browser automation: layered reality and the path to agent-native interfaces

    A debate unfolds: will agents primarily use APIs (efficient, controllable) or operate UIs like humans (more universally compatible)? They conclude both will persist—APIs where available, browser/computer-use where anti-scraping, missing APIs, or legacy constraints exist—echoing the idea that layers rarely disappear.

  11. Scale shock: what happens when agents hit SaaS systems 500x harder?

    They raise the operational concern that agent usage could multiply request volume dramatically, overwhelming systems designed for human interaction rates. The counterpoint is that classic distributed systems techniques (caching, rate limits, state management) handle much of this—though not all vendors are prepared.

  12. Scale, entropy, and AI coding: speed gains vs. long-term complexity

    They argue AI-assisted coding increases output but may also increase entropy—more code, more dependencies, and more security/maintenance burden. Big-company constraints (reviews, compliance, risk management) exist to prevent catastrophic failures, so the naive “vibe coding” mindset doesn’t translate.

  13. Box’s pragmatic deployment: 2–3x gains, guardrails, and where AI helps most

    Levie describes Box’s approach: meaningful productivity improvements, but not the “10x” hype, because the process is still constrained by security and review. He emphasizes targeted use cases—like anomaly detection and document intelligence—where AI amplifies humans instead of replacing them.

  14. Will AI kill jobs or create more? complexity creates demand across industries

    They reject the “end of work” narrative, arguing that automation historically increases scope, complexity, and demand for expertise. AI expands software creation beyond tech companies into every industry, creating new engineering and operational roles to build, manage, and govern AI-enabled systems.
