
Former Microsoft Executive Explains Where We Are in the AI Cycle w/ Anish Acharya & Steven Sinofsky

In this episode of ‘This Week in Consumer’, a16z General Partners Anish Acharya and Erik Torenberg are joined by Steven Sinofsky - Board Partner at a16z and former President of Microsoft’s Windows division - for a deep dive on how today’s AI moment mirrors (and diverges from) past computing transitions. They explore whether we’re at the “Windows 3.1” stage of AI or still in the earliest innings, why consumer adoption is outpacing developer readiness, and how frameworks like partial autonomy, jagged intelligence, and “vibe coding” are shaping what gets built next. They also dig into where the real bottlenecks lie: not in the tech, but in how companies, products, and people work.

Timecodes:
00:00 Introduction
00:35 Discussing the Andrej Karpathy Talk
02:17 The Early Stages of AI and Tools
03:23 Vibe Writing and Vibe Coding
07:33 Automation and Human Judgment
15:13 The Future of Product Management
15:55 Platform Transitions and Vibe Coding
17:54 The Evolution of Programming Languages
23:07 AI in Creative Writing
28:06 Google's Position in the Tech Industry

Resources:
Find Anish on X: https://x.com/illscience
Find Steven on X: https://x.com/stevesi

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Steven Sinofsky (guest) · Anish Acharya (host) · Erik Torenberg (host)
Jun 27, 2025 · 30m · Watch on YouTube

CHAPTERS

  1. How early are we? The “64K IBM PC” moment for modern AI

    Steven frames today’s AI as analogous to the earliest microcomputer era: exciting, but constrained and still poorly understood. The chapter sets expectations that many headline use cases are premature because the tech still fails at basic reliability tasks.

  2. Lessons from Karpathy: new tools require relearning the human–machine relationship

    The group reacts to Andrej Karpathy’s “jagged intelligence” framing and the idea that LLMs invert our typical relationship with computing tools. Productivity will come from learning how to work with these systems’ strengths and sharp edges, not by treating them like traditional software.

  3. Why early platforms obsess over tooling—and why devs lead adoption

    Steven argues that early platform transitions are defined by tool-building, and developers are uniquely motivated to force tools into usefulness. Coding becomes a natural early win because the platform’s earliest “customers” are developers themselves.

  4. Vibe writing vs. vibe coding: where autonomy works today—and where it doesn’t

    They contrast “vibe writing” and “vibe coding,” debating how much autonomy is realistic. Anish argues writing can reach full autonomy sooner, while Steven stresses that real-world stakes (grades, salary, legal risk) demand verification even when output looks good.

  5. Agents and automation: the slider between no autonomy and full autonomy

    Building on Karpathy’s “Iron Man” slider analogy, they discuss the reality that agents won’t arrive as a single breakthrough year. Steven argues we’re in a “decade of agents,” because automation repeatedly hits hard limits around edge cases, trust, and verification.

  6. Where automation lands first: high-friction, low-judgment tasks (and why incentives matter)

    Anish proposes a 2x2: automation arrives first where tasks are high in friction but require little judgment (e.g., shopping for a refinance). Steven adds an economic layer: markets need differentiation and incentives, so a pure “headless API” future doesn’t align with how businesses attract and monetize customers.

  7. Correctness vs. judgment: why some domains go fully autonomous and others won’t

    Steven distinguishes between domains with formal correctness (chess/Go) and those dominated by uncertainty and human judgment (medicine, taxes, operations). In many real jobs, the work is largely exception handling, which resists clean automation even as tools improve.

  8. The future of product management: ambiguity as the job that doesn’t go away

    They address recurring claims that AI will eliminate product management. Anish argues PM work is fundamentally about resolving ambiguity inside complex adaptive systems, so the role may evolve but won’t disappear because organizations will still need judgment and alignment.

  9. Vibe coding for clout—and the hidden reality that prompts become programming

    Steven critiques social-media-driven demos and “it worked instantly” narratives, arguing they often mask long debugging sessions and fragile outputs. He predicts text-to-app will trend toward structured prompting—effectively creating new programming languages and abstractions.

  10. Programming language hype cycles: from OO and low-code to today’s AI claims

    They compare current AI coding hype to earlier waves (object-oriented, Smalltalk, C++, Delphi/PowerBuilder, low-code). Steven’s thesis: transitions routinely overpromise “making programming easy,” delivering mostly constant-factor gains rather than order-of-magnitude improvements.

  11. Why writing may be the first true order-of-magnitude shift (even with new error types)

    Steven argues AI changes the economics of writing more dramatically than past tooling changes did, even if accuracy isn’t perfect. They liken it to autocorrect: it removes common friction but introduces new classes of mistakes, and is still net-positive for many business contexts.

  12. AI in creative writing: bestsellers, ‘slop,’ and raising the ceiling for artists

    They predict AI-assisted novels will succeed commercially, likely via new creators and pseudonyms, with disclosure arriving later. Anish highlights the challenge of pushing models to cultural edges rather than averages, and both note that most content is middling—where AI can have outsized impact.

  13. Google’s position after I/O: ‘demise’ is overstated, but influence can shift

    They reject the idea that Google is “dying,” emphasizing that big incumbents can mount shock-and-awe launches across the stack. The real question is whether Google can change product culture and go-to-market behavior to fit the new platform era, not whether it can demo features.
