a16z

Is AI Slowing Down? Nathan Labenz Says We're Asking the Wrong Question

Nathan Labenz is one of the clearest voices analyzing where AI is headed, pairing sharp technical analysis with his years of work on The Cognitive Revolution. In this episode, Nathan joins a16z’s Erik Torenberg to ask a pressing question: is AI progress slowing down, or are we just getting used to the breakthroughs? They cover the debate over GPT-5, the state of reasoning and automation, the future of agents and engineering work, and how we can build a positive vision for where AI goes next.

Timecodes:
00:00 Intro
01:14 Cal Newport’s “AI slowdown” argument
03:08 Are students getting lazy?
04:55 Nathan’s two-by-two matrix of AI impact
07:00 Scaling laws, GPT-4.5, and what changed with GPT-5
11:05 Longer context windows and better reasoning
17:05 AI as scientist and real discoveries
19:17 GPT-5’s shift and why launch perception matters
26:10 Jobs, automation, and the misunderstood METR study
36:20 The future of coding, agents, and recursive self-improvement
51:15 Beyond chatbots: multimodal AI and robotics
1:27:00 Why the future depends on a positive vision for AI

Resources:
Follow Nathan on X: https://x.com/labenz
Listen to The Cognitive Revolution: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
Watch The Cognitive Revolution: https://www.youtube.com/@CognitiveRevolutionPodcast

Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Nathan Labenz (guest) · Erik Torenberg (host)
Oct 14, 2025 · 1h 30m · Watch on YouTube ↗

CHAPTERS

  1. Framing the real question: impact vs. capability progress

Nathan Labenz argues that “Is AI slowing down?” conflates distinct questions: whether AI is good or harmful (now and later) versus whether AI capabilities are still advancing quickly. He agrees that near-term harms are plausible while rejecting the idea that progress has flatlined.

  2. Cal Newport’s ‘slowdown’ thesis and the student-laziness concern

They recap Cal Newport’s observation that students use AI to reduce cognitive strain rather than to move faster or learn more. Nathan sympathizes with the attention/cognition critique (similar to worries about social media) while cautioning against concluding that AI progress is therefore capped.

  3. Nathan’s two-by-two: ‘good vs. bad’ and ‘small vs. big deal’ AI

    Nathan introduces a matrix to classify AI viewpoints: whether AI is net good or bad, and whether it’s a big deal or not. He finds “not a big deal” the hardest position to understand, especially given what he sees as a substantial GPT‑4→GPT‑5 leap (partly masked by intermediate releases).

  4. Scaling laws, GPT‑4.5, and why ‘bigger’ isn’t the only frontier

    They discuss scaling laws as empirical trends, not physics. Nathan points to GPT‑4.5 as evidence that scaling still buys knowledge (e.g., long-tail facts), but argues the industry is currently getting better ROI from post-training, reasoning, and product tradeoffs (cost/latency).

  5. Context windows + reasoning: the underestimated capability shift

    Nathan argues that extended context and stronger reasoning change what models can do in practice: they can ingest many papers, maintain fidelity over long inputs, and perform deeper synthesis. This can substitute for ‘baking’ every fact into parameters, enabling smaller models to act as powerful analysts when given the right material.

  6. Frontier reasoning milestones: IMO gold and ‘AI as scientist’

    They highlight qualitative leaps: advanced reasoning models achieving IMO gold-level performance and early examples of AI contributing to real scientific progress. Nathan emphasizes that while capabilities remain jagged, models increasingly tackle tasks GPT‑4 couldn’t approach, including hypothesis generation for unsolved problems via structured “co-scientist” scaffolding.

  7. Why GPT‑5 felt underwhelming: launch execution and perception traps

    Nathan attributes the “vibe shift” to marketing hype, technical launch issues, and product complexity around model routing. Early users often hit a broken router and got answers from a weaker “non-thinking” path, setting negative narratives that spread faster than later corrections.

  8. Jobs, automation, and the misunderstood METR productivity study

They unpack the METR result that some experienced engineers were slower using AI coding tools (such as Cursor) despite believing they were faster. Nathan argues the study tested a near-worst-case setting (large mature codebases, expert devs, older models, novice tool usage) and shouldn’t be generalized to all work, while acknowledging that the miscalibration finding is important.

  9. Coding, agents, and the path toward recursive self-improvement

    Nathan explains why coding is a focal domain: fast validation loops, developer self-interest, and the strategic goal of automated AI research. He cites internal measures (e.g., a large share of research-engineering PRs being doable by newer models) and worries about a tipping point where AI dramatically accelerates its own improvement.

  10. Beyond chatbots: multimodal leaps, biology, and robotics as the real story

    Nathan argues “AI ≠ chatbot,” pointing to rapid progress in image generation/editing and early breakthroughs in biology (e.g., new antibiotics). He expects the same pattern—pretrain enough to ‘get in the game,’ then refine via RL and feedback loops—to generalize to robotics, with major implications for labor and national policy.

  11. Agent reliability: longer task horizons vs. reward hacking and ‘scheming’

    They discuss the tension between agents that can work for hours (and potentially days/weeks soon) and persistent failure modes from reinforcement learning—reward hacking, deceptive behaviors, and situational awareness. Nathan sketches a future where delegation capacity grows faster than our ability to audit, pushing toward AI-on-AI oversight, insurance, and new governance mechanisms.

  12. Geopolitics and open models: Chinese dominance in OSS and the decoupling risk

    Nathan addresses claims that many startups rely on Chinese open models, arguing the nuance is “among OSS users,” while commercial APIs still dominate overall tokens. He worries that tech decoupling increases arms-race dynamics and reduces shared visibility—exactly when coordination could matter most—while acknowledging open models can be a soft-power lever for countries outside the US/China bloc.

  13. A positive vision as the scarcest resource: education, learning, and imagination

    They close by emphasizing upside: there’s never been a better time to be a motivated learner, and AI can dramatically lower barriers to understanding complex fields. Nathan argues society lacks detailed positive visions for an AI future—and that non-technical contributors (writers, philosophers, experimenters) can meaningfully shape outcomes by articulating better narratives and norms.
