
Dwarkesh Patel and Noah Smith on AGI and the Economy

In this episode, Erik Torenberg is joined in the studio by @DwarkeshPatel and Noah Smith to explore one of the biggest questions in tech: what exactly is artificial general intelligence (AGI), and how close are we to achieving it?

They break down:
- Competing definitions of AGI: economic vs. cognitive vs. “godlike”
- Why reasoning alone isn’t enough, and what capabilities models still lack
- The debate over substitution vs. complementarity between AI and human labor
- What an AI-saturated economy might look like, from growth projections to UBI, sovereign wealth funds, and galaxy-colonizing robots
- How AGI could reshape global power, geopolitics, and the future of work

Along the way, they tackle failed predictions, surprising AI limitations, and the philosophical and economic consequences of building machines that think, and perhaps one day act, like us.

Timecodes:
0:00 Intro
0:33 Defining AGI and General Intelligence
2:38 Human and AI Capabilities Compared
7:00 AI Replacing Jobs and Shifting Employment
15:00 Economic Growth Trajectories After AGI
17:17 Consumer Demand in an AI-Driven Economy
31:14 Redistribution, UBI, and the Future of Income
31:58 Human Roles and the Evolving Meaning of Work
41:21 Technology, Society, and the Human Future
45:43 AGI Timelines and Forecasting Horizons
54:04 The Challenge of Predicting AI's Path
57:37 Nationalization and the Global AI Race
1:07:10 Brand and Network Effects in AI Dominance
1:09:31 Final Thoughts and Preparation for What’s Next

Resources:
Find Dwarkesh on X: https://x.com/dwarkesh_sp
Find Dwarkesh on YouTube: https://www.youtube.com/c/DwarkeshPatel
Subscribe to Dwarkesh’s Substack: https://www.dwarkesh.com/
Find Noah on X: https://x.com/noahpinion
Subscribe to Noah’s Substack: https://www.noahpinion.blog/

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see a16z.com/disclosures.

Erik Torenberg (host) · Dwarkesh Patel (guest)
Aug 3, 2025 · 1h 10m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

AGI’s economic meaning, labor disruption, growth prospects, and governance dilemmas debated

  1. AGI is framed primarily as an economic threshold—AI that can cheaply and reliably automate most jobs, especially white-collar work—rather than as a philosophical milestone about “thinking like humans.”
  2. They argue today’s models show impressive reasoning yet still fail at key work requirements like persistent context, long-horizon execution, and on-the-job learning, which may explain why economic impact lags headline capabilities.
  3. The discussion contrasts substitution vs. complementarity: Noah stresses the historical precedent that technology often complements labor, while Dwarkesh emphasizes that AI’s uniquely low marginal “subsistence” cost could still drive human wages toward (or below) subsistence absent redistribution.
  4. They debate post-AGI growth and “who buys the output,” raising the possibility that growth could come from investment-heavy frontiers (e.g., space/large projects) even if mass labor income collapses, but noting GDP concepts may become strained.
  5. Policy and power questions dominate the back half: redistribution mechanisms (UBI vs in-kind vs sovereign wealth funds), the fragility of meaning-from-work narratives, compute/resource constraints shaping timelines, and geopolitical risk if AI systems exploit human conflict dynamics.

IDEAS WORTH REMEMBERING

5 ideas

They define AGI by economic substitutability, not inner cognition.

Dwarkesh’s operational test is whether AI can do ~98% of jobs (or ~95% of white-collar work) as well, as fast, and as cheaply as humans, because that’s when automation meaningfully hits GDP and labor markets.

Reasoning is not the main bottleneck to economic impact.

Even with strong “reasoning,” models still struggle with the glue-work of real jobs—building context, learning user preferences over months, and executing multi-step workflows reliably—so the capability-to-value mapping remains weak.

Persistent context and on-the-job learning are portrayed as the missing ‘employee’ feature.

Dwarkesh argues humans become valuable through training, memory, and self-correction; current LLM sessions “expunge” context, and prompt/RLHF tweaks don’t replicate workplace learning dynamics.

Demand-side inertia may be smaller than people expect if AI becomes clearly better.

Using Waymo as an analogy, Dwarkesh suggests that once AI systems are genuinely superior and convenient, many consumers will rapidly prefer them (e.g., medical triage/chat) despite initial hesitation about ‘no human involved.’

Comparative advantage won’t automatically protect wages in the long run.

Noah notes humans could keep high-paying niches if AI faces aggregate constraints; Dwarkesh counters that scalable compute/robot supply can expand until AI wages fall far below human subsistence costs, making redistribution—not niche jobs—the real crux.

WORDS WORTH SAVING

5 quotes

So the ultimate definition is: can it do almost any job, say, 98% of jobs at least, as well, as fast, and as cheaply as a human.

Dwarkesh Patel

The reason humans are so valuable is not just their raw intellect. It's not mainly their raw intellect, although that's important. It's their ability to build up context. It's to interrogate their own failures and pick up small efficiencies and improvements as they practice a task.

Dwarkesh Patel

Here are two things people have been saying since the beginning of the Industrial Revolution, neither of which has ever remotely come close to being true, even in specific subdomains. The first one is, "Here's a thing technology will never be able to do." And the second one is, "Human labor will be made obsolete."

Noah Smith

Phones have destroyed the human race.

Noah Smith

My suspicion is that humans have just adapted to so much: the agricultural revolution, the industrial revolution, the growth of states, and once in a while a communist or fascist regime will come around or something. The idea that being free and having millions of dollars is the thing that finally gets us, I'm just suspicious of.

Dwarkesh Patel

- Economic definition of AGI (job automation threshold)
- Why current AI hasn’t translated into trillions of productivity
- Continual learning, memory, and long-horizon task execution
- Substitution vs. complementarity and comparative advantage
- Compute scaling limits and AGI timeline uncertainty
- Demand, profits, overproduction, and what “GDP” means post-AGI
- Redistribution: UBI, taxes, sovereign wealth funds, asset ownership
- Technology’s social effects (fertility, online life, meaning)
- US–China competition, coordination, and “AI playing us off”
- Industry structure: consolidation, entry barriers, brand vs. technical moats

High-quality AI-generated summary created from a speaker-labeled transcript.
