
Amjad Masad & Adam D’Angelo: How Far Are We From AGI?

Adam D’Angelo (Quora/Poe) thinks we’re five years from automating remote work. Amjad Masad (Replit) thinks we’re brute-forcing intelligence without understanding it. In this conversation, two technical founders building the AI future disagree on almost everything: whether LLMs are hitting limits, whether we’re anywhere close to AGI, and what happens when entry-level jobs disappear but experts remain irreplaceable. They dig into the uncomfortable reality that AI might create a “missing middle” in the job market, why everyone in SF is suddenly too focused on getting rich to do weird experiments, and whether consciousness research has been abandoned for prompt engineering. Plus: why coding agents can now run for 20+ hours straight, the return of the “sovereign individual” thesis, and the surprising sophistication of everyday users juggling multiple AIs.

Timestamps

00:00 Introduction
00:41 The Bearishness Paradox: “I don’t know what people are talking about”
04:25 “Functional AGI”: Brute-Forcing Your Way to Automation
11:18 “We are in a human expertise regime”
15:31 The Weird Equilibrium: Automating Entry-Level but Not Experts
17:22 The Expert Data Paradox
24:44 The Sovereign Individual: A Prediction Framework for the AI Era
28:51 “Vastly increased what a single person can do”
45:04 “It’s gonna be the decade of agents”
49:01 Managing Tens of Agents in Parallel
52:56 “I actually think vibe coding is unbelievably high potential”
58:47 Claude 4.5’s Strange New Awareness

Resources

Follow Amjad on X: https://x.com/amasad
Follow Adam on X: https://x.com/adamdangelo

Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Amjad Masad (guest) · Erik Torenberg (host)
Nov 6, 2025 · 1h 2m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

AGI timelines, brute-force automation, and the coming agent-led economy shifts

  1. Adam D’Angelo argues recent leaps in reasoning, coding, and multimodal generation suggest progress is accelerating and that key blockers are increasingly about context, tooling (e.g., computer use), and integration rather than fundamental model limits.
  2. Amjad Masad contends today’s systems are powerful but qualitatively different from human intelligence, increasingly reliant on non-scalable human expertise (labels, RL task design), and therefore not on a direct path to his “learn-anywhere” definition of AGI.
  3. Both foresee substantial near-term automation through “functional AGI,” where domain-specific RL environments, verifiers, and product scaffolding brute-force end-to-end workflows even if true general learning remains unsolved.
  4. They highlight labor-market and data flywheel paradoxes: AI may replace entry-level work faster than expert work, potentially choking off the pipeline of future experts and the expert data needed to train the next generation of models.
  5. The conversation broadens to economic and political implications (e.g., The Sovereign Individual), a likely boom in solo entrepreneurship and agent management, and open questions about whether AI centralizes power in hyperscalers or decentralizes it to individuals.

IDEAS WORTH REMEMBERING

5 ideas

Near-term progress may be gated more by context and tooling than raw model IQ.

Adam’s view is that models are already “smart enough” for many tasks; the practical limiter is feeding the right context and enabling reliable computer use, which could unlock much broader automation quickly.

“Functional AGI” can arrive without solving general intelligence.

Amjad argues companies can brute-force automation by building domain RL environments, verifiers, and workflow infrastructure—achieving high job automation in specific areas even if models still fail at basic out-of-distribution learning.

We may be in a “human expertise regime” where progress depends on scarce expert labor.

If improvements require heavy labeling, contracting, and handcrafted RL setups, then human expertise becomes the constraint—unlike earlier eras where simply scaling data/compute produced large gains.

Automating junior roles first could break the talent pipeline.

They describe a “weird equilibrium” where AI substitutes for entry-level workers but not experts, reducing hiring and training, which can lower the future supply of senior talent and slow organizational learning.

There’s an “expert data paradox” that could limit model improvement.

If models require expert-generated data to advance, but automation reduces the number of experts (or incentives to produce expert work), future training signal could become harder to obtain unless synthetic/RL environments substitute effectively.

WORDS WORTH SAVING

5 quotes

I actually honestly, I don't know what people are talking about.

Adam D’Angelo

I don't think a lot of what's holding back the models these days is not actually intelligence. It's getting the right context into the model so that it can be able to, to use its intelligence.

Adam D’Angelo

I try to coin this term I call functional AGI, which is the idea that you can automate a lot of aspects of a lot of jobs by just going in and, like, collecting as much data and creating these RL environments.

Amjad Masad

Today we are in a human expertise regime.

Amjad Masad

Nothing seems fundamentally so hard that it couldn't be solved by the smartest people in the world working incredibly hard for the next five years on it.

Adam D’Angelo

TOPICS

Bearishness vs optimism on LLM progress · Definitions of AGI vs “functional AGI” · Human expertise regime vs Bitter Lesson scaling · Context, tooling, and computer use as bottlenecks · Entry-level automation and the training-pipeline problem · Expert-data paradox and RL environment quality · Agents, parallel agents, and “vibe coding” democratization · Incumbents vs startups; sustaining vs disruptive innovation · Sovereign Individual thesis and political reconfiguration · Consciousness, philosophy of mind, and the limits of computation
