Lenny's Podcast

Anthropic co-founder Ben Mann: Why 2028 is his bet on AGI

Mann reframes AGI as an economic Turing test for money-weighted jobs; x-risk sits at 0 to 10 percent, with safety research now shaping Claude at Anthropic.

Lenny Rachitsky (host) · Benjamin Mann (guest)
Jul 19, 2025 · 1h 14m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Anthropic’s Ben Mann on AI safety, superintelligence, and the future of jobs

  1. Anthropic co‑founder Ben Mann discusses why he left OpenAI to build a company where safety and alignment are the top priority while still pushing the frontier of AI capabilities.
  2. He argues that scaling laws are still accelerating, predicts a ~2028 median timeline for superintelligence, and expects profound economic upheaval, including potentially 20% unemployment and a transformed version of capitalism.
  3. Mann explains Anthropic’s approach to alignment, especially Constitutional AI and reinforcement learning from AI feedback, and why he believes society is underinvesting in mitigating existential AI risk, which he pegs at between 0 and 10 percent.
  4. He also shares practical advice on future‑proofing careers by aggressively using AI tools, his philosophy for raising kids in an AI-saturated world, and how Anthropic operationalizes safety without sacrificing product velocity.

IDEAS WORTH REMEMBERING

5 ideas

AI progress is accelerating, not plateauing, driven by faster model release cycles and new training techniques.

Mann notes that what looks like stagnation is often benchmark saturation and time compression—models are improving so quickly and being released so frequently that incremental gains feel smaller, even as underlying scaling laws continue to hold (and may even be strengthening).

Transformative AI will be defined by economic displacement, not a sci‑fi definition of AGI.

He favors an “economic Turing test”: when AI can be profitably hired instead of humans for a large share of money‑weighted jobs and global GDP growth jumps (e.g., >10% annually), we’ll know we’ve crossed into the transformative/superintelligence era.

Massive labor impact is coming, with a turbulent transition but eventual abundance.

Mann expects dramatic productivity gains (e.g., 80%+ automated customer support resolution, 10–20x coding output) and significant job displacement, especially in lower‑skill or narrow tasks, followed by a world where labor is nearly free and even capitalism may look unrecognizable.

Safety and competitiveness can be mutually reinforcing rather than in tension.

Anthropic’s safety research directly shapes Claude’s personality (helpful, honest, harmless) and trustworthiness; Constitutional AI lets the model learn from natural-language principles, producing behavior that both customers like and regulators can understand, while keeping Anthropic at the frontier.

Current AI risk is nontrivial but still under-addressed, with very few people working on it.

Mann estimates the probability of existential or extremely bad outcomes from AI at somewhere between 0 and 10 percent, emphasizes that alignment will likely become impossible once superintelligence is reached, and argues that society should treat even low probabilities very seriously, given the stakes.

WORDS WORTH SAVING

5 quotes

We felt like safety wasn't the top priority there.

Ben Mann (on leaving OpenAI to start Anthropic)

I think 50th percentile chance of hitting some kind of super intelligence is now, like, 2028.

Ben Mann

Once we get to super intelligence, it will be too late to align the models.

Ben Mann

My best granularity forecast for, like, could we have an x-risk or extremely bad outcome is somewhere between 0 and 10%.

Ben Mann

In a world of abundance where labor is almost free and anything you want to do, you can just ask an expert to do it for you, then what do jobs even look like?

Ben Mann

TOPICS

  1. Recruiting wars and economic scale of frontier AI labs
  2. Scaling laws, transformative AI, and timelines to superintelligence
  3. Economic impacts of AI on jobs, unemployment, and capitalism
  4. Anthropic’s founding story and divergence from OpenAI on safety
  5. Anthropic’s safety and alignment methods (Constitutional AI, RLAIF, ASL levels)
  6. Recursive self-improvement, agents, and risks of misuse (bio, cyber, robotics)
  7. Personal philosophy: careers, parenting, and coping with existential AI risk

High-quality AI-generated summary created from a speaker-labeled transcript.
