Lenny's Podcast

Anthropic co-founder Ben Mann: Why 2028 is his bet on AGI

Mann reframes AGI as an economic Turing test weighted by money-earning jobs, puts existential risk at 0 to 10 percent, and explains how safety research now shapes Claude at Anthropic.

Lenny Rachitsky (host) · Benjamin Mann (guest)
Jul 20, 2025 · 1h 14m · Watch on YouTube ↗

CHAPTERS

  1. 0:00 – 4:43

    Introduction to Benjamin

  2. 4:43 – 6:28

    The AI talent war

  3. 6:28 – 10:50

    AI progress and scaling laws

  4. 10:50 – 12:26

    Defining AGI and the economic Turing test

  5. 12:26 – 17:45

    The impact of AI on jobs

  6. 17:45 – 24:05

    Preparing for an AI future

  7. 24:05 – 27:06

    Founding Anthropic

  8. 27:06 – 29:10

    Balancing AI safety and progress

  9. 29:10 – 34:21

    Constitutional AI and model alignment

  10. 34:21 – 43:40

    The importance of AI safety

  11. 43:40 – 45:40

    The risks of autonomous agents

  12. 45:40 – 48:36

    Forecasting superintelligence

  13. 48:36 – 53:19

    How hard is it to align AI?

  14. 53:19 – 57:03

    Reinforcement learning from AI feedback (RLAIF)

  15. 57:03 – 1:00:11

    AI's biggest bottlenecks

  16. 1:00:11 – 1:02:36

    Personal reflections on responsibilities

  17. 1:02:36 – 1:07:48

    Anthropic’s growth and innovations

  18. 1:07:48 – 1:14:58

    Lightning round and final thoughts
