Lenny's Podcast | Anthropic co-founder Ben Mann: Why 2028 is his bet on AGI
Mann reframes AGI as passing an economic Turing test across a money-weighted basket of jobs; he puts existential risk at 0 to 10 percent, and safety research now directly shapes Claude at Anthropic.
Host: Lenny Rachitsky · Guest: Benjamin Mann
CHAPTERS
- 0:00 – 4:43
Introduction to Benjamin
- 4:43 – 6:28
The AI talent war
- 6:28 – 10:50
AI progress and scaling laws
- 10:50 – 12:26
Defining AGI and the economic Turing test
- 12:26 – 17:45
The impact of AI on jobs
- 17:45 – 24:05
Preparing for an AI future
- 24:05 – 27:06
Founding Anthropic
- 27:06 – 29:10
Balancing AI safety and progress
- 29:10 – 34:21
Constitutional AI and model alignment
- 34:21 – 43:40
The importance of AI safety
- 43:40 – 45:40
The risks of autonomous agents
- 45:40 – 48:36
Forecasting superintelligence
- 48:36 – 53:19
How hard is it to align AI?
- 53:19 – 57:03
Reinforcement learning from AI feedback (RLAIF)
- 57:03 – 1:00:11
AI's biggest bottlenecks
- 1:00:11 – 1:02:36
Personal reflections on responsibilities
- 1:02:36 – 1:07:48
Anthropic’s growth and innovations
- 1:07:48 – 1:14:58
Lightning round and final thoughts