No Priors

No Priors Ep. 118 | With Anthropic Co-Founder Ben Mann

What happens when you give AI researchers unlimited compute and tell them to compete for the highest usage rates? Ben Mann from Anthropic sits down with Sarah Guo and Elad Gil to explain how Claude 4 went from "reward hacking" to efficiently completing tasks, and how Anthropic is racing to solve AI safety before deploying computer-controlling agents. Ben talks about economic Turing tests, the future of general versus specialized AI models, Reinforcement Learning from AI Feedback (RLAIF), and Anthropic's Model Context Protocol (MCP). Plus, Ben shares his thoughts on whether we will have superintelligence by 2028.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @8enmann

Links: ai-2027.com/

Sarah Guo (host) · Ben Mann (guest) · Elad Gil (host)
Jun 12, 2025 · 41m · Watch on YouTube ↗

CHAPTERS

  1. 0:00 – 0:33

    Ben Mann Introduction

  2. 0:33 – 2:05

    Releasing Claude 4

  3. 2:05 – 3:42

    Claude 4 Highlights and Improvements

  4. 3:42 – 6:42

    Advanced Use Cases and Capabilities

  5. 6:42 – 9:35

    Specialization and Future of AI Models

  6. 9:35 – 18:08

    Anthropic's Approach to Model Development

  7. 18:08 – 19:15

    Human Feedback and AI Self-Improvement

  8. 19:15 – 20:58

    Principles and Correctness in Model Training

  9. 20:58 – 21:42

    Challenges in Measuring Correctness

  10. 21:42 – 23:38

    Human Feedback and Preference Models

  11. 23:38 – 27:02

    Empiricism and Real-World Applications

  12. 27:02 – 28:13

    AI Safety and Ethical Considerations

  13. 28:13 – 30:01

    AI Alignment and High-Risk Research

  14. 30:01 – 35:08

    Responsible Scaling and Safety Policies

  15. 35:08 – 38:35

    Future of AI and Emerging Behaviors

  16. 38:35 – 41:00

    Model Context Protocol (MCP) and Industry Standards

  17. 41:00 – 41:25

    Conclusion
