Lex Fridman Podcast

Raschka & Lambert on Lex Fridman: Why Post-Training Won 2025

RLVR and inference-time scaling, not architecture, drove 2025's AI gains. DeepSeek's open-weight releases showed that frontier performance need not be closed-source.

Lex Fridman (host) · Sebastian Raschka (guest) · Nathan Lambert (guest)
Jan 31, 2026 · 4h 25m · Watch on YouTube ↗

Episode Details

EPISODE INFO

Released: January 31, 2026
Duration: 4h 25m
Channel: Lex Fridman Podcast
Watch on YouTube ↗

EPISODE DESCRIPTION

Nathan Lambert and Sebastian Raschka are machine learning researchers, engineers, and educators. Nathan is the post-training lead at the Allen Institute for AI (Ai2) and the author of The RLHF Book. Sebastian Raschka is the author of Build a Large Language Model (From Scratch) and Build a Reasoning Model (From Scratch).

Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep490-sb See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

*Transcript:* https://lexfridman.com/ai-sota-2026-transcript

*Correction:* Here's an updated image listing a collection of recent open & closed AI models with some improvements & fixes: https://lexfridman.com/wordpress/wp-content/uploads/2026/01/ai_models_2025.png

*CONTACT LEX:*
*Feedback* - give feedback to Lex: https://lexfridman.com/survey
*AMA* - submit questions, videos or call-in: https://lexfridman.com/ama
*Hiring* - join our team: https://lexfridman.com/hiring
*Other* - other ways to get in touch: https://lexfridman.com/contact

*EPISODE LINKS:*
Nathan's X: https://x.com/natolambert
Nathan's Blog: https://interconnects.ai
Nathan's Website: https://natolambert.com
Nathan's YouTube: https://youtube.com/@natolambert
Nathan's GitHub: https://github.com/natolambert
Nathan's Book: https://rlhfbook.com
Sebastian's X: https://x.com/rasbt
Sebastian's Blog: https://magazine.sebastianraschka.com
Sebastian's Website: https://sebastianraschka.com
Sebastian's YouTube: https://youtube.com/@SebastianRaschka
Sebastian's GitHub: https://github.com/rasbt
Sebastian's Books:
Build a Large Language Model (From Scratch): https://manning.com/books/build-a-large-language-model-from-scratch
Build a Reasoning Model (From Scratch): https://manning.com/books/build-a-reasoning-model-from-scratch

*SPONSORS:* To support this podcast, check out our sponsors & get discounts:
*Box:* Intelligent content management platform. Go to https://lexfridman.com/s/box-ep490-sb
*Quo:* Phone system (calls, texts, contacts) for businesses. Go to https://lexfridman.com/s/quo-ep490-sb
*UPLIFT Desk:* Standing desks and office ergonomics. Go to https://lexfridman.com/s/uplift_desk-ep490-sb
*Fin:* AI agent for customer service. Go to https://lexfridman.com/s/fin-ep490-sb
*Shopify:* Sell stuff online. Go to https://lexfridman.com/s/shopify-ep490-sb
*CodeRabbit:* AI-powered code reviews. Go to https://lexfridman.com/s/coderabbit-ep490-sb
*LMNT:* Zero-sugar electrolyte drink mix. Go to https://lexfridman.com/s/lmnt-ep490-sb
*Perplexity:* AI-powered answer engine. Go to https://lexfridman.com/s/perplexity-ep490-sb

*OUTLINE:*
0:00 - Introduction
1:57 - China vs US: Who wins the AI race?
10:38 - ChatGPT vs Claude vs Gemini vs Grok: Who is winning?
21:38 - Best AI for coding
28:29 - Open Source vs Closed Source LLMs
40:08 - Transformers: Evolution of LLMs since 2019
48:05 - AI Scaling Laws: Are they dead or still holding?
1:04:12 - How AI is trained: Pre-training, Mid-training, and Post-training
1:37:18 - Post-training explained: Exciting new research directions in LLMs
1:58:11 - Advice for beginners on how to get into AI development & research
2:21:03 - Work culture in AI (72+ hour weeks)
2:24:49 - Silicon Valley bubble
2:28:46 - Text diffusion models and other new research directions
2:34:28 - Tool use
2:38:44 - Continual learning
2:44:06 - Long context
2:50:21 - Robotics
2:59:31 - Timeline to AGI
3:06:47 - Will AI replace programmers?
3:25:18 - Is the dream of AGI dying?
3:32:07 - How AI will make money?
3:36:29 - Big acquisitions in 2026
3:41:01 - Future of OpenAI, Anthropic, Google DeepMind, xAI, Meta
3:53:35 - Manhattan Project for AI
4:00:10 - Future of NVIDIA, GPUs, and AI compute clusters
4:08:15 - Future of human civilization


SPEAKERS

  • Lex Fridman (host)
  • Sebastian Raschka (guest)
  • Nathan Lambert (guest)

EPISODE SUMMARY

Lex Fridman hosts Sebastian Raschka and Nathan Lambert to map the state of AI entering 2026: scaling, post-training, open models, agents, geopolitics, and compute, with the "DeepSeek moment" as a turning point for open-weight Chinese models and intensified global competition.

RELATED EPISODES

Keoki Jackson: Lockheed Martin | Lex Fridman Podcast #33

Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot | Lex Fridman Podcast #49

Grant Sanderson: 3Blue1Brown and the Beauty of Mathematics | Lex Fridman Podcast #64

Rohit Prasad: Amazon Alexa and Conversational AI | Lex Fridman Podcast #57

Gary Marcus: Toward a Hybrid of Deep Learning and Symbolic AI | Lex Fridman Podcast #43

Christof Koch: Consciousness | Lex Fridman Podcast #2
