Raschka & Lambert on Lex Fridman: Why Post-Training Won 2025
RLVR and inference-time scaling, not architecture, drove 2025's AI gains. DeepSeek's open-weight releases showed that frontier performance need not be closed-source.
Host: Lex Fridman. Guests: Sebastian Raschka, Nathan Lambert.
CHAPTERS
- 0:00 – 1:57
Introduction
- 1:57 – 10:38
China vs US: Who wins the AI race?
- 10:38 – 21:38
ChatGPT vs Claude vs Gemini vs Grok: Who is winning?
- 21:38 – 28:29
Best AI for coding
- 28:29 – 40:08
Open Source vs Closed Source LLMs
- 40:08 – 48:05
Transformers: Evolution of LLMs since 2019
- 48:05 – 1:04:12
AI Scaling Laws: Are they dead or still holding?
- 1:04:12 – 1:37:18
How AI is trained: Pre-training, Mid-training, and Post-training
- 1:37:18 – 1:58:11
Post-training explained: Exciting new research directions in LLMs
- 1:58:11 – 2:21:03
Advice for beginners on how to get into AI development & research
- 2:21:03 – 2:24:49
Work culture in AI (72+ hour weeks)
- 2:24:49 – 2:28:46
Silicon Valley bubble
- 2:28:46 – 2:34:28
Text diffusion models and other new research directions
- 2:34:28 – 2:38:44
Tool use
- 2:38:44 – 2:44:06
Continual learning
- 2:44:06 – 2:50:21
Long context
- 2:50:21 – 2:59:31
Robotics
- 2:59:31 – 3:06:47
Timeline to AGI
- 3:06:47 – 3:25:18
Will AI replace programmers?
- 3:25:18 – 3:32:07
Is the dream of AGI dying?
- 3:32:07 – 3:36:29
How will AI make money?
- 3:36:29 – 3:41:01
Big acquisitions in 2026
- 3:41:01 – 3:53:35
Future of OpenAI, Anthropic, Google DeepMind, xAI, Meta
- 3:53:35 – 4:00:10
Manhattan Project for AI
- 4:00:10 – 4:08:15
Future of NVIDIA, GPUs, and AI compute clusters
- 4:08:15 – 4:25:12
Future of human civilization