
Grant Sanderson: Math, Manim, Neural Networks & Teaching with 3Blue1Brown | Lex Fridman Podcast #118
Lex Fridman (host), Grant Sanderson (guest)
Grant Sanderson on Feynman, math beauty, teaching, and exponential progress
Lex Fridman and Grant Sanderson (3Blue1Brown) discuss Richard Feynman’s depth as a scientist and explainer, and how Grant tries to emulate Feynman’s habit of re‑deriving ideas to gain real ownership of concepts.
They explore math education, visualization, and interactivity, including why beautifully clear lectures often fail long‑term retention and how tools like Manim and online video can ‘commoditize explanation’ for the benefit of students worldwide.
The conversation ranges through exponential growth, pandemics, Moore’s law, and GPT‑3, using each as a springboard to talk about intuition, abstraction, and what actually counts as understanding in math and science.
They close by reflecting on remote teaching during COVID, the loneliness of solo creative work, the dangers and temptations of social media, and a grounded view of meaning in life centered on human connection and shared curiosity.
Key Takeaways
Deep understanding often comes from re-deriving ideas yourself.
Grant echoes Feynman’s habit of trying to solve problems before reading others’ solutions; although slower and impractical for everything, selectively doing this on core topics builds inarticulable intuitions, counterexamples, and a stronger sense of ownership.
Beautiful explanations can be deceptive if not paired with active learning.
Feynman’s lectures (and 3Blue1Brown videos) feel incredibly clear in the moment, but retention is low without problem-solving, spaced repetition, or hands-on play; intellectual “candy” must be followed by deliberate practice to stick.
Topology and other advanced fields can be motivated through concrete puzzles.
Grant argues topology is often popularized either too squishily (mugs and donuts) or too axiomatically; starting from specific impossibility puzzles (like embedding a Möbius strip without self-intersection) and only then formalizing concepts like orientability makes the subject much more meaningful.
Humans have poor intuition for exponential growth at scale, but the intuition can be trained.
We may naturally think logarithmically for small numbers, but examples like lily pads doubling on a lake or R₀ dynamics in epidemics show how shockingly fast exponentials explode; exposure to such examples (and seeing where they break down) trains better intuition for pandemics and tech trends.
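The lily-pad example above can be made concrete with a few lines of arithmetic. This is an illustrative sketch (not code from the episode): if a patch of lily pads doubles daily and covers the whole lake on day 48, then it covered only half the lake the day before, and less than a tenth of a percent ten days earlier.

```python
def coverage(day, full_day=48):
    """Fraction of the lake covered on `day`, assuming daily doubling
    and full coverage on `full_day` (the day numbers are illustrative)."""
    return 2.0 ** (day - full_day)

print(coverage(48))            # 1.0   -- fully covered
print(coverage(47))            # 0.5   -- half covered just one day earlier
print(round(coverage(38), 5))  # 0.00098 -- under 0.1% ten days earlier
```

The striking part, and the reason doubling fools our intuition, is that the lake looks nearly empty for almost the entire process and then fills in the final handful of doublings.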
Good online teaching should treat explanations as reusable, public artifacts.
Grant advocates that each great teacher pick a few topics and make the best short explanation they can, publishing it so it can be reused globally; classroom time can then focus on interaction and problem-solving while explanation itself becomes a shared, “canonical” online resource.
Solo creative work benefits from structure and limited connectivity.
Grant’s most productive scripting days involve early exercise, turning off the internet, reading briefly when blocked, and accepting that writing is the hardest part; social media and metrics (views, comments) can distort self-perception and happiness, so deliberate limits are crucial.
Neural networks and large language models show the power and limits of pattern recognition.
Grant praises neural networks’ layered abstractions and GPT‑3’s pattern-matching prowess, but stresses that math-level understanding also demands hypothesis testing and mechanistic explanation, not just compressing and regurgitating observed patterns.
Notable Quotes
“Everything is either trivial or impossible, and it’s a shockingly thin line between the two.”
— Grant Sanderson
“The very same thing that’s so admirable about Feynman’s lectures, which is how damn satisfying they are to consume, might actually also reveal a little bit of the flaw… that does not correlate with long-term learning.”
— Grant Sanderson
“It just seems inefficient to me that a lesson is taught millions of times over in parallel… What should happen is that there’s a small handful of explanations online, and the time in classroom is spent on all of the parts of teaching that aren’t explanation.”
— Grant Sanderson
“Most of the educational content is posted by people who were just starting to research it two weeks ago… The people who have the self-awareness to not post are probably the people best positioned to give a good, honest explanation of it.”
— Grant Sanderson
“I don’t think life has a meaning. I think meaning is something that’s ascribed to stuff that’s created with purpose… Interactions with like-minded people, I think, is the meaning of— in that sense.”
— Grant Sanderson
Questions Answered in This Episode
How can educators practically balance the Feynman-style ‘reinvent it yourself’ approach with the time pressures of modern curricula?
What concrete steps could universities take to encourage more professors to create ‘canonical’ online explanations in their areas of expertise?
Where is the line between powerful visualization that builds intuition and visualization that gives a false sense of understanding?
In what ways might large models like GPT‑3 (or its successors) become genuine collaborators in mathematical discovery, rather than just pattern mimics?
What are realistic strategies for maintaining deep, serendipitous intellectual collaboration (a la Bell Labs) in a world of remote work and online interaction?