Grant Sanderson (@3blue1brown) — Past, present, & future of mathematics

Dwarkesh Podcast · Oct 12, 2023 · 1h 31m

Dwarkesh Patel (host), Grant Sanderson (guest), Narrator

- Definitions and benchmarks of AGI versus current mathematical AI capabilities
- Career allocation of mathematically talented people and impact beyond academia/finance/CS
- Nature of mathematical creativity, explanation quality, and pedagogy (online vs classroom)
- History and drivers of mathematical discovery (why much math is recent; tools and ‘miracle years’)
- Self-learning in technical fields, the role of calculation practice, and motivation
- Tooling and production of math explanations (Manim, YouTube, Summer of Math Exposition)
- Social and psychological dynamics of education, teacher influence, and peer effects

Grant Sanderson reimagines math, education, and AI’s evolving intelligence frontier

Grant Sanderson (3Blue1Brown) and Dwarkesh Patel discuss how modern AI intersects with mathematical creativity, questioning whether achievements like IMO gold medals mark genuine ‘AGI’ or just another impressive but narrow milestone. They explore the underutilized potential of mathematically talented people outside academia/finance/CS, and how structural incentives push students into narrow career tracks instead of high-impact, real-world applications. Sanderson reflects on what makes good mathematical explanations hard, why in‑person teaching remains irreplaceable despite online content, and why he eventually wants to spend years as a high school math teacher. Throughout, they touch on the history and sociology of math, self-learning, tooling (like Manim), and how small teacher interactions can permanently alter a student’s trajectory.

Key Takeaways

AGI is a fuzzy label; continuous progress matters more than a single milestone.

Sanderson argues that abilities like AI winning IMO gold medals would be impressive but still part of a continuous spectrum of capability, not a clean ‘pre‑AGI vs post‑AGI’ phase change; practical job replacement depends on broader capacities like context, relationships, and long-horizon reasoning.

Mathematical talent is overconcentrated in academia, finance, and CS, leaving other sectors underserved.

He suspects there is an ‘overallocation of talent’ into a few traditional math-heavy fields, and calls for more stories and structures that help math people move into neglected areas like logistics, urban planning, taxation systems, or manufacturing where their problem-solving could have outsized impact.

Great explanations require empathy for not-knowing, which erodes quickly with expertise.

Remembering what it feels like to lack even basic abstractions (like variables or numbers) is intrinsically hard; Sanderson sees this loss of empathy as a main reason good explanations are rare and as motivation for periodically returning to real classrooms to stay in touch with learners’ perspectives.

In-person educators do far more than transmit information; they ‘educe’ and redirect lives.

He distinguishes explanation from education, noting that a 30‑second comment or a single research problem from a teacher can permanently alter a student’s trajectory—something online videos cannot replicate—so top educators should augment, not abandon, face-to-face teaching.

Practice with calculations is essential; self-learners often sabotage themselves by skipping it.

For people teaching themselves fields like physics or quantum mechanics, Sanderson warns that treating integrals and algebraic manipulations as disposable ‘details’ prevents deep intuition from forming; working through the math on paper is where much real understanding crystallizes.

Motivation and social context, not content availability, are the binding constraints on learning.

Given today’s abundance of explanations (books, videos, interactive texts), he argues the real bottleneck is motivation—especially peers, role models, and projects that make learning instrumentally or emotionally important—so improving those levers matters more than yet another better lecture.

Simple structural nudges can redirect mathematical talent and enrich research.

Ideas like requiring pure mathematicians on grants to collaborate 10% of their time with other departments, or encouraging mid-career stints in teaching, could expose them to new analogies, real-world problems, and student realities, improving both research and education.

Notable Quotes

I’m very impressed by AIs that could solve IMO problems—but that feels distinct from the impediments between where we are now and AIs taking over all of our jobs.

Grant Sanderson

Math academia, finance, and computer science almost certainly have an overallocation of talent.

Grant Sanderson

The job of an educator is not to take their knowledge and shove it into the heads of someone else. The job is to bring it out.

Grant Sanderson

Online explanations are valuable, but they have nothing to do with all of that important stuff that’s actually happening in a classroom.

Grant Sanderson

Where a lot of self‑learners shoot themselves in the foot is by skipping calculations, thinking they’re incidental to the core understanding.

Grant Sanderson

Questions Answered in This Episode

If AI reaches the level of consistently winning IMO gold medals, what concrete economic or social changes should we expect—and which should we *not* overinterpret from that achievement?

How could universities and funding agencies practically redesign incentives so that more mathematicians spend meaningful time on high-impact real-world problems outside of traditional fields?

What structures or tools could help expert explainers systematically retain empathy for novice perspectives, instead of gradually ‘forgetting’ what confusion feels like?

Given that motivation and peer groups are so central to learning, what scalable interventions might mimic the effect of a great teacher or inspiring classmate in largely online environments?

If you were designing a ‘tour of duty’ system where technically skilled people spend a few years teaching, how would you structure it so it benefits both students and the professionals’ long-term careers?

Transcript Preview

Dwarkesh Patel

... Grant. (piano music plays)

Grant Sanderson

You know, the videos were really inspiring, like, "You're the reason I'm, like, going into grad school." And there's this little bell in the back of my mind that's like, "Do, do I want that?"

Dwarkesh Patel

(laughs)

Grant Sanderson

To get more people going into math PhDs? Math academia, finance, and computer science almost certainly have an over-allocation of talent. Actually, I'm, I'm quite determined at some point to, like, be a high school math teacher for some number of years. Math lends itself to synthetic data, how AlphaGo is trained. You could have it produce a lot of proofs and just train on a whole bunch of those. And the thing that takes, at most, 30 minutes of the teacher's time, maybe even 30 seconds, has these completely monumental rippling effects for the life of the student they were talking to that then sets them on this whole different trajectory.

Dwarkesh Patel

Okay, today, I have the pleasure of interviewing Grant Sanderson of the YouTube channel 3Blue1Brown. You all know who Grant is. I'm really excited about this one. By the time that an AI model can get gold in the International Math Olympiad, is that just AGI, given the amount of creative problem-solving and chain of thought required to do that?

Grant Sanderson

I, to be honest, have no idea what people mean when they use the word AGI.

Dwarkesh Patel

Yeah.

Grant Sanderson

I think if you ask 10 different people, like, what they mean by it, you're gonna get 10 slightly different answers. And it seems like what people want to get at is a discrete change that I don't think actually exists, where you've got, okay, AI's up to a certain point, they're not AGI. They might be really smart, but it's not AGI. And then after some point, that's the benchmark when, like, now they're, now it's generally intelligent. The reason that world model doesn't really fit is it feels a lot more continuous, where, you know, GPT-4 feels general in the sense that you have one training algorithm that applies to a very, very large set of different kinds of tasks that someone might wanna be able to do. And that's cool, that's, like, an invention that people in the '60s might not have expected to be true for, um, the nature of how artificial intelligence can be programmed. So it's, it's generally intelligent, but maybe what people mean by, "Oh, it's not AGI," is you've got certain benchmarks where, you know, it's better than most people at some things, but it's not better than most people at others. You know, at this point, it's better than most people at math. You know, it, it's better than most people at solving AMC problems and, like, IMO problems. It's just not better than the best. And so, maybe at the point when it's getting golds in the IMO, that's a sign that, okay, it's, it's as good as the best, and we've ticked off another domain, but I don't know. Like, i- is what you mean by AGI that you've, you've enumerated all the possible domains that something could be good at, and now it's better than humans at all of them?
