
Jeff Hawkins: Thousand Brains Theory of Intelligence | Lex Fridman Podcast #25
Lex Fridman (host), Jeff Hawkins (guest), Narrator
In this episode of the Lex Fridman Podcast, Lex Fridman talks with Jeff Hawkins about the Thousand Brains Theory of Intelligence and how the brain's thousands of models could reinvent artificial intelligence.
Jeff Hawkins maps how the brain’s thousand models could reinvent intelligence
Jeff Hawkins argues that true artificial intelligence will only emerge from understanding the neocortex, the brain structure underlying human intelligence, rather than scaling current deep learning methods.
He presents his Thousand Brains Theory, where thousands of cortical columns each learn full object models using spatial reference frames and vote together to infer what we perceive and think.
Hawkins contrasts this brain-based view with today’s deep learning, highlighting biological mechanisms like sparse, predictive neurons and synaptogenesis that support rapid, continual, and robust learning.
He believes we are past a key theoretical breakthrough in understanding the neocortex, and that brain-inspired approaches can both advance AI and preserve human knowledge far beyond our species.
Key Takeaways
Understanding the neocortex is central to building true intelligence.
Hawkins insists that without a principled model of how the neocortex works, AI will remain narrow and brittle; brain-inspired architectures are, in his view, the fastest path to general intelligence.
The brain represents the world through thousands of spatial reference frames.
Each small cortical region (or "column") learns complete models of objects and concepts by encoding locations within object-centered reference frames and moving through them over time, then collectively voting to recognize what is sensed.
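The collective voting described above can be caricatured in a few lines. This is a deliberately crude sketch, not the theory's actual mechanism: in the Thousand Brains Theory, columns settle on a consensus through continuous lateral connections, not literal majority voting. All names here are illustrative.

```python
from collections import Counter

def column_vote(column_guesses):
    """Toy analogy of cortical-column voting: each 'column' proposes an
    object label from its own partial model, and the consensus is the
    most common guess. Purely illustrative of the voting idea."""
    return Counter(column_guesses).most_common(1)[0][0]

# Five columns each sense a different patch of the same object:
print(column_vote(["cup", "cup", "bowl", "cup", "mug"]))  # cup
```

Even this caricature shows the appeal of the scheme: any single column can be wrong or ambiguous, yet the ensemble converges on the right object.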
Time and movement are fundamental to perception and cognition.
Brains don’t classify static snapshots; they continually process changing inputs as we move eyes, hands, and attention, building models by linking sequences of sensations across time rather than via single images.
Abstract thought likely reuses the same spatial machinery as perception.
Evidence from memory palaces and fMRI suggests concepts like birds or mathematical ideas are organized as navigable “spaces” in the cortex, using grid-cell-like mechanisms originally evolved for physical navigation.
Biological neurons are predictive, sparse, and learn via new connections, not fine weight tweaks.
Real neurons use thousands of synapses, sparse population codes, dendritic prediction, and synaptogenesis (plus silent synapses) to support rapid, robust, and continual learning—properties largely absent from standard deep nets.
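The dendritic-prediction idea above can be sketched minimally: a neuron fires on feedforward input, but it can also enter a "predictive" state when one of its dendritic segments matches enough of the cells that were active at the previous time step. This is a toy rendering of the HTM-style mechanism, with all names and the threshold chosen for illustration.

```python
def is_predictive(prev_active, dendritic_segments, threshold=2):
    """Sketch of dendritic prediction: the neuron becomes predictive if
    any single segment (a set of presynaptic cell ids) overlaps the
    previously active cells by at least `threshold`. Illustrative only."""
    return any(len(seg & prev_active) >= threshold for seg in dendritic_segments)

segments = [{1, 4, 7}, {2, 3}]               # two learned segments
print(is_predictive({1, 4, 9}, segments))    # True: segment {1, 4, 7} matches on 2 cells
print(is_predictive({5, 6}, segments))       # False: no segment matches enough
```

The point of the sketch is the contrast Hawkins draws: a standard artificial point neuron has no analogue of this per-segment, time-based predictive state.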
Sparsity and many small models could make AI more robust and data-efficient.
Hawkins’ group shows that enforcing sparse representations in deep networks improves robustness to adversarial examples, and they aim to extend this with Thousand Brains–style ensembles of simple, voting models.
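The sparsity constraint mentioned above is often realized as a k-winner-take-all layer: only the k strongest activations survive, and the rest are zeroed. Below is a minimal pure-Python sketch of that idea (the function name and values are illustrative; Numenta's actual experiments apply this inside deep networks).

```python
def k_winners(acts, k):
    """Keep the k largest activations and zero the rest — a minimal
    sketch of k-winner-take-all sparsity. Ties at the cutoff are kept
    only until k winners have been selected."""
    cutoff = sorted(acts, reverse=True)[k - 1]
    out, kept = [], 0
    for a in acts:
        if a >= cutoff and kept < k:
            out.append(a)
            kept += 1
        else:
            out.append(0.0)
    return out

print(k_winners([0.1, 0.9, 0.3, 0.7, 0.2], 2))  # [0.0, 0.9, 0.0, 0.7, 0.0]
```

Because only a small fraction of units are ever active, small perturbations of the input are less likely to flip the representation, which is the intuition behind the robustness claim.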
Intelligent machines need not be human-like or dangerous by default.
Hawkins separates neocortical-style intelligence from emotions, drives, and self-preservation, arguing we can design systems that reason and explore without human-like urges, and that current AI risks are more about misuse than existential doom.
Notable Quotes
“We will not be able to create fully intelligent machines until we understand how the human brain works.”
— Jeff Hawkins
“The neocortex all works on the same principle… language, hearing, touch, vision, engineering are basically built in the same computational substrate.”
— Jeff Hawkins
“There isn’t one model of a cup. There are thousands of models of this cup.”
— Jeff Hawkins
“Real neurons in the brain are time-based prediction engines… there’s no concept of this at all in artificial point neurons.”
— Jeff Hawkins
“What is special about our species is not our genes. It’s our knowledge. That’s the rare thing we should preserve.”
— Jeff Hawkins
Questions Explored in This Episode
If the Thousand Brains Theory is correct, what practical AI architectures could most faithfully implement cortical columns, reference frames, and voting today?
How might adopting sparse, predictive, dendrite-inspired neurons change the current deep learning toolkit and its benchmarks?
What kinds of new evaluation tasks or environments would better test time-based, movement-based intelligence rather than static pattern classification?
How far can the “concepts as spaces” idea be pushed—could all reasoning and language truly be reframed as navigation in learned reference frames?
What governance or design principles are needed to ensure future brain-inspired AI preserves human knowledge without inheriting our most problematic drives or biases?
Transcript Preview
The following is a conversation with Jeff Hawkins. He's the founder of the Redwood Center for Theoretical Neuroscience, founded in 2002, and Numenta, founded in 2005. In his 2004 book, titled On Intelligence, and in the research before and after, he and his team have worked to reverse engineer the neocortex and propose artificial intelligence architectures, approaches, and ideas that are inspired by the human brain. These ideas include hierarchical temporal memory, HTM, from 2004, and new work, the Thousand Brains Theory of Intelligence, from 2017, '18, and '19. Jeff's ideas have been an inspiration to many who have looked for progress beyond the current machine learning approaches, but they have also received criticism for lacking a body of empirical evidence supporting the models. This is always a challenge when seeking more than small incremental steps forward in AI. Jeff is a brilliant mind, and many of the ideas he has developed and aggregated from neuroscience are worth understanding and thinking about. There are limits to deep learning as it is currently defined. Forward progress in AI is shrouded in mystery. My hope is that conversations like this can help provide an inspiring spark for new ideas. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D. And now, here's my conversation with Jeff Hawkins. Are you more interested in understanding the human brain or in creating artificial systems that have many of the same qualities, but don't necessarily require that you actually understand the underpinning workings of our mind?
So, there's a clear answer to that question: My primary interest is understanding the human brain. No question about it. But, um, I also firmly believe that we will not be able to create fully intelligent machines until we understand how the human brain works. So I don't see those as separate problems. Um, I think there's limits to what can be done with machine intelligence if you don't understand the principles by which the brain works, and so I actually believe that studying the brain is actually the frast- the fastest way (laughs) to get to machine intelligence.
And within that, let me ask the impossible question: How do you not define, but at least think about what it means to be intelligent?
So, I didn't try to answer that question first. We said, "Let's just talk about how the brain works and let's figure out how c- certain parts of the brain," mostly the neocortex, but some other parts too. The parts of the brain most associated with intelligence. And let's discover the principles by how they work, 'cause i- i- intelligence isn't just like some mechanism and it's not just some capabilities. It's like, okay, we don't even have, know where to begin on this stuff. And so now that we've made a lot of progress on this, after we've made a lot of progress on how the neocortex works, and we can talk about that, I now have a very good idea what's gonna be required to make intelligent machines. I g- I can tell you today, we know some of the things are gonna be necessary, I believe, to create intelligent machines.