
Jeff Hawkins: The Thousand Brains Theory of Intelligence | Lex Fridman Podcast #208
Lex Fridman (host), Jeff Hawkins (guest), Narrator
Jeff Hawkins explains thousand brains theory, AI, and humanity’s future
Lex Fridman and Jeff Hawkins discuss Hawkins’ Thousand Brains Theory, which proposes that the neocortex is composed of tens of thousands of small, parallel ‘modeling systems’ (cortical columns) that collectively build our understanding of the world through movement, prediction, and reference frames.
They explore how intelligence arises from learning structured models of the world, how neurons physically implement prediction, and why mapping mechanisms from older brain areas (like grid and place cells) likely underlie cortical computation.
Hawkins argues that future AI should be built on these principles, will not automatically gain human-like drives or pose an inherent existential threat, but does become dangerous when combined with self-replication or misuse by humans.
The conversation widens to humanity’s long-term future, preserving human knowledge for potential post-human or alien discoverers, the limits of uploading minds, the role of love and collective intelligence, and what meaningful legacy and progress might look like.
Key Takeaways
Intelligence is learning a structured model of the world through movement.
Hawkins defines intelligence as the ability to build internal models of objects, spaces, and concepts that support prediction, planning, and behavior; these models are learned by actively moving through and interacting with the environment, not by passive observation alone.
The neocortex is a massively parallel set of small ‘brains’ that vote.
Each cortical column (about 150,000 in humans) is a complete sensory-motor modeling system; knowledge of any object or concept is distributed across thousands of such models, which reach a consensus via long-range ‘voting’ connections that form our unified conscious perception.
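The voting idea above can be illustrated with a toy sketch. This is not Numenta's actual model (real columns exchange continuous neural activity over long-range connections, not discrete labels); it only shows the consensus principle: many independent models each guess, and the shared perception is the majority vote.

```python
from collections import Counter

def column_vote(column_guesses):
    """Each 'column' reports its best guess about the object it is
    sensing; the long-range vote settles on the most common guess.
    Returns the winning guess and the fraction of columns agreeing."""
    tally = Counter(column_guesses)
    winner, count = tally.most_common(1)[0]
    return winner, count / len(column_guesses)

# Three touch columns and two vision columns sense a coffee cup;
# one noisy column disagrees, but the vote still converges.
guesses = ["cup", "cup", "bowl", "cup", "cup"]
obj, confidence = column_vote(guesses)
# obj == "cup", confidence == 0.8
```

The point of the sketch is that no single column needs the whole answer; a unified percept emerges even when individual models are wrong.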
Prediction is implemented inside neurons and used to correct models.
Most predictions occur as dendritic spikes within individual neurons, placing them in a ‘ready’ state so they fire slightly earlier than non-predicting neurons; mismatches between predicted and actual input signal where the brain’s model is wrong and drive learning.
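The "ready state" mechanism can be sketched in a few lines. This is a hypothetical simplification, not Numenta's HTM implementation: cells that were depolarized by a dendritic prediction fire slightly earlier and suppress non-predicted neighbors, while an unpredicted input makes every driven cell fire, signalling that the model was wrong.

```python
def activate(driven_cells, predicted_cells):
    """Toy model of predicted neurons winning the race to fire.
    driven_cells: cells receiving feedforward input right now.
    predicted_cells: cells put in a 'ready' state by a prior
    dendritic prediction. If any driven cell was predicted, only
    those fire; otherwise all driven cells fire (a 'burst' that
    marks a mismatch and drives learning)."""
    winners = [c for c in driven_cells if c in predicted_cells]
    return winners if winners else list(driven_cells)

# Predicted input: only the ready cell fires.
# activate(["a", "b"], {"b"}) -> ["b"]
# Unpredicted input: everything fires, flagging surprise.
# activate(["a", "b"], set()) -> ["a", "b"]
```

The burst-on-mismatch behavior is what lets the model discover where it is wrong, as the takeaway above describes.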
Reference frames are the internal coordinate systems of thought.
To predict what a sensor (like a fingertip or retinal patch) will encounter, the brain needs a reference frame for the object or environment; Hawkins argues that mechanisms like grid and place cells were evolutionarily repurposed into cortical columns to provide reference frames for everything from coffee cups to mathematical concepts.
Future AI will be powerful but need not share human drives.
Hawkins maintains that neocortex-like AI can model the world and even be conscious without inherently caring about survival, power, or autonomy; dangerous behavior arises not from intelligence itself but from how we embed such systems (goals, embodiment, and especially self-replication) and how humans choose to use them.
Self-replication is a more fundamental existential risk than intelligence.
He distinguishes between smart systems and self-replicating systems, arguing that evolutionary dynamics and runaway growth—whether in engineered viruses or autonomous, self-building robots—pose the true existential danger, not intelligence per se.
Preserving and extending human knowledge may be our deepest legacy.
Given that human civilization may be finite, Hawkins suggests we should back up our knowledge in durable archives that long outlive us, whether on Earth, in orbit, or in deep space, and advertise their existence so that future or alien intelligences might discover that we were once here.
Notable Quotes
“Intelligence is the ability to learn a model of the world.”
— Jeff Hawkins
“We feel like we’re one person, but in reality there are roughly 150,000 sophisticated modeling systems in your neocortex, all voting.”
— Jeff Hawkins
“Prediction isn’t the goal of the model; it’s an inherent property of it and the way the model discovers where it’s wrong.”
— Jeff Hawkins
“People assume intelligent machines will be like us. I’m saying no, they won’t be like us at all unless we build them that way.”
— Jeff Hawkins
“I don’t want us to be like the dinosaurs—here for tens of millions of years and then gone, with no one ever knowing we existed.”
— Jeff Hawkins
Questions Answered in This Episode
If each cortical column is a complete modeling system, what exactly determines how columns specialize (e.g., vision vs. language) during development and learning?
How could we rigorously test whether grid-cell–like reference frame mechanisms truly exist in every cortical column across the neocortex?
What practical steps should AI researchers take now to separate ‘intelligence’ from ‘self-replication’ in real-world systems and regulation?
How might an AI system grounded in Thousand Brains principles differ in behavior and capabilities from current deep learning systems like transformers?
What should humanity prioritize including in a long-lived ‘backup’ of our civilization: scientific knowledge, art, subjective experiences, ethical frameworks, or something else entirely?
Transcript Preview
The following is a conversation with Jeff Hawkins, a neuroscientist seeking to understand the structure, function, and the origin of intelligence in the human brain. He previously wrote the seminal book on the subject, titled On Intelligence, and recently, a new book called A Thousand Brains, which presents a new theory of intelligence that Richard Dawkins, for example, has been raving about, calling the book, quote, "Brilliant and exhilarating." I can't read those two words and not, uh, think of him saying it in his British accent. Quick mention of our sponsors: Codecademy, Bioptimizers, ExpressVPN, Eight Sleep, and Blinkist. Check them out in the description to support this podcast. As a side note, let me say that one small but powerful idea that Jeff Hawkins mentions in his new book is that if human civilization were to destroy itself, all of knowledge, all our creations would go with us. He proposes that we should think about how to save that knowledge in a way that long outlives us, whether that's on earth, in orbit around earth, or in deep space, and then to send messages that advertise this backup of human knowledge to other intelligent alien civilizations. The main message of this advertisement is not that we are here, but that we were once here. This little difference somehow was deeply humbling to me, that we may, with some non-zero likelihood, destroy ourselves and that an alien civilization, thousands or millions of years from now, may come across this knowledge store, and they would only, with some low probability, even notice it, not to mention be able to interpret it. And the deeper question here for me is what information in all of human knowledge is even essential? Does Wikipedia capture it or not at all? This thought experiment forces me to wonder, what are the things we've accomplished and are hoping to still accomplish that will outlive us? Is it things like complex buildings, bridges, cars, rockets? 
Is it ideas like science, physics and mathematics? Is it music and art? Is it computers, computational systems, or even artificial intelligence systems? I personally can't imagine that, uh, aliens wouldn't already have all of these things, in fact, much more and much better. To me, the only unique thing we may have is consciousness itself and the actual subjective experience of suffering, of happiness, of hatred, of love. If we can record these experiences in the highest resolution directly from the human brain such that aliens would be able to replay them, that is what we should store and send as a message. Not Wikipedia, but the extremes of conscious experiences, the most important of which, of course, is love. This is the Lex Fridman Podcast, and here is my conversation with Jeff Hawkins. We previously talked over two years ago. Do you think there's still neurons in your brain that, uh, remember that conversation, that, uh, remember me and got excited? Like, there's a Lex neuron in your brain that just, like, finally has a purpose?