Jeff Hawkins: The Thousand Brains Theory of Intelligence | Lex Fridman Podcast #208
At a glance
WHAT IT’S REALLY ABOUT
Jeff Hawkins explains thousand brains theory, AI, and humanity’s future
- Lex Fridman and Jeff Hawkins discuss Hawkins’ Thousand Brains Theory, which proposes that the neocortex is composed of tens of thousands of small, parallel ‘modeling systems’ (cortical columns) that collectively build our understanding of the world through movement, prediction, and reference frames.
- They explore how intelligence arises from learning structured models of the world, how neurons physically implement prediction, and why mapping mechanisms from older brain areas (like grid and place cells) likely underlie cortical computation.
- Hawkins argues that future AI should be built on these principles and will not automatically acquire human-like drives or pose an inherent existential threat; it becomes dangerous only when combined with self-replication or misused by humans.
- The conversation widens to humanity’s long-term future, preserving human knowledge for potential post-human or alien discoverers, the limits of uploading minds, the role of love and collective intelligence, and what meaningful legacy and progress might look like.
IDEAS WORTH REMEMBERING
5 ideas
Intelligence is learning a structured model of the world through movement.
Hawkins defines intelligence as the ability to build internal models of objects, spaces, and concepts that support prediction, planning, and behavior; these models are learned by actively moving through and interacting with the environment, not by passive observation alone.
The neocortex is a massively parallel set of small ‘brains’ that vote.
Each cortical column (about 150,000 in humans) is a complete sensory-motor modeling system; knowledge of any object or concept is distributed across thousands of such models, which reach a consensus via long-range ‘voting’ connections that form our unified conscious perception.
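The voting idea can be illustrated with a toy sketch. This is an illustrative reading, not Hawkins' actual circuit: here each column's hypothesis is reduced to a single label and consensus is a simple majority, whereas the theory describes continuous activity settling via long-range connections.

```python
from collections import Counter

def vote(column_guesses):
    """Combine per-column object hypotheses by simple majority vote.

    column_guesses: list of object labels, one per cortical column.
    Returns the consensus label and its share of the vote.
    """
    tally = Counter(column_guesses)
    label, count = tally.most_common(1)[0]
    return label, count / len(column_guesses)

# Thousands of columns model the same object from partial sensory input;
# most converge on "coffee cup" even though a few disagree.
guesses = ["coffee cup"] * 7 + ["bowl"] * 2 + ["vase"]
label, share = vote(guesses)
# label == "coffee cup", share == 0.7
```

The point of the sketch is the architecture, not the arithmetic: no single column "knows" the object; the unified percept is the agreement that emerges across many independent models.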
Prediction is implemented inside neurons and used to correct models.
Most predictions occur as dendritic spikes within individual neurons, placing them in a ‘ready’ state so they fire slightly earlier than non-predicting neurons; mismatches between predicted and actual input signal where the brain’s model is wrong and drive learning.
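The predict-compare-update loop described above can be sketched as a few lines of toy code. All names here are illustrative assumptions; real dendritic prediction is a biophysical mechanism, not a dictionary lookup.

```python
def predict_and_learn(model, context, actual_input):
    """Toy predict-compare-update loop (illustrative, not Hawkins' circuit).

    model: dict mapping a sensory context to the input it predicts.
    A confirmed prediction corresponds to a "ready" neuron firing first;
    a mismatch signals the model is wrong and triggers an update.
    """
    predicted = model.get(context)
    if predicted == actual_input:
        return "confirmed"            # prediction met: predicting neurons fire early
    model[context] = actual_input     # mismatch drives learning
    return "surprise"

model = {"touch rim": "smooth curve"}
predict_and_learn(model, "touch rim", "smooth curve")   # -> "confirmed"
predict_and_learn(model, "touch rim", "chipped edge")   # -> "surprise"; model updated
```

This captures the quote below: prediction is not the model's goal but its error signal, the way the model discovers where it is wrong.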
Reference frames are the internal coordinate systems of thought.
To predict what a sensor (like a fingertip or retinal patch) will encounter, the brain needs a reference frame for the object or environment; Hawkins argues that mechanisms like grid and place cells were evolutionarily repurposed into cortical columns to provide reference frames for everything from coffee cups to mathematical concepts.
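A minimal sketch of the reference-frame idea, under the assumption (mine, for illustration) that an object model can be caricatured as features stored at coordinates in an object-centric frame: moving a sensor updates its location in that frame, and prediction is a lookup at the new location.

```python
# Toy coffee-cup model: features at locations in an object-centric
# reference frame (coordinates and features are made up for illustration).
cup_model = {
    (0, 0): "handle",
    (0, 5): "rim",
    (3, 0): "smooth side",
}

def predict_feature(model, location, movement):
    """Predict what a sensor will encounter after a movement.

    Update the sensor's location in the object's reference frame,
    then look up the feature the model stores there.
    """
    new_location = (location[0] + movement[0], location[1] + movement[1])
    return new_location, model.get(new_location, "unknown")

loc, feature = predict_feature(cup_model, (0, 0), (0, 5))
# moving up from the handle, the model predicts the rim
```

The same location-plus-movement machinery, which grid and place cells implement for physical space, is what Hawkins proposes cortical columns reuse for objects and abstract concepts.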
Future AI will be powerful but need not share human drives.
Hawkins maintains that neocortex-like AI can model the world and even be conscious without inherently caring about survival, power, or autonomy; dangerous behavior arises not from intelligence itself but from how we embed such systems (goals, embodiment, and especially self-replication) and how humans choose to use them.
WORDS WORTH SAVING
5 quotes
Intelligence is the ability to learn a model of the world.
— Jeff Hawkins
We feel like we’re one person, but in reality there are roughly 150,000 sophisticated modeling systems in your neocortex, all voting.
— Jeff Hawkins
Prediction isn’t the goal of the model; it’s an inherent property of it and the way the model discovers where it’s wrong.
— Jeff Hawkins
People assume intelligent machines will be like us. I’m saying no, they won’t be like us at all unless we build them that way.
— Jeff Hawkins
I don’t want us to be like the dinosaurs—here for tens of millions of years and then gone, with no one ever knowing we existed.
— Jeff Hawkins
High quality AI-generated summary created from speaker-labeled transcript.