Jeff Hawkins: Thousand Brains Theory of Intelligence | Lex Fridman Podcast #25
At a glance
WHAT IT’S REALLY ABOUT
Jeff Hawkins maps how the brain's thousands of models could reinvent intelligence
- Jeff Hawkins argues that true artificial intelligence will only emerge from understanding the neocortex, the brain structure underlying human intelligence, rather than scaling current deep learning methods.
- He presents his Thousand Brains Theory, where thousands of cortical columns each learn full object models using spatial reference frames and vote together to infer what we perceive and think (a minimal voting sketch follows this list).
- Hawkins contrasts this brain-based view with today’s deep learning, highlighting biological mechanisms like sparse, predictive neurons and synaptogenesis that support rapid, continual, and robust learning.
- He believes we are past a key theoretical breakthrough in understanding the neocortex, and that brain-inspired approaches can both advance AI and preserve human knowledge far beyond our species.
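To make the voting idea concrete, here is a minimal sketch in Python. The objects, their (location, feature) models, and the set-intersection voting rule are illustrative assumptions made for this summary, not Hawkins's actual implementation; the point is only that many columns, each with a partial view of the world, can converge on a single object by combining their hypotheses.

```python
# A minimal sketch of cross-column "voting" in the spirit of the
# Thousand Brains Theory. Object names, features, and the
# set-intersection voting rule are illustrative assumptions.

# Each object is modeled as a set of (location, feature) pairs,
# i.e. features at locations in an object-centered reference frame.
OBJECTS = {
    "cup":    {(0, "rim"), (1, "handle"), (2, "flat_bottom")},
    "bowl":   {(0, "rim"), (2, "flat_bottom")},
    "pencil": {(0, "point"), (1, "shaft")},
}

def column_hypotheses(location, feature):
    """One cortical column: every object consistent with sensing
    `feature` at `location` in that object's reference frame."""
    return {name for name, model in OBJECTS.items()
            if (location, feature) in model}

def vote(observations):
    """Columns 'vote' by intersecting their hypothesis sets;
    perception converges once a single object remains."""
    candidates = set(OBJECTS)
    for location, feature in observations:
        candidates &= column_hypotheses(location, feature)
    return candidates

# Two columns sensing different parts of the same object:
print(vote([(0, "rim")]))                 # {'cup', 'bowl'} -- still ambiguous
print(vote([(0, "rim"), (1, "handle")]))  # {'cup'} -- consensus reached
```

Note how ambiguity resolves as more columns contribute: one touch is consistent with several objects, but the intersection of hypotheses narrows quickly, which is why Hawkins says "there isn't one model of a cup; there are thousands."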
IDEAS WORTH REMEMBERING
5 ideas
Understanding the neocortex is central to building true intelligence.
Hawkins insists that without a principled model of how the neocortex works, AI will remain narrow and brittle; brain-inspired architectures are, in his view, the fastest path to general intelligence.
The brain represents the world through thousands of spatial reference frames.
Each small cortical region (or "column") learns complete models of objects and concepts by encoding locations within object-centered reference frames and moving through them over time, then collectively voting to recognize what is sensed.
Time and movement are fundamental to perception and cognition.
Brains don’t classify static snapshots; they continually process changing inputs as we move eyes, hands, and attention, building models by linking sequences of sensations across time rather than via single images.
Abstract thought likely reuses the same spatial machinery as perception.
Evidence from memory palaces and fMRI suggests concepts like birds or mathematical ideas are organized as navigable “spaces” in the cortex, using grid-cell-like mechanisms originally evolved for physical navigation.
Biological neurons are predictive, sparse, and learn via new connections, not fine weight tweaks.
Real neurons use thousands of synapses, sparse population codes, dendritic prediction, and synaptogenesis (plus silent synapses) to support rapid, robust, and continual learning—properties largely absent from standard deep nets.
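That last idea can be sketched roughly in code. This is a loose, minimal illustration in the spirit of HTM-style neuron models, with made-up thresholds and a simplified learning rule: dendritic segments act as pattern detectors over sparse prior activity and put the cell into a predictive state, and learning grows new binary synapses (synaptogenesis) rather than fine-tuning existing weights.

```python
import random

# A minimal sketch of a time-based, predictive neuron. The segment
# threshold, sample size, and learning rule below are illustrative
# assumptions, not a faithful reproduction of Hawkins's model.

SEGMENT_THRESHOLD = 3  # active synapses needed for a dendritic "spike"

class PredictiveNeuron:
    def __init__(self):
        # Each dendritic segment is a set of presynaptic cell ids
        # (binary synapses: present or absent, no graded weights).
        self.segments = []

    def is_predicted(self, prev_active):
        """The cell is predictive if any segment sees enough of its
        synapses on currently active presynaptic cells."""
        return any(len(seg & prev_active) >= SEGMENT_THRESHOLD
                   for seg in self.segments)

    def learn(self, prev_active, n_new=5):
        """Synaptogenesis: grow a new segment by sampling the sparse
        context that preceded this cell's activity."""
        sample = set(random.sample(sorted(prev_active),
                                   min(n_new, len(prev_active))))
        self.segments.append(sample)

neuron = PredictiveNeuron()
context = {2, 7, 19, 42, 63}           # sparse set of active cell ids
neuron.learn(context)                  # one exposure wires up the context
print(neuron.is_predicted(context))    # True -- rapid, one-shot learning
print(neuron.is_predicted({1, 3, 5}))  # False -- unfamiliar context
```

The contrast with a standard artificial point neuron is the payoff: learning here is adding connections after a single exposure, and recognition is a sparse overlap test, which is what makes the mechanism rapid, continual, and robust to noise.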
WORDS WORTH SAVING
5 quotes
We will not be able to create fully intelligent machines until we understand how the human brain works.
— Jeff Hawkins
The neocortex all works on the same principle… language, hearing, touch, vision, engineering are basically built in the same computational substrate.
— Jeff Hawkins
There isn’t one model of a cup. There are thousands of models of this cup.
— Jeff Hawkins
Real neurons in the brain are time-based prediction engines… there’s no concept of this at all in artificial point neurons.
— Jeff Hawkins
What is special about our species is not our genes. It’s our knowledge. That’s the rare thing we should preserve.
— Jeff Hawkins