Lex Fridman Podcast

Dileep George: Brain-Inspired AI | Lex Fridman Podcast #115

Dileep George is a researcher at the intersection of neuroscience and artificial intelligence, co-founder of Vicarious and formerly co-founder of Numenta. From his early work on Hierarchical Temporal Memory to Recursive Cortical Networks to today, Dileep has always sought to engineer intelligence that is closely inspired by the human brain.

Support this channel by supporting our sponsors. Click links, get a discount:
- Babbel: https://babbel.com (use code LEX)
- MasterClass: https://masterclass.com/lex
- Raycon: https://buyraycon.com/lex

EPISODE LINKS:
- Dileep's Twitter: https://twitter.com/dileeplearning
- Vicarious Research: https://www.vicarious.com/science

PODCAST INFO:
- Podcast website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
- 0:00 - Introduction
- 4:50 - Building a model of the brain
- 17:11 - Visual cortex
- 27:50 - Probabilistic graphical models
- 31:35 - Encoding information in the brain
- 36:56 - Recursive Cortical Network
- 51:09 - Solving CAPTCHAs algorithmically
- 1:06:48 - Hype around brain-inspired AI
- 1:18:21 - How does the brain learn?
- 1:21:32 - Perception and cognition
- 1:25:43 - Open problems in brain-inspired AI
- 1:30:33 - GPT-3
- 1:40:41 - Memory
- 1:45:08 - Neuralink
- 1:51:32 - Consciousness
- 1:57:59 - Book recommendations
- 2:06:49 - Meaning of life

CONNECT:
- Subscribe to this YouTube channel
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/LexFridmanPage
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Lex Fridman (host) · Dileep George (guest)
Aug 13, 2020 · 2h 10m

At a glance

WHAT IT’S REALLY ABOUT

Dileep George explains brain-inspired AI, inference, and human-like vision

Lex Fridman and Dileep George explore how insights from neuroscience can guide the design of artificial intelligence, focusing especially on perception and vision. George critiques brute‑force brain simulations like Blue Brain and instead advocates for explicit computational theories that explain how neural microcircuits implement inference and world modeling. He describes his Recursive Cortical Network (RCN), a brain‑inspired vision system that uses feedback, lateral connections, and probabilistic inference to solve tasks like CAPTCHAs with very little training data. The conversation broadens to concept learning, grounded language understanding, memory, GPT‑style large language models, brain–computer interfaces, and what it really means to “understand” the brain if our goal is to build AGI.

IDEAS WORTH REMEMBERING

5 ideas

You can’t build a brain by simulating neurons without a theory of computation.

Projects like Blue Brain try to wire up detailed biophysical neuron models and hope intelligence emerges, but without a functional theory—what each structure computes and why—there’s no way to debug or guide the system when it fails.

Feedback and lateral connections are central to how the cortex performs inference.

Unlike purely feedforward deep nets, the visual cortex is densely interconnected with feedback and lateral pathways that let top–down expectations shape perception, explain illusions, and resolve ambiguity via iterative inference over competing hypotheses.
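The idea of a percept as a blend of top-down expectation and bottom-up evidence can be sketched as a simple Bayesian update over competing hypotheses. This is a toy illustration only, not George's cortical model; the numbers are made up.

```python
import numpy as np

def percept(prior, likelihood):
    """Combine a top-down expectation (prior) with bottom-up sensory
    evidence (likelihood) into a posterior belief over hypotheses."""
    post = np.asarray(prior, float) * np.asarray(likelihood, float)
    return post / post.sum()  # normalize so beliefs sum to 1

# Sensory evidence weakly favors hypothesis B, but a strong top-down
# expectation favors A -- the final percept still tilts toward A,
# the kind of effect that can produce visual illusions.
belief = percept(prior=[0.9, 0.1], likelihood=[0.4, 0.6])
print(belief.round(3))  # -> [0.857 0.143]
```

In the cortex this combination is iterative, with feedback and lateral messages passed until the competing interpretations settle; the one-step update above only shows the direction of the effect.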

Brain-inspired graphical models can achieve strong data efficiency and robustness.

George’s Recursive Cortical Network uses probabilistic graphical models with feedback and lateral constraints to jointly perform recognition and segmentation, breaking many CAPTCHAs and achieving high MNIST accuracy from tens–hundreds of examples rather than tens of thousands.
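How lateral constraints help disambiguate overlapping glyphs can be shown with a miniature stand-in: a two-node Markov random field where per-position letter scores are combined with a pairwise compatibility term, and the joint MAP assignment is found by enumeration. This is a deliberately tiny illustration with invented scores, not the actual RCN algorithm.

```python
import itertools

letters = ["c", "l", "d"]

# Bottom-up unary scores for two overlapping glyph positions
# (e.g., ambiguous strokes in a CAPTCHA). Invented numbers.
unary = [
    {"c": 0.5, "l": 0.4, "d": 0.1},   # position 0
    {"c": 0.1, "l": 0.5, "d": 0.4},   # position 1
]

# Lateral compatibility: some adjacent pairs are likelier than others.
pairwise = {("c", "l"): 0.9, ("l", "d"): 0.7}

def score(a, b):
    """Joint score = unary evidence at each position times the lateral term."""
    return unary[0][a] * unary[1][b] * pairwise.get((a, b), 0.2)

# MAP inference by brute-force enumeration over joint assignments.
best = max(itertools.product(letters, letters), key=lambda p: score(*p))
print(best)  # -> ('c', 'l')
```

Even though position 1's evidence alone is ambiguous between "l" and "d", the lateral term settles the joint reading; RCN applies this kind of constraint propagation at scale, jointly segmenting and recognizing.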

Perception should be designed as part of a full cognition stack, not a standalone preprocessor.

George argues perception must be top‑down controllable and generative so higher-level cognition can “imagine,” manipulate internal models, and query perceptual knowledge—mirroring how humans visualize and mentally simulate scenarios from language or thought.

Grounded concepts and world models can’t be learned from text alone.

Text captures correlations in language, not the full causal, sensorimotor structure of the world; systems like GPT-3 can model text impressively but lack the ability to run physical simulations or access rich common-sense knowledge encoded in perception and action.

WORDS WORTH SAVING

5 quotes

Getting a single neuron's model 99% right is like getting a transistor model right and then trying to build a microprocessor without understanding Boolean logic.

Dileep George

What we are seeing is not just a feedforward thing. We are constantly projecting our expectations onto the world, and the final percept is a combination of that projection with the actual sensory input.

Dileep George

To really solve CAPTCHA, you have to solve the whole problem of intelligence.

Dileep George

Language is simulation-controlled. Your perceptual and motor systems are building a simulation of the world, and language is a way of querying and controlling that simulation.

Dileep George

I don't treat brain-inspired as a marketing term. I'm looking into the details of biology and grappling with them.

Dileep George

- Limits of brute-force brain simulation (e.g., Blue Brain) versus theory-driven models
- Cortical microcircuits, feedback, and inference as a model of perception
- Recursive Cortical Networks and brain-inspired computer vision (e.g., CAPTCHAs, MNIST)
- Concept learning, world models, and simulation-based language understanding
- Episodic memory, hippocampus–cortex interaction, and abstraction
- Strengths and limits of large language models like GPT-3
- Brain–computer interfaces (Neuralink, Paradromics) and long-term implications

High-quality AI-generated summary created from a speaker-labeled transcript.
