
Dileep George: Brain-Inspired AI | Lex Fridman Podcast #115
Lex Fridman (host), Dileep George (guest), Narrator
Dileep George explains brain-inspired AI, inference, and human-like vision
Lex Fridman and Dileep George explore how insights from neuroscience can guide the design of artificial intelligence, focusing especially on perception and vision. George critiques brute‑force brain simulations like Blue Brain and instead advocates for explicit computational theories that explain how neural microcircuits implement inference and world modeling. He describes his Recursive Cortical Network (RCN), a brain‑inspired vision system that uses feedback, lateral connections, and probabilistic inference to solve tasks like CAPTCHAs with very little training data. The conversation broadens to concept learning, grounded language understanding, memory, GPT‑style large language models, brain–computer interfaces, and what it really means to “understand” the brain if our goal is to build AGI.
Key Takeaways
You can’t build a brain by simulating neurons without a theory of computation.
Projects like Blue Brain try to wire up detailed biophysical neuron models and hope intelligence emerges, but without a functional theory—what each structure computes and why—there’s no way to debug or guide the system when it fails.
Feedback and lateral connections are central to how the cortex performs inference.
Unlike purely feedforward deep nets, the visual cortex is densely interconnected with feedback and lateral pathways that let top–down expectations shape perception, explain illusions, and resolve ambiguity via iterative inference over competing hypotheses.
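To make the iterative-inference picture concrete, here is a minimal Python sketch. It is an illustration of the general idea only, not George's cortical model or any actual RCN code, and all templates and numbers are invented. Top-down expectations fill in an occluded part of the input, and the posterior over two competing hypotheses is re-estimated from the completed percept:

```python
import numpy as np

# Toy sketch (not RCN): top-down feedback shapes perception.
# Missing sensory values are filled in from the current best
# hypothesis, then the hypothesis posterior is re-estimated
# from the completed percept.

templates = {
    "A": np.array([1.0, 1.0, 0.0, 0.0]),  # what hypothesis A predicts
    "B": np.array([1.0, 0.0, 0.0, 1.0]),  # what hypothesis B predicts
}
obs = np.array([0.9, 0.8, np.nan, np.nan])  # partially occluded input
posterior = {h: 0.5 for h in templates}     # flat prior over hypotheses

for _ in range(3):
    # Top-down: predict the full input as a posterior-weighted template mix.
    prediction = sum(posterior[h] * t for h, t in templates.items())
    # Percept = sensory input where available, expectation where occluded.
    percept = np.where(np.isnan(obs), prediction, obs)
    # Bottom-up: rescore each hypothesis against the completed percept.
    scores = {h: np.exp(-np.sum((percept - t) ** 2))
              for h, t in templates.items()}
    z = sum(scores.values())
    posterior = {h: s / z for h, s in scores.items()}

print(posterior)  # converges toward hypothesis "A"
```

Note that the percept here is literally a blend of projection and sensory input, echoing George's point that what we see combines top-down expectation with the actual signal.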
Brain-inspired graphical models can achieve strong data efficiency and robustness.
George’s Recursive Cortical Network uses probabilistic graphical models with feedback and lateral constraints to jointly perform recognition and segmentation, breaking many CAPTCHAs and achieving high MNIST accuracy from tens to hundreds of examples rather than tens of thousands.
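For a rough feel of what "jointly perform recognition and segmentation" means, the toy dynamic program below (our own simplification, not the RCN algorithm, which uses far richer probabilistic graphical models) parses a noisy 1-D "CAPTCHA" by choosing segment boundaries and letter labels at the same time:

```python
import numpy as np

# Toy sketch: joint recognition and segmentation by dynamic
# programming. The signal is a concatenation of letter templates
# plus noise; the parser searches over segmentations, scoring
# each candidate segment against every template.

letters = {
    "a": np.array([0.0, 1.0, 0.0]),
    "b": np.array([1.0, 1.0, 1.0, 0.0]),
}

rng = np.random.default_rng(0)
truth = ["b", "a", "a"]
signal = np.concatenate([letters[c] for c in truth])
signal = signal + rng.normal(0.0, 0.1, size=signal.shape)

n = len(signal)
best = [(np.inf, [])] * (n + 1)  # best (cost, labels) for signal[:i]
best[0] = (0.0, [])

for i in range(n):
    cost_i, labels_i = best[i]
    if np.isinf(cost_i):
        continue  # prefix i is unreachable by any parse
    for name, tpl in letters.items():
        j = i + len(tpl)
        if j > n:
            continue
        cost = cost_i + np.sum((signal[i:j] - tpl) ** 2)
        if cost < best[j][0]:
            best[j] = (cost, labels_i + [name])

print(best[n][1])  # recovers ['b', 'a', 'a']
```

Because labels and boundaries are scored together, an ambiguous stroke is resolved by whichever overall parse explains the whole signal best, rather than by classifying pre-cut segments in isolation.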
Perception should be designed as part of a full cognition stack, not a standalone preprocessor.
George argues perception must be top‑down controllable and generative so higher-level cognition can “imagine,” manipulate internal models, and query perceptual knowledge—mirroring how humans visualize and mentally simulate scenarios from language or thought.
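A tiny sketch of that duality, using hypothetical concept templates of our own invention: the same generative model scores inputs bottom-up (perception) and is sampled top-down to "imagine" percepts on demand:

```python
import numpy as np

# Hypothetical sketch: one generative model serving two directions.
# Bottom-up it explains inputs; top-down it is sampled so that
# higher-level cognition can query perceptual knowledge.

rng = np.random.default_rng(1)
model = {
    "square": np.array([1.0, 1.0, 1.0, 1.0]),
    "wedge": np.array([1.0, 0.6, 0.3, 0.0]),
}

def perceive(x):
    """Bottom-up: return the concept whose rendering best explains x."""
    return min(model, key=lambda c: np.sum((x - model[c]) ** 2))

def imagine(concept):
    """Top-down: render the concept as a (noisy) simulated percept."""
    return model[concept] + rng.normal(0.0, 0.05, size=4)

mental_image = imagine("wedge")  # cognition queries perception
print(perceive(mental_image))    # the imagined percept is recognized
```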
Grounded concepts and world models can’t be learned from text alone.
Text captures correlations in language, not the full causal, sensorimotor structure of the world; systems like GPT-3 can model text impressively but lack the ability to run physical simulations or access rich common-sense knowledge encoded in perception and action.
Episodic memory and statistical world models are distinct but coupled.
The brain appears to store one-off experiences as hippocampal sequences that index into a more general cortical model; replay lets us re-simulate past episodes, assess their consequences, and generalize lessons to structurally similar future situations.
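Here is a hypothetical sketch of that division of labor (the structure and names are our assumptions, not a model presented in the episode): a hippocampus-like store keeps one-off trajectories as indices into a shared world model, and replay re-simulates an episode to assess its consequences:

```python
import random

# Hypothetical sketch of the episodic-index idea: one-off episodes
# are stored as sequences of pointers into a shared "cortical"
# transition model, and replay re-simulates a stored episode to
# extract a lesson (here, the total reward along the path).

# Shared statistical world model: state -> possible next states.
cortical_model = {
    "home": ["street"],
    "street": ["cafe", "office"],
    "cafe": ["office"],
    "office": [],
}
reward = {"home": 0.0, "street": 0.0, "cafe": 1.0, "office": 0.5}

episodes = []  # hippocampus-like store of one-off state sequences

def record_episode(start, steps, rng):
    """Live one trajectory through the world model and store its index."""
    path, state = [start], start
    for _ in range(steps):
        options = cortical_model[state]
        if not options:
            break
        state = rng.choice(options)
        path.append(state)
    episodes.append(path)

def replay(episode):
    """Re-simulate a stored episode and score its consequences."""
    return sum(reward[s] for s in episode)

rng = random.Random(0)
record_episode("home", 3, rng)
record_episode("home", 3, rng)
for ep in episodes:
    print(ep, "->", replay(ep))
```

The point of the separation is that the episode store stays cheap (just indices), while generalization lives in the shared model that many episodes index into.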
Brain inspiration should guide principles, not enforce strict biological copying.
George stresses using neuroscience to uncover computational principles (e.g., feedback, lateral connections, and probabilistic inference) rather than copying biological detail for its own sake.
Notable Quotes
“Getting a single neuron's model 99% right is like getting a transistor model right and then trying to build a microprocessor without understanding Boolean logic.”
— Dileep George
“What we are seeing is not just a feedforward thing. We are constantly projecting our expectations onto the world, and the final percept is a combination of that projection with the actual sensory input.”
— Dileep George
“To really solve CAPTCHA, you have to solve the whole problem of intelligence.”
— Dileep George
“Language is simulation-controlled. Your perceptual and motor systems are building a simulation of the world, and language is a way of querying and controlling that simulation.”
— Dileep George
“I don't treat brain-inspired as a marketing term. I'm looking into the details of biology and grappling with them.”
— Dileep George
Questions Answered in This Episode
How could current deep learning architectures be modified to incorporate the kind of iterative inference and feedback George describes from cortical microcircuits?
What experimental neuroscience results would most strongly confirm or falsify the specific microcircuit hypotheses behind Recursive Cortical Networks?
Where is the practical limit of data efficiency for brain-inspired models—could we ever approach human-level one-shot learning on complex visual tasks?
How might grounded, simulation-based language systems interact with large language models like GPT-4 or GPT-5—are they complementary or fundamentally at odds?
What are the most realistic ethical and technical challenges of high-bandwidth brain–computer interfaces, especially if they start rewiring cortical circuits over long timescales?
Transcript Preview
The following is a conversation with Dileep George, a researcher at the intersection of neuroscience and artificial intelligence, co-founder of Vicarious with Scott Phoenix, and formerly co-founder of Numenta with Jeff Hawkins, who's been on this podcast, and Donna Dubinsky. From his early work on hierarchical temporal memory to recursive cortical networks to today, Dileep's always sought to engineer intelligence that is closely inspired by the human brain. As a side note, I think we understand very little about the fundamental principles underlying the function of the human brain, but the little we do know gives hints that may be more useful for engineering intelligence than any idea in mathematics, computer science, physics, and scientific fields outside of biology. And so the brain is a kind of existence proof that says it's possible, keep at it. I should also say that brain-inspired AI is often overhyped and used as fodder, just as quantum computing, for, uh, marketing speak. But I'm not afraid of exploring these sometimes overhyped areas, since where there's smoke, there's sometimes fire. Quick summary of the ads. Three sponsors, Babbel, Raycon earbuds, and MasterClass. Please consider supporting this podcast by clicking the special links in the description to get the discount. It really is the best way to support this podcast. If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, support on Patreon, or connect with me on Twitter @LexFridman. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. This show is sponsored by Babbel, an app and website that gets you speaking in a new language within weeks. Go to babbel.com and use code LEX to get three months free. They offer 14 languages, including Spanish, French, Italian, German, and yes, Russian. Daily lessons are 10 to 15 minutes, super easy, effective, designed by over 100 language experts. Let me read a few lines from the Russian poem, (Russian) by Alexander Blok that you'll start to understand if you sign up to Babbel. (Russian). Now, I say that you'll only start to understand this poem because Russian starts with language and ends with vodka. Now, the latter part is definitely not endorsed or provided by Babbel and will probably lose me the sponsorship, but once you graduate from Babbel, you can enroll in my advanced course of late-night Russian conversation over vodka. I have not yet developed an app for that. It's in progress. So get started by visiting babbel.com and use code LEX to get three months free. This show is sponsored by Raycon earbuds. Get them at buyraycon.com/lex. They've become my main method of listening to podcasts, audiobooks, and music when I run, do push-ups and pull-ups, or just live life. In fact, I often listen to brown noise with them when I'm thinking deeply about something. It helps me focus. They're super comfortable, pair easily, great sound, great bass, six hours of playtime. I've been putting in a lot of miles to get ready for a potential ultra-marathon and listening to audiobooks on World War II. The sound is rich and really comes in clear. So again, get them at buyraycon.com/lex. This show is sponsored by MasterClass. Sign up at masterclass.com/lex to get a discount and to support this podcast. When I first heard about MasterClass, I thought it was too good to be true. I still think it's too good to be true.
For 180 bucks a year, you get an all-access pass to watch courses from, to list some of my favorites, Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, creator of SimCity and The Sims, on game design. Every time I do this read (laughs), I really want to play a city builder game. Carlos Santana on guitar, Garry Kasparov on chess, Daniel Negreanu on poker, and many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money. By the way, you can watch it on basically any device. Once again, sign up at masterclass.com to get a discount and to support this podcast. And now, here's my conversation with Dileep George. Do you think we need to understand a brain in order to build it?