Jay McClelland: Neural Networks and the Emergence of Cognition | Lex Fridman Podcast #222

Lex Fridman Podcast · Sep 20, 2021 · 2h 31m

Lex Fridman (host), Jay McClelland (guest)

Neural networks as bridges between biology and cognition
History of connectionism, PDP, and the invention of backpropagation
Emergence: evolution, development, cognition, and complex systems
Distributed representation, semantic cognition, and semantic dementia
Intuition vs. formalism in mathematics and language
Symbolic AI vs. connectionism and modern deep learning
Personal meaning, intrinsic motivation, and scientific legacy

In this episode of the Lex Fridman Podcast, host Lex Fridman speaks with cognitive scientist Jay McClelland about neural networks, the emergent mind, and the limits of human understanding.

Neural Networks, Emergent Mind, and the Limits of Human Understanding

Jay McClelland traces how neural networks bridge biology and cognition, arguing that thought and meaning emerge from massively parallel, connectionist systems rather than explicit symbolic rules. He recounts the early days of connectionism with Rumelhart and Hinton, including the birth of backpropagation and interactive activation models, and how these reshaped cognitive science and AI.

The conversation explores emergence across evolution, brain development, language, and mathematics, contrasting intuitive, distributed representations with formal logic and symbolic AI. McClelland also discusses neurological conditions like semantic dementia as windows into how meaning is represented and lost in the brain.

He reflects on the influence of formal training on how experts misunderstand ordinary cognition, the interplay of intuition and proof in mathematics, and what large neural networks suggest about creativity and insight. The discussion ends with personal themes: intrinsic motivation, legacy, degeneration versus death, and the idea that humans create meaning rather than discover a pre-given one.

Key Takeaways

Neural networks offer a mechanistic link between brain biology and thought.

By modeling simple neuron-like units connected in parallel and layers, neural networks show how high-level cognition can emerge from low-level biological processes, dissolving the old Cartesian separation between body and mind.

Connectionism encodes knowledge in weights, not in explicit symbols or rules.

In McClelland’s view, systems like interactive activation models and modern CNNs demonstrate that what we call ‘knowledge’ is distributed across connections; there is no internal dictionary, only patterns of connectivity that shape input–output behavior.
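The "no internal dictionary" point can be made concrete with a toy sketch (my own construction, loosely in the spirit of the interactive activation model, not code from the episode): a word unit is nothing but its pattern of connections to position-specific letter units, and recognition is just evidence summed through those weights.

```python
import numpy as np

# Hypothetical mini-vocabulary and letter inventory for illustration.
words = ["time", "tame", "mite"]
letters = "aeimt"

# W[w, p, l] = 1 if word w has letter l at position p, else 0:
# the word unit's entire "knowledge" is this connectivity pattern.
W = np.zeros((len(words), 4, len(letters)))
for wi, w in enumerate(words):
    for p, ch in enumerate(w):
        W[wi, p, letters.index(ch)] = 1.0

def word_activations(letter_evidence):
    # Bottom-up pass: each word unit sums the evidence arriving
    # over its connections from position-specific letter units.
    return np.tensordot(W, letter_evidence, axes=([1, 2], [0, 1]))

# Perfect evidence for the letters t-i-m-e in positions 1-4.
ev = np.zeros((4, len(letters)))
for p, ch in enumerate("time"):
    ev[p, letters.index(ch)] = 1.0

acts = word_activations(ev)
best = words[int(np.argmax(acts))]
print(best)  # "time" wins purely through its connections
```

There is no lookup table anywhere: "time" beats "tame" (3 shared letter positions) and "mite" (2) only because its connections match the input best.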

Backpropagation arose from reframing learning as optimization, not biology mimicry.

Hinton’s push to ‘define an objective function and minimize error’ led Rumelhart to generalize the single-layer delta rule into backpropagation, propagating error signals backward through layers—now the core of deep learning.
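As a rough illustration of that reframing (my own sketch, not code from the episode): the delta rule adjusts output weights by the error times the activation derivative, and backpropagation applies the same kind of step after sending the error signal backward through the hidden layer. A minimal NumPy version on XOR:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# Small two-layer network: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Delta rule at the output layer: error times activation derivative.
    d_out = (out - y) * out * (1 - out)
    # Backpropagation: the same error signal, sent backward through W2.
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient steps that reduce the squared-error objective.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out).ravel())
```

The backward step is just the chain rule applied to the error objective, which is why defining learning as optimization made multi-layer training fall out naturally.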

Emergent phenomena can be real and important without being explicitly represented.

From evolution and developmental stages to insights in math and language, McClelland argues for a ‘radical emergentist’ stance: high-level structures (thoughts, concepts, sand dunes) are real patterns arising from lower-level dynamics, even if they aren’t discrete symbols inside the system.

Semantic dementia reveals how distributed meaning representations degrade.

Patients who progressively lose semantic distinctions (e.g., …)

Formal training can distort our intuitions about ordinary cognition.

Experts in linguistics or mathematics often project their systematized, formal intuitions onto ‘the human mind’ at large, but studies show lay intuitions differ; McClelland warns about expert blind spots and overgeneralizing from enculturated reasoning styles.

Intrinsic motivation and collaborative synergy drive lasting scientific impact.

McClelland emphasizes following what genuinely excites you, resisting narrow labels (e.g., …)

Notable Quotes

If I think about the mind in terms of a neural network, it will help me answer the questions about the mind that I'm trying to answer.

Jay McClelland

The unit for the word ‘time’ isn’t a unit for the word ‘time’ for any other reason than it’s got the connections to the letters that make up the word ‘time.’

Jay McClelland

It is by logic that we prove, but by intuition that we discover.

Henri Poincaré (quoted by Jay McClelland)

I used to sometimes tell people I was a radical eliminative connectionist… I don’t like the word ‘eliminative’ anymore… I would call myself a radical emergentist connectionist.

Jay McClelland

Meaning is what we make of it… we are an emergent result of a process that happened naturally without guidance.

Jay McClelland

Questions Answered in This Episode

If concepts and meanings are distributed across connections, how far can we push interpretability before we lose fidelity to how cognition actually works?

Can large-scale neural networks ever truly capture the kind of intuitive mathematical insight that Poincaré and other great mathematicians describe, or is there a qualitative gap?

What does semantic dementia imply about how robust—or fragile—our own everyday conceptual categories really are?

How might we design AI systems that combine the strengths of formal symbolic reasoning with the emergent, intuitive power of connectionist networks without merely bolting one onto the other?

To what extent does formal education in logic, math, or linguistics reshape the architecture of our own cognition, and should that influence how we teach these subjects?

Transcript Preview

Lex Fridman

The following is a conversation with Jay McClelland, a cognitive scientist at Stanford and one of the seminal figures in the history of artificial intelligence, and specifically neural networks, having written the Parallel Distributed Processing book with David Rumelhart, who co-authored the Backpropagation paper with Geoff Hinton. In their collaborations, they've paved the way for many of the ideas at the center of the neural network-based machine learning revolution of the past 15 years. To support this podcast, please check out our sponsors in the description. This is the Lex Fridman podcast, and here is my conversation with Jay McClelland. You are one of the seminal figures in the history of neural networks at the intersection of, uh, cognitive psychology and computer science. What to you has, over the decades, emerged as the most beautiful aspect about neural networks, both artificial and biological?

Jay McClelland

The fundamental thing I think about with neural networks is how they allow us to link biology with the mysteries of thought. And, um, you know, in the. When I was first entering the field myself in the late '60s, early '70s, cogni- cognitive psychology had just become a field. There was a book published in '67 called Cognitive Psychology. Um, and the author said that, you know, the study of the nervous system was only of peripheral interest. It wasn't going to tell us anything about the mind, and I didn't agree with that. I, I always felt, "Oh, look, I'm, I'm a physical being." I... From dust to dust, you know, ashes to ashes, and somehow I emerged from that. Um-

Lex Fridman

So, so that's really interesting. So there was a sense with cognitive psychology that in understanding the sort of neuronal structure of things, you're not going to be able to understand the mind? And then your sense is if we study these neural networks, we might be able to get at least very close to understanding the fundamentals of the human mind.

Jay McClelland

Yeah. I used to think, um, or I used to talk about the idea of awakening from the Cartesian dream.

Lex Fridman

(laughs)

Jay McClelland

So Descartes, um, you know, thought about these things, right? He, he was walking in the gardens of Versailles one day, and he stepped on a stone and a statue moved. And he walked a little further, he stepped on another stone and another statue moved, and he, like, "Why did the statue move when I stepped on the stone?" And he went and talked to the gardeners, and he found out that they had a hydraulic system that allowed the physical contact with the stone to cause water to flow in various directions, which caused water to flow under the statue and moved the statue. And he used this as the beginnings of a theory about how animals act, and he had this notion that these little fibers that people had identified that weren't carrying the blood, you know, were these little hydraulic tubes that if you touch something, there would be pressure and it would send a signal of pressure to the other parts of the system and that would cause action. So he had a mechanistic theory of animal behavior, and he thought that the human had this animal body, but that some divine something else had to have come down and been placed in him to give him the ability to think, right? So the physical world includes the body in action, but it doesn't include thought according to Descartes, right?
