John Hopfield: Physics View of the Mind and Neurobiology | Lex Fridman Podcast #76
100 min read · 20,173 words
- 0:00 – 2:35
Introduction
- LFLex Fridman
The following is a conversation with John Hopfield, professor at Princeton whose life's work weaved beautifully through biology, chemistry, neuroscience, and physics. Most crucially, he saw the messy world of biology through the piercing eyes of a physicist. He's perhaps best known for his work on associative neural networks, now known as Hopfield networks. They were one of the early ideas that catalyzed the development of the modern field of deep learning. As his 2019 Franklin Medal in Physics award states, "He applied concepts of theoretical physics to provide new insights on important biological questions in a variety of areas, including genetics and neuroscience with significant impact on machine learning." And as John says in his 2018 article titled Now What, his accomplishments (laughs) have often come about by asking that very question, "Now what?" And often responding by a major change of direction. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter, @lexfridman, spelled F-R-I-D-M-A-N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as $1. Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of fractional orders is, to me, an algorithmic marvel. 
So big props to the Cash App engineers for solving a hard problem that in the end provides an easy interface that takes a step up the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier. So again, if you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10 and Cash App will also donate $10 to FIRST, one of my favorite organizations that is helping advance robotics and STEM education for young people around the world. And now here's my conversation with John Hopfield.
- 2:35 – 8:49
Difference between biological and artificial neural networks
- LFLex Fridman
What difference between biological neural networks and artificial neural networks is most captivating and profound to you? At the higher philosophical level. Let's not get technical just yet.
- JHJohn Hopfield
One of the things that very much intrigues me is the fact that neurons have all kinds of components, properties to them, and in evolutionary biology, if you have some little quirk in how y- y- how a molecule works or how a cell works, and it can be made use of, evolution will sharpen it up and make it into a f- a useful feature rather than a glitch. And so you expect in neurobiology for evolution to have captured all kinds of possibilities of getting neurons, of like how you get neurons to do things for you, and that aspect has been completely suppressed in artificial neural networks.
- LFLex Fridman
Mm-hmm. So the glitches become features in the m- in the biological neural network?
- JHJohn Hopfield
They, they, they can. Look, let, let me take one of the things that I used to do research on. If you take things which oscillate and have rhythms which are sort of close to each other, under some circumstances these things will have a phase transition and suddenly the rhythm will... everybody will fall into step. There was a marvelous physical example of that in the Millennium Bridge across the Thames River about, uh, built about 2001.
- LFLex Fridman
Mm-hmm.
- JHJohn Hopfield
And pedestrians walking across, pedestrians don't walk s- synchronized, they don't walk in lock, lockstep. But they're all walking about the same frequency and the bridge could sway at that frequency and the slight sway made pedestrians tend a little bit to lock into step and after a while, the bridge was oscillating back and forth and the pedestrians were walking in step to it.
- LFLex Fridman
Wow.
- JHJohn Hopfield
And you could see it in the movies made out at the bridge. And the engineers made a simple-minded mistake, they had assumed when you walk it's step, step, step and it's back and forth motion. But when you walk it's also right foot, left foot, side to side motion. And it's the side to side motion for which the bridge was strong enough, but it wasn't... it wasn't stiff enough. And as a result, you would feel the motion and you'd fall into step with it. And people were very uncomfortable with it. They closed the bridge for two years while they-
- LFLex Fridman
(laughs)
- JHJohn Hopfield
... while they built stiffening for it. Now nerves... Look, nerve cells produce action potentials. You have a bunch of cells which are loosely coupled together producing action potentials at the same rate, there'll be some circumstances under which these things can lock together.
- LFLex Fridman
Mm-hmm.
- JHJohn Hopfield
Other circumstances in which they won't. Well, if they fire together, y- you can be sure that the other cells are gonna notice it. So you could make a computational feature out of this in your... in an evolving brain. Most artificial neural networks don't even have action potentials, let alone have, have the poten- possibility for synchronizing them.
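The phase-locking John describes, for both the Millennium Bridge pedestrians and loosely coupled neurons firing at similar rates, is commonly studied with the Kuramoto model. A minimal sketch (not from the conversation; the oscillator count, coupling strength, and frequency spread are all illustrative):

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt=0.01):
    """One Euler step of the Kuramoto model: each oscillator's phase
    is pulled toward the phases of the others with coupling strength K."""
    n = len(theta)
    coupling = (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    """|mean of e^{i*theta}|: near 0 when incoherent, near 1 when locked."""
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 50)   # random initial phases
omega = rng.normal(1.0, 0.05, 50)       # similar, not identical, natural frequencies

for _ in range(5000):
    theta = kuramoto_step(theta, omega, K=2.0)  # coupling well above threshold

print(order_parameter(theta))  # close to 1: the population has fallen into step
```

With coupling set to zero the order parameter stays low; above a critical coupling the population snaps into synchrony, which is the phase transition John mentions.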
- LFLex Fridman
And you mentioned the evolutionary process. So there...... the evolutionary process that builds on top of biological systems leverages that, the, the weird mess of it somehow. So, w- how do you make sense of that ability to leverage all the different kinds of complexities in the, in the biological brain?
- JHJohn Hopfield
Well, look, in the bi- at the biological molecule level, you have a piece of DNA which enc- would enc- encode for a particular protein. You could duplicate that piece of DNA and now one part of it encodes for that protein, but the other one could itself change a little bit and thus start coding for a molecule which is slightly different. Now, if that molecule was just slightly different, had an e- a function which helped any old chemical reaction that was important to the cell, it would go ahead and let that ev- evolve, and evolution would slowly im- improve that function. And so you have the possibility of duplicating and then having things drift apart. One of them retain the old function, the other one do something new for you. And there's evolutionary pressure to improve. Look, there is in computers too, but its improvement has to do with the closing of some companies and the opening of others.
- LFLex Fridman
(laughs) Yeah.
- JHJohn Hopfield
The evolutionary process looks a little different.
- LFLex Fridman
Yeah.
- JHJohn Hopfield
Uh-
- LFLex Fridman
Similar time scale, perhaps. No-
- JHJohn Hopfield
Much, much shorter in time scale.
- LFLex Fridman
Companies close, yeah, go bankrupt, and they're born. Yeah. Shorter. But not much shorter. Some, some company lasts a century. Couples... But yeah, you're right. I mean if you think of companies as a single organism that builds and you're on that, yeah, that's a fascinating, uh, dual correspondence there between biological organisms-
- JHJohn Hopfield
And, and, and companies have difficulty having a new product competing with an old product.
- LFLex Fridman
Yeah.
- JHJohn Hopfield
And, um...
- LFLex Fridman
(laughs)
- JHJohn Hopfield
When IBM built its first PC, you've probably read th- read the book, they made a little isolated internal unit to make the PC. And for the first time in IBM's history, they didn't insist that you build it out of IBM components.
- LFLex Fridman
Hmm.
- JHJohn Hopfield
But they understood that they could get into this market, which is a very different thing, by completely changing their culture. And biology finds other markets in a more adaptive way.
- LFLex Fridman
(laughs) Yeah, it's better at it. It's better at that kind of integration.
- 8:49 – 13:45
Adaptation
- LFLex Fridman
So maybe you've already said it, but what to you is the most beautiful aspect or mechanism of the human mind? Is it the adaptive, the ability to adapt as you've described? Or is there some other little quirk that you particularly like?
- JHJohn Hopfield
Adaptation is everything when you get down to it. But the difference, uh, there's diff- there are differences b- between adaptation where your learning goes on only over generations and over evolutionary time, or as your learning goes on at the time scale of when an individual must learn from the environment during that individual's lifetime. And biology has both kinds of learning in it, and the thing which makes neurobiology hard is that it's a mathematical system, as it were, built on this other kind of evolutionary system.
- LFLex Fridman
W- what do you mean by mathematical system? Where (laughs) , where is the math in the biology?
- JHJohn Hopfield
Well, when you talk to a computer scientist about neural networks, it's all math.
- LFLex Fridman
(laughs)
- JHJohn Hopfield
The fact that biology actually came about from evolution, and the thing, and the fact that biology is about a system which you can build in three dimensions.
- LFLex Fridman
Mm-hmm.
- JHJohn Hopfield
If you look at computer chips, computer chips are basically two-dimensional structures-
- LFLex Fridman
Yeah.
- JHJohn Hopfield
...maybe 2.1 dimensions, but they really have difficulty doing three-dimensional wiring. Biology's, uh, biology is, the neocortex is actually also sheet-like and it sits on top of white matter which is about 10 times the volume of the gray matter, and contains all of what you might call the wires. But there's a hu- huge, the effect, the effect of computer structure on what is easy and what is hard is immense.
- LFLex Fridman
So-
- JHJohn Hopfield
And biology does, it makes some things easy that are very difficult to understand how to do computationally. On the other hand, you can't do simple floating-point arithmetic because it's awfully stupid.
- LFLex Fridman
Yeah. And you're saying this kind of three-dimensional complicated structure, uh, makes, it's still math, it's still doing math.
- JHJohn Hopfield
It's-
- LFLex Fridman
The kind of math it's doing enables you to solve problems of a very different kind.
- JHJohn Hopfield
That's right. That's right.
- LFLex Fridman
So you mentioned two kinds of adaptation, the evolutionary adaptation at the, and the adaptation or learning at the scale of a single human life. Which do you, uh, which is, uh, particularly beautiful to you and interesting from a research and from just a human perspective? And which is more powerful?
- JHJohn Hopfield
I find things most interesting that I begin to see how to get into the edges- edges of them and tease them apart a little bit and see how they work. And since I can't see the evolutionary process going on, I w- I am in awe of it, but I find it just a, a, a black hole as far as trying to understand what to do. And so in a certain sense, I'm in awe of it, but I couldn't be interested in working on it.
- LFLex Fridman
The human life's timescale is, however, a thing you can tease apart and study.
- JHJohn Hopfield
Yeah. You can do the... there's developmental neurobiology which understands how the connections, uh, and s- how the structure evolves from a combination of what the genetics is like and the real, the fact that this is, you're building a system in three dimensions.
- LFLex Fridman
In, in just days and months. Those early, early days of a human life are really interesting.
- JHJohn Hopfield
They are, and, uh, of course, there are times of immense cell multiplication. There are also times of the greatest cell death in the brain-
- LFLex Fridman
Hmm.
- JHJohn Hopfield
... is during infancy.
- LFLex Fridman
It's turnover. (laughs)
- JHJohn Hopfield
So what is, what, what, uh, what is not effective, what is not wired well enough to use at the moment, throw it out.
- LFLex Fridman
It's a mysterious process.
- 13:45 – 23:03
Physics view of the mind
- LFLex Fridman
From... let me ask, from what field do you think the biggest breakthroughs in understanding the mind will come in the next decades? Is it neuroscience, computer science, neurobiology, psychology, physics, maybe math, maybe literature? (laughs)
- JHJohn Hopfield
Well, of course, I see the world always through a lens of physics. I grew up in physics, and the way I pick problems is very characteristic of, of physics and of a, uh, intellectual background which is not psychology, which is not chemistry and so on and so on.
- LFLex Fridman
Uh, both of your parents were physicists?
- JHJohn Hopfield
Both of my parents were physicists, and, uh, the real thing I got out of that was a feeling that the world is an understandable place, and if you do enough experiments and think about what they mean and structure things that you can do the mathematics, uh, of the ex- relevant to the experiments, you ought to be able to understand how things work.
- LFLex Fridman
But that was, that was a few years ago. Did you change your mind at all through, uh, many decades of trying to understand the mind, of studying it in different kinds of ways? Not even the mind, just biological systems. Do you still have hope that physics... that you can understand? (laughs)
- JHJohn Hopfield
There's a question of, what do you mean by understand?
- LFLex Fridman
(laughs) Of course.
- JHJohn Hopfield
When I taught freshman physics, I used to say I wanted the students of physics to understand the subject, to understand Newton's laws. I didn't want them simply to memorize a set of examples for which they knew the, the equations to write down to generate the answers. I had this nebulous i- idea of understanding so that if you looked at a situation, you could say, "Oh, I expect the ball to make that trajectory" or, "I, I expect..." some intuitive notion of understanding. And I don't know how to express that very well, and I've never known how to express it well. And you run smack up against it when you choose these, look at these simple neural nets, feedforward neural nets, which do amazing things and yet you know contain nothing of the essence of what I would have felt was understanding.
- LFLex Fridman
The-
- JHJohn Hopfield
Understanding is more than just an enormous lookup table.
- LFLex Fridman
Let's linger on that, how sure you are of that. What if the table gets really big? So I mean, asked another way, these feedforward neural networks, do you think they'll ever understand?
- JHJohn Hopfield
I could answer that in two ways. I think if you look at real systems, feedback is an essential aspect of how these real systems compute. On the other hand, if I have a mathematical system with feedback, I know I can unlayer this and do it-
- LFLex Fridman
(laughs) Yeah.
- JHJohn Hopfield
... before th- but, uh, but I have an exponential expansion in the amount of stuff I have to build if I'm gonna solve the problem that way.
- LFLex Fridman
Right. So feedback is essential. So we can talk even about recurrent n- uh, neural net-
- JHJohn Hopfield
Yeah, yeah.
- LFLex Fridman
... so recurrence. But do you think all of the pieces are there to achieve understanding through these simple mechanisms? Like, back to our original question, what is the fundamental... is there a fundamental difference between artificial neural networks and biological, or is it just a bunch of surface stuff?
- JHJohn Hopfield
Suppose you ask a neurosurgeon, "When is somebody dead?"
- LFLex Fridman
(laughs) Yeah.
- JHJohn Hopfield
He'll probably go back to saying, "Well, I can look at the brain rhythms and tell you this is a brain which is never going to function again. This is one that's a... this other one is one which if we, uh, treat it well, uh, is still recoverable." And then just, th- do that by sending electrodes and looking at simple elec- electrical patterns which don't look in any detail at all at what individual neurons are doing. These rhythms are utterly absent from anything which goes on at Google. (laughs)
- LFLex Fridman
(inhales deeply) Yeah, yeah, but the rhythms-
- JHJohn Hopfield
But the rhythms what?
- LFLex Fridman
So, well, that's like comparing... Okay. I'll tell you. It's like you're comparing, um, the, the greatest classical musician in the world to a child first learning to play. The question I'm a- but they're still both playing the piano. Uh, I'm, I'm asking, is there... Will it ever go on at Google? Do you have a hope? Because you were one of the seminal (laughs) figures in both, launching both disciplines, both sides of the, of the river. Uh-
- JHJohn Hopfield
I think it's going to go on generation after generation-
- LFLex Fridman
(laughs)
- JHJohn Hopfield
... the way it has for what you might call the AI computer science community says, "Let's take the following. This is our model of neurobiology at the moment. Let's pretend it's g- good enough and do everything we can with it." And it does interesting things and after a while, it grinds into the sand. And you say, "Ah, something else is needed from neurobiology." And some other grand thing comes in and, and they usually go a lot further, but will go into the sand again. And I think it could be generations of this evolution, and I don't know how many of them, and each one is going to get you, uh, further into what a brain does, w- uh, and... In some sense, pass the Turing test longer and in more, more broad aspects. And how, how many of these are there going to have to be before you say, "I've made something, I've made a human-"
- LFLex Fridman
(laughs)
- JHJohn Hopfield
"... I don't know."
- LFLex Fridman
But your sense is, it might be a couple? It might be-
- JHJohn Hopfield
My sense is, it might be a couple more.
- 23:03 – 35:22
Hopfield networks and associative memory
- LFLex Fridman
So if we can just take a stroll to, uh, some of your work that is incredibly surprising, that it works as well as it does, that launched a lot of the, uh, recent work with neural networks. If we go to what are now called Hopfield networks, uh, can you tell me what is associative memory in the mind for the human side? Let's explore memory for a bit.
- JHJohn Hopfield
Okay, what you mean by associative memory is, ah, you have a m- memory of each of your friends. Your friend has all kinds of properties, from what they look like, and what their voice sounds like, to where they went to college, where you met them, uh, go on and on, what, what, what science papers they've written. And if I start talking about a 5'10", wiry cognitive scientist who's got a very bad back, it doesn't take very long for you to say, "Oh, he's talking about Geoff Hinton." I never men- I never mentioned the name or anything very particular... but somehow a f- a few facts that are associated with this, with a particular person, enables you to get a hold of the rest of the facts.
- LFLex Fridman
Yeah.
- JHJohn Hopfield
Or n- or not the rest of them, o- another subset of them. And it's this ability to link things together, link experiences together, which goes under the general name of associative memory. And a large part of intelligent behavior is actually just large associative memories at work, as far as I can see.
- LFLex Fridman
What do you think is the mechanism of how it works in the mind? Is it, is it a mystery to you still? Do you have inklings of how this essential thing for cognition works?
- JHJohn Hopfield
What I made, uh, 35 years ago was, of course, a, uh, crude physics model to show the ki- just to actually enable you to understand my old sense of understanding as a physicist because you could say, "Ah, I understand why this goes to stable states. It's like things going down- downhill."
- LFLex Fridman
Right.
- JHJohn Hopfield
And that gives you th- something with which to think in physical terms, rather than only in mathematical terms.
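The "things going downhill" picture John gives can be made concrete: in a Hopfield network, Hebbian weights define an energy function, and the recall dynamics only ever move downhill, so a noisy cue slides into the nearest stored memory. A minimal sketch (sizes, seed, and corruption level are illustrative):

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product rule: W[i,j] ~ sum over patterns of x_i * x_j."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)          # no self-connections
    return W

def energy(W, x):
    """The quantity that can only decrease under asynchronous updates."""
    return -0.5 * x @ W @ x

def recall(W, x, sweeps=10):
    """Asynchronous updates: align each unit with its local field."""
    x = x.copy()
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 100))   # three random +-1 memories
W = train(patterns)

noisy = patterns[0].copy()
noisy[:15] *= -1                                # corrupt 15 of the 100 bits
restored = recall(W, noisy)
print((restored == patterns[0]).all())          # the full memory comes back
```

The few corrupted bits play the role of the "5'10", wiry cognitive scientist" cue: a partial, noisy description is enough for the dynamics to settle into the complete stored state.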
- LFLex Fridman
So you've created these associative artificial neural networks?
- JHJohn Hopfield
Ye- ye- yeah, that's right. Uh, now, if you, if you look at what I did, I didn't at all describe a system which gracefully learns. I described a sy- system (laughs) in which you could d- understand how things, how learning could link things together, how very crudely it might learn. One of the things which intrigues me as I reinvestigate that system now, to some extent, is... Look, I see you, I'll see you if f- every second for the next f- hour or what have you.
- LFLex Fridman
(laughs)
- JHJohn Hopfield
Each gl- each look at you is a little bit different. I don't store all those second by second images. I don't store 3,000 images. I somehow compact this information.
- LFLex Fridman
Mm-hmm.
- JHJohn Hopfield
So I now have a, uh, view of you which can b- which I can u- use. It doesn't slavishly remember anything in particular, but it compacts the information into useful chunks, which are, uh, s- somehow it's these chunks, which are not just activities of neurons but bigger things than that, which are the real entities which are s- which are useful, which are useful to you.
- LFLex Fridman
Useful descr- uh, useful to you to describe, to compress this information coming at- coming at you.
- JHJohn Hopfield
Yeah, yeah, yeah. And you have to compress it in such a way that if, if I get, if the information comes in just like this again, I don't bother to, bother to rewrite it.
- LFLex Fridman
Right.
- JHJohn Hopfield
Or, or, or efforts to rewrite it simply do not yield anything because those things are already written. And that needs to be done not by looking it up, asking have I written it, where did I store it already. It's gotta be s- something which is much more automatic in the machine hardware.
- LFLex Fridman
Right. So, in the human mind, how complicated is that process do you think? So you, you've created... It feels weird to be sitting with John Hopfield calling him Hopfield networks, but, uh-
- JHJohn Hopfield
It is weird.
- LFLex Fridman
(laughs) Yeah. But nevertheless, that's what everyone calls them, so, uh, here we are. Uh, so that's a s- a simplification. That's what a physicist would do. You and Richard Feynman sat down and talked about associative memory. Now, if you, as a, uh... If you look at the mind where you can't quite simplify it so perfectly, do you, do you think that-
- JHJohn Hopfield
Well, let, l- l- let me back tr- track just a little bit.
- LFLex Fridman
Yeah.
- JHJohn Hopfield
Biology is th- is about dynamical systems. Computers are dynamical systems. Um, you can ask... If you want to me- to model biology, if you want to model neurobiology, what is the time scale? There's a dynamical system in which, u- of a f- fairly fast time scale in which you could say, "The synapses don't change much during this computation, so I'll think of the synapses as fixed and just do the dynamics of the activity." Or you can say, "The synapses are changing fast enough that I have to have the synaptic dynamics working at the same time as the system dynamics in order to ga- understand the biology." Most artif- If you look at the feedforward artificial neural nets, they're all done as learnings. Uh, first of all, I spend some time learning, not performing. Then I turn off learning, and I perform.
- LFLex Fridman
Right.
- JHJohn Hopfield
That's not biology. And so y- as I look more deeply at neurobiology, even at associative memory, I've got to face the fact that the dynamics of synapse change is going on all the time, and I can't just get by by saying, "I'll do the a- dynamics of activity with fixed synapses."
- LFLex Fridman
So the, the synaptic- the dynamics of the synapses is actually fundamental to the whole system?
- JHJohn Hopfield
Yeah, yeah.
- LFLex Fridman
Wow.
- 35:22 – 37:29
Boltzmann machines
- LFLex Fridman
So these kinds of networks actually led to a lot of the work that is going on now in neural networks, uh, artificial neural networks. So the follow-on work with, uh, restricted Boltzmann machines and deep belief nets, uh, followed on from these ideas of the Hopfield network. So what, what do you think about this continued progress of that work towards the now re- reinvigorated exploration of feedforward neural networks and recurrent neural networks and convolutional neural networks and, uh, kinds of networks that are helping solve image recognition, natural language processing, all that kind of stuff?
- JHJohn Hopfield
It's always intrigued me that one of the most long-lived of the learning systems is the Boltzmann machine which is intrinsically a feedback network.
- LFLex Fridman
Mm-hmm.
- JHJohn Hopfield
And, uh, with the brilliance of Hinton, Hinton and Sejnowski, to understand how to do learning in that. And it's still a useful way to understand learning and understand-... and the learning that you understand in that has something to do with the way that feedforward systems work. But it's not always exactly simple to express that intuition. But what it- it always amuses me is Hinton going- going back to the well yet again, on a form of the Boltzmann machine, because really, that which has feedback and interesting probabilities in it is a lovely encapsulation of something computational.
- LFLex Fridman
Something computational?
- JHJohn Hopfield
Something both computational and physical. Computational in the- it's very much related to feedforward networks. Physical in that Boltzmann machine learning is really learning a set of parameters for a physics Hamiltonian or energy function.
- LFLex Fridman
Mm-hmm.
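The "physics Hamiltonian or energy function" John refers to can be written out for a restricted Boltzmann machine: low-energy configurations are the probable ones, and Gibbs sampling bounces between the visible and hidden layers, which is the feedback John finds lovely. A minimal sketch (layer sizes, seed, and untrained weights are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_visible, n_hidden = 6, 4
W = rng.normal(0, 0.1, (n_visible, n_hidden))   # couplings of the Hamiltonian
b = np.zeros(n_visible)                          # visible biases
c = np.zeros(n_hidden)                           # hidden biases

def energy(v, h):
    """RBM energy E(v, h): lower energy means higher probability
    under the Boltzmann distribution p(v, h) ~ exp(-E(v, h))."""
    return -(v @ W @ h + b @ v + c @ h)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One feedback round trip: sample hidden given visible,
    then visible given hidden."""
    h = (rng.random(n_hidden) < sigmoid(v @ W + c)).astype(float)
    v = (rng.random(n_visible) < sigmoid(W @ h + b)).astype(float)
    return v, h

v = rng.integers(0, 2, n_visible).astype(float)
for _ in range(100):
    v, h = gibbs_step(v)
print(energy(v, h))   # energy of an approximate sample from the model
```

Boltzmann machine learning then adjusts W, b, and c so that the model's low-energy states match the training data, which is exactly "learning a set of parameters for a physics Hamiltonian."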
- 37:29 – 39:53
Learning
- LFLex Fridman
What do you think about learning in this whole domain? Do you, uh, do you think the aforementioned guy, uh, Geoff Hinton, all the work there with back propagation, all- all the kind of, um, learning that goes on in these networks, how do you, um... if you compare it to learning in the brain, for example, is there echoes of the same kind of power that back propagation reveals about these kinds of recurrent networks? Or is it something fundamentally different going on in the brain?
- JHJohn Hopfield
I don't think the brain is, um, as deep as the deepest networks go, the deepest, uh, computer science networks. And I do wonder whether part of that depth of the computer science networks is necessitated by the fact that the only learning that's easily done on a machine is- is feedforward. And so there is the question of to what extent is the biology, which has some feedforward and some feedback, been captured by something which is... it's got many more neurons, bu- much more depth to the neurons in it.
- LFLex Fridman
So part of you wonders if the feedback is actually more essential than the number of neurons or the depth, the- the dynamics of the feedback?
- JHJohn Hopfield
The dynamics of the feedback, look, if you don't have, if you don't have feedback, it's a little bit like building a big computer and having, running it through one clock cycle, and then you can't do anything 'til you put, 'til- 'til you reload something coming in. How- how do you use the fact that there are multiple clock cycles? How do I use the fact that you can close your eyes, stop listening to me, and think about a chess board for two minutes without any input whatsoever?
- LFLex Fridman
(laughs) Yeah. That memory thing, that's fundamentally a feedback kind of mechanism. You're going back to something. Yes. It's har- it's hard. It's hard to understand. It's hard to introspect,
- 39:53 – 48:45
Consciousness
- LFLex Fridman
let alone, uh, consciousness (laughs) -
- JHJohn Hopfield
Oh-
- LFLex Fridman
... 'cause that's all-
- JHJohn Hopfield
... let- let alone consciousness. Yes, yes.
- LFLex Fridman
'Cause that's tied up in there too. You can't just put that on anot- on another shelf. (laughs)
- JHJohn Hopfield
Every once in a while, I get interested in consciousness, and then I go and, I've done that for years, and- and ask one of my betters, as it were, uh, their view on consciousness. It's been interesting collecting them.
- LFLex Fridman
What are- (laughs)
- JHJohn Hopfield
Omega-
- LFLex Fridman
(laughs)
- JHJohn Hopfield
... uh-
- LFLex Fridman
What is consciousness? Let's- let's try to take a brief step into that room.
- JHJohn Hopfield
Well, I asked Marvin Minsky his view on consciousness, and Marvin said, "Consciousness is basically overrated. It may be an epiphenomenon. After all, all the things your brain does which are, uh, which are actually hard computations, you do non-consciously."
- LFLex Fridman
Hm.
- JHJohn Hopfield
And there's so much evidence that even the things, the simple things you do, you can make decisions, you can make committed decisions about them. The neurobiologist can say, "He's now committed. He's going to move the hand left," before you know it.
- LFLex Fridman
So his view that consciousness is not, that's just like little icing on the cake. The real cake is in the subconscious.
- JHJohn Hopfield
Yeah. Yeah. Subconscious, non-conscious.
- LFLex Fridman
Non-conscious-
- JHJohn Hopfield
You're a-
- LFLex Fridman
... is the better word. Sorry.
- JHJohn Hopfield
There's, it's only that Freud captured the other word.
- LFLex Fridman
Yeah. It's, that's a confusing word, subconscious.
- JHJohn Hopfield
Nic- Nicholas Chater wrote an interesting book, um, I think the title of it is The Mind is Flat.
- LFLex Fridman
Hm. (laughs)
- JHJohn Hopfield
And flat in-s- in a neural net sense might be, uh, flat as something which is a very broad neural net without really having any layers and depth, whereas a deep brain would be many layers and not so broad. In the same sense that, uh, if you push Minsky hard enough, he would probably have said, "Consciousness is your effort to explain to yourself that which you have already done."
- LFLex Fridman
(laughs) Yeah. It's the weaving of the narrative around the things that have already been computed for you. Wh-
- JHJohn Hopfield
That's right. And then- then so much of what we do for our memories of events, for example, if there's some traumatic event you witness, you will have a few facts about it correctly done-... if somebody asks you about it, you will weave a narrative which is actually much more rich in detail than that, based on some anchor points you have of correct things-
- LFLex Fridman
Yeah.
- JHJohn Hopfield
... and pulling together general knowledge on the other, but you will have a narrative. And once you generate that narrative, you're very likely to repeat that narrative and claim that all the things you have in it are actually the correct things. There was a marvelous example of that in the, um, Watergate/impeachment era of John Dean. John Dean, you're too young to know, had been the personal lawyer of Nixon. And so John Dean was involved in the cover-up and, uh, John Dean ultimately realized the only way to keep himself out of jail for a long time was actually to tell some of the truths about Nixon. And John Dean was a tremendous witness. He would remember these conversations in great detail and in very convincing detail. And long afterward, some of th- some of the tapes, the secret tapes as it were fr- from which these k- John was re- Dean was recalling these conversations, were published, and one found out that John Dean had a good but not exceptional memory. What he had was an ability to paint vividly, and in some sense accurately, the tone of what was going on.
- LFLex Fridman
By the way, that's a beautiful description of consciousness. Uh, (laughs) do you en- like where d- where do you stand in your, uh, today (laughs) , so m- uh, perhaps this changes day to day, but where do you stand on the importance of consciousness in our whole big mess of cognition? Is it just a little narrative maker or is it actually fundamental to intelligence?
- JHJohn Hopfield
That's a, that's a, a very hard one. When I asked Francis Crick about consciousness, he launched forward in a long monologue about-
- 48:45 – 53:14
Attractor networks and dynamical systems
- LFLex Fridman
Can we go there for a second? Uh, you've talked about attractor networks, uh, and just, maybe you could say what are attractor networks, and more broadly, what are interesting network dynamics that emerge in these or other complex systems?
- JHJohn Hopfield
You have to be willing to think in a huge number of dimensions, 'cause in a huge number of dimensions, the behavior of a system can be thought of as- as just the motion of a point over time in this huge number of dimensions.
- LFLex Fridman
(laughs) Right. Yeah.
- JHJohn Hopfield
And an attractor network is simply a network where there is a line and other lines converge on it in time. That's the essence of an attractor network. That's how you-
- LFLex Fridman
In a highly, um, highly dimensional space.
- JHJohn Hopfield
And the easiest way to get that is to do it in a high-dimensional space where some of the dimensions provide the dissipation. If I have a physical system, trajectories can't contract everywhere. They have to contract in some places and expand in others. There's a fundamental classical theorem of statistical mechanics, which goes under the name of Liouville's theorem, which says you can't contract everywhere. If you contract somewhere, you, uh, expand somewhere else. In interesting physical systems, you get driven systems where you have a small subsystem, which is the interesting part, and the rest does the contraction and expansion- the physicists will say it's entropy flow in this other part of the system. But basically, attractor networks are dynamics funneling down, so that if you start somewhere in the dynamical system, you will soon find yourself on a pretty well-determined pathway which goes somewhere. If you start somewhere else, you'll wind up on a different pathway. But you don't have just all possible things. You have some defined pathways which are allowed, and onto which you will converge. And that's the way you make a stable computer, and that's the way you make a stable behavior.
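The Liouville point can be written compactly in symbols (my notation for a standard textbook statement, not something from the conversation): for a Hamiltonian system, the phase-space velocity field is divergence-free, so any contraction along some directions must be balanced by expansion along others.

```latex
% For Hamiltonian dynamics \dot{q}_i = \partial H/\partial p_i,
% \dot{p}_i = -\partial H/\partial q_i, phase-space volume is conserved:
\nabla \cdot \dot{z}
  = \sum_i \left( \frac{\partial \dot{q}_i}{\partial q_i}
                + \frac{\partial \dot{p}_i}{\partial p_i} \right)
  = \sum_i \left( \frac{\partial^2 H}{\partial q_i \, \partial p_i}
                - \frac{\partial^2 H}{\partial p_i \, \partial q_i} \right)
  = 0.
```

A dissipative subsystem can therefore only contract its trajectories onto attractors by exporting the compensating expansion (entropy) to the rest of the driven system, which is the picture John sketches here.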
- LFLex Fridman
So, in general, looking at the physics of the emergent stability in these net- in networks, what are some interesting characteristics that, um, what are some interesting insights from studying the dynamics of such high dimensional systems?
- JHJohn Hopfield
Most driven dynamical systems- by driven, I mean they're coupled somehow to an energy source, and so their dynamics keeps going because of this coupling to the energy source. For most of them, it's very difficult to, uh, understand at all what the dynamical behavior is going to be.
- LFLex Fridman
Right. You have to run it. (laughs)
- JHJohn Hopfield
You have, you have to run it. There's a subset of systems which has what is accurately known to mathematicians as a Lyapunov function. In those systems, you can understand convergent dynamics by saying you're going downhill on something or other. And that's what I found, without ever knowing what Lyapunov functions were, in the simple model I made in the early '80s- an energy function, so you could understand how you could get this channeling onto the pathways without having to follow the dynamics in infinite detail. If you start rolling a ball off the top of a mountain, it's going to wind up at the bottom of a valley. You know that's true without actually watching the ball roll down.
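The energy-function idea John describes can be sketched in a few lines of code. This is a minimal illustrative toy (the network size, number of stored patterns, and noise level are my choices, not anything from the conversation): with symmetric weights and asynchronous sign updates, the energy never increases, so a noisy starting state rolls downhill toward a stored attractor.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of binary (+1/-1) neurons

# Store a few random patterns with the Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(3, N))
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)  # symmetric weights, no self-connections

def energy(s):
    # The Lyapunov ("energy") function: asynchronous updates never increase it.
    return -0.5 * s @ W @ s

def recall(s, sweeps=5):
    # Asynchronous updates: each neuron aligns with its local field in turn.
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Start from a corrupted copy of the first stored pattern.
noisy = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
noisy[flip] *= -1

restored = recall(noisy)
print(energy(noisy), "->", energy(restored))  # energy only ever goes downhill
print(int(restored @ patterns[0]))  # overlap with the stored pattern (N = perfect recall)
```

The point of the downhill guarantee is exactly the one in the conversation: you know where the trajectory ends up without following the dynamics in detail.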
- LFLex Fridman
There are certain properties of the system, uh, from which you can know that.
- JHJohn Hopfield
That, that, that's right. And not all systems behave that way. But-
- LFLex Fridman
Most don't probably.
- JHJohn Hopfield
Most don't. But it provides you with a metaphor for thinking about how systems which are stable and which do have these attractors behave, even if you can't find a Lyapunov function or an energy function behind them. It gives you a metaphor for thought.
- 53:14 – 57:11
How do we build intelligent systems?
- LFLex Fridman
Speaking of thought, if I had, uh, a glint in my eye with excitement and said, um, "You know, I'm really excited about this something called deep learning and neural networks, and I would like to create an intelligent system," and came to you as an advisor, uh, what would you recommend? Uh, is it a hopeless pursuit to use neural networks to achieve thought? Is it, uh, what kind of mechanisms should we explore? What kind of ideas should we explore?
- JHJohn Hopfield
Well, you look at the, um, simple networks, the one-pass networks- they don't support multiple hypotheses very well. As I have tried to work with very simple systems which do something which you might consider to be thinking, thought has to do with the ability to do mental exploration before you take a physical action.
- LFLex Fridman
Almost a, uh, like we were mentioning playing chess, visualizing, simulating inside your head different outcomes.
- JHJohn Hopfield
Yeah. Yeah. And, um, now you could do that in a feedforward network because you've precalculated all kinds of things. But I think the way neurobiology does it hasn't precalculated everything. It actually has parts of a dynamical system in which you're doing exploration in a way which is ...
- LFLex Fridman
There's a creative element, like there's an-
- JHJohn Hopfield
There's a creative element. And in a simple-minded neural net, you have a, um, constellation of instances from which you've learned.
- LFLex Fridman
Mm-hmm.
- JHJohn Hopfield
And if you are within that space- if a new question is a question within this space- you can actually rely on that system pretty well to come up with a good suggestion for what to do. If, on the other hand, the query comes from outside the space, you have no way of knowing how the system's going to behave. There are no limitations on what could happen. And so the artificial neural net world is always very much: I have a population of examples; the test set must be drawn from this equivalent population. If the test set has examples which are from a population which is completely different, there's no way that you could expect to get the answer right.
- LFLex Fridman
Yeah, out-
- JHJohn Hopfield
And so if-
- LFLex Fridman
... what they call h- o- outside the distribution.
- JHJohn Hopfield
That's right, that's right.
- LFLex Fridman
Uh-huh.
- JHJohn Hopfield
And so if you see a ball rolling across the street at dusk, if that wasn't in your- your training set, the idea that a child may be coming close behind it is not going to occur to the neural net.
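The exchange above, about queries from outside the training population, can be made concrete with a toy sketch (entirely my own construction, not from the interview): a flexible model fit on one range interpolates well inside that range and can behave arbitrarily badly far outside it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training population: x drawn from [0, 1], target is one period of a sine.
x_train = rng.uniform(0.0, 1.0, size=50)
y_train = np.sin(2 * np.pi * x_train)

# A degree-9 polynomial, standing in for any flexible one-pass model.
coeffs = np.polyfit(x_train, y_train, 9)
model = np.poly1d(coeffs)

inside = 0.5    # a query that resembles the training population
outside = 3.0   # a query far outside it
err_inside = abs(model(inside) - np.sin(2 * np.pi * inside))
err_outside = abs(model(outside) - np.sin(2 * np.pi * outside))

print(err_inside)   # small: interpolation within the population
print(err_outside)  # large: extrapolation, with no guarantee at all
```

The model has no way of "knowing" that the true function keeps oscillating outside the interval it was trained on, which is the rolling-ball-at-dusk failure in miniature.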
- LFLex Fridman
And it does to ours... there's something in the neurobiology that allows that?
- JHJohn Hopfield
Yeah. There's something in the way of what it means to be outside of the population of the training set. The population of the training set isn't just the set of examples- there's more to it than that. And it gets back to my own, uh, question of what is it to understand something?
- LFLex Fridman
(laughs) Yeah.
- 57:11 – 59:12
Deep thinking as the way to arrive at breakthroughs
- LFLex Fridman
You know, on a small tangent, you've talked about the value of thinking, of deductive reasoning in science, versus large data collection, sort of-
- JHJohn Hopfield
Yeah.
- LFLex Fridman
... thinking about the problem. It's- I suppose it's the physics side of you of, uh, going back to first principles and thinking, but what do you think is the value of deductive reasoning in- in the scientific process?
- JHJohn Hopfield
Well, look, there are obviously scientific questions in which the root of the answer comes through the analysis of one hell of a lot of data.
- LFLex Fridman
Right. Cosmology, that kind of stuff.
- JHJohn Hopfield
And that's never been the kind of problem in which I've had any particular insight. Though I must say, cosmology is- was one of those. But if you look at the actual things that Jim Peebles- one of this year's Nobel Prize in Physics winners, from the local physics department- has done, he's never crunched large data. Never, never, never. He's used the encapsulation of the work of others in this regard.
- LFLex Fridman
Right. But it ultimately boiled down to, uh, thinking through the problem. Like, what are the principles under which a particular phenomena operates?
- JHJohn Hopfield
Yeah, yeah. And look, physics is always gonna look for ways in which you can describe the system in a way which rises above the details. And to the die-hard, real biologist, biology works because of the details. In physics, we physicists want an explanation which is right in spite of the details, and there will be questions which we cannot answer as physicists because the answer cannot be found that way.
- 59:12 – 1:06:10
Brain-computer interfaces
- LFLex Fridman
There's, uh... I'm not sure if you're familiar with the entire field of brain-computer interfaces that's become more and more intensely researched and developed recently, especially with companies like Neuralink with Elon Musk.
- JHJohn Hopfield
Yeah, I know there's always been the interest both in, um, things like getting the eyes to be able to control things, or getting the thought patterns to be able to move what had been, uh, a connected limb which is now connected through a computer.
- LFLex Fridman
That's right. So, uh, in the case of Neuralink, they're doing a thousand-plus connections, where they're able to do two-way- activate and read spikes, neural spikes. Do you have hope for that kind of computer-brain interaction, in the near or maybe even far future, of being able to expand the ability of the mind, of cognition, or to understand the mind?
- JHJohn Hopfield
It's interesting watching things go, uh... when I first became interested in neurobiology-
- LFLex Fridman
(clears throat)
- JHJohn Hopfield
... most of the practitioners thought you would be able to understand neurobiology by techniques which allowed you to re- uh, record only one cell at a time.
- LFLex Fridman
One cell, yeah.
- JHJohn Hopfield
People like, um, David Hubel very strongly reflected that point of view. And that's been taken over, a couple of generations later, by a set of people who say, "Not until we can record from 10 to the 4, 10 to the 5 cells at a time will we actually be able to understand how the brain actually works." And in a general sense, I think that's right. You have to begin to be able to look for the collective modes, the collective operation of things. It doesn't rely on this action potential or that cell; it relies on the collective properties of this set of cells, connected with this kind of pattern, and so on. And you're not going to succeed in seeing what those collective activities are without recording many cells at once.
- LFLex Fridman
Uh, the question is how many at once. What's the threshold? And that's the- that's the-
- JHJohn Hopfield
Yeah. And, um, look, it's being pursued hard in the motor cortex. The motor cortex does something which is complex, and yet the problem you're trying to address is fairly simple. Now, neurobiology does it in ways that are different from the way an engineer would do it. An engineer would put in six highly accurate stepping motors for controlling a limb-
- LFLex Fridman
Right.
- JHJohn Hopfield
... rather than 100,000 muscle fibers, each of which has to be individually controlled. And so understanding how to do things in a way which is much more forgiving and much more neural, I think, would benefit the engineering world. The engineering world says, ah, touch- let's put in a pressure sensor or two, rather than an array of a gazillion pressure sensors, none of which are accurate, all of which are perpetually recalibrating themselves.
- LFLex Fridman
So you're saying your hope is, your advice for the engineers of the future is to embrace the large chaos of a messy, error-prone system like those of the biological systems? Like that's probably the way to solve some of these big challenges.
- JHJohn Hopfield
I think you'll be able to make better compu- computations/robotics that way than by trying to force things into a- into a robotics where joint motors are powerful and stepping motors are accurate.
- LFLex Fridman
But then the physicist in you will be lost forever in such systems, 'cause there are no simple fundamentals to explore in systems that are so large and messy.
- JHJohn Hopfield
Well, yes, you say that, and yet, um, there's a lot of physics in the Navier–Stokes equations, the equations of nonlinear hydrodynamics. Huge amount of physics in them. All of the physics of atoms and molecules has been lost, but it's been replaced by this other set of equations, which is just as true as the equations at the bottom.
- LFLex Fridman
Yes.
- JHJohn Hopfield
Now, those equations are going to be harder to find in neurobiology, but the physicist in me says there are probably some equations of that sort-
- LFLex Fridman
They're out there.
- JHJohn Hopfield
... which- they're out there, and if physics is going to contribute anything, it may contribute to trying to find out what those equations are and how to capture them from the biology.
- LFLex Fridman
Would you say that's one of the main open problems of our age, is to discover those equations?
- JHJohn Hopfield
Yeah. If you look at... There's molecules and there's psychological behavior.
- LFLex Fridman
Mm-hmm.
- JHJohn Hopfield
And these two are somehow related. There are layers of detail, there are layers of collectiveness, and to capture that, you need, in some vague way, several stages on the way up, to see how these things can actually be linked together.
- LFLex Fridman
So it seems in our universe there are a lot of elegant equations that can describe the fundamental way that things behave, which is a surprise. I mean, it's compressible into equations; it's simple and beautiful. But it's still an open question whether that link between molecules and the brain is equally compressible into elegant equations. But your sense- well, you're both a physicist and a dreamer. You have a sense that, uh-
- JHJohn Hopfield
Yeah, but I can only- I can only dream physics dreams.
- LFLex Fridman
(laughs) Yeah, physics dreams.
- JHJohn Hopfield
There was an interesting book called Einstein's Dreams, which alternates between chapters on his life and descriptions of the way time might have been but isn't- the linking between these being, uh, ideas that Einstein might have had to think about the essence of time as he was thinking about time.
- 1:06:10 – 1:08:12
Mortality
- LFLex Fridman
So speaking of the essence of time and your biology, you're one human, famous, impactful human, but just one human with a brain living the human condition, but you're ultimately mortal just like all of us. Has studying the mind as a mechanism changed the way you think about your own mortality?
- JHJohn Hopfield
It has, really, because... particularly as you get older and the body comes apart in various ways, I became much more aware of the fact that what is somebody is contained in the brain, and not in the body that you worry about burying. And it is to a certain extent true that, for people who write things down- equations, dreams, notepads, diaries- fractions of their thought do continue to live after they're dead and gone, after their body is dead and gone.
- LFLex Fridman
Yeah.
- JHJohn Hopfield
And, uh, there's a sea change in that going on in my lifetime. When my father died, except for the things which were actually written by him, as it were, very few facts about him will have ever been recorded. Whereas now, the number of facts that are recorded about each and every one of us is enormous- forever, as far as I can see, in the digital world. And so the whole question of what is death may be different for people a generation ago and a generation further ahead.
- LFLex Fridman
Maybe we have become immortal under some definitions.
- JHJohn Hopfield
Yeah. Yeah.
- 1:08:12 – 1:12:42
Meaning of life
- LFLex Fridman
Last easy question. What is the meaning of life? Looking back, you've studied the mind, us weird descendants of apes. What's the meaning of our existence on this little earth?
- JHJohn Hopfield
Oh, the word meaning is as slippery as the word understand. (laughs)
- LFLex Fridman
(laughs) Interconnected somehow, perhaps. Is there... It's slippery, but is there something that you, uh, despite being slippery, can hold long enough to express? (laughs)
- JHJohn Hopfield
I've been amazed at how hard it is to define the things in a living system, uh, in the sense that one hydrogen atom is pretty much like another, but one bacterium is not so much like another bacterium, even of the same nominal species. In fact, the whole notion of what is a species gets a little bit fuzzy. And do species exist in the absence of certain classes of environments? And pretty soon, one winds up with, um, a biology in which the whole thing is living. But whether there's actually any element of it which, by itself, would be said to be living becomes a little bit vague in my mind.
- LFLex Fridman
(laughs) So in a sense, the idea of meaning is something that's possessed by an individual, like a conscious creature. And you're saying that, uh, it's all interconnected in some kind of way- that there might not even be an individual. We're all kind of this complicated mess of biological systems at all different levels, where where the human starts and where the human ends is unclear.
- JHJohn Hopfield
Yeah, yeah. And in neurobiology, we're the, um... oh, you say the neocortex does the thinking, but there are lots of things which are done in the spinal cord. And so we say, what is the essence of thought? Is it just gonna be neocortex? Can't be. Can't be. (laughs)
- LFLex Fridman
Yeah, maybe to, uh, to understand and to build thought, you have to build the universe along with the- with the neocortex. It's all interlinked through the spinal cord. John, it's a huge honor talking to you today. Thank you so much for your time. I really appreciate it.
- JHJohn Hopfield
Well, thank you for the challenge of talking with you, and it'll be interesting to see whether you can winnow five mins- five minutes out of this-
- LFLex Fridman
Yeah.
- JHJohn Hopfield
... that makes coherent sense to any listener.
- LFLex Fridman
(laughs) Beautiful. Thanks for listening to this conversation with John Hopfield, and thank you to our presenting sponsor, Cash App. Download it, use code LEXPODCAST. You'll get $10, and $10 will go to FIRST, an organization that inspires and educates young minds to become science and technology innovators of tomorrow. If you enjoyed this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter @lexfridman. And now, let me leave you with some words of wisdom from John Hopfield in his article titled Now What? "Choosing problems is the primary determinant of what one accomplishes in science. I have generally had a relatively short attention span in science problems. Thus, I have always been on the lookout for more interesting questions, either as my present ones get worked out or as they get classified by me as intractable, given my particular talents." He then goes on to say, "What I have done in science relies entirely on experimental and theoretical studies by experts. I have a great respect for them, especially for those who are willing to attempt communication with someone who is not an expert in the field. I would only add that experts are good at answering questions. If you're brash enough, ask your own. Don't worry too much about how you found them." Thank you for listening, and hope to see you next time.
Episode duration: 1:12:48
Transcript of episode DKyzcbNr8WE