Lex Fridman Podcast

Jeff Hawkins: The Thousand Brains Theory of Intelligence | Lex Fridman Podcast #208

Jeff Hawkins is a neuroscientist and cofounder of Numenta, a neuroscience research company. Please support this podcast by checking out our sponsors:
- Codecademy: https://codecademy.com and use code LEX to get 15% off
- BiOptimizers: http://www.magbreakthrough.com/lex to get 10% off
- ExpressVPN: https://expressvpn.com/lexpod and use code LexPod to get 3 months free
- Eight Sleep: https://www.eightsleep.com/lex and use code LEX to get special savings
- Blinkist: https://blinkist.com/lex and use code LEX to get 25% off premium

EPISODE LINKS:
- A Thousand Brains (book): https://amzn.to/3AmxJt7
- Numenta's Twitter: https://twitter.com/Numenta
- Numenta's Website: https://numenta.com/

PODCAST INFO:
- Podcast website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
3:04 - Collective intelligence
9:46 - The origin of intelligence in the human brain
22:59 - How intelligent life evolved on Earth
33:58 - Why humans are special in the universe
37:16 - Neurons
41:30 - A Thousand Brains theory of intelligence
50:10 - How to build superintelligent AI
1:08:10 - Sam Harris and existential risk of AI
1:20:12 - Neuralink
1:27:02 - Will AI prevent the self-destruction of human civilization?
1:32:34 - Communicating human knowledge to alien civilizations
1:42:50 - Devil's advocate
1:47:45 - Human nature
1:56:07 - Hardware for AI
2:02:46 - Advice for young people

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Lex Fridman (host), Jeff Hawkins (guest)
Aug 8, 2021 · 2h 18m

EVERY SPOKEN WORD

  1. 0:00–3:04

    Introduction

    1. LF

      The following is a conversation with Jeff Hawkins, a neuroscientist seeking to understand the structure, function, and origin of intelligence in the human brain. He previously wrote the seminal book on the subject, titled On Intelligence, and recently a new book called A Thousand Brains, which presents a new theory of intelligence that Richard Dawkins, for example, has been raving about, calling the book, quote, "brilliant and exhilarating." I can't read those two words and not think of him saying it in his British accent. Quick mention of our sponsors: Codecademy, BiOptimizers, ExpressVPN, Eight Sleep, and Blinkist. Check them out in the description to support this podcast. As a side note, let me say that one small but powerful idea that Jeff Hawkins mentions in his new book is that if human civilization were to destroy itself, all our knowledge and all our creations would go with us. He proposes that we should think about how to save that knowledge in a way that long outlives us, whether that's on Earth, in orbit around Earth, or in deep space, and then to send messages that advertise this backup of human knowledge to other intelligent alien civilizations. The main message of this advertisement is not that we are here, but that we were once here. This little difference was somehow deeply humbling to me: that we may, with some non-zero likelihood, destroy ourselves, and that an alien civilization, thousands or millions of years from now, may come across this knowledge store, and they would only, with some low probability, even notice it, not to mention be able to interpret it. And the deeper question here for me is: what information in all of human knowledge is even essential? Does Wikipedia capture it, or not at all? This thought experiment forces me to wonder what things we've accomplished, and are hoping to still accomplish, that will outlive us. Is it things like complex buildings, bridges, cars, rockets?
      Is it ideas like science, physics, and mathematics? Is it music and art? Is it computers, computational systems, or even artificial intelligence systems? I personally can't imagine that aliens wouldn't already have all of these things; in fact, much more and much better. To me, the only unique thing we may have is consciousness itself, the actual subjective experience of suffering, of happiness, of hatred, of love. If we can record these experiences in the highest resolution directly from the human brain, such that aliens would be able to replay them, that is what we should store and send as a message: not Wikipedia, but the extremes of conscious experience, the most important of which, of course, is love. This is the Lex Fridman Podcast, and here is my conversation with Jeff Hawkins.

  2. 3:04–9:46

    Collective intelligence

    1. LF

      We previously talked over two years ago. Do you think there are still neurons in your brain that remember that conversation, that remember me and got excited? Like, there's a Lex neuron in your brain that just, like, finally has a purpose?

    2. JH

      I do remember our conversation, or I have some memories of it, and I've formed additional memories of you in the meantime. I wouldn't say there's a neuron or neurons in my brain that know you, but there are synapses in my brain that have formed that reflect my knowledge of you and the model I have of you and the world. Whether they're the exact same synapses that were formed two years ago is hard to say, because these things come and go all the time. One thing to note about brains is that when you think of things, you often erase the memory and rewrite it again. But yes, I have a memory of you, and it's instantiated in synapses. There's a simpler way to think about it, Lex. We have a model of the world in your head, and that model is continually being updated. I updated it this morning: you offered me this water and said it was from the refrigerator. I remember these things. The model includes where we live, the places we know, the words, the objects in the world. There's this monstrous model, and it's constantly being updated, and people are just part of that model. So are animals, so are other physical objects, so are events we've done. In my mind, there's no special place for the memories of humans. Obviously, I know a lot about my wife, and friends, and so on, but it's not like there's a special place where humans are over here. We model everything, and we model other people's behaviors too. So if I have a copy of your mind in my mind, it's just because I've learned how humans behave, and I've learned some things about you, and that's part of my world model.

    3. LF

      Well, I also mean the collective intelligence of the human species. I wonder if there's something fundamental to the brain that enables that: modeling other humans and their ideas.

    4. JH

      You're actually jumping into a lot of big topics. Collective intelligence is a separate topic that a lot of people like to talk about; we could talk about that. It's interesting: we're not just individuals, we live in a society, and so on. But from our research point of view, we study the neocortex. It's a sheet of neural tissue, about 75% of your brain, and it runs on this very repetitive algorithm. It's a very repetitive circuit. You can apply that algorithm to lots of different problems, but underneath it's all the same thing; we're just building this model. So from our point of view, we wouldn't look for special circuits buried someplace in your brain that might be related to understanding other humans. It's more like: how do we build a model of anything? How do we understand anything in the world? Humans are just another part of the things we understand.

    5. LF

      So there's nothing in the brain that knows the emergent phenomenon of collective intelligence?

    6. JH

      Well, I certainly know about that. I've heard the terms, I've read-

    7. LF

      No, but that's-

    8. JH

      Right? Right? Well, okay, right.

    9. LF

      ... as an idea.

    10. JH

      Well, I think we have language, which is sort of built into our brains, and that's a key part of collective intelligence. So there are some prior assumptions about the world we're gonna live in; when we're born, we're not just a blank slate. Did we evolve to take advantage of those situations? Yes. But again, we study only part of the brain, the neocortex. There are other parts of the brain that are very much involved in societal interactions and human emotions: how we interact with other people, when we support them, when we're greedy, and things like that.

    11. LF

      I mean, certainly the brain is a great place to study intelligence. I wonder if it's the fundamental atom of intelligence.

    12. JH

      Well, I would say it's absolutely an essential component. Even if you believe that collective intelligence is where it's all happening and that's what we need to study (which I don't believe, by the way; I think it's really important, but I don't think it's the whole thing), even then you have to understand how the brain works in doing that. It's more like: we are intelligent individuals, and together our intelligence is magnified; we can do things we couldn't do individually. But even as individuals, we're pretty damn smart, and we can model things and understand the world and interact with it. So, to me, if you're gonna start someplace, you need to start with the brain. Then you can ask: how do brains interact with each other? What is the nature of language? How do we share models? I've learned something about the world; how do I share it with you? That's really what communal intelligence is. I know something, you know something, we've had different experiences in the world. I've learned something about brains; maybe I can impart that to you. You've learned something about physics, and you can impart that to me. But it all comes down to even the epistemological question of what knowledge is and how you represent it in the brain. That's where it's going to reside, right? Or in our writings.

    13. LF

      It's obvious that human collaboration, human interaction, is how we build societies.

    14. JH

      Yeah.

    15. LF

      Right? But some of the things you talk about and work on, some of those elements of what makes up an intelligent entity, are there with a single person.

    16. JH

      Oh, absolutely. We can't deny that the brain is the core element here; I think it's obvious the brain is the core element in all theories of intelligence. It's where knowledge is represented and where knowledge is created. We interact, we share, we build upon each other's work, but without a brain you'd have nothing; there would be no intelligence without brains. So that's where we start. I got into this field because I was just curious about who I am. How do I think? What's going on in my head when I'm thinking? What does it mean to know something? I can ask what it means for me to know something independent of how I learned it, from you or from someone else or from society. What does it mean to know that I have a model of you in my head? What does it mean to know what this microphone does and how it works physically, even though I can't see it right now? How do I know that? How do the neurons do that, at the fundamental level of neurons and synapses? Those are really fascinating questions, and I'd be happy just to understand those, if I could.

    17. LF

      (laughs)

  3. 9:46–22:59

    The origin of intelligence in the human brain

    1. LF

      So in your new book, you talk about our brain, our mind, as being made up of many brains. The book is called A Thousand Brains: A New Theory of Intelligence. What is the key idea of this book?

    2. JH

      The book has three sections, and maybe three big ideas. The first section is all about what we've learned about the neocortex, and that's the thousand brains theory. To complete the picture, the second section is all about AI, and the third section is about the future of humanity. The thousand brains theory, if I had to summarize it into one big idea, is this: we think of the brain, the neocortex, as learning a model of the world, but what we learned is that there are actually tens of thousands of independent modeling systems going on. Each of what we call a column in the cortex (there are about 150,000 of them) is a complete modeling system. So it's a collective intelligence in your head, in some sense.

    3. LF

      Mm-hmm.

    4. JH

      The thousand brains theory asks: where do I have knowledge about this coffee cup? Where is the model of this cell phone? It's not in one place. It's in thousands of separate models that are complementary and that communicate with each other through voting. We feel like we're one person; that's our experience, and we can explain that. But in reality there are lots of these almost-little-brains, sophisticated modeling systems, about 150,000 of them in each human brain. That's a totally different way of thinking about how the neocortex is structured than we or anyone else thought even just five years ago.

    5. LF

      So you mentioned you started this journey by just looking in the mirror and trying to understand who you are. So if you have many brains, who are you then?

    6. JH

      It's interesting: we have a singular perception, right? We think, oh, I'm just here, I'm looking at you. But it's composed of all these things: there are sounds, there's vision, there's touch, all kinds of inputs, and yet we have this singular perception. What the thousand brains theory says is that we have models that are visual, models that are auditory, models that are tactile, and so on. But they vote.

    7. LF

      Mm-hmm.

    8. JH

      In the cortex, you can think of these columns as about the size of little grains of rice, 150,000 stacked next to each other. Each one is its own little modeling system, but they have these long-range connections that go between them, and we call those voting connections, or voting neurons. The different columns try to reach a consensus: what am I looking at? Each one has some ambiguity, but they come to a consensus: oh, that's a water bottle I'm looking at. We are only consciously able to perceive the voting.
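The voting scheme Hawkins describes here, many columns each holding an ambiguous guess and long-range connections driving them toward agreement, can be sketched in a few lines of Python. Everything below (the function name, the candidate sets, the single-step intersection) is an illustrative assumption, not Numenta's actual implementation:

```python
# Toy sketch of column voting: each cortical column holds an ambiguous set of
# candidate objects, and voting keeps only what every column considers possible.
# Illustrative only; real columns vote via long-range neural connections.

def vote(column_candidates):
    """Return the objects every column still considers possible."""
    return set.intersection(*column_candidates)

# Three columns (say, vision plus touch from two fingertips), each ambiguous
# on its own:
columns = [
    {"water bottle", "coffee cup", "gooseneck lamp"},  # tall-ish object
    {"water bottle", "coffee cup"},                    # smooth surface
    {"water bottle"},                                  # ridged plastic cap
]

print(vote(columns))  # {'water bottle'}
```

Real columns would settle on a consensus iteratively rather than in one intersection step, but the effect is the same: each column is ambiguous alone, while the population agrees on a single object.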

    9. LF

      Mm-hmm.

    10. JH

      We're not able to perceive anything that goes on under the hood. The voting is what we're aware of.

    11. LF

      The results of the voting.

    12. JH

      Yeah, the voting. You can imagine it this way: we were just talking about eye movements a moment ago. As I'm looking at something, my eyes are moving about three times a second.

    13. LF

      Mm-hmm.

    14. JH

      And with each movement, a completely new input is coming into the brain. It's not repetitive, it's not shifting it around, it's completely new. I'm totally unaware of it. I can't perceive it.

    15. LF

      Mm-hmm.

    16. JH

      But yet, if I looked at the neurons in your brain, they're going on and off, on and off, on and off. The voting neurons are not.

    17. LF

      Mm-hmm.

    18. JH

      The voting neurons are saying, "You know what? We all agree. Even though I'm looking at different parts of this, this is a water bottle right now." That's not changing, and it's at some position and pose relative to me. So I have this perception of the water bottle about two feet away from me, at a certain pose to me. That is not changing, and that's the only part I'm aware of. I can't be aware of the fact that the inputs from the eyes are moving and changing, and all this other stuff is happening. These long-range connections are the part we can be conscious of.

    19. LF

      Mm-hmm.

    20. JH

      The individual activity in each column doesn't go anywhere else. It doesn't get shared; there's no way to extract it and talk about it, or even remember it to say, "Oh yes, I can recall that." But these long-range connections are the things that are accessible to language and to, say, the hippocampus, our short-term memory systems, and so on. We're not aware of 95%, maybe even 98%, of what's going on in the brain. We're only aware of this somewhat stable voting outcome of all these things going on under the hood.

    21. LF

      So, what would you say is the basic element in the thousand brains theory of intelligence? What's the atom of intelligence, when you think about it? Is it the individual brains? And then, what is a brain?

    22. JH

      Well, can we just talk about what intelligence is first?

    23. LF

      Yeah.

    24. JH

      And then we can talk about what the elements are. In my book, intelligence is the ability to learn a model of the world: to build, internal to your head, a model that represents the structure of everything.

    25. LF

      Mm-hmm.

    26. JH

      You know, to know that this is a table, and that's a coffee cup, and this is a gooseneck lamp, I have to have a model of them in my head. I don't just look at them and go, "What is that?" I already have internal representations of these things in my head, and I had to learn them. I wasn't born with any of that knowledge. We have some lights in the room here; that's not part of my evolutionary heritage, right? It's not in my genes.

    27. LF

      Mm-hmm.

    28. JH

      So we have this incredible model, and the model includes not only what things look like and feel like, but where they are relative to each other and how they behave. I've never picked up this water bottle before, but I know that if I put my hand on that blue thing and turn it, it'll probably make a funny little sound as the little plastic things detach.

    29. LF

      Mm-hmm.

    30. JH

      And then it'll rotate, and it'll rotate a certain way, and it'll come off. How do I know that? Because I have this model in my head. So the essence of intelligence is our ability to learn a model, and the more sophisticated our model is, the smarter we are.

  4. 22:59–33:58

    How intelligent life evolved on Earth

    2. LF

      So in this whole process, where does intelligence originate, would you say? (laughs) If we look at things that are much less intelligent than humans, and you start to build up a human through the process of evolution, where is this magic thing that has a prediction model, a model that's able to predict-

    3. JH

      Yeah.

    4. LF

      ... that starts to look a lot more like intelligence?

    5. JH

      Yeah.

    6. LF

      Is there a place where... Richard Dawkins wrote an introduction to your book, an excellent introduction that puts a lot of things into context, and it's funny looking at the parallels between your book and Darwin's Origin of Species. Darwin wrote about the origin of species, so what is the origin of intelligence?

    7. JH

      Yeah. Well, we have a theory about it, and it's just that, a theory. The theory goes as follows. As soon as living things started to move (they're not just floating in the sea, not just a plant grounded someplace), there was an advantage to moving intelligently, to moving in certain ways. There are some very simple things you can do: bacteria or single-celled organisms can move toward a gradient of food, or something like that. But an animal that knows where it is and where it's been, and how to get back to that place, or an animal that can say, "There was a source of food someplace; how do I get to it?" or "There was a danger; how do I get away?" or "There was a mate; how do I get to them?", had a big evolutionary advantage. So early on, there was pressure to start understanding your environment: where am I? Where have I been? What happened in those different places? We still have this neural mechanism in our brains. In mammals, it's in the hippocampus and entorhinal cortex. These are older parts of the brain, and they're very well studied. We build a map of our environment, so neurons in these parts of the brain know where I am in this room, and where the door is, and things like that.

    8. LF

      So a lot of other mammals have this (overlapping dialogue)

    9. JH

      All mammals have this, right?

    10. LF

      Okay.

    11. JH

      And almost any animal that knows where it is and can get around must have some mapping system, some way of saying, "I've learned a map of my environment." I have hummingbirds in my backyard, and they go to the same places all the time. They must know where they are; they're not just randomly flying around. They know particular flowers they come back to. So we all have this, and it turns out it's very tricky to get neurons to do this-

    12. LF

      Mm-hmm.

    13. JH

      ... to build a map of an environment. There are these famous studies, still a very active area, about place cells and grid cells and other types of cells in the older parts of the brain, and how they build these maps of the world. It's really clever, and it's obviously been under a lot of evolutionary pressure over a long period of time to get good at this.

    14. LF

      Mm-hmm.

    15. JH

      So animals know where they are. What we think happened (and there's a lot of evidence to suggest this) is that the mechanism for learning a map of a space was repackaged. The same types of neurons were repackaged into a more compact form, and that became the cortical column. It was in some sense genericized, if that's a word. It was turned from a very specific thing about learning maps of environments-

    16. LF

      Mm-hmm.

    17. JH

      ... into learning maps of anything, learning a model of anything: not just your space, but coffee cups and so on. It got repackaged into a more compact, more universal version, and then replicated.

    18. LF

      Mm-hmm.

    19. JH

      So the reason we're so flexible is that we have a very generic version of this mapping algorithm, and we have 150,000 copies of it.
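The "repackaged mapping algorithm" idea, one generic location-to-feature learner replicated over many reference frames, can be sketched minimally. The class below is a hypothetical illustration: real columns use grid-cell-like location codes, not Python dictionaries.

```python
# Minimal sketch of one generic "map learner" applied to two very different
# domains: a physical environment (hippocampus-style) and an object
# (cortical-column-style). Hypothetical illustration, not Numenta's code.

class MapLearner:
    """Learns which feature is found at each location in a reference frame."""

    def __init__(self):
        self.features_at = {}

    def learn(self, location, feature):
        self.features_at[location] = feature

    def predict(self, location):
        """Predict the feature at a location, or None if never visited."""
        return self.features_at.get(location)

# The same algorithm learns a map of a room...
room = MapLearner()
room.learn((0, 0), "door")
room.learn((3, 2), "table")

# ...or a map of a coffee cup, just with a different reference frame.
cup = MapLearner()
cup.learn("rim", "smooth ceramic")
cup.learn("side", "handle")

print(room.predict((0, 0)))  # door
print(cup.predict("side"))   # handle
```

The point of the sketch is only that nothing in the learner is specific to physical space: swap the reference frame and the same algorithm models an object instead of an environment.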

    20. LF

      Sounds a lot like the progress of deep learning. (laughs)

    21. JH

      How so?

    22. LF

      (laughs) So you take neural networks that seem to work well for a specific task, compress them, multiply them by a lot, and then you just stack them on top of-

    23. JH

      Yeah.

    24. LF

      It's like the story of transformers in, uh- (laughs)

    25. JH

      Yeah. But interesting-

    26. LF

      ... natural language processing (overlapping dialogue)

    27. JH

      ... deep learning networks end up replicating an element, but you still need the entire network to do anything.

    28. LF

      Right.

    29. JH

      Here, what's going on is that each individual element is a complete learning system.

    30. LF

      Mm-hmm.

  5. 33:58–37:16

    Why humans are special in the universe

    1. LF

      From the origin of the universe, like, (sighs) these pockets of complexity that form-

    2. JH

      Yeah.

    3. LF

      ... living organisms. I wonder if we're just... If you look at humans, we feel like we're at the top, but I wonder if every living pocket of complexity probably thinks they're, pardon the French, the shit.

    4. JH

      Yeah. Well-

    5. LF

      They're at the top of the pyramid.

    6. JH

      Well, if- i- if they're thinking, um...

    7. LF

      Well, then what is thinking? What the...

    8. JH

      Oh, we can-

    9. LF

      In a sense, the, the whole point is, in their sense of the world, they-

    10. JH

      Yeah.

    11. LF

      ... their sense is that they're at the top of it.

    12. JH

      I think, uh-

    13. LF

      Like, what does a turtle think?

    14. JH

      But you're bringing up the problems of complexity and complexity theory. It's a huge, interesting problem in science, and I think we've made surprisingly little progress in understanding complex systems-

    15. LF

      Right.

    16. JH

      ... in general. The Santa Fe Institute was founded to study this, and even the scientists there will say, "It's really hard; we haven't been able to figure it out exactly." That science hasn't really congealed yet. We're still trying to figure out the basic elements of it: where does complexity come from, what is it, and how do you define it? Whether it's DNA creating bodies with phenotypes, or individuals creating societies, or ants, or markets and so on, it's a very complex thing. I'm not a complexity theorist, right? I think it's interesting to ask: the brain itself is a complex system, so can we understand that? I think we've made a lot of progress understanding how the brain works. But I haven't broadened it out to, "Oh, well, where are we on the complexity spectrum?" (laughs) It's like-

    17. LF

      (laughs)

    18. JH

      ... um, it's a great question.

    19. LF

      I'd prefer that answer to be, "We're not special." It seems like, if we're honest, most likely we're not special. So if there is a spectrum-

    20. JH

      Yeah.

    21. LF

      ... we're probably not in some kind of significant place in that spectrum.

    22. JH

      I think there's one way in which we could say we are special, and again, only here on Earth; I'm not making a claim beyond that. If we think about knowledge, what we know, human brains are clearly the only brains that have certain types of knowledge. We're the only brains on this Earth that understand what the Earth is, how old it is, and what the universe is as a whole. We're the only organisms that understand DNA and the origins of species. No other species on this planet has that knowledge. One of the endeavors of humanity, I like to think, is to understand the universe as much as we can, and I think our species is further along in that, undeniably. Whether our theories are right or wrong, we can debate, but at least we have theories. We know what the sun is and how fusion works, we know what black holes are, we know the general theory of relativity, and no other animal has any of this knowledge. (laughs) It's in that sense that we're special. Are we special in terms of the hierarchy of complexity in the universe? Probably not.

  6. 37:16–41:30

    Neurons

    1. LF

      Can we look at a neuron? You say that prediction happens in the neuron. What does that mean? The neuron is traditionally seen as the basic element of the brain.

    2. JH

      So, I mentioned this earlier: prediction was our research agenda.

    3. LF

      Yeah.

    4. JH

      We said, "Okay-"

    5. LF

      (laughs)

    6. JH

      "... um, how does the brain make a prediction?" Like, like I, I'm about to grab this water bottle and my brain is predicting what I'm gonna feel on, on, on all my parts of my fingers. If I felt something really odd on any part here, I'd notice it.

    7. LF

      Mm-hmm.

    8. JH

      So my brain is predicting what it's going to feel as I grab this thing. How does that manifest itself in neural tissue? We've got brains made of neurons, and there are chemicals and spikes and connections. Where is the prediction going on?

    9. LF

      Mm-hmm.

    10. JH

      One argument could be that when I'm predicting something, a neuron must be firing in advance: this neuron represents what you're going to feel, and it's firing, sending a spike. Certainly that happens to some extent. But our predictions are so ubiquitous, and we're making so many of them that we're totally unaware of (the vast majority of them; you have no idea you're doing this), that we had to ask: how could this be? Where are these happening? I won't walk you through the whole story unless you insist on it, but we came to the realization that most of your predictions are occurring inside individual neurons, especially the most common neuron, the pyramidal cell. There's a property of neurons: most people know that a neuron is a cell, it has this spike called an action potential, and it sends information. But we now know that there are spikes internal to the neuron, called dendritic spikes. They travel along the branches of the neuron and don't leave it; they're internal only. There are far more dendritic spikes than there are action potentials.

    11. LF

      Mm-hmm.

    12. JH

      Far more. They're happening all the time. And what we came to understand is that those dendritic spikes are actually a form of prediction. They're telling the neuron, "I expect that I might become active shortly." The internal spike is a way of saying, "You might be generating an external spike soon; I predict you're going to become active." We wrote a paper in 2016 which explained how this manifests itself in neural tissue and how it all works together, and there's a lot of evidence supporting it. So that's where we think most of these predictions are: internal to the neuron. That's why you can't perceive them, but-

    13. LF

      Well, from understanding the prediction mechanism of a single neuron, do you think there are deep insights to be gained about the prediction capabilities of the mini brains within the bigger brain, and the brain-

    14. JH

      Oh, yeah. Yeah, yeah. So having a prediction inside an individual neuron is not that useful on its own, you know? So what, right? (laughs)

    15. LF

      (laughs)

    16. JH

      The way it manifests itself in neural tissue is that a neuron emits these spikes, a very singular type of event, and if a neuron is predicting that it's gonna be active, it emits its spike a little bit sooner, just a few milliseconds sooner than it would have otherwise. I give the analogy in the book: it's like a sprinter on the starting blocks in a race.

    17. LF

      Mm-hmm.

    18. JH

      And if someone says, "Get ready, set," you get up and you're ready to go, and then when the race starts, you get a little bit earlier start. That "ready, set" is like the prediction, and the neuron's ready to go quicker. And what happens is, when you have a whole bunch of neurons together and they're all getting these inputs, the ones that are in the predictive state, the ones that are anticipating becoming active, if they do become active, they fire sooner, they disable everything else, and it leads to different representations in the brain. So it's not isolated to the neuron. The prediction occurs within the neuron, but the network behavior changes. So under different predictions, the same input gets different representations. What I predict is gonna be different under different contexts, and what my input will be is different under different contexts. So this is a key to the whole theory, how this works.
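The mechanism Hawkins describes here, neurons in a predictive state firing a few milliseconds sooner and inhibiting their peers, can be sketched as a toy winner-take-all network. This is an illustrative simplification, not Numenta's actual HTM code; all names and latencies are invented:

```python
class Neuron:
    """Toy pyramidal cell: a dendritic (internal) spike puts it in a
    'predictive state', so it fires slightly sooner than its peers."""
    def __init__(self, name):
        self.name = name
        self.predictive = False  # set by a dendritic spike

    def fire_latency(self):
        # Predicted neurons respond a few "milliseconds" earlier.
        return 5 if self.predictive else 10

def activate(column, feedforward_input):
    """Among the neurons receiving input, the earliest firer wins and
    inhibits the rest, so context changes the active representation."""
    candidates = [column[i] for i in feedforward_input]
    return min(candidates, key=lambda n: n.fire_latency()).name

column = [Neuron(f"n{i}") for i in range(4)]

# Same feedforward input, two different contexts (predictions):
column[2].predictive = True
print(activate(column, [0, 2]))  # n2 wins: it was predicted

column[2].predictive = False
column[0].predictive = True
print(activate(column, [0, 2]))  # n0 wins under the other context
```

The point of the toy model is only that the prediction lives inside the neuron, but its effect shows up at the network level: the same input `[0, 2]` produces different winners under different predictive contexts.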

    19. LF

      So for the theory of the thousand brains,

  7. 41:30 - 50:10

    A Thousand Brains theory of intelligence

    1. LF

      if you were to count the number of brains, how would you do it?

    2. JH

      The thousand brains theory says that basically every cortical column in your neocortex is a complete modeling system.

    3. LF

      Okay.

    4. JH

      And when I ask where I have a model of something like a coffee cup, it's not in one of those columns, it's in thousands of them. There are thousands of models of coffee cups. That's what the thousand brains theory says.

    5. LF

      Then there's a voting mechanism.

    6. JH

      Then there's a voting mechanism, which is the thing you're conscious of, and which leads to your singular perception.

    7. LF

      Mm-hmm.

    8. JH

      That's why you perceive something. So that's the thousand brains theory. The details of how we got to that theory are complicated. It wasn't that we just thought of it one day. One of those details was where we had to ask, how does a model make predictions? And that's what we were just talking about, these predictive neurons.
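The voting idea can be caricatured in a few lines: each column keeps its own set of candidate objects consistent with what it is sensing, and the consensus across columns is the singular perception. A toy sketch only; the columns and object names are invented:

```python
# Each cortical column holds the objects consistent with its own input.
column_votes = [
    {"coffee cup", "vase", "bowl"},    # column sensing the rim
    {"coffee cup", "teapot"},          # column sensing the handle
    {"coffee cup", "bowl", "plate"},   # column sensing the bottom
]

def vote(votes):
    """Columns 'vote' by converging on the candidates they all share."""
    consensus = set.intersection(*votes)
    # A singular perception emerges when the columns agree on one object.
    return consensus.pop() if len(consensus) == 1 else consensus

print(vote(column_votes))  # coffee cup
```

Real columns vote through long-range connections rather than set intersection, but the sketch captures the logic: many partial models, one agreed-upon percept.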

    9. LF

      Mm-hmm.

    10. JH

      That's part of this theory. It's like saying, oh, it's a detail, but it was like a crack in the door. It's like, how are we gonna figure out how these neurons do this, you know? What is going on here? So we looked at prediction because, well, we know that's ubiquitous. We know that every part of the cortex is making predictions. Therefore, whatever the predictive system is, it's gonna be everywhere.

    11. LF

      Mm-hmm.

    12. JH

      We know there's a gazillion predictions happening at once. So let's see if we can start teasing it apart, you know, asking questions about how neurons could be making these predictions. And that sort of built up to what we have now, this Thousand Brains theory. I can state it simply, but we just didn't think of it. We had to get there step by step. It took years to get there.

    13. LF

      And where do reference frames fit in? So, yeah.

    14. JH

      Okay. So again, a reference frame. I mentioned earlier the model of a house, and I said, "If you're gonna build a model of a house in a computer, you have a reference frame." And you can think of a reference frame like Cartesian coordinates, like X, Y, and Z axes. So I could say, "I'm gonna design a house. The front door is at this location X, Y, Z, and the roof is at this location X, Y, Z," and so on. That's a type of reference frame. It turns out you need one to make a prediction, and I walk you through the thought experiment in the book where I was predicting what my finger was gonna feel when I touched a coffee cup. It was a ceramic coffee cup, but this one will do. And what I realized is that to predict what my finger was gonna feel, like, it's gonna feel different than this, which will feel different if I touch the hole or this thing on the bottom, to make that prediction, the cortex needs to know where the finger is, the tip of the finger-

    15. LF

      Mm-hmm.

    16. JH

      ... relative to the coffee cup, and exactly relative to the coffee cup. And to do that, it has to have a reference frame for the coffee cup. It has to have a way of representing the location of my finger relative to the coffee cup. And then we realized, of course, every part of your skin has to have a reference frame relative to the things it touches, and then we did the same thing with vision. So the idea is that a reference frame is necessary to make a prediction when you're touching something, or when you're seeing something and moving your eyes or moving your fingers. It's just a requirement-

    17. LF

      Mm-hmm.

    18. JH

      ... to know what to predict. If I have a structure and I'm gonna make a prediction, I have to know where on it I'm looking or touching. So then we said, "Well, how do neurons make reference frames?" It's not obvious. X, Y, Z coordinates don't exist in the brain; it's just not the way it works. So that's when we looked at the older part of the brain, the hippocampus and the entorhinal cortex, where we knew there's a reference frame for a room, or a reference frame for an environment. Remember I talked earlier about how you could make a map of this room.

    19. LF

      Mm-hmm.

    20. JH

      So we said, "Oh, they are implementing reference frames there." So we knew that reference frames needed to exist in every cortical column, and that was a deductive thing. We just deduced it: it has to exist.
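The role Hawkins gives reference frames can be illustrated directly: store an object's features at locations in the object's own coordinate system, and predicting a sensation becomes a lookup given the finger's location relative to the object. A minimal sketch; the coordinates and feature names are invented, not from the episode:

```python
# A toy "reference frame" for a coffee cup: features stored at
# locations in the cup's own coordinate system (x, y, z in cm).
cup_model = {
    (0.0, 0.0, 0.0):  "smooth ceramic side",
    (4.0, 0.0, 5.0):  "curved handle",
    (0.0, 0.0, 10.0): "rim (lip edge)",
    (0.0, 0.0, -0.1): "flat bottom",
}

def predict_sensation(model, finger_location):
    """To predict what the finger will feel, the cortex must know the
    finger's location *relative to the object*. Given that, the
    prediction is a lookup in the object's reference frame."""
    return model.get(finger_location, "unknown: update the model")

print(predict_sensation(cup_model, (4.0, 0.0, 5.0)))   # curved handle
print(predict_sensation(cup_model, (0.0, 0.0, 10.0)))  # rim (lip edge)
```

The brain does not use Cartesian lookup tables (Hawkins points to grid- and place-cell-like mechanisms instead), but the sketch shows why some location representation is a hard requirement for prediction.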

    21. LF

      So (laughs) you take the old mammalian ability to know where you are in a particular space and you start applying that to higher and higher levels, yeah.

    22. JH

      Yeah. First you apply it to physical things, like where your finger is. So here's the way to think about it. The old part of the brain says, "Where's my body in this room?"

    23. LF

      Yeah.

    24. JH

      The new part of the brain says, "Where's my finger relative to this object?"

    25. LF

      Yeah.

    26. JH

      Where is a section of my retina relative to this object? Like, where is it? I'm looking at one little corner; where is that relative to this patch of my retina?

    27. LF

      Yeah.

    28. JH

      Um, and then we take the same thing and apply it to concepts, mathematics, physics, you know, humanity, whatever you want to think about.

    29. LF

      And eventually you're pondering your own mortality.

    30. JH

      Well, whatever.

  8. 50:10 - 1:08:10

    How to build superintelligent AI

    1. LF

      reference. Does it give you any inclination or hope about how difficult it is to engineer common sense reasoning? So how complicated is this whole process? Looking at the brain, is this a marvel of engineering, or is it pretty dumb stuff stacked on top of each other over (laughs) -

    2. JH

      It can be both. Can't-

    3. LF

      ... a pretty extensive copy?

    4. JH

      It can't be both? It can't be both, right?

    5. LF

      I don't know if it can be both, because if it's an incredible engineering job, that means evolution did a lot of work. It, uh...

    6. JH

      Yeah, but then it just copied that, right? So as I said earlier, figuring out how to model something, like a space, is really hard, and evolution had to go through a lot of tricks. And these cells I was talking about, these grid cells and place cells, they're really complicated. This is not simple stuff; this neural tissue works on these really unexpected, weird mechanisms. But it did it, it figured it out. But now you can just make lots of copies of it, right? (laughs)

    7. LF

      But then finding... yeah, so it's a very interesting idea that it's a lot of copies of a basic mini brain, but the question is how difficult it is to find that mini brain that you can copy and paste effectively, like it...

    8. JH

      Well, today we know enough to build this. I'm sitting here saying, you know, I know the steps we have to go through. There are still some engineering problems to solve, but we know enough. And this is not like, "Oh, this is an interesting idea. We have to go think about it for another few decades." No, we actually understand it pretty well in the details. Not all the details, but most of them. So it's complicated, but it is an engineering problem. And in my company, we are working on that. We have basically laid out a roadmap of how to do this. It's not gonna take decades; it's a matter of a few years, optimistically, but I think that's possible. You know, complex things, if you understand them, you can build them.

    9. LF

      So in which domain do you think it's best to build them? Are we talking about robotics, like entities that operate in the physical world and are able to interact with that world? Are we talking about entities that operate in the digital world? Or are we talking about something more specific, like what's done in the machine learning community, where you look at natural language or computer vision? Where do you think is easiest to...?

    10. JH

      It's the first two more than the third one, I would say. Again, let's just use computers as an analogy. The pioneers in computing, people like John von Neumann and Alan Turing, created this thing we now call the universal Turing machine, which is a computer, right? Did they know how it was gonna be applied, where it was gonna be used? Could they envision any of the future? No. They just said, "This is a really interesting computational idea about algorithms and how you can implement them in a machine." And we're doing something similar to that today. We are building this sort of universal learning principle that can be applied to many, many different things.

    11. LF

      But the, the robotics piece of that-

    12. JH

      Okay.

    13. LF

      ... th- the interactive elements.

    14. JH

      Okay, all right, let's be specific. You can think of this cortical column as what we call a sensory motor learning system. It has the idea that there's a sensor and that it's moving. That sensor can be physical: it could be like my finger moving in the world, or like my eye, physically moving. It can also be virtual. So an example would be, I could have a system that lives on the internet, that actually samples information on the internet and moves by following links. That's-

    15. LF

      Mm-hmm.

    16. JH

      ... that's a sensory motor system, so-

    17. LF

      (laughs) Something that echoes the process of a finger moving along a coffee cup.

    18. JH

      But in a very, very loose sense. It's, it's, like, again, learning is inherently about discovering the structure in the world, and to discover the structure in the world, you have to move through the world.

    19. LF

      Mm-hmm.

    20. JH

      Even if it's a virtual world, even if it's a conceptual world, you have to move through it. It doesn't exist in one place. It has some structure to it.
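Hawkins' example of a virtual sensory-motor system, a sensor that samples a page and "moves" by following links, can be sketched as a walk over a toy link graph. The pages and contents here are invented for illustration:

```python
# A toy "web": each page has content (what the sensor samples) and
# links (how the system moves). Entirely hypothetical data.
toy_web = {
    "home":  {"content": "welcome",     "links": ["about", "blog"]},
    "about": {"content": "who we are",  "links": ["home"]},
    "blog":  {"content": "latest post", "links": ["home", "about"]},
}

def explore(web, start, steps):
    """Sense the current page, then 'move' along its first link.
    Sensing + movement is what makes this a sensory-motor system."""
    page, trail = start, []
    for _ in range(steps):
        trail.append(web[page]["content"])  # sensing
        page = web[page]["links"][0]        # moving
    return trail

print(explore(toy_web, "home", 3))
# ['welcome', 'who we are', 'welcome']
```

A real learner would also build a model of the structure it discovers; the sketch only shows the sense-then-move loop that Hawkins says such a system shares with a finger on a coffee cup.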

    21. LF

      Mm-hmm.

    22. JH

      So here are a couple of predictions related to what you're talking about. In humans, the same algorithm does robotics, right? It moves my arms, my eyes, my body. And so in the future, to me, robotics and AI will merge. They're not gonna be separate fields, because the algorithms for really controlling robots are gonna be the same algorithms we have in our brain, these sensory motor algorithms. Today we're not there, but I think that's gonna happen. And then... but not all AI systems will be robotics. You can have systems that have very different types of embodiments. Some will have physical movements, some will not have physical movements. It's a very generic learning system. Again, it's like computers: the Turing machine doesn't say how it's supposed to be implemented, doesn't tell you how big it is, doesn't tell you what you can apply it to, but it's a computational principle. The cortical column equivalent is a computational principle about learning.

    23. LF

      Mm-hmm.

    24. JH

      It's about how you learn, and it can be applied to a gazillion things. This is what I think: the impact of AI is gonna be as large as, if not larger than, the impact computing has had in the last century, by far, because it's getting at a fundamental thing. It's not a vision system or a hearing system; it is a learning system. It's a fundamental principle of how you learn the structure in the world, how you gain knowledge and become intelligent. And that's what the Thousand Brains theory says is going on. We have a particular implementation in our head, but it doesn't have to be like that at all.

    25. LF

      Do you think there's going to be some kind of impact... Okay, let me ask it another way. What do, uh, increasingly intelligent AI systems do with us humans-

    26. JH

      (laughs)

    27. LF

      ... in the following way? Like, how hard is the human-in-the-loop problem? How hard is it to interact, the finger-on-the-coffee-cup equivalent of having a conversation with a human being? So, how hard is it to fit into our little human world?

    28. JH

      Uh, I don't... I think it's a lot of engineering problems. I don't think it's a fundamental problem.

    29. LF

      Okay.

    30. JH

      I could ask you the same question, how hard is it for computers to fit into a human world?

  9. 1:08:10 - 1:20:12

    Sam Harris and existential risk of AI

    1. LF

      so what's your intuition here? You had a (laughs) conversation with Sam Harris recently that was, sort of... You had a bit of a disagreement, and you're sticking on this point. You know, Elon Musk, Stuart Russell, they kinda have a worry about existential threats of AI. What's your intuition? If we engineer an increasingly intelligent, neocortex-type system in the computer, why shouldn't that be a thing that we worry about?

    2. JH

      It was intuition. You used the word intuition, and Sam Harris used the word intuition too. And when he used that word, I immediately stopped and said, "Oh, that's the crux of the problem."

    3. LF

      Mm-hmm.

    4. JH

      He's using intuition. I'm not speaking about my intuition.

    5. LF

      Yes.

    6. JH

      I'm speaking about something I understand, something I'm gonna build, something I am building-

    7. LF

      Mm-hmm.

    8. JH

      ... something I understand completely, or at least well enough to know what it's gonna do. I'm not guessing; I know what this thing's gonna do. And I think most people who are worried have trouble separating that out... I mean, they don't have the knowledge or the understanding about what is intelligence, how does it manifest in the brain, how is it separate from these other functions in the brain. And so they imagine it's gonna be human-like or animal-like, that it's gonna have the same sort of drives and emotions we have. But there's no reason for that. That's just because it's unknown. If it's unknown, it's like, "Oh my God, I don't know what this is gonna do. We have to be careful. It could be like us, but really smarter."

    9. LF

      Mm-hmm.

    10. JH

      I'm saying, no, it won't be like us. It'll be really smart, but it won't be like us at all. And I'm coming from that not because I'm just guessing; I'm not using intuition. I'm basing it on, "Okay, I understand how this thing works. This is what it does. Let me explain it to you."

    11. LF

      Okay. But to push back... So I also disagree with the intuitions that Sam has, but I also disagree with what you just said. You know, what's a good analogy? If you look at the Twitter algorithm in the early days, just recommender systems, you can understand how a recommender system works. What you can't understand in the early days is, when you apply that recommender system at scale to thousands and millions of people, how that can change societies.

    12. JH

      Yeah.

    13. LF

      So the question is... yes, you're saying this is how an engineered neocortex works, but when you have a very useful, TikTok type of service that-

    14. JH

      Yup.

    15. LF

      ... goes viral.

    16. JH

      Yeah.

    17. LF

      When your neocortex goes viral- (laughs)

    18. JH

      Yeah.

    19. LF

      ... and then millions of people start using it-

    20. JH

      Yeah.

    21. LF

      ... can that destroy the world?

    22. JH

      No. Well, first of all, one thing I wanna say is that AI is a dangerous technology. I'm not denying that.

    23. LF

      All technology is dangerous, right?

    24. JH

      Well, AI may be particularly so.

    25. LF

      Yeah.

    26. JH

      Okay, so, am I worried about it? Yeah, I'm totally worried about it. But the narrow component we're talking about now is the existential risk of AI.

    27. LF

      Ah.

    28. JH

      Right?

    29. LF

      Yeah.

    30. JH

      So, I wanna make that distinction, 'cause I think AI can be applied poorly, it can be applied in ways that, you know, people are gonna-

Episode duration: 2:18:29


Transcript of episode Z1KwkpTUbkg
