Lex Fridman Podcast

Risto Miikkulainen: Neuroevolution and Evolutionary Computation | Lex Fridman Podcast #177

Risto Miikkulainen is a computer scientist at UT Austin. Please support this podcast by checking out our sponsors:
- The Jordan Harbinger Show: https://jordanharbinger.com/lex/
- Grammarly: https://grammarly.com/lex to get 20% off premium
- Belcampo: https://belcampo.com/lex and use code LEX to get 20% off first order
- Indeed: https://indeed.com/lex to get $75 credit

EPISODE LINKS:
Risto's Website: https://www.cs.utexas.edu/users/risto/

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
1:07 - If we re-ran Earth over 1 million times
4:24 - Would aliens detect humans?
7:02 - Evolution of intelligent life
10:47 - Fear of death
17:03 - Hyenas
20:28 - Language
23:59 - The magic of programming
29:59 - Neuralink
37:31 - Surprising discoveries by AI
41:06 - How evolutionary computation works
52:28 - Learning to walk
55:41 - Robots and a theory of mind
1:04:45 - Neuroevolution
1:15:03 - Tesla Autopilot
1:18:28 - Language and vision
1:24:09 - Aliens communicating with humans
1:29:45 - Would AI learn to lie to humans?
1:36:20 - Artificial life
1:41:12 - Cellular automata
1:46:49 - Advice for young people
1:51:25 - Meaning of life

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Lex Fridman (host), Risto Miikkulainen (guest)
Apr 19, 2021 · 1h 56m


  1. 0:00 – 1:07

    Introduction

    1. LF

      The following is a conversation with Risto Miikkulainen, a computer scientist at University of Texas at Austin, and associate vice president of Evolutionary Artificial Intelligence at Cognizant. He specializes in evolutionary computation, but also many other topics in artificial intelligence, cognitive science, and neuroscience. Quick mention of our sponsors: Jordan Harbinger Show, Grammarly, Belcampo, and Indeed. Check them out in the description to support this podcast. As a side note, let me say that nature-inspired algorithms from ant colony optimization, to genetic algorithms, to cellular automata, to neural networks have always captivated my imagination, not only for their surprising power in the face of long odds, but because they always opened up doors to new ways of thinking about computation. It does seem that in the long arc of computing history running toward biology, not running away from it, is what leads to long-term progress. This is the Lex Fridman podcast, and here is my conversation with Risto Miikkulainen.

  2. 1:07 – 4:24

    If we re-ran Earth over 1 million times

    1. LF

      If we ran the Earth experiment, this fun little experiment we're on, over and over and over and over a million times and watched the evolution of life as it, uh, pans out, how much variation in the outcomes of that evolution do you think we would see?

    2. RM

      Mm-hmm.

    3. LF

      Now, we should say that you are a computer scientist.

    4. RM

      That's actually not such a bad question for computer scientist, because we are building simulations of these things, and we are simulating evolution. Uh, and that's a difficult question to answer in biology, but we can build a computational model-

    5. LF

      (laughs)

    6. RM

      ... and run it million times.

    7. LF

      Yes.

    8. RM

      And actually answer that question, how much variation do we see when we, when we simulate it. Uh, and, um, you know, that's a little bit beyond what we can do today (laughs) , but, but I think that we will see some regularities. Uh, and it took evolution also a really long time to get started, and then things accelerated really fast, uh, towards the end. Uh, but there are things that need to be discovered, and they probably will be over and over again, like manipulation, uh, of objects, uh, opposable thumbs, and, um, and also some way to communicate, uh, maybe orally like why we have speech. It might be some other kind of sounds. Um, and, and decision-making, but also vision. Uh, eye has evolved many times, uh, various vision systems have evolved. So we would see those kinds of solutions, I believe, emerge over and over again. They may look a little different, but they, they get the job done. The really interesting question is, would we have primates? Would we have humans or something that resembles humans? Uh, and, and would that be an apex (laughs) of evolution after a while? Um, we don't know where we're going from here, but we certainly see a lot of tool use and, and building, uh, or constructing our environment. So I think that we will get that. Uh, that we get some e- evolution producing some agents that can do that, manipulate the environment and build.

    9. LF

      What do you think is special about humans? Like, if you were running the simulation and you observe humans emerge, like these, like tool makers, they start a fire and all this stuff, start running around building buildings, and then running for president, all those kinds of things. Uh, what would be... How would you detect that? 'Cause you're like really busy as the creator of this evolutionary system, so you don't have much time to observe, like detect if any cool stuff came up.

    10. RM

      Mm-hmm.

    11. LF

      Right? How would you detect humans?

    12. RM

      Well, you are running the simulation, so, uh, you also put in visualization and measurement techniques there. So if you are looking for s- certain things like communication, uh, you'll have detectors to find out whether that's happening-

    13. LF

      Mm-hmm.

    14. RM

      ... even if it's a large simulation. Uh, and I think that that's, that's what, uh, what we would do. Uh, we know roughly what we want, intelligent agents that communicate, cooperate, manipulate, um, and we would build detections and visualizations of those processes. Yeah, it... And there's a lot of... We'd have to run it many times, and, uh, we have plenty of time to figure out how we detect the interesting things. But also I think we do have to run it many times because we don't quite know what shape those will take.

    15. LF

      Right.

    16. RM

      And our detectors may not be perfect for them-

    17. LF

      Well-

    18. RM

      ... uh, to

  3. 4:24 – 7:02

    Would aliens detect humans?

    1. RM

      begin with.

    2. LF

      Well, that seems really difficult to build a detector of intelligent or intelligent conver- c- uh, communication, sort of, uh... If we take an alien perspective observing Earth, are you sure that they would be able to detect humans is the special thing? Wouldn't they be already curious about other things? There's way more insects by body mass, I think, than humans, by far.

    3. RM

      Mm-hmm.

    4. LF

      Uh, ant colonies. Uh, obviously dolphins is the most intelligent, uh, creature on Earth. We all know this. So it could be the dolphins that they detect. It could be the rockets that we seem to be launching, that could be the intelligent creature they detect. Uh, it could be some other... Uh, trees. Trees have been here a long time. I just learned that sharks have been here 400 million years, and that's longer than trees have been here. So maybe it's the sharks, they go by age.

    5. RM

      Mm-hmm.

    6. LF

      Like there's a persistent thing, like if you survive long enough, especially through the mass extinctions, that could be the, the, the thing your detector is, uh, detecting.

    7. RM

      Mm-hmm.

    8. LF

      Humans have been here for a very short time and we're just creating a lot of pollution, but so is the other creatures, so I don't know. Do y- you, do you think you'd be able to detect humans? Like how would you go about detecting, in the computational sense, maybe we can leave humans behind, in the computational sense, detect interesting things? Uh, do you basically have to have a strict objective function by which you measure the performance of a system? Or can you find curiosities and interesting things?

    9. RM

      Yeah. Well, I think it... The first, um, measurement would be to detect... how much of an effect you can have in your environment.

    10. LF

      Right.

    11. RM

      So, if you look at, look around, we have cities and that is constructed environments and that's where a lot of people live, most people live. So, that would be a good sign of intelligence that you, uh, don't just live in an environment but you construct it to your liking.

    12. LF

      Yeah.

    13. RM

      Uh, and that's something pretty unique. Uh, I mean, uh, certainly birds build nests and a hollow, but they don't build quite cities. Termites build, uh, mounds and hives and things like that. Uh, but the complexity of the human, uh, construction cities I think would stand out, even to an external observer.

    14. LF

      Of course, that's what a human would say.

    15. RM

      (laughs)

    16. LF

      (laughs)

    17. RM

      Yeah, and, uh, you know, you can certainly say that, uh, sharks are really smart because they've been around so long-

    18. LF

      Yeah.

    19. RM

      ... and they haven't destroyed their environment which humans are about to do (laughs) -

    20. LF

      Yeah.

    21. RM

      ... which is not a very smart thing. Uh, but we'll get over it. I'm, I'm, I, I believe.

    22. LF

      (laughs)

    23. RM

      Uh, and- and, uh-

    24. LF

      Yeah.

    25. RM

      ... we can get over it by doing some construction that actually is benign, uh, and maybe even enhances, uh, the, um, resilience of- of nature.

  4. 7:02 – 10:47

    Evolution of intelligent life

    1. RM

    2. LF

      So, you mentioned, uh, this simulation that we run over and over might start slo- i- it's a slow start. So, do you think, uh, how unlikely... First of all, I don't know if you think about this kind of stuff, but how unlikely is step number zero which is the springing up, like the origin of life on Earth? And second, how unlikely is the, um, anything interesting happening beyond that?

    3. RM

      Mm.

    4. LF

      So, like the start, uh, that- that- that creates all the rich complexity that we see on Earth today.

    5. RM

      Yeah. There- there are people who are working on exactly that problem, uh, from primordial soup, how do you actually get self-replicating-

    6. LF

      Yeah.

    7. RM

      ... molecules. And they are very close. Uh, with a little bit of help, you can make that happen. Um, so, uh, we... of course, we know what we want, so they can set up the conditions and try out conditions-

    8. LF

      Right.

    9. RM

      ... that are conducive to that. Uh, for evolution to discover that, that took a long time. Uh, for us to recreate it, probably won't take that long. Uh, and the next steps from there, um, I think also with some handholding, I think we can make that happen. Um, but, uh, with evolution, what was really fascinating was eventually the runaway evolution of the brain that created humans and created-

    10. LF

      Right.

    11. RM

      ... well, also, uh, other higher animals. That that was something that happened really fast, uh, and that's a big question. Is that something replicable? Is that something that can happen? And if it happens, does it go in the same direction? Um, that is a big question to ask. Even in computational terms, I- I think that it's, uh, relatively possible to, uh, come up with, create an experiment where we look at the primordial soup and the first couple of steps of multicellular organisms even. Um, but to get something as complex as the brain, um, we don't quite know the conditions for that, uh, and how- how to even get started and whether we can get this kind of runaway evolution happening.

    12. LF

      From a detective perspective, if we're observing this evolution, w- what do you think is the brain? What do you think is the, um... let's say, what is intelligence? So, in terms of the thing that makes humans special, we seem to be able to reason, we seem to be able to communicate. But the core of that is this... something in the broad- broad category we- we might call intelligence.

    13. RM

      Mm.

    14. LF

      So, is, uh... if you put your computer scientist hat on, uh, m- is there favorite ways you like to think about that question of what is intelligence?

    15. RM

      Well, my goal is to create agents that are, that are intelligent.

    16. LF

      Not to define what (laughs) -

    17. RM

      And, and that- that is a way of defining it, and that means that it's some kind of an, um, object or- or a program, um, that has limited sensory and, uh, effective capabilities interacting with the world, and then also a mechanism for making decisions. So, with limited abilities like that, can it survive? Um, uh, survival is the simplest goal, but it could al- you could also give it other goals. Can it multiply? Can it, uh, solve problems that you give it? Uh, and that is quite a bit less than human intelligence. There are, uh... uh, animals would be intelligent, of course, with that definition. And you might have, uh, even- even some other forms of- of life. Even... so what- so intelligence in that sense is- is a survival, um, skill, uh, given resources that you have and using- using your resources so that you will stay around.
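The working definition given here, an agent with limited sensing, limited effectors, a decision mechanism, and survival as the measure of success, can be written down as a minimal sketch. All class, field, and world names below are illustrative inventions, not from any particular framework or from Miikkulainen's own systems.

```python
import random

class Agent:
    """Minimal agent: limited senses, limited actions, a decision rule,
    and survival (staying above zero energy) as the objective."""

    def __init__(self, energy=10):
        self.energy = energy

    def sense(self, world):
        # Limited sensing: the agent only perceives the food at its location.
        return world["food_here"]

    def decide(self, observation):
        # Decision mechanism: eat if food is present, otherwise search.
        return "eat" if observation > 0 else "search"

    def act(self, action, world):
        # Limited effectors: every step costs energy; eating restores it.
        self.energy -= 1
        if action == "eat":
            self.energy += world["food_here"]
            world["food_here"] = 0
        else:
            world["food_here"] = random.randint(0, 3)

def survives(agent, world, steps=20):
    # Survival as the simplest goal: last the whole run with energy left.
    for _ in range(steps):
        agent.act(agent.decide(agent.sense(world)), world)
        if agent.energy <= 0:
            return False
    return True
```

Under this definition, anything with the same sense/decide/act loop and a survival criterion counts as an agent, whether animal, human, or program, which is exactly the generality the definition is after.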

  5. 10:47 – 17:03

    Fear of death

    1. RM

    2. LF

      Uh, do you think death, mortality is fundamental to an agent?

    3. RM

      Mm.

    4. LF

      So, like, there's a... I don't know if you're familiar, there's a philosopher named Ernest Becker who wrote The j- uh, Denial of Death. And, uh, his whole idea... and there's folks, psychologists, cognitive scientists that work on terror management theory, and they think that one of the special things about humans is that we're able to sort of foresee our death, right? We can, we can realize not just as animals do, sort of constantly fear in an instinctual sense, respond to all the dangers that are out there-

    5. RM

      Mm-hmm.

    6. LF

      ... but, like, understand that this ride ends eventually.

    7. RM

      Yeah.

    8. LF

      And that, in itself, is the most... is, uh... is the force behind all of the creative efforts of human nature.

    9. RM

      Yeah.

    10. LF

      That's- that's the philosophy.

    11. RM

      I think that makes sense. A lot of sense. I mean, animals probably don't think of death (laughs) the same way. But humans know that your time is limited and you want to make it count. Um, and you can make it count in many different ways. But I think that has a lot to do with creativity and the need for humans to do something beyond just surviving. Uh, and now going from that simple definition to something that's the next level, I think, that that could be the second decision... uh, uh, the second level of definition that, um, intelligence means something... that you do something that stays behind you, that's more than... uh, your, uh, existence. Um, something, you create something that, um-

    12. LF

      (laughs)

    13. RM

      ... is useful for others, is useful in the future, not just for yourself. And I think that is, that's a nice d- definition of intelligence wi- in a next level. Uh, and it's also nice 'cause it doesn't require that they are humans or biological. They could be artificial agents that are intelligence. They could, th- they could achieve those kind of goals.

    14. LF

      So a particular agent, the, uh, the ripple effects of, of their existence on the entire idea of the system is significant, so like they leave a trace where the- there's like a, yeah, like ripple effects. It's the... but see, then you go back to the, the butterfly with the flap of a wing, and then you can, uh, trace a lot of, uh, like nuclear wars and-

    15. RM

      Mm-hmm.

    16. LF

      ... all the conflicts of human history somehow connected to that one butterfly that created all the, the chaos. So maybe that's not... may- maybe that's a very poetic way to think. Uh, tha- that's something we humans in a human-centric way wanna hope we have this impact, like that is the, the, the secondary effect of our intelligence if we had the long-lasting impact on the world. But maybe the entirety of physics in the universe has a very (laughs) long-lasting effects.

    17. RM

      Sure. But, um, th- you can also think of it, what if, um, like in It's a Wonderful Life, what if you're not here?

    18. LF

      Yeah.

    19. RM

      Will somebody else do this? Uh, is it-

    20. LF

      Ah.

    21. RM

      ... is it something that you actually contributed because you had something unique to compute, that, that th- co- contribute. That's a pretty high bar, though. (laughs)

    22. LF

      Uniqueness.

    23. RM

      Yeah.

    24. LF

      Yeah.

    25. RM

      Yeah. So, you know, you have to be (laughs) Mozart or something to, to actually reach that level, that nobody would have developed that. But other people might have solved this equation, um, if you didn't do it. Um, but, but also within limited scope. I mean, during your lifetime or next year, um, you could contribute something that unique that other people did not see and, um, and then that could change the way things move forward for a while. Uh, so I don't think we have to be Mozarts (laughs) to be called intelligent, but we have this local effect that is changing. If, if you weren't there, that would not have happened, and it's a positive effect, of course. You want it to be a positive effect.

    26. LF

      Do you think it's possible to engineer into, uh, computational agents a fear of mortality? Like, uh, does that make-

    27. RM

      Mm-hmm.

    28. LF

      ... any s- any sense? So there's a very trivial thing where it's like th- you could just code in a parameter which is how long the life ends. But more of a fear of mortality, like awareness of the, the way that things end and somehow encoding a complex representation of that fear which is like maybe as it gets closer, you become more terrified.

    29. RM

      Mm.

    30. LF

      I mean, uh, th- there seems to be something really profound about this fear that's not currently encodable in a trivial way into our programs.

  6. 17:03 – 20:28

    Hyenas

    1. RM

    2. LF

      Um, do you ever in your work or, like maybe o- on a coffee break think about-

    3. RM

      (laughs)

    4. LF

      ... what the heck is this thing consciousness and is it at all useful in our thinking about AI systems?

    5. RM

      Yes. It is an important question. Um, you can build representations and functions, I think, into these agents that act like emotions and consciousness perhaps. So I mentioned, uh, emotions being something that allow you to focus and pay attention, filter out what's important. Yeah, you can have that kind of a filter mechanism. Uh, and you can... it puts you in a different state. Your computation is in a different state. Certain things don't really get through and others are heightened. Uh, now, w- you label that box emotion. I don't know if that means it's an emotion-

    6. LF

      Hm.

    7. RM

      ... but it acts very much like we understand what emotions are. Uh, and, and we actually did some work like that, um, modeling hyenas, uh, who were trying to steal a kill from lions, uh, which happens in Africa. I mean, hyenas are quite intelligent but not really intelligent, uh, and, um, they bef- they have this behavior that's...... more complex than anything else they do. They can band together. If there's, uh, 30 of them or so, uh, they can, uh, coordinate their effort so that they push the lions away from a kill even though the lions are so strong (laughs) that they could just kill a lion... kill a hyena by, by striking with a paw. Uh, but when they work together and precisely time this attack, the lions will leave, and they get the kill.

    8. LF

      Mm-hmm.

    9. RM

      Uh, and probably there are some states, like emotions, that the hyenas go through. At first, they, uh, they call for reinforcements. They really want that kill, but there's not enough of them, so they vocalize and, and th- there's more peop- more people, more hyenas that come around. Uh, and then they have two emotions. They are very afraid of the lion, uh, so they wanna stay away, but they also have a strong affiliation, uh, bet- between each other. And then this is the balance of the two emotions. And ev- and also, yes, they also want the kill. So it's both repelled and attractive and then... But then this affiliation eventually is so strong that when they move, they move together. They act as a unit and they, they can, uh, perform that function. So there's an interesting behavior that seems to depend on these emotions-

    10. LF

      Mm-hmm.

    11. RM

      ... strongly and makes it possible, um-

    12. LF

      And I think-

    13. RM

      ... coordinating the actions.
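The balance of drives just described, fear of the lions repelling the hyenas while affiliation with packmates and appetite for the kill attract them, can be sketched as a toy threshold model. The weights below are invented purely for illustration; in the actual research such balances were modeled and evolved, not hand-coded like this.

```python
# Toy two-drive model of the hyena mob: attack happens only once
# attraction (kill + affiliation with packmates) outweighs fear of the
# lions. All numeric weights are invented for illustration.

FEAR_PER_LION = 3.0          # repulsion contributed by each lion at the kill
AFFILIATION_PER_ALLY = 0.2   # attraction contributed by each packmate
KILL_APPEAL = 1.0            # baseline attraction of the kill itself

def net_drive(n_hyenas, n_lions):
    attraction = KILL_APPEAL + AFFILIATION_PER_ALLY * n_hyenas
    repulsion = FEAR_PER_LION * n_lions
    return attraction - repulsion

def mob_attacks(n_hyenas, n_lions):
    # The pack moves as a unit only when the balance tips positive.
    return net_drive(n_hyenas, n_lions) > 0
```

With these invented weights a handful of hyenas keeps its distance from two lions, but a pack of around thirty tips the balance and commits together, mirroring the coordinated mobbing behavior described above.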

    14. LF

      ... and I think a cr- (laughs) a critical aspect of that, uh, the way you're describing these emotion there is its mechanism of social communication-

    15. RM

      Yeah.

    16. LF

      ... of, uh, social interaction. Maybe that, maybe humans won't even be that intelligent, or most things we think of as intelligent wouldn't be that intelligent without the social component of interaction.

    17. RM

      Mm.

    18. LF

      Maybe mu- much of our intelligence is essentially an outgrowth of social interaction. And maybe for the creation of intelligent agents, we have to be creating-

    19. RM

      Yes.

    20. LF

      ... fundamentally social systems.

    21. RM

      Yes. I, I, I strongly believe that's true. And, uh, yes, the, uh, communication is multifaceted. I mean, they, they vocalize and call for friends, but they also rub against each other and they push and they do all kinds of gestures and so on. So they don't act alone. And I don't think people act alone, uh, very much either, at least normal, most of the time. Uh, and social systems are so strong for humans, um, that I think we build everything on top of these kind of structures.

  7. 20:28 – 23:59

    Language

    1. RM

      And, um, one interesting theory around that, bigotedness theory, for instance, for language, lang- language origins, is that, uh, where did language come from? Uh, and, um, and it's a plausible theory that, uh, first came social systems that, uh, you have different roles in a society. Um, and then those roles are exchangeable. That, you know, I scratch your back, you scratch my back. We can exchange roles. And once you have the brain structures that allow you to understand actions in terms of roles that can be changed, that's the basis for language, for grammar. And now you can start using symbols to refer to, uh, objects in the world, uh, and you have this flexible structure. So there's a social structure that's fundame- fundamental for language to develop. Now, again, then you have language, you can, you can refer to things that are not here (laughs) right now.

    2. LF

      Mm-hmm.

    3. RM

      Uh, and that allows you to then build all the, all the good stuff about, uh, planning, for instance, and building things and so on. So, yeah, I think that very strongly, um, humans are social. And that gives us ability, um, uh, to structure the world. But also as a society, we can do so much more 'cause we don't... one person does not have to do everything. You can have different roles and together achieve a lot more. Uh, and that's also something we see in computational simulations today. I mean, we have multiagent systems that can perform tasks. This fascinating, uh, demonstration, Marco Dorigo, I think it was, um, these robots, little robots that had to navigate through an environment, and there was, there were things that are dangerous, like maybe a, uh, a big chasm or some kind of groove, a, a hole, and they could not get across it. But if they grab each other with their gripper, they form a robot (Lex laughs) that was much longer-

    4. LF

      Yeah.

    5. RM

      ... on a team, and this way, they could get ac- across that.

    6. LF

      Yeah.

    7. RM

      So this is great example of how together we can achieve things we couldn't otherwise. Like the hyenas.

    8. LF

      Mm-hmm.

    9. RM

      Uh, you know, alone, they couldn't, but as, as a team, they could. Uh, and I think humans do that all the time. We're really good at that.

    10. LF

      Yeah, and the way you described the, the system of hyenas, it almost sounds algorithmic. Like, the, the problem with humans is they're so complex, uh, it's hard to think of them as algorithms.

    11. RM

      Mm.

    12. LF

      But with hyenas, there's a... it's simple enough to where it feels like, um, at least hopeful that it's possible to create computational systems that mimic that.

    13. RM

      Yeah. That's exactly why, (laughs) why we looked at that-

    14. LF

      As opposed to humans. (laughs)

    15. RM

      Um, like I said, they are intelligent, but they are not quite as intelligence- intelligent as, say, baboons-

    16. LF

      Yeah.

    17. RM

      ... which would learn a lot and would be much more flexible. The hyenas are relatively rigid in what they can do, and therefore, you could look at this behavior like this is a breakthrough in evolution about to happen.

    18. LF

      Yes.

    19. RM

      That they've discovered something about so- social structures, communication, about cooperation, and, and it might then spill over to other things, too-

    20. LF

      Yeah.

    21. RM

      ... uh, in thousands of years in the future.

    22. LF

      Yeah, I, I think the problem with baboons and humans is probably too much is going on inside the head where we won't be able to measure it if we're observing the system.

    23. RM

      Hm.

    24. LF

      With hyenas, it's probably easier to observe the actual decision-making and the various motivations that are involved.

    25. RM

      Yeah, they are visible.

    26. LF

      Yeah.

    27. RM

      And we can even, um, quantify possibly their, uh, emotional state because they leave droppings behind.

    28. LF

      (laughs)

    29. RM

      And, and there are chemicals there that can be associated with-

    30. LF

      Nice.

  8. 23:59 – 29:59

    The magic of programming

    1. RM

      Yeah.

    2. LF

      What to you is the most beautiful... Speaking of hyenas, uh, what to you is the most beautiful, uh, nature-inspired algorithm in your work that you've come across? Just something maybe early on in your work or maybe today.

    3. RM

      Hm. I, I think that evolution, uh, computation is the most amazing method. So what... fascinates me most is that, uh, with computers is that you can, you can get more out than you put in. I mean-

    4. LF

      Mm-hmm.

    5. RM

      ... you can write a piece of code and your machine does what you told it. I mean, this happened to me in my freshman year. I- it did something very simple and I was just amazed. I was blown away that it would, it would get the number and it would co- compute the- the result, and I didn't have to do it myself. Very simple. Uh, but if you push that a little further, you can have machines that learn and they might learn patterns, uh, and already, say, deep learning neural networks, they can learn to recognize objects, sounds, um, patterns that humans have trouble with. And sometimes they do it better than humans, and that's so fascinating.

    6. LF

      Mm-hmm.

    7. RM

      And now if you take that one more step, you get something like evolutionary algorithms that discover things, they create things, they come up with solutions that you did not think of, and that just blows me away. It's so great that we can build systems, algorithms that can be in some sense smarter than we are, that they can discover solutions that we might miss.

    8. LF

      Mm-hmm.

    9. RM

      Um, a lot of times it is because we have, as humans, we have certain biases. We expect the solutions to be certain way and you don't put those biases into the algorithm so they are more free to explore. And evolution is just absolutely fantastic explorer. And that's what- what really is fascinating.
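A minimal illustration of this kind of discovery is a textbook-style genetic algorithm on the "OneMax" toy problem: the algorithm is only ever told each candidate's fitness, never the answer, yet selection plus mutation reliably climbs to the optimum. This is a generic sketch, not code from Miikkulainen's own work.

```python
import random

# Toy genetic algorithm: evolve a 20-bit string toward all ones.
GENES, POP, GENERATIONS, MUT_RATE = 20, 30, 60, 0.05

def fitness(bits):
    return sum(bits)  # number of ones; the algorithm never sees the goal directly

def mutate(bits):
    # Flip each bit independently with small probability.
    return [b ^ (random.random() < MUT_RATE) for b in bits]

def evolve(seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]                       # truncation selection
        children = [mutate(random.choice(parents))     # offspring with variation
                    for _ in range(POP - len(parents))]
        pop = parents + children                       # parents kept: elitism
    return max(pop, key=fitness)

best = evolve()
```

The same loop, with a richer genome and fitness function, is the skeleton behind the surprising, bias-free exploration described above: the search has no preconception of what a solution should look like, only a score.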

    10. LF

      Yeah. I think, uh, (laughs) I get made fun of a bit, 'cause I currently don't have any kids, but you mentioned programs. I mean, um... Do you have kids?

    11. RM

      Yeah.

    12. LF

      So maybe you could speak to this, but there's a magic to the creati- creative process. Like, I, uh, with- with Spot, the- the Boston Dynamics' Spot, but really any robot that I ever worked on, it just feels like s- the similar kind of joy I imagine I would have as a father. N- uh, not the same perhaps level, but like the same kind of wonderment, like-

    13. RM

      Yeah.

    14. LF

      ... that exactly this, which is like you know what you had to do, (laughs) initially, uh, to get this thing going, uh, let's- let's speak on the computer science side, like what the program looks like. But something about it, uh, doing more than-

    15. RM

      Yeah.

    16. LF

      ... what the program was written on paper I- is like that somehow connects to the magic of this entire universe. Like, that's- that's like I- I feel like I found God every time I like-

    17. RM

      (laughs) .

    18. LF

      It- it's like, uh, (laughs) 'cause you're- you've really created something that's-

    19. RM

      Yeah.

    20. LF

      ... living.

    21. RM

      Yeah.

    22. LF

      E- even if it's a simple program.

    23. RM

      It has a life of its own, has intelligence of its own. It's beyond what you actually thought.

    24. LF

      Yeah.

    25. RM

      Uh, and that is I think it's exactly spot on. That's exactly what it's about. Uh, you created something, it has a- a ability to, uh, live (laughs) its life and- and do good things, and, um, you just gave it a starting point. So in that sense, I think it's... That may be part of the joy, actually.

    26. LF

      Yeah. W- Uh, so but you mentioned creativity in this context, uh, especially in- i- in the context of evolutionary computation. So, you know, we don't often think of algorithms as creative. So how do you think about creativity?

    27. RM

      Mm. Yeah. Y- algorithms absolutely can be creative. Um, they can, uh, come up with solutions that you don't think about. I mean, creativity can be defined a couple of requirements. Have to... It has to be new, it has to be useful, and it has to be surprising. Uh, and those certainly are true with, uh, say evolution computation discovering solutions. Um, so maybe an example, for instance, we did, um, this collaboration with MIT Media Lab, Caleb Harper's lab, uh, where they had a hydroponic, um, food computer they called it, environment that was completely computer controlled, nutrients, water, light, uh, temperature, everything is controlled. Now, um, what do you do (laughs) if you can control everything? Farmers know a lot about how to do... how to make plants grow in their own patch of land. But if you can control everything, it's too much. And it turns out that we don't actually know very much about it. So, uh, we built a system, evolution optimization system, um, together with a surrogate model of how plants grow, uh, and let, uh, this system explore recipes on its own. Uh, and initially, uh, we were focusing on light, uh, how strong, what wavelengths, how long the light was on, um, and we put some boundaries which we thought were reasonable. For instance, that there was, um, at least six hours of darkness like night because that's what we have in the world.

    28. LF

      Mm-hmm.

    29. RM

      Uh, and very quickly, um, the system evolution, uh, pushed all the recipes to that limit. Uh, we were trying to grow basil, um, and, uh, we had... initially had some 200, 300 recipes, exploration as well as known recipes, but- but now we are going beyond that and everything was, like, pushed to that- that limit. So we look at it and say, "Well, you know, we can easily just change it. Let's have it your way." And it turns out, uh, the system discovered that basil does not need to sleep. Uh, 24 hours lights on and it will thrive. It will be bigger, it will be tastier. And this was a big surprise, um, not just to us, but also the biologists in the team, uh, who, uh, anticipated that this- th- th- that there are some constraints that- that are in the world for a reason. It turns out that evolution did not have the same bias and therefore it discovered something that was creative: it was surprising, it was useful, and it was new.

    30. LF

      That's fascinating to think about, like, the things we think that are fundamental to living systems on Earth today, whether they're actually fundamental or they somehow shape, um, fit the constraints of the system, and all we have to do is just remove the constraints. Do you ever think about, um...

  9. 29:59–37:31

    Neuralink

    1. LF

      Uh, I don't know how much you know about brain computer interfaces in Neuralink.

    2. RM

      Mm-hmm.

    3. LF

      The- the idea there is, you know, our brains are very limited.

    4. RM

      Mm-hmm.

    5. LF

      And if we just allow... we plug in... we- we- we provide a mechanism for a computer to speak with the brain so you're thereby expanding the computational power of the brain, the possibility there, sort of from a very high-level philosophical... perspective, is limitless. But I wonder how limitless it is. Are the constraints we have, like, features that are fundamental to our intelligence? Or is this just, like, a weird constraint in terms of our brain size and skull and, uh, life span and senses, just a weird little, like, quirk of evolution, and if we just open that up-

    6. RM

      Mm.

    7. LF

      ... like add much more senses, add much more computational power, the, uh, intelligence will be, will expand exponentially.

    8. RM

      Mm-hmm.

    9. LF

      Do you have a, do you have a sense about constraints?

    10. RM

      Mm.

    11. LF

      The relationship of evolution and computation to the constraints of the environment.

    12. RM

      Um, well, at- at first I'd like to comment on- on that like changing the inputs, uh, to human brain. Uh-

    13. LF

      Yes, that would be great.

    14. RM

      ... and flexibility of, of the brain. I think there's a lot of that. Uh, there are experiments that are done in animals, like Mriganka Sur's, um, at MIT, switching the, um, auditory and, and visual, uh, information-

    15. LF

      Mm-hmm.

    16. RM

      ... going, going to the wrong part of the cortex, and the animal, uh, was still able to hear and, and perceive the visual environment. And there are, um, kids that are, are born with severe disorders and sometimes they have to remove half of the brain, like one half, and they still grow up. They have the functions migrate to the other parts. There's a lot of flexibility like that. So I think it's quite possible to, um, hook up the brain with different kinds of sensors, for instance.

    17. LF

      Mm-hmm.

    18. RM

      Uh, and, uh, something that we don't even quite understand or have today, a different kind of wavelengths or- or whatever they are. Um, and then the brain can learn to make sense of it.

    19. LF

      Mm-hmm.

    20. RM

      Um, and that I think is, um, there's good hope that these prosthetic devices, for instance, work not because we make them so good and so easy to use, but the brain adapts to them and can learn to take advantage of them. Um, and so in that sense, if there's a trouble, a problem, I think the brain can, uh, be used to correct it. Now going beyond what we have today, can you get smarter?

    21. LF

      Mm-hmm.

    22. RM

      That's really much harder to do. Uh, giving the brain more, more input probably might overwhelm it. It would have to learn to filter it and focus, um, and in order to use the information effectively. Uh, and augmenting intelligence with some kind of external devices like that might be difficult, uh, I think. But, uh, replacing what's lost, I think is quite possible.

    23. LF

      Right. So our intuition allows us to- to sort of imagine that we can replace what's been lost, but expansion beyond what we have... I mean, we are already one of the most, if not the most intelligent things on this earth, right? So, it's hard to imagine, um, if the brain can hold up with an order of magnitude greater set of information thrown at it. I- if it can do, if it can reason through that. Part of me, this is the Russian thing, I think, is, uh, I tend to think that the limitations are where the p- the superpower is.

    24. RM

      Mm.

    25. LF

      That, you know, immortality and, uh, huge increase in bandwidth of, um, information by connecting computers with the brain is not going to produce greater intelligence.

    26. RM

      Mm-hmm.

    27. LF

      It might produce lesser intelligence. So, I don't know. There's something about the scarcity being, uh, essential to, uh, um, fitness or performance.

    28. RM

      Mm-hmm.

    29. LF

      But that could be just 'cause we're so, uh... (laughs)

    30. RM

      Yeah.

  10. 37:31–41:06

    Surprising discoveries by AI

    1. LF

      Is there... So you mentioned surprising being a characteristic of, uh, creativity. Is there something... You already mentioned a few examples, but is there something that jumps out at you as was particularly surprising from the various evolutionary computation systems you've worked on, the solutions that were, um, come up along the way? Not necessarily the final solutions, but maybe things that were even discarded.

    2. RM

      Mm-hmm.

    3. LF

      Is there something that just jumps to mind?

    4. RM

      It, it happens all the time. I mean, evolution is so creative, uh, so good at discovering, uh, solutions you don't anticipate. A lot of times they are taking advantage of something that you didn't think was there, like a bug in the software, for instance.

    5. LF

      Yeah.

    6. RM

      (laughs) A lot of... There's a great paper, uh, the community put it together about, uh, surprising anecdotes about evolution computation. A lot of them are indeed in some software environment there was a, a loophole or a bug and the system, uh, utilizes that.

    7. LF

      By the way, for people who want to read it, it's kind of fun to read. It's, uh, it's called The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities. And there's just a bunch of stories from all the seminal figures in this community. Uh, you have a story in there, uh, that relates to you at least, on the tic-tac-toe memory bomb.

    8. RM

      (laughs)

    9. LF

      So can you, can you, uh, I guess, uh, describe that situation if you think that's... If it's still-

    10. RM

      Yeah, that was... That's quite a bit smaller scale than our, um, "basil doesn't need to sleep" surprise.

    11. LF

      (laughs)

    12. RM

      But, but it was actually done by students in my class-

    13. LF

      Mm-hmm.

    14. RM

      ... um, in a neural nets evolution computation class. Uh, there was an assignment, uh, it was perhaps the final project that people built game playing, uh, AI. It was an AI class. Uh, and this one... And, and it was for tic-tac-toe or five in a row in a large board. Uh, and this one team evolved a neural network, uh, to make these moves. Uh, and, um, they set it up, the evolution. They didn't really know what would come out. Um, but it turned out that they did really well. Evolution actually won the tournament. And most of the time when it won, it won because the other teams crashed. And then when you look at what was going on, it was that it had discovered that if it makes a move that's really, really far away, like millions of squares-

    15. LF

      Mm-hmm.

    16. RM

      ... uh, away, uh, the other, uh, teams, the other g- programs just expanded memory in order to take that into account, until they ran out of memory and crashed. And then you win a tournament by crashing all your opponents.

    17. LF

      I think that's quite a profound example, uh, which probably applies to most games, even from a game theoretic perspective, that sometimes to win you don't have to be better within the rules of the game.

    18. RM

      Right.

    19. LF

      You have to come up with ways to break your opponent's, uh, brain, if it's a human. Like not through violence-

    20. RM

      Yeah.

    21. LF

      ... but through some hack where the brain just is not, um... You're basically, uh, how would you put it? You're, you're (laughs) the... You're going outside the constraints of where the brain is able to, uh, to function.

    22. RM

      You know, expectations of your opponent. I mean-

    23. LF

      Yeah.

    24. RM

      ... this was... even Kasparov pointed that out, that when Deep Blue was playing against him, it was not playing the same way as Kasparov expected. Uh, and this has to do with, you know, being... Not having the same biases. Uh, and that's, that's really one of the strengths of, of the AI approach. Yeah.

  11. 41:06–52:28

    How evolutionary computation works

    1. RM

    2. LF

      Can you at a high level say what are the basic, uh, mechanisms of evolutionary computation algorithms that use something that could be called an evolutionary approach?

    3. RM

      Mm-hmm.

    4. LF

      Like how does it work? Uh, how... What are the connections to the... its... What are the echoes of the connection to its biological counterpart?

    5. RM

      A lot of these algorithms really do take motivation from biology, but they are caricatures. You try to essentialize it and take the elements that you believe matter. So in evolutionary computation, it is the creation of variation and then the selection upon that. Uh, so for the creation of variation you have to have some mechanism that allows you to create new individuals that are very different from what you already have. That's the creativity part. Uh, and then you have to have some way of measuring how well they are doing, uh, and using, uh, that measure to select who goes to the next generation, and, and you continue.
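The two mechanisms named here, creating variation and then selecting on a fitness measure, fit in a few lines. This is a generic illustration on the classic "OneMax" toy problem (maximize the number of 1s in a bit string), not code from any system discussed in the conversation; all names and parameters are invented.

```python
import random

# A minimal generational evolutionary algorithm: variation (mutation)
# plus selection on a fitness measure.

def evolve(pop_size=20, genome_len=30, generations=50, mutation_rate=0.05):
    fitness = lambda genome: sum(genome)  # how well an individual does
    # Start from random bit-string genomes.
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the better half goes on to the next generation.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # Variation: each parent also produces a mutated child.
        children = [[(1 - bit) if random.random() < mutation_rate else bit
                     for bit in parent] for parent in parents]
        pop = parents + children
    return max(pop, key=fitness)

random.seed(42)
best = evolve()
```

Because the better half is kept unchanged each generation, the best fitness never decreases; after 50 generations the best genome is close to all 1s.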

    6. LF

      So first you also have to... You have to have some kind of digital representation of an individual that can be then modified, so I guess humans... I mean, biological systems have DNA and all those kinds of things, and so you have to have similar kind of encodings i- in a computer program.

    7. RM

      Yes, and that is a big question. How do you encode these individuals? So there's a genotype, which is that encoding, and then a decoding mechanism which gives you the phenotype, which is the actual individual that then performs the task in an environment, um, and can be evaluated for how good it is. So even that mapping is a big question then.
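A sketch of the genotype-to-phenotype mapping just described, loosely echoing the earlier plant-recipe example: evolution operates on a flat string of numbers, the decoder gives it meaning, and fitness is measured on the decoded phenotype. Every field name, range, and the surrogate fitness function here is invented for illustration.

```python
# Hypothetical genotype -> phenotype mapping. The genome is just three
# floats in [0, 1]; the decoder turns them into a growth recipe.

def decode(genome):
    """Map a raw genotype to a recipe phenotype (illustrative ranges)."""
    light_hours, intensity, temperature = genome
    return {
        "light_hours": 24 * light_hours,         # 0-24 h of light per day
        "intensity": 100 + 900 * intensity,      # 100-1000 (arbitrary units)
        "temperature_c": 15 + 15 * temperature,  # 15-30 C
    }

def surrogate_fitness(recipe):
    """Stand-in for a learned plant-growth model (pure invention):
    rewards long light exposure, penalizes extreme temperatures."""
    return recipe["light_hours"] - abs(recipe["temperature_c"] - 24)

genome = [1.0, 0.5, 0.6]        # evolution mutates and selects on this string...
recipe = decode(genome)         # ...but evaluation happens on this phenotype
score = surrogate_fitness(recipe)
```

Note that nothing in the genome itself says "light" or "temperature"; all of that meaning lives in the decoder, which is exactly why choosing the encoding is such a big question.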

    8. LF

      Mm-hmm.

    9. RM

      How do you do it? Um, but typically the representations are either strings of numbers or some kind of trees. Those are something that we know very well in computer science, and we try to use that. But... And, you know, DNA in some sense is also a sequence, um, and it's a string. Um, so it's not that far from it, but the-... DNA also has many other aspects that we don't take into account necessarily, like its folding, and, um, interactions that are other than just the, the sequence itself. And lots of that is not yet captured, and we don't know whether they are really crucial. Um, evolution, biological evolution has produced wonderful things, but if you look at them it's not necessarily the case that every piece is irreplaceable and essential. There's a lot of baggage because you have to construct it, and it has to go through various stages, and we still have appendice- a- a- appendix, and we have tailbones, and things like that, that are not really that useful. If you try to explain them now, it would make no sense, very hard. But if you think of them, um, as this product of evolution, you can see where they came from. They were useful at one point perhaps and no longer are, but they're still there. So, um, that process is- is complex, uh, and your representation should support it, uh, and that is quite difficult; um, if- if we are limited to strings or trees, uh, then we are pretty much limited in what can be constructed. And one thing that we are still missing in evolutionary computation in particular is what we see in biology, major transitions, so that you just go from, for instance, single cell to multi-cell organisms-

    10. LF

      Mm-hmm.

    11. RM

      ... and eventually societies. There are transitions of level of selection and level of what a unit is, and that's something we haven't captured in evolutionary computation yet.

    12. LF

      Does that require a dramatic expansion of the representation? Is that what that- that is?

    13. RM

      Most likely it does but it's quite... we don't even understand it in biology very well-

    14. LF

      Yeah.

    15. RM

      ... where it's coming from. So it would be really good to look at major transitions in biology, try to characterize them, uh, little bit more in detail, what the processes are, how- how does a... so like a unit, a cell, is no longer evaluated alone. It's evaluated as part of a community, um-

    16. LF

      (laughs)

    17. RM

      ... multi-cell organism.

    18. LF

      Right.

    19. RM

      Uh, even though it could reproduce, now it can't alone. Uh, it has- has to have its environment. So there's a- there's a push to another level, uh, at least to selection.

    20. LF

      And how do you make that jump to the next level-

    21. RM

      Yes, how do you make that jump? Yes.

    22. LF

      ... as part of the algorithm?

    23. RM

      Yeah. Yeah. So we haven't really seen that in- in computation, um, yet. And there are certainly attempts to have open-ended evolution, things that could add more complexity and start selecting at a higher level, but it- it is still not, um, (laughs) quite the same as going from single to multi to society for instance in- in biology.

    24. LF

      So- so the- there essentially will be... as opposed to having one agent, those agent all of a sudden spontaneously decide to then be together, and then your entire system would then be treating them as one-

    25. RM

      Yes.

    26. LF

      ... agent.

    27. RM

      Something like that. Yeah.

    28. LF

      Some kind- some kind of weird merger building. But also... so you mentioned... I think you mentioned selection. So basically there's an agent, and they don't get to live on if they don't do well, so there's some kind of measure of what doing well is and isn't.

    29. RM

      Right.

    30. LF

      And, uh, does, uh, mutation come into play at all in- in the process, and what role does it serve?

  12. 52:28–55:41

    Learning to walk

    1. RM

      learning.

    2. LF

      There's very few things as entertaining as watching either evolution computation or reinforcement learning teaching a simulated robot to walk.

    3. RM

      Mm.

    4. LF

      I- I- maybe there's a higher level question that could be asked here, but do you find this whole space in- of applications in the robotics interesting for evolution computation?

    5. RM

      Yeah. Yeah, very much. Um, and indeed that's- there are fascinating videos of that. And that's actually one of the examples where you can contrast the difference. So if-

    6. LF

      Between reinforcement learning and-

    7. RM

      Between reinforcement learning and evolution, yes.

    8. LF

      Yeah.

    9. RM

      So, if you have a reinforcement learning agent, it tries to be conservative because it wants to walk as long as possible and be stable. But if you have evolutionary computation, it can afford these agents that go haywire. They fall flat on their face, and they- they could, uh, take a step and then they jump and then again fall flat.

    10. LF

      Yeah.

    11. RM

      And eventually what comes out of that is something like a falling that's controlled. (laughs)

    12. LF

      Yeah.

    13. RM

      Uh, you take another step, another step, and you no longer fall- f- fall. I- instead you run, you go fast. So that's a way of discovering something that's hard to discover step by step incrementally.

    14. LF

      Yeah.

    15. RM

      Because you can afford these, um, evolutionary dead ends, although they are not entirely dead ends in the sense that they can serve as stepping stones. When you take two of those, put them together, you get something that works even better. Um, and that is a great example of, uh, of this kind of discovery.

    16. LF

      Yeah, learning to walk is a- is fascinating. I talk quite a bit to Russ Tedrake who's at MIT. The- there's a- there's a community of folks who- who just, roboticists, who love the elegance and beauty of, uh, movement.

    17. RM

      Right.

    18. LF

      And, uh, walking, bipedal robotics, is, um, beautiful but also exceptionally dangerous in the sense that like you're constantly falling essentially if you want to do elegant movement.

    19. RM

      Mm-hmm.

    20. LF

      And, uh, the discovery of that is, uh ... (sighs) I mean, it- it's such a good example of, um, that the discovery of a good solution sometimes requires a leap of faith and patience and all those kinds of things. I wonder what other spaces we're yet to discover those kinds of things in.

    21. RM

      Mm-hmm. Yeah.

    22. LF

      Yeah.

    23. RM

      Yeah, and another interesting direction is, um, learning, um, for- for, um, virtual creatures learning to walk. Uh, we did a study, uh, wh- uh, in- in- in simulation obviously that, um, you create those creatures, not just their controller but also their body, so you have cylinders, you have muscles, you have joints, uh, and sensors, uh, and you're creating-... creatures that look quite different. Some of them have multiple legs, some of them have no legs at all. Uh, and then the goal was to get them to move, to walk, to run. Uh, and what was interesting is, is that when you evolve the controller together with the body-

    24. LF

      Mm.

    25. RM

      ... you get movements that look natural because they are optimized for that physical setup.

    26. LF

      Wow.

    27. RM

      And, and these creatures, you start believing them-

    28. LF

      Yeah.

    29. RM

      ... that they're alive because they walk in a way that you would expect somebody-

    30. LF

      Yeah.

  13. 55:41–1:04:45

    Robots and a theory of mind

    1. RM

    2. LF

      Yeah, there's a, there's something subjective also about that, right? Uh, I've been thinking a lot about that, especially in, um, the human, uh, robot interaction context. You know, I, I mentioned Spot, the Boston Dynamics robot, there is something about human/robot communication, let's say, let's put it in another context, something about human and, uh, dog context, like-

    3. RM

      Hmm.

    4. LF

      ... like a living dog, where there's, uh, there's a, there's a dance of communication. First of all, the eyes, you both look at the same thing and you c- uh, dogs communicate with their eyes as well. Like if, if the, if you and a dog want to, uh, like, deal with a particular object, you will look at the person, uh, uh, the dog will look at you, and then look at the object, and look back at you, all those kinds of things. But there's also just an elegance of movement. I mean, there's the, of course the tail and all those kinds of mechanisms of communication, and it all seems natural and, uh, often joyful.

    5. RM

      Mm.

    6. LF

      And for robots to communicate that is, is really difficult how to figure that out because it's, it's almost seems impossible to hard code in. You can hard code it for a demo purpose whatso- you know, something like that. But it's essentially choreographed.

    7. RM

      Mm.

    8. LF

      Like if you watch some of the Boston Dynamics videos where they're dancing, all of that is choreographed by human beings. But to learn how to, with your movement, demonstrate a naturalness, a- an elegance, that's fascinating. Of course in the physical space that's very difficult to do, to learn the kind of, uh, scale that you're referring to, but the hope is that you could do that in simulation and then transfer it into the physical space.

    9. RM

      Mm. Mm.

    10. LF

      If you're able to model the robots sufficiently-

    11. RM

      Yeah.

    12. LF

      ... naturally.

    13. RM

      Yeah. And, and, uh, sometimes I think that that requires a theory of mind on the-

    14. LF

      Yes.

    15. RM

      ... on the, on the side of the robot, that, um, that they es- they understand what you're doing because they themselves are doing something similar. And, uh, that's a big question too. Uh, we talked about, uh, intelligence in general and, and the social aspect of, of intelligence, and I think that's what is required, that we humans understand other humans because we assume that they are similar to us. Um, we have one simulation we did a while ago, Ken Stanley, um, did that, um, two robots that were, uh, competing, in simulation like I said, uh, they were foraging for food to gain energy, and then when they were really strong they would bounce into the other robot and win if they were stronger. Uh, and, uh, we watched evolution discover more and more complex behaviors. They first went to the nearest food, and then they started to, um, plot a trajectory so they get more, get more, but then they started to take- pay attention to what the other robot was doing. And in the end there was a behavior where one of the robots, the more sophisticated one, uh, l- you know, sensed where the food pieces were and identified that, uh, two of them were a very far distance away, uh, and there was one more food nearby. So it faked (laughs), I, that's, now I'm using anthropomorphized-

    16. LF

      Yeah.

    17. RM

      ... uh, terms, but it made a move towards those other pieces in order for the other robot to actually go and get them because it knew that the other, the last remaining piece of food was close and the other robot would have to travel a long way, lose its energy and then, uh, lose the whole competition. So there was like emergence of something like a theory of mind, knowing what the other robot would do-

    18. LF

      Yeah.

    19. RM

      ... to guide it towards bad behavior in order to win. So we can get things like that happen, uh, in, in simulation as well.

    20. LF

      But that's a complete n- natural emergence of a theory of mind, but I, I feel like if you add a little bit of a place for a theory of mind to emerge, like easier, then, um, you can go really far. I mean, some of these things with evolution, you know, you add a little bit of design in there, it'll really help. And I think, I tend to think that a very simple theory of mind will go a really long way for cooperation between agents and certainly for human/robot interaction.

    21. RM

      Mm.

    22. LF

      Like it doesn't have to be super complicated. Um, I've gotten a chance to, in the autonomous vehicle space, to watch vehicles interact with pedestrians or pedestrians interacting with vehicles in general. I mean, you would think that there's a very complicated theory of mind thing going on, but I, I have a sense, it's not well understood yet, but I have a sense it's pretty dumb.

    23. RM

      (laughs) Mm-hmm.

    24. LF

      Like it's pretty simple.

    25. RM

      Mm.

    26. LF

      There's a social contract there where, uh, between humans, a human driver and a, and a human crossing the road where, um, the, the human crossing the road trusts that the human in the car is not going to murder them. And there's something about, again, back to that mortality thing, there's some dance of ethics and morality that's built in, that you're mapping your own morality onto the, uh, the person in the car. And even if they're driving at a speed where y- you think if they don't stop they're going to kill you, you trust that if you step in front of them, they're going to hit the brakes-

    27. RM

      Yeah.

    28. LF

      ... and there's that weird dance that we do that I think is a pretty simple model, but of course is very difficult (laughs) to, to introspect what it is.

    29. RM

      Mm.

    30. LF

      And autonomous robots in the human/robot interaction context have to...... have to build that. Current robots are much less than what you're describing. They're currently just afraid of everything. They're, they're more, they're not the kind that fall and discover how to run. They're more like, "Please don't touch anything. Don't hurt anything. Stay as far away from humans as possible. Treat humans as ballistic objects that you can't, uh, that you do, um, with a large spatial envelope, make sure you do not collide with." That's how like I mentioned Elon, uh, Musk thinks about autonomous vehicles. I tend to think autonomous vehicles need to have a beautiful dance between human and machine, where it's not just the collision avoidance problem, but a, a weird-

  14. 1:04:45–1:15:03

    Neuroevolution

    1. RM

    2. LF

      So, how does, um, evolution computation apply to the world of neural networks, 'cause I've seen quite a bit of work from you and others-

    3. RM

      Right.

    4. LF

      ... on the, in the world of neuroevolution. So, maybe first, can you say, what is this field?

    5. RM

      Yeah. Neuroevolution is a combination of, of, uh, neural networks and evolutionary computation in many different forms. But, uh, the early versions were simply using evolution as a way to construct a neural network instead of, say, uh, stochastic gradient descent or back propagation, um, because evolution can evolve these, uh, parameters, weight values in a neural network, just like any other string of numbers; you can, you can do that. Uh, and that's useful because in some cases you don't have those targets that you need to, um, back propagate from, and it might be an agent that's running a maze or a robot, uh, playing a game or something. Uh, you don't... Again, you don't know what the right answer is, so you don't have back prop, but this way, you can still evolve a neural net. And neural networks are really good at this task because they, um, they recognize patterns, and they, and, you know, generalize, interpolate between known situations. So, you want to have a neural network in such a task-

    6. LF

      Mm-hmm.

    7. RM

      ... even if you don't have the supervised targets. So, that's a reason, and that's a solution. Um, and also, more recently, now when we have all this deep learning literature, it turns out that we can use evolution to optimize many aspects of those designs. The deep learning architectures have become so complex that there's little hope for us little humans to understand their complexity and what actually makes a good design, uh, and now we can use evolution to give that design for you. And it might mean, um, optimizing hyperparameters, like the depth of layers and so on, uh, or the topology of the network, um, how many layers, how they're connected, but also other aspects, like what activation functions you use where in the network during the learning process, or what loss function you use. Uh, you could, uh, generate that. Um, even data augmentation. All the different aspects of the design of, um, deep learning experiments could be optimized that way. Um, so that's an inter- interaction between two mechanisms. But there's also, um, when we get more into cognitive science and the topics that we've been talking about, you could have learning mechanisms at two timescales. So, you do have evolution that gives you baby neural networks that then learn during their lifetime, and you have this interaction of two timescales. And I think that can potentially be really powerful. Um, now in biology, we are not born with all our faculties. We have to learn, we have a developmental period. In humans it's really long. Uh, and, uh, most animals have something. And, and probably the reason is that evolution and DNA is not detailed enough or plentiful enough to describe them. We can't describe how to set the brain up.
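The early form of neuroevolution described here, evolving the weights of a network as a flat string of numbers using only a fitness score and no back propagation, can be sketched as follows. This is a generic illustration on the XOR toy task, not code from Miikkulainen's systems; the topology (2-2-1, tanh units) and all parameters are arbitrary choices.

```python
import math
import random

# Evolve the 9 weights of a tiny fixed-topology network to solve XOR.
# No gradients anywhere: selection on fitness does all the work.

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # Genome layout: 2x2 hidden weights + 2 hidden biases,
    # then 2 output weights + 1 output bias.
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def fitness(w):
    # Negative squared error over the four cases: higher is better.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

random.seed(0)
pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(50)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    # Variation: Gaussian mutation of randomly chosen elite genomes.
    pop = elite + [[g + random.gauss(0, 0.3) for g in random.choice(elite)]
                   for _ in range(40)]
best = max(pop, key=fitness)
```

The same loop works for an agent running a maze or playing a game: only the fitness function changes, which is why no supervised targets are needed.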

    8. LF

      Mm-hmm.

    9. RM

      Um, but we can... Evolution can deci- uh, decide on a starting point, and then have a learning algorithm that will construct the final product. And this interaction of, you know, intelligent, um, (laughs) well, uh, evolution that has produced a, a good starting point for the loo- specific purpose of learning from it-

    10. LF

      Mm-hmm.

    11. RM

      ... uh, with the interaction of, uh, with the environment, that can be a really powerful mechanism for constructing brains and constructing behaviors.

    12. LF

      I like how you walked back from intelligence. So optimize starting point, maybe. Uh, (laughs) -

    13. RM

      Yeah.

    14. LF

      ... uh, okay. S- uh, there's a lot of fascinating things to ask here, and basically this dance between neural networks and evolutionary computation could go under the category of automated machine learning, where you're opt- optimizing, whether it's hyper-parameters or the topology, or hyper-parameters taken broadly. But the topology thing is really interesting. I mean, that's not really done that effectively throughout the history of machine learning. Usually, there's a fixed architecture, maybe there's a few components you're playing with, but to grow a neural network, essentially the way you grow an organism, is a really fascinating space. How, how hard is it, do you think-

    15. RM

      Mm-hmm.

    16. LF

      ... to grow a neural network? And maybe what kind of neural networks are more amenable to this kind of idea than others?

    17. RM

      Mm-hmm.

    18. LF

      I've seen quite a bit of work on recurrent neural networks. Is there some architectures that are friendlier than others? And is, is this just a fun small-scale set of experiments, or do you have hope that we can be able to grow, um, powerful neural networks?

    19. RM

      I, I think we can. Uh, and most of the work up to now is taking architectures that already exist, that humans have designed, and trying to optimize them further. And, and you can totally do that. Um, a few years ago, we did an experiment: we took a winner of the, uh, image captioning competition, um, and, um, the architecture, and just broke it into pieces and took the pieces, and, and that was our search space.

    20. LF

      Mm-hmm.

    21. RM

      See if you can do better. And we indeed could. 15% better performance by just-

    22. LF

      Wow.

    23. RM

      ... searching around the network design that humans had come up with-

    24. LF

      Mm-hmm.

    25. RM

      ... ...

    26. LF

      Mm-hmm.

    27. RM

      ... and others. Uh, so, but that's starting from a point, a point that humans have produced, but we could do something more general. It doesn't have to be that kind of network. The, the hard part is just... uh, there are a couple of hard, uh, challenges. One of them is to define the search space: what are your elements, uh, and how you put them together. Uh, and the, the space is just really, really big.

    28. LF

      Mm-hmm.

    29. RM

      Uh, so you have to somehow constrain it and have some hunch of what will work, um, because otherwise everything is possible. Um, and another challenge is that in order to evaluate how good your design is, you have to train it. I mean, you have to actually try it out, and that's currently very expensive, right? I mean, deep learning networks may take days to train. Well, imagine having a population of 100 and having to run it for 100 generations; it's not yet quite feasible computationally. Um, it will be, (laughs) but there's also a large carbon footprint and all that. I mean, we are using a lot of computation for doing that. So intelligent methods, um, and by intelligent, I mean, um, we have to do some science in order-

    30. LF

      Mm-hmm.
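The loop Miikkulainen describes, maintain a population of candidate architectures, evaluate each one's fitness, keep the best, and mutate them into the next generation, can be sketched in a few lines. This is a minimal toy, not his group's actual system: the genome is just a list of layer widths, and the expensive "train the network and measure validation accuracy" step is replaced by a made-up fitness function so the sketch runs instantly.

```python
import random

random.seed(0)

# Toy stand-in for the expensive step: in real neuroevolution, scoring a
# genome means actually training the network it encodes, which is why a
# population of 100 run for 100 generations is so costly.
def fitness(genome):
    # Pretend the ideal architecture has three layers of width 64.
    return -sum((w - 64) ** 2 for w in genome) - 10 * abs(len(genome) - 3)

def mutate(genome):
    # The mutation operators define the search space: what the elements
    # are (layers with widths) and how they can be put together.
    g = list(genome)
    op = random.random()
    if op < 0.3 and len(g) < 6:           # add a layer
        g.insert(random.randrange(len(g) + 1), random.choice([16, 32, 64, 128]))
    elif op < 0.5 and len(g) > 1:         # remove a layer
        g.pop(random.randrange(len(g)))
    else:                                 # perturb one layer's width
        i = random.randrange(len(g))
        g[i] = max(8, g[i] + random.choice([-16, 16]))
    return g

def evolve(pop_size=20, generations=30):
    population = [[random.choice([16, 32, 128])] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]  # truncation selection: keep the best half
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

best = evolve()
```

Starting from human-designed pieces, as in the image-captioning experiment he mentions, just means seeding the initial population with fragments of an existing architecture instead of random genomes.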

  15. 1:15:03 - 1:18:28

    Tesla Autopilot

    1. LF

      First of all, it's fascinating to think about this context in terms of evolving architectures. So I've studied Tesla Autopilot for a long time; it's one particular implementation of an AI system that's operating in the real world. I find it fascinating because of the scale at which it's used out in the real world. And I'm not sure how familiar you are with that system, but, you know, Andrej Karpathy leads that team on the machine learning side, and there's a multi-task, multi-headed network, where there's a shared core and a bunch of different heads that are trained on particular tasks. Is there some lesson from evolutionary computation or neuroevolution that could be applied to this kind of multi-headed beast that's operating in the real world?

    2. RM

      Yes. It's a very good problem for neuroevolution, uh, and the reason is that when you have multiple tasks, they support each other. Uh, so let's say you're learning to classify X-ray images into different pathologies. So one task is to classify this disease, another one that disease, another one a third. And when you're learning from one disease, that forces certain kinds of internal representations and embeddings, um, and they can serve as a helpful starting point for the other tasks.

    3. LF

      Mm-hmm.

    4. RM

      So you are combining the wisdom of multiple tasks into these representations, and it turns out that you can do better in each of these tasks when you're simultaneously learning other tasks-

    5. LF

      Yeah.

    6. RM

      ... than you would by one task alone.
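The architecture being described, one shared trunk whose embedding feeds several task-specific heads, can be sketched as a forward pass. This is a minimal illustration with made-up dimensions, not the Autopilot network: the point is only that all heads read the same internal representation, so training on any one task shapes features the others reuse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8-dim input, 4-dim shared embedding,
# head A predicts over 3 classes, head B over 2 classes.
W_shared = rng.standard_normal((8, 4))
W_head_a = rng.standard_normal((4, 3))
W_head_b = rng.standard_normal((4, 2))

def forward(x):
    # One shared internal representation (the "embedding")...
    h = np.tanh(x @ W_shared)
    # ...consumed by several task-specific heads. Gradients from task A
    # would update W_shared too, shaping features that task B can reuse.
    return h @ W_head_a, h @ W_head_b

x = rng.standard_normal((5, 8))      # batch of 5 examples
out_a, out_b = forward(x)            # shapes (5, 3) and (5, 2)
```

In the neuroevolution framing, the search is over exactly this structure: where the trunk splits, how deep the shared part is, and how much each head shares.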

    7. LF

      Which is a fascinating idea in itself, yeah.

    8. RM

      Yes. And people do that all the time. I mean, you use knowledge of domains that you know in new domains, uh, and certainly neural networks can do that. Where neuroevolution comes in is: what's the best way to combine these tasks? There are architectural designs that allow you to decide where and how the embeddings, the internal representations, are combined, and how much you combine them. Uh, and there's quite a bit of research on that; on my team, Elliot Meyerson has worked on that in particular. Like, what is a good internal representation that supports multiple tasks? Uh, and we're getting to understand how that's constructed and what's in it, so that it is in a space that supports multiple different heads, like you said. Um, and that, I think, is fundamentally how biological intelligence works as well. Uh, you don't build a representation just for one task. You try to build something that's general, not only so that you can do better in one task or multiple tasks, but also future tasks and future challenges. So you learn the structure of the world, um, and that helps you in all kinds of future challenges.

    9. LF

      And so you're trying to design a representation that will support an arbitrary set of tasks in a particular sort of class of problem?

    10. RM

      Yeah. And- and also it turns out, and that's, again, a surprise that Elliot found, was that those tasks don't have to be very related. You know, you can learn to do better vision by learning language or better language by learning about DNA structure.

    11. LF

      Mm-hmm.

    12. RM

      You know, somehow the- the world-

    13. LF

      What? (laughs) Yeah, it rhymes. (laughs)

    14. RM

      (laughs)

    15. LF

      The world rhymes even if-

    16. RM

      Yes.

    17. LF

      ... it's very, uh, (laughs) very disparate fields.

  16. 1:18:28 - 1:24:09

    Language and vision

    1. LF

      Um, I mean, on that small topic, let me ask you, 'cause on the computational neuroscience side (sighs) you've worked on both language and vision. What's the connection between the two? Maybe there's a bunch of ways to ask this, but what's more difficult to build, from an engineering perspective and an evolutionary perspective: the human language system or the human vision system? Or the equivalent in the AI space, language and vision? Or is the best answer the multi-task idea that you're speaking to-

    2. RM

      Mm-hmm.

    3. LF

      ... that they- they need to be deeply integrated?

    4. RM

      Yeah. Yeah. Absolutely the latter. (laughs) Uh, learning both at the same time, I think, is a fascinating direction for the future. So we have datasets where there is a visual component as well as verbal descriptions, for instance, and that way you can learn a deeper representation, a more useful representation for both. Uh, but it's still an interesting question which one is easier. I mean, recognizing objects or even understanding sentences, that's relatively possible... but where the challenges are is understanding the world.

    5. LF

      Mm-hmm.

    6. RM

      Like the visual world in 3D: what are the objects doing, predicting what will happen, uh, the relationships. That's what makes vision difficult. And language, obviously, it's what is being said, what the meaning is. And the meaning doesn't stop at who did what to whom. Um, there are goals and plans and themes. And you eventually have to understand the entire human society (laughs) and history in order to fully understand a sentence.

    7. LF

      (laughs)

    8. RM

      There are plenty of examples of those kinds of short sentences where you have to bring in all the world knowledge to understand them, uh, and that's the big challenge. Now, we are far from that, but even just bringing in the visual world together with the sentence will already give you a lot deeper understanding of what's happening.

    9. LF

      Yeah.

    10. RM

      And I think that's where we're going very soon. I mean, we've had ImageNet for a long time, and now we have all these text collections.

    11. LF

      Mm-hmm.

    12. RM

      But having both together, uh, and then learning a, a semantic understanding of what is happening-

    13. LF

      Mm-hmm.

    14. RM

      ... I think that will be the next step in the next few years.
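The joint image-text learning being predicted here is often realized by projecting both modalities into one shared embedding space, so that an image and its caption land close together. A minimal sketch with made-up dimensions and random (untrained) projections, hypothetical feature extractors stand in for the vision and language models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pre-extracted features for 4 images and their 4 captions.
img_feats = rng.standard_normal((4, 16))   # 16-dim visual features
txt_feats = rng.standard_normal((4, 12))   # 12-dim text features

# Learned in a real system; random here. Both map into one 8-dim space.
W_img = rng.standard_normal((16, 8))
W_txt = rng.standard_normal((12, 8))

def embed(feats, W):
    z = feats @ W
    # Unit-normalize rows so dot products become cosine similarities.
    return z / np.linalg.norm(z, axis=1, keepdims=True)

z_img = embed(img_feats, W_img)
z_txt = embed(txt_feats, W_txt)

# 4x4 matrix of cosine similarities between every image-caption pair.
# Training would push the diagonal (matching pairs) up and the rest down.
similarity = z_img @ z_txt.T
```

Contrastively trained systems along these lines (e.g. CLIP-style models) are one way the "having both together" step has since been pursued.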

    15. LF

      Yeah, you're starting to see that with all the work with transformers, where the AI community is starting to dip its toe into this idea of having language models that are now doing stuff with images-

    16. RM

      Yeah.

    17. LF

      ... with vision, and then connecting the two. I mean, right now it's like these little explorations; we're literally dipping the toe in. But maybe at some point we'll just dive into the pool and it'll all be seen as the same thing. I do still wonder what's more fundamental, whether we don't think about vision correctly.

    18. RM

      Mm.

    19. LF

      Maybe because we're humans, and we see things as beautiful and so on, and because we have cameras that take in pixels as a 2D image, we don't sufficiently think about vision as language.

    20. RM

      Hm.

    21. LF

      You know, maybe Chomsky is right all along.

    22. RM

      (laughs)

    23. LF

      That vision is fundamental to, uh, sorry, that language is fundamental to everything. To even cognition, to even consciousness. Like the base layer is all language; not necessarily English, but some weird, abstract, linguistic representation.

    24. RM

      Yeah.

    25. LF

      That-

    26. RM

      Well, earlier we talked about the social structures, and that may be what's underlying language.

    27. LF

      (laughs)

    28. RM

      That's the more fundamental part and then-

    29. LF

      Yeah.

    30. RM

      ... language is being added on top of that.

Episode duration: 1:56:16

Transcript of episode CY_LEa9xQtg
