Lex Fridman Podcast

Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI | Lex Fridman Podcast #61

Lex Fridman and Melanie Mitchell discuss concepts, analogies, common sense, and the future of AI.

Lex Fridman (host) · Melanie Mitchell (guest)
Dec 28, 2019 · 1h 52m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–15:00

    1. LF

      The following is a conversation with Melanie Mitchell. She's a professor of computer science at Portland State University, and an external professor at Santa Fe Institute. She has worked on and written about artificial intelligence from fascinating perspectives, including adaptive complex systems, genetic algorithms, and the copycat cognitive architecture, which places the process of analogy-making at the core of human cognition. From her doctoral work, with her advisors Douglas Hofstadter and John Holland, to today, she has contributed a lot of important ideas to the field of AI, including her recent book, simply called Artificial Intelligence: A Guide for Thinking Humans. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter, @LexFridman, spelled F-R-I-D-M-A-N. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode, and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. I provide timestamps for the start of the conversation, but it helps if you listen to the ad and support this podcast by trying out the product or service being advertised. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square, and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST robotics and LEGO competitions. 
They educate and inspire hundreds of thousands of students in over 110 countries, and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10 and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Melanie Mitchell. The name of your new book is Artificial Intelligence, subtitle, A Guide for Thinking Humans. The name of this podcast is Artificial Intelligence. So let me take a step back and ask the old Shakespeare question about roses and, uh, what do you think of the term "artificial intelligence" for our big and complicated and interesting field?

    2. MM

      I'm not crazy about the term. (laughs) I think it has a few problems, um, because it, it means so many different things to different people. And intelligence is one of those words that isn't very clearly defined either. There's so many different kinds of intelligence, degrees of intelligence, approaches to intelligence. John McCarthy was the one who came up with the term "artificial intelligence." And what, from what I read, he called it that to differentiate it from cybernetics, which was another related movement at the time. And he later regretted calling it artificial intelligence. Uh, Herbert Simon was pushing for calling it complex information processing. (laughs) Which got nixed, but you know, probably is equally vague, I guess.

    3. LF

      Is it the intelligence or the artificial in terms of words that's-

    4. MM

      I think it-

    5. LF

      ... most problematic, would you say?

    6. MM

      Yeah, I think it's a little of both. But you know, it has some good sides because I personally was attracted to the field because I was interested in phenom- phenomenon of intelligence.

    7. LF

      Mm-hmm.

    8. MM

      And if it was called complex information processing, maybe I'd be doing something wholly different now.

    9. LF

      What do you think of, I've heard the term used cognitive systems, for example. So using cognitive...

    10. MM

      Yeah, I mean, cognitive has certain associations with it and people like to separate things like cognition and perception, which I don't actually think are separate, but often people talk about cognition as being different from sort of other aspects of, of intelligence. Uh, it's sort of higher level

    11. LF

      So to-

    12. LF

      So to you, cognition is this broad, beautiful mess of things that encompasses the whole thing. Memory...

    13. MM

      Yeah.

    14. LF

      ... perception...

    15. MM

      I, I think it's hard to draw lines like that.

    16. LF

      Right.

    17. MM

      When I was coming out of grad school in the nine- in 1990, which is when I graduated, that was during one of the AI winters. And I was advised to not put AI, artificial intelligence, on my CV, but instead call it intelligent systems.

    18. LF

      Mm-hmm.

    19. MM

      So that was kind of an, an, a euphemism, (laughs) I guess.

    20. LF

      What about, to stick briefly on, uh, on terms and words, the idea of artificial general intelligence or, uh, or like Yann LeCun prefers human-level intelligence, sort of starting to talk about ideas that, that achieve higher and higher levels of intelligence and somehow artificial intelligence seems to be a term used more for the narrow, very specific applications of AI and sort of... The, there's, what set of terms appeal to you to describe the thing that perhaps we strive to create?

    21. MM

      People have been struggling with this for the whole history of the field. And defining exactly what it is that we're talking about. You know, John Searle had this distinction between strong AI and weak AI.

    22. LF

      Mm-hmm.

    23. MM

      And weak AI could be general AI but his, his idea was stro- strong AI was the view that a machine is actually thinking.... that, as opposed to simulating thinking or carrying out intelligent processes that we would call intelligent.

    24. LF

      At a high level, if you look at the founding of the field of McCarthy and S- Searle and so on, are we closer to having a better sense of that line between narrow, weak AI and strong AI?

    25. MM

      Yes, I think we're closer to having a better idea of what that line is. Early on, for example, a lot of people thought that playing chess would be... You couldn't play chess if you didn't have sort of general human level intelligence. And of course once computers were able to play chess better than humans, that revised that view, and people said, "Okay, well maybe now we have to revise what we think of intelligence as," or-

    26. LF

      Right.

    27. MM

      Uh, and so that's kind of been a theme throughout the history of the field, is that once a machine can do some task, we then have to look back and say, "Oh, well, that changes my understanding of what intelligence is, because I don't think that machine is intelligent. At least that's not what I want to call intelligence."

    28. LF

      Do you think that line moves forever? Or will we eventually really feel as a civilization like we crossed a line, if it's possible?

    29. MM

      It's hard to predict, but I don't see any reason why we couldn't, in principle, create something that we would consider intelligent. I don't know how we will know for sure. Maybe our own view of what intelligence is will be refined more and more until we finally figure out what we mean when we talk about it, but I- I think eventually we will create machines in a sense that have intelligence. They may not be the kinds of machines we have now. And one of the things that that's going to produce is- is making us sort of understand our own machine-like qualities, that we, in a sense, are mechanical in the sense that like cells. Cells are kind of mechanical. They pro- they have algorithms they process information by, and somehow out of this mass of cells we get this emergent property that we call intelligence. But underlying it is, uh, really just cellular processing and- and lots and lots and lots of it.

    30. LF

      Do you think we'll be able to... Do you think it's possible to create intelligence without understanding our own mind? You said sort of in that process we'll understand more and more, but do you think it's possible to sort of create without really fully understanding from a mechanistic perspective, sort of from a functional perspective, how our mysterious mind works?

  2. 15:00–30:00

    1. MM

      aspects of what we would call intelligence. You know, they have memory. They do process information. They have goals. They accomplish their goals, et cetera. And, um, to me that... The question of what is this thing we're, we're talking about here was, uh, really fascinating to me. And, and exploring it using computers seemed to be a good way to approach the question.

    2. LF

      So do you think, kind of, of intelligence... Do you think of the... our universe as a kinda hierarchy of complex systems and then intelligence is just the property of any... You, you can look at any level and every level has some a- aspect of intelligence? So we're just, like, one little speck in that giant hierarchy of complex systems?

    3. MM

      I don't know if I would say any system like that has intelligence. But I guess, I... What I wanna... I don't have a good enough definition of intelligence to say that.

    4. LF

      So let me, let me do-

    5. MM

      (laughs)

    6. LF

      ... sort of, uh, multiple choice, I guess.

    7. MM

      (laughs)

    8. LF

      So, uh, so you said ant colonies. So are ant colonies intelligent? Are the bacteria in our body int- intelligent? And then l- going to the ph- the physics world, molecules and the behavior at the quantum level of, of electrons and so on, is... Are those kinds of systems, do they possess intelligence? Like where's, where's the line that-

    9. MM

      Yeah.

    10. LF

      ... feels compelling to you?

    11. MM

      I don't know. I mean, I think intelligence is a continuum, and I think that the ability to, in some sense, have intention, have a goal, have, have a... Some kind of self-awareness is part of it. So I'm not sure if... You know, it's hard to know where to draw that line. I think that's kind of a mystery. But I wouldn't say that, say, the, you know... This, the planets orbiting the sun are... Is an intelligent system. I mean, I would find th- that maybe not the right term to describe that. And this is... You know, there's all this debate in the field of, like, what's, what's the right way to define intelligence? What's the right way to model intelligence? Should we think about computation? Should we think about dynamics and, um, should we think about, you know, free energy and all of that stuff? And I think that it's, it's a fantastic time to be in the field because there's so many questions and so much we don't understand. There's so much work to do.

    12. LF

      So are we... Are we the most special kind of intelligence in this kind of... You said there's, uh, a bunch of different elements and characteristics of intelligence systems and colonies. Uh, i- our... Is human intelligence, the thing in our brain, is that the most interesting kind of intelligence in this continuum?

    13. MM

      Well, it's interesting to us 'cause, 'cause it, it is us. (laughs)

    14. LF

      So-

    15. MM

      I mean, interesting to me, yes, and... Because I'm part of, you know, human...

    16. LF

      But to understanding the fundamentals of intelligence, what I'm getting at-

    17. MM

      Yeah, I mean, there-

    18. LF

      ... do we... Is studying the human... Is sort of... I- if everything we've talked about, what you talk about in your book, what, uh... Just the AI field, this notion, yes, it's hard to define, but it's usually talking about something that's very akin to human intelligence.

    19. MM

      Yeah. To me, it is the most interesting because it's the most complex, I think. It's the most self-aware.... it is the only system, at least that- that I know of, that reflects on its own intelligence.

    20. LF

      And you talk about the history of AI, and- and us, in terms of, uh, creating artificial intelligence, being terrible at predicting the future with AI or with tech in general. So, why do you think we're so bad at predicting the future? Are we hopelessly bad? So, no matter what, whether it's this decade or the next few decades, every time we make a prediction, there's just no way of doing it well? Or, as the field matures, we'll be better and better at it?

    21. MM

      I believe as the field matures, we will be better. And I think the reason that we've had so much trouble is that we have so little understanding of our own intelligence.

    22. LF

      Hmm.

    23. MM

      So, there's the famous story about Marvin Minsky assigning computer vision as a summer project-

    24. LF

      Yeah.

    25. MM

      ... to his undergrad students. And I believe that's actually a true story.

    26. LF

      Yeah, no, there's a-

    27. MM

      (laughs)

    28. LF

      ... there's- there's a write-up on it w- that everyone should read. It's like a pr- I think it's like a proposal, uh, that describes everything that sh- should-

    29. MM

      Yeah.

    30. LF

      ... be done in that project. And it's hilarious because it, uh, I mean, you could explain it, but from my sort of recollection, it describes basically all the fundamental problems with computer vision, many of which that still haven't been solved.

  3. 30:00–45:00

    1. LF

      the cognitive folks, the Gary Marcus camp, the Yann camp?

    2. MM

      (laughs)

    3. LF

      There's unsupervised and there's self-supervised. There's the supervised.

    4. MM

      Yeah.

    5. LF

      And then there's the engineers who are actually building systems.

    6. MM

      Right.

    7. LF

      You have sort of the Andrej Karpathy at Tesla building actual s- s- you know, it's not philosophy, it's real-

    8. MM

      Right.

    9. LF

      ... like, systems that operate in the real world. What... yeah.

    10. MM

      Yeah.

    11. LF

      What do you take away from all- all this beautiful variety?

    12. MM

      I mean- I mean, I- I don't know if... You know, these- these different views are not necessarily mutually exclusive, and I- I think people like Yann LeCun agree with the developmental psychology-

    13. LF

      Right.

    14. MM

      ... uh, causality, intuitive physics, et cetera. Uh, but he still thinks that it's learning, like, end-to-end learning is the way to go.

    15. LF

      Will take us perhaps all the way.

    16. MM

      Yeah, and that we don't need... there's no sort of innate...... stuff that has to get built in. This is, you know, it's because n- it's a hard problem. Uh, I, I personally, you know, I'm very sympathetic to the cog- cognitive science side 'cause that's kind of where I came in to the field. I've become more and more sort of an embodiment (laughs) adherent saying that, you know, without having a body, it's gonna be very hard to learn what we need to learn about the world.

    17. LF

      That's definitely something I'd l- I'd love to talk about i- in a little bit, to step into the cognitive world then, if you don't mind, 'cause you've done so many interesting things. If w- if we look to CopyCat, taking a couple of decades step back, you, Douglas Hofstadter, and others have created and developed CopyCat more than 30 years ago.

    18. MM

      (laughs) That's painful to hear. (laughs)

    19. LF

      (laughs) So what is it? What is, w- well, what is CopyCat?

    20. MM

      It's a program that makes analogies in an idealized domain, idealized world of letter strings. So as you say, 30 years ago, wow.

    21. LF

      Yeah.

    22. MM

      Uh, so I started working on it when I started grad school in, um, 1984. Wow. (laughs)

    23. LF

      (laughs)

    24. MM

      Dates me. Um, and it's based on Doug Hofstadter's ideas that, about, um, that analogy is really a core aspect of thinking. Uh, I remember he, he has a really nice quote in, in, in the book by, by himself and Emmanuel Sander called Surfaces and Essences. I don't know if you've seen that book, but it's, it's about analogy.

    25. LF

      Mm-hmm.

    26. MM

      Uh, he says, "Without concepts, there can be no thought, and without analogies, there can be no concepts."

    27. LF

      Mm-hmm.

    28. MM

      So the view is that analogy is not just this kind of reasoning technique where we go, you know, uh, shoe is to foot as glove is to what? You know, these kinds of things that we have on IQ tests or whatever. Uh, that, but that it's much deeper, it's much more per- uh, pervasive in everything we do, in every, our language, our, our thinking, our perception. So we, so he had a view that was a very active perception idea. So the idea was that, um, instead of having kind of wha- a passive, uh, network in which you ha- have input that's being processed through these feedforward layers and then there's an output at the end, that perception is really a, a dynamic process, you know, where, like, our eyes are moving around and they're getting information, and that information is feeding back to what we look at next, influences what we look at next and how we look at it. And so CopyCat was trying to do that, kind of simulate that kind of idea where you have these, um, agents. It's kind of an agent-based system, and you have these agents that are picking things to look at and deciding whether they were interesting or not-

    29. LF

      Mm-hmm.

    30. MM

      ... whether they should be looked at more. And, and that would influence other agents.
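The letter-string analogy domain Mitchell describes can be illustrated with a toy sketch. This is not the Copycat algorithm itself, which is a stochastic, agent-based architecture with active perception; the function below is a naive deterministic stand-in, just to show the kind of puzzle Copycat works on ("abc" changes to "abd"; how does "ijk" change "in the same way"?):

```python
# Toy illustration of Copycat's letter-string analogy domain.
# NOT the real Copycat architecture -- just the shape of the problem.

def solve_analogy(src, dst, target):
    """Find the position that changed from src to dst and apply the
    same letter-successor change to the target string."""
    assert len(src) == len(dst) == len(target)
    result = list(target)
    for i, (a, b) in enumerate(zip(src, dst)):
        if a != b:
            shift = ord(b) - ord(a)          # e.g. c -> d is +1
            result[i] = chr(ord(target[i]) + shift)
    return "".join(result)

print(solve_analogy("abc", "abd", "ijk"))  # -> "ijl"
```

A rule-based sketch like this breaks down immediately on problems like "abc" to "abd", "xyz" to what? — which is exactly where Copycat's flexible, perception-driven approach was meant to shine.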

  4. 45:00–1:00:00

    1. MM

      or something, I don't know, uh, and they're connected via trillions of synapses, and there's all this chemical processing going on. There, there's just a lot of capacity for (laughs) stuff. And their information's encoded in different ways in the brain. It's encoded in, uh, ch- chemical interactions, it's encoded in elec- electric, like, firing and firing rates. And, and nobody really knows how it's encoded, but it just seems like there's a huge amount of capacity. So I think it's, it's huge. It's just enormous, and it's amazing how much stuff we know.

    2. LF

      Yeah. And, and if we're-

    3. MM

      (laughs)

    4. LF

      But we know, and not just know, like, facts, but it's all integrated into this thing that we can make analogies with.

    5. MM

      Yes.

    6. LF

      There's a dream of semantic web. There's, there's a lot of dreams from expert systems of building giant knowledge bases. W- do you see a hope for these kinds of approaches of building, of converting Wikipedia into something that could be used in analogy-making?

    7. MM

      Uh, sure. And I think people have, have made some progress along those lines. I mean, people have been working on this for a long time. But the problem is, uh, and this, I think, was, is, is the problem of common sense. Like, people have been trying to get these common sense networks. Here at MIT, there's this ConceptNet project, right? Uh, but the problem is that, as I said, most of the knowledge that we have is in- invisible to us. It's not in Wikipedia. (laughs) It's very basic things about, you know, intuitive physics, intuitive psychology, intuitive metaphysics, all that stuff.

    8. LF

      If you were to create a website that described intuitive physics, intuitive psychology, would it be bigger or smaller than Wikipedia? What do you think?

    9. MM

      I guess describe to whom? Uh... (laughs)

    10. LF

      (laughs)

    11. MM

      I'm sorry, but it, it-

    12. LF

      That's, no, that's ver- really good. I, I think-

    13. MM

      You know?

    14. LF

      ... it's exactly right, yeah.

    15. MM

      That's a hard question because, you know, m- how do you represent that knowledge is the question, right? I can certainly write down F = ma and all of Newton's laws and a lot of physics can be deduced from that. Uh, uh, uh, but that's probably not the best representation of that knowledge for, for doing, uh, the kinds of reasoning we want a machine to do. So, so I don't know. It's, it's, it's impossible to say now. (laughs)
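Mitchell's point that "a lot of physics can be deduced from" F = ma, but that the compact form isn't the useful representation, can be made concrete: even for a falling ball you need extra machinery (integration) to turn the law into behavior. A minimal, assumed-for-illustration sketch:

```python
# F = ma compactly encodes the physics, but deducing a trajectory
# from it still takes work: here, simple Euler integration of a
# ball dropped from rest (toy parameters, illustration only).

def simulate_fall(g=9.81, dt=0.01, t_end=1.0):
    """Integrate a = F/m with F = -m*g (mass cancels);
    return the displacement after t_end seconds."""
    v, x = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        v += -g * dt        # update velocity from acceleration
        x += v * dt         # update position from velocity
    return x

print(simulate_fall())  # close to the closed form -g*t^2/2 = -4.905 m
```

The closed-form answer falls out of the same law, but a machine reasoning about everyday scenes can rarely afford to set up and solve equations like this, which is part of why "just write down the physics" hasn't delivered common sense.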

    16. LF

      (laughs) I-

    17. MM

      And people, you know, the projects, like there's a famous, the famous Cyc project-

    18. LF

      Cyc project, yeah.

    19. MM

      ... right, that D- Doug- Douglas Lenat did that was trying-

    20. LF

      That thing still going?

    21. MM

      I think it's still going, and it's-

    22. LF

      Yeah.

    23. MM

      The, the idea was to try and encode all of common sense knowledge, including all this invisible knowledge, in some kind of logical representation. And it just never, I think, could do any of the things that he was hoping it could do, because that's just the wrong approach.

    24. LF

      Of course, that's what they always say, you know? And, and, and then the history books will say, "Well, the Cyc project finally found a breakthrough in, uh, 2058 or something." Uh, it, it, you know, we're, so much progress has been made in just a few decades that, uh-

    25. MM

      Yeah, it could be.

    26. LF

      ... who knows what the next breakthroughs will be.

    27. MM

      It could be.

    28. LF

      It's certainly a compelling notion, what the Cyc project stands for.

    29. MM

      I think Lenat was one of the earliest people to say common sense is what we need.

    30. LF

      Important.

  5. 1:00:00–1:15:00

    1. LF

      uh, uh, to me at least. It's easy to cr- it's, uh, it's easy to criticize, and well, look, like exactly what you're saying, mental models sort of almost from a psych- put a psychology hat on, say, look, these networks are clearly not able to achieve what we humans do with forming mental models, the analogy making, so on, but that doesn't mean that they fundamentally cannot do that. Like you can't... it's very difficult to say that, I mean, at least to me. Do you have a notion that the learning approaches really... I mean, they're going to... not- not only are they limited today, but they will forever be limited in being able to construct, uh, such mental models?

    2. MM

      I think the idea of the dynamic perception is key here, the idea that moving your eyes around and getting feedback. And that's something that... you know, there's been some models like that. There's certainly recurrent neural networks that operate over several time steps. And... but the problem is that it... the- the actual... the recurrence is...... you know, it... Basically, the, the feedback is to, at the next time step is the entire, um, hidden state-

    3. LF

      Yes.

    4. MM

      ... of the network, which, which is, uh... and it turns out that it, that's, that doesn't work very well.
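The recurrence being described — the entire hidden state fed back in as the "feedback" at the next time step — is the vanilla RNN update. A minimal sketch with random, untrained toy weights (sizes and inputs are made up for illustration):

```python
# Vanilla RNN step: the new hidden state mixes the current input
# with the FULL previous hidden state -- the feedback path
# discussed above. Toy weights; nothing here is trained.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W_xh = rng.normal(size=(n_hid, n_in)) * 0.1   # input -> hidden
W_hh = rng.normal(size=(n_hid, n_hid)) * 0.1  # hidden -> hidden (recurrence)

def rnn_step(x, h):
    """One time step of h_t = tanh(W_xh x_t + W_hh h_{t-1})."""
    return np.tanh(W_xh @ x + W_hh @ h)

h = np.zeros(n_hid)
for t in range(5):              # unroll over a short dummy sequence
    x = np.ones(n_in)           # placeholder input at each step
    h = rnn_step(x, h)
print(h.shape)  # (4,)
```

The criticism in the conversation is that this undifferentiated feedback of the whole state is a weak stand-in for the selective, task-driven feedback of active perception.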

    5. LF

      But see, he- the, the thing I'm saying is, mathematically speaking, it has the information in that recurrence to capture everything. It just doesn't seem to work.

    6. MM

      Yeah.

    7. LF

      (laughs)

    8. MM

      Right.

    9. LF

      So, uh, like, the, my, uh, you know, it's like, um, it's the same Turing machine question, right? Uh, y- yeah, maybe th- theoretically, it, uh, computers an- anything that's Turing, uh, uh, a universal Turing machine can, can be intelligent, but practically, the architecture might be v- have, be very specific kind of architecture to be able to create it. So just, uh, I, I guess, to sort of ask almost the same question again is how big of a role do you think deep learning needs, will play or needs to play in this, in perception?

    10. MM

      I think that deep learning as it's currently, um, a- as it currently exists, you know, will play s- that kind of thing will play some role. And, uh, but I think that there is a lot more going on in perception. But who knows? You know, the, the definition of deep learning, I mean, it-

    11. LF

      Well, that's another problem.

    12. MM

      ... it's pretty broad. It's kind of an umbrella for a lot

    13. LF

      So what, what- ... different things. ... I mean is purely sort of neural networks, you, that-

    14. MM

      Yeah, and, and feedforward neural networks.

    15. LF

      Essentially.

    16. MM

      Yeah.

    17. LF

      Or it, there could be recurrence. But-

    18. MM

      Yeah.

    19. LF

      Sometimes it feels like for... So I talked to Gary Marcus. It feels like the criticism of deep learning is kind of like us birds criticizing airplanes for not flying well or, or that they're not really flying. Do you think deep learning, do you think it could go all the way, like Yann LeCun thinks? Do you think that, uh, yeah, the-

    20. MM

      (sighs)

    21. LF

      ... the brute force learning approach can go all the way?

    22. MM

      I don't think so, no. I mean, I, I think it's an open question, but, uh, I, I, I tend to be on the innateness side that there has... that there's some things that we, we've been evolved to-

    23. LF

      Hmm.

    24. MM

      ... uh, be able to learn. And that learning just can't happen without them. So, so one example, here's, here's an example I had in the book that, that I think is, is useful, to me, at least, in thinking about this. So, um, th- this has to do with the DeepMind's Atari game-playing program.

    25. LF

      Mm-hmm.

    26. MM

      Okay? And it learned to play these Atari video games just by, um, getting input from the pixels of the screen, and, um, it learned to play, uh, the game Breakout, um, thousand percent better than humans, okay?

    27. LF

      Mm-hmm.

    28. MM

      That was one of their results, and it was great. And, and it learned this thing where it tunneled through the side of the, uh-

    29. LF

      Mm-hmm.

    30. MM

      ... of the bricks in the Breakout game, and the ball could bounce off the ceiling and then just wipe out bricks.
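DeepMind's Atari player was a deep Q-network learning from raw pixels; the learning rule underneath it is the Q-learning update. A tiny tabular sketch (toy states and action names, not the pixel-based network itself):

```python
# Core Q-learning update behind DeepMind's Atari results, in its
# simplest tabular form (the real system approximated Q with a
# deep network over screen pixels; states/actions here are toys).

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Move Q(s, a) toward the bootstrapped target
    r + gamma * max_a' Q(s_next, a')."""
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])

# Two toy states, two actions.
Q = {0: {"left": 0.0, "right": 0.0},
     1: {"left": 0.0, "right": 0.0}}
q_update(Q, s=0, a="right", r=1.0, s_next=1)
print(Q[0]["right"])  # 0.1
```

The tunneling strategy in Breakout emerged purely from updates like this maximizing score — which is why, as discussed next, the system has no "concept" of a tunnel, a ball, or a wall.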

  6. 1:15:00–1:30:00

    1. LF

      Almost like an active learning space where it's constantly taking edge cases and pulling back in. There's this da- data pipeline. Another aspect that is really i- important that people are studying now is called multitask learning, which is sort of breaking a- apart this problem, whatever the problem is, in this case driving, into dozens or hundreds of sp- little problems that you can turn into learning problems. So there's this giant pipeline th- you know, it's kind of interesting. I've, I've been skeptical from the very beginning, but become less and less skeptical over time, how much of driving can be learned? I still think it's much farther than, than the, the CEO of the- that particular company thinks it will be. But it, it, uh, is just constantly surprising that through good engineering and data collection and a- active selection of data how you can attack that long tail.
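The "multitask learning" mentioned here — one shared model trained on many small sub-problems at once — usually comes down to summing per-task losses into a single training signal. A minimal sketch with made-up task names and toy numbers:

```python
# Multitask learning in miniature: a shared feature vector feeds
# several task-specific heads, and training minimizes the SUM of
# the per-task losses. Task names and values are illustrative only.
import numpy as np

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

features = np.array([0.5, -0.2, 0.1])          # shared backbone output
heads = {"lanes": np.array([1.0, 0.0, 0.0]),   # toy linear heads
         "signs": np.array([0.0, 1.0, 0.0]),
         "depth": np.array([0.0, 0.0, 1.0])}
targets = {"lanes": 0.4, "signs": -0.1, "depth": 0.2}

# One scalar signal drives the shared parameters for all tasks.
total_loss = sum(mse(h @ features, targets[t]) for t, h in heads.items())
```

The appeal for driving is that hundreds of such heads (lanes, signs, depth, cut-ins, ...) share one backbone, so data collected for any one sub-problem improves the representation used by all of them.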

    2. MM

      Mm-hmm.

    3. LF

      And it's an interesting open question that you're absolutely right. There's a much longer tail and all these edge cases that we don't think about. But it's this, it's a fascinating question that applies to natural language in all spaces. How big, how, how big is that long tail?

    4. MM

      Right.

    5. LF

      And, uh, I mean, n- not to linger on the point, but what's your sense in driving, in these practical problems of the human experience, can it be learned? So the current... What are your thoughts of sort of Elon Musk thought, let's forget the thing that he says it'll be solved in a year, but can it be solved in, in, in a reasonable timeline? Or do fundamentally other methods need to be invented?

    6. MM

      So I, I don't... I think that ultimately driving... So, so it's a trade-off in a way. Uh, you know, being able to drive and deal with any situation that comes up does require kind of full human intelligence. And even then, humans aren't (laughs) intelligent enough to do it 'cause humans... I mean, most human accidents are, are because the human wasn't paying attention or the human's drunk or whatever.

    7. LF

      And not because they weren't intelligent enou-

    8. MM

      Uh, not because they weren't intelligent enough, right. Uh, whereas the accidents with autonomous vehicles are because they weren't intelligent enough.

    9. LF

      They're always paying attention, right? (laughs)

    10. MM

      Yeah, they're always paying attention. So, so it's a, it's a trade-off, you know? And I think that it's a very fair thing to say that autonomous vehicles will be ultimately safer than humans, 'cause humans are very unsafe. (laughs)

    11. LF

      (laughs)

    12. MM

      It's kind of a low bar. (laughs)

    13. LF

      But, uh, just like you said, the... I, I, I think humans got a bad rap, right? 'Cause we're really good at the common sense thing.

    14. MM

      Yeah, we're great at the common sense thing. We're, we're bad at the paying attention thing.

    15. LF

      Paying attention thing, right?

    16. MM

      E- especially when we're bo- you know, driving's kind of boring and we have these phones to play with and everything. But, um, I think what, what's gonna happen is that for many reasons, not just AI reasons but also, like, legal and other reasons, that the, the definition of self-driving is gonna change, or autonomous is gonna change. It's, it's not gonna be f- just... I'm gonna go to sleep in the back and you just drive me anywhere. Uh, It's gonna be more... Certain areas are going to be instrumented to have the sensors and the mapping and all of the stuff you need for... That, that the autonomous cars won't have to have full common sense, and they'll do just fine in those areas, as long as pedestrians don't mess with them too much. That's another question. (laughs)

    17. LF

      That's right. That's the key.

    18. MM

      But, um, the-

    19. LF

      Th-

    20. MM

      I don't think we will have fully autonomous self-driving in the way that, like, most... The average person thinks of it for a very long time.

    21. LF

      And just to reiterate, this is the interesting open question that I think I agree with you on, is to solve fully autonomous driving, you have to be able to engineer in common sense.

    22. MM

      Yes.

    23. LF

      I, I think it's an important thing to hear and think about. I hope that's wrong, but I currently agre-

    24. MM

      (laughs)

    25. LF

      ... agree with you that unfortunately, you do have to have a, to be more specific, sort of these deep understandings of physics and-

    26. MM

      Yeah.

    27. LF

      ... uh, of, of the way this world works, and also human dynamics. Like you mentioned, pedestrians and cyclists are actually, that's, whatever that non-verbal communication is, some people call it, there's that dynamic that, uh, is also part of this common sense.

    28. MM

      Right. And we're pretty, we humans are pretty good at predicting what other humans are gonna do.

    29. LF

      And how our, our actions impact the behaviors of-

    30. MM

      Yeah.

  7. 1:30:001:35:34

    Sure. But I guess…

    1. LF

      kinds of problems that emerge.

    2. MM

      Sure. But I guess the example that he gives there of these corporations, that's people, right?

    3. LF

      That's fundamental.

    4. MM

      Those are people's values. I mean, we're talking about people. The corporations are... Their values are the values of the people who run those corporations.

    5. LF

      But the idea is the algorithm... That's right. So the, the fundamental person... The, the fundamental element of what does the bad thing is a human being.

    6. MM

      Yeah.

    7. LF

      But the, the algorithm kind of controls the behavior of this mass of human beings.

    8. MM

      Which, which algorithm?

    9. LF

      Uh, well, for, for a company, that's the a- so for example, if it's an advertisement-driven company that recommends certain things, uh, and encourages engagement, s- sort of gets money by encouraging engagement, and therefore, the company more and more b- it's like this cycle that builds an algorithm that-

    10. MM

      Right.

    11. LF

      ... enforces more engagement and may perhaps more division in the culture and so on, so on, and-

    12. MM

      I gue- I guess the, the question here is sort of who has the agency? I- so, so you might say-

    13. LF

      Yeah.

    14. MM

      ... for instance, we don't want our algorithms to be racist.

    15. LF

      Right.

    16. MM

      And facial recognition, you know, some people have criticized some facial recognition systems as being racist 'cause they're not as, um, good on... darker skin than lighter skin.

    17. LF

      That's right.

    18. MM

      Okay. But the agency there, the, the, the, the actual alg- facial recognition algorithm isn't what has the agency. It's, it's not the racist thing, right? It's, it's the, the, the, I don't know, the, the combination of the training data, the cameras being used, I, whatever. But my understanding of... And, and I will say, I to- I agree with Bengio there that he, you know, I think there are these value issues with our use of algorithms. But my understanding of what, uh, Russell's argument was, is more that the algor- the, the machine itself has the agency now. It's the thing that's making the decisions and it's the thing that has what we would call values.

    19. LF

      Yes.

    20. MM

      So whether that's just a matter of degree, you know, it's hard, it's hard to say, right? 'Cause, but I would say that's sort of qualitatively different than a f- face recognition neural network.

    21. LF

      And to broadly linger on that point, if you look at Elon Musk or Stuart Russell or Bostrom, people who are worried about existential risks of AI, however far into the future, their argument goes is, it eventually happens. We don't know how far, but it eventually happens. Do you share any of those concerns and what kind of concerns in general do you have about AI that approach anything like existential threat to humanity?

    22. MM

      So I would say, yes, it's possible, but I think there's a lot more closer-in existential threats to humanity. (laughs)

    23. LF

      'Cause you said like a hundred years for... So your time-

    24. MM

      So more, more than a hundred years.

    25. LF

      More than a hundred years, and so that means-

    26. MM

      Maybe, maybe even more than 500 years. I don't, I don't know. I mean, it's...

    27. LF

      So the existential threats are so far out that the future is, I mean, there'll be a million different technologies that we can't even predict now that will fundamentally change the nature of our b- behavior, reality, society, and so on before then.

    28. MM

      Yeah. I think so. I think so. And, you know, we have so many other pressing existential threats going on right now.

    29. LF

      Nuclear weapons even, and...

    30. MM

      Nuclear weapons, climate problems, you know, po- poverty, possible pandemics. The- you can go on and on. (laughs) And I think tho- you know, worrying about existential threat from AI is, is, is not the best priority for what we should be worried about. That, that's kind of my view, 'cause we're so far away. But, uh, you know, I, I'm, I'm not, I'm not necessarily criticizing Russell or, or, or, or Bostrom or whoever for worrying about that. And I'm, I think it's, some, some people should be worried about it. It's, it's certainly fine. But I, I, I was more sort of getting at their, their view of intel- what intelligence is.

Episode duration: 1:52:39


Transcript of episode ImKkaeUx1MU
