Lex Fridman Podcast

Peter Norvig: Artificial Intelligence: A Modern Approach | Lex Fridman Podcast #42

Lex Fridman and Peter Norvig on AI’s Past, Present, Ethics, and Human-Centered Future.

Lex Fridman (host) · Peter Norvig (guest)
Sep 30, 2019 · 1h 3m · Watch on YouTube ↗


  1. 0:00–15:00


    1. LF

      The following is a conversation with Peter Norvig. He's the director of research at Google and the co-author with Stuart Russell of the book Artificial Intelligence: A Modern Approach that educated and inspired a whole generation of researchers, including myself, to get into the field of artificial intelligence. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support on Patreon, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with Peter Norvig. Most researchers in the AI community, including myself, own all three editions, red, green, and blue, of the, uh, Artificial Intelligence: A Modern Approach. It's a field-defining textbook, as many people are aware, that you wrote with Stuart Russell. How has the book changed and how have you changed-

    2. PN

      (laughs) Yeah.

    3. LF

      ... uh, in relation to it from the first edition to the second to the third and now fourth edition as you work on it?

    4. PN

      Yeah. So it's, so it's been a lot of years, a lot of changes. One of the things changing from the first to maybe the second or third was just the rise of, uh, computing power, right? So I think in the, in the first edition we said, uh, "Here's propositional logic, but, uh, that only goes so far 'cause pretty soon you have millions of, uh, short little propositional expressions and they couldn't possibly fit in memory. Uh, so we're gonna use first-order logic that's, uh, more concise." And then we quickly realized, "Oh, propositional logic is pretty nice because there are really fast SAT solvers and other things, and look, there's only millions of expressions and that fits easily into memory, or maybe even billions fit into memory now." So, that was a change of, uh, the type of technology we needed just because the hardware expanded.
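
      A minimal sketch of the propositional-logic point above: the reason SAT solvers matter is that the core search procedure is simple and, with modern engineering, scales to millions of clauses in memory. The DPLL-style solver below is our own Python illustration, not code from AIMA; the clause encoding and names are assumptions.

      ```python
      # A minimal DPLL-style SAT solver. Clauses are lists of ints:
      # 3 means x3, -3 means NOT x3. Real solvers add incremental unit
      # propagation, clause learning, and watched literals; this keeps
      # only the core recursive idea.

      def dpll(clauses, assignment=None):
          """Return a satisfying assignment (a set of true literals) or None."""
          if assignment is None:
              assignment = set()
          simplified = []
          for clause in clauses:
              if any(lit in assignment for lit in clause):
                  continue                       # clause already satisfied
              remaining = [lit for lit in clause if -lit not in assignment]
              if not remaining:
                  return None                    # empty clause: contradiction
              simplified.append(remaining)
          if not simplified:
              return assignment                  # every clause satisfied
          for clause in simplified:              # unit propagation
              if len(clause) == 1:
                  return dpll(simplified, assignment | {clause[0]})
          lit = simplified[0][0]                 # branch on an unassigned literal
          return (dpll(simplified, assignment | {lit})
                  or dpll(simplified, assignment | {-lit}))

      # (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
      print(dpll([[1, 2], [-1, 3], [-2, -3]]))   # e.g. {1, 3, -2}
      ```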

    5. LF

      Even to the second edition?

    6. PN

      Yeah. Yeah.

    7. LF

      So resource constraints were loosened significantly for the second edition?

    8. PN

      Yeah. Yeah. And then-

    9. LF

      And that was early 2000s, second edition?

    10. PN

      Right. So '95-

    11. LF

      Yeah.

    12. PN

      ... was the first and then, uh, 2000, 2001 or so. And then, uh, moving on from there, I think we're s- we're starting to see that again with the, uh, GPUs and then, uh, more specific types of, uh, machinery like the TPUs, and w- we're seeing custom ASICs and so on, uh, for deep learning. So, we're seeing another advance in terms of the hardware. Then I think another thing that we especially noticed this time around is in all three of the first editions, we kind of said, "Well, we're gonna define AI as maximizing expected utility."

    13. LF

      Mm-hmm.

    14. PN

      "And you tell me your utility function and now we've got 27 chapters worth of cool techniques for how to optimize that." I think in this edition, we're saying more, "You know what? Maybe that optimization part is the easy part-

    15. LF

      Mm-hmm.

    16. PN

      ... and the hard part is deciding what is my utility function? What do I want? And if I'm a collection of agents or a society, uh, what do we want as a whole?"
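
      The earlier editions' framing fits in a few lines, which is part of the point: once someone hands you the probabilities and a utility function, maximizing expected utility is mechanical. This is a generic toy illustration, not code from the book, and every number in it is invented.

      ```python
      # Expected-utility maximization, the "easy part": given
      # P(outcome | action) and a utility function, the agent is a max.
      # Actions, outcomes, and numbers are made up for illustration.
      outcomes = {
          "act_safe":  {"small_win": 0.9, "big_loss": 0.1},
          "act_risky": {"big_win": 0.5, "big_loss": 0.5},
      }
      utility = {"small_win": 10, "big_win": 100, "big_loss": -80}

      def expected_utility(action):
          return sum(p * utility[o] for o, p in outcomes[action].items())

      best = max(outcomes, key=expected_utility)
      print(best, expected_utility(best))   # the hard part was writing `utility`
      ```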

    17. LF

      So, you touched that topic in this edition. You get-

    18. PN

      Yeah.

    19. LF

      ... a little bit more into utility.

    20. PN

      Yeah. Yeah.

    21. LF

      That's really interesting. Uh, on a, uh, a technical level or almost pushing the philosophical?

    22. PN

      I guess it, it is philosophical, right? So we, we've always had a philosophy chapter, which, which I was, uh, glad to s- that we were supporting. And now, it's less kind of the, uh, you know, Chinese room-type argument-

    23. LF

      Mm-hmm.

    24. PN

      ... and more of these, uh, ethical and societal-type issues. Uh, so we get into, uh, the issues of, uh, fairness and bias and, uh, and just the issue of, uh, aggregating utilities.

    25. LF

      So, how do you encode human values into a utility function?

    26. PN

      (laughs)

    27. LF

      Is, is there something that you can do purely through data in a learned way or is there some systematic... Obviously, there's no good answers yet. There's just, uh-

    28. PN

      Yeah.

    29. LF

      ... beginnings to this, uh, to, to even opening the door to these questions.

    30. PN

      Right. So, there is no one answer. Yes, there are techniques, uh, to try to learn that. So we talk about inverse reinforcement learning.
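
      The book treats inverse reinforcement learning over full sequential decision problems; the sketch below keeps only the core move, inferring a utility function from observed behavior instead of writing it down, in the simplest possible setting. It assumes a Boltzmann-rational chooser over three options, and all data, names, and constants are invented.

      ```python
      # Toy "inverse" utility learning: watch choices, recover utilities.
      # Assumes choices follow softmax(true_utility); synthetic data.
      import numpy as np

      rng = np.random.default_rng(0)
      true_utility = np.array([2.0, 0.5, -1.0])    # hidden preferences

      def softmax(u):
          e = np.exp(u - u.max())
          return e / e.sum()

      # Observe 2000 choices made by the hidden-utility agent.
      choices = rng.choice(3, size=2000, p=softmax(true_utility))

      # Maximum-likelihood recovery by gradient ascent on log P(choices | u).
      u = np.zeros(3)
      counts = np.bincount(choices, minlength=3)
      for _ in range(500):
          grad = counts - len(choices) * softmax(u)   # gradient of log-likelihood
          u += 0.001 * grad
      u -= u.mean()                  # utilities are only defined up to a shift
      print(np.round(u, 2))          # roughly [1.5, 0.0, -1.5], the centered truth
      ```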

  2. 15:00–30:00


    1. PN

      I think we'll gain a better understanding of what you can do there. I think we'll need to incorporate all the things we can do with the other technologies.

    2. LF

      Mm-hmm.

    3. PN

      Right? So deep learning started out doing convolutional networks and, uh, very close to perception, uh, and has since moved to be, uh, to be able to do more with, uh, actions and some degree of, of longer term planning. Uh, but we need to do a better job with, uh, representation and reasoning and, uh, one-shot learning and so on, and w- well, I think we don't know yet how that's gonna play out.

    4. LF

      So do you think, looking at the, some success but certainly, uh, eventual demise, the partial demise of expert systems, symbolic, uh, systems in the '80s, do you think there are kernels of wisdom in the work that was done there with logic and reasoning and so on that will rise again, in your view?

    5. PN

      So certainly I think the idea of representation and reasoning is crucial, that, uh, you know, sometimes you just don't have enough data about the world to learn de novo, uh, so you've got to have a, a, some idea of representation, whether that was programmed in or told or whatever, and then be able to take, uh, steps of reasoning. I, I think the problem, uh, with, uh, you know, the good old-fashioned AI was, uh, one, we tried to base everything on these, uh, symbols that were atomic. And that's great if you're, like, trying to define the properties of a triangle.

    6. LF

      Right.

    7. PN

      Right? Because they have necessary and sufficient conditions. Uh, but things in the real world don't. The real world is, is messy and doesn't have sharp edges, and atomic symbols do. So that was a, a poor match. And then the other aspect was that the, uh, reasoning was universal and applied anywhere, which in some sense is good, but it also means there's no guidance as to where to apply.

    8. LF

      Mm-hmm.

    9. PN

      And so you, you know, you started getting these paradoxes like, uh, uh, "Well, if I have a mountain and I remove one grain of sand, uh, then it's still a mountain," and, "But if I do that repeatedly, at some point it's not," right? And, uh, with logic, you know, there's nothing to stop you from applying things, uh, repeatedly. Uh, but maybe with, uh, something like, uh, deep learning, and I don't really know what the right name for it is, uh, we could separate out those ideas. So, one, we could say, uh, you know, a mountain isn't just an atomic notion. It, it's some sort of, something like a word embedding that, uh, uh, has a, uh, a more complex representation.

    10. LF

      Mm-hmm. Yeah.

    11. PN

      And secondly, we could somehow learn, yeah, there's this rule that you can remove one grain of sand, uh, and you can do that a bunch of times but you can't do it, uh, a near infinite amount of times. But on the other hand, when you're doing induction on the integers, sure, then it's fine to do it an infinite number of times. And if we could l- uh, somehow we have to learn when these strategies are applicable-... rather than having the strategies be completely neutral and avai- uh, available everywhere.
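
      A toy way to picture the contrast, not anything from the book: an atomic predicate has a sharp, arbitrary edge, so one grain flips it, while a graded score (a crude stand-in for an embedding-like representation) barely moves. The threshold and curve below are invented for the demo.

      ```python
      # Sorites in code: a crisp predicate vs. a graded membership score.
      # The 1,000,000-grain threshold and the sigmoid are arbitrary choices.
      import math

      def is_mountain_atomic(grains):
          return grains >= 1_000_000                # sharp, arbitrary edge

      def mountain_degree(grains):
          # Smooth membership in [0, 1].
          return 1 / (1 + math.exp(-(grains - 1_000_000) / 100_000))

      for g in (1_200_000, 1_000_000, 999_999, 800_000):
          print(g, is_mountain_atomic(g), round(mountain_degree(g), 3))
      # The atomic predicate flips from True to False on a single grain;
      # the graded degree changes imperceptibly.
      ```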

    12. LF

      Anytime you use neural networks, anytime you learn from data or form representation from data in an automated way, it's not very explainable as to, uh, or it's not introspective to us humans in terms of, uh, how this neural network sees the world. Where... Why does it succeed so brilliantly on so many, in so many cases and fail so miserably in surprising-

    13. PN

      Yeah.

    14. LF

      ... ways and small. So, what do you think is, um, the future there? Can simply more data, better data, more organized data solve that problem, or are there elements of symbolic systems that need to be brought in which are a little bit more explainable?

    15. PN

      Yeah. So, I prefer to talk about trust and, uh, validation and verification rather than just about explainability. And then I think, uh, explanations are one tool that you use towards those goals. And I think it is an important issue that, uh, we don't want to use these systems unless we trust them and we want to understand where they work and where they don't work, and- and an explanation can be part of that, right? So, I apply for a loan and I get, uh, denied, I want some explanation of why, and, uh, in Europe we have the GDPR that says, uh, you're required to be able to get that. But on the other hand, an explanation alone is not enough, right? So, you know, we're used to dealing with people and with, uh, organizations and corporations and so on, and they can give you an explanation and you have no guarantee that that explanation relates to reality.

    16. LF

      Right.

    17. PN

      Right? So, the bank can tell me, "Well, you didn't get the loan 'cause you didn't have enough collateral," and that may be true or it may be true that they just didn't like my, uh, religion or- or something else. Uh, I can't tell from the explanation, and that's, uh, that's true whether the decision was made by a computer or by a person. So, I want more. I do want to have the explanations and I want to be able to, uh, have a conversation to go back and forth-

    18. LF

      Mm-hmm.

    19. PN

      ... and say, "Well, you gave this explanation, but what about this?"

    20. LF

      Mm-hmm.

    21. PN

      "And what would have happened if this had happened? And, uh, what would- what would I need to change that?" So, I think a conversation is- is a better way to think about it than just, uh, an explanation as a single output. Uh, and I think we need testing of various kinds, right? So, in order to know, was the decision really based on my collateral or was it based on my, uh, religion or skin color or whatever? I can't tell if I'm only looking at my case, but if I look across all the cases, then I can detect a pattern.

    22. LF

      Right.
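
      A bare-bones sketch of that cross-case test, with invented records: no single explanation settles the question, but aggregating decisions by group can surface a pattern worth investigating.

      ```python
      # Group-level audit: one denial explains nothing, but rates across
      # many cases can reveal a pattern. Records are invented.
      from collections import defaultdict

      cases = [
          {"group": "A", "collateral": 50_000, "approved": True},
          {"group": "A", "collateral": 12_000, "approved": True},
          {"group": "B", "collateral": 52_000, "approved": False},
          {"group": "B", "collateral": 13_000, "approved": False},
          # ... thousands more in a real audit
      ]

      stats = defaultdict(lambda: [0, 0])           # group -> [approved, total]
      for case in cases:
          stats[case["group"]][0] += case["approved"]
          stats[case["group"]][1] += 1

      for group, (approved, total) in sorted(stats.items()):
          print(f"group {group}: {approved}/{total} approved")
      # Similar collateral but very different approval rates across groups
      # is the kind of pattern no individual explanation can surface.
      ```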

    23. PN

      Right? So, you want to have that kind of capability. Uh, you want to have this kind of adversarial testing, right? So, we thought we were doing pretty well at, uh, object recognition in- in images. We said, "Look, we're- we're at sort of pretty close to human-level performance on ImageNet," and so on. Uh, and then you start seeing these adversarial images and you say, "Wait a minute, that part is nothing like (laughs) human performance." Uh-
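
      The effect is easy to reproduce in miniature. The sketch below stands in a linear classifier for a deep net and takes an FGSM-style step (the fast gradient sign method); weights and input are synthetic. A per-coordinate change of 0.05 flips a confident decision because the dot product accumulates a thousand tiny nudges.

      ```python
      # Minimal adversarial example on a linear classifier: a tiny,
      # uniform perturbation flips the decision. Everything is synthetic.
      import numpy as np

      rng = np.random.default_rng(1)
      w = rng.normal(size=1000)              # classifier weights
      x = rng.normal(size=1000)              # the "image"
      x -= w * (w @ x - 1.0) / (w @ w)       # adjust x so w @ x == 1.0 (confident positive)

      eps = 0.05
      x_adv = x - eps * np.sign(w)           # FGSM-style step against the score

      print("clean score:      ", w @ x)         # 1.0 -> positive
      print("adversarial score:", w @ x_adv)     # strongly negative -> flipped
      print("max change per coordinate:", np.abs(x_adv - x).max())   # just eps
      ```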

    24. LF

      Yeah, you can mess with it really easily.

    25. PN

      You can mess with it really easily, right?

    26. LF

      Yeah.

    27. PN

      And, uh, yeah, you can do that to humans too, right? So...

    28. LF

      In a different way, perhaps.

    29. PN

      Right. Humans don't know what color the dress was.

    30. LF

      Right.

  3. 30:00–45:00


    1. PN

      or online, so sort of the physical location, and then the other is, uh, kind of the affiliation, right? So, you stuck with it in part because you were in the classroom and you saw everybody else was suffering-

    2. LF

      Right. (laughs)

    3. PN

      ... the same, the same way you were.

    4. LF

      Yeah.

    5. PN

      Uh, but also because you were enrolled, you had paid tuition.

    6. LF

      Yeah.

    7. PN

      Sort of everybody was expecting you to stick with it. Uh-

    8. LF

      Mm-hmm. Society, parents-

    9. PN

      Yeah.

    10. LF

      ... c- uh, peers.

    11. PN

      Right.

    12. LF

      Yeah.

    13. PN

      And so those are two separate things. I mean, you could certainly imagine I pay a huge amount of tuition (laughs) and everybody signed up and says, "Yes, you're doing this," uh... but then I'm in my room and my classmates are in, are in different rooms, right? We c- we could have things set up that way. Uh, so it's not just the online versus offline. I think what's more important is the commitment, uh, that you've made. And certainly, it is important to have that kind of informal, uh, you know, I meet people outside of class, we talk together because we're all in it together. Uh, I think that's, uh, really important both in keeping your motivation and also that's where some of the most important learning goes on. So, you wanna have that. Uh, maybe, you know, e- especially now, we start getting into higher bandwidths and augmented reality and virtual reality, you might be able to get that without being in the same physical place.

    14. LF

      Do you think it's possible we'll see a course at Stanford, for example, that, for students, enrolled students, is only online in the near future? Where literally, sort of it's part of the curriculum and there is no...

    15. PN

      Yeah. So, you're starting to see that. Uh, I know, uh, Georgia Tech has a master's, uh, that's done that way.

    16. LF

      Oftentimes, it's sort of they're creeping in, in terms of a master's program or sort of a further education-

    17. PN

      Yeah.

    18. LF

      ... considering the constraints of students and so on. But I mean literally, is it possible that we, uh, you know, Stanford, MIT, Berkeley, all these places go online only in, uh, in the next few decades?

    19. PN

      Y- yeah, probably not 'cause, you know, they've got a big, uh, commitment to a physical campus.

    20. LF

      Sure.

    21. PN

      Right?

    22. LF

      So there, there's (laughs) there's a momentum-

    23. PN

      Yeah.

    24. LF

      ... that's both financial and cultural, yeah?

    25. PN

      Right. And th- and then there are certain things that's just hard to do, uh, virtually, right? So, you know, we're in a field, uh, where, uh, if you have your own computer and your own paper (laughs) and so on, uh, you can do the work anywhere. Uh, but if you're in a biology lab or something, uh, you know, y- you don't have all the right stuff at home.

    26. LF

      Right. So our field, programming, you've also done a lot of, you've done a lot of programming yourself. In, uh, 2001, you wrote a great article about programming called Teach Yourself Programming in Ten Years, sort of a response to-

    27. PN

      Yeah.

    28. LF

      ... all the books that say Teach Yourself Programming in 21 Days. So, if you were giving advice to someone getting into programming today, this is, uh, a few years since you've written that article, what's the best way to undertake that journey?

    29. PN

      I think there's lots of different ways and I think, uh, programming means more things now. And I guess, you know, when I wrote that article, I was thinking more about becoming a professional software engineer. And I thought that's a, you know, a c- sort of a career-long, uh, field of study. Uh, but I think there's lots of things now that people can do where programming is a part of solving what they wanna solve, uh, without it achieving that professional-level status.

    30. LF

      Yeah.

  4. 45:00–1:00:00


    1. LF

    2. PN

      Yeah. (laughs) Yeah. So, I think a couple things. So, one was, I think it was designed for a single programmer or a small team, and a skilled programmer who had the good taste to say, "Well, I'm, I am doing language design and I have to make g- good choices." And if you make good choices, that's great. If you make bad choices, y- uh, you can hurt yourself.

    3. LF

      Mm-hmm.

    4. PN

      And it can be hard for other people on the team to understand it.

    5. LF

      Right.

    6. PN

      So, I think there was a, a limit to the scale of the size of a project in terms of number of people that Lisp was good for. And as an industry, we kind of grew, uh, beyond that. I think it is in part the parentheses. You know, one of the jokes is the acronym for Lisp is, uh, Lots Of Irritating Silly Parentheses.

    7. LF

      (laughs)

    8. PN

      Uh, my acronym was, uh, Lisp Is Syntactically Pure, saying all you need is parentheses and atoms. But I remember, you know, so we had the, the AI textbook and, uh, because we did it in the '90s, we had, uh, we had pseudocode in the book, but then we said, "Well, we'll have Lisp online," 'cause that's the language of AI at the time.

    9. LF

      Mm-hmm.

    10. PN

      And I remember some of the students complaining 'cause they hadn't had Lisp before and they didn't quite understand what was going on. And, and I remember one student complained, "I don't understand how this pseudocode corresponds to this Lisp." And there was a one-to-one correspondence between the (laughs) the, uh, symbols in the code and the pseudocode, and the only difference was the parentheses.

    11. LF

      (laughs)

    12. PN

      (laughs) So, I said, "It must be that for some people, a certain number of left parentheses shuts off their brain."

    13. LF

      (laughs) Yeah, it's very, it's very possible in that sense, and Python just goes the other way.

    14. PN

      Yeah.

    15. LF

      It just-

    16. PN

      And so, so that was the point at which I said, "Okay, can't have only Lisp as a language."

    17. LF

      Yeah.

    18. PN

      Because I, you know, I don't wanna, you know, you only got 10 or 12 or 15 weeks or whatever it is to teach AI and I don't want to waste two weeks of that teaching Lisp. So, I said, "I gotta have another language." Java was the most popular language at the time. I started doing that, and then I said, "It's really hard to have a one-to-one correspondence between the pseudocode and the Java," because Java is so verbose. Uh, so then I said, "I'm gonna do a survey and find the language that's most like my pseudocode," and it turned out Python basically was my pseudocode.
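
      The correspondence he describes is easy to see side by side. The pseudocode below is written in the general style of the book's (not quoted from it), with Python underneath that maps to it nearly line for line.

      ```python
      # Made-up pseudocode in the book's general style:
      #
      #   function BREADTH-FIRST-SEARCH(start, goal) returns a node or failure
      #       frontier <- a FIFO queue containing start
      #       reached  <- {start}
      #       while frontier is not empty do
      #           node <- POP(frontier)
      #           if node = goal then return node
      #           for each child in EXPAND(node) do
      #               if child is not in reached then
      #                   add child to reached; add child to frontier
      #       return failure

      from collections import deque

      def breadth_first_search(start, goal, expand):
          frontier = deque([start])
          reached = {start}
          while frontier:
              node = frontier.popleft()
              if node == goal:
                  return node
              for child in expand(node):
                  if child not in reached:
                      reached.add(child)
                      frontier.append(child)
          return None   # failure

      # Toy state space: children of n are n+1 and 2n.
      print(breadth_first_search(1, 10, lambda n: [n + 1, 2 * n]))   # -> 10
      ```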

    19. LF

      (laughs)

    20. PN

      Somehow, I had channeled, uh, Guido-

    21. LF

      (laughs)

    22. PN

      ... designed a pseudocode that was the same as Python, although I hadn't heard of Python, uh, at that point. Uh, and from then on, uh, that's what I've been using 'cause it's been a good match.

    23. LF

      So, what's the story behind pytudes? Your GitHub repository-

    24. PN

      Yeah.

    25. LF

      ... with puzzles and exercises in Python is pretty fun.

    26. PN

      Yeah, just, uh, uh, it seemed like fun. Uh, you, you know, I like, uh, doing puzzles and I like, uh, being an educator. I, I did a class with Udacity, uh, Udacity 212, I think it was, that was basically problem-solving, uh, uh, using Python and looking at different problems. And-

    27. LF

      Does pytudes feed that class, uh, in terms of the exercises? I was wondering what the-

    28. PN

      Uh, yeah, so the class, the class came first.

    29. LF

      Yeah.

    30. PN

      Some of the stuff that's in pytudes was write-ups of what was in the class, and then some of it was just continuing to, uh, to, uh, work on new problems.

  5. 1:00:00–1:02:57


    1. PN

      whether they're robots or drones or whatever. Uh, some of that, uh, we're seeing due to AI, a lot of it, uh, you don't need AI.

    2. LF

      Mm-hmm.

    3. PN

      Uh, and I don't know what's a, what's a worse threat, whether it's an autonomous drone or, uh, CRISPR technology becoming available or... we have lots of, uh, threats to face and some of them involve AI and some of them don't.

    4. LF

      So the threats that technology presents, are you, for the most part, optimistic about technology also alleviating those threats or creating new opportunities or protecting us from the more detrimental effects of these new technologies?

    5. PN

      Yeah, I don't know. It, it... again, it's hard to predict the future and, uh-

    6. LF

      (laughs) Yes.

    7. PN

      ... as a succ- society so far we've survived, uh, nuclear bombs-

    8. LF

      Somehow, yeah.

    9. PN

      ... and, and other things. Of course, uh, only societies that have survived are having this conversation.

    10. LF

      (laughs)

    11. PN

      So, uh, uh, maybe that's a survivorship bias there.

    12. LF

      Yeah. What problem stands out to you as exciting, challenging, impactful to work on in the near future for yourself, for the community and, and broadly?

    13. PN

      So, I, you know, we talked about these, uh, assistants in conversation. I, I think that's a great area. I think, uh, combining, uh, common sense reasoning, uh, with, uh, the power of data i- is a, a great area.

    14. LF

      In which application? In, in conversation, or just broadly speaking?

    15. PN

      Just in, in general, yeah. As a programmer, I'm interested in, uh, programming tools, both in terms of, uh, you know, the current systems we have today with, with TensorFlow and so on. Can we make them much easier to use for a broader-

    16. LF

      Mm-hmm.

    17. PN

      ... uh, class of people? And also, can we apply, uh, machine learning to, uh, the more traditional type of programming, right? So, you know, when you go to Google and you, uh, type in a query and you spell something wrong, it says, "Did you mean..."

    18. LF

      Mm-hmm.

    19. PN

      And the reason we're able to do that is 'cause lots of other people made a similar error and then they corrected it. Uh, we should be able to go into our code bases and our bug fix bases and, uh, when I type a line of code, it should be able to say, "Did you mean such and such?" If you type this today, you're probably gonna fi- type in this bug fix, uh, tomorrow.
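
      A bare-bones version of that idea using only the standard library: suggest the closest line from a corpus of previously corrected lines. A real system would mine millions of bug fixes; the corpus and names here are invented.

      ```python
      # "Did you mean" for code: nearest-neighbor lookup over past fixes.
      # The corpus is invented; difflib does the fuzzy matching.
      import difflib

      corrected_lines = [
          "for i in range(len(items)):",
          "if response.status_code == 200:",
          "with open(path, encoding='utf-8') as f:",
      ]

      def did_you_mean(typed_line):
          matches = difflib.get_close_matches(typed_line, corrected_lines,
                                              n=1, cutoff=0.6)
          return matches[0] if matches else None

      print(did_you_mean("for i in range(len(items):"))
      # -> "for i in range(len(items)):"  (the historically corrected form)
      ```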

    20. LF

      Yeah, that's a really exciting application of, uh, almost a, a- an assistant for the coding programming experience-

    21. PN

      Yeah.

    22. LF

      ... at every level. So, I think I could s- safely speak for the entire AI community (laughs) -

    23. PN

      Mm-hmm.

    24. LF

      ... first of all, uh, in thanking you for the amazing work you've done. Uh, certainly for the amazing work you've done with, uh, the AI: A Modern Approach book.

    25. PN

      Yeah, thank you.

    26. LF

      I think we're all looking forward very much to the fourth edition and then the fifth edition and so on.

    27. PN

      Yeah, yeah.

    28. LF

      So, uh, Peter, thank you so much for talking today.

    29. PN

      Yeah, thank you. A pleasure.

Episode duration: 1:03:12
