Lex Fridman Podcast

Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI | Lex Fridman Podcast #65

Lex Fridman and Daniel Kahneman on human thinking, the limits of AI, and life’s stories.

Daniel Kahneman (guest) · Lex Fridman (host)
Jan 14, 2020 · 1h 18m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–15:00


    1. DK

      The following is a conversation with Daniel Kahneman, winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision-making. He's the author of the popular book, Thinking, Fast and Slow, that summarizes in an accessible way his research of several decades, often in collaboration with Amos Tversky...... and what, what people can do.

    2. LF

      So, the effect of the in group and the out group?

    3. DK

      You know, the- it's clear that those were people, you know, you could- you could shoot them. You could- you know, they were not human. They were not- there was no empathy or very, very little empathy left. So occasionally, you know, they might have been- and, and very quickly, by the way, uh, the empathy disappeared if there was initially. And the fact that everybody around you was doing it, that, that completely- the group doing it and everybody shooting Jews, I think that, that, uh, makes it permissible. Now, how much, you know, whether it would- it could happen, uh, in every culture or whether the Germans were just particularly efficient and, and disciplined so they could get away with it.

    4. LF

      Mm-hmm.

    5. DK

      That's-

    6. LF

      It's a question.

    7. DK

      It's an interesting question.

    8. LF

      Are these artifacts of history or is it human nature?

    9. DK

      I think that's really human nature. You know, you put some people in a position of power relative to other people and, and then they become less human. They- they become different.

    10. LF

      But in general, in war, outside of concentration camps in World War II, it seems that war brings out darker sides of human nature, but also the beautiful things about human nature.

    11. DK

      Well, you know, I mean, what it- what it brings out is the, the loyalty among soldiers. I mean, it brings out the bonding. Male bonding, I think, is a very real thing that- and that happens. And so- and, and there is a certain thrill to friendship and there is certainly a certain thrill to friendship under risk-

    12. LF

      Yeah.

    13. DK

      ... and to shared risk. And so people have very profound emotions up to the point where it gets so traumatic that, uh, that little is left, but-

    14. LF

      So, let's talk about psychology a little bit. Uh, in your book, Thinking, Fast and Slow, you describe two modes of thought system. One, the fast, instinctive and emotional one, and system two, the slower, deliberate, logical one. At the risk of asking Darwin to discuss (laughs) , uh, theory of evolution, uh, can you describe distinguishing characteristics for people who have not read your book of the two systems?

    15. DK

      Well, I mean, the word system is a bit misleading, but it's- at the same time it's misleading, it's also very useful.

    16. LF

      Yes.

    17. DK

      But what I call system one, it's easier to think of it as, as a family of activities. And primarily the way I describe it is there are different ways for ideas to come to mind, and some ideas come to mind automatically. And the example- a standard example is two plus two, and then something happens to you. And, and in other cases you've got to do something, you've got to work in order to produce the idea. And my example, I always give the same pair of numbers as 27 times 14, I think.

    18. LF

      You have to d- perform some algorithm in your head, some steps.

    19. DK

      Yes, and, and it takes time.

    20. LF

      Yeah.

    21. DK

      It's very different. Nothing comes to mind except something comes to mind which is the algorithm, I mean, that you've got to perform, and then it's work-

    22. LF

      Right.

    23. DK

      ... and it engages short-term memory and it engages executive function and it makes you incapable of doing other things at the same time. So, uh, the- the main characteristic of system two that there is mental effort involved and there's a limited capacity for mental effort, whereas system one is effortless essentially. That's the major distinction.
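Kahneman's 27 × 14 example can be made concrete: the algorithm "you've got to perform" is ordinary long multiplication, sketched below as a stand-in for effortful, stepwise System 2 computation. The code is an editorial illustration, not anything discussed in the episode.

```python
def long_multiply(a, b):
    """System-2-style multiplication: explicit partial products,
    held in working memory and combined step by step."""
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        partial = a * int(digit) * 10 ** place  # one effortful step
        total += partial
    return total

# Kahneman's pair: nothing "comes to mind" for 27 x 14 -- you run the algorithm:
# 27 * 4 = 108, 27 * 10 = 270, 108 + 270 = 378.
print(long_multiply(27, 14))  # 378
```

Each loop iteration is one deliberate step that occupies short-term memory, whereas "two plus two" arrives as a single retrieved fact.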

    24. LF

      So, you talk about their- you know, it's really convenient to talk about two systems, but you also mentioned just now and in general that there's no distinct two systems in the brain from a neurobiological, even from psychology perspective. But why does it seem to, uh ... from the experiments you've conducted, there does seem to be kind of emergent two modes of thinking. So, at some point these kinds of systems came into a brain architecture, maybe mammals share it, but- or, or do you not think of it at all in those terms that it's all a mush and these two things just emerge?

    25. DK

      I mean, you know, evolutionary theorizing about this is cheap and-

    26. LF

      Yeah. Fair enough.

    27. DK

      ... and easy. So it's- the way I think about it is that it's very clear that animals, uh, have, have a perceptual system and that includes an ability to understand the world-

    28. LF

      Mm-hmm.

    29. DK

      ... at least to the extent that they can predict. They can't explain anything, but they can anticipate what's going to happen and that's a key form of understanding the world. And my crude idea is that we- what I call system two-

    30. LF

      Mm-hmm.

  2. 15:00–30:00


    1. DK

      And, and all that is in system one, so you... uh, system two verifies.

    2. LF

      So, along this line of thinking, really, what we are, are machines that construct pretty effective system one. You could think of it that way. So, so we're now talking about humans, but if you think about building artificial intelligence systems, robots, do you think all the features and bugs that you have highlighted in human beings are useful for constructing AI systems? So both systems are useful for perhaps-

    3. DK

      Well-

    4. LF

      ... instilling in robots?

    5. DK

      What is happening these days is that actually what is happening in deep learning is, is more like a system one product than like a system two product.

    6. LF

      Mm-hmm.

    7. DK

      I mean, deep learning matches patterns and anticipate what's going to happen, so it's highly predictive. Uh, what-

    8. LF

      That's right.

    9. DK

      ... what d- deep learning doesn't have, and, you know, many people think that this is a critical... it, it doesn't have the ability to reason, so it, it does... uh, there is no system two there. But I think very importantly, it doesn't have any causality or any way to represent meaning and to represent real interactions. So, uh, until that is solved, uh, the... you know, what can be accomplished is marvelous and very exciting but limited.

    10. LF

      That's actually really nice to think of, uh, current advances in machine learning as essentially system one advances. So how far can we get with just system one?

    11. DK

      Well, um-

    12. LF

      If we think of deep learning and artificial intelligence (laughs) systems as system one.

    13. DK

      I mean, you know, it's very clear that DeepMind has already gone way, way beyond what people thought was possible. I think, I think the thing that has impressed me most about the developments in AI, is the speed. It's that things, at least in the context of deep learning, and maybe this is about to slow down, but things moved a lot faster than anticipated. The transition from solving, solving chess to solving Go, uh, was... I mean, that's bewildering how quickly it went. The move from AlphaGo to AlphaZero is sort of bewildering, the speed at which they accomplished that. Now, clearly, uh, they are... there re- so there are many problems that you can solve that way, but there are some problems for which you need something else. Well, reasoning and also... you know, the... uh, one of the real mysteries, uh, a psychologist Gar- Gary Marcus, who is also a critic of AI, um... I mean, he... what he points out, and I think he has a point, is that, uh, humans learn quickly... uh, children don't need a million examples, they need two or three examples. So, clearly there is a fundamental difference. And what enables, uh, what enables a machine to, to learn quickly, what you have to build into the machine, because it's clear that you have to build some expectations or something in the machine to make it ready to learn quickly, uh, that's, that at the moment seems to be unsolved. I'm pretty sure that DeepMind is working on it but, um-

    14. LF

      Yeah, they're-

    15. DK

      ... if they have solved it, I, I haven't heard yet.

    16. LF

      They're trying to actually, them and OpenAI are trying to, to start to get to use neural networks to reason. So assemble knowledge-

    17. DK

      Yeah.

    18. LF

      ... uh, of course causality is, temporal causality is out of reach to most everybody. You, you mentioned wha- the benefits of system one is essentially that it's fast, it allows us to function in the world.

    19. DK

      Fast and skilled, yeah.

    20. LF

      It's skill.

    21. DK

      And it has a model of the world. You know, in a sense, I mean there was the earlier phase of, of, uh, AI, uh, attempted to model reasoning, and they were moderately successful, but you know, reasoning by itself doesn't get you m- much. Uh, deep learning has been much more successful in terms of, you know, what they can do. But now, it's an interesting question, whether it's approaching its limits. What do you think?

    22. LF

      I think absolutely. So I, I just talked to Yann LeCun, he mentioned, you know... (laughs)

    23. DK

      I know him.

    24. LF

      So he thinks that, uh, the limits, we're not going to hit the limits with neural networks, that ultimately this kind of system one pattern matching will start to, start to look like system two, with, without significant transformation of the architecture. So I'm more with the, with the majority of the people who think that yes, neural networks will hit a limit in their capability.

    25. DK

      Well he, on the one hand I have heard him tell Demis Hassabis essentially that, you know, what they have accomplished is not a big deal, that they have-

    26. LF

      Mm-hmm.

    27. DK

      ... just touched, that basically, you know, they can't do unsupervised learning-

    28. LF

      Yeah.

    29. DK

      ... in a, in an effective way. And, but you're telling me that he thinks that the current, within the current architecture, you can do causality and reasoning?

    30. LF

      So he's very much a pragmatist in a sense that's saying that we're very far away, that there's still-

  3. 30:00–45:00


    1. LF

      lies, is it seems that almost every robot-human collaboration system is a lot harder than people realize. So, do you think it's possible for robots and humans to collaborate successfully? Uh, we, we talked a little bit about semi-autonomous vehicles, like in the Tesla, autopilot, but just in tasks in general... i- if you think, we talked about current neural networks being kind of system one. Do you think, uh, those same systems can borrow humans for system two type tasks and collaborate successfully?

    2. DK

      Well, I think that in any system where humans and, and the machine interact, uh, the human will be superfluous within a fairly short time. Uh, that is if, if the machine is advanced enough so that it can really help the human, then it may not need the human for a long time. Now, it would be very interesting if, if there are problems that for some reason the machine doesn't, cannot solve, but that people could solve, then you would have to build into the machine an ability to recognize that it is in that kind of problematic situation-

    3. LF

      Mm.

    4. DK

      ... and, and to call the human. That, that cannot be easy without understanding. That is, it's, it must be very difficult to, to program a recognition that you are in a problematic situation without understanding the problem, but...

    5. LF

      That's very true. In order to understand the full scope of situations that are problematic, you almost need to be smart enough-

    6. DK

      To solve it.

    7. LF

      ... to solve all those problems.

    8. DK

      Yeah. It's not clear to me how much the machine will need the human. I think the example of chess is very instructive. I mean, there was a time at which Kasparov was saying that human-machine combinations will beat everybody. Uh, even Stockfish doesn't need people.

    9. LF

      Yeah.

    10. DK

      And AlphaZero certainly doesn't need people.

    11. LF

      The question is, just like you said, how many problems are like chess and how many problems are the ones where are not like chess, where-

    12. DK

      Let me-

    13. LF

      ... well, every problem probably in the end is like chess. The question is, how long is that transition period?

    14. DK

      I mean, you know, that's, that's a question I would ask you in terms of... I mean, autonomous vehicle, just driving, is probably a lot more complicated than Go to solve the-

    15. LF

      Yes.

    16. DK

      ... to solve the problem.

    17. LF

      And, and that's surprising to people.

    18. DK

      Because it's open. No. I mean, I, you know, it wouldn't, that's not surprising to me because the, because the, there is a hierarchical aspect to this, which is recognizing a situation, and then within the situation, bringing, bringing up the relevant knowledge.

    19. LF

      Right.

    20. DK

      And, uh, and for that hierarchical type of system to work, uh, you need a more complicated system than we currently have.

    21. LF

      A lot of people think because as human beings, this is probably the, the cognitive biases, they think of driving as pretty simple because they think of their own experience. This is actually a, a b- big problem for AI researchers or people thinking about AI because they evaluate how hard a particular problem is based on very limited knowledge-

    22. DK

      Yeah.

    23. LF

      ... ba- basically on how hard it is for them to do the task.

    24. DK

      Yeah.

    25. LF

      And then they take for granted... I me- maybe you can speak to that because most people tell me driving is trivial, and-

    26. DK

      Well-

    27. LF

      ... and humans, in fact, are terrible at driving is what people tell me. And I see humans, and humans are actually incredible at driving, and driving is really terribly difficult.

    28. DK

      Yeah.

    29. LF

      Uh, so do you... (laughs) is that just another element of the effects that you've described in your work on the psychology side?

    30. DK

      Well...

  4. 45:00–1:00:00


    1. LF

      the being in it. But there's also a modern... I don't know if you think about this or interact with it, there's a modern way to, uh, magnify the remembering self, which is by posting on Instagram, on Twitter, on social networks. A lot of people live life for the picture that you take, that you post somewhere. And now thousands of people share it and potentially- potentially millions, and then you can relive it even much more than just those minutes. Do you think about that-

    2. DK

      I-

    3. LF

      ... magnification much?

    4. DK

      You know, I'm too old for social networks. I, you know, I- I've never seen Instagram, so-

    5. LF

      (laughs)

    6. DK

      ... I cannot really speak intelligently about those things. I'm just too old.

    7. LF

      But it's interesting to watch the exact effects you described?

    8. DK

      I- I think it will make a very big difference. I mean, and it will make... It will also make a difference, and that I don't know, whether, uh... It's clear that in some ways the devices that serve us, uh, supplant function. So you don't have to remember phone numbers, you don't have... You really don't have to know facts. I mean, the number of conversations I'm involved with where somebody says, "Well, let's look it up."

    9. LF

      Yeah.

    10. DK

      Uh, so it's- it's a... In a way, it's made conversations... Well, it's- it means that it's much less important to know things. You know, it used to be very important to know things. This is changing. So the requirements of that- that we have for ourselves and for other people are changing because of all those supports and because... And I have no idea what Instagram does-

    11. LF

      (laughs)

    12. DK

      ... but it's, uh-

    13. LF

      Well, I'll tell you-

    14. DK

      ... I wish I knew.

    15. LF

      I mean, I- I wish I could just have the... my remembering self could enjoy this conversation, but I'll get to enjoy it even more by having wat- by watching it and then talking to others, it'll be about 100,000 people, scary as it is to say, (laughs) "Well, listen or watch this." Right? It changes things. It changes the experience of the world, that you seek out experiences which could be shared in that way. It's in... And- and I haven't seen... It's- it's the same effects that you described, and I don't think the psychology of that magnification has been described yet, 'cause it's a new world.

    16. DK

      I mean, you know, the sharing... There was a per- there was a time when people read books and, uh-

    17. LF

      (laughs)

    18. DK

      ... and- and you could assume that your friends had read the same books that you read.

    19. LF

      Mm-hmm.

    20. DK

      So there was-

    21. LF

      Kind of invisible sharing that you're-

    22. DK

      There was a lot of sharing going on, and there was a lot of assumed common knowledge.

    23. LF

      Yeah.

    24. DK

      And, you know, that was built in. I mean, it was obvious that you had read The New York Times, it was obvious that you had read the reviews. I mean, uh, so a lot was taken for granted that was shared, uh, and, you know, when there were... When there were three television channels, it was obvious that you'd seen one of them, probably the same as... Uh, so sharing ex- sharing-

    25. LF

      Has always been there.

    26. DK

      Always... It was always there, it was just different.

    27. LF

      At the risk of, uh, inviting mockery from you, let me say that... (laughs) that I'm also a fan of Sartre and Camus and existentialist philosophers. And, um, I'm joking, of course, about mockery. But from the perspective of the two selves, what do you think of the existentialist philosophy of life? So, trying to really emphasize the experiencing self as the proper way to... uh, or the best way to live life.

    28. DK

      I don't know enough philosophy to answer that, but it's not, uh... You know, the emphasis on, on experience is also the emphasis in Buddhism.

    29. LF

      Yeah, right. That's right.

    30. DK

      So, uh, that's... You just have got to, to experience things and, and, and not to evaluate, and not to pass judgment, and not to score, not to keep score. So, uh-

  5. 1:00:00–1:15:00


    1. DK

      my experience, are a very personal experience. And- and I have to like the person I'm working with. Otherwise, you know, I mean, there is that kind of a collaboration which is like, uh, an exchange, a commercial exchange of, uh, "I'm giving this, you give me that," but the- the real ones are interpersonal, they're between people who like each other, and- and who like making each other think, and who like the way that the other person responds to your thoughts. Uh, you have to be lucky.

    2. LF

      Yeah, and I mean... But I already noticed the pa- even just me showing up here-... you've, uh, you've quickly started digging into a particular problem I'm working on, and already new information started to emerge. If y- is that a process y- y- just a process of curiosity-

    3. DK

      Yeah.

    4. LF

      ... of talking to people about problems and seeing?

    5. DK

      I'm curious about anything to do with AI and robotics and s- you know, and, uh, so, and I knew you were dealing with that, so I was curious.

    6. LF

      Just follow your curiosity?

    7. DK

      Yeah.

    8. LF

      Jumping around on, on the psychology front, the, uh, dramatic-sounding terminology of replication crisis, but really just the, at times, th- this effect that at times studies do not, are not fully generalizable. They don't-

    9. DK

      You're being polite. Uh, it's worse than that, but... (laughs)

    10. LF

      Is it?

    11. DK

      Yeah.

    12. LF

      So I'm actually not fully familiar-

    13. DK

      Well, I mean-

    14. LF

      ... to the degree how bad it is, right? So, what do you think is the source? Where do you think?

    15. DK

      I think I know what's going on, actually. I mean, I have a theory about what's going on. And what's going on is that there is, first of all, a very important distinction between two types of experiments. And one type is within-subject, so it's the same person-

    16. LF

      Right.

    17. DK

      ... as two experimental conditions. And the other type is between-subjects, where some people are this condition, other people are that condition. They're different worlds. And between-subject experiments are much harder to predict and much harder to anticipate. And the reason, uh, and they're also more expensive because you need more people, and it's, it's just... So between-subject experiments is where the problem is.

    18. LF

      Okay.

    19. DK

      Uh, it's not so much in within-subject experiments. It's really between. And there is a very good reason why the intuitions of researchers about between-subject experiments are wrong, and that's because when you are a researcher, you are in a within-subject situation. That is, you are imagining the two conditions, and you see the causality, and you feel it.

    20. LF

      Mm-hmm.

    21. DK

      And, but in the between-subjects condition, they don't... They, uh, they see, they live in one condition, and the other one is just nowhere. So, our intuitions are very weak about between-subject experiments. And that, I think, is something that people haven't realized. And, and in addition, because of that, we have no idea about the power of, uh, manipulations, of experimental manipulations, because the same manipulation is much more powerful when, when you are in the two conditions-

    22. LF

      Mm-hmm.

    23. DK

      ... than when you live in only one condition. And so, the experimenters have very poor intuitions about between-subject experiments. And, and there is something else, which is very important, I think, uh, which is that almost all psychological hypotheses are true. That is, in the sense that, you know, directionally, if you have a hypothesis that A really causes B, that, that it's not true that A causes the opposite of B. Maybe A just has very little effect. But hypotheses are true-

    24. LF

      Yeah.

    25. DK

      ... mostly, except, mostly, they're very weak. They're much weaker than you think when you are having images of... So, uh, the reason I'm excited about that is that I recently heard about, uh, some, uh, some friends of mine, uh, who, uh, they e- essentially funded 53 studies of behavioral change-

    26. LF

      Yep.

    27. DK

      ... by 20 different teams of people with a very precise objective of changing the number of times that people go to the gym, but-

    28. LF

      Mm-hmm.

    29. DK

      ... you know, so... And, and the success rate was zero.

    30. LF

      They're not successful?
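Kahneman's two claims here — that most psychological hypotheses are directionally true but weak, and that between-subject designs blunt an effect relative to within-subject designs — can be sketched numerically. The effect size (d = 0.2), sample sizes, and within-subject correlation below are illustrative assumptions, not figures from the episode; the power calculation is the standard normal approximation for a two-sided test.

```python
from math import sqrt
from statistics import NormalDist

norm = NormalDist()
Z_CRIT = norm.inv_cdf(0.975)  # two-sided alpha = 0.05

def between_power(d, n_per_group):
    # Two independent groups: SE of the mean difference is sqrt(2/n) in SD units.
    ncp = d / sqrt(2 / n_per_group)
    return 1 - norm.cdf(Z_CRIT - ncp)

def within_power(d, n, r):
    # Same subjects in both conditions: correlated scores shrink the
    # SD of the difference to sqrt(2 * (1 - r)) in SD units.
    d_eff = d / sqrt(2 * (1 - r))
    return 1 - norm.cdf(Z_CRIT - d_eff * sqrt(n))

d = 0.2  # a "true but weak" effect, in Kahneman's sense
print(f"between-subjects power: {between_power(d, n_per_group=20):.2f}")  # roughly 0.09
print(f"within-subject power:   {within_power(d, n=20, r=0.7):.2f}")      # roughly 0.21
```

With a weak true effect and 20 subjects per cell, the between-subject study detects the effect less than one time in ten — consistent with 53 underpowered gym studies all coming up empty even if their hypotheses were directionally right.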

  6. 1:15:00–1:18:35


    1. DK

      Uh, um, and, and humor would be more impressive than just factual conversation, which I think is, is easy. And allusions would be interesting, and metaphors would be interesting, I mean, but new metaphors, not practiced metaphors. So, there is a lot that's, you know, would be sort of impressive if... and that, uh, it's completely natural in conversation, but that you really wouldn't expect.

    2. LF

      Does the possibility of creating an, a human-level intelligence or superhuman-level intelligence system excite you? Scare you?

    3. DK

      Well, I mean, you know, I'm, uh-

    4. LF

      How does it make you feel?

    5. DK

      I find the whole thing fascinating, absolutely fascinating.

    6. LF

      So, exciting?

    7. DK

      I think, and exciting. It's also terrifying, you know? But, uh, but I'm not going to be around to see it. And, uh, so I'm curious about what is happening now, but I also know that, that predictions about it are silly.

    8. LF

      (laughs)

    9. DK

      Uh, we really have no idea what it will look like 30 years from now. No idea.

    10. LF

      Speaking of silly, bordering on the profound, let me ask the question of, in your view, what is the meaning of it all?

    11. DK

      (laughs)

    12. LF

      The meaning of life?

    13. DK

      Uh-

    14. LF

      These, uh, descendant of great apes that we are, why... what drives us as a civilization, as a human being, as a force behind everything that you've observed and studied? Is there any answer, or is it all-

    15. DK

      Uh-

    16. LF

      ... just a beautiful mess?

    17. DK

      There is no answer that, that I can understand. Uh, and I'm not, and I'm not actively looking for one, um, bec-

    18. LF

      Do you think an answer exists?

    19. DK

      No. There is no answer that we can understand. I'm not qualified to speak about what we cannot understand.

    20. LF

      (laughs)

    21. DK

      But there is, I know, that we cannot understand reality, you know? And, I mean, there are a lot of thing that we can do. I mean, you know, m- m- gravity waves. I mean, that's, that's a big moment for humanity. And-

    22. LF

      Yeah.

    23. DK

      ... when you imagine that ape, you know, being able to, to go back to The Big Bang.

    24. LF

      (laughs)

    25. DK

      That's, that... But, but-

    26. LF

      But the why-

    27. DK

      Yeah, the why.

    28. LF

      ... is bigger than us.

    29. DK

      (laughs) The why is hopeless, really.

    30. LF

      Danny, thank you so much. It was an honor. Thank you for speaking today.

Episode duration: 1:18:40


Transcript of episode UwwBG-MbniY
