The Joe Rogan Experience

Joe Rogan Experience #1188 - Lex Fridman

Lex Fridman is a research scientist at MIT, working on human-centered artificial intelligence.

Lex Fridman (guest) · Joe Rogan (host)
Oct 25, 2018 · 2h 55m · Watch on YouTube ↗

  1. 0:00 – 15:00

    1. LF

      (laughs)

    2. JR

      (laughs) Four, three, two, one. Hello, Lex.

    3. LF

      Hey, Joe.

    4. JR

      We're here, man. What's going on?

    5. LF

      We're here. Mecca.

    6. JR

      Thanks for doing this. You brought notes. You're seriously prepared.

    7. LF

      When you're jumping out of a plane, it's best to bring a parachute. This is my parachute.

    8. JR

      I, I understand. Yeah. Um, how long have you been working in artificial intelligence?

    9. LF

      My whole life, I think.

    10. JR

      Really?

    11. LF

      So I've, uh, when I was a kid, wanted to become a psychiatrist. I wanted to understand the human mind. I think the human mind is the most beautiful mystery that our entire civilization has taken on exploring through science. I think, you look up at the stars and you look at the universe out there, you had Neil deGrasse Tyson here, it's an amazing, beautiful scientific journey that we're taking on in exploring the stars, but the mind, to me, is a bigger mystery and more fascinating. And it's been the thing I've been fascinated by from the very beginning of my life, and just I think all of human civilization has been wondering, you know, what is in this- inside this thing, the hundred trillion connections that are just firing all the time, somehow making the magic happen to where you and I can look at each other, make words, all the fear, love, life, death that happens is all because of this thing in here. And understanding why is fascinating. And what I early on understood is that one of the best ways, for me at least, to understand the human mind is to try to build it, and that's what artificial intelligence-

    12. JR

      Ah.

    13. LF

      ... is, you know, i- it's, it's not enough to s- from a psychology perspective to study, from a psychiatry perspective to i- investigate from the outside. The best way to understand is to do.

    14. JR

      So, you mean almost like reverse engineering a brain.

    15. LF

      There's some stuff, exactly, reverse engineering the brain, there's some stuff that you can't understand until you try to do it. You can hypothesize your... I mean, we're both martial artists from various, uh, directions, you can hypothesize about what is the best martial art, but until you get in the ring, like what the UFC did, and test ideas is when you first realize that the touch of death that I've seen some YouTube videos on, that you perhaps cannot kill a person with a single touch, or your mind, or telepathy, that there are certain things that work, wrestling works, punching works. Okay, can we make it better? Can we create something like a touch of death? Can we figure out how to turn the hips, how to deliver a punch in the way that does do a significant amount of damage? And then you've, at that moment, when you start to try to do it, and you face some of the people that are trying to do the same thing, that's the scientific process. And you try, you actually begin to understand what is intelligence, and you begin to also understand how little we understand. It's like, uh, Richard Feynman, who I'm dressed after today-

    16. JR

      Are you? (laughs)

    17. LF

      (laughs) He's a physicist. I'm not sure if you're familiar with him.

    18. JR

      Sure, yeah.

    19. LF

      Yeah. Yeah, he always used to wear this exact thing, so I, I feel, I feel pretty badass wearing it. Uh-

    20. JR

      "If you think you know astrophysics, you don't know astrophysics."

    21. LF

      That's right.

    22. JR

      Well, he said it about quantum physics, right?

    23. LF

      Quantum physics.

    24. JR

      Yeah.

    25. LF

      That's right. That's right. Uh, so he was a, a quantum physicist. And he kind of, uh, I remember hearing him talk about s- that... understanding our, the nature of the universe, uh, of reality could be like an onion. We don't know.

    26. JR

      Mm.

    27. LF

      But it could be like an onion to where you think you know, you're studying a layer of an onion, and then you peel it away and there's more, and you keep doing it and there's an infinite number of layers. With intelligence, there's the same kind of component to where we think we know, we got it, we figured it out, we figured out how to beat the human world champion in chess. We solved intelligence. And then we try the next thing. Wait a minute, Go is really difficult to solve as a game. I came up when the game of Go was impossible for artificial intelligence systems to beat, and it has now recently been beaten. And-

    28. JR

      Within the last, like, five years, right?

    29. LF

      ... in the last five years. There's a lot of technically fascinating things about why that victory is interesting and important for artificial intelligence.

    30. JR

      It requires creativity, correct?

  2. 15:00 – 30:00

    1. LF

      deGrasse Tyson moment where I, it was, y- you said there's cut be- cut the-

    2. JR

      Cut the shit?

    3. LF

      ... cut the shit moments.

    4. JR

      Yes (laughs) .

    5. LF

      For me, for me, the, it, the, the movie opening is, everyth- everything about it was, uh, I- I was rolling my eyes the first time.

    6. JR

      Why were you rolling your eyes? What was the cut the shit moment?

    7. LF

      So, uh, that's a general bad tendency that I'd like to talk about amongst people who are scientists that are actually trying to do stuff, uh, they're trying to build the- the thing. Uh, it's- it's very tempting to roll your eyes and tune out-

    8. JR

      Mm-hmm.

    9. LF

      ... in a lot of aspects of artificial intelligence discussion and so on. For me, there's real reasons to roll your eyes and there's just... Well, let me, uh, let me just describe it. So the- this person in Ex Machina, no spoiler alerts, uh, is in the middle, what, like a Jurassic Park-type situation where he's like in the middle of a land that he owns?

    10. JR

      Yeah, we don't really know where it is-

    11. LF

      Right.

    12. JR

      ... it's not established but you have to fly over glaciers and you get to this place and there's rivers-

    13. LF

      Right.

    14. JR

      ... and he has this fantastic compound and inside this compound he appears to be working alone.

    15. LF

      Right. And he's like lift- like, he's like, um, doing curls I think, like dumbbells-

    16. JR

      Mm-hmm.

    17. LF

      ... and, uh, drinking heavily. So the, everything I know about science, everything I know about engineering is it doesn't happen alone. So the situation of a compound with no hundreds of engineers there-

    18. JR

      Right.

    19. LF

      ... working on this, is not, it's not-

    20. JR

      Feasible.

    21. LF

      It's not feasible, it's not possible. And the other, uh, moments like that were the technical, the discussion about how it's technically done. They- they threw in a few bits of jargon to spice stuff up and it doesn't make any sense.

    22. JR

      Well, that's where I am blissfully-

    23. LF

      Yeah.

    24. JR

      ... ignorant.

    25. LF

      Yep.

    26. JR

      So I watch it, I go, "This movie's awesome."

    27. LF

      Yeah (laughs) .

    28. JR

      And you're like, "Ah, I know too much."

    29. LF

      Yeah, know too much.

    30. JR

      (laughs)

  3. 30:00 – 45:00

    1. JR

      Anderson Silva thinks Steven Seagal is ... I'm gonna, I'm gonna put this in a respectful way. He ... Anderson Silva has a wonderful sense of humor.

    2. LF

      Mm-hmm.

    3. JR

      And Anderson Silva is very playful, and he thought it would be hilarious if-... if people believed that he was learning all of his martial arts from Steven Seagal.

    4. LF

      From Steven Seagal, got it.

    5. JR

      He also loves Steven Seagal movies, legitimately, so treated him with a great deal of respect.

    6. LF

      Right.

    7. JR

      He also recognizes that Steven Seagal actually is a master of Aikido. He really does understand Aikido and was one of the very first Westerners that was teaching in Japan, speaks fluent Japanese, sp- was teaching at a dojo in Japan, and is, you know, a, a legitimate master of Aikido.

    8. LF

      Right.

    9. JR

      The problem with Aikido is it's, it's one of those martial arts that has merit i- in a, in a vacuum. Like, if you, if you're in a world where there's no p- NCAA wrestlers or no judo players or no Brazilian jiu-jitsu black belts or no, um, Muay Thai kickboxers-

    10. LF

      Mm-hmm.

    11. JR

      ... there might be something to that Aikido stuff. But in the world where all those other martial arts exist, and we've examined all the intricacies of hand-to-hand combat, it falls horribly short.

    12. LF

      Well, see, this is the point I'm trying to make. You just said that, "We've investigated, uh, all the intricacies."

    13. JR

      Yeah.

    14. LF

      You said, "All the intricacies of hand-to-hand combat."

    15. JR

      Mm-hmm.

    16. LF

      I mean, you're just speaking, but you wanna open your mind to the possibility that Aikido has, uh-

    17. JR

      Some techniques that are effective.

    18. LF

      ... some techniques that are effective.

    19. JR

      Yeah, when I say all, that's, you're, you're correct. That's not a, uh, correct way of describing it.

    20. LF

      Right.

    21. JR

      'Cause there's always new moves that are being ... Like, for instance, um, in this, uh, recent fight between Anthony Pettis and Tony Ferguson, Tony Ferguson actually used Wing Chun in a fight.

    22. LF

      Mm-hmm.

    23. JR

      He, he trapped one of Anthony Pettis' hands and hit him with an elbow.

    24. LF

      Right.

    25. JR

      He, uh, basically used a technique that you would use on a Wing Chun dummy-

    26. LF

      Right.

    27. JR

      ... and he did it in an actual-

    28. LF

      In an actual fight.

    29. JR

      ... world-class mixed martial arts fight. And I remember watching it, "Wow," going, "This crazy motherfucker actually pulled that off."

    30. LF

      Mm-hmm.

  4. 45:00 – 1:00:00

    1. LF

      important to think about. But what happens is if you think too much about the, uh, encroaching doom of humanity, there's some aspect to it that is paralyzing-

    2. JR

      Hmm.

    3. LF

      ... where you almost... It turns you off, uh, from actually thinking about the, these ideas. They, there's something so appealing. It's like a black hole that pulls you in and if you notice folks like Sam Harris and so on spend a large amount of the time ta- you know, uh, they're talking about the negative stuff about something that's far away, not to say it's not wrong to talk about it, but they spend very little time about the potential positive impacts in the near term and also the negative impacts in the near term. So-

    4. JR

      Let's go over those.

    5. LF

      Yep. Fairness. So the w- the more and more we put decisions about our lives into the hands of artificial intelligence systems, whether you get a loan or, uh, an autonomous vehicle context, or in terms of, uh, re- recommending jobs for you on LinkedIn or all these kinds of things, the idea of fairness becomes a bias in, in these machine learning systems, becomes a really big threat. Because the way current neuro, uh, the way current artificial intelligence systems function is they train on data. So there's no way to, for them to somehow gain a greater intelligence than our, than the data we provide them with. So we provide them with actual data and so they carry over, if we're not careful, the biases in that data, the, the discrimination that's inherent in our current society as, as represented by the data. So they'll, they'll just carry that forward.
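The "bias in, bias out" point Lex is making here can be illustrated with a minimal sketch. The data, groups, and approval rates below are entirely invented for illustration: a model fit to skewed historical decisions has no way to be fairer than the decisions it learns from, so it simply reproduces the skew.

```python
# Hypothetical loan-decision history: group "A" was approved 90% of the
# time, group "B" only 20%, for otherwise identical applicants.
from collections import defaultdict

history = [("A", 1)] * 90 + [("A", 0)] * 10 + [("B", 1)] * 20 + [("B", 0)] * 80

# "Training": estimate P(approve | group) from the data. The model never
# sees anything but past human decisions, so it inherits their bias.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in history:
    counts[group][0] += approved
    counts[group][1] += 1

def predict(group):
    approvals, total = counts[group]
    return approvals / total  # learned approval probability

print(predict("A"))  # 0.9 -- bias in, bias out
print(predict("B"))  # 0.2
```

A toy frequency model rather than a neural network, but the failure mode is the same: the discrimination in the data is carried forward into every future prediction.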

    6. JR

      How so?

    7. LF

      Uh, so there's people working on this, uh, more so to s- uh, to show really the negative impacts, uh, in terms of getting a loan or whether to say whether this particular human being should be convicted or not of a crime. Or there's, there's ideas there that can carry... You know, in our criminal system, there's discrimination and if you use data from that criminal system to then assist deciders, judges, juries, lawyers in making this incriminating, in, in making a decision of what kind of penalty a person gets, they're gonna carry that forward.

    8. JR

      So you mean like racial, economic biases?

    9. LF

      Racial, economic, yeah.

    10. JR

      Um, geographical?

    11. LF

      And that's a... Sort of I don't study that e- exact problem, but it's, it's you're aware of it because of the tools we're using. It only... So the two ways... So I'd like to talk about neural networks with-

    12. JR

      Okay.

    13. LF

      ... with Joe. (laughs)

    14. JR

      Sure. Let's do it.

    15. LF

      Okay. So with the current approaches, there's been a lot of, uh, demonstrated improvements, exciting new advancements in artificial intelligence, and those, for the most part, have to do with neural networks, something that's been around since the 1940s and has gone through two AI winters, where everyone was super hyped and then super bummed and super hyped again and bummed again, and now we're in this other hype cycle. And what neural networks are is these collections of interconnected simple compute units, they're all similar. It's kind of, like, it's inspired by our own brain. We have a bunch of little neurons interconnected, and the idea is these interconnections are initially random, but if you feed it with some data, they'll learn to connect just like they do in our brain, in a way that interprets that data. They form representations of that data and can make decisions.

      But there's only two ways to train those neural networks that we have now. One is we have to provide a large dataset. If you want that neural network to tell the difference between a cat and a dog, you have to give it 10,000 images of a cat and 10,000 images of a dog. You need to give it those images. And who tells you what a picture of a cat and a dog is? It's humans, so it has to be annotated. So as teachers of these artificial intelligence systems, we have to collect this data, we have to invest a significant amount of effort to annotate that data, and then we teach neural networks, uh, to make that prediction. What's not obvious there is how poor of a method that is to achieve any kind of greater degree of intelligence. You're just not able to get very far beyond very specific narrow tasks of cat versus dog or, uh, should I give this person a loan or not, these kinds of simple, simple tasks. I would argue autonomous vehicles are actually beyond the scope of that kind of approach.
And then the other realm of where neural networks can be trained is if you can simulate that world. So if the world is simple enough or it's conducive to be formalized sufficiently to where you can simulate it, so a game of chess is just, it's- it's- there's rules. Game of Go, there's rules, so you can simulate it. The- the big exciting thing about Google DeepMind is that they were able to beat the world champion by doing something called competitive self-play, uh, which is to have two systems play against each other. They don't need the human. They play against each other. But that only works, and that's a beautiful idea and super powerful and really interesting and surprising, but that only works on things like games and simulation. So now if I wanted to, uh... Sorry to be going to analogies of like UFC for example. (laughs) I- if I wanted to train a system to become the world champion, uh, beat, uh, what's his name, Nurmagomedov, right? I could play the UFC game. I- I could create sys- that. I could create two neural networks that play, use competitive self-play to play in that virtual world and they could become state of the art, the best fighter ever in that game. But transferring that to the physical world, we don't know how to do that. We don't know how to teach systems to do stuff in the real world. So some of the stuff that freaks you out often is Boston Dynamics robots.
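The first of the two training regimes Lex describes — humans annotate examples, the network adjusts itself to fit them — can be sketched with the simplest possible "network," a single perceptron. The feature vectors and labels below are invented stand-ins for the 10,000 annotated cat and dog images.

```python
# Toy supervised learning: human-annotated examples, one artificial neuron.
labeled_data = [
    ((1.0, 0.2), 1),   # "cat"-like feature vectors, labeled 1 by a human
    ((0.9, 0.1), 1),
    ((0.2, 1.0), 0),   # "dog"-like feature vectors, labeled 0
    ((0.1, 0.8), 0),
]

w = [0.0, 0.0]  # connection weights, before training
b = 0.0

def predict(x):
    return 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0

# Perceptron rule: nudge the weights whenever the prediction disagrees
# with the human annotation. The "teaching" is entirely in the labels.
for _ in range(20):
    for x, label in labeled_data:
        error = label - predict(x)
        w[0] += error * x[0]
        w[1] += error * x[1]
        b += error

print([predict(x) for x, _ in labeled_data])  # [1, 1, 0, 0] -- matches the labels
```

This also makes the narrowness concrete: the trained weights encode nothing but this one cat-versus-dog boundary, which is why the approach doesn't transfer to open-ended tasks like driving.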

    16. JR

      Ugh.

    17. LF

      Yeah. Those, that- (laughs)

    18. JR

      Every day I go to the Instagram page and I just go, "What the fuck are you guys doing?"

    19. LF

      So, uh-

    20. JR

      Engineering our demise.

    21. LF

      (laughs) Marc Raibert, uh, the CEO, spoke at the class, uh, I taught. He calls himself the bad boy of robotics, so he- he's having a little fun with it.

    22. JR

      He should definitely stop doing that.

    23. LF

      (laughs)

    24. JR

      Don't call yourself a bad boy of anything.

    25. LF

      That's true.

    26. JR

      How old is he?

    27. LF

      (laughs)

    28. GU

      (laughs)

    29. LF

      He's, he... (laughs) Okay, he's one of the greatest roboticists of our generation.

    30. JR

      That's great.

  5. 1:00:00 – 1:15:00

    1. LF

      including revolutionizing infrastructure and rethinking of transportation in general, it's possible to do in the next five, 10 years, maybe 20. But it's not easy, like everybody says. And-

    2. JR

      But does anybody say it's easy?

    3. LF

      Yeah. There's a lot of hype, uh, behind autonomous vehicles. Elon Musk himself and other people have promised autonomous vehicles. Th- that timeline has already passed. It's been, "In 2018, we'll have autonomous vehicles." Now, Ford, GM-

    4. JR

      Well, they're, they're semi-autonomous now, right? So-

    5. LF

      Yeah.

    6. JR

      ... they, I know they do, they can brake for pedestrians. Like, if they see pedestrians, they're supposed to brake for them and avoid them. Right?

    7. LF

      Th- that's part of the... Technically, no.

    8. JR

      Wasn't that an issue with an Uber car that hit a pedestrian that was o- operating autonomously?

    9. LF

      That's right.

    10. JR

      Someone, a homeless person stepped out off of a median right into traffic and it, it nailed it, and then they found out it didn't have-... just one of the settings wasn't in place.

    11. LF

      That's right. But that was an autonomous vehicle being tested in Arizona.

    12. JR

      Mm-hmm.

    13. LF

      And, uh, unfortunately there was a fatality. A person, a person died.

    14. JR

      Yeah.

    15. LF

      A pedestrian was killed. So what happened there, that's the, that's the thing I'm saying is really hard. That's full autonomy. That's technically when the car, you can remove the steering wheel and the car just drives itself and take care of-

    16. JR

      Right.

    17. LF

      ... everything. Everything I've seen, everything we're studying, so we're studying drivers in Tesla vehicles, we're building our own vehicles. It seems that it'll be a long way off before we can solve the fully autonomous pr- driving problem.

    18. JR

      Because of pedestrians?

    19. LF

      And... But two things. I mean, pedestrians and cyclists and the edge cases of driving. All the stuff we take for granted. The same reason we take for granted how hard it is to walk, how hard it is to pick up this bottle, us, our intuition about what's hard and easy is really flawed as human beings.

    20. JR

      Um, can I interject? What if, uh, all cars were autonomous?

    21. LF

      That's right.

    22. JR

      If we got to a point where every single car on the highway is operating off of a similar algorithm or off the same system, then things would be far easier, right? Because then you have to... don't, don't deal with random kinetic movements, people just changing lanes, people looking at their cellphone, not paying attention to what they're doing. All sorts of things that you have to be wary of right now driving and pedestrians and bicyclists.

    23. LF

      Totally. And that's, that's in the realm of things I'm talking about where you think outside the box and, and revolutionize our transportation system.

    24. JR

      Mm-hmm.

    25. LF

      That requires government to, uh, to play along.

    26. JR

      Seems like that's going that way, though, right? D- Do you feel like-

    27. LF

      Yeah.

    28. JR

      ... that one day we're gonna have, uh, autonomous driving pretty much everywhere?

    29. LF

      Espe-

    30. JR

      Especially on the highway?

  6. 1:15:00 – 1:24:53

    1. LF

      the existential threat is really important to have as part of the conversation.

    2. JR

      Mm-hmm.

    3. LF

      But there's this level, there's this line. It's hard to put into words. There's a, there's a line that you cross when that worry becomes-

    4. JR

      Hyperbole.

    5. LF

      Yeah. And, and then it ... There's something about human psyche where it becomes paralyzing for some reason.

    6. JR

      Right.

    7. LF

      Now, when I have beers with my friends, the non-AI folks, we actually go ... We cross that line all day.

    8. JR

      Mm-hmm.

    9. LF

      And have fun with it. I, I talk to-

    10. JR

      Maybe I should get you drunk right now.

    11. LF

      Yeah. Maybe. (laughs)

    12. JR

      (laughs)

    13. LF

      Uh. I'm ... Regret every moment of it.

    14. JR

      (clears throat)

    15. LF

      This ... I talked to Steve Pinker, uh, as... Enlightenment Now.

    16. JR

      Mm-hmm.

    17. LF

      His book. Kinda highlights that (sighs) th- that kind of, um, that ... He's totally n- doesn't find that appealing, because that's crossing all realms of rationality and reason.

    18. JR

      When you say, "That appealing," what do you mean?

    19. LF

      Uh, crossing the line into what will happen in 50 years.

    20. JR

      What could happen.

    21. LF

      What could happen.

    22. JR

      He doesn't find that appealing.

    23. LF

      He doesn't find it appealing because he's studied ... And I'm not sure I d- I agree with him, uh, to the degree that he takes it. Uh, h- he finds that there's no e- evidence. He, he wants th- there t- all our discussions to be grounded in evidence and data. And he f- ... He highlights the fact that there's something about human psyche that desires this negativity, that it wants ... There's, there's something undeniable where we want to create and engineer the gods that overpower us and destroy us.

    24. JR

      We want to or we worry about it?

    25. LF

      There's stuff-

    26. JR

      I don't know if we want to.

    27. LF

      That we ... Uh, let me rephrase that. We want to worry about it.

    28. JR

      Yeah.

    29. LF

      There's something about the psyche that but th-

    30. JR

      Well, because you can't take the genie and put it back in the bottle.

Episode duration: 2:55:44

Transcript of episode j5FOumrXyww
