Lex Fridman Podcast

Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot | Lex Fridman Podcast #49

Lex Fridman and Elon Musk on Neuralink, AI Safety, Tesla Autonomy, and Humanity's Future.

Lex Fridman (host) · Elon Musk (guest)
Nov 12, 2019 · 36m · Watch on YouTube ↗


  1. 0:00–15:00


    1. LF

      The following is a conversation with Elon Musk, part two. The second time we spoke on the podcast, with parallels, if not in quality, then in outfit, to the, objectively speaking, greatest sequel of all time, Godfather Part II. As many people know, Elon Musk is a leader of Tesla, SpaceX, Neuralink, and The Boring Company. What may be less known is that he's a world-class engineer and designer, constantly emphasizing first-principles thinking in taking on big engineering problems that many before him would consider impossible. As scientists and engineers, most of us don't question the way things are done. We simply follow the momentum of the crowd. But revolutionary ideas that change the world on small and large scales happen when you return to the fundamentals and ask, "Is there a better way?" This conversation focuses on the incredible engineering and innovation done in brain-computer interfaces at Neuralink. This work promises to help treat neurobiological diseases, to help us further understand the connection between individual neurons and the high-level function of the human brain, and finally, to one day expand the capacity of the brain through two-way communication with computational devices, the internet, and artificial intelligence systems. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, Apple Podcasts, Spotify, support it on Patreon, or simply connect with me on Twitter, @lexfridman, spelled F-R-I-D-M-A-N. And now, as an anonymous YouTube commenter referred to our previous conversation, as the, quote, "historical first video of two robots conversing without supervision," here's the second time, the second conversation with Elon Musk. Let's start with an easy question about consciousness. In your view, is consciousness something that's unique to humans, or is it something that permeates all matter, almost like a fundamental force of physics?

    2. EM

      I don't think consciousness permeates all matter.

    3. LF

      Panpsychists believe that.

    4. EM

      Yeah.

    5. LF

      There's a philosophical...

    6. EM

      How would you tell? (laughs)

    7. LF

      (laughs) That's true. That's a good point.

    8. EM

      I believe in the scientific method. I don't want to blow your mind or anything, but the scientific method is like, if you cannot test a hypothesis, then you cannot reach a meaningful conclusion that it is true.

    9. LF

      Do you think consciousness, understanding consciousness, is within the reach of science, of the scientific method?

    10. EM

      We can dramatically improve our understanding of consciousness. You know, I would be hard-pressed to say that we understand anything with complete accuracy, but can we dramatically improve our understanding of consciousness? I believe the answer is yes.

    11. LF

      Does an AI system, in your view, have to have consciousness in order to achieve human level or superhuman level intelligence? Does it need to have some of these human qualities like consciousness, maybe a body, maybe a fear of mortality, capacity to love, those kinds of silly human things?

    12. EM

      (sighs) It's different... You know, there's the scientific method, which I very much believe in, where something is true to the degree that it is testably so. Otherwise, you're really just talking about preferences or untestable beliefs, that kind of thing. So it ends up being somewhat of a semantic question, where we are conflating a lot of things with the word intelligence. If we parse them out and ask, are we headed toward a future where an AI will be able to outthink us in every way? Then the answer is unequivocally yes.

    13. LF

      In order for an AI system to outthink us in every way, does it also need to have the capacity for consciousness, self-awareness, and understanding?

    14. EM

      Well, it will be self-aware, yes. That's different from consciousness. I mean, to me, in terms of what consciousness feels like, it feels like consciousness is in a different dimension. But this could be just an illusion. You know, if you damage your brain in some way physically, you damage your consciousness, which implies that consciousness is a physical phenomenon, in my view. The thing that I think is really quite likely is that digital intelligence will be able to outthink us in every way, and it will soon be able to simulate what we consider consciousness to a degree that you would not be able to tell the difference.

    15. LF

      And from the aspect of the scientific method, it might as well be consciousness if we can simulate it perfectly.

    16. EM

      If you can't tell the difference. And this is sort of the Turing test, but think of a more advanced version of the Turing test. If you're talking to a digital superintelligence and can't tell if that is a computer or a human, like, let's say you're just having a conversation over the phone or a video conference, where you think you're talking to a person: it looks like a person, makes all of the right inflections and movements and all the small subtleties that constitute a human, talks like a human, makes mistakes like a human, and you literally just can't tell, are you video conferencing with a person or an AI?

    17. LF

      Might as well-

    18. EM

      Might as well.

    19. LF

      ...be human. So, on a darker topic, you've expressed serious concern about the existential threats of AI. It's perhaps one of the greatest challenges our civilization faces. But since, I would say, we're kind of optimistic descendants of apes, perhaps we can find several paths of escaping the harm of AI. So if I can give you three options, maybe you can comment on which you think is the most promising. One is scaling up efforts on AI safety and beneficial-AI research, in the hope of finding an algorithmic or maybe a policy solution. Two is becoming a multi-planetary species as quickly as possible. And three is merging with AI and riding the wave of that increasing intelligence as it continuously improves. What do you think is most promising, most interesting, as a civilization, that we should invest in?

    20. EM

      I think there's a tremendous amount of investment going on in AI. Where there's a lack of investment is in AI safety. And there should be, in my view, a government agency that oversees anything related to AI, to confirm that it does not represent a public safety risk. Just as there is a regulatory authority for food and drugs, the Food and Drug Administration, there's NHTSA for automotive safety, and there's the FAA for aircraft safety. We generally come to the conclusion that it is important to have a government referee, a referee that is serving the public interest in ensuring that things are safe when there's a potential danger to the public. I would argue that AI is unequivocally something that has the potential to be dangerous to the public, and therefore should have a regulatory agency, just as other things that are dangerous to the public do. But let me tell you, the problem with this is that the government moves very slowly. Usually the way a regulatory agency comes into being is that something terrible happens, there's a huge public outcry, and years after that, there's a regulatory agency or a rule put in place. Take something like seat belts. It was known for, I don't know, a decade or more that seat belts would have a massive impact on safety and save so many lives and serious injuries, and the car industry fought the requirement to put seat belts in tooth and nail. That's crazy.

    21. LF

      Yeah.

    22. EM

      And, I don't know, hundreds of thousands of people probably died because of that. And they said people wouldn't buy cars if they had seat belts, which is obviously absurd. You know, or look at the tobacco industry and how long they fought anything about smoking. That's part of why I helped make that movie Thank You for Smoking. You can sort of see just how pernicious it can be when you have these companies effectively achieve regulatory capture of government.

    23. LF

      Yeah.

    24. EM

      ... the bad. People in the AI community refer to the advent of digital superintelligence as a singularity. That is not to say that it is good or bad, but that it is very difficult to predict what will happen after that point, and that there's some probability it will be bad and some probability it will be good. We obviously want to affect that probability and have it be more good than bad.

    25. LF

      Well, let me ask about the merger-with-AI question and the incredible work that's being done at Neuralink. There's a lot of fascinating innovation going on here across different disciplines: the flexible wires, the robotic sewing machine that responds to brain movement, and everything around ensuring safety and so on. We currently understand very little about the human brain. Do you also hope that the work at Neuralink will help us understand more about the human mind, about the brain?

    26. EM

      Yeah, I think the work at Neuralink will definitely shed a lot of insight into how the brain and the mind work. Right now, the data we have regarding how the brain works is very limited. You know, we've got fMRI, which is kind of like putting a stethoscope on the outside of a factory wall, and then putting it all over the factory wall: you can sort of hear the sounds, but you don't really know what the machines are doing. You can infer a few things, but it's very broad brushstroke. In order to really know what's going on in the brain, you have to have high-precision sensors, and then you want to have stimulus and response: if you trigger a neuron, how do you feel? What do you see? How does it change your perception of the world?

    27. LF

      You're speaking to physically just getting close to the brain, being able to measure signals from the brain-

    28. EM

      Yeah.

    29. LF

      ... will sort of open the door inside the factory?

    30. EM

      Yes, exactly. Being able to have high-precision sensors that tell you what individual neurons are doing, and then being able to trigger a neuron and see what the response is in the brain, so you can see the consequences: if you fire this neuron, what happens? How do you feel? What does it change? It'll be really profound to have this in people, because people can articulate their change: if there's a change in mood, or if they can tell you whether they can see better or hear better, or form sentences better or worse, or their memories are jogged, that kind of thing.

  2. 15:00–30:00


    1. EM

      is like what we call human intelligence, you know? That's like the advanced computer relative to other creatures. Other creatures really don't have the computer, or they have a very weak computer relative to humans. But it sort of seems like surely the really smart thing should control the dumb thing, but actually the dumb thing controls the smart thing. (laughs)

    2. LF

      So, do you think some of the same kind of machine learning methods, whether that's natural language processing applications, are going to be applied for the communication between the machine and the brain, to learn how to do certain things like movement of the body, how to process visual stimuli, and so on? Do you see the value of using machine learning to understand the language of the two-way communication with the brain?

    3. EM

      Sure, yeah, absolutely. I mean, we're a neural net, and AI is basically a neural net. So it's like a digital neural net will interface with a biological neural net and hopefully bring us along for the ride, you know? But the vast majority of our intelligence will be digital.

    4. LF

      Does it...

    5. EM

      Like, think of the difference in intelligence between your cortex and your limbic system: it's gigantic. Your limbic system really has no comprehension of what the hell the cortex is doing. You know, it's just literally hungry, or tired, or angry, or sexy, or something, you know?

    6. LF

      (laughs)

    7. EM

      And then it communicates that impulse to the cortex and tells the cortex to go satisfy that. (laughs) And a great deal of thinking, like a truly stupendous amount of thinking, has gone into sex.

    8. LF

      Okay.

    9. EM

      Without purpose, without procreation.

    10. LF

      Yeah.

    11. EM

      Which is actually quite a silly action in the absence of procreation. It's a bit silly.

    12. LF

      Well...

    13. EM

      So, why are you doing it?

    14. LF

      Assuming everything-

    15. EM

      Because it makes the limbic system happy. That's why.

    16. LF

      That's why.

    17. EM

      But it's pretty absurd, really. (laughs)

    18. LF

      (laughs) Well, the whole of existence is pretty absurd in some kind of sense.

    19. EM

      Yeah. But I mean, a lot of computation has gone into, "How can I do more of that?"

    20. LF

      (laughs)

    21. EM

      With procreation not even being a factor. This is, I think, a very important area of research for NSFW.

    22. LF

      (laughs) Uh, an agency that should receive a lot of funding, especially after this conversation.

    23. EM

      (laughs) I propose the formation of a new agency.

    24. LF

      Oh boy. Uh (laughs)

    25. EM

      (laughs)

    26. LF

      What is the most exciting or some of the most exciting things that you see in the future impact of Neuralink, both in the science, the engineering and societal broad impact?

    27. EM

      So Neuralink, I think at first, will solve a lot of brain-related diseases. It could be anything from, like, autism, schizophrenia, memory loss. Like, everyone experiences memory loss at certain points in age. Parents can't remember their kids' names, that kind of thing. So there's, I think, a tremendous amount of good that Neuralink can do in solving critical damage to the brain or the spinal cord. There's a lot that can be done to improve quality of life of individuals, and those will be steps along the way. And then ultimately, it's intended to address the existential risk associated with digital superintelligence. Like, we will not be able to be smarter than a digital supercomputer. So, therefore, if you cannot beat 'em, join 'em. And at least we'll have that option.

    28. LF

      So you have hope that Neuralink will be able to be a kind of connection to allow us to merge, to ride the wave of the improving AI systems?

    29. EM

      I think the chance is above 0%.

    30. LF

      So it's non-zero?

  3. 30:00–35:54


    1. EM

      of the objects around you. Once you have an accurate vector space representation, the planning and control is relatively easy. I'd say it's relatively easy. Basically, once you have an accurate vector space representation, then you're kind of like a video game, like cars in Grand Theft Auto or something. They work pretty well. They drive down the road, they don't crash, pretty much, unless you crash into them. That's because they've got an accurate vector space representation of where the cars are, and then they're just rendering that as the output.

    2. LF

      Do you have a sense, high level, that Tesla's on track to being able to achieve full autonomy? So, on the highway-

    3. EM

      Yeah, yeah, absolutely.

    4. LF

      ... and still no driver state, driver sensing?

    5. EM

      We have driver sensing with torque on the wheel.

    6. LF

      That's right.

    7. EM

      Yeah.

    8. LF

      By the way, just a quick comment on karaoke. Most people think it's fun, but I also think it's a driving feature. I've been saying for a long time that singing in the car is really good for attention management and vigilance management.

    9. EM

      That's right. Tesla karaoke, uh, yeah, is great. It's one of the most fun features of the car.

    10. LF

      You think of a connection between fun and safety sometimes?

    11. EM

      Yeah, if you can do both at the same time, that's great.

    12. LF

      I just met with Ann Druyan, wife of, uh, Carl Sagan-

    13. EM

      Oh, yeah.

    14. LF

      ... who directed Cosmos.

    15. EM

      I'm generally a big fan of Carl Sagan. He was super cool and had a great way of putting things. All the consciousness of all civilization, everything we've ever known and done, is on this tiny blue dot. People also get too trapped in squabbles amongst humans and just don't think of the big picture, and they take civilization and our continued existence for granted. They shouldn't do that. Look at the history of civilizations: they rise and they fall. And now civilization is globalized, so civilization, I think, now rises and falls together. There's no geographic isolation. This is a big risk. Things don't always go up. That's an important lesson of history.

    16. LF

      In 1990, at the request of Carl Sagan, the Voyager 1 spacecraft, which has traveled farther into space than any other human-made object, turned around to take a picture of Earth from 3.7 billion miles away. And as you're talking about the pale blue dot, in that picture, the Earth takes up less than a single pixel-

    17. EM

      Yes.

    18. LF

      ... appearing as a tiny, pale blue dot, as Carl Sagan called it. He spoke about this dot of ours in 1994, and if you could humor me, I was wondering if, in the last two minutes, you could read the words that he wrote describing this pale blue dot.

    19. EM

      Sure. Yeah, so it's funny: the universe appears to be 13.8 billion years old. Earth is, like, four and a half billion years old. In another half billion years or so, the sun will expand and probably evaporate the oceans and make life impossible on Earth, which means that if it had taken consciousness 10% longer to evolve, it would never have evolved at all. Just 10% longer. And I wonder how many dead one-planet civilizations there are out there in the cosmos that never made it to the other planet and ultimately extinguished themselves or were destroyed by external factors. Probably a few. It's only just possible to travel to Mars, just barely. If g were 10% more, it wouldn't work, really. If g were 10% lower, it would be easy. Like, you can go single stage from the surface of Mars all the way to the surface of the Earth, because Mars is about 37% of Earth's gravity. You need a giant booster to get off the Earth. Channeling Carl Sagan: "Look again at that dot. That's here. That's home. That's us. On it, everyone you love, everyone you know, everyone you've ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor, and explorer, every teacher of morals, every corrupt politician, every superstar, every supreme leader, every saint and sinner in the history of our species lived there, on a mote of dust suspended in a sunbeam. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves. The Earth is the only world known so far to harbor life. There is nowhere else, at least in the near future, to which our species could migrate." This is not true. (laughs) This is false. Mars.

    20. LF

      And I think Carl Sagan would agree with that. He couldn't even imagine it at that time. So thank you for making the world dream, and thank you for talking today. I really appreciate it. Thank you.

    21. EM

      Thank you.

Episode duration: 36:09


Transcript of episode smK9dgdTl40
