Lex Fridman Podcast

Elon Musk: Neuralink and the Future of Humanity | Lex Fridman Podcast #438

Elon Musk is CEO of Neuralink, SpaceX, Tesla, and xAI, and CTO of X. DJ Seo is COO & President of Neuralink. Matthew MacDougall is Head Neurosurgeon at Neuralink. Bliss Chapman is Brain Interface Software Lead at Neuralink. Noland Arbaugh is the first human to have a Neuralink device implanted in his brain.

*Transcript:* https://lexfridman.com/elon-musk-and-neuralink-team-transcript

*OUTLINE:*
- 0:00 - Introduction
- 0:49 - Elon Musk
- 4:06 - Telepathy
- 10:45 - Power of human mind
- 15:12 - Future of Neuralink
- 20:27 - Ayahuasca
- 29:57 - Merging with AI
- 34:44 - xAI
- 36:57 - Optimus
- 43:47 - Elon's approach to problem-solving
- 1:01:23 - History and geopolitics
- 1:05:53 - Lessons of history
- 1:10:12 - Collapse of empires
- 1:17:55 - Time
- 1:20:37 - Aliens and curiosity
- 1:28:12 - DJ Seo
- 1:36:20 - Neural dust
- 1:43:03 - History of brain–computer interface
- 1:51:07 - Biophysics of neural interfaces
- 2:01:36 - How Neuralink works
- 2:07:26 - Lex with Neuralink implant
- 2:27:24 - Digital telepathy
- 2:38:27 - Retracted threads
- 2:44:01 - Vertical integration
- 2:50:55 - Safety
- 3:00:50 - Upgrades
- 3:09:53 - Future capabilities
- 3:39:09 - Matthew MacDougall
- 3:44:58 - Neuroscience
- 3:52:07 - Neurosurgery
- 4:03:11 - Neuralink surgery
- 4:22:20 - Brain surgery details
- 4:38:03 - Implanting Neuralink on self
- 4:53:57 - Life and death
- 5:03:17 - Consciousness
- 5:06:11 - Bliss Chapman
- 5:19:27 - Neural signal
- 5:26:19 - Latency
- 5:30:59 - Neuralink app
- 5:35:40 - Intention vs action
- 5:46:54 - Calibration
- 5:56:26 - Webgrid
- 6:19:28 - Neural decoder
- 6:40:03 - Future improvements
- 6:48:59 - Noland Arbaugh
- 6:49:08 - Becoming paralyzed
- 7:02:43 - First Neuralink human participant
- 7:06:45 - Day of surgery
- 7:24:31 - Moving mouse with brain
- 7:49:50 - Webgrid
- 7:57:52 - Retracted threads
- 8:06:16 - App improvements
- 8:13:01 - Gaming
- 8:23:59 - Future Neuralink capabilities
- 8:26:55 - Controlling Optimus robot
- 8:31:16 - God
- 8:33:21 - Hope

Lex Fridman (host), Elon Musk (guest), DJ Seo (guest), Matthew MacDougall (guest), Bliss Chapman (guest), Noland Arbaugh (guest)
Aug 2, 2024 · 8h 37m

EVERY SPOKEN WORD

  1. 0:00 - 0:49

    Introduction

    1. LF

      The following is a conversation with Elon Musk, DJ Seo, Matthew MacDougall, Bliss Chapman, and Noland Arbaugh, about Neuralink and the future of humanity. Elon, DJ, Matthew, and Bliss are, of course, part of the amazing Neuralink team, and Noland is the first human to have a Neuralink device implanted in his brain. I speak with each of them individually, so use timestamps to jump around, or, as I recommend, go hardcore and listen to the whole thing. This is the longest podcast I've ever done. It's a fascinating, super technical and wide-ranging conversation, and I loved every minute of it. And now, dear friends, here's Elon Musk, his fifth time on this, the Lex Fridman podcast.

  2. 0:49 - 4:06

    Elon Musk

    1. EM

      Drinking coffee or water?

    2. LF

      Water.

    3. EM

      (laughs)

    4. LF

      (laughs) I'm so over-caffeinated right now. Do you want some caffeine?

    5. EM

      I mean, sure.

    6. LF

      There's a, there's a nitro drink.

    7. EM

      This will keep you up for like, you know, tomorrow afternoon, basically (laughs) .

    8. LF

      Yeah. I don't have any sugar.

    9. EM

      So what, what is nitro? It's just got a lot of caffeine or something?

    10. LF

      Don't ask questions. It's called "nitro."

    11. EM

      (laughs)

    12. LF

      Do you need to know anything else?

    13. EM

      It's got, it's got nitrogen in it. That's ridiculous. I mean, what we breathe is 78% nitrogen anyway.

    14. LF

      (laughs)

    15. EM

      (laughs) What do you need to add more for? (laughs) Most, most people think that they're breathing oxygen and they're actually breathing 78% nitrogen. You need, like, a milk bar. Like-

    16. LF

      A milk bar? (laughs)

    17. EM

      (laughs) Like from A Clockwork Orange. (laughs)

    18. LF

      Yeah. Yeah. Is that a top three Kubrick film for you?

    19. EM

      A Clockwork Orange? It's pretty good. I mean, it's demented. Jarring, let's say. (laughs)

    20. LF

      Uh. (laughs) Okay. Uh, okay. So first, let's step back, and big congrats on, uh, getting Neuralink implanted into a human. That's a historic step for Neuralink, and, uh-

    21. EM

      Oh, thanks, yeah.

    22. LF

      ... there's many more to come.

    23. EM

      Yeah, we just, um... Obviously have our second implant as well.

    24. LF

      How did that go?

    25. EM

      Uh, so far, so good. It's, uh, there... Looks like we've got, um, I think on the order of 400 electrodes that are, are providing signals. So...

    26. LF

      Nice.

    27. EM

      Yeah.

    28. LF

      How, how quickly do you think the number of human participants will scale?

    29. EM

      Uh, it depends somewhat on the regulatory approval, the rate at which we get re- regulatory approvals. Uh, so we're hoping to do 10 by the end of this year, a total of 10, so eight more.

    30. LF

      And with each one, you're gonna be learning a lot of lessons about the neurobiology of the brain, the everything, the whole chain of the neural link, the decoding, the s- the signal processing, all that kind of stuff.

  3. 4:06 - 10:45

    Telepathy

    1. EM

    2. LF

      Yeah, that BPS is an interesting metric to measure. There might be a big leap in the experience once you reach a certain level of BPS.

    3. EM

      Yeah.

    4. LF

      Like entire new ways of interacting with a computer might be unlocked.

    5. EM

      And with humans.

    6. LF

      With other humans.

    7. EM

      Provided they have (laughs) , they want a Neuralink too.

    8. LF

      Right. Do you think-

    9. EM

      Otherwise, they won't be able to absorb the signals fast enough.

    10. LF

      Do you think they'll improve the quality of intellectual discourse?

    11. EM

      Well, I think you can, you could think of it, you know, if, if you were to slow down communication, how, how would you feel about that? You know, if you'd only talk at, let's say, one-tenth of normal speed, you'd be like, "Wow, that's agonizingly slow."

    12. LF

      Yeah.

    13. EM

      Uh, so now imagine you could speak at, communicate clearly, um, at 10 or 100 or 1,000 times faster than normal.

    14. LF

      Listen, uh, I'm pretty sure nobody in their right mind listens to me at 1x. They listen at 2x.

    15. EM

      (laughs)

    16. LF

      So I can, I can, I can only imagine what 10x would ex- feel like, or could actually understand it.

    17. EM

      I usually default to 1.5x. Um, you can do 2x, but I... Well, actually, if I'm trying to go, if, if I'm listening to somebody who gets too, in like sort of 15, 20-minute segments to go to sleep, then I'll do it 1.5x. Um, if I'm paying attention, I'll do 2x. (laughs)

    18. LF

      Right. Um-

    19. EM

      But actually, if, if you start actually listening to podcasts or, or sort of audio books or anything at... If you get used to doing it at 1.5, then, then one sounds painfully slow.

    20. LF

      I'm still holding on to one because I'm afraid. I'm afraid of myself becoming bored with the reality, with the real world where everyone's speaking in 1x. (laughs)

    21. EM

      (laughs) Well, it depends on the person. You can speak very fast, like we can, we can communicate very quickly. And also if you use a wide range of... If your vo- if your vocabulary is, is larger, your, uh, bit rate, effective bit rate is higher.

    22. LF

      (laughs) That's a good way to put it.

    23. EM

      Yeah.

    24. LF

      The effective bit rate. I mean, that is the question, is how much information is actually compressed in the low bit transfer of language.

    25. EM

      Yeah, if you, if there's, if there's a single word that is able to convey something that would normally require, um, I don't know, 10 simple words, then you've, you've got a, you know, maybe a 10X compression on your hands. And that's really like with memes. Memes are like data, data compression. Um, it conveys a whole... Uh, there's, you're simultan- simultaneously hit with a wide range of symbols that you can interpret, um, and it's, you, you kinda get it, um, faster than if it were words or, or a simple picture.

    26. LF

      And of course, you're referring to memes broadly, like ideas.

    27. EM

      Yeah. There's a, an entire idea structure that is like a, an idea template, and then you can add something to that idea template. Uh, but somebody has that preexisting idea template in their head. Um, so when you add that incremental bit of information, you're conveying, uh, much more than if you just, you know, said a few words. You, it's everything associated with that meme.

    28. LF

      You think there'll be emergent leaps of capability as you scale the number of electrodes?

    29. EM

      Yeah.

    30. LF

      Like, there'll be a certain... Do you think there'll be like an actual number where it just, the human experience will be altered?
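The "effective bit rate" idea from this exchange can be made concrete with back-of-the-envelope arithmetic. All numbers below (speaking rate, vocabulary size) are illustrative assumptions, not figures from the conversation:

```python
import math

# Rough sketch of the effective bit rate of conversational speech.
# Both numbers are illustrative assumptions.
words_per_minute = 150     # assumed conversational speaking rate
vocabulary_size = 20_000   # assumed active vocabulary

# Upper bound: treat each word as one of 20,000 equiprobable symbols.
bits_per_word = math.log2(vocabulary_size)   # ~14.3 bits
bits_per_second = words_per_minute / 60 * bits_per_word

print(f"~{bits_per_second:.0f} bits/s at 1x, ~{2 * bits_per_second:.0f} bits/s at 2x")

# The compression point: one word (or meme) that replaces a 10-word
# paraphrase raises the effective bit rate ~10x without speaking faster.
```

A larger vocabulary raises bits per word, which is the sense in which Musk says a larger vocabulary means a higher effective bit rate.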

  4. 10:45 - 15:12

    Power of human mind

    1. EM

    2. LF

      Well, it is a very interesting question for a super intelligent species. What use are humans?

    3. EM

      Um, I think there is some argument for humans as a source of will.

    4. LF

      Will?

    5. EM

      Will, yeah. Source of will or purpose. So if you, if you consider the human mind as being essentially the, there's the primitive limbic elements, which basically even like reptiles have, and there's the cortex. That's the thinking and planning part of the brain. Now, the cortex is much smarter than the limbic system, and yet it's largely in service to the limbic system. It's trying to make the limbic system happy. I mean, the sheer amount of compute that's gone into people trying to get laid is insane.

    6. LF

      Hm.

    7. EM

      Um, without the pur- without actually seeking procreation.

    8. LF

      Right.

    9. EM

      They're just literally trying to do the sort of simple motion, um.

    10. LF

      Right.

    11. EM

      (laughs)

    12. LF

      (laughs)

    13. EM

      And they get a kick out of it.

    14. LF

      Yeah.

    15. EM

      So this, uh, simple, which in the abstract rather absurd motion, which is sex, uh, the cortex is putting massive amount of compute into trying to figure out how to do that.

    16. LF

      So like 90% of distributed compute with a human species is spent on trying to get laid probably, like-

    17. EM

      A massive amount.

    18. LF

      ... a large percentage.

    19. EM

      Yeah, yeah. There's no purpose to most sex except, uh, hedonistic. You know, it's just sort of a ha- joy or whatever, dopamine release. Um-... now, I want to know, once in a while it's procreation, but for humans it's mostly, modern humans, it's mostly, uh, recreational. Um, and, uh, and so, so the, so your cortex, much smarter than your limbic system, is trying to make the limbic system happy 'cause the limbic system wants to have sex, so, um, or want some tasty food, or whatever the case may be. And then that, that is then further augmented by the tertiary system which is your phone, your laptop, iPad, whatever, you know, all, all your computing stuff. That's your tertiary layer. So you're actually already a cyborg. Uh, you have this tertiary compute layer which is in, uh, in the form of your, your computer with all the applications, all your compute devices. Um, and, uh, and so (laughs) in the getting laid front, there's actually a massive amount of compute, of, of digital compute also trying to get laid. (laughs) .

    20. LF

      (laughs)

    21. EM

      You know, with like Tinder and whatever, you know?

    22. LF

      Yeah. So the, the compute that we humans have built is also participating. (laughs) .

    23. EM

      (laughs) Yeah. I mean, there's like gigawatts of compute going into getting laid, like of digital compute.

    24. LF

      Yeah. (laughs) What if AGI was-

    25. EM

      This is happening as we speak.

    26. LF

      (laughs) If we merge with AI, it's just gonna expand the compute that we humans use-

    27. EM

      (laughs) Pretty much.

    28. LF

      ... to try to get laid.

    29. EM

      Well, it's one of the things certainly, yeah.

    30. LF

      Yeah.

  5. 15:12 - 20:27

    Future of Neuralink

    1. EM

      the universe.

    2. LF

      So do you think people, when you have a Neuralink with 10,000, 100,000 channels, most of the use cases will be communication with AI systems?

    3. EM

      Well, if ass- assuming that the, there are not, um... Yeah, I mean, there's, there's this solving basic, uh, neurological issues that people have, you know, if they've got, um, damaged neurons in their spinal cord or neck or, you know, um, as, as is the case with the first two patients then, you know, there's obviously the first order of business is solving fundamental neuron damage in the spinal cord, neck, or in the brain itself. Um, so, you know, and a, a se- a second, um, product is called Blindsight which is to enable people who are completely blind, lost both eyes or optic nerve or just can't see at all, uh, to be able to see, um, by directly triggering the neurons in the visual cortex. So we're, we're just starting at the basics here, you know? This is like, um, very, the s- the simple stuff, uh, relatively speaking is, uh, solving, um, neuron damage.

    4. LF

      Mm-hmm.

    5. EM

      Um, you know, it can also solve, uh, I think probably schizophrenia, you know, um, if, uh, people have seizures of some kind, it could probably solve that. Um, it could help with, uh, memory. There, there's, so there's like a kind of a, a tech tree, if you will (laughs) , of like, you got the basics, um, like, like you need, you need literacy before you can have, you know, Lord of the Rings (laughs) .

    6. LF

      Mm-hmm.

    7. EM

      You know? (laughs) .

    8. LF

      (laughs) Got it.

    9. EM

      So do you have letters and alphabet?

    10. LF

      (laughs)

    11. EM

      Okay, great. Uh, words? You know, then eventually you get, uh, sagas. So, you know, I think there's th- there may be some, you know, things to worry about in, in the future, but the first several years are really just solving basic neurological damage. Like, for people who have essentially complete or near complete loss of from the brain to the body, um, like Stephen Hawking would be an example, uh, the Neuralink would be incredibly profound 'cause I mean, you can imagine if Stephen Hawking could communicate as fast as we're communicating, perhaps faster. Um, and that's certainly po- uh, possible. Probable, in fact. Likely, I'd say.

    12. LF

      So there is a, a kind of dual track of medical and non-medical? Meaning, so everything you've-

    13. EM

      Well-

    14. LF

      ... talked about could be applied to people who are non-disabled in the future?

    15. EM

      The logical thing to do is, sensible thing to do is to start off solving, um, basic, uh, neuron damage issues.

    16. LF

      Yes.

    17. EM

      Um, 'cause the, th- there's obviously some risk with, with the new device. There's, you can't get the risk down to zero. Um, it's not possible. So you wanna have, um, the highest possible reward.... given that, given there's a certain irreducible risk. And if, um, if somebody's able to have a profound improvement in their communication, um, that's worth the risk.

    18. LF

      As you get the ri- the risk down.

    19. EM

      Yeah, as, as you get the risk down. O- once the risk is, is down to, to, you know, you want, if you have, like, thousands of, of people that have been using it for, for years and the risk is minimal, then, um, perhaps at that point you could consider saying, "Okay, let's, let's aim for augmentation." Now, now, I think we, we, we're actually gonna aim for augmentation with people who have neur- neuron damage. So, we're not just aiming to give people a communication data rate equivalent to normal humans, we're aiming to give people who have, you know, quadriplegic, or maybe have complete loss, um, of the connection to the brain and body, a communication data rate that exceeds normal humans.

    20. LF

      Mm-hmm.

    21. EM

      I mean, while we're in there, why not? Let's give people superpowers.

    22. LF

      And the same for vision! As you restore vision-

    23. EM

      Yeah.

    24. LF

      ... there could be aspects of that restoration that are superhuman.

    25. EM

      Yeah. At, at first, the vision restoration will be, uh, low res, um, 'cause you have to say, like, how many neurons can you put in there and h- and, and trigger. And, and you can do things where you, you, um, adjust the electric field so that given if you've got, say, 10,000 neurons, it's not just 10,000 pixels because you can adjust the, the field between the, the neurons and, and do them in patterns, uh, in order to get, so ha- have say, 10,000 electrodes effectively give you, uh, I don't know, maybe a, like having a, a megapixel or a 10 megapixel situation. Um, so... And, and then o- over time, I think you get to higher resolution than human eyes and you could also see in different wavelengths. So like Geordi La Forge from Star Trek, you know, had like the thing. You could just... Do you want to see in radar? No problem. You can see ultraviolet, infrared, eagle vision, whatever you want.
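The current-steering idea Musk describes (adjusting the field between electrodes so that N electrodes address far more than N sites) can be sketched with simple counting. The grid connectivity and the number of distinguishable current ratios below are illustrative assumptions, not Neuralink parameters:

```python
# Counting virtual stimulation sites created by steering current
# between neighboring electrode pairs. All parameters are assumptions.
n_electrodes = 10_000
neighbors_per_electrode = 6   # assumed local connectivity of the array
ratios_per_pair = 15          # assumed distinguishable current splits per pair

pairs = n_electrodes * neighbors_per_electrode // 2  # each pair counted once
virtual_sites = n_electrodes + pairs * ratios_per_pair

print(f"{n_electrodes:,} electrodes -> ~{virtual_sites:,} addressable sites")
```

Patterns spanning more than two electrodes grow the count combinatorially, which is one way 10,000 physical electrodes could approach the "megapixel" regime he mentions.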

  6. 20:27 - 29:57

    Ayahuasca

    1. EM

      (laughs)

    2. LF

      Do you think there will be, uh... Let me ask a Joe Rogan question. Do you think there'll be... (laughs)

    3. EM

      (laughs)

    4. LF

      I just recently, uh, took an ayahuasca.

    5. EM

      (laughs) Is that a Joe Rogan question?

    6. LF

      So this question... No. Well, yes. (laughs)

    7. EM

      Well, I guess, technically it is. (laughs)

    8. LF

      Yeah. Have you tried-

    9. EM

      Yeah, is this DMT in there or something? (laughs)

    10. LF

      Have you tried DMT, bro? (laughs) Ah, I love you, Joe. Okay. (laughs) But do-

    11. EM

      (laughs) Yeah, wait, wait. Yeah, have you said much about it? The, the ayahuasca stuff?

    12. LF

      I have not. I have not. I have not. I've been-

    13. EM

      Okay, well, why don't you spill the beans?

    14. LF

      (laughs) Uh, it was an, it was a truly incredible experience.

    15. EM

      Don't be turning the tables on you. (laughs)

    16. LF

      (laughs) Wow, yeah.

    17. EM

      I mean, you're in the jungle. (laughs)

    18. LF

      Yeah, amongst the trees, myself and-

    19. EM

      Yeah, it must have been crazy.

    20. LF

      ... and a shaman. Yeah, yeah, yeah. With the insects, with the animals all around you, like, jungle as far as I can see. There's no-

    21. EM

      I mean...

    22. LF

      (laughs) That's the way to do it.

    23. EM

      Things are gonna look pretty wild.

    24. LF

      Yeah, pretty wild.

    25. EM

      (laughs)

    26. LF

      I took a... I took-

    27. EM

      I mean-

    28. LF

      ... an extremely high dose.

    29. EM

      (laughs) J- j- j- j- don't go hugging an anaconda or something, you know? Uh... (laughs)

    30. LF

      Uh, you haven't lived unless you made love to an anaconda. (laughs) I'm sorry. (laughs) But-

  7. 29:57 - 34:44

    Merging with AI

    1. LF

      You've talked about the, the threats, the safety concerns of AI. Let's look at long-term visions. Do you think Neuralink is, in your view, the, the best current approach we have for AI safety?

    2. EM

      It's an idea that may help with AI safety. Um, certainly not... I wouldn't wanna, I would- wouldn't wanna claim it's like some panacea or it's, that it's a sure thing. Um, but, I mean, many years ago I was thinking like, "Well, what would inhibit alignment of human collective, human will with, uh, artificial intelligence?" And the low data rate of humans, especially our, our slow output rate, um, would necessarily, just because the communication is so slow, uh, diminish the link between humans and computers. Like, the more you are a tree, (laughs) the, the less you know what the tree is. Like, let's say you, you look at a tree, you look at this plant or whatever, and like, "Hey, I'd really like to make that plant happy," but it's not saying a lot, you know? (laughs)

    3. LF

      Mm-hmm. So the more we increase the data rate that humans, uh, can intake and output, then that means the, the better, (laughs) the, the higher the chance we have in a world full of AGIs.

    4. EM

      Yeah. We could better align collective human will with, uh, AI if the output rate especially was dramatically increased. Like, and I think there's, there's potential to increase the output rate by, I don't know, three, maybe six, maybe more orders of magnitude. So it's better than the current situation.

    5. LF

      And that output rate would be increased by increasing the number of electrodes, number of channels, and also maybe implanting multiple Neuralinks?

    6. EM

      Yeah.

    7. LF

      Do you think there will be a world in the next couple of decades where hundreds of millions of people have Neuralinks?

    8. EM

      Yeah, I do.

    9. LF

      Do you think when people just, when they see the capabilities, the superhuman capabilities that are possible, and then the, the safety is demonstrated?

    10. EM

      Yeah, if it's extremely safe, um, and you have s- and, and you can have superhuman abilities, um, and let's say you can upload your memories, um, you know, so you wouldn't, you wouldn't lose memories, um, then I think probably a lot of people would, would choose to have it. It would supersede the cellphone, for example. I mean, it's the-

    11. LF

      Mm-hmm.

    12. EM

      The, the, the biggest problem that a, say a phone has, um, is, is trying to div- figure out what you want. So that's why you've got, uh, you know, autocomplete, and you've got output, which is all the pixels on the screen, but from the perspective of the human. The output is so friggin' slow. Desktop or phone is desperately just trying to understand what you want. And, and, um, you know, there's an eternity between every keystroke from a computer standpoint.

    13. LF

      (laughs) Yeah. Yeah.

    14. EM

      So...

    15. LF

      That's why the computer's talking to a tree, a slow-moving tree-

    16. EM

      Yeah.

    17. LF

      ...that's trying to swipe.

    18. EM

      Yeah.

    19. LF

      (inhales deeply)

    20. EM

      So, you know, if you have computers that are doing trillions of instructions per second, and a whole second went by, I mean, there's a trillion things it could have done, you know? (laughs)

    21. LF

      Yeah. I think it's exciting and scary for people, because once you have a very high bit rate, that changes the human experience in a way that's very hard to imagine.

    22. EM

      Yeah, it would be... We would be something different, s- I mean some sort of futuristic cyborg. Uh, I mean, I mean, we're obviously talking about, by the way, like, (laughs) it's not like around the corner. It's, you're asking what the fu- distant future, it's like, maybe this is, like, it's not super far away, but 10, 15 years, that kind of thing.

    23. LF

      (inhales deeply) When can I get one? 10 years?

    24. EM

      Pro- probably less than 10 years. Depends what you want, wanna, wanna do, you know?

    25. LF

      Hey, if I can get, like, 1,000 bps.

    26. EM

      1,000 bps, wow.

    27. LF

      And it's safe, and I can just interact with a computer while laying back and eating Cheetos.

    28. EM

      (laughs)

    29. LF

      I don't eat Cheetos. There's certain aspects of human-computer interaction, when done more efficiently and more enjoyably, are, like, worth it.

    30. EM

      Well, we feel pretty confident that, um, I, I think maybe within the next year or two that someone with a Neuralink implant will be able to outperform, um, a, uh, pro gamer.

  8. 34:44 - 36:57

    xAI

    1. LF

      I got to visit Memphis.

    2. EM

      Yeah, yeah.

    3. LF

      You're going big on compute.

    4. EM

      Yeah.

    5. LF

      And you've also said, "Play to win or don't play at all," so.

    6. EM

      Yeah. (laughs)

    7. LF

      (laughs) What does it take to win?

    8. EM

      Um, for AI that means you've gotta have the most powerful training compute. And your, the, the rate of improvement of training compute has to be faster than everyone else, or you will not win. Your, your AI will be worse.

    9. LF

      So how can Grok, let's say 3, that might be available, what, like next year?

    10. EM

      Well, hopefully end of this year.

    11. LF

      Grok-3.

    12. EM

      If we're lucky, yeah.

    13. LF

      How can that be the best LLM, the best AI system available in the world? How much of it is compute? How much of it is data? How much of it is, like, post-training? How much of it is the product that you package it up in, all that kinda stuff?

    14. EM

      I mean, they all matter. It's sort of like saying what, what, you know, let's say it's a Formula One race. Like, what matters more, the car or the driver? I mean, they both matter. Um, if the, if, if you're, if a car is not fast, then, you know, if it's, like, let's say it's half the horsepower of your competitor's, the best driver will still lose. Uh, I don't know, if it's twice the horsepower, then probably even a mediocre driver will still win. So the training computer is kinda like the engine, how many, the horsepower of the engine. So y- really, you wanna try to do the best on that, and you, then, um, then it's how efficiently do you use that training compute, and how efficiently do you do the inference, the, uh, use of the AI. Um, so obviously that comes down to human talent. Um, and then what unique access to data do you have? Uh, that's also plays a, plays a role.

    15. LF

      You think Twitter data will be useful?

    16. EM

      Uh, yeah. I mean, (laughs) I think, I think most of the leading AI companies have already scraped, uh, (laughs) all the Twitter data... I think they have. Um, so, uh, on a go-forward basis, what's useful is-

    17. LF

      Yes.

    18. EM

      ... is the, is the fact that it's, uh, up to the second, in a-

    19. LF

      Yes.

    20. EM

      ... that's the, because they, they, it's hard for them to scrape in real time. So there's, there's a, an a- an immediacy advantage that Grok has already.

  9. 36:57 - 43:47

    Optimus

    1. EM

      I think with Tesla and, and the real-time video coming from the several million cars, ultimately tens of millions of cars with Optimus, there might be hundreds of millions of Optimus robots, maybe billions, learning a tremendous amount from the real world. Uh, that's, that's the, the biggest source of data, I think, ultimately, is, is sort of Optimus probably ... is, Op- Optimus is gonna be the biggest source of data.

    2. LF

      Because it's able to-

    3. EM

      'Cause r- reality scales.

    4. LF

      (laughs)

    5. EM

      (laughs) R- reality scales to the scale of reality.

    6. LF

      (laughs)

    7. EM

      Um, it's actually humbling to see how little data humans have actually been able to accumulate. Um, really, you s- if you say how many trillions of usable tokens have humans generated where on a n- non-duplicative like, uh, discounting s- s- spam and repetitive stuff, it's not a huge number. You run out pretty quickly.

    8. LF

      And Optimus can go, so Tesla cars can ... are ... unfortunately have to stay on the road. Uh-

    9. EM

      Right.

    10. LF

      ... Optimus robot can go anywhere.

    11. EM

      (laughs) Yeah.

    12. LF

      There's more reality off the road. And go off-road.

    13. EM

      Yeah. I- I mean, like the Optimus robot can, like, pick up the cup and see, did it pick up the cup in the right way? Did it-

    14. LF

      Yeah.

    15. EM

      ... you know, say, go pour water in the cup, you know?

    16. LF

      Yeah.

    17. EM

      Did the water go in the cup or not go in the cup? Did it spill water or not?

    18. LF

      Yeah.

    19. EM

      Um, simple stuff like that. I mean, but, but it can do at that at scale times a billion, you know? So generate use- useful data from reality. So ca- co- cause and effect stuff.

    20. LF

      What do you think it takes to get to mass production of humanoid robots like that?

    21. EM

      It's the same as cars, really. I mean, global capacity for vehicles, um, is about 100 million a year. And, uh, it c- it could be higher, it's just that the demand is on the order of 100 million a year. And then there's roughly two billion, uh, vehicles that are in use in some way. So, uh, which makes sense. Like the, the life of a vehicle is about 20 years, so at steady state, you can have 100 million vehicles produced a year with a t- with a two billion vehicle fleet, roughly. Um, now for humanoid robots, the utility is much greater. So my guess is humanoid robots are more like at a, a bil- a billion plus per year.

    22. LF

      But, you know, until you came along and started, uh, building Optimus, it- it was thought to be an extremely difficult problem. I mean, still-

    23. EM

      Well, I think it is-

    24. LF

      ... it's an extremely difficult problem. (laughs)

    25. EM

      Yes, it's no walk in the park. I mean, (laughs) Optimus currently would struggle to walk in the park.

    26. LF

      (laughs)

    27. EM

      I mean, it can walk in a-

    28. LF

      Yeah.

    29. EM

      ... park, but a park is not too difficult. It will be able to (laughs) walk over a wide range of terrain.

    30. LF

      Yeah. And pick up objects.

  10. 43:47 - 1:01:23

    Elon's approach to problem-solving

    1. EM

    2. LF

      Can you just speak to what it takes for a great engineering team? For you, what I saw in Memphis, the supercomputer cluster, is just this intense drive toward simplifying the process, understanding the process, constantly improving it, constantly iterating it.

    3. EM

      Well, (laughs) it's easy to say simplify, and it's very difficult to do it. You know, I have this very basic first-principles algorithm that I run kind of as a mantra, which is: first, question the requirements. Make the requirements less dumb. The requirements are always dumb to some degree, so you want to start off by reducing the number of requirements. No matter how smart the person is who gave you those requirements, they're still dumb to some degree. You have to start there, because otherwise you could get the perfect answer to the wrong question. So try to make the question the least wrong possible. That's what questioning the requirements means. And then the second thing is: try to delete whatever the step is, the part or the process step. Sounds very obvious, but people often forget to try deleting it entirely. And if you're not forced to put back at least 10% of what you delete, you're not deleting enough. Somewhat illogically, people most of the time feel as though they have succeeded if they have not been forced to put things back in. But actually they haven't, because they've been overly conservative and have left things in there that shouldn't be. And only the third thing is: try to optimize it or simplify it. Again, these all sound, I think, very obvious when I say them, but the number of times I've made these mistakes is more than I care to remember. That's why I have this mantra.

    4. LF

      (laughs)

    5. EM

      So, in fact, I'd say the, the most common mistake of smart engineers is to optimize a thing that should not exist.

    6. LF

      Right.

    7. EM

      (laughs)

    8. LF

      So, like you say, you run through the algorithm-

    9. EM

      Yeah.

    10. LF

      ... and basically show up to a problem, show up to the supercomputer cluster, and see the process and ask, "Can this be deleted?"

    11. EM

      Yeah. First try to delete it.

    12. LF

      (laughs)

    13. EM

      Um, yeah.

    14. LF

      Yeah. That's not easy to do.

    15. EM

      No. And actually, what generally makes people uneasy is that you've gotta delete; at least some of the things that you delete, you will put back in.

    16. LF

      Yeah.

    17. EM

      But going back to where our limbic system can steer us wrong: we tend to remember, sometimes with a jarring level of pain, where we deleted something that we subsequently needed.

    18. LF

      Yeah.

    19. EM

      And so people remember that one time they forgot to put in this thing three years ago, and that caused them trouble. So they overcorrect, and then they put too much stuff in there and overcomplicate things. So you actually have to say, "No, we're deliberately gonna delete more than we should," so that at least one in ten things we delete, we're gonna add back in.

    20. LF

      Mm-hmm. And I've seen you suggest just that, that something should be deleted, and you can kind of see the pain.

    21. EM

      Oh, yeah, absolutely.

    22. LF

      Everybody feels a little bit of the pain.

    23. EM

      Absolutely. And I tell them in advance, like, "Yeah, some of the things that we delete, we're gonna put back in." And people get a little shook by that. But it makes sense, because if you are so conservative as to never have to put anything back in, you obviously have a lot of stuff that isn't needed. So you've gotta overcorrect. This is, I would say, like a cortical override to a limbic instinct.

    24. LF

      One of many that probably leads us astray.

    25. EM

      Yeah. And there's a step four as well, which is: any given thing can be sped up. (laughs) However fast you think it can be done, whatever the speed it's being done at, it can be done faster.

    26. LF

      Mm-hmm.

    27. EM

      But you shouldn't speed things up until you've tried to delete it and optimize it; otherwise, you're speeding up something that shouldn't exist, which is absurd. And then the fifth thing is to automate it.
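The full five-step process described across this exchange, stated in order; a minimal sketch where the step wording paraphrases the conversation (nothing here is an official formulation):

```python
# The five-step engineering "algorithm" as paraphrased from the conversation.
# Order matters: never optimize, accelerate, or automate a step you haven't
# first tried to delete.
ALGORITHM = (
    "question the requirements (make them less dumb)",
    "try to delete the part or process step entirely",
    "optimize or simplify what remains",
    "speed up the cycle time",
    "automate it, last of all",
)

for i, step in enumerate(ALGORITHM, start=1):
    print(f"{i}. {step}")
```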

    28. LF

      Yeah.

    29. EM

      And I've gone backwards so many times where I've automated something-

    30. LF

      (laughs)

  11. 1:01:23 - 1:05:53

    History and geopolitics

    1. EM

    2. LF

      Uh, since I am interviewing Donald Trump-

    3. EM

      Cool.

    4. LF

      ... you wanna stop by?

    5. EM

      Yeah, sure. I'll stop in.

    6. LF

      There was, tragically, an assassination attempt on Donald Trump. After this, you tweeted that you endorse him. What's your philosophy behind that endorsement? What do you hope Donald Trump does for the future of this country and for the future of humanity?

    7. EM

      Well, I think there's... You know, people tend to take an endorsement as, "I agree with everything that person has ever done their entire life, 100% wholeheartedly," and that's not gonna be true of anyone. But we have to pick; we've got two choices, really, for who's president. And it's not just who's president, but the entire administrative structure changes over. And I thought Trump displayed courage under fire, objectively. He'd just been shot, he's got blood streaming down his face, and he's fist-pumping, saying, "Fight." That's impressive. You can't feign bravery in a situation like that. Most people would have been ducking, because there could be a second shooter. You don't know. The president of the United States has gotta represent the country. They're representing you; they're representing everyone in America. You want someone who is strong and courageous to represent the country. That's not to say he is without flaws; we all have flaws. But on balance, and certainly at the time, it was a choice of, you know, Biden, poor guy, has trouble climbing a flight of stairs, and the other one's fist-pumping after getting shot. There's no comparison. I mean, who do you want dealing with some of the toughest people, other world leaders, who are pretty tough themselves? And I'll tell you some of the things I think are important. I think we want a secure border; we don't have a secure border. We want safe and clean cities. I think we wanna reduce the amount of spending, at least slow down the spending, because we're currently spending at a rate that is bankrupting the country. The interest payments on US debt this year exceeded the entire Defense Department spending. If this continues, all of the federal government taxes will simply be paying the interest. And if you keep going down that road, you end up in the tragic situation that Argentina had back in the day. Argentina used to be one of the most prosperous places in the world, and hopefully, with Milei taking over, he can restore that. But it was an incredible fall from grace for Argentina, to go from being one of the most prosperous places in the world to being very far from that. So I think we should not take American prosperity for granted. We've gotta reduce the size of government, we've gotta reduce the spending, and we've gotta live within our means.

    8. LF

      Do you think politicians in general, politicians, governments... Well, how much power do you think they have to- to steer humanity towards good?

    9. EM

      There's a sort of age-old debate in history: is history determined by these fundamental tides, or is it determined by the captain of the ship?

    10. LF

      Mm-hmm.

    11. EM

      It's both, really. I mean, there are tides, but it also matters who's captain of the ship. So it's a false dichotomy, essentially. (laughs) There are real tides of history, and these tides are often technologically driven. Say, the Gutenberg press, the widespread availability of books as a result of the printing press; that was a massive tide of history, independent of any ruler. But, you know, (laughs) I think in stormy times you want the best possible captain of the ship.

  12. 1:05:53 - 1:10:12

    Lessons of history

    1. EM

    2. LF

      Well, first of all, thank you for recommending Will and Ariel Durant's work. I've, uh, read the- the short one for now.

    3. EM

      The Lessons of History.

    4. LF

      The Lessons of History.

    5. EM

      Yeah.

    6. LF

      And one of the lessons, one of the things they highlight, is the importance of technology.

    7. EM

      Yeah.

    8. LF

      Technological innovation. Which is funny, 'cause they wrote so long ago, but they were noticing that the rate of technological innovation was (laughs) speeding up.

    9. EM

      Yeah, probably is.

    10. LF

      I would love to (laughs) see what they think about now. But yeah, to me, the question is how much governments and politicians get in the way of technological innovation and building, versus help it. Which politicians, which kinds of policies, help technological innovation? 'Cause if you look at human history, that seems to be an important component of empires rising and succeeding.

    11. EM

      Yeah. Well, in terms of dating civilization, the start of writing is, in my view, probably the right starting point. And from that standpoint, civilization has been around for about 5,500 years, when writing was invented by the ancient Sumerians, who are gone now. But in terms of getting a lot of firsts, those ancient Sumerians really have a long list of firsts. (laughs) It's pretty wild. In fact, Durant goes through the list, like, "You want to see firsts? We'll show you firsts."

    12. LF

      (laughs)

    13. EM

      The Sumerians were just ass-kickers. And then the Egyptians, who were right next door, relatively speaking, they weren't that far, developed an entirely different form of writing, hieroglyphics. Cuneiform and hieroglyphics are totally different. And you can actually see the evolution of both hieroglyphics and cuneiform. Cuneiform starts off being very simple, then it gets more complicated, and towards the end it's like, wow, okay, they got very sophisticated with the cuneiform. So I think of civilization as being about 5,000 years old. And Earth is, if physics is correct, four and a half billion years old. So civilization has been around for one-millionth of Earth's existence.

    14. LF

      (laughs)

    15. EM

      Flash in the pan.

    16. LF

      Yeah, these are the early, early days. And so we, we draw-

    17. EM

      Very early.

    18. LF

      We make it very dramatic because there's been rises and falls of empires, and-

    19. EM

      Many, so many, so many rises and falls of empires.

    20. LF

      (laughs)

    21. EM

      So many.

    22. LF

      And there'll be many more.

    23. EM

      Yeah, (laughs) exactly. I mean, only a tiny fraction, probably less than 1% of what was ever written in history, is available to us now. If they didn't literally chisel it in stone or put it on a clay tablet, we don't have it.

    24. LF

      Yeah.

    25. EM

      I mean, there's some small amount of papyrus scrolls that were recovered that are thousands of years old, because they were deep inside a pyramid and weren't affected by moisture. But other than that, it's really gotta be on a clay tablet or chiseled. (laughs) And the vast majority of stuff was not chiseled, because it takes a while to chisel things. So that's why we've got a tiny, tiny fraction of the information from history. But even the little information we do have, and the archaeological record, shows so many civilizations rising and falling. It's wild.

    26. LF

      We tend to think that we're somehow different from those people. One of the other things that Durant highlights is that human nature seems to be the same. (laughs)

    27. EM

      (laughs)

    28. LF

      It just persists.

    29. EM

      Yeah, I mean, the basics of human nature are more or less the same. Yeah.

    30. LF

      So we get ourselves in trouble in the same kinds of ways, I think, even with the advanced technology.

  13. 1:10:12 - 1:17:55

    Collapse of empires

    1. EM

    2. LF

      What, what do you think it takes for the American empire to not collapse in the near-term future, in the next 100 years, to continue flourishing?

    3. EM

      Well, the single biggest thing, which is often not actually mentioned in history books, but Durant does mention it, is the birth rate. Like I said, perhaps a counterintuitive thing happens when civilizations have been winning for too long: the birth rate declines. It can often decline quite rapidly. We're seeing that throughout the world today. Currently, South Korea has, I think, maybe the lowest fertility rate; there are many others close to it. It's like 0.8, I think. If the birth rate doesn't decline further, South Korea will lose roughly 60% of its population. And every year, the birth rate is dropping. This is true through most of the world; I don't mean to single out South Korea. As soon as any given civilization reaches a level of prosperity, the birth rate drops. And you can see the same thing happening in ancient Rome. Julius Caesar took note of this, I think around 50-ish BC, and tried to pass a law, I don't know if he was successful, to give an incentive to any Roman citizen who would have a third child. And I think Augustus was able to... well, he was the dictator, so (laughs) the Senate was just for show. I think he did pass a tax incentive for Roman citizens to have a third child, but those efforts were unsuccessful. Rome fell because the Romans stopped making Romans. That's actually the fundamental issue. And there were other things: they had quite a serious series of malaria epidemics, and plagues and whatnot. But they had those before. It's just that the birth rate was far lower than the death rate.
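The "roughly 60%" figure follows from simple generational arithmetic: a fertility rate of 0.8 against replacement-level fertility of about 2.1 (the standard demographic figure, an assumption not stated in the conversation) means each generation is under 40% the size of the last:

```python
fertility = 0.8     # births per woman, as stated for South Korea
replacement = 2.1   # standard replacement-level fertility (assumption)

# Ignoring migration and mortality shifts, each generation's size is roughly
# fertility / replacement times the previous generation's.
shrinkage = 1 - fertility / replacement
print(f"each generation shrinks by about {shrinkage:.0%}")  # about 62%
```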

    4. LF

      It really is that simple.

    5. EM

      Well, I'm saying that's-

    6. LF

      More people are required.

    7. EM

      ... that's at a fundamental level: if a civilization does not at least maintain its numbers, it will disappear.

    8. LF

      So perhaps the amount of compute that the biological computer allocates to sex is justified. In fact, we should probably increase it.

    9. EM

      Well, I mean, there's hedonistic sex, which is... you know, that's neither here nor there.

    10. LF

      Yeah.

    11. EM

      Um, it, it's-

    12. LF

      Not productive.

    13. EM

      It doesn't produce kids. (laughs) Well, you know, what-

    14. LF

      Just taking notes.

    15. EM

      ... what matters, I mean, Durant makes this very clear, because he's looked at one civilization after another, and they all went through the same cycle. When the civilization was under stress, the birth rate was high. But as soon as there were no external enemies, or they had an extended period of prosperity, the birth rate inevitably dropped. Every time. I don't believe there's a single exception.

    16. LF

      So, that's like the foundation of it. You need to have people.

    17. EM

      Yeah.

    18. LF

      (laughs)

    19. EM

      I mean, at, at a base level-

    20. LF

      Yeah.

    21. EM

      ... you know, no humans, no humanity.

    22. LF

      And then there are other things, like, you know, human freedoms, and just giving people the freedom to build stuff.

    23. EM

      Yeah, absolutely. But at a basic level, if you do not at least maintain your numbers, if you're below replacement rate and that trend continues, you will eventually disappear. It's just elementary. Now, obviously you also wanna try to avoid massive wars. You know, if there's a global thermonuclear war, we're probably all toast. Radioactive toast. (laughs)

    24. LF

      (laughs)

    25. EM

      So (laughs) we wanna try to avoid those things. Then there's a thing that happens over time with any given civilization, which is that laws and regulations accumulate. And if there's not some forcing function, like a war, to clean up the accumulation of laws and regulations, eventually everything becomes illegal. That's like the hardening of the arteries.

    26. LF

      Yeah.

    27. EM

      Or a way to think of it is like being tied down by a million little strings, like Gulliver. You can't move. And it's not like any one of those strings is the issue; it's that you've got a million of them. So there has to be a sort of garbage collection for laws and regulations, so that you don't keep accumulating them to the point where you can't do anything. This is why we can't build high-speed rail in America. It's illegal. That's the issue.

    28. LF

      (laughs)

    29. EM

      It's illegal six ways to Sunday to build high-speed rail in America.

    30. LF

      I wish you could just, for a week, go into Washington and be the head of the committee for, what is it, the garbage collection: making government smaller, removing stuff.

  14. 1:17:55 - 1:20:37

    Time

    1. LF

      your own life, what to you is a measure of success in your life?

    2. EM

      A measure of success, I'd say: how many useful things can I get done?

    3. LF

      A day-to-day basis, you wake up in the morning, "How can I be useful today?"

    4. EM

      Yeah. Maximize utility, the area under the curve of usefulness. Very difficult to be useful at that scale.

    5. LF

      At that scale. Can you speak to what it takes to be useful for somebody like you, where there are so many amazing great teams? How do you allocate your time to being the most useful?

    6. EM

      Well, time is the true currency.

    7. LF

      Yeah.

    8. EM

      So it is tough to say what the best allocation of time is. I mean, if you look at, say, Tesla: Tesla this year will do over $100 billion in revenue. That's $2 billion a week. If I make slightly better decisions, I can affect the outcome by $1 billion. So I try to make the best decisions I can, and on balance, at least compared to the competition, they're pretty good decisions. But the marginal value of a better decision can easily be, in the course of an hour, $100 million.
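The round numbers here check out; a quick sketch of the arithmetic (the 0.1% improvement figure is an illustrative assumption, not something stated):

```python
annual_revenue = 100e9               # over $100B in revenue, as stated
weekly_revenue = annual_revenue / 52
print(f"~${weekly_revenue / 1e9:.1f}B per week")   # ~$1.9B, i.e. about $2B

# A decision that shifts outcomes by just 0.1% of annual revenue:
impact = annual_revenue * 0.001
print(f"~${impact / 1e6:.0f}M")                    # $100M
```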

    9. LF

      Given that, how do you take risks? How do you do the algorithm that you mentioned, I mean, deleting, given that a small thing can be a billion dollars? How do you decide to...

    10. EM

      Yeah. Well, I think you have to look at it on a percentage basis, because if you look at it in absolute terms, I would never get any sleep. I would just be like, "I need to just keep working and work my brain harder," you know, trying to get as much as possible out of this meat computer. (laughs) So it's pretty hard, because you could just work all the time, and at any given point, like I said, a slightly better decision could be a $100 million impact for Tesla, or SpaceX for that matter. But it is wild when considering that the marginal value of time can be $100 million an hour at times, or more.

    11. LF

      Is your own happiness part of that equation of success?

    12. EM

      It has to be, to some degree. Otherwise, if I'm depressed, I make worse decisions. If I have zero recreational time, then I make worse decisions. So I don't have a lot, but it's above zero.

  15. 1:20:37 - 1:28:12

    Aliens and curiosity

    1. EM

      I mean, my motivation, if I've got a religion of any kind, is a religion of curiosity, of trying to understand. It's really the mission of Grok: understanding the universe. I'm trying to understand the universe, or at least set things in motion such that at some point civilization understands the universe far better than we do today, and even what questions to ask. As Douglas Adams pointed out in his book, sometimes the answer is arguably the easy part; trying to frame the question correctly is the hard part. Once you frame the question correctly, the answer is often easy. So I'm trying to set things in motion such that we are at least at some point able to understand the universe. And for SpaceX, the goal is to make life multi-planetary, which, if you go to the Fermi paradox of where are the aliens, you've got these great filters. Like, why have we not heard from the aliens? Now, a lot of people think there are aliens among us. I often claim to be one, which nobody believes, but it did say "alien registration card" at one point on my-

    2. LF

      (laughs)

    3. EM

      ... immigration documents. Um ...

    4. LF

      Yeah.

    5. EM

      So I've not seen any evidence of aliens. It suggests that at least one of the explanations is that intelligent life is extremely rare. Again, if you look at the history of Earth, civilization has only been around for one-millionth of Earth's existence. If aliens had visited here, say, 100,000 years ago, they would be like, "Well, they don't even have writing; just hunter-gatherers, basically." So how long does a civilization last? For SpaceX, the goal is to establish a self-sustaining city on Mars. Mars is the only viable planet for such a thing. The moon is close, but it lacks resources, and it's vulnerable to any calamity that takes out Earth; the moon is too close. I'm not saying we shouldn't have a moon base, but Mars would be far more resilient. The difficulty of getting to Mars is what makes it resilient. In going through these various explanations of why we don't see the aliens, one of them is that they failed to pass these great filters, these key hurdles, and one of those hurdles is being a multi-planet species. If you're a multi-planet species, then if something were to happen, whether a natural catastrophe or a man-made catastrophe, at least the other planet would probably still be around, so you don't have all the eggs in one basket. And once you are a two-planet species, you can obviously extend life to the asteroid belt, maybe to the moons of Jupiter and Saturn, and ultimately to other star systems. But if you can't even get to another planet, you're definitely not getting to other star systems.

    6. LF

      And the other possible great filters, uh, super powerful technology like AGI, for example. So you're, you're basically trying to knock out one great filter at a time.

    7. EM

      Digital superintelligence is possibly a great filter. I hope it isn't, but it might be. Guys like, say, Geoff Hinton, who invented a number of the key principles in artificial intelligence; I think he puts the probability of AI annihilation at around 10 to 20%, something like that. So it's not like, you know, looking on the bright side, it's 80% likely to be great. (laughs) But I think AI risk mitigation is important, and being a multi-planet species would be a massive risk mitigation. And I do want to once again emphasize the importance of having enough children to sustain our numbers and not plummet into population collapse, which is currently happening. Population collapse is a real and current thing. The only reason it's not yet reflected in the total population numbers is that people are living longer. But it's easy to predict what the population of any given country will be: just take how many babies were born last year, multiply that by life expectancy, and that's what the population will be at steady state, if the birth rate continues at that level. But if it keeps declining, it will be even less, and eventually dwindle to nothing. So I keep banging on the baby drum here for a reason, because it has been the source of civilizational collapse over and over again throughout history. So why don't we try to stave off that day?
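The prediction method he describes reduces to one formula: steady-state population ≈ annual births × life expectancy, assuming the birth count holds constant. A minimal sketch, with made-up inputs purely for illustration:

```python
def steady_state_population(annual_births: int, life_expectancy_years: int) -> int:
    """If this year's birth count repeats every year, the population settles at
    births-per-year times the number of years each person is alive."""
    return annual_births * life_expectancy_years

# Hypothetical country: 250,000 births a year, 80-year life expectancy.
print(steady_state_population(250_000, 80))  # 20,000,000
```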

Episode duration: 8:37:34

Transcript of episode Kbk9BiPhm7o
