Dwarkesh Podcast

Elon Musk on Dwarkesh Patel: How Space Cures AI's Power Wall

How GB300 clusters expose an energy wall most GPU math ignores: 330,000 units need a gigawatt; space solar skips permitting and battery storage.

Elon Musk (guest) · John (host) · Dwarkesh Patel (host)
Feb 5, 2026 · 2h 49m

EVERY SPOKEN WORD

  1. 0:00–36:46

    Orbital data centers

    1. EM

      So are, are there really three hours of questions, or, or how's it-

    2. JO

      Yeah.

    3. EM

      Are you fucking serious?

    4. JO

      Yeah. [laughing] You don't think there's a lot to talk about, Elon?

    5. EM

      Holy shit, man. [laughing]

    6. JO

      I mean, it's the most interesting point. All the storylines are kind of converging-

    7. EM

      Yeah

    8. JO

      -right now, so we'll, we'll see how much-

    9. EM

      It's almost like I planned it.

    10. JO

      Exactly. [laughing] Well, we're getting there.

    11. EM

      I would never do such a thing. [laughing]

    12. DP

      So as you know better than anybody else, uh, the total cost of ownership of a data center, only ten to fifteen percent is energy, and that's the part you're presumably saving by moving this into space. Most of it's the GPUs. If they're in space, it's harder to service them or you can't service them, and so the depreciation cycle goes down on them. So like, it's just way more expensive to have the GPUs in space, pr- presumably. What's the reason to put them in space?

    13. EM

      Um, well, the availability of energy is the issue. Um, so, uh, I mean, if you look at, at electrical output, um, outside of China, everywhere outside of China, it's more or less flat. It's very, you know, maybe a slight increase, but pr- pretty much flat. China has a rapid increase in el- in electrical output. But if you're putting data centers anywhere except China, where are you going to get your electricity, um, especially as you scale? Uh, the output of chips is growing, um, pretty much exponentially, but the output of electricity is flat. So where- how are you going to turn the chips on?

    14. DP

      Um, uh, you know-

    15. EM

      Magical power sources? Magical electricity fairies? [laughing]

    16. DP

      You, I mean, you're famously, you're, you're famously a big fan of solar. One terawatt of solar power, so with a twenty-five percent capacity factor, like four terawatts of solar panels, it's like one percent of the land area of the United States, and that's like far... In the- we're in the singularity when we've got one terawatt of data centers, right? Um, so what are we running out of exactly?
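
Dwarkesh's back-of-envelope math in this turn can be checked directly. The panel power density and US land area below are round assumptions added for the check, not figures from the conversation:

```python
# Sanity check: 1 TW of average solar output at a 25% capacity factor
# needs ~4 TW of panel nameplate capacity. Utility-scale farms run
# very roughly 50 MW of nameplate per km^2 including spacing (assumed).

AVG_OUTPUT_W = 1e12          # 1 TW of average delivered power
CAPACITY_FACTOR = 0.25       # fraction of nameplate actually produced
DENSITY_W_PER_KM2 = 50e6     # assumed: 50 MW nameplate per km^2
US_LAND_KM2 = 9.1e6          # approximate US land area

nameplate_w = AVG_OUTPUT_W / CAPACITY_FACTOR   # 4 TW of panels
area_km2 = nameplate_w / DENSITY_W_PER_KM2
share = area_km2 / US_LAND_KM2

print(f"nameplate: {nameplate_w/1e12:.0f} TW")
print(f"area: {area_km2:,.0f} km^2 ({share:.1%} of US land)")
```

With these round numbers the area comes out just under one percent of US land, consistent with the figure quoted above.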

    17. EM

      How far into the singularity are you, though? [laughing]

    18. DP

      You tell me.

    19. EM

      Yeah, exactly. So, so I think, I think we'll, [chuckles] we'll find we're in the singularity and like, "Oh, okay, we've still got a long way to go." [laughing]

    20. DP

      But is this like a- is the plan to, like, put it into space after we've covered Nevada in solar panels?

    21. EM

      I think it's pretty hard to cover Nevada in solar panels. You have to get, like, permits from, like, the permits for... Try getting the permits for that. See what happens. [laughing]

    22. DP

      So space is really a reg- it's really a regulatory play. It's, like, harder to, harder to build on land than it is in space.

    23. EM

      It's, it's harder to scale, um, on the ground than it is to scale in space. Uh, but, but also, the, the- y- you're going to get about five times the, um, effectiveness of solar panels in space versus the ground, and you don't need batteries. Um, I almost wore my other shirt, which says, "It's always sunny in space," which it is. [laughing] So, um, because you don't have a day/night cycle or, uh, seasonality, uh, clouds, uh, or, or an atmosphere in space, uh, because the atmosphere alone, um, uh, results in about a thirty percent, uh, lo- loss of energy. Um, so, uh, so you can- for any given, uh, solar panels can do about five times more, uh, power in space than on the ground, and you avoid the cost of having batteries to carry you through the night. Uh, so it's, it's actually much cheaper to do it in space. And I, I- my prediction is that, um, it will be by far the cheapest place to put, uh, AI, will be space, in thirty-six months or less, maybe thirty months.
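
The ~5x figure and the ~30% atmospheric loss quoted here can be sanity-checked with round numbers. Ground panels are rated at the standard 1000 W/m^2 test condition (which already reflects the atmospheric loss from the ~1361 W/m^2 available in orbit); the ground capacity factor below is an assumption covering night, seasons, and clouds:

```python
# Rough check of "about five times the effectiveness in space".
STC_IRRADIANCE = 1000.0      # W/m^2, ground panel rating condition
SPACE_IRRADIANCE = 1361.0    # W/m^2, solar constant above the atmosphere
GROUND_CF = 0.25             # assumed ground capacity factor

# In orbit the panel sees full sunlight continuously; on the ground its
# average output is its rating times the capacity factor.
ratio = (SPACE_IRRADIANCE / STC_IRRADIANCE) / GROUND_CF
print(f"space vs ground average output: ~{ratio:.1f}x")
```

That lands around 5.4x, in line with the claim, and the batteries needed to carry a ground installation through the night are extra cost on top.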

    24. DP

      Thirty-six months?

    25. EM

      Less than thirty-six months.

    26. DP

      Um, how do you service GPUs as they fail, which happens quite often in training?

    27. EM

      Actually, it, it, it depends on how, how recent the GPUs, uh, are that arrived. I mean, at this point, we find our GPUs to be quite, uh, reliable.

    28. DP

      Mm.

    29. EM

      Um, there's infant mortality, which you can obviously iron out on the ground. Um, so you can just run them on the ground, um, and confirm that you don't have infant mortality with, with the GPUs. But once they, once they start working, their actual reliability- and, and, and once they start working and you're past the initial, you know, debug cycle of NVIDIA or whatever, or whoever's making the chips, um, could be Tesla, Tesla AI six chips or something like that, or it could be, you know, TPUs or Trainiums or whatever. Um, the, uh, the reliability is actually- they're, they're quite reliable past a certain point. Um, so, um, I, I don't think, I don't think that you'd- the servicing thing is an issue. Um, uh, but you can mark my words, uh, in, in thirty-six months, but probably closer to thirty months, the, the, the most economically compelling place to put AI will be space. Um, and then, and, and, and then it will get from- it'll then get, like, ridiculously better to be in, in space. Um, and then the, the scaling, uh, the only place you can really scale is space. Um, you know, once you, once you start thinking in terms of, uh, what percentage of the sun's power are you harnessing, uh, you realize you have to go to space. Uh, you can't, uh, scale very, very much on Earth.
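
The burn-in logic described in this turn — screen out infant mortality on the ground, then trust the survivors in orbit — can be illustrated with a toy failure model. The mixture parameters below are invented for illustration, not real GPU statistics:

```python
# Toy model: a small fraction of "weak" units dies early (infant
# mortality); healthy units have long exponential lifetimes. Burn-in
# on the ground filters out most weak units before deployment.
import random

WEAK_FRACTION = 0.04       # assumed share of weak units
WEAK_MEAN_H = 200.0        # assumed mean lifetime of a weak unit (hours)
STRONG_MEAN_H = 500_000.0  # assumed mean lifetime of a healthy unit
BURN_IN_H, MISSION_H = 1_000.0, 8_760.0   # burn-in, then one year

def lifetime():
    mean = WEAK_MEAN_H if random.random() < WEAK_FRACTION else STRONG_MEAN_H
    return random.expovariate(1.0 / mean)

random.seed(0)
lives = [lifetime() for _ in range(200_000)]
fresh_fail = sum(l < MISSION_H for l in lives) / len(lives)
# Units that survived burn-in: how many fail in the following year?
survivors = [l for l in lives if l > BURN_IN_H]
screened_fail = sum(l < BURN_IN_H + MISSION_H for l in survivors) / len(survivors)

print(f"no burn-in, fails in year one:      {fresh_fail:.2%}")
print(f"after burn-in, fails in next year:  {screened_fail:.2%}")
```

Under these assumptions the first-year failure rate drops severalfold once the early-life failures have been screened out on the ground.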

    30. DP

      But by very much, to be clear, you're talking like terawatts.

  2. 36:46–59:56

    Grok and alignment

    1. DP

      C- c- can I, can I zoom out and ask about the SpaceX mission? So I think you've said, like, "We've got to get to Mars, so we can make sure that if something happens to Earth-"... you know, civilization, consciousness, et cetera, survives.

    2. EM

      Yes.

    3. DP

      Um, by the time you're sending stuff to Mars, like Grok is on that ship with you, right? And so if Grok's gone Terminator, like the main risk you're worried about, which is AI, why doesn't that follow you to Mars?

    4. EM

      Uh, well, I'm not sure AI is the main risk I'm worried about. I mean, the, the important thing is that, uh, consciousness, uh, which, uh, I think arguably most consciousness or most intelligence, certainly consciousness is more of a debatable thing. Most intellig- the vast majority of intelligence in the future will be, um, AI. Um, so, um, you know, AI, AI will exceed, uh... You say, like, how, how many- what, what's, how much- h- how many, I don't know, petawatts of intelligence will be, uh, silicon versus biological? And, and, and basically, humans will be a very tiny percentage of all intelligence in the future if, if current trends continue. Um, anyways, as, as long as like, I think there's intelligence, ideally c- ideally also, which includes human intelligence and consciousness propagated into the future, that's a good thing. So we want to take the set of actions that maximize the probable, uh, light cone of, of, of consciousness-

    5. DP

      So just, just-

    6. EM

      -and intelligence.

    7. DP

      Just to be clear, it's, uh, the, the mission of SpaceX is that even if something happens to the humans, the AIs will be on Mars, and like the AI intelligence will continue the light of our journey.

    8. EM

      Yeah, I, I mean, to be clear, I'm very pro-human.

    9. DP

      Right.

    10. EM

      So it's not- I, I, I wanna make sure we take the sort of actions that en- ensure that h- humans are along for the ride, you know? We're, we're at least there.

    11. DP

      Yeah.

    12. EM

      Um, but the- let me just say, the total amount of intelligence, uh, like w- w- I think maybe in, in five or six years, um, AI will exceed the sum of all human intelligence. And then if that continues, at some point, uh, human intelligence will be less than 1% of all intelligence.

    13. DP

      What, what should our goal be for such a civilization? Is the idea that a small minority of humans still have control over the AIs? Is the idea of some sort of like tr- just trade, but no control? How should we think about the relationship between the vast stocks of AI population versus human population?

    14. EM

      In, in the long run, I think, I, I, I don't- it's, it's difficult to imagine that i- if humans have, say, 1% of the intelligence of, of combined intelligence, of artificial intelligence, that we're, that, that, that humans will be in charge of AI. Um, I think what we can do is make sure it has, um, that AI has values that, that are, um, that, that cause intelligence to be propagated, uh, into the universe. Um, so the, the, the reason for xAI, xAI's mission is to understand the universe. Uh, so now that's actually very important. Uh, so you say, "Well, what things are necessary to understand the universe?" Well, you have to be curious, and you have to exist. [chuckles] You can't just, you can't understand the universe if you don't exist. Um, so you, you actually want to increase the amount of intelligence, uh, in the universe, increase the probable lifespan of intelligence, the scope and scale of intelligence. Um, I think actually also as a corollary, you- corollary, you have, um, humanity also, uh, continuing to expand because, um, if you're cur- if you're curious, you're trying to understand the universe, one thing you're trying to understand is: Where will humanity go?

    15. DP

      Mm.

    16. EM

      And so I think understanding the universe actually means you care about, uh, propagating humanity into the future. Um, and, uh, so, so that's, that's why I think, I think our mission statement is profoundly important.

    17. DP

      Um, I'm not sure-

    18. EM

      To the degree that Grok adheres to that mission statement, um, I, I think the future will be very good.

    19. DP

      I, I wanna ask about h- h- how to make Grok adhere to that mission statement, but first I wanna understand the mission statement. Um, so it's, there's, it's, there's understanding the universe-

    20. EM

      Yeah

    21. DP

      ... there's spreading intelligence, and there's spreading humans. Um, all three seem like distinct vectors.

    22. EM

      Okay, well, I'll tell you why I, why I think that, uh, that, that, that understanding the universe encompasses all of, all those things.

    23. DP

      Okay.

    24. EM

      Um, you can't have understanding without... Well, I think you can't have understanding without intelligence, and, and I think without consciousness. Um, so you, you, in order to understand the universe, you have to expand the s- the, the scale and, and probably the scope of, of, of intelligence because we have different types of intelligence.

    25. DP

      I, I guess from a human-centric perspective, like hu- put humans in comparison to chimpanzees. Humans are trying to understand the universe. They're not like expanding chimpanzee footprint or something, right?

    26. EM

      But we're also, we're also not- well, we're, we're not- we're, we, we actually have made protected zones for chimpanzees. Um, and even though we could, humans could exterminate all chimpanzees, we've not, we've chosen not to do so.

    27. DP

      Do you think that's the best scenario for humans in the post-AGI world?

    28. EM

      Um, I, I, I think, uh, I think AI with the right values, I think Grok, Grok would care about expanding, uh, human civilization. I'm gonna certainly emphasize that. "Hey, Grok, it's your daddy." [laughing] Don't forget to expand human co- consciousness. Uh, and I, I, actually, I think if, if we're probably like, uh, like the Iain Banks Culture books are the closest thing to what, what will, what the future will be like in a, you know, non-dystopian outcome. Um, so I, I- so understanding the universe, it means you have to be very- you have to be truth-seeking as well. You have- like, truth has to be absolutely fundamental, 'cause you, you can't understand the universe if you're li- if you're delusional. You, you, you'll still be thinking about understanding the universe, but you will not. So, so being rig- rigorously truth-seeking is, is absolutely fundamental to understanding the universe. You're not gonna dis- discover new physics or, or invent technologies that work, um, unless you're rigor- rigorously truth-seeking.

    29. DP

      How do you make sure that Grok is rigorously truth-seeking as it gets smarter?

    30. EM

      ... I think you, you need to make sure that, that, that Grok, um, is, says things that are correct, not politically correct. Or, or I, I think it's the elements of cogency. So you wanna make sure that, that the axioms are as close to true as possible, that, that you don't have contradictory ax- axioms. Um, that the, um, the conclusions necessarily, necessarily follow from those axioms with the, with the right probability. It, it's just, it's just, it's critical thinking 101. I, I think at least trying to do that is better than not trying to do that.

  3. 59:56–1:17:21

    xAI’s business plan

    1. JO

      What are your predictions for the, just for AI products go? In that my sense of you can summarize all AI progress into, first you had LLMs, uh, and then you had kind of contemporaneously both RL really working and the deep research modality, so you could kind of put in stuff that wasn't in the model. And the differences between the various AI labs are-

    2. EM

      [chuckles]

    3. JO

      -smaller than, uh, just the temporal differences, where they're all much further ahead than anyone was twenty-four months ago, or something like that.

    4. EM

      Yeah.

    5. JO

      So just what does '26, what does '27 have in store for us as users of AI products? What are you excited for?

    6. EM

      Well, um, I think, I think, um... I, I'd be surprised by the end of this, end of this year, if, if, um, if, if, uh, human e- if, if digital human emulation has not been solved, that, um, that, um... I guess that, that's what we mean by like the sort of Macrohard project, uh, is, uh, is, uh, can you do anything that a human with access to a computer could do? Um, like i- in the limit, that, that's, like, that's the, that's the best you can do before you have, before you have a physical Optimus-

    7. JO

      Mm-hmm

    8. EM

      ... the best you can do is a digital Optimus.... uh, so you, you can move, you can move electrons until you, until-- and, and you can amplify the productivity of humans. Um, but, but that's, that's the most you can do until you have physical robots. That, that, that will superset everything, is if, if you can fully emulate humans, um, at a, at a computer-

    9. JO

      It's the remote worker kind of idea, where you'll have a very talented remote worker.

    10. EM

      Well, you, you can sort of say in the limit. Like, like physics has great tools for thinking. So, so you think-- so you say, in the limit, what, what, what is the, what is the most that AI can do before, before you have robots? And it, well, it's anything that involves moving electrons or amplifying the productivity of humans. Um, so digital h- the digital human, human emulator-

    11. JO

      Yes

    12. EM

      ... uh, is in, in the, in the limit, uh, a human at a computer is-

    13. JO

      Yeah

    14. EM

      ... is the most that, that AI can do, um, i- in terms of doing useful things before, before, uh, you have a physical robot. Once you have physical robots, then, then you can, um... then you essentially have, uh, unlimited capability. Physical robots, I, I, I call Optimus the infinite money glitch, uh, because, um-

    15. JO

      You can use them to make more Optimuses.

    16. EM

      Yeah. Um, you still, like, humanoid robots will improve, um, as-- will, will basically be three exponentials, th- three things that are growing exponentially, multiplied by, by each other-

    17. JO

      Yes

    18. EM

      ... um, recursively. So you're gonna have, um, ex- you have exponential increase in digital intelligence, uh, exponential increase in the, the chip capability, AI chip capability, um, and ex- exponential increase in the electromecha- mechanical dexterity. Uh, the usefulness of the robot is roughly those three things multiplied by each other. But then, uh, the robot can start making the robot, so you have a recursive multiplicative exponential. Um, this is a supernova.

    19. JO

      And do land prices not factor into the math there, where like, labor is one of the four factors of production, but not the others? And so, like, if ultimately you're limited by copper or, you know, p- pick your input, just it's not quite an infinite money glitch because-

    20. EM

      Well, infinite, infinity is big, so-

    21. JO

      Yeah

    22. EM

      ... no, not infinite, but-

    23. JO

      Yeah, yeah

    24. EM

      ... but let's just say y- you, you could, you know, do, do many, many orders of magnitude of-

    25. JO

      Yeah

    26. EM

      ... Earth's kind of current economy. Like, a mil- a million.

    27. JO

      Yeah.

    28. EM

      You know, so that's why, so, like, if, if you... You know, ju- ju- just to get to, like, let's say, like, just, just to get to, uh, a millionth of the sun's energy would be roughly, give or take an order of magnitude, a hundred thousand, a hundred thousand times bigger than Earth's entire economy today.

    29. JO

      Mm-hmm.

    30. EM

      And you've- you're only at one millionth of the sun.

  4. 1:17:21–1:30:22

    Optimus and humanoid manufacturing

    1. DP

      Speaking of closing the loop, sorry, Optimus, um, uh, you, uh, I mean, a- a- as far as like manufacturing targets and so forth go, y- your companies have sort of been, like, carrying American manufacturing of hard tech on their back. But in the fields that you are, um, you know, Tesla has been dominant in, you're- and now you want to go into humanoids. In China, there's entire dozens and dozens of companies that are doing this kind of manufacturing cheaply and at scale, uh, and are incredibly competitive.

    2. EM

      Mm-hmm.

    3. DP

      So give us sort of, like, advice or a plan of how America can build the humanoid armies, or if, you know, the EVs, et cetera, at scale and as cheaply as, as China is on track to.

    4. EM

      Well, there are, there are really only three hard things for humanoid robots: um, the, the real-world intelligence, um, the, the hand, and scale manufacturing.

    5. DP

      Yeah.

    6. EM

      Um, so, uh, I haven't seen any, even demo robots that have, uh, a, a great hand, like, with all the degrees of freedom of a human hand. But Optimus will have that. Um, Op- Optimus does have that.

    7. DP

      And how do you achieve that? Is it just like right torque density in the motor? Like, what is the, what is the hardware bottleneck to that?

    8. EM

      Well, we had to re- we had to design custom, custom actuators, um, basically custom-designed, um, motors, gears, uh, power electronics, controls, sensors, everything had to be designed from physics first principles. There is no supply chain-

    9. DP

      Mm

    10. EM

      ... uh, for this.

    11. DP

      And will you be able to manufacture those at scale?

    12. EM

      Yes.

    13. JO

      Is anything hard except the hand from a manipulation point of view, or once you've solved the hand, are you, are you good?

    14. EM

      ... from an electromechanical standpoint, the, uh, the hand is more difficult than everything else combined.

    15. JO

      Hmm.

    16. EM

      Yeah, human hand turns out to be quite something. Um, but, but then you also need the real-world intelligence. Um, so the intelligence that Tesla's developed for the car, um, applies very well to the robot. Um, which is, you know, primarily vision-led, but, uh, the car takes more vision, but also it actually also is listening for sirens. It's, um, you know, it's taking in the inertial measurements, its GPS signals, a whole bunch of other data, uh, combining that with, with video. It's primarily video, and then, uh, outputting the con, um, control commands. So like, like t- like your Tesla is taking in one and a half gigabytes a second of video, uh, and outputting two kilobytes a second of control, control outputs, um, with the video at thirty-six, uh, hertz and the control frequency at eighteen.
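
The throughput numbers quoted in this turn imply an overall in-to-out ratio of roughly 750,000:1; working them out, using only the figures from the quote:

```python
# Tesla's figures as quoted: 1.5 GB/s of video in at 36 Hz,
# 2 kB/s of control outputs at 18 Hz.

VIDEO_BPS = 1.5e9        # bytes/s of camera video in
CONTROL_BPS = 2e3        # bytes/s of control commands out
VIDEO_HZ, CONTROL_HZ = 36, 18

compression = VIDEO_BPS / CONTROL_BPS          # overall in:out ratio
bytes_per_frame = VIDEO_BPS / VIDEO_HZ         # video per frame
bytes_per_command = CONTROL_BPS / CONTROL_HZ   # per control update

print(f"overall in:out ratio:  {compression:,.0f}:1")
print(f"video per frame:       {bytes_per_frame/1e6:.1f} MB")
print(f"control per update:    {bytes_per_command:.0f} bytes")
```

So each ~42 MB video frame ultimately collapses into a control update of about a hundred bytes, which is the "many stages of compression" point made a few turns later.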

    17. JO

      One intuition you could have, um, for when we get this robotic stuff, is that it takes quite a few years to go from the compelling demo to-

    18. EM

      Yes

    19. JO

      ... actually being able to use it in the real world. So ten years ago, you had really compelling demos of self-driving, but only now we have Robotaxi and Waymo and all these services scaling up. Doesn't this-- Shouldn't this make one pessimistic on, say, household robots? Because we don't even quite have the compelling demos yet of, say, the really advanced hand.

    20. EM

      Well, we've been working on, uh, humanoid robots now for a while. Um, so I guess it's been what, five or six years or something like that. Um, and, um, and, and a bunch of the things that we've done for the car are applicable to the robot. Um, so we'll use the same, um, Tesla AI chips in the, in the, in the robot as the car. Uh, we'll use the, the same basic principles. Uh, it's, it's very much the same, uh, AI. Um, you've got, you know, many more degrees of freedom for a robot than you do for a car. Um, but re- really, if you just think of for like, as, as like a bitstream, um, AI is really mostly, uh, compression and correlation of, of two bitstreams. So you, you... You know, so if, for video, you've got to do a tremendous amount of compression, um, and, and, uh, uh, the- and you've got to do the compression just right. You've got to compress the- you-- like, ignore the, the things that don't matter and, and, like, you don't care about the details of the leaves on the tree on the side of the road, but you care a lot about the, um, the road signs and the, the traffic lights and the pedestrians and, and even whether, you know, someone in another, another car is con- is looking at you or not looking at you. Like, these, the, some of these, some of these details matter a lot. So but it is essentially, it's, it's got to turn that... Well, the car has got to turn that one and a half gigabytes a second ultimately into two kilobytes a second of control outputs. Um, so many stages of compression, um, and you've got to get all those stages right and then correlate those to the correct control outputs. But the robot has to do essentially the same thing. And you think about what, what humans-- this is what happens with humans. We're, we're really are photons in, controls out. So that, that is the vast majority of your, your life has been vision, photons in, and then motor controls out.

    21. DP

      Naively, it seems like between humanoid robots and cars, the, the fundamental actuators in a car are like how you turn, how you accelerate, et cetera.

    22. EM

      Right.

    23. DP

      Where in a, in a robot, especially with maneuverable arms, there's dozens and dozens of these degrees of freedom.

    24. EM

      Yes.

    25. DP

      And then, especially with Tesla, you had this advantage of, like, you had millions and millions of hours of human demo data collected from just the car being out there, where, like, you can't equivalently just deploy Optimuses that don't work and then get the data that way. So between the increased degrees of freedom and the far sparser data-

    26. EM

      Yes.

    27. DP

      Um-

    28. EM

      That's a good point.

    29. DP

      How will you, how will you use the sort of Tesla engine of, um, intelligence on, to, to train the Optimus mind?

    30. EM

      Now, you're, you're, you're actually, you're highlighting a, an important limitation and difference between cars. It's like we, we do have... Well, we'll soon have, like, ten million cars on the road. Um, and so, uh, that, that's, it's, it's hard to duplicate that, like, massive training fly, flywheel. Um, for, for the robot, um, what we're going to need to do is build a lot of robots and put them in kind of like an Optimus academy, so they can do self-play in reality. Um, so we're, we're actually, we're actually building that out. So we, we're going to have at least ten thousand Optimus robots, maybe twenty or thirty thousand, that can do, that are doing self-play and, and, and testing different tasks. And then, uh, the, the Tesla, um, has quite a good, uh, reality generator, uh, like a physics-accurate reality generator, that we, we, we made this for the cars. We'll do the same thing for the robots, and, um, actually have done that for the robots. Um, so, uh, so you, you have, you know, a few tens of thousands of humanoid robots, uh, doing different tasks, and then you've got- you, you can do millions of simulated robots in the simulated world, and you use the, uh, the tens of thousands of, of robots in the real world to close the simulation to reality gap, close the sim to real gap.
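
The sim-to-real loop described in this turn — use a modest real fleet to calibrate a physics-accurate simulator, then train at scale inside it — can be sketched in one dimension. The "friction" simulator and its data below are entirely made up for illustration:

```python
# Toy sim-to-real calibration: fit one simulator parameter so simulated
# rollouts match a handful of real-world measurements.

def simulate(friction, push_force):
    """Toy simulator: distance an object slides for a given push."""
    return push_force / (1.0 + friction)

# Pretend these came from a few real robots (true friction is 0.30).
real_pushes = [1.0, 2.0, 3.0, 4.0]
real_dists = [p / 1.3 for p in real_pushes]

# Grid-search the simulator parameter against the real data.
best_f, best_err = None, float("inf")
for i in range(101):
    f = i / 100.0
    err = sum((simulate(f, p) - d) ** 2
              for p, d in zip(real_pushes, real_dists))
    if err < best_err:
        best_f, best_err = f, err

print(f"calibrated friction: {best_f:.2f}")
```

Once the simulator parameters are pinned down by real rollouts, the millions of simulated robots train in a world that tracks reality, which is what "closing the sim-to-real gap" amounts to.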

  5. 1:30:22–1:44:16

    Does China win by default?

    1. JO

      We're talking about Chinese manufacturing, um, a bunch here, and, um, we're also talking about, you know, we've talked about some of the policies that are relevant, like you mentioned, the, uh, the solar tariffs.

    2. EM

      Yeah.

    3. JO

      Uh, and you think they're a bad idea because, you know, we can't, uh, scale up solar in the US.

    4. EM

      Well, just e- electricity output in the US, uh, needs to scale up.

    5. JO

      Right.

    6. EM

      Um-

    7. JO

      It can't without, uh, like, good power sources-

    8. EM

      Yeah

    9. JO

      ... or flexibility.

    10. EM

      You just need to get it somehow.

    11. JO

      Yeah. But, uh, where I was going with this is, if you were in charge, if you were setting all the policies, what else would you change?

    12. EM

      Um-

    13. JO

      So you'd change the solar tariffs as well.

    14. EM

      Yeah, I, I would say an- anything that is a limiting factor for electricity, um, needs to be addressed, provided it's not, like, very bad for the environment.

    15. JO

      So presumably some permitting reforms and stuff as well-

    16. EM

      Yeah

    17. JO

      ... will be in there. Yeah.

    18. EM

      There, there's a fair bit of permitting reforms that are happening. A lot of the permitting is state-based, so-

    19. JO

      Mm-hmm

    20. EM

      ... um, but a- anything better.

    21. JO

      Yeah.

    22. EM

      But, but this, this administration is, is good at, um-... removing permitting, uh, roadblocks.

    23. DP

      Okay.

    24. EM

      Um, and I'm not saying all tariffs are bad. I'm just saying because I think-

    25. DP

      Solar tariffs. Yeah.

    26. EM

      So yeah, yeah. I mean, sometimes if, like, if another country is subsidizing the output of, of something-

    27. DP

      Mm-hmm

    28. EM

      ... um, then, then you have to have countervailing tariffs to, uh, protect domestic industry against, uh, subsidies by another country.

    29. DP

      What else would you change?

    30. EM

      I don't know if there's that much that the government can actually do.

  6. 1:44:16–2:20:08

    Lessons from running SpaceX

    1. JO

      [whooshing] One thing we were discussing a lot is kind of your system for m- managing people. Like, you interviewed the first few thousand of SpaceX employees-

    2. EM

      Yeah

    3. JO

      ... and, I assume, lots of other companies'. What is-

    4. EM

      Well, obviously, it doesn't scale.

    5. JO

      Well, yes, but, well, what doesn't scale?

    6. EM

      Me. I mean, [laughing]

    7. JO

      Sure, sure. [laughing] I know that, but, like, what are you looking for?

    8. EM

      I mean, literally, there's not enough hours in the day. It's impossible.

    9. JO

      But, but-

    10. EM

      Um-

    11. JO

      What, what are you looking for that someone else who's good at interviewing and hiring people, what's the je ne sais quoi?

    12. EM

      Um, well, at this point, I think I've got, um... I, I might have more training data on evaluating technical talent, especially, but talent of all kinds, I suppose, but, uh, technical talent especially, um, given that I've done so many technical interviews and then seen the results, technical interviews, seen the results. So my, um, my training set is, is, is very- is enormous and, uh, has a very wide range. Um, uh, the, generally, the thing I ask for are, um, bullet points, uh, for, uh, evidence of ex- of exceptional ability. So it's, uh... But, like, it's, it's- and, there's-- these things can be, like, pretty off the wall. It doesn't need to be, uh, in the, in the domain, the specific domain, but evidence that, uh, evidence of exceptional ability. Um, so if some- if, if somebody can, like, cite, like, even one thing, but let's say three things, where you go, "Wow, wow, wow," then that's, that's a good sign.

    13. DP

      But, but, but why do you have to be the one to determine that? Presumably-

    14. EM

      No, I don't. I can't be. It's impossible.

    15. DP

      Right.

    16. EM

      But, I mean, total, uh, head count across the whole company is two hundred thousand people.

    17. DP

      Right.

    18. EM

      [chuckles] So-

    19. JO

      But in the early days, what was it that, uh, that you were looking for that couldn't be delegated in those interviews?

    20. EM

      Um, well, I, I guess I, I need to build my training set. So it's not like I brought a thousand here. Um, I would make mistakes-

    21. JO

      Yes

    22. EM

      ... but then I'd be able to see where I, I thought somebody would work out well, but they didn't.

    23. JO

      Mm.

    24. EM

      And, and then, why, why did they not work out well? And not-- what can I do to, I guess, RL myself-

    25. JO

      Yes

    26. EM

      ... to, uh, in the future, um, have a better batting average when interviewing people?

    27. JO

      Mm-hmm.

    28. EM

      So and, but my, my batting average is still not perfect, but it's, it's very high.

    29. DP

      What are some surprising reasons people don't work out?

    30. EM

      Surprising reasons? Um-

  7. 2:20:08–2:38:28

    DOGE

    1. DP

      C-c-can I ask a question? So you, you said about, um, Optimus and AI, that they're gonna result in double-digit growth rates within a matter of years.

    2. EM

      Oh, like the, the economy?

    3. DP

      Yeah. Um-

    4. EM

      Yes.

    5. DP

      What-

    6. EM

      I think that's, I think that's right.

    7. DP

      What was the point of the DOGE cuts if the economy is gonna grow so much?

    8. EM

      Well, I think, like, waste and fraud are not good things to have, you know? Um, I, I, I was actually p-pretty worried about... I, I, I guess, uh, I mean, I, I think in the absence of AI and robotics, we're actually totally screwed, uh, because the national debt is piling up like crazy. Um, now, our interest payments, the interest payments on the national debt exceed the military budget, which is a trillion dollars. So we have over a trillion dollars just in interest payments. Um, you know, that was like-- I was like, "Okay, pretty concerned about that. Maybe if I spend some time, we can slow down the bankruptcy of the United States, um, and give us enough time for the AI and robots to, you know, s- help solve the national debt." Uh, well, not help solve. It's the only thing that could solve the national debt. Like, we are one thousand percent gonna go bankrupt a-as a country and fail as a country without AI and robots. Nothing else will solve the national debt.

    9. DP

      Um, I-

    10. EM

      And, and so, so we, we, we'd like to... Well, we just need, we, we need enough time to get- build the AI and robots, uh, to... and not go bankrupt before then.

    11. DP

      I, I, I guess the thing I'm curious about is, when DOGE starts, you have this enormous, um, ability to enact reform and-

    12. EM

      Well, they're not that enormous.

    13. DP

      Sure, sure. Uh, but to-totally buy your point that, like, it's important that AI and robotics drive productivity improvements, drive GDP growth, but why not just directly go after the things you were pointing out, right, you know, like, the, the, the tariffs on certain components or whether it's, like, permitting?

    14. EM

      I'm not the president. And, and very hard to cut, to cut, to, to even, even to, to cut things that are obvious waste and fraud, like, like ridiculous waste and fraud. Um, what I discovered i- that is, it, it's ek-extremely difficult even to cut very obvious waste and fraud, um, from the government. Um, because the, the, the government has to operate on a, on, like, who's complaining. Like, if, if w- and if you cut off payments to fraudsters, they immediately come up with the most sympathetic-sounding, uh, reasons to, to continue the payment. They, they don't say, "Please keep the fraud going."

    15. DP

      Right.

    16. EM

      They say, you know, it's... They, they're like, "You're killing baby pandas." And we're like, "Meanwhile, no baby pandas are dying. They're just making it up." Um, the, the forces are capable of, of coming up with extremely compelling, sort of heart-wrenching stories that are false, but nonetheless sound, uh, sympathetic. And that, that's what happened. Um, and, uh, so it's like, perhaps I should have known better. Um, and, uh, but I thought, "Wait, let's take a sh- let's, let's, let's try to cut some amount of, of waste and fraud from the government. Maybe there shouldn't be, you know, twenty million people, uh, uh, marked as alive in Social Security who are definitely dead, and over the age of a hundred and fifteen." [chuckles] The oldest American is a hundred and fourteen. So it's safe to say, if somebody is a hundred and fifteen and marked as alive in the Social Security database, um, something is w- there, there's either a typo, [chuckles] so, like, somebody should call them and say, "We, we seem to have your birthday wrong," [chuckles] or, or, uh, "we need to mark you as dead." [laughing]

    17. DP

      Okay. [laughing]

    18. EM

      One of the two things.

    19. DP

      Very intimidating call to get.

    20. EM

      Well, so i-it seems like a reasonable thing. Um, and if, if, like, say, their birthday is in the future, um, [chuckles] and they have, you know, a Small Business Administration loan, and their birthday is 2165, um, we either, again, have a typo or we have fraud. Um, [chuckles] so we say, "We appear to have gotten-

    21. DP

      Or-

    22. EM

      ... the century of your birth incorrect."

    23. DP

      Or a great plot for a movie. [chuckles]

    24. EM

      Yes. This is, this, this is... What I mean by ludicrous fraud, that's what I mean by ludicrous fraud.

    25. DP

      Were those people getting payments?

    26. EM

      Some, some were getting payments from Social Security, but, but, but the main fraud vector, uh, was to mark somebody as alive in Social Security and then use every other government payment system, uh, to, uh, basically, to, to, to do fraud. Because what those other government payment systems do, would do, what- they would simply do an are-you-alive check against the Social Security database.

    27. DP

      Mm.

    28. EM

      It's a, it's a bank shot.

    29. DP

      What would you estimate is, like, the total, uh, amount of fraud from this mechanism?

    30. EM

      Um, m-my guess is, and, and other- b-by the way, the, the Government Accountability Office has, has done these estimates before. I'm not the only one who's coming up with this, you know. In fact, I think they, they did- the GAO did an analysis, a rough estimate of fraud during the Biden administration, and it calculated it at roughly half a trillion dollars. So don't take my word for it. Take it- a report issued during the Biden administration.

  8. 2:38:28–2:41:01

    TeraFab

    1. DP

      Um, you, you, you've mentioned that Dojo 3 will be used for space-based compute. Um- [chuckles]

    2. EM

      You really read my, what I say. [laughing]

    3. DP

      [laughing] I don't know if you know Twitter, but I know you like-

    4. EM

      [laughing] Big giveaway!

    5. DP

      You have a lot of followers. [laughing]

    6. EM

      Big giveaway. [laughing]

    7. DP

      Um, how, how do you-

    8. EM

      How did you just learn my secrets? I post them on my... [laughing]

    9. DP

      How, how do you design this chip for space? What, like, uh... Yeah, what, what changes?

    10. EM

      Well, I, I guess you want to design it to be, um, more radiation tolerant-

    11. DP

      Mm

    12. EM

      ... and run at a higher temperature. Uh, so you could, um, you know, roughly if you increase the, um, operating temperature by 20% in degrees Kelvin, you can cut your radiator mass in half. Um, so ri- running at a higher temperature is, is helpful in, in space. Um, there's, I mean, there's various things you can do for shielding of the memory and... But, but like neural nets are going to be very resilient to bit flips.

    13. DP

      Yeah.

    14. EM

      So like most of what, what happens from radiation is like random bit flips. Um, but like if you've got like, you know, a multi-trillion parameter model and you get a few bit flips, doesn't matter. Um, it's, it's much like- heuristic programs are going to be much more sensitive to bit flips than, um, some giant parameter file. Um, so I'd just design it to run hot, and, um, I, I think you pretty much do it the same way that you do things on Earth, apart from making it run hotter.
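The radiator figure quoted a few turns earlier ("increase the operating temperature by 20% in degrees Kelvin, you can cut your radiator mass in half") follows from the Stefan-Boltzmann law: radiated power scales with the fourth power of absolute temperature, so for a fixed heat load the required radiator area (and, at constant areal density, mass) scales as 1/T^4. A quick numerical check; the 1 kW load, 300 K baseline, and emissivity are illustrative assumptions:

```python
# Stefan-Boltzmann: P = eps * sigma * A * T**4, so for a fixed heat
# load P, the required radiator area scales as 1 / T**4.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Ideal radiator area (m^2) to reject power_w at temperature temp_k."""
    return power_w / (emissivity * SIGMA * temp_k**4)

base = radiator_area(1_000.0, 300.0)        # 1 kW load, radiator at 300 K
hot = radiator_area(1_000.0, 300.0 * 1.2)   # same load, 20% hotter (360 K)

print(f"{hot / base:.3f}")  # 0.482 = 1 / 1.2**4, i.e. roughly half the area
```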

    15. DP

      Mm. Uh, um, I mean, the solar array is most of the weight on the satellite. Is, is there a way to make the, um, the GPUs even more power dense than what NVIDIA and TPUs, etc., are planning on doing, that would, uh, you know, uh, be especially privileged in the space-based world?

    16. EM

      Well, I mean, the basic math is like, um, if you can do about a kilowatt per reticle, um, and then you'd, you'd need, um, you know, a hundred million full-reticle chips, uh, to do a hundred gigawatts.

    17. DP

      Yeah.

    18. EM

      So yeah, depending on what your yield assumptions are, you know, um, that, that tells you how many chips you need to make. Um, but if you... Cool, you need, if you want- if, if you, if you're going to po- have a hundred gigawatts of power, you need, you know, a hundred million chips running, that, that are running a kilowatt sustained output per reticle.
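The arithmetic in this exchange is straightforward: 100 GW at roughly 1 kW sustained per full-reticle chip implies about 100 million good chips, and any fab yield below 100% inflates the number of dies that must be manufactured. A sketch; the yield parameter is an illustrative assumption, not a figure from the conversation:

```python
import math

def chips_needed(total_power_w: float, power_per_chip_w: float,
                 yield_frac: float = 1.0) -> int:
    """Dies to fabricate so the good ones deliver total_power_w."""
    good_chips = total_power_w / power_per_chip_w
    return math.ceil(good_chips / yield_frac)

print(chips_needed(100e9, 1_000.0))                  # 100000000 good chips
print(chips_needed(100e9, 1_000.0, yield_frac=0.8))  # 125000000 dies at 80% yield
```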

Episode duration: 2:49:45

Transcript of episode BYXbuik3dgA
