Nikhil Kamath

Humanoids Cost as Much as an SUV Now | Nikhil Kamath x Brett Adcock | WTF Online Ep 2

We’ve crossed the point where AI lives inside screens. Brett and I talk about the moment it steps into the real world — and what that does to labour, society, memory, and families. The future didn’t arrive gradually — it arrived all at once.

Timestamps:
00:00 - Intro
01:37 - Brett’s path to building a humanoid
03:41 - When do flying taxis become real?
08:40 - Moving from Archer to Figure AI
12:03 - Are we ready for humanoids?
14:52 - What’s inside a robot?
22:06 - Can humanoids out-efficient humans?
28:35 - The next form factor
34:23 - Competing with LLM
38:35 - Why real-world data beats synthetic data
41:51 - Kids + humanoids: Safety & design
46:59 - Dystopia: competitive lever in AI regulation?
49:25 - Robots’ eyesight & perception
52:17 - Why humanoids are possible today
56:59 - Other players in the industry
1:00:17 - The first problem humanoids solve
1:04:06 - Is China ahead in robotics?
1:07:54 - Ending partnership with OpenAI
1:10:38 - When does AI money turn into real revenue?
1:13:29 - Where to invest?
1:17:24 - What should you build in humanoids?
1:22:50 - What’s next for social media?
1:28:42 - What happens to jobs + society?
#NikhilKamath - Investor & Entrepreneur
Twitter: https://x.com/nikhilkamathcio
LinkedIn: https://www.linkedin.com/in/nikhilkamathcio/
Instagram: https://www.instagram.com/nikhilkamathcio/
Facebook: https://www.facebook.com/nikhilkamathcio/

#BrettAdcock - Founder, Figure AI
Twitter: https://x.com/adcock_brett
LinkedIn: https://www.linkedin.com/in/brettadcock
Instagram: https://www.instagram.com/brett_adcock
Facebook: https://www.facebook.com/share/1Fd5fNCoG1/

#WTFiswithnikhilkamath #PeopleByWTF #WTFOnline

Nikhil Kamath (host), Brett Adcock (guest)
Nov 5, 2025 · 1h 36m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–1:37

    Intro

    1. NK

      [upbeat music] Let me know when you guys can start. Ready? [upbeat music] Would you leave your kids with your humanoid?

    2. BA

      We're not there today. Like, I would not let my, like, robot roam free for hours and weeks right now with my young kids.

    3. NK

      Everybody talks about China is so far ahead in robotics. Why is that?

    4. BA

      Yeah, I just don't believe it. [upbeat music]

    5. NK

      Hi, Brett. Lovely to meet you. Uh, what's behind you? Maybe we can start there.

    6. BA

      Yeah. Um, so I'm at, I'm at Figure offices. Uh, we design humanoid robots. So these are robots that, you know, like, kind of like move around the world like humans: legs, arms, hands. Um, behind me is our first-generation robot, second and third-generation, uh, humanoid robots here at Figure.

    7. NK

      Okay. So

  2. 1:37–3:41

    Brett’s path to building a humanoid

    1. NK

      tell us a bit about yourself, Brett. Where did you begin? Where were you born? How did you come to build a humanoid?

    2. BA

      Yeah. Um, so I was born in the Midwest, here, uh, in, in the US, on a third-generation farm, actually. [chuckles] We, um, we farmed corn and soybeans. Um, and then b- basically, since, like, middle school and high school, I started working on technology businesses, in... mostly in software. Uh, and that led to basically, [clears throat] basically now twenty years building tech companies. Um, I spent about ten years in software. Uh, ended up starting a company in 2012 called Vettery, and it was an A-- it was an AI marketplace for recruiting. The goal was to basically help, uh, candidates find jobs and employers find great talent, and that, that whole process is extremely broken and hard. I ended up selling Vettery, uh, for a little over a hundred million in 2017 to the Adecco Group, which is, like, the largest recruiting company in the world. After that, I, I really wanted to get into hardware, so I started a company called Archer Aviation, uh, in 2018, and what we do is we build electric aircraft, electric vertical take-off and landing aircraft. So it's, it's kind of like a hel- it takes off kind of like a helicopter, [clears throat] so you can, like, put them in cities, and it flies a little bit like a, like an airplane, and it's full- fully electric powertrain. And the goal is to basically help move traffic that it's like, you know, in cities and move into the air, so folks can get around a lot easier. And yeah, ba- basically designed five generations of aircraft so far at Archer. Our current aircraft is Midnight. It's a piloted four-passenger aircraft that we're, we're flying now, trying to get that certified in the US airspace with the FAA. Ended up taking that company public about four years ago, five years ago, and then, yeah, three year- three years ago, I started Figure. So we, we design, we, we, we design and manufacture humanoid robots. And the goal, the goal here at Figure is to be able to do everything a human can in the physical

  3. 3:41–8:40

    When do flying taxis become real?

    1. BA

      world.

    2. NK

      I have an investment in a company here in India which does electric taxis called Sattela Aviation. How far are we really? Traffic is a big problem in India, and, you know, I don't know how different Archer is, 'cause I've not seen enough about it, but it's the typical electric taxi, uh, one sixty kilometers range on a charge, meant to take off from hotels, the, the helipads on top of a hotel, get to the airport. How far are we from a world where this could be more pragmatic, this is something that will be used by people?

    3. BA

      I mean, we have aircraft today that are f- are, that are flying pi... fully piloted. Like, y- you, you have a couple components here. The f- the first is you, you have to be in the right decade to make sure the technologies are all feasible to make this really happen. This is a really hard technology. Most of the problems and challenges here happen because we have, um, um... th- there's probably, like, a thirty X difference in energy density between, like, traditional fossil fuel, say, kerosene, that you might find in an aircraft, in an electric aircraft, um, like what we find in batteries. There's the problem of just, like, building the technology and really making it viable. I, I think we've shown at Archer that that is, like, fu- fully possible today. I mean, we, we fly basically every week, um, our full-scale aircraft at this point, fully electric, similar concept operations we'd want to do in market. Uh, at this point, it's largely, like, a safety and certification and policy pers- like, the, the, basically, like, what's gating this is certification. Uh, i- in most countries, it's, it's governed federally, where you need, like, um... You know, in Europe, it's EASA. Here in the US, it's FAA. You, you need, you need, like, basically a certification level at the fed- at the federal level, uh, that then unblocks your ability to take passengers and make it ubiquitous. Uh, so at this point, like, largely missing certification, you didn't have, like, manufacturing and scale needed, um, but the, but the technology is certainly here. So I, I, I think within the next, like, five years, we'll be taking passengers, like, paying passengers around in, in airspaces here in the world on electric aircraft like this.

    4. NK

      And if that's true for America, do you think that will be the way forward? Do you think that'll replace human, human-driven taxis in the near future?

    5. BA

      I don't think it replaces all, like, human-driven taxis. I think it just takes a, a large portion of what's, like, the tra- like, the basically the-... folks that are taking t- transportation today that might be going, that might be spending like half an hour, hour, hour and a half in traffic, which is sizable. In some places like LA, it's like almost ten percent of like all the, all the travel in the mornings and after work. It's j- just a much better quality of life if we're not sitting in traffic all day.

    6. NK

      Would it also change short-haul flights, like, say, between San Francisco and LA, for example, which is one hour?

    7. BA

      Like, my view is, like, all aircraft, like, all, all aviation, like, all air transport will move electric here, and I think hopefully in our lifetime. Uh, it's, it's starting out with, like, very short-range, uh, flights, mostly 'cause of, like, a, a specific battery energy density issue. We, we have, like... Um, if you had to compare, like, a battery versus, like, what's in a, like, in a aircraft, like, say, ke- kerosene, there, there's basically roughly a thirty X difference, like, in, in, like, net energy in those two systems. So you, you basically have an issue where, like, you can basically just go, like, thirty X, like, like, [chuckles] like less far. And then with, with traditional- with, with, with, with eVTOL aircraft, you're taking off vertically, you're using a considerable amount of power doing that versus, like, I say, an aircraft that might be going from San Francisco to LA that might be taking off conventionally. But i- w- we're, we're starting with, like, you know... W- we're starting now with, like, these short, uh, these kind of shorter flights, helping them, um, basically, like, move, move passengers around. It, it'll then happen regionally, and then it'll happen at much larger, like, long- longer distances. Uh, the technologies needed to go, like, kind of, like, much longer distances are very different than what we're designing here today at Archer with eVTOL aircraft. To, to get to, like, you know, those level of distances where you need to kind of like c- city-to-city travelling and distances, you're, you're going to have to redesign, a, a different type of aircraft system to do that, in my mind.

    8. NK

      Will that happen, though, do you think?

    9. BA

      It'll happen for sure. Yeah, for sure it'll happen.

    10. NK

      So if you could bet on a train, a car, a traditional plane, or a EV flying vehicle going between San Francisco and LA, you would bet on a flying electric vehicle?

    11. BA

      Electric gets you like... Uh, I mean, like, w- we can design, like, some of the safest forms of transportation in the world, like flying in the air. Um, I mean, fl- flying today, we're like, it's the, the safest form of transportation we take as humans. There will be, like, a large portion of folks that will travel between those cities like this in the air. I think there will also be a large portion of that, that also travels, uh, on the ground, whether it's train or tunnel or, or car. Like, we're, we're, we're- there, there's so many humans now, and so many humans are just moving into cities. So you, you just have, like, a r- real issue where, like, the infrastructure is not just, like, able to support this at this moment.

    12. NK

      Right.

  4. 8:40–12:03

    Moving from Archer to Figure AI

    1. NK

      And how did you transition from Archer to Figure AI? Could you tell us about that?

    2. BA

      Well, I think, like, one thing that's interesting at Archer is, like, the aircraft is basically like a flying robot. [chuckles] And if you think about what we do at Figure building robots, you need, uh, you know, you need batteries and motors and embedded systems, uh, uh, uh, like sensors, control software, uh, a lot of the same kind of, uh, ingredients needed to kind of build, like, kind of any robot, just in, in general. So, you know, having built up Archer over the years, like, got really familiar with those key kind of areas and, um, uh, and integrating hardware and software together, just overall. I mean, f- for me, as an entrepreneur, like, working in hardware, I think, like, the humanoid robotics are, are solving, like, general-purpose humanoids is, like, the ultimate meta problem. Uh, I think the end state for most things done in the physical world is done through a humanoid robot at this point. I think we'll, we will see billions of humanoids on the planet, um, in our gen- in, like, our lifetime, and they will do everything for us, from things at home, like doing the laundry, washing dishes, uh, unpacking groceries, uh, cleaning up the house, to w- we will see billions in the commercial workforce in, um, areas of, like, logistics and manufacturing and healthcare. Our core thesis here is that the world was designed for humans. Like, we've- we as humans designed it for ourselves. Uh, so e- everything we- everything you do, we do all day is, is designed for the way we look and feel, and the whole world was for it. Even, you know, we're drinking coffees and teas, right? Like, with the cups that we hold and the, the door that we, like, exit from here and the stairs and all the tools we use were all designed for our, our, um, you know, ba- basically for our, the human form. So our, our belief here is that the humanoid is, like, the ultimate, like, general-purpose machine. 
And so what we have here is we have a, a deep focus on designing, like, really incredible hardware systems. Uh, at Figure here, we design basically the entire hardware and software stack on the robot, almost entirely ourselves at this point. So that goes down to, like, the motors and battery systems, uh, all the electronics, wiring, um, kinematics and structures, um, and, and, and sensors. Uh, and then, two, is we, we spend a, a large amount of time on the AI side. This is just, like, an incredibly hard problem. If, if you think about, like, uh, like, the humanoid robot, the ones you see behind me, e- every robot has about forty different joints, roughly, on the robot, and, um, a joint has, like, a motor that can spin roughly 360 degrees. So the amount of, um, the amount of positions the body can be in is equal to, like, um, 360 to the power of forty. And so that means that, uh, there's more potential states the robot can be in than atoms in the universe, which is, like, always crazy to think about, but it's, like, much more. So you, you can't solve this with code. You have to use advanced AI, like neural nets, to be able to make this really work well. So we spend, like, a, a large fraction of our time outside of hardware on just, like, how do we get neural networks to control the robot and do the things we, we would need to do, like, like real stuff that humans would do? Um, we're about three and a half years old now. We've shipped now three generations of humanoid robots, and we've been able to demonstrate that humanoids can do, like, useful human-like work now fully autonomously, and, and in most cases, like, with neural nets all the way down the stack.
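Brett's state-space estimate can be checked with logarithms; the joint count and 1-degree resolution below follow his rough figures, not Figure's actual specs:

```python
import math

# Sanity check of the claim: ~40 joints, each discretized to
# 360 distinct positions (1-degree resolution, an assumption).
JOINTS = 40
POSITIONS_PER_JOINT = 360

# 360**40 is astronomically large, so count its digits instead.
log10_states = JOINTS * math.log10(POSITIONS_PER_JOINT)
print(f"~10^{log10_states:.0f} configurations")  # ~10^102

# Commonly cited estimate for atoms in the observable universe: ~10^80.
print(log10_states > 80)  # True: more configurations than atoms
```

So even a coarse discretization yields roughly 10^102 body configurations, comfortably beyond the ~10^80 atoms usually estimated for the observable universe, which is why hand-coded control cannot cover the space.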

  5. 12:03–14:52

    Are we ready for humanoids?

    1. NK

      I thought of this recently. Uh, I didn't realize how close we were to humanoids being in the actual world, and when I thought about that, I'm like, "Wow, I should have really-... looked this up earlier, researched it more, because it feels very, I don't know, dystopian or utopian to picture a world where I have something that looks like me walking around my house, probably sitting in my office, and in the car sitting behind, beside me." It, it's kind of a weird feeling to get your head around. Maybe 'cause I'm not used to it, and I will get used to it soon.

    2. BA

      Yeah. I think, like, it just feels like such a sci-fi movie [chuckles] a bit. Like, when you see these things working, it feels just so, like, uh... It almost feels like it's not possible to go do and make it work. We have them walking around here in the office, and it's just, like, still not, like, used to it, just, like, it looks incredible, like, seeing robots out and about, just, like, walking all day and doing stuff.

    3. NK

      Uh, does it feel scary, Brett?

    4. BA

      Not to us, no. We feel really energized being around the robots and, and seeing it. Not, not scary. I mean, we, we've built them. Like, we really understand the limitations, and, uh, we've inherently tried to intrinsically design the, the hardware for safety and also the, the whole system for safety, so n- not scary to us. Like, we're, we're around them all day. We've, we've made them, we've birthed them. Like, you know, we've, we brought them here. So, uh, it just like- it's something really special about it, where it doesn't feel like it's poss-- it should be, like, twenty years out in the future, but somehow it's like they're here, and they're working really well. Like, I think the robots we have, like, are working really well. And to see that happening, you're like: Oh, man, just give us five or ten more years. What would that really look like? It feels like it's, uh, just too soon for the world a bit. Um, and, um... but, but it's all happening now. Like, we're really spending a lot of time trying to build, like, true general purposeness. Like, be able to talk to a robot and have it do most useful things you'd want it to do. That's, like, a really hard problem. Like, um, it's, it's, like, largely a software and AI problem at this point, uh, for us to go solve, but we can do it. Like, if you-- we, like, we announced, um, you know, recently Figure 03, our, our third-generation robot, which is right, right behind me, and we, we, we, like, we put out a bunch of content about how the robot's doing a bunch of stuff in homes and even in, like, the more, like, workforce. And, and the robot is doing all those tasks. Like, it's doing all the manipulation for all those tasks, um, and actions all through a neural network. And we did maybe a, you know, a dozen, dozen and a half or so use cases there in under- in a few weeks with neural nets at places the robot's never seen, you know, before, a month before that. 
And, uh, just like, that- that's the ingredient you really need to scale at really, like, like, large levels. Like, you need, like, neural nets on humanoid hardware, and we're seeing that now. So it, it just shows us how, like, how close we are. I don't- it's not something we'll solve in a month, but it's certainly not something that's ten years away at this point.

    5. NK

      [electronic sounds]

  6. 14:52–22:06

    What’s inside a robot?

    1. NK

      Can we start with you breaking down a robot, Brett? What's inside? Uh, I think the first thing you could maybe start with is vision. 'Cause I did a little bit of research on vision and how a robot is seeing and how it has spatial intelligence in a way, but would love to hear from you. Like, I can see the robot behind you. What's in it?

    2. BA

      Yeah. Let me start with, like, the basics of the robot and build up to, to, to this in terms of vision. Um, vision is kinda, like, higher level in the stack, but low level on the robot, uh, it's basically a bunch of motors. [chuckles] And we have, like, roughly forty of them on the robot. Uh, we call them actuators, but they're like s- similar electric motors that we would see out in the world today. We have a bunch more sensors and transmission and other things in the, in there that make it much more complicated. But you can think of every joint as having an electric motor inside of it that is getting commands from, like, a central computer on, like, what- like, how to output torque or how, how to basically move joint positions. Um, that is powered by a battery, battery system, like basically a traditional, like, lithium-ion battery pack that you might see in an electric car or electric aircraft.

    3. NK

      And where does the battery sit?

    4. BA

      Yeah, it's right here in the middle of the torso, in the chest. Um, so we have basically, like, a bunch of motors, we have a battery system in the torso, and then we have a computer, uh, both a CPU for compute, and we have a GPU for inference for our AI models sitting in the torso next to the battery system.

    5. NK

      And how much of the compute happens physically on the robot, and how much happens outside of the robot?

    6. BA

      Today, a- all of our compute happens all on the robot, so we do everything on board.

    7. NK

      Will that change tomorrow?

    8. BA

      Some of the compute will start moving more and more off board. Uh, there, there is naturally some compute that needs to be done here on board. We need to run, uh, a decent amount of the neural nets at, uh, kind of like, b- basically a couple hundred hertz, like a couple hundred times a second. There's no way you're gonna be able to do that fully off board. The robot, uh, would just be too slow. We, uh, you can, you can generally off board, you can maybe get a, uh, you know, you can maybe get two to ten hertz or something like that. Uh, you know, you generally can communicate a few times a, a second, but we, we need, like, two hundred times a second minimum on board the robot. So we have a lot of, like, onboard compute today, sits on the robot to tell it how to move its motors and limbs, really fast speeds. You gotta think, like, every little time second, the robot's net is repositioning its entire body to balance. It's repositioning its hand and fingers, maybe head's moving to start seeing something better, and it's doing that closed loop, meaning, like, uh, based on where the hand's going and what the task is doing, the robot's reevaluating what to go do. And it's doing that, uh, you know, a couple hundred times a second. So we have a naturally a lot of compute here. We will offboard, uh... We, we can offboard more and more of the kind of bigger brain-thinking stuff over time into the, into the cloud. Um, but today we do it all on board, mostly because we don't need a, uh, we don't have to worry about having a network connection. Like, you don't need Wi-Fi or 5G in this case, and we can operate at really fast speeds, and we know we can do the work. Like a human, you know, well, I guess some humans can't, can't work without a phone or a network. But like, you know, humans really can do stuff without, like, a, a real internet net connection, and that's just similar to how you wanna think about a robot, too. 
It'd be a bummer if the Wi-Fi went out and the robot can't do anything, uh, anymore. So our compute lives in on board. Uh, we have a ton of wiring, a ton of distributed, like, electronics, uh, and then we have a bunch of sensors inside the motors and the rest of the robot to understand how to balance and feel forces and track positions. So-... um, like traditional robots that you see out in the world today, if you put your hand harder and harder on the table and keep pushing, w- while, while not moving, the robots really can't really understand what's really happening. Our, our robot can feel that force, like a human could. Uh, we have s- we have basically torque tracking and force feedback in every joint across the whole robot, and that's really important to help balance and t- we have to touch the world and go out there and grab things, so we really under- we need to understand those forces. And it's also a tricky sensor to integrate into the robot, uh, here internally. And yep, we have cameras. So we see in p- pixel space. On this current generation, we have, I think, eight cameras. We have, uh, one on each palm and the, uh, uh, of the, of the hands, and then we have them in the head, the rest, so we can kind of see around ourselves. Uh, so the robot sees through a camera. Uh, that camera can update, you know, like pretty quickly, and that feeds into an onboard computer that then ultimately tells what the motors what to, what to do. So that's, you know, r- roughly what's really happening. So the, uh, the robot's reasoning in pixel space. It's, it's going around like a self-driving car. We don't, we don't have lidar, so like a camera-based self-driving car, and we're like, we're reasoning through the world, uh, based on that to do work.
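The on-board versus off-board split Brett describes follows from simple latency arithmetic; the round-trip times below are assumptions for illustration, not measurements:

```python
# Why the low-level control loop must stay on board: at 200 Hz the
# robot has only 5 ms per cycle, far less than a cloud round-trip.
def max_control_rate_hz(round_trip_ms: float) -> float:
    """Upper bound on closed-loop rate if every cycle waits on the network."""
    return 1000.0 / round_trip_ms

ONBOARD_BUDGET_MS = 1000.0 / 200      # 5 ms per cycle at 200 Hz
print(ONBOARD_BUDGET_MS)              # 5.0

# Illustrative network round-trip times (assumed):
for label, rtt_ms in [("nearby cloud", 100), ("congested link", 500)]:
    print(label, f"~{max_control_rate_hz(rtt_ms):.0f} Hz")  # 10 Hz, 2 Hz
```

Those 2–10 Hz off-board figures match the range Brett quotes, which is why only the slower "big brain" reasoning is a candidate for the cloud.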

    9. NK

      I'm no expert at this, but if one were to say the Moore's Law is starting to fail, where the number of transistors you can put on a chipset will not, uh, increase at the pace that we have seen, they won't continue to double, long term, to increase compute on board, is that possible to have enough compute on board to make the robot never need to connect to a network per se?

    10. BA

      Yeah, we, we run fully on board today in commercial settings, uh, every, every day right now, so without a network. So we... O- one of our- we have a commercial customer here in the US, we've now been running with for, uh, over five months now, every day. Um, we, we, we, we put this out publicly. It's with BMW. That's a fully onboard, uh, onboard system that's doing all the reasoning on board of what the robot should do. What, what you're seeing in the world is, um, y- you know, compute, e- e- even if Moore's Law sl- slows down, compute's getting better, so you'll be able to put, put more on compute over time than you have before. So our, our current Figure 03 robot has, like, more compute, like better compute than we had in previous generations. Uh, second thing is, like, AI models that we're putting on are getting smarter and, uh... Like, it's weird. AI models are at one time getting bigger, uh, and, uh, but also s- also some of the, the smaller models are getting smarter and, and better. Um, so that's also in our advantage, meaning we can take, um, kind of bigger, like, reasoning models that traditionally have been, you know, maybe a trillion parameters or whatever, a hun- like billions of parameters. Uh, these, these can be quantized now into like some smaller models that we can run on board that are also smarter than they were previously a year or two ago. So we have models getting smaller and smarter, and we have compute getting better. So those trends in general are, are in our favor, and today we're running robots, uh, fully on board, so we don't have to wait for, like, a, "Oh, let's hope the compute catches up," or models get, you know, some, some point in space. Um, we, we can do it all today. So, uh, like we really, we really today have a path where, you know, fully on board, we can run robots daily now in o- in commercial operations. 
The big bottleneck for us is we're, we're trying to solve, like, general robotics here at Figure, and that for me, for us means, like, putting a robot into, like, an, a home it's never been in before. And just through, like, a speech prompt, like if you- like a voice prompt, like ask it to do something, like this could be like, "I'm gonna be gone all day, make sure my laundry is done when I- before I get home," and you want that task done and done correctly over, like, a very long time horizon.
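The "models getting smaller and smarter" point rests on quantization arithmetic; the 7B parameter count and bit widths below are illustrative choices, not Figure's actual models:

```python
# Rough memory math behind quantizing large models for onboard GPUs.
def model_size_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight-storage footprint in gigabytes."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A hypothetical 7B-parameter model at different precisions:
for bits in (16, 8, 4):
    print(f"{bits}-bit: {model_size_gb(7, bits):.1f} GB")
# 16-bit: 14.0 GB, 8-bit: 7.0 GB, 4-bit: 3.5 GB
```

Quantizing from 16-bit to 4-bit cuts the weight footprint 4x, which is one reason models that once needed datacenter GPUs can now fit an embedded inference budget.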

  7. 22:06–28:35

    Can humanoids out-efficient humans?

    1. NK

      If I were to think every task, the, the ingredient or the input is energy, like me as a human, I can eat a sandwich, and I can do X amount because I can convert that into... That I can convert the energy of that sandwich into something. Do you think the robot or the humanoid ever gets to that point where it becomes more efficient than a human in utilizing energy or creating energy?

    2. BA

      Today, we're, like, far, like, l- like less efficient than a human. Um, humans are like, r- are, like, the, are... We are, like, extremely efficient. Um, like our, you know, biological neural nets are really efficient. The way, the, how much energy or how much power we use to exert, like walking and movements and things, or, and even just sitting here idle, is much better than we get on the robot today. Um, I see no reason over time- Like, you know, w- we're seeing power drop now across our robot systems, like, you know, um... There's, there's, uh, there's a bunch of work that we're doing now. There's, like, there's, like, the computational work to get more efficient, and then there's, like, the mechanical work to get more efficient, uh, overall that we're look- looking at. Both of those two are, like, we're getting improvements on year to year, like a, a robot to robot. Um, I think we will approach- Uh, we see a path like approaching human-like kind of energy consumption. Uh, it's gonna be really hard. Like, you're, you're talking to us now in the early days of this. Like, we're like, you know, it's like Waymo, like sixteen years ago. It's like, it's, it's early, it's just now happening. We- there's a lot of stuff on our roadmap that we're like: "Oh, man, we gotta go, like, solve, you know, manufacturing at scale," all these different things, and, uh, reducing power, uh, footprint is, is, is certainly one of them. I wouldn't say it's a blocker today. Like, we can, we, like, we can do work, in a lot of cases, almost the whole day, depending on what the task looks like. Um, and, and if it's not that, if it's, you know, a task that has a lot of locomotion and movements, we can do kind of like human shifts in terms of time span.

    3. NK

      Is that the ultimate test? Like, if a robot were to have general intelligence, with the same amount of energy, it should be able to do not more of one task, but more of any task a human can do?

    4. BA

      Yeah, I think, like, ultimately, yeah. Like, we would, like, maybe judge this on how much watts it really took to do these, like, like, uh, like, like longer horizon tasks.... I think the near-term thing is, like, if you got a robot in your home, and you speak, and you voice prompt it, like, wh- what percentage of those things can they do fully end-to-end like a human and, and learn from those mistakes? Like, we make a lot of- we do a lot of planning, like la- like doing laundry, it's like, okay, well, uh, w- like, where are the laundry baskets, or wh- what is the laundry? Oh, they're in the rooms. Like, which rooms? Okay, this room upstairs or downstairs, and you have to do a bunch of, like, like, really quick, like, probably subconscious planning that's happening. And, and then we've got to go out and, like, actually, you know, uh, almost, like, serially, uh, complete those tasks. There's a lot of, like, local failures that are happening. You go to a room, and there's no laundry basket there. What do you do? Like, a robot's gotta be able to reason through that whole thing from A to Z. And then, then in some cases, it could, like, for laundry, it could take hours. Like, the ultimate test right now is, like, can we get a humanoid robot to do those type of things? And we, we've seen that in, like, LLMs. We've seen, like, a, like a, a, a robustness to, like, almost, like, human-like intelligence that we're, like, we're tapping into, and we're trying to do that now in robotics. Like, how do we- how do we get this, like, this machine to reason about the world, like, like, uh, have, like, um, common- human common sense? And, um, I think that would just be like a... That'll be incred- when we hit that, you're gonna have a robot in your home, and you're gonna be doing this, and it's just gonna be incredible. And then from there, yeah, getting to more efficient systems, um, getting the cost down. I, I think you're gonna spend a lot of time with the robots. 
Like, as, like, almost like a companion. Like, they're gonna be in your home. They're gonna be with your kids. They're gonna be... They're gonna know everything. Like, they're gonna have, like, almost, like, this infinite context window for memory. Like, I think you're gonna, like, talk to it, and it's gonna know if you, like... It's emotionally, from an EQ perspective, it's gonna know if you've had a tough day from just the sound of your voice and when you come home, and, uh, I, I think, um, there's a really important, like-

    5. NK

      How do you program EQ? How do you teach a robot to have EQ? Does it measure your voice every day, and do you have to give it data, like, "This day, I was feeling a certain way"? It learns like that?

    6. BA

      The best, call it native audio voice reasoning models today can do a decent job at really understanding EQ. I would say the space hasn't done as well with... I think IQ has been the big focus for the whole space, information retrieval for voice models and things, but there is a real path to getting voice models to a really high EQ, which could mean really understanding your tone and your emotional state when you're talking. These are fully solvable things. It's in the world model, it's in the datasets, and it's a matter of accurately training new AI voice and reasoning models to understand this well and do well at it. We're spending time on it here internally. I would say in the next year or two, you will have voice AI models that you're talking to, whether on the phone or through a robot or whatever, that you really won't be able to tell are human or machine anymore.

    7. NK

      And going back to the robot, you said there are 40 joints, 40 motors, eight cameras. The torso has the GPU and the battery. What else is in it?

    8. BA

      We have on the order of a thousand parts, so it's not as simple as, like, "Here, here are those things."

    9. NK

      But the head, though. What's in the head?

    10. BA

      Um, the head has cameras. We have the normal network connections you would want: 5G, Wi-Fi, Bluetooth. In Figure 03, we have three screens: a front OLED screen and two side screens. I'll talk about that. We have a new speaker-and-mic system that's actually really good for Figure 03, so we're spending a lot more time speaking to the robots. I think voice is going to be the natural UI into the robot. Maybe you have an app on your phone or whatever, but voice is how you're going to communicate most of the time, and if you're not with it, you'll text it. That is going to be such an important part of the ecosystem here for-

    11. NK

      Is that

  8. 28:3534:23

    The next form factor

    1. NK

      the new form factor for AI overall, to communicate with AI? 'Cause a lot of people have been extrapolating what the next form factor after the cell phone could be. Do you think it would be something voice-related? How does a person communicate with a computer tomorrow?

    2. BA

      Yeah, my view is that voice is the natural UI into artificial superintelligence in general, whether it's digital systems or physical systems, and voice is going to play an extremely important role in all of this. I think you will have both language devices, almost next-generation, iPhone-level products, and you will have humanoids, and I think they will be networked really well together.

    3. NK

      Whenever I say this to my friends, they ask me if you're sitting in a meeting, you can't speak-

    4. BA

      Yeah

    5. NK

      ... to interact with your device, but you can pull up a phone.

    6. BA

      Yeah, the reality is there are a couple of things as we think about the next frontier of devices. The first is that speech is just not good enough today. Our voice models are not good enough for us to really see how this could be ubiquitous as the main UI. They're a B; they need to be an A+. They're not good enough in a lot of areas: they talk over you, you can't interject, sometimes the IQ gets collapsed in training and they're not very intelligent, especially when tool-calling. There's a bunch of random issues, but that's going to get really good over the next 12 to 18 months. In 12 to 18 months, we will see voice models approaching a human Turing test, and that's not just from an IQ and information-retrieval perspective; it's EQ as well, as we just talked about, and naturalness. So that's one. Two is that the models today just don't have enough context. When we're out in the world talking, like today, all this context is being missed; the models don't understand it. What that forces us to do is go to a computer, type in prompts, and tell it, "Hey, I have this thing, please help me here."... But if you had unlimited memory and retrieval of all the information we take in throughout the day, every day, these AI systems would need very little extra context about what you need right now to help you. My view is the next generation of stuff is basically language devices and humanoid robots, and that'll go extremely far in my mind.

    7. NK

      Will they replace the traditional phone or computer?

    8. BA

      Yes. Every phone and computer was built pre-AI. All the hardware we have today in the world needs to be completely rebuilt from scratch. Think about it like this: we now have human-like intelligence in digital form, and we're communicating with it through really old, pre-AI devices. There's zero chance, or very low chance, that they were somehow built well enough for this new technology. In my mind, they're not optimized for it. So you need a whole new class of devices, and robotics is part of that story. What's happening right now is we have this massive AI overhang, where AI models are able to do incredible things in what I'd call the tech community. We have humanoid robots doing incredible things with AI; I can ask computer-using agents to do crazy things on computers without any human involvement. And yet most of the world is using AI systems for information retrieval, like a Google search. It's mostly a hardware constraint at this point.

    9. NK

      So this is a digression, Brett, but I was recently investing in a hardware company called Nothing, the phone company, so give me some advice. I was playing off the thesis that a lot of the large language models are getting very democratized, and the world will end up at a point where they're all commodities, very similar to each other, and hardware will be the new differentiation. Does that make sense?

    10. BA

      It's a huge differentiator today because a lot of folks are just scared of building hardware, so far fewer people even spend time on it, and hardware is so much harder to build than software. So you have this collapsing funnel of folks who spend time on it and really know what they're doing, and most people who do work on hardware build crappy hardware. So you have a problem where [chuckles] there's just not a lot of good hardware in the world, and a lot of people have failed trying. And at some point, you will buy a robot because of what it can do on safety, cost, and performance. You don't want a robot that comes in and does just one thing in your home; you want it to do everything, to be fully general-purpose, and then you want it getting better and learning. That's a data problem. The more data we feed into the system, the smarter it gets and the fewer mistakes it makes. You don't want a dumb robot in your home; you really want a smart robot [chuckles], and everybody does, right? You don't want the dumb employee at work; you only want the fastest logistics robot or the highest-performing manufacturing robot. So at some point, the moat becomes what the robot's capabilities look like. That's a data problem in some ways, and you need the hardware and AI systems to do it, you need the models and the AI in the robots, but ultimately it comes down to data. I think it's the same with those devices you were talking about. At some point, these models will know everything about you. It's almost like a really good executive assistant. 
A really good executive assistant is really high-performing, but if you swap that person out and get a new person the next day, it's back to zero, and it's really hard to get off the ground and start working again. So I think data's going to be incredibly important. Today we're not capturing most of it throughout the day, so there's a huge opportunity to really capture this, in my mind.

  9. 34:2338:35

    Competing with LLM

    1. NK

      If you do compute on board, and you're competing with, say, a large language model doing compute in a data center, won't that model, integrated with some piece of hardware, maybe not a humanoid but any kind of hardware, be smarter than a humanoid?

    2. BA

      The onboard model, you mean, versus, like, the off-board model?

    3. NK

      Yeah.

    4. BA

      We can run off-board models today. This is not a problem; we've actually done off-board models. One thing we have in the robot, on GPUs, is an LLM.

    5. NK

      Mm-hmm.

    6. BA

      We have a vision-language model, [clears throat] very similar to what we'd all use today out in the world. We use that for semantic reasoning, to really understand the world around us, and if we can find a better model off board, we will use that. There's no technology blocker to us using, say, a trillion-parameter model in the cloud, if that makes sense.

    7. NK

      But do you think that's the way forward? Is that where you would be five years from now, off board?

    8. BA

      I think you will certainly have off-board reasoning. I think you'll have both: a big brain off board that you can talk to, which can do very complex planning and reasoning; something on board that can move much faster and more reliably; and then even smaller models living on board, what we call the S1 level, the system-one level, doing really quick, closed-loop motor control.
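The three-tier split described here, an off-board "big brain", an on-board model, and a fast System-1 motor loop, can be sketched roughly as follows. This is a minimal toy illustration, not Figure's actual architecture; all class names and numbers are hypothetical.

```python
# Hypothetical sketch of a three-tier robot control stack: a slow
# off-board planner, a mid-rate on-board model for subgoals, and a
# fast on-board "System 1" loop emitting joint commands.

class OffBoardReasoner:
    """Large cloud model: slow, complex task planning."""
    def plan(self, goal):
        return ["find the laundry basket", "walk to the bedroom", "pick it up"]

class OnBoardModel:
    """Mid-size on-robot model: picks the current semantic subgoal."""
    def next_subgoal(self, plan):
        return plan[0] if plan else None

class System1Controller:
    """Small on-robot policy: fast closed-loop motor commands."""
    def act(self, subgoal, joint_states):
        # Nudge each joint toward the subgoal; a toy stand-in for a
        # high-rate learned controller.
        return {joint: pos + 0.01 for joint, pos in joint_states.items()}

def control_tick(goal, joint_states):
    plan = OffBoardReasoner().plan(goal)           # slow tier (cloud)
    subgoal = OnBoardModel().next_subgoal(plan)    # mid tier (on robot)
    return System1Controller().act(subgoal, joint_states)  # fast tier

commands = control_tick("do the laundry", {"elbow": 0.5, "wrist": 0.2})
```

The design point is that only the fast tier has to run at control rate on the robot; the slower tiers can live on bigger, possibly remote, hardware.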

    9. NK

      If I were to buy a robot and all of the compute and memory is on board, isn't there a certain amount of learning beyond which it can't learn anymore, because it's capped by capacity?

    10. BA

      There's an area called fleet learning here that's really important. We take data from all the robots, consolidate it on our servers here at Figure, and train on it. And what we're seeing now is incredible transfer, where data from other robots is helping the current robot do better-

    11. NK

      Right

    12. BA

      ... which is unbelievable. Moving packages on a logistics line is somehow helping us fold laundry. The numbers get better.
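The fleet-learning loop described here, pooling data from every robot, training one shared model, and pushing it back out to the whole fleet, can be sketched as follows. This is a toy illustration; the dict-based "policy" and all names are hypothetical stand-ins, not Figure's training pipeline.

```python
# Hypothetical sketch of fleet learning: every robot uploads its
# experience, one shared policy is trained on the pooled data, and the
# updated "weights" are pushed back to all robots, so a task learned
# by one robot is known by every robot in the fleet.

def train_shared_policy(policy, pooled_episodes):
    """Toy training step: accumulate experience counts per task."""
    for episode in pooled_episodes:
        policy[episode["task"]] = policy.get(episode["task"], 0) + 1
    return policy

# Experience gathered across the fleet (robot C contributed nothing).
fleet_logs = [
    {"robot": "A", "task": "move packages"},
    {"robot": "B", "task": "fold laundry"},
    {"robot": "A", "task": "fold laundry"},
]

shared_policy = train_shared_policy({}, fleet_logs)

# Push the same trained policy to every robot in the fleet.
robots = {name: dict(shared_policy) for name in ("A", "B", "C")}

# Robot C never handled a package or laundry itself, yet after the
# weight push it "knows" both tasks.
```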

    13. NK

      I watched, I watched that video, by the way. It was really nice.

    14. BA

      Cool.

    15. NK

      Very, very interesting. [chuckles]

    16. BA

      Yeah, thanks. I mean, that type of stuff is helping other tasks.... And that's kind of crazy, right? The only technology we've ever seen where that works is deep learning, and it's working; we saw that with LLMs a bit. So that's one, and two is you really want the robot, as it's making mistakes in the real world, to understand and learn from them, to have a reward for it. If the robot's screwing a cap onto a bottle and messing up, you really want it to understand how to get better at that over time. That's what humans do: trial and error, kind of like self-play. So there's an aspect of fleet learning you really want the robot to learn from, and two is you want the robot to get better as it does tasks locally, over time. The thing I keep taking away, more and more, is that the near-term result that's been absolutely astounding is that we've been able to take one model, put all the data into it, and see all tasks get better: basically really strong transfer. That's really important for general-purposeness. We really want the robots to do a vast number of things, and we want the fleet to keep learning from them. The good news is, I don't know if you have kids, but I have three kids, and they all need to learn on their own. They'll learn to walk and learn to do their own thing; you can't tell them how to do it, they have to learn through trial and error. That's not how humanoids learn. Once one robot knows how to do logistics, those are the same neural networks we can put on every single robot in the fleet, and they can all do it. 
And so you have this weird scaling law for humanoids: once one robot knows how to do something, every robot in the fleet, even if you have millions of robots at that point, can really understand how to do that task. It's going to make scaling absolutely insane once we figure out how to run this flywheel of data acquisition and this training loop really quickly. Robots are going to wake up every day and know how to do new tasks

  10. 38:3541:51

    Why real-world data beats synthetic data

    1. BA

      in the world.

    2. NK

      Could one argue that because you're generating the data you learn from, synthetic data in a manner of speaking, it will never compare with data in text form, data you can scrape off the web? There are significantly more data points if I'm teaching a model on language versus a robot doing something and learning from it.

    3. BA

      Are you trying to make the point that learning from text... Basically, you're saying text is so much more dense than-

    4. NK

      Yeah.

    5. BA

      Maybe. Yeah. You know, I think they all serve slightly different purposes. There's a lot more data out in the physical world than there is on the internet; we've kind of tapped out all the tokens on the internet at this point. So we think being able to explore the physical world is extremely important for overall intelligence. There's a thought process we can go through here: if we're not actually able to change our environment a bit, as humans do, can you really achieve real intelligence? So we think touching the world and interacting with the world is extremely important on the path to artificial superintelligence for humanity, and it's a very large dataset that we need to go out and collect. As for the text pre-training we've seen in the world, I think that's going to be extremely important for semantic reasoning, almost like a language-conditioned bridge into the human world. We're able now to talk to the robot and say... I was in the lab last week. We're collecting data from humans navigating different types of apartments and homes, taking that, and training a navigation model, basically a model where we can tell the robot, "Walk to this place in this apartment." I was in the lab, and I asked the robot to walk to the oven, and there was a table and chairs in the way, and I just told it. It walked all the way around the chairs, walked to the oven. It was perfect. And then I went to the back and grabbed a very large ladder. 
It was this big orange ladder, and I put it inside the apartment, positioned the robot away from it, behind a table, and asked the robot to navigate to the ladder. This ladder is fully out of distribution; the robot has never seen a ladder before in its whole life. But a text-based LLM or VLM has seen a ladder. You could ask a language bot, "Hey, is this a ladder or not?" and it would say, "That's a ladder. [chuckles] It's an orange ladder." So the semantic reasoning is there, but the LLM has no idea how to tell the body and joints how to move to approach the ladder and get there. And I watched the robot at that point just walk fully to the ladder, which was really incredible. I think there's a natural part of going to solve general robotics where the semantic reasoning we get from pre-trained language models is vital to pulling this off, and then there's another piece, which is how we actually interact with and touch the world, another piece of the raw intelligence that's needed. The goal for us is to bridge those two worlds and basically build the right world model, so the robot can navigate effectively in the physical world, which is much higher-dimensional than the internet or text.

  11. 41:5146:59

    Kids + humanoids: Safety & design

    1. NK

      And you spoke about kids, Brett. Would you leave your kids with your humanoid? And on this whole companion concept, how important is it that the robot doesn't look menacing or scary? Does your humanoid of tomorrow have facial expressions and things like that?

    2. BA

      Um, so actually, I've had a robot in my house now for a few months, and my kids have been there, but we monitor the robot. We have somebody with the robot; we're not letting the kids go up and hug it. We're just doing tasks, trying to understand real robot performance. True safety for me is having a robot we've designed in my home with my kids, we have young kids, fully autonomously, and we're not there today. I would not let my robot roam free for hours and weeks right now with my young kids. There's a certain track record of safety and performance I would want to see internally before doing that. That's just a really high watermark for me, and we're pushing every day to try to achieve it.... And then you touched on a few things on the design side. It's funny, it's almost like you're asking: does this end at Westworld? And it certainly seems possible to build that. I've seen robots with real, human-like faces and expressions, really good ones, I would say. Our view right now is that we are trying to go out and do useful work. We're not trying to fool another human into thinking we're human, from an entertainment perspective or maybe a companion perspective. So we've not designed the robot with facial expressions and facial features and things like that. We've designed the robot to be basically a tool for humanity. Then there's another question: do we approach Westworld or not? Which I think is... I don't know. 
In some ways I'm really inspired and want to try to push there, but we're also on a little different track here at Figure today. I think we definitely don't want to design a robot that is scary or aggressive in any way, but we also don't want to put googly eyes on it and fool you that it's a kid's toy, because it's not. If you go back and look at the early Waymos, they had the big googly eyes, and I quite despise that design language. This is a very complex and sophisticated technology, and you're trying to put googly eyes on it to fool everybody that it's a toy. So I think our approach is like-

    3. NK

      Have you seen those toys, like, Moxie and stuff like that? Which-

    4. BA

      Those are small-

    5. NK

      Mm.

    6. BA

      Those are small things.

    7. NK

      Yeah.

    8. BA

      I mean, our, our robot's like-

    9. NK

      Yeah ... like, over five foot, over 100 pounds. It's a real thing, and if we started putting googly eyes and bright colors on it, making it feel like a toy, I think it would do a disservice to our customers in the world. We need to approach this like the very high-end technology it is, and we want to market it and design for that. We don't want to make it scary, but we also don't want it to feel like a kid's toy. So we're trying to figure out how to marry that design language here internally. It's tough; we're getting better at it. As you see with the robots behind me, it's radically changing. The robot over here is made of all metal [chuckles] and silver, with wires hanging out; this is Figure 02 here, and over here is Figure 03. We're seeing radical advancements in the industrial design for Figure robots, and I think that will continue. You'll see us approaching what we think is our ideal design state. We'll see. We're also in the early innings of figuring out what user feedback looks like and where to go, so we don't have all the answers for this. I guess a question for you: do we approach Westworld, or do we stay on the current track? What's your view? I think Westworld.

    10. BA

      Okay, Westworld. Cool.

    11. NK

      Yeah.

    12. BA

      Yeah.

    13. NK

      'Cause why not, right? I think-

    14. BA

      Why not?

    15. NK

      I think-

    16. BA

      Yeah, let's do it.

    17. NK

      Yeah. Yeah, somebody-

    18. BA

      Yeah

    19. NK

      ... will do it.

    20. BA

      Yeah, that'll for sure happen.

    21. NK

      Yeah.

    22. BA

      Um-

    23. NK

      Mm

    24. BA

      ... it's possible, which I think would be pretty crazy. I think even with our robots now, if you put a wig and a jacket on one, and it was standing with its back to you, you would sometimes maybe say, "Is that a human or not?" So, um-

    25. NK

      Yeah

    26. BA

      ... because it's walking around, grabbing water or coffee at the office, things like that, you'd be like... So I think there's no reason we couldn't approach Westworld.

    27. NK

      I'm sure there's no reason why we couldn't put my face or your face on it either, at some point.

    28. BA

      We're already doing my voice. I can talk to the robot and say, "Go to the Brett voice," and then it switches to the Brett voice and talks to me with my voice. It's, uh-

    29. NK

      It's a crazy thought. Imagine losing someone close to you, say a parent, and creating a robot that looks just like them, speaks like them, and has their memories programmed in. That seems like Westworld-plus.

    30. BA

      It's just starting, you know? We're a few years into it now, and I think over the next 10 or 20 years it's going to be a sci-fi future, which is going to be really fun.

  12. 46:5949:25

    Dystopia: competitive lever in AI regulation?

    1. NK

      So, Brett, I speak to a few AI people in the Valley, a lot of the really popular ones. Do you think some of them talk about dystopia, a dystopian world, in order to create regulation, so they have a regulatory moat in place and other people can't compete so easily?

    2. BA

      Do you think they're talking dystopian a lot right now? Is that your view?

    3. NK

      I would say it's divided: maybe 60% dystopian, 40% utopian. But I sometimes question whether the dystopian narrative is beneficial to them, if a regulation were to come out that kept many others from competing with the incumbent leaders.

    4. BA

      I think it's a little hard to say. My inner circle of folks that I spend time with in the office is so highly optimistic about the future of what this can do. Too much pessimism is a bit of a poison, so we have aggressive optimism here at the office and a value for it at the company. My whole inner circle, even outside the company, is just so optimistic about a positive future that we want to live in. I think that's the first step: thinking about what that positive future is and trying to approach it.

    5. NK

      Yeah.

    6. BA

      Like, I don't have any interest in spending most of my material life on a company that's ultimately going to help make the world worse. That wouldn't make any sense. So I think this is all headed to a really good spot. There's obviously some chance these things could go bad, which is always the case with any technology we're building in the world; that probability is non-zero. We've got to be very, very thoughtful about making sure we don't approach that, or if we are approaching it, that safeguards are in place. And also, we should paint the picture of what a positive future for humanity looks like and asymptotically approach it as fast as possible. So for me, with humanoid robots, we look at this and say, "This is going to be incredible." We want to help the world in so many ways, and it's going to be like, who wants to do dishes? I woke up this morning and was helping put away dishes from the dishwasher while my kids were eating breakfast. I don't want to do that anymore. It's terrible.... And I don't think anybody really wants to do that kind of stuff. So I think that's going to be really critical. The AI systems we're seeing today, advanced reasoning models, are going to be extremely beneficial for the world, net-net positive. And, I don't know, I'm rooting for that future, so I'm heading down that track. But I certainly understand there's a good number of folks sitting there, I would say, scared, and maybe rooting against it.

  13. 49:2552:17

    Robots’ eyesight & perception

    1. NK

      Brett, you didn't cover eyesight. I tried to read up about it, and I watched a bunch of people talk about it, but I couldn't understand it. How does eyesight work?

    2. BA

      Yeah, so basically... It's super simple. The input to the model is the robot's eyesight, [chuckles] what the user is prompting or speaking, and the joint states, where the joints are, where the robot is at in time. So if you take a sliver of time, a timestamp, you get an image, which comes from a video frame; you get the prompt, like "Go do dishes" or "Walk to the ladder," as I said before; and you get the current state of the robot. That's the input state.

    3. NK

      Mm-hmm.

    4. BA

      And then the output-

    5. NK

      Mm-hmm

    6. BA

      ... is basically the action space of the robot: sending out targeted commands for the joints, "Where do I position my body to do this?" And that loop is running really fast. So the robot literally sees what humans see, from the head.

    7. NK

      For me to understand in a simpler manner: you're saying it's not like a Waymo, which is more radar and lidar, but like a Tesla self-driving car, which is more cameras?

    8. BA

      Yeah, exactly. It's almost like putting on a VR headset and seeing the world through the headset; that's what the robot sees. Very, very similar. We have two cameras on the face, and they just point out-

    9. NK

      Mm-hmm

    10. BA

      ... and they see the world. In our launch video for Figure 03, when the robot starts doing the laundry and grabs one of the pods, that's actually the video coming from the robot's head. It looks like you put a camera, a GoPro, on somebody's head while they move around the world. That's what the robot sees: egocentric video, similar to having a GoPro on your head. It has hands, it has a prompt of what to do, a goal in mind, a behavior; it knows its state, and it sees what it needs to do. So in a lot of ways, it's like visual servoing at scale. If it needs to grab the cup, it sees the cup, it has "grab the cup" as a command, it knows its state-

    11. NK

      Mm-hmm

    12. BA

      ... where my hand position's at, head, torso, body, and then it sees its hand as it relates to the cup, and it just sees, like, you know, it needs to approach this. And it's almost like building, like, a closed-loop planner on how to go do that, and that's what the neural net's doing. It's outputting commands of where to position the body to go achieve this target, and it's doing this over, like, you know, thousands and thousands of demonstrations of humans doing this exact work or similar work, and then we're using that for training. So, um, our training set is, like, human data, so we use humans for navigation and manipulation. We have humans, like, basically doing this work. We're recording this information, and we're training Helix, which is our neural network internally, to be able to do this on the robot.
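
The loop Brett describes (image plus prompt plus joint state in, target joint commands out, repeated at high rate) can be sketched roughly like this. This is a toy illustration only: the names are hypothetical, and the "policy" here is a trivial stand-in for the neural net, since Figure's actual Helix internals are not public.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    """One timestep of policy input: egocentric camera frame, language
    prompt, and current joint states (the three inputs described above)."""
    image: List[List[float]]  # stand-in for a video frame
    prompt: str               # e.g. "grab the cup"
    joints: List[float]       # current joint positions

def policy(obs: Observation, target: List[float]) -> List[float]:
    """Hypothetical stand-in for the neural net. A real vision-language-action
    model would infer the target from image + prompt; here we just servo each
    joint a fixed fraction of the way toward a known target pose."""
    gain = 0.5
    return [j + gain * (t - j) for j, t in zip(obs.joints, target)]

def control_loop(joints: List[float], target: List[float],
                 prompt: str, steps: int = 20) -> List[float]:
    """The fast closed loop: observe, output joint commands, repeat."""
    for _ in range(steps):
        obs = Observation(image=[[0.0]], prompt=prompt, joints=joints)
        joints = policy(obs, target)
    return joints

final = control_loop([0.0, 0.0], target=[1.0, -0.5], prompt="grab the cup")
```

The point of the sketch is the shape of the loop, not the policy: each iteration re-observes the world and re-emits joint commands, which is what makes it closed-loop rather than a pre-planned trajectory.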

  14. 52:17–56:59

    Why humanoids are possible today

    1. NK

      What has changed, Brett? Like, maybe a year or so ago, I had Yann LeCun, who's one of... You know, he worked on the initial AI projects and is quite an authority on it. He came and explained to me how a transformer was created, uh, how the generative AI models came about. I'm, I'm quite an idiot, and I couldn't understand all of it, but what has changed in the world that is allowing for a humanoid to be built today that could not have been done five years ago?

    2. BA

      Well, one is, like, deep learning just works, and I think that's an extremely important part of this. We are able to take very complex things, like a complex robot with high degrees of freedom, which we talked about earlier, and output actions out of it using deep learning models. That's a thing that, like, didn't exist five years ago in the right way for us to make happen. Uh, but also, like, the hardware: we have, like, really good inference compute and training compute now to make this happen. People don't talk about this a lot, but the hardware's really gotten good. Um, if you go back and, like, watch, like, old, you know, humanoids, like, 10 years ago, a lot of them were hydraulic systems. They weren't even electric. They were, like, leaking oils. They were extremely dangerous around humans. Uh, they were very big and bulky, sometimes loud.

    3. NK

      Yeah, yeah. I was, I was watching this Boston, uh, Dynamics founder speak about how he still loves hydraulics 'cause of how much power output he's able to get out of it.

    4. BA

      Yes, yes. Like, you weren't able to get enough power out of these systems 10 years ago. Uh, like, the power density, torque density wasn't really high enough, so you had to choose some other technology, which was the hydraulic system at the time. We've now had the torque density and power density of actuators and battery systems get good enough that you can run, like, fully electric systems instead of hydraulic systems, and that makes the robot smaller, lighter weight, quieter, faster. Uh, like, the run time's longer. It's just, like, a better... It's where you want to go when you go into market. You don't want to bring to market a hydraulic system. There are also, like, local sensors on the robot, cameras, torque sensing, other things that, like, we've spent a lot of time on. Some of these were, like, do-or-die products internally the last three years, that we've advanced internally ourselves, that we think are probably state-of-the-art. Like, nobody else in the world has done systems like this. That has allowed us to run neural nets on the back end. So I think, like, hardware is one, so, like, electric hardware, I would say specifically, and then I would say the deep learning, and the algorithms and compute around there, have been fundamental breakthroughs that weren't possible 10 years ago.

    5. NK

      In a way, would you also say the big change has been a humanoid can today leverage prior knowledge in a manner that it could not?

    6. BA

      100%. Like, it has... semantic grounding, uh, it has memory, uh, it has, like, language conditioning, so humans can tell it, like, to do something, like grab the coffee cup on the table, put it down. It knows exactly what that really means, like, from a human. It doesn't need, like, a translator to the robot world. It automatically understands that, knows it's the coffee cup that I talked about from, like, a language prompt or a voice prompt. So, um, all of that, you stitch it all together, like... wow, it looks like it's possible now to make this happen. It looks like general robotics is, like, actually possible today. You know, not yet working ten hours a day in my home, but it does work in my home in some cases for, like, these individual tasks that are hardest. Like, we can do laundry. Like, if you think about the things you would need to see to make general-purpose robots work, the hardest thing in that book is doing, like, high-rate manipulation of deformable objects, like laundry, towels, things like this, like folding a towel, for instance. That, like, constantly moves. You grab it, it's not rigid. It's just, like, a really hard problem to solve. There could be, like, an unlimited amount of variations a towel can be in. We have neural networks that can do that now, and that for me felt like the last chapter of the book. It's like, well, if we can fold towels with a neural net on a humanoid, then we have solved general robotics. The, the screwed-up thing is that we've actually already solved-- we already know how to do that. [chuckles] We can fold laundry, and there's other things in the ecosystem that we're working on now to get that working fully end to end, so it's like it can work for hours like that without any human intervention. Some of these chapters now in the book are filling out. 
Some are left open now, and we're just, like, we're trying to fill the whole ch- whole book out in, in, in some way. But, like, the, the stuff that you would've asked me three years ago, if we were doing this podcast when we first started, I would've said, like, "Here's all the laundry I need to go fold," and we can do that. I'd be like, "We've solved general robotics." Like, we're there. So in some aspects, uh, we've, like, solved some hard parts, but we still have a lot, like, a lot more to go, but we, we see it now. We see, like, a little light in the tunnel to go do this, and what comes out of that tunnel is, like, a robot I can put in your home that can do ten hours or twenty hours of work a day and do the things you don't wanna do all day long.

  15. 56:59–1:00:17

    Other players in the industry

    1. NK

      What's happening with the others in your industry, Brett? Like, I've looked at Physical Intelligence. They seem more like they're building the software more than the hardware. Neo Gamma, Boston Dynamics, Tesla. Who's doing what that's very cool? I'm gonna play the video of your robot doing the segregation of the packages, but what are the others doing that's quite cool?

    2. BA

      I don't know. It's, it's kind of been interesting. I feel like some of the older generation of robotics companies in the space, the last, like, three years, have, um, in some way, kind of collapsed. Groups I've, like... You know, ten years ago, I would watch and look up to, I think have just kind of, like, somewhat faded away. I think there's been this emergence of, like, new companies now in the space that are... Like, I think that are quite interesting overall. The thing that's been most interesting for me is, like, everybody's trying to make some sort of bet, and there's a lot of, like, you know... In, like, China, you're seeing, like, a lot of just hardware-only bets. Like, they're just building hardware. They're selling it. Uh, they're generally not working on hands. They're generally not working on AI systems that much. There's, there's no, like, full-stack solution. It's like, here's, like... You know, you can go to Unitree and buy robot hardware today. It's like, go buy it, and then it's up to you to go make it work, and there's not much you can really do with it today. And then on the opposite side, there's folks who are just working on pure AI, like no hardware, and you mentioned some of these here that, like, don't care about the hardware, care about the AI systems. In our mind, you really can't choose one or the other. You have to go down a road where you do the hardware, you do it incredibly well. Um, you do the AI, you also do it incredibly well, and then you put it all together really well. Like, the companies that can do really well are gonna do both really well, and what they're gonna output is a robot that can do useful work, like, autonomously. Like, what you wanna see is, um, a demonstration of a robot doing something that's, like, not teleoperated or, um, like, coded. 
      And what you're seeing now in the world is, like, I would say most major competitors to us at this point are putting out most of their content and updates teleoperated by a human, and I think it's, like, perhaps one of the most deceiving things I've ever seen. [chuckles] It's like, uh, if a self-driving car pulled up next to us, and we found out there was some guy from Tennessee driving it, we'd both be like, "That's, uh, that's not what I thought this would look like." [chuckles]

    3. NK

      Can I push you a bit on Elon? The way one would value Tesla's stock, as a multiple of earnings to the price the stock trades at, suddenly the humanoid is starting to form a part of how one would evaluate the stock. What are they trying to build?

    4. BA

      I don't know what they're trying to build. They're building a humanoid. I don't, um, know anything more than you know, what's public. Um, like, internally here, we kind of, like, put, like, a little bit of horse blinders on in terms of, like, our focus on what we're doing. Um, you know, we're kind of reasoning from the ground up of, like, what we should really do. We don't really pay too much attention to, like, what the competitors are doing. I think that would've... I think that would've ended, uh, pretty badly the last three years if we did. I think, like, for instance, like, a lot of groups in the space don't work on, like, high-dimensional hands, and we made a big bet internally. Like, we're gonna go all in on hands, make 'em... Uh, over time, they'll be complex, and they'll be extremely important for highly dexterous manipulation. Tesla should be really great at this, like, uh, so I'm rooting for them to do well.

    5. NK

      Right.

  16. 1:00:17–1:04:06

    The first problem humanoids solve

    1. NK

      What do you suspect will be the first problem a humanoid Figure 03 will solve, or Figure 04, 05, or whatever? First real-world problem.

    2. BA

      Since April, we've had Figure 02 in our first commercial customer, working every day, uh, every workday at the client for ten hours a day. It's, it's doing a real task, uh-

    3. NK

      Yeah

    4. BA

      ... on the real production line, helping to build cars. This is not, like, high volume, so we're like, we're... But the point is, like, we're learning how to operationalize robots. We're learning how to get robots to run every single day. Like, how does it work autonomously? Are there any human interventions? Are there hardware failures? Like, are there software failures? Like, how do we make it better going forward? And we've got now, like, close to six months of, like, real-world data. We think we're maybe one of the first in the world to have done this, of a robot running every single day in the commercial workforce in, like, really tough environments, so it's not like it's easy. [chuckles] That's been good. I think for Figure 03, we're making a much bigger push now into the home... than ever before. So, um, we think the home-

    5. NK

      I thought you were focusing on factories and auto plants and stuff like that, and companies like Neo Gamma were focusing more on the home.

    6. BA

      We've made a big, big push, uh, starting the last six months into the home here, and w- we're making an even larger push on Figure 03 into the home. Figure 03 will-

    7. NK

      You think that'll happen first, home? Because the factory will not probably need the humanoid form factor, 'cause the factory can be rebuilt or remodeled-

    8. BA

      Yeah, I don't-

    9. NK

      - to support.

    10. BA

      I don't believe that thesis, no.

    11. NK

      No?

    12. BA

      Like, you go into the factories... You could rebuild a factory from scratch, greenfield, and then you'd still have a tough time removing all the humans. It's like, uh, you see some of the most automated manufacturing and logistics companies in the world, and they're still heavy human. They're heavy human.

    13. NK

      Right.

    14. BA

      I think here, to build, like, millions of robots in the world, our view is that the humanoid is, is the right path to go make that really happen. Uh, listen, I think three years ago, you would- if we were on this call three years ago, I would've said, like, the, the home is, um, such a harder problem than the commercial workforce 'cause it's much more variable. Like, both of our homes will be different. If you go to another home, it'll be different-

    15. NK

      Mm-hmm

    16. BA

      ... like different toasters, different appliances, everything. So you need, like, real common-sense reasoning in the home that you might not necessarily need to ship into, like, [clears throat] say, a manufacturing logistics company, day one. So our view was, like, shipping to the commercial workforce buys us time to solve the home. My view on this has completely flipped in the last year. I think the home is near term. I think the home is solvable. Um, I think we're basically data-bound at this point. So we launched a program two weeks ago called Project Go Big, and we're trying to go build the largest, kind of like internet-scale pre-training set for robotics in the world. We're trying to build basic data to teach the robot how to do things, uh, across various tasks in the home. Every week now, we're collecting data in different new homes and apartment buildings here in the US, and we're using that for training Helix. I think over the next twelve months, we'll probably build the biggest robot pre-training set in the world to go make this happen. Um, it's like LLMs in some way. We're just data-bound, and the more data we're seeing, the better [chuckles] the models are getting. So on our side, we really can't scrape the internet for robot data like LLMs can. We have to go generate it. And the good news is a humanoid is like, like a human, so we can, like, capture this from human data. Uh, we've already been in my home the last, like, two to three months, uh, in, like, alpha testing. Um, we've showed some of the results last week of, like, what robots can do in homes, and, you know, in those cases, those tasks were all done through Helix. You know, we will do more of that over the next twelve months, and we will also spend time shipping robots commercially. 
We think it's nice to get outdoors and get robots out to the real world that's really messy and learn how the robots behave. It's been a huge, huge benefit for us.
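
The kind of human-demonstration data he's describing (egocentric video, a task prompt, and per-frame state) might be organized along these lines. This is purely illustrative: the record layout and all names below are assumptions for the sketch, not Figure's actual data format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DemoStep:
    """One timestep of a demonstration. The frame index stands in for the
    egocentric camera frame; joints/action are the recorded state and command."""
    frame: int
    joints: Tuple[float, ...]
    action: Tuple[float, ...]

@dataclass
class Episode:
    """One recorded task in one home, conditioned on a language prompt."""
    home_id: str
    prompt: str
    steps: List[DemoStep] = field(default_factory=list)

def record_episode(home_id: str, prompt: str, trajectory) -> Episode:
    """Pack a raw (joints, action) trajectory into a training episode."""
    ep = Episode(home_id, prompt)
    for i, (joints, action) in enumerate(trajectory):
        ep.steps.append(DemoStep(i, joints, action))
    return ep

# A tiny fake trajectory: two joints moving toward a target pose.
traj = [((0.0, 0.0), (0.5, -0.25)), ((0.5, -0.25), (0.75, -0.375))]
ep = record_episode("home-001", "fold the towel", traj)
```

The "data-bound" point maps directly onto this schema: scaling the pre-training set means collecting many such episodes across many homes and tasks, since robot data, unlike text, cannot be scraped from the internet.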

  17. 1:04:06–1:07:54

    Is China ahead in robotics?

    1. NK

      [electronic sound] Whenever I speak to people in the US, uh, I've been spending a fair amount of time there this year, everybody talks about how China is so far ahead in robotics. Why is that? Is that manufacturing at scale? What are they doing that others can pick up on?

    2. BA

      Yeah, I just don't believe this. There's a lot of chatter-

    3. NK

      Okay

    4. BA

      ... like, about this constantly. Like, I look at, like, kind of manufacturing 2.0 and robotics 2.0. I'm interested in, like, the new generation of manufacturing, the new generation of robotics going out, and when you start looking at that space, the US, and particularly Figure, is ahead of every single Chinese company I've seen globally, of what I know publicly, in terms of capabilities. Like, watch the robot do advanced things with neural nets autonomously over long horizons. And the answer is, like, there's really nobody else in the world even close to what we're doing there. Um, there is a ton of manufacturing capacity in China, but the problem right now is not manufacturing; that problem is solved. The problem right now is solving a general-purpose humanoid robot. Once you solve that, then it becomes a data and manufacturing constraint, but we are in a race to solve general-purpose robots. We are not in a race against China on manufacturing robots right now, and it doesn't matter if China makes a million humanoid robots tomorrow. They just don't work very well, [chuckles] to be frank. And, um, maybe they will in the future, but what I've seen today is they're very far from even where we're at at Figure, and I know the work they need to go do to get to where we're at, and I know the work we need to go do to do general-purpose work. It's not gonna happen tomorrow. It's not gonna happen next month. There's, like, fourteen peaks we have to go climb to get there. So then, once we transition from there, we have to go make a lot of robots, um, and get good at manufacturing. What the manufacturing looks like here, it looks like consumer electronics manufacturing. And I think when people think about manufacturing and get scared of it, they're scared of automotive manufacturing. 
Automotive manufacturing is, uh, like, the most complex system I've ever seen in my entire life, all put together to manufacture cars. It's, like, hundreds of times harder than manufacturing consumer electronics. Like, we manufacture consumer electronics a lot by hand in the world, and we ship a billion phones, whatever it is. Making cars is extremely tough. Making a humanoid is not making a car. It's consumer electronics. You can hold every part in your hand. You could theoretically build millions of these by hand. Uh, we won't, but you can. So the constraint for me is not... Like, if we were sitting here today and trying to invent the next, like, AI device or next phone, like, small phone, we wouldn't say, like, "Well, the hard part's gonna be making a lot of them." We would say, "Getting the AI systems and the hardware designed and, you know, engineered and prototyped" is probably the harder part. Making them is like... You know, we don't sit here and say, like, "You're gonna die making, like, consumer electronics products." Like, everybody's gonna be able to make it. I think the humanoid is similar. I think you can manufacture these. We're doing it now. I think manufacturing is hard but less hard than automotive, and I do not think whoever has the most manufacturing capacity wins this game right now. I think it's whoever has the best general-purposeness, and that's at the hardware design layer and the AI layer and the overall integration of those systems to do useful work. So... Right now, America's winning this. I think globally, um, our goal is to beat China at this. We're working seven days a week here to go do that. Um, and, like, you know, at the end of the day, I'm, like, pro-human, so I hope, like, whatever happens, we, um, we go out and, like, build incredible things for humanity and make it really work. But I think it's been a mischaracterization that, like, "Oh, China's got this capacity. 
Oh, China's got this head start, and it's hard to keep up." That's completely wrong in my mind.

  18. 1:07:54–1:10:38

    Ending partnership with OpenAI

    1. NK

      I read somewhere that you decided to move away from the incumbent hyperscalers, the models. Like, you had a partnership with OpenAI, which stopped in twenty twenty-five. Why is that?

    2. BA

      The short answer is... OpenAI led my Series B. We generally were working on some kind of advanced models with them for about a year. And, um, Figure ultimately chose to kind of, uh, leave them and go in our own direction. The summary is that we were just way better at this stuff internally. We were good at getting models designed and embedded on the system and running on a robot and working. We were just really good at it, and I think we've been good at it, and we viewed the right path as just being able to do it ourselves. Most of the advancements we made during that period of time were all done internally, um, and we just felt the better path for us would be to design our AI systems ourselves, which we've been doing basically since the start. It also became difficult for us, like, even... You know, we have a really large AI team here, and they're, like, some of the best AI robotics, uh, folks in the world. It also became hard, like, hiring people when we had a partnership like that, where folks felt like we were outsourcing our AI, which we weren't doing; we were actually doing almost all of it ourselves internally. And just, like, for many reasons, it became clear this was not the right path for us to go down. The path for Figure is to go alone and do an extremely good job of bringing it here internally and understanding the models really well, down to the substrate levels, and then be able to put it on embedded systems and make it really work. And it's not just, like, sitting in a room, like, training models and, like, throwing them over a fence and making it really work. You've got to really... This is hardware. You've got to be able to sit there with the robot and get it to work, and that is just, like, a, uh... It's a lot of trial and error. It's a lot of evaluations. 
Um, it's a lot of hard work. But at the end of the day, the buck stops with being on robot and making that stuff work, and the team here is just the best in the world at being able to do this. I mean, they're from DeepMind and, like, some of the best places in AI, and they're sitting here with the robots. They're sitting here with a GPU cluster and training models, and we're doing that daily now and evaluating whether it works really well. I sit here and watch us do generally anywhere between, like, five and twenty evaluations of new models every single day, and, like, we've been doing that now for a year and a half, two years. Like, every day, we're testing new things and trying to, like, hill climb. That's kind of led to the results you've seen today, and then what you haven't seen is, like, the next generation of results that we're working on here internally, and, um, I just can't wait to show those as well.

  19. 1:10:38–1:13:29

    When does AI money turn into real revenue?

    1. NK

      Brett, so my primary job is that of an investor, stock market investor, public market investor. When I look at the world of AI, if I were to extrapolate and say world output is about a hundred trillion, and let's say half of that comes from services of one kind or another, so say fifty trillion in services, for the amount of money that has gone into AI, we haven't seen revenue from AI commensurate with that. When does that change, and how does that trickle down into humanoids? Will it be the same or similar for many years?

    2. BA

      I mean, my, my view is, is this will take some time, but, like, AI systems will just eat everything. [chuckles] S- software ate the world prior to this, and AI will eat software and the rest of the world, and it- it'll, it'll take some time. The- it's hard to kind of take these existing habits and things that are embedded in society and, and radically transform them over, like, a very short period of time. But over, like, five, ten, fifteen, twenty years, I, I think things will radically transform. My, my view is that there's gonna be a generational company to be built that will be able to put ultimately billions of robots on the planet that will build, like, the largest company we've ever seen in the world by, by a long shot, and it's driven from, like, the metrics you said. Like, you know, I think it's, like, a little under half of GDP is, like, human labor. It's not just, like, getting to the point where, like, okay, you have the biggest market in the world now, which is, like, human labor, and you're like, you can basically build a human, a mechanical human. How many, like, cheap humans that work twenty hours a day would you want if they're, like, extremely affordable and, and they can do human-like performance, and they're like, you know, one-tenth the cost of a human or whatever it would be over time? Like, unlimited, right? As long as I keep putting more humans, like, more mechanical humans in, and they keep giving me ROI output, you'll just keep feeding them into the economy. And GDP, in, in a lot of ways, is like a, kind of this by-product of, like, human productivity, and we even measure GDP on a per capita, like, per human perspective is how we measure GDP. 
[chuckles] So if the denominator there, per capita or per human, is, like, per synthetic human or per humanoid, I think you have, like, this unbounded effect to GDP that it's not just like, okay, the biggest TAM in the world, but that, that TAM could, like, grow, like, by many factors here in the future. So I think, um, it will take time because we can't just, like, snap our fingers and, like, appear-- We just talked about manufacturing, but we can't snap our fingers and make a hundred million robots. But I think through our lifetime, we will be able to build billions of robots and get them out to the world, and they will ultimately start building their own robots from themselves. What will emerge on the humanoid side will be the largest company in the world by far, and then on the AI side, you sense, I think AI is just gonna eat everything. [chuckles]

    3. NK

      ... An

  20. 1:13:29–1:17:24

    Where to invest?

    1. NK

      advice to an investor who is looking at the various AI companies, let's say, uh, OpenAI, Anthropic, Alphabet, all of these big incumbent players in the space. If you had one dollar to invest, where would you put it and why?

    2. BA

      Is it private or public, or both?

    3. NK

      E- either. Either.

    4. BA

      Personally, I put all my money in my companies. Like, I don't make public investments. I put it all in here. So I would tell you, biasedly, put it into my companies.

    5. NK

      [chuckles]

    6. BA

      But, um, listen, I'm a big believer in, um, kind of, like, growth investing, investing in things I really believe in. It'd be at the frontier, for me, of deep tech. Uh, I think it's a-

    7. NK

      And which company is that?

    8. BA

      It's just, like... Deep tech would be, like... I work in deep tech. I mean, Archer and Cover and Figure, my companies that I've started, are deep tech companies. And they're, like, at the frontier of, like, really hard things that look possible, and if solved, I think could be, like, radically important for a lifetime.

    9. NK

      But could you pick a company which isn't one of yours, like from the ecosystem?

    10. BA

      You know, like the, the new-age LLMs, I think those will go really far, the OpenAIs and Anthropics, and, um, I love, you know, what SpaceX is doing and Tesla, I think these groups. I think these are, like, you know... I'm in the Bay Area, so Waymo is a good example. Like, Waymos are everywhere here. It's, like, one of the best product experiences I've ever had.

    11. NK

      Yeah.

    12. BA

      Uh, Waymo would be on my list. Like, these will be, like, generational companies. Uh, they're certainly on track to be, like, generational industries, and then it's finding the right players inside of those that are, like, willing to play, you know, play hard for a long period of time and win. I think these are, like, uh, areas that I would be, like, going all in on. I'm spending my life going all in on some of these, and I always, like, think it's important to be excited about what you're, like, say, investing in or spending your time in. Um-

    13. NK

      Mm-hmm.

    14. BA

      So I'm a, I'm a more believer of, like, kind of those areas. Anything else?

    15. NK

      Don't wanna pick one?

    16. BA

      I mean, listen, I would probably pick Waymo, uh, as my, as my pick. I think I, you know, I think it's like a fifty billion market cap. Uh, super underrated. Like, I think if you haven't experienced sitting in a Waymo, you wouldn't really quite understand.

    17. NK

      I have, I have. I've used it quite a bit in the Valley.

    18. BA

      Yeah, what was your experience?

    19. NK

      Incredible. This one particular time, I was at a traffic signal in a Waymo, and there was a fire truck coming behind me, and my Waymo actually broke the signal, did this maneuver, went to the right, and allowed the truck to pass. It was incredible that a self-driving car could do that.

    20. BA

      The conventional wisdom says this is, like, not possible. If you go out and, like, talk to a lot of people, it's like, you know, "Self-driving is not quite here." Then you, like, ride in a Waymo, and you're like, "Oh, man, it's been here. It's here right now," [chuckles] and it's unbelievable, and it's gonna keep getting better and better. And I think of that for my kids, too. My kids are young. I don't want them to drive. Driving is too dangerous. I have some close friends that are senior at Waymo. Like, I think Waymo would be great. I think the market cap is reasonable. I think they've really proven their operational readiness to be able to make it work. I think there's a long way to go now to scale that out to, like, a global fleet of, like, millions of cars. But I think the recipe is really there of, like, how to go do it, and the learnings are definitely there. I think the stack will evolve on the sensors and models and stuff over time to be more scalable. And they've-

    21. NK

      Yeah

    22. BA

      ... yeah, I think they've been at it for, like, sixteen years now. So it's like, it's like almost like-

    23. NK

      Yeah

    24. BA

      ... where did this come from? They've- there's been people there working for sixteen years on this problem, and that's also, like, I think, really, uh, inspiring, to see folks, like, really playing for the long game and delivering a product that's, like, super complex. Like, they, they have the complexity we have, but they don't- but then they have the safety thing of being able to not hit the road unless you know for sure you can make the right decisions at, like, fast speeds. And, you know, when we're walking at a, you know, max a meter a second, we don't really have those same safety conditions that a car would have in those cases. So it makes it even harder in terms of complexity that they have than we would have here. So being able to see that level of complexity, I think, is just astounding.

  21. 1:17:24 - 1:22:50

    What should you build in humanoids?

    1. NK

      So Brett, my community is largely Indian-origin entrepreneurs across the world who are looking to build something. If I were to extrapolate that humanoids have the biggest use case in societies with, say, a declining demographic dividend or really high labor cost, for countries like India, where the cost of labor is still quite low compared to, say, America, for example, and if I were to be a bit more candid, the access to risk capital is significantly lower here. What do we build? What does my community build in the humanoid space?

    2. BA

      What, what I can relate to on my side is, like, I, I grew up on a farm, came from very humble beginnings. Had, uh, [pauses] no to negative money most of my life. And, um, I think a couple pieces of advice I have just for folks in general is, like, I have a, I have a big thesis on just, like, just going for it. Like, I've spent, I spent a decent amount of my career as an entrepreneur, like, de-risking into areas where I felt like are possible, um, especially in software, that were like, um, hmm, you know, like, weren't like the really big, big ideas. And I think with, you know, Archer and Cover and Figure now and stuff I'm working on, these are like- these are big ideas, and I could really get to go for it. I've learned a lot from that experience. I've learned that... a few things. One is, I think, um, when you're going for these big ideas, they generally have, like, bigger markets, like bigger TAMs. And I think investors like to fund bigger, harder things that have, like, bigger outcomes. They just like, like that binary, like, risk reward. So I think you just access the capital. I've seen it's generally easier when you're working on harder things that can have bigger outcomes than working on smaller things like a, a calendar app or whatever it would look like. [chuckles] Um, I think the second thing is people from an employee base that wanna work for you, you, you need to hire, like, really good people. They wanna work for the really hard problems. They also don't wanna work for a calendar app. They wanna work for, like, a hard, really important problem for the world. Um, and then I think, like, the motivation and dopamine levels of folks that are working on these things is also harder, and it's generally not, like, ten or fifty times harder.... generally, we're working on stuff that's probably, like, multiple times harder, but the reward if it pays off in terms of, like, is, is like, is, like, tens of folds hi- higher. 
And so, uh, one thing I would say is, like, just try to go for it. And I know that also depends on access to capital and the cost of that, and talent, and everything else. I, I understand that, but, like, I have, like, you know, my life, I just, like, just, just try to go for it. [chuckles] I think that's one. Uh, two is, um, the humanoid space, like, it needs enormous help. We need help on compute and training. We need help on data collection significantly over time. Um, we need help on components. Like, one thing I've really realized is, three years ago, I thought there was a more, more of a mature supply chain. We can go out and get motors, and cameras, and sensors, like, the world has tons of- like, tens of thousands of different types of motors. Like, and what we found is we've had to, like... None, none of them are very good when you bring them in. And you start looking at it, like, the best motor we had or actuator was, like, twice as big as we needed it to be and twice as heavy. We're like, "Ah, man, what are we gonna do?" We talked to the vendor, they're like, "This is what we got. If you want to redesign it, it takes, like, two years, and maybe we'll get it, maybe we won't." Uh, and I was like: "Oh, man, we gotta, we gotta go do this ourselves." There's a huge area of supply chain that needs, that needs help on. Um, w- we need help on go-to-market activities as well over time. Um, you know, I think what's been maybe more challenging with humanoids is that we've had to vertically integrate almost everything now, which also makes it tough for, like, community folks trying to help out. Um, like, you can't build on top of the humanoid similar to what you can build on top, like, with LLMs today, um, and use them like an API. So hu- we are literally designing the full stack. They can go in your home and do everything, um, down to, like, the AI models, and the middleware operating system, the firmware, the embedded software, all the hardware. 
So it makes it a little bit more challenging for a startup to pop up and be like: "Yep, we'll use you," but there are opportunities, uh, today for this stuff. Um-

    3. NK

      And how do we find... How do we find these?

    4. BA

      I think one thing I, uh, you just got to talk to the right folks in the industry, and then you've got to start building stuff. You just gotta go for it. Like, the, like, one thing I always see for entrepreneurs is they have a hard time, like, there's a lot of, like, friction, like, a lot of inertia to get started. Just go build some stuff. Like, go out there with a little budget, go build a little website, go out there and, like, cold call your way through it. Like, you gotta get out there and just start trying some stuff. I think this trial-and-error period is, like, extremely important. It's like a shark swimming: if, like, you stop, you die as an entrepreneur. So you, you gotta keep just going, and you'll need to go do some trial and error. You're gonna talk to everybody in the space. There's a bunch of conferences for humanoids and, uh, ro- like, like embe- like, embodied AI. Uh, you gotta get into that network and understand their pain points, understand where, like, where the business is headed, what things the- those companies might need, and then you gotta start just making inroads into them, and you gotta start, like, building and showing value. Like, for us, like, we have, like, we have, like, o- over a billion dollars of cash. If there are things and problems in our landscape, we will go fund it, and we will bring people in and go solve those. So, like, um, if somebody approached us, "Hey, Brett, or Figure, or whatever, we've done these things, here's how we can really help," we, we, we trial and test those, like, frequently and see how that could work. I think it's just a lot of hustle. And, um, if you're out there and you're listening to this, like, just, just go. Just, like, start going, start building. Uh, you can do a lot on, like, a small budget and make up some traction.
And that's how this all starts, and then you learn a lot from there, and then you ask a bunch of questions, you get feedback, and you keep going, and make that recursively better.

  22. 1:22:50 - 1:28:42

    What’s next for social media?

    1. NK

      I'm gonna digress and ask you a question. I've been exploring the idea of building a social media app out of India because I think the incumbent players in social media have gotten boring, for the lack of a better word. Any advice on what I could build around social media?

    2. BA

      I feel like I still don't have a platform that I really love, even here. Um, I feel like there's, like, certain aspects I can go to certain places. I can go to, like, LinkedIn and do some stuff professionally. You got Instagram, you got X. Like, I think they all have, like, very different, um, experiences and ways to deliver content, and I feel like the world really hasn't figured out what the right platform is there yet. Um, I think there's something to probably do with, like, an AI-native, like, social networking platform that could be, like, w- w- I think, really important. And I don't quite live in that world to understand that well enough, but I think there's... I, I would- I would start with an AI-first native experience, and I think there's a... There, there are some gaps in what certain platforms are able to deliver today and also what the kind, kind of content I want to consume and give out that I think, um, I think they can give rise to a new experience for folks. And I think, I think we also need it. Like, we're like, "I'm burnt out. I don't use Facebook-"

    3. NK

      Yeah.

    4. BA

      "- I don't really..." You know, and-

    5. NK

      Yeah

    6. BA

      ... LinkedIn. I just, like, don't use these things. I just- we need, we need something new and fresh, and there's a, there's a, there's a way to do that. There's a way to get, like, the multimodality of maybe what I get across some of these other platforms into one experience.

    7. NK

      Any advice on who I should hire for something like this, Brett?

    8. BA

      I've always, um... I think my conv- like, the conventional wisdom of everything is to go out and hire somebody, like, really experienced, that makes you feel really good, from a really good background, that's at a successful company. And I found, like, that playbook is just complete crap. Like, throw that right out the door. And, like, uh, even now, like, Figure's gotten to a point where, like, we have these big shots, like, knocking on our door, wanting to come work here now. And, like, big companies, whatever else, and it's just, like, not the right... Like, if you look at every, like, generational company, it's not like they went out and picked out, like, the VP here, and the VP here, and the VP here, and put them together. It's like the opposite of what Meta- I think Meta's doing that right now, right? Look at their, uh-

    9. NK

      Yeah

    10. BA

      ... Meta's, like, uh, superintelligence lab. It's like you're putting, like, fifteen Tom Bradys together and making that work. It's just gonna- it's just immediately gonna- it's gonna immediately collapse. [chuckles] Like, it's not gonna work.

    11. NK

      Mm-hmm.

    12. BA

      And I think, um, my view is you just need to find... The core axiom of what I look for here for talent is people that really care. And I can't, I can't find any better way to describe it than caring. They care a lot about whatever... You know, you're trying to hire somebody to do this. They obviously have the technical background to go do it, but even if you don't quite have the perfect technical background, if you have, like, a high aperture for learning-... You're gonna go out and figure out how to do that. You're gonna learn to code or learn to vibe code, or whatever, whatever you need to go do. So I, I have a big thesis on trying to find folks early stage that really care, and that, for me, it means, like, a lot of shot on, shot on goals. It means, like, getting myself out there, talking about what I'm doing, and trying to find, like, this big net, cast this wide net of folks that wanna come in here and work with me on this, like, really important project or whatever it is. And I think for you here, it's like, uh, finding folks that would really care about this problem of trying to build this, like, next-generation social networking app in India that's, like, native AI first. And I think the first step is, like, casting a wide net, telling everybody you're doing it. I don't think there's really any secrets in tech. I think people, like, hold those close to your chest, or they used to. I used to, and it just doesn't matter anymore. Like, a lot of it's on just raw execution and ability to deliver product. That really works well, and most people can't execute, and most people can't deliver a good product. So there really are no secrets. So, um, my view is to try to find somebody that deeply cares.

    13. NK

      Do you think it could be voice first?

    14. BA

      I'm, like, extremely bullish on voice models. I, I'm doing work on some of this work right now, and I think for both, like, the digital AI front... Like, it's funny, like, we're doing like the physical AI thing at Figure, where, like, we're trying to do everything a human can in the physical world. And, like, you know, I, I've also been working on, uh, something that is also helping out with the digital side of this. And, and all roads lead to full autonomy of AI agents, whether it's, like, through a robot or through, like, a digital system. And the u- the UI to that, in my mind, is, is, is voice. Like, really good voice. And we just don't have, like, really good voice today, but it's getting, it's getting better. Like, the, the lines are getting better over time, and it's, it's like the gap's, like, narrowing of where we need to go. I think we're, like, twelve months away from, like, really good voice that can make an experience like you're talking about work, but we need it. And if not, you don't have, like, a high tolerance for, like, a voice that you can't interrupt or makes mistakes. Like, you just, like... It's just not a- it's a, it's immediately a bad experience. But I think for your thing, we're like, I think we're- it's a twenty twenty-six event for you of getting voices, voice, like, voice models, like native AI, like, probably half-duplex models, like, extremely good. I think there could be a, a very significant voice aspect to this, and I also think about voice and generative AI, like rendering, as a, as a real differentiator to what's really happening today and, like, kind of like o- one point oh social, social, uh, social media networks. I would be starting to play around with a couple of these different avenues and starting to prototype them within a week.
I would want to see if you can use an off-the-shelf TTS for voice, uh, start hooking up, like, a, a mock network together and start basically parallel pathing a couple of these, like, user journeys of what this could look like. And I would say it'd be nice if you could check in with me in a week, and we can, like, start demoing these and checking and testing them out. The only way through this for you is, like, to be able to rapidly prototype very different type of ideas and feel the experience and what that looks like and trying to close the gap on what you think that experience should, should look like.
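The rapid-prototyping loop Brett describes (an off-the-shelf TTS wired into a mock network to walk one user journey) could be sketched roughly like this. This is a minimal illustration, not anything Figure or the speakers built; the `synthesize` function is a hypothetical stand-in for a vendor TTS call, not a real API:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def synthesize(text: str) -> bytes:
    """Hypothetical stand-in for an off-the-shelf TTS call; returns fake audio bytes."""
    return f"[audio:{text}]".encode()

def voice_feed(posts: list[Post]) -> list[bytes]:
    """Render each post in the mock network as a spoken clip, author first."""
    return [synthesize(f"{p.author} says: {p.text}") for p in posts]

# A tiny mock network to walk one user journey end to end.
feed = [Post("nikhil", "testing a voice-first feed"),
        Post("brett", "count me in as user number one")]
clips = voice_feed(feed)
```

In a real prototype you would swap `synthesize` for an actual TTS SDK, play the clips back, and feel out exactly the gaps Brett flags: latency, interruption handling, and whether the voice holds up over a whole feed.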

    15. NK

      Yeah, I'm gonna hit you up after this and- [chuckles] ... seek some help on this.

    16. BA

      I wanna be your first user, so I get, like, a really great user-

    17. NK

      Yeah

    18. BA

      ... and I can always say-

    19. NK

      Yeah

    20. BA

      ... I was the first user on your platform. [chuckles]

    21. NK

      [chuckles] No, I'd love that.

  23. 1:28:42 - 1:36:19

    What happens to jobs + society?

    1. NK

      Last question for you, Brett. Beyond this utopian future, when the humanoids are doing most of what humans can do today from a work lens or from even a home lens, what happens to the world? What happens to socialism? What happens to capitalism? What happens to labor? What happens to a job and getting a salary?

    2. BA

      I think there's a few things that happen. One is we are building autonomous, I feel like autonomous agents, like synthetic humans, whether digitally or physically. They will do everything behind a computer a human can, and they will also do everything in the physical world through a humanoid that you would want to go do. If, if that's true, which I think is very high likelihood in the future, y- the prices of goods and services are, will, will slowly s- start collapsing, basically asymptotically approaching zero. And you'll just have, like, an abundance of goods and services. Like, i- if you had, like, a robot farmer, like, all it would need is, like, the, the price of the land and then energy. And if energy is generated on-site with solar and stationary battery, then you basically have, like, the rental cost of the robot and land, and it'd just be like, it, like, that would be the only real cost. And I think the robot cost will, like, really dramatically reduce. You'll have this dramatic reduction here over the next, like, five years as we approach higher volume manufacturing, and then you'll have another step down once robots can fully build robots in the loop together, and you remove all human labor from the whole manufacturing process. So I think you have that period of time [chuckles] that happens. I think that you'll be able to afford anything. Any, any, anything in the world that humans can do today, you'll have synthetic, you know, agents that will be able to do that for you.

    3. NK

      What happens then?

    4. BA

      I think there's a part of society that lacks purpose for what they do, like myself. Like, what do I do if, like, a synthetic human can do everything I can do just as good? I, I spend my whole life, career doing stuff that I think is valuable, that I, you know, a, like, a computer can do it just as well. Uh, and then there's another aspect of us, like, freeing ourselves from all the stuff we do all day. Like, we have... You know, we work forty, fifty, sixty years of our life. We work at home all day long, doing laundry and dishes. I al- I feel like I'm cleaning up the kids' toys, like, complo- constantly. Like, just, like, [chuckles] like, you know, just like I feel like I'm working. Well, I wanna, I wanna spend time with my kids, but I'm like, kids are getting breakfast, and I'm like, feeding, getting their breakfast or getting stuff out of the dishwasher. I don't wanna be doing that, so I wanna be talking to them. You know, I'm driving them to school. Like, I just, like, I can't- I have to focus on the street. You know what I mean? Like, there's just, like, all this stuff we're doing that we're not really realizing is just like- and then I'm holding all these things in my head and planning, like, how to book something or go travel or send a wire. Like, I just want systems to go and do this for us. I think it's quite freeing. I think when one, one aspect we lose purpose, another aspect is, like, extremely freeing, and another aspect, it's like you'll be able to afford anything in the world. And the funny thing is, like, the weird thing is we're gonna see that this, in our lifetime now. I grew up on a farm, right? Like, you just- we're seeing things advance, like, you know... My, my grandfather didn't grow up around electricity in the early years. Like, it's just, like, we're seeing things advance radically. I think it's gonna be a net, net gain.... Uh, it's just gonna be radically different twenty years from now.

    5. NK

      One could argue that humans didn't have purpose to begin with. We just deceived ourselves into th- thinking we had purpose.

    6. BA

      Hundred percent. I mean, is it purpose, like working like eight, ten hours a day, every single day? You know what I mean? These are the things I, I don't know. I mean, it's really... I happen to, like, love what I do, so I feel [chuckles] I have purpose or, uh, and hope it, hope it's, hope it's purposeful, uh, some point in like, [chuckles] like, over time.

    7. NK

      Do you love what you do, or do you like the by-product of what you do?

    8. BA

      I love the by-product of what I do, which helps me get dopamine and a feeling for, like, what I do every day in the office. So I really love being able to ship great product out. Nothing makes me happier to see a robot working for five months on-- out in the real world. That's awesome. It's like, it's like-- But I need, I need, I want millions of that, you know what I mean? So, but my days are hard. My days, like, all the... I ha- I have, like, this, um, problem funnel. Like, every problem funnels right to me, and I'm working on problems all day. I have this, like, joke where I'm telling people when I get to the meeting, I'm like: "Tell me good news, I've had a bad day." And it's just like they only-- they never have good news 'cause, like, things that are going well, I don't spend time on. So... But I'm working on problem, fixing things. I'm th- I'm doing things at a CEO level that are needed to unblock the critical path of the company. So I just work on problems all day. I j- I work, like, s- sun up to sun down. I'm generally in the office every night. Uh, so, like, no, my days are very hard, exhausting. Uh, I'm tired and, um... No, they're like, they're, [chuckles] they're extremely difficult and not fun on the surface, but, like, the, the by-product of this is we get to create general-purpose humanoid robots here.

    9. NK

      Is that dopamine from a place of ego where nobody's building humanoids yet, and you're the first people doing it, so you get a dopamine hit? Tomorrow, if everybody were building a robot, it might not be the same.

    10. BA

      Yeah, I think it'd really suck not winning. [chuckles] Um, we wanna go build something great, we wanna go win, and I think it's really important. Like, we don't wanna lo- I don't wanna lose. Like, losing is like, is, uh, terrible. Like, we, we don't wanna make bad decisions. Like, there's all this, like, there's all this, like, thing in the, in the valley, Silicon Valley, about failing, and it's okay. It's like, uh, it's not okay. I don't wanna fail. Like, I d- I don't like that way of thinking. Like, we fail all the time, and we make mistakes all the time, but we don't wanna make those mistakes. I'd rather not ma-- not have failed and succeeded [chuckles] like, so I happen to make better decisions sometimes after I fail, but, like, that's not the goal. The goal is to go win. Um, I would say short term, yeah, it's great. It's great to go win, but, like, even now, like, this is a ten, twenty, thirty-year journey. There is-- Like, the winner will, will emerge ten years from now, five years from now. You know what I mean? Like, not, not now. So, like, w- whether we're, like, one or five or two right now, none of this matters. So, like, what matters is shipping a product at scale that can generate the data to increase intelligence and reduce cost. That's, that's who wins in my mind. Like, who's the first to a million robots in market? That's a big marker. That will help determine, like, the leader. So I think that for me, like, um, even though I think we've been successful to date, we do not feel in any way like we're winning or I've won here. I think we, we wanna go really win it, and that's gonna take five, ten more years of i-- beyond, of more extreme, like, seven-day [chuckles] a week work that we're doing. So, uh, but we're in it. We're in it for the long haul. We s- started this company knowing it's gonna be a multi-decade thing, and, uh, we're just, we're just fortunate to be here. 
And I think we're most fortunate-- I'm most fortunate as a founder being here... You know, like, my advice to entrepreneurs is always, like, "Always be within the right decade." And it was unclear when I started this company. I was like: I'm gonna go do this for, like, twenty, thirty years beyond, if, like, humanoids were gonna happen this fast, and it happens to be that it's happening this fast now, and it's a race now to go solve general-purpose robots. So I, I feel fortunate just to be in a place where we have over a billion of cash. We have great commercial customers. Robots are working with neural nets. Figure 03 is, I think, is the best humanoid in the world. It's great. We have a great team. Team is, like, by far and away, the best asset we have at the whole company. So we have all the ingredients now to, like, pull off what the- we started as a dream, and I th- I think we feel fortunate about that.

    11. NK

      Thank you, Brett. Thank you for doing this. This was fun. I'll, uh, be down in the Valley in November, and I'd love to come check them out in person.

    12. BA

      Come by and, like, get me a part of the network, social network here early, too.

    13. NK

      Yeah, a hundred percent. [upbeat music] We'll connect after this. Thank you, Brett.

    14. BA

      Yeah.

    15. NK

      Cheers.

    16. BA

      All right, bye.

    17. NK

      Bye. [upbeat music]

Episode duration: 1:36:19


Transcript of episode fL2wyVLX08o
