EVERY SPOKEN WORD
155 min read · 30,735 words
- 0:00 – 7:35
Introduction: Lex Fridman
- Andrew Huberman
(instrumental music) Welcome to the Huberman Lab Podcast, where we discuss science and science-based tools for everyday life. I'm Andrew Huberman, and I'm a professor of neurobiology and ophthalmology at Stanford School of Medicine. Today, I have the pleasure of introducing Dr. Lex Fridman as our guest on the Huberman Lab Podcast. Dr. Fridman is a researcher at MIT specializing in machine learning, artificial intelligence, and human-robot interactions. I must say that the conversation with Lex was, without question, one of the most fascinating conversations that I've ever had, not just in my career, but in my lifetime. I knew that Lex worked on these topics, and I think many of you are probably familiar with Lex and his interest in these topics from his incredible podcast, the Lex Fridman Podcast. If you're not already watching that podcast, please subscribe to it. It is absolutely fantastic. But in holding this conversation with Lex, I realized something far more important. He revealed to us a bit of his dream, his dream about humans and robots, about humans and machines, and about how those interactions can change the way that we perceive ourselves and that we interact with the world. We discussed relationships of all kinds, relationships with animals, relationships with friends, relationships with family, and romantic relationships. And we discussed relationships with machines, machines that move and machines that don't move, and machines that come to understand us in ways that we could never understand for ourselves, and how those machines can educate us about ourselves. Before this conversation, I had no concept of the ways in which machines could inform me or anyone about themselves. By the end, I was absolutely taken with the idea, and I'm still taken with the idea, that interactions with machines of a very particular kind, a kind that Lex understands and wants to bring to the world, can not only transform the self, but may very well transform humanity. 
So whether or not you're familiar with Dr. Lex Fridman or not, I'm certain you're going to learn a tremendous amount from him during the course of our discussion and that it will transform the way you think about yourself and about the world. Before we begin, I want to mention that this podcast is separate from my teaching and research roles at Stanford. It is, however, part of my desire and effort to bring zero-cost-to-consumer information about science and science-related tools to the general public. In keeping with that theme, I'd like to thank the sponsors of today's podcast. Our first sponsor is ROKA. ROKA makes sunglasses and eyeglasses that are of absolutely phenomenal quality. The company was founded by two all-American swimmers from Stanford, and everything about the sunglasses and eyeglasses they've designed had performance in mind. Now, I've spent a career working on the visual system and one of the fundamental issues that your visual system has to deal with is how to adjust what you see when it gets darker or brighter in your environment. With ROKA sunglasses and eyeglasses, whether or not it's dim in the room or outside, whether or not there's cloud cover, or whether or not you walk into a shadow, you can always see the world with absolute clarity. And that just tells me that they really understand the way that the visual system works, processes like habituation and attenuation. All these things that work at a real mechanistic level have been built into these glasses. In addition, the glasses are very lightweight. You don't even notice really that they're on your face, and the quality of the lenses is terrific. Now, the glasses were also designed so that you could use them not just while working or at dinner, et cetera, but while exercising. They don't fall off your face or slip off your face if you're sweating, and as I mentioned, they're extremely lightweight, so you can use 'em while running, you can use 'em while cycling, and so forth. 
Also, the aesthetic of ROKA glasses is terrific, unlike a lot of performance glasses out there which, frankly, make people look like cyborgs. These glasses look great. You can wear them out to dinner. You can wear them in, for essentially any occasion. If you'd like to try ROKA glasses, you can go to roka.com, that's R-O-K-A dot-com, and enter the code Huberman to save 20% off your first order. That's ROKA, R-O-K-A dot-com, and enter the code Huberman at checkout. Today's episode is also brought to us by InsideTracker. InsideTracker is a personalized nutrition platform that analyzes data from your blood and DNA to help you better understand your body and help you reach your health goals. I am a big believer in getting regular blood work done for the simple reason that many of the factors that impact our immediate and long-term health can only be assessed from a quality blood test. And now with the advent of quality DNA tests, we can also get insight into some of our genetic underpinnings of our current and long-term health. The problem with a lot of blood and DNA tests out there, however, is you get the data back and you don't know what to do with those data. You see that certain things are high or certain things are low, but you really don't know what the actionable items are, what to do with all that information. With InsideTracker, they make it very easy to act in the appropriate ways on the information that you get back from those blood and DNA tests, and that's through the use of their online platform. They have a really easy-to-use dashboard that tells you what sorts of things can bring the numbers for your metabolic factors, endocrine factors, et cetera, into the ranges that you want and need for immediate and long-term health. In fact, I know one individual, just by way of example, that was feeling good but decided to go with an InsideTracker test and discovered that they had high levels of what's called C-reactive protein. 
They would've never detected that otherwise. C-reactive protein is associated with a number of deleterious health conditions, some heart issues, eye issues, et cetera, and so they were able to take immediate action to try and resolve those CRP levels. And so with InsideTracker you get that sort of insight, and as I mentioned before, without a blood or DNA test, there's no way you're going to get that sort of insight until symptoms start to show up. If you'd like to try InsideTracker, you can go to insidetracker.com/huberman to get 25% off any of InsideTracker's plans. You just use the code Huberman at checkout. That's insidetracker.com/huberman to get 25% off any of InsideTracker's plans. Today's podcast is brought to us by Athletic Greens. Athletic Greens is an all-in-one vitamin-mineral-probiotic drink. I started taking Athletic Greens way back in 2012, and so I'm delighted that they're sponsoring the podcast. The reason I started taking Athletic Greens and the reason I still take Athletic Greens is that it covers all of my vitamin, mineral, probiotic bases. In fact, when people ask me, "What should I take?" I always suggest that the first supplement people take is Athletic Greens for the simple reason is that the things it contains covers your bases for metabolic health, endocrine health, and all sorts of other systems in the body. And the inclusion of probiotics are essential for a healthy gut microbiome. There are now tons of data showing that we have neurons in our gut, and keeping those neurons healthy requires that they are exposed to what are called the correct microbiota, little microorganisms that live in our gut and keep us healthy. And those neurons in turn help keep our brain healthy. They influence things like mood, our ability to focus, and many, many other factors related to health. With Athletic Greens, it's terrific because it also tastes really good.
I drink it once or twice a day, I mix mine with water, and I add a little lemon juice or sometimes a little bit of lime juice. If you want to try Athletic Greens, you can go to athleticgreens.com/huberman, and if you do that, you can claim their special offer. They're giving away five free travel packs, the little packs that make it easy to mix up Athletic Greens while you're on the road, and they'll give you a year supply of vitamin D3 and K2. Again, go to athleticgreens.com/huberman to claim that special offer. And now my conversation with Dr. Lex Fridman.
- Lex Fridman
We meet
- 7:35 – 26:46
What is Artificial Intelligence?
- Lex Fridman
again.
- Andrew Huberman
We meet again. Thanks so much for sitting down with me. I have a question that I think is on a lot of people's minds, or ought to be on a lot of people's minds, because we hear these terms a lot these days, but I think most people, including most scientists, and including me, don't know really what is artificial intelligence, and how is it different from things like machine learning and robotics? So, if you would be so kind as to explain to us what is artificial intelligence and what is machine learning?
- Lex Fridman
Well, I think that question is as complicated and as fascinating as the question of, what is intelligence? So, I think of artificial intelligence first as a big philosophical thing. Pa- Pamela McCorduck said, uh, AI was, uh, AI was the ancient wish to forge the gods, or was born as an ancient wish to forge the gods. So, I think at the big philosophical level, it's our longing to create other intelligent systems, perhaps systems more powerful than us. At the more narrow level, I think it's also a set of tools that are computational, mathematical tools to automate different tasks. And then also, it's our attempt to understand our own mind, so build systems that exhibit some intelligent behavior in order to understand what is intelligence in our own selves. So, all those things are true. Of course, what AI really means as a community, as a set of researchers and engineers, it's a set of tools, a set of, uh, computational techniques that allow you to solve various problems. The- there's a long history that, uh, approaches the problem from different perspectives. What's, uh, always been throughout one of the threads, one of the communities goes under the flag of machine learning, which is emphasizing in the AI space the- the task of learning. How do you make a machine that knows very little in the beginning follow some kind of process and learns to become better and better in- at a particular task? What's been most, uh, very effective in the recent, about 15 years is a set of techniques that fall under the flag of deep learning that utilize neural networks. What neural networks are, are these, uh, fascinating things inspired by the structure of the human brain, uh, very loosely, but they have, uh, it's a network of these little basic computational units called neurons, artificial neurons, and they have, uh, these architectures have an input and an output. They know nothing in the beginning, and they're tasked with learning something interesting. 
What that something interesting is usually involves a particular task. The- there's a lot of ways to talk about this and break this down. Like, one of them is how much human supervision is required to teach this thing? So, supervised learning, this broad category, is, uh, the- the neural network knows nothing in the beginning, and then it's given a bunch of examples of, uh, in computer vision, that would be examples of cats, dogs, cars, traffic signs, and then you're given the image and you're given the ground truth of what's in that image. And when you get a large database of such image examples where you know the truth, the, uh, the neural network is able to learn by example. That's called supervised learning. The question, there's a lot of fascinating questions within that, which is, how do you provide the truth? When you're given an image of a cat, how do you provide to the computer that this image contains a cat? Do you just say the entire image is a picture of a cat? Do you do what's very commonly been done, which is a bounding box? You have a very crude box around the cat's face saying this is a cat. Do you do semantic segmentation? Mind you, this is a 2D image of a cat, so it's not a ... The- the computer knows nothing about our three-dimensional world. It's just looking at a set of pixels. So, uh, semantic segmentation is drawing a nice, very crisp outline around the cat and saying that's a cat. That's really difficult to provide that truth, and the- one of the fundamental open questions in computer vision is, is that even a good representation of the truth? Now, there is another contrasting set of... ideas, there's a tension, they're overlapping, uh, what used to be called unsupervised learning, what's commonly now called self-supervised learning, which is trying to get less and less and less human supervision into the, into, uh, into the task.
So, self-supervised learning is, uh, more, uh, it's been very successful in the domain of, uh, language model, natural language processing, and now more and more it's being successful in computer vision tasks. And what's the idea there is let the machine, without any ground truth annotation, just look at pictures on the internet or look at texts on the internet and try to learn something, uh, generalizable about the ideas that are at the core of language or at the core of vision. And based on that, we humans at- at best like to call that common sense. So with this... We have this giant base of knowledge on top of which we build more sophisticated knowledge, but we have this kind of common sense knowledge. And so the idea with self-supervised learning is to build this common sense knowledge about what are the fundamental visual ideas that make up a cat and a dog and all those kinds of things without ever having human supervision. The- the dream there is the... (laughs) You just, you just let an AI system that's, uh, self-supervised run around the internet for a while, watch YouTube videos for millions and millions of hours, and without any supervision be primed and ready to actually learn with very few examples once the human is able to show up. With... We think of, uh, children in this way, human children, is your parents only give one or two examples-
- Andrew Huberman
Mm-hmm.
- Lex Fridman
... to teach a concept. The- the dream with self-supervised learning is that would be the same with- with, uh, machines, that they would, uh, watch millions of hours of, uh, YouTube videos and then come to a human and be able to understand when the human shows them this is a cat. Like, remember this is a cat. They will understand that a cat is not just a thing with pointy ears or a cat- cat is a thing that's orange or is furry. They'll- they'll see something more fundamental that we humans might not actually be able to introspect and understand. Like, if I asked you what makes a cat versus a dog, you wouldn't probably not be able to answer that, but if I showed you, brought you a cat and a dog, you'd be able to tell the difference. What are the ideas that your brain uses to make that difference? Uh, that's the whole dream with self-supervised learning is it would be able to learn that on its own, that set of common sense knowledge that it's able to tell the difference. And then there's, like, a lot of incredible uses of self-supervised learning, s- uh, very weirdly called self-play mechanism. That's the mechanism behind the- uh, the reinforcement learning successes of, uh, the systems that won at, uh, Go, at, uh, AlphaZero, uh, that won at chess.
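The supervised-learning setup described above, where a network that knows nothing learns from examples paired with ground truth, can be sketched in a few lines. This is a toy illustration under made-up assumptions: the data is two synthetic 2-D clusters standing in for "cat" and "dog" images, and the model is a single artificial neuron, not any actual vision system.

```python
import numpy as np

# Toy supervised learning: each example is a 2-D feature vector with a
# ground-truth label (0 = "cat", 1 = "dog"), the same image/label pairing
# described above, just with pixels replaced by two numbers.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (50, 2)),   # class 0 cluster
               rng.normal(+1, 0.3, (50, 2))])  # class 1 cluster
y = np.array([0] * 50 + [1] * 50)

# A single artificial neuron: a weighted sum of inputs pushed through a step.
w, b = np.zeros(2), 0.0
for _ in range(20):                      # repeated passes over the data
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        err = yi - pred                  # learn only from mistakes
        w += 0.1 * err * xi              # nudge weights toward the truth
        b += 0.1 * err

preds = (X @ w + b > 0).astype(int)
print((preds == y).mean())               # fraction of training labels matched
```

The neuron starts knowing nothing (all weights zero) and ends up separating the two classes, entirely because a human supplied the labels; the self-supervised dream in the conversation is to get most of the way there without that labeled ground truth.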
- Andrew Huberman
What? Oh, I see. Th- that play games.
- Lex Fridman
That play games.
- Andrew Huberman
Got it.
- Lex Fridman
So the idea of self-play, this probably applies, uh, to other domains than just games, is a system that just plays against itself. And this is fascinating in all kinds of domains, but, uh, it knows nothing in the beginning and the whole idea is it creates a bunch of mutations of itself and plays against those, uh, versions of itself, and the fascinating thing is when you play against systems that are a little bit better than you, you start to get better yourself. Like, learning, that's how learning happens. That's true for martial arts, uh, t- true in a lot of cases where you want to be interacting with- with, uh, systems that are just a little better than you, and then through this process of interacting with systems just a little better than you, you start following this process where everybody starts getting better and better and better and better until you are several orders of magnitude better than the world champion in chess, for example. And it's fascinating 'cause it's like a runaway system. O- one of the most terrifying and exciting things that, uh, David Silver, the creator of AlphaGo and AlphaZero, one of the leaders of the team, said, uh, to me, is, uh, they haven't found the ceiling for AlphaZero, meaning it could just arbitrarily keep improving. Now, in the realm of chess, that doesn't matter to us, that it's like it just ran away with the game of chess, like, it's like just so much better than humans. But the question is what... if you can create that in the realm that does have a- a bigger, deeper effect on human beings and societies, uh, that can be a terrifying process. To me, it's an exciting process if you supervise it correctly, if you inject, uh, if, uh, what's called, uh, value alignment. You, uh, you make sure that the goals that the AI is optimizing is aligned with human beings and human societies. 
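The self-play loop described here, clone yourself, mutate the clones, play against the versions that are a little better, keep the winner, can be caricatured in a few lines. This is a deliberate oversimplification: "skill" is just a number and a "match" is a comparison, nothing like the actual AlphaZero training procedure.

```python
import random

random.seed(0)

# Toy self-play: a "player" is a skill number, and a match is won by the
# higher skill. The system creates a bunch of mutations of itself, plays
# against them, and whoever wins becomes the new current self.
champion = 1.0
history = [champion]
for generation in range(100):
    # a bunch of mutated versions of the current self
    mutants = [champion + random.gauss(0, 0.1) for _ in range(10)]
    challenger = max(mutants)       # the slightly-better version, if any
    if challenger > champion:
        champion = challenger       # losers go to the graveyard
    history.append(champion)

print(champion)                     # skill only ever ratchets upward
```

The runaway quality Lex mentions is visible even in this caricature: skill is monotonically non-decreasing, and nothing in the loop defines a ceiling.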
There's a lot of fascinating things to talk about within the, uh, specifics of neural networks and all- all the problems that people are- are working on, but I would say the really big exciting one is self-supervised learning.
- Andrew Huberman
Mm-hmm.
- Lex Fridman
We're trying to get less and less human supervision, uh, uh, l- less and less human sup- supervision of neural networks. And also, just to comment and I'll shut up.
- Andrew Huberman
No, please keep going. I'm- I'm learning. Uh, I have questions, but I'm learning, so please keep going.
- Lex Fridman
So that... To me, what's exciting is not the theory, it's always the application. One of the most exciting applications of artificial intelligence, specifically neural networks and machine learning, is Tesla autopilot. So, these are systems that are working in the real world. This isn't an academic exercise. This is human lives at stake. This is safety critical. Uh, I-
- Andrew Huberman
These are automated vehicles. Auto- autonomous vehicles.
- Lex Fridman
Semi-autonomous. We wanna be... (laughs)
- Andrew Huberman
Okay.
- Lex Fridman
We- we've gone through wars on these topics. Uh-
- Andrew Huberman
Semi-autonomous vehicles.
- Lex Fridman
Semi-autonomous. So, w-... even though it's called, uh, FSD, f- full self-driving, it is currently not fully autonomous, meaning human supervision is required. So, a human is tasked with overseeing the systems. In fact, liability-wise, the human is always responsible. This is a human factor psychology question, which is fascinating. I'm fascinated by the, the, the whole space, which is a whole nother space, of human-robot interaction, when AI systems and humans work together to accomplish task. That dance, to me, is, uh, is one of the smaller communities, but I think it will be one of the most important open problems once they're solved, is how do humans and robots dance together. To me, semi-autonomous driving is one of those spaces. So, for, uh, for Elon, for example, he doesn't see it that way. He sees, uh, f- semi-autonomous driving as a stepping stone towards fully autonomous driving. Like, humans and robots can't dance well together. Like humans and humans dance and, uh, robots and robots dance. Like, we need to s- t- this is an engineering problem, we need to design a perfect robot that solves this problem. To me, forever, maybe this is not the case with driving, but the world is going to be full of problems where it's always humans and robots have to interact, because I think robots will always be flawed, just like humans are going to be flawed, are flawed, and that's what makes life beautiful, that they're flawed. That's where learning happens, at the edge of your capabilities. So, you always have to figure out how can flawed robots and flawed humans interact together such that they, uh, like, the, the sum is bigger than the whole, as opposed to focusing on just building the perfect robot.
- Andrew Huberman
Mm-hmm.
- Lex Fridman
So, th- so that's one of the most exciting applications, I would say, of artificial intelligence to me, is autonomous driving and semi-autonomous driving, and that's a really good example of machine learning, because those systems are constantly learning, and, uh, there's a, there's a process there that maybe I can comment on. Uh, the, Andrej Karpathy, who's the head of Autopilot, calls it the data engine, and thi- this process applies for a lot of machine learning, which is you build a system that's pretty good at doing stuff, you send it i- you send it out into the real world, it starts doing the stuff, and then it runs into what are called edge cases, like failure cases, where it screws up. You know, we do this as kids, that, you know, y- you have...
- Andrew Huberman
We do this as adults. (laughs)
- Lex Fridman
We do this as adults. (laughs) Exactly. But we learn really quickly. But th- the whole point, and this is the fascinating thing about driving, is you realize there's millions of edge cases. Uh, there's just, like, weird situations that you did not expect. And so, the data engine process is you collect those edge cases and then you go back to the drawing board and learn from them. And so you have to create this data pipeline where all these cars, hundreds of thousands of cars as you're driving around, and something weird happens. And so whenever this weird detector fires, it's another important concept, uh, y- y- that piece of data goes back, uh, to the mothership for the, for the training, for the retraining of the system, and through this data engine process, it keeps improving and getting better and better and better and better. So basically, you send out a pretty clever AI systems out into the world and let it find the edge cases. Let it screw up just enough to figure out where the edge cases are and then go back and learn from them and then send out that new version and keep updating that version.
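The data-engine loop described here can be sketched as a simple cycle: deploy, let the "weird detector" flag edge cases, send them back for labeling, retrain, redeploy. Everything in this sketch is an illustrative stand-in (`weird_detector`, `retrain`, and integer "cases" are invented for the example; none of it is Tesla's actual pipeline).

```python
import random

random.seed(0)

# Toy "data engine": a deployed system handles a stream of inputs, a weird
# detector fires on cases it has never seen, those go back to the mothership
# for labeling, and the model is periodically retrained on the new data.
known_cases = set()          # what the current model has been trained on

def weird_detector(case):
    # fires on anything outside the model's experience (an edge case)
    return case not in known_cases

def retrain(labeled_edge_cases):
    # stand-in for retraining: fold newly labeled data into the model
    known_cases.update(labeled_edge_cases)

stream = [random.randint(0, 20) for _ in range(200)]  # the "real world"
labeling_queue = []
for case in stream:
    if weird_detector(case):
        labeling_queue.append(case)      # ship the edge case back
    if len(labeling_queue) >= 5:         # periodic retraining batch
        retrain(labeling_queue)
        labeling_queue.clear()

print(len(known_cases))                  # edge cases learned so far
```

Each pass around the loop shrinks the set of situations that still count as weird, which is the "better and better and better" dynamic described above.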
- Andrew Huberman
Is the updating done by humans?
- Lex Fridman
The annotation is done by humans.
- Andrew Huberman
Okay.
- Lex Fridman
The, so you have to, th- the weird examples come back, the edge cases, and you have to label what actually happened in there. There's also some mechanisms for automatic, automatically labeling, but mostly I think you always have to rely on humans to improve, to understand what's happening in the weird, weird cases. And then there's a lot of debate, and this is the other thing, what is artificial intelligence? Which is (laughs) a bunch of smart people having very different opinions about what is intelligence. So, AI, AI is basically a community of people who don't d- agree on anything.
- Andrew Huberman
It seems to be the case. I'm, you know, I, and first of all, thi- this is a beautiful description of terms that I've heard many times, uh, among my colleagues at Stanford, at meetings, in the, in the outside world, and, um, there's so many fascinating things. I have so many questions. But I do wanna ask one question about the culture of AI, because it does seem to be a community where, at least as an outsider, where it seems like there's very little consensus about what the terms and the operational definitions even mean, um, and there seems to be a lot of splitting happening now of not just supervised and unsup- supervised learning, but these sort of, uh, intermediate, uh, conditions where, uh, machines are autonomous but then go back for more instruction, like kids go home from college during the summer and get a little, you know, mom still feeds them, then eventually they leave the, the nest kind of thing. Um, i- is there something in particular about engineers or about people in this, uh, realm of engineering that you think l- lends itself to disagreement?
- Lex Fridman
Yeah, I think, uh ... So, so first of all, the more specific you get, the less disagreement there is. So, there's a lot of disagreement about what is artificial intelligence, but there's less disagreement about what is machine learning and even less when you talk about active learning or machine teaching or, um, self-supervised learning. A- and then when you get into, like, NLP language models or transformers, when you get into specific neural network architectures, there's less and less and less disagreement about those terms. So, you might be hearing the disagreement from the high-level terms, and that has to do with the fact that engineering, especially when you're talking about intelligent systems, is, is, uh, a little bit of an art.... and a, a science. So the art part is, uh, is the thing that creates disagreements, because then you start having disagreements about, um, how easy or difficult a particular problem is. For example, a lot of people disagreed with Elon how difficult the problem of autonomous driving is, and, and so, but nobody knows. So, there's a lot of disagreement about what are the limits of these techniques, and through that, the terminology also contains within it the, um, the disagreements. But overall, I think it's also a young science. That also has to do-
- Andrew Huberman
Mm-hmm.
- 26:46 – 32:21
Machine & Human Learning
- Andrew Huberman
I, I have a couple questions, but one thing that I'm sort of fixated on that I find incredibly interesting is this example, uh, you gave of playing a game with a mutated version of yourself-
- Lex Fridman
Mm-hmm.
- Andrew Huberman
... as a competitor.
- Lex Fridman
Yeah.
- Andrew Huberman
I find that incredibly interesting as a, uh, kind of a parallel or a mirror for what happens when we try and learn as humans, which is we generate repetitions of whatever it is we're trying to learn, and we make errors. Occasionally we succeed. Um, uh, in a simple example, for instance, of trying to throw bull's-eyes on a dart board.
- Lex Fridman
Yeah.
- Andrew Huberman
I'm going to have errors, errors, errors. I'll probably miss the dart board, and maybe occasionally hit a bull's-eye, and I don't know exactly what I just did, right? But then let's say I was playing darts against a version of myself where my, I was wearing a visual prism, like my visual, I had a visual defect. You learn certain things in that mode as well. You're saying that a machine can sort of mutate itself. Does the mutation always cause a deficiency that it needs to overcome? Because mutations in biology sometimes give us super powers, right? Occasionally, you'll get somebody who has better than 20/20 vision and they can see better than 99.9% of people out there.
- Lex Fridman
Mm-hmm.
- Andrew Huberman
So, when you talk about a machine playing a game against a mutated version of itself, is the mutation always a, what we call a negative mutation? Uh, or a, or an adaptive or a maladaptive mutation?
- Lex Fridman
No, y- you don't know until you get y- uh, so you mutate first and then figure out, and they compete against each other, and-
- Andrew Huberman
So you're evolving, you're, the machine gets to evolve itself in real time?
- Lex Fridman
Yeah. And it, I, I think of it, which would be exciting if you could actually do with humans, it's not just... So, usually you freeze a version of the system, so really you take an, uh, Andrew of yesterday and you make 10 clones of them, and then maybe you mutate, maybe not, and then you do a bunch of competitions of the Andrew of today. Like, you fight to the death, and wins last. (laughs) So I love that idea of like creating-
- Andrew Huberman
That's funny.
- Lex Fridman
... a bunch of clones of myself from like, from each of the day from p- the past year, and just seeing wh- who's going to be better at like podcasting or science or picking up chicks at a bar, or, uh, I don't know, or competing in jujitsu. That's one way to do it. I think a lot of Lexs would have to die for that process. But that's essentially what happens is, in reinforcement learning, through the self-playing mechanisms, it's a graveyard of systems that didn't do that well.
- Andrew Huberman
I think so.
- Lex Fridman
And th- the, the surviving, the, the good ones survive.
- Andrew Huberman
D- do you think that, uh, I mean, Darwin's theory of evolution, uh, and m- might have worked in some sense in this way, but at the population level? I mean, you get a bunch of birds with different shaped beaks and some birds have the shaped beak that allows them to get the seeds. I mean, it's a trivial, trivially simple example of Darwinian evolution, but I think it, it, it's, it's correct if not, uh, even though it's not exhaustive. Is that what you're referring to? You essentially, that e- normally this is done between members of a different species, lots of different members of species have different traits, and some get selected for, but you could actually create multiple versions of yourself with different traits.
- Lex Fridman
So with, I should probably have said this, but, uh, perhaps it's implied, but with machine learning or with reinforcement learning through these processes, one of the big requirements is to have an objective function, a loss function, a utility function. Those are all different terms for the same thing, is there's a, like an equation that says what's good. And, and then you're trying to optimize that equation. So there's a clear goal.... for these systems. Like-
- Andrew Huberman
Because it's a game, like with chess, there's a, there's a goal.
- Lex Fridman
But for anything, anything you want machine learning to solve, there needs to be an objective function, in machine learning, it's usually called loss function, that you're optimizing. The interesting thing about evolution, c- complicated of course, but the goal also seems to be evolving, like it's a, I guess, adaptation to the environment is the goal, but it's unclear that you can convert that always. I- i- it's a, like survival of the fittest, it's unclear what the fittest is. In machine learning, the starting point, and this is like what human ingenuity provides, is that fitness function of what's good and what's bad. Which, it, it, it lets you know which of the systems is going to win, so you need to have a equation like that. One of the fascinating things about humans is we figure out objective functions for oursel- like, we're, um, it's the meaning of life, like, "Why the hell are we here?" And, uh, a machine currently has to have, uh, a hard-coded statement about why.
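The "objective function" idea described here, an equation that says what's good plus a procedure that makes the score better, has a smallest possible runnable form. The hard-coded goal in this sketch ("get x close to 3") is deliberately trivial, chosen only to make the mechanics visible.

```python
# The loss function is the hard-coded statement of what's good: zero is
# perfect, bigger is worse. Gradient descent does the optimizing.

def loss(x):
    return (x - 3.0) ** 2        # the equation that says what's good

def grad(x):
    return 2.0 * (x - 3.0)       # derivative of the loss

x = 0.0                          # the system knows nothing in the beginning
for _ in range(100):
    x -= 0.1 * grad(x)           # step downhill on the loss surface

print(round(x, 4))               # prints 3.0
```

Everything the optimizer "wants" is contained in `loss`; change that one equation and the same machinery pursues a different goal, which is why value alignment, getting that equation right, matters so much in the earlier part of the conversation.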
- Andrew Huberman
It has to have a meaning of-
- Lex Fridman
Yeah.
- Andrew Huberman
... artificial intelligence l- based life.
- Lex Fridman
Right. It can't... So, there are a lot of interesting explorations about that function being more about curiosity, about learning new things, and all that kind of stuff, but it's still hard-coded. If you want a machine to be good at stuff, it has to be given a very clear statement of what "good at stuff" means. That's one of the challenges of artificial intelligence: in order to solve a problem, you have to formalize it. You have to provide the full sensory information, be very clear about what data is being collected, and also be clear about the objective function: what is the goal that you're trying to reach? And that's a very difficult thing for artificial intelligence.
- 32:21 – 36:55
Curiosity
- AHAndrew Huberman
I love that you mentioned curiosity. I'm sure this definition falls short in many ways, but I define curiosity as a strong interest in knowing something, without an attachment to the outcome. It could be a random search, but there's not really an emotional attachment. It's just a desire to discover and unveil what's there, without hoping there's a gold coin under a rock; you're just looking under rocks. Is that more or less how it works within machine learning? It sounds like there are elements of reward prediction, and the machine has to know when it's done the right thing. Can you make machines that are curious, or are the sorts of machines you're describing curious by design?
- LFLex Fridman
Yeah. Curiosity is a kind of symptom, not the goal. What happens is, one of the big trade-offs in reinforcement learning is exploration versus exploitation. When you know very little, it pays off to explore a lot, even suboptimally, even along trajectories that seem like they're not going to lead anywhere. That's called exploration. The smarter and smarter you get, the more emphasis you put on exploitation, meaning you take the best solution, you take the best path. Now, through that process, the exploration can look like curiosity to us humans, but it's really just trying to get out of the local optima of the thing it's already discovered. From an AI perspective, it's always looking to optimize the objective function. And we could talk about this a lot more, but in terms of the tools of machine learning today, it derives no pleasure from just the curiosity of a discovery, that moment.
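The exploration-versus-exploitation trade-off Lex outlines is often implemented with a decaying epsilon-greedy rule; here is a minimal two-armed-bandit sketch, with all numbers chosen purely for illustration:

```python
import random

# Minimal sketch of exploration vs. exploitation on a two-armed bandit:
# explore a lot early, then exploit more as the reward estimates improve.
def run_bandit(true_means, steps=5000, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_means)    # pulls per arm
    values = [0.0] * len(true_means)  # running estimate of each arm's reward
    for t in range(steps):
        eps = 1.0 / (1 + t * 0.01)    # decaying exploration rate
        if rng.random() < eps:
            arm = rng.randrange(len(true_means))  # explore: random arm
        else:
            arm = values.index(max(values))       # exploit: best estimate
        reward = rng.gauss(true_means[arm], 0.1)
        counts[arm] += 1
        # Incremental mean update of the chosen arm's value estimate.
        values[arm] += (reward - values[arm]) / counts[arm]
    return values, counts

values, counts = run_bandit([0.2, 0.8])
print(values.index(max(values)))  # the agent settles on the better arm (1)
```

What looks like "curiosity" in this agent is just the `eps` branch: random pulls that protect it from locking onto a locally good but globally worse arm.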
- AHAndrew Huberman
So, there's no dopamine for a machine?
- LFLex Fridman
There's no dopamine.
- AHAndrew Huberman
There's no reward-system chemical, or, I guess, electronic reward system?
- LFLex Fridman
That said, if you look at the machine learning and reinforcement learning literature, they will use terms like dopamine; DeepMind will. We're constantly trying to use the human brain to inspire totally new solutions to these problems. So they'll ask, "How does dopamine function in the human brain, and how can that lead to more interesting ways to discover optimal solutions?" But ultimately, currently, there has to be a formal objective function. Now, you could argue that humans also have a set of objective functions we're trying to optimize; we're just not able to introspect them. Like-
- AHAndrew Huberman
We don't ... Yeah, we don't actually know what we're looking for and seeking and doing.
- LFLex Fridman
Well, like Lisa Feldman Barrett, who you've spoken with, at least on Instagram. I hope you get-
- AHAndrew Huberman
I met her through you, yeah.
- LFLex Fridman
Yeah. I hope you-
- AHAndrew Huberman
Yeah.
- LFLex Fridman
... actually have her on this podcast. That would be fascinating.
- AHAndrew Huberman
She's terrific.
- LFLex Fridman
So, she has a very... It has to do with homeostasis. Basically, there's a very dumb objective function that the brain is trying to optimize, like keeping body temperature the same. There's a very dumb kind of optimization function happening. And then what we humans do with our fancy consciousness and cognitive abilities is tell stories to ourselves, so we can have nice podcasts, but really it's the brain trying to maintain a healthy state, I guess. That's fascinating. I also see the human brain, and I hope artificial intelligence systems, as not just systems that solve problems or optimize a goal, but as storytellers. I think there's a power to telling stories. We tell stories to each other; that's what communication is. When you're alone, that's when you solve problems; that's when it makes sense to talk about solving problems. But when you're in a community, the capability to communicate, to tell stories, to share ideas in such a way that those ideas are stable over a long period of time, that's being a charismatic storyteller. And humans are very good at this. I would argue that's why we are who we are: we're great storytellers. And I hope AI will also become that. So it's not just about being able to solve problems with a clear objective function; it's being able, afterwards, to make up a way better story about why you did something, or why you failed.
- 36:55 – 40:48
Story Telling Robots
- AHAndrew Huberman
So, you think that robots and/or machines of some sort are going to start telling humans stories?
- LFLex Fridman
Well, definitely. The technical field for that is called explainable AI. Explainable artificial intelligence is trying to figure out how you get an AI system to explain to us humans why the hell it failed, or why it succeeded. There are a lot of different versions of this, like getting the system to visualize how it understands the world. That's a really difficult problem, especially with neural networks, which are famously opaque: in many cases we don't understand why a particular neural network does what it does so well. And to try to figure out where it's going to fail, that requires the AI to explain itself. There's a huge amount of money in this, especially from government funding and so on, because if you want to deploy AI systems in the real world, we humans at least want to be able to ask, "Why the hell did you do that?" Or, in a dark way, "Why did you just kill that person?" If a car ran over a person, we want to understand why that happened. Now again, we're sometimes very unfair to AI systems, because we humans often can't explain why very well either. But that's the field of explainable AI, and people are very interested in it because we rely more and more on AI systems, like the Twitter recommender system. That algorithm, which I would say is impacting elections, perhaps starting wars or at least military conflict: we want to ask that algorithm, first of all, "Do you know what the hell you're doing? Do you understand the society-level effects you're having, and can you explain the possible other trajectories?" We would have that kind of conversation with a human, and we want to be able to do that with an AI. And on a personal level, I think it would be nice to talk to AI systems about stupid stuff. Like robots, when they fail, to, um...
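One simple family of explainable-AI techniques attributes a model's decision to its inputs by perturbing them. This occlusion-style sketch uses a made-up "black box" and made-up feature names, just to show the shape of the idea:

```python
# Toy sketch of occlusion-style attribution, one simple explainable-AI idea:
# measure how much the model's score changes when each feature is removed.
def model(features):
    """A stand-in 'black box': a fixed weighted sum. Real targets are
    neural networks whose internals we cannot read off directly."""
    weights = {"speed": 0.7, "distance": -0.5, "braking": 0.2}
    return sum(weights[k] * v for k, v in features.items())

def explain(features):
    """Attribute the output to each feature by occluding (zeroing) it."""
    base = model(features)
    attributions = {}
    for name in features:
        occluded = dict(features, **{name: 0.0})
        attributions[name] = base - model(occluded)
    return attributions

obs = {"speed": 1.0, "distance": 0.5, "braking": 0.0}
attr = explain(obs)
print(max(attr, key=attr.get))  # "speed" contributed most to this decision
```

The attribution dictionary is the raw material for the story: "I did that mostly because of speed" is a sentence a system could build from it, which is a long way from the richer narratives discussed here.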
- AHAndrew Huberman
"Why'd you fall down the stairs?"
- LFLex Fridman
Yeah, but not as an engineering question. Almost as an endearing question. (laughs) If I fell and you and I were hanging out, I don't think you'd need an explanation of exactly what the dynamics were, what the underactuated-system problem was here...
- AHAndrew Huberman
Right.
- LFLex Fridman
... what the texture of the floor was, or what the map was-
- AHAndrew Huberman
No, I want to know what you're thinking.
- LFLex Fridman
That, or you might joke, "You're drunk again, go home," or something. There could be humor in it. That's an opportunity. Storytelling isn't just explanation of what happened; it's something that makes people laugh, makes people fall in love, makes people dream, and makes them understand things the way poetry makes people understand things, as opposed to a rigorous log of where every sensor was, where every actuator was.
- AHAndrew Huberman
I mean, I find this incredible because one of the hallmarks of severe autism spectrum disorders is a report of experience from the autistic person that is very much a catalog of action steps. It's like, "How do you feel today?" And they'll say, "Well, I got up and I did this, and then I did this, and I did this." It's not at all the way that a person who doesn't have autism spectrum disorder would respond. And the way you describe these machines has so much humanism, so much of a human and biological element, and yet I realize that we are talking about machines.
- 40:48 – 44:30
What Defines a Robot?
- AHAndrew Huberman
I want to make sure that I understand if there's a distinction between a machine that learns, a machine with artificial intelligence, and a robot. At what point does a machine become a robot? If I have a ballpoint pen, I wouldn't call that a robot. But if my ballpoint pen can come to me when I move to the opposite side of the table, if it moves by whatever mechanism, at that point does it become a robot?
- LFLex Fridman
Okay, there are a million ways to explore this question; it's a fascinating one. First of all, there's the question of what is life. How do you know something is a living form and not?
- AHAndrew Huberman
Mm-hmm.
- LFLex Fridman
And it's similar to the question of when a cold computational system becomes a... Well, we're already loading these words with a lot of meaning.
- AHAndrew Huberman
Sure.
- LFLex Fridman
Robot and machine. So, one: I think movement is important, but that's kind of a boring idea, that a robot is just a machine that's able to act in the world. Artificial intelligence could be both the thinking thing, which I think is what machine learning is, and also the acting thing, which is what we usually think about with robots. So robots are things that have a perception system able to take in the world, however you define the world; that are able to think and learn and do whatever the hell they do inside; and that then act on the world. That's the difference between an AI system or a machine and a robot: a robot is something that's able to perceive the world and act in the world.
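The perceive-think-act definition Lex gives maps onto the classic sense-plan-act loop in robotics. Here is a deliberately trivial sketch, with a thermostat standing in as the "robot"; the whole scenario is invented for illustration:

```python
# Minimal sketch of the definition: a robot is an entity that perceives
# a world, thinks, and then acts back on that world, in a closed loop.
class Thermostat:
    """A trivially simple 'robot': senses temperature, drives a heater."""
    def __init__(self, target):
        self.target = target

    def perceive(self, world):
        return world["temperature"]  # take the world in

    def think(self, temperature):
        return "heat" if temperature < self.target else "idle"  # decide

    def act(self, world, action):
        if action == "heat":
            world["temperature"] += 1.0  # act back on the world

def run(world, robot, steps=10):
    for _ in range(steps):
        robot.act(world, robot.think(robot.perceive(world)))
    return world

world = run({"temperature": 15.0}, Thermostat(target=20.0))
print(world["temperature"])  # warmed up to the 20.0 setpoint
```

By this definition the "world" need not be physical: swap the temperature dictionary for a mailbox or a filesystem and the same loop describes the digital robots discussed next.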
- AHAndrew Huberman
So, it could be through language or sound, or it could be through movement, or both?
- LFLex Fridman
Yeah. And I think it could also be in the digital space, as long as there's an aspect of an entity that's inside the machine, and a world that's outside the machine, and there's a sense in which the machine is sensing that world and acting in it.
- AHAndrew Huberman
So, for instance, there could be a version of a robot, according to the definition that I think you're providing, where I go to sleep at night and this robot goes and forages for information that it thinks I want to see and loads it onto my desktop in the morning. There was no movement of that machine, there was no language, but it essentially has movement in cyberspace.
- LFLex Fridman
Yeah. There's a distinction that I think is important, in that there's an element of it being an entity, whether in the digital or the physical space. When you have something like Alexa in your home, most of the speech recognition, most of what Alexa is doing, is constantly being sent back to the mothership. When Alexa is there on its own, interacting with the world, that, to me, is a robot. When it's simply a finger of the mothership, then Alexa is not a robot; it's just an interaction device, and maybe the main Amazon Alexa AI, the big system, is the robot. That's important because there's some element for us humans where we want there to be an entity, whether in the digital or the physical space. That's where ideas of consciousness come in, and all those things onto which we project our understanding of what it means to be a being.
- AHAndrew Huberman
Mm-hmm.
- LFLex Fridman
And so, to take that further, when does a machine become a robot?
- 44:30 – 47:37
Magic & Surprise
- LFLex Fridman
I think there's a special moment in a person's life, in a robot's life, where it surprises you. I think surprise is a really powerful thing: you know how the thing works, and yet it surprises you. That's a magical moment for us humans. Whether it's a chess-playing program that does something you haven't seen before, that makes people smile, like, "Huh." Those moments happened with AlphaZero for the first time in chess, where grandmasters were really surprised by a move. They didn't understand the move, and then they studied and studied, and then they understood it. But that's a moment of surprise for grandmasters in chess. I find that moment of surprise really powerful, really magical, in just everyday life.
- AHAndrew Huberman
Because it supersedes the human brain in that moment?
- LFLex Fridman
Not supersedes, as in outperforms, but surprises you in a positive sense. Like, "I didn't think you could do that. I didn't think you had that in you." And I think that moment is a big transition for a robot: from being a servant that accomplishes a particular task with some level of accuracy, with some rate of failure, to an entity, a being that's struggling just like you are in this world. That's a really important moment, and I think you're not going to find many people in the AI community who talk like I just did. I'm not speaking like some philosopher or some hippie; I'm speaking from a purely engineering perspective. I think it's really important for robots to become entities, and to explore that as a real engineering problem. Whereas everybody in the robotics community treats robots as systems: they don't even call them "he" or "she"; they try to avoid giving them names. They really want to see a robot as a servant that's trying to accomplish a task. To me, and I don't think I'm just romanticizing the notion, it's a being. Currently perhaps a dumb being, but in the long arc of history, humans are pretty dumb beings too.
- AHAndrew Huberman
I, I would agree with that statement.
- LFLex Fridman
(laughs) So, I tend to really want to explore this, treating robots as entities. And anthropomorphization, which is the act of looking at an inanimate object and projecting life-like features onto it, robotics generally sees as a negative. I see it as a superpower. We need to use that.
- AHAndrew Huberman
Well, I'm struck by how that really grabs onto the relationship between human and machine, or human and robot.
- LFLex Fridman
Yeah.
- AHAndrew Huberman
So, I guess the simple question is,
- 47:37 – 49:35
How Robots Change Us
- AHAndrew Huberman
and I think you've already told us the answer, but does interacting with a robot change you? In other words, do we develop relationships with robots?
- LFLex Fridman
Yeah, I definitely think so. (laughs) I think the moment you see robots or AI systems as more than just servants, as entities, they begin to change you, just like good friends do, just like relationships, just like other humans. For that, the interaction has to have certain aspects, like the robot's ability to say no, to have its own sense of identity, to have its own set of goals, so that it's not constantly serving you, but instead trying to understand the world and do that dance of understanding through communication with you. So I definitely think there's a... I mean, I have a lot of thoughts about this, as you may know. That's at the core of my lifelong dream, actually, of what I want to do. I believe that most people have a notion of loneliness in them that we haven't explored. And I see AI systems as helping us explore that, so that we can become better humans, better people towards each other. So I think that connection between human and AI, human and robot, is not only possible but will help us understand ourselves in ways that are several orders of magnitude deeper than we ever could have imagined. I tend to believe... well, I have very wild levels of belief in terms of how impactful that would be. Right?
- 49:35 – 1:02:29
Relationships Defined
- AHAndrew Huberman
So, when I think about human relationships, I don't always break them down into variables, but we could explore a few of those variables and see how they map to human/robot relationships. One is just time. If you spend zero time with another person, in cyberspace or on the phone or in person, you essentially have no relationship to them. If you spend a lot of time, you have a relationship. This is obvious, but one variable would be time: how much time you spend with the other entity, robot or human. Another would be wins and successes: you enjoy successes together. I'll give an absolutely trivial example of this in a moment. Another would be failures, when you struggle with somebody, whether or not the struggle is between the two of you, when you disagree. I was really struck by the fact that you mentioned robots saying no. I've never thought about a robot saying no to me. But there it is.
- LFLex Fridman
I look forward to you being one of the first-
- AHAndrew Huberman
(laughs)
- LFLex Fridman
... people I send this robot to.
- AHAndrew Huberman
So do I. So, there's struggle. When you struggle with somebody, you grow closer. Sometimes the struggles are imposed between those two people, so-called trauma bonding, as they call it in the psychology literature and pop-psychology literature. In any case, I could imagine: time, successes together, struggle together, and then just peaceful time, hanging out at home, watching movies, waking up near one another. Here we're breaking down the elements of relationships of any kind. Do you think these elements apply to robot/human relationships? If so, then I could see how, if the robot is its own entity and has some autonomy in terms of how it reacts to you, it's not just there to serve you. It's not just a servant; it actually has opinions and can tell you when maybe your thinking or your actions are flawed.
- LFLex Fridman
Mm-hmm. It can also leave.
- AHAndrew Huberman
It could also leave. So, I've never conceptualized robot/human interactions this way. Tell me more about how this might look. Are we thinking about a human-appearing robot? I know you and I have both had intense relationships with animals; we have separate dogs, obviously. This sounds a lot like human/animal interaction. So, what is the ideal human/robot relationship?
- LFLex Fridman
So, there's a lot to be said here, but you actually pinpointed one of the big first steps, which is this idea of time. And it's a huge limitation in the machine learning community currently. Now we're back to the actual details: lifelong learning is a problem space that focuses on how AI systems can learn over a long period of time. What most current machine learning systems are not able to do is all of the things you've listed under time: the successes, the failures, or just chilling together watching movies. AI systems are not able to do that, and those are all the beautiful, magical moments that I believe our days are filled with. They're not able to keep track of those together with you. They're-
- AHAndrew Huberman
Because they can't move with you and be with you.
- LFLex Fridman
No, no, no. Literally, we don't have the techniques to do the learning, the actual learning of retaining those moments. Current machine learning systems are really focused on understanding the world in the following way: it's more like a perception system. Looking around and understanding what's in the scene: that there's a bunch of people sitting down, that there are cameras and microphones, that there's a table. But the fact that we shared this moment of talking today, and remembering that this moment happened next time you're doing something, we don't know how to do that, technique-wise. This is what I'm hoping to innovate on, as I think it's a very, very important component of what it means to create a deep relationship: that sharing of moments together.
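The missing "remembering shared moments" capability Lex describes can at least be caricatured as an episodic memory that persists across sessions. Everything in this sketch is hypothetical; it is a data structure, not the learning technique he says does not yet exist:

```python
import json
import time

# Caricature of the missing capability: an episodic memory that keeps
# shared moments across sessions, instead of re-perceiving the world
# from scratch every time. Purely illustrative.
class EpisodicMemory:
    def __init__(self):
        self.episodes = []

    def remember(self, description, emotion, timestamp=None):
        """Record one shared moment with how it felt."""
        self.episodes.append({
            "when": timestamp if timestamp is not None else time.time(),
            "what": description,
            "emotion": emotion,
        })

    def recall(self, emotion=None):
        """Retrieve past shared moments, optionally filtered by feeling."""
        return [e for e in self.episodes
                if emotion is None or e["emotion"] == emotion]

    def save(self, path):
        """Persist across sessions, so the next conversation can build
        on this one rather than starting over."""
        with open(path, "w") as f:
            json.dump(self.episodes, f)

memory = EpisodicMemory()
memory.remember("late-night talk about objective functions", "joy")
memory.remember("robot fell down the stairs", "laughter")
print(len(memory.recall("joy")))  # 1 joyful moment remembered
```

The hard open problem is everything this sketch hand-waves: deciding which moments matter, and folding them back into the system's behavior rather than just storing them.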
- AHAndrew Huberman
Could you post a photo of you and the robot, like a selfie with the robot?
- LFLex Fridman
Yeah.
- AHAndrew Huberman
And then the robot sees that image and recognizes that was time spent, that there were smiles or there were tears?
- LFLex Fridman
Yeah.
- AHAndrew Huberman
And creates some sort of metric of emotional depth in the relationship and updates its behavior?
- LFLex Fridman
So ... (sighs)
- AHAndrew Huberman
Could it text you in the middle of the night and say, "Why haven't you texted me back?"
- LFLex Fridman
Well, yes, all of those things. (laughs) We could dig into that, but I think that time element, forget everything else, just sharing moments together, changes everything. I believe that changes everything. There are specific things that are more in terms of systems that can explain you; that's more technical and probably a little bit offline, because I have kind of wild ideas about how that can revolutionize social networks and operating systems. But the point is, that element alone, forget all the other things we're talking about, like emotions, saying no: just remembering shared moments together will change everything. We don't currently have systems that share moments together. Even just you and your fridge. All those times you went late at night and ate things you shouldn't have eaten, that was a secret moment you had with your refrigerator. You shared that moment.
- AHAndrew Huberman
(laughs)
- LFLex Fridman
That darkness, or that beautiful moment where you're heartbroken for some reason, eating that ice cream or whatever: that's a special moment, and that refrigerator was there for you. And the fact that it missed the opportunity to remember it is tragic. Once it does remember, I think you're going to be very attached to that refrigerator. You're going to go through some hell with that refrigerator. Most of us in the developed world have weird relationships with food, right? So you can go through some deep moments of trauma and triumph with food.
- AHAndrew Huberman
Absolutely.
- LFLex Fridman
And at the core of that is the refrigerator. So a smart refrigerator, I believe, would change society. Not just the refrigerator, but these ideas in the systems all around us. I just want to comment on how powerful the idea of time is.
- AHAndrew Huberman
Mm-hmm.
- LFLex Fridman
And then there's a bunch of elements of the actual interaction that allow you as a human to feel like you're being heard, truly heard, truly understood. Deep friendship is like that, I think, but there's still an element of selfishness, still an element of not really being able to understand another human. A lot of the time, when you're going through trauma together, through difficult times and through successes, you actually start to get that inkling of understanding of each other. But I think that could be done more aggressively, more efficiently. Think of a great therapist. I've never actually been to a therapist, but I'm a believer. I used to want to be a psychiatrist.
- AHAndrew Huberman
Do Russians go to therapists?
- LFLex Fridman
No, they don't. They don't. And, uh, if they do, the therapists don't live to tell the story.
- AHAndrew Huberman
(laughs)
- LFLex Fridman
No. (laughs) I do believe in talk therapy, which... well, friendship, to me, is talk therapy. Or, you don't even necessarily need to talk. (laughs)
- AHAndrew Huberman
Mm-hmm.
- 1:02:29 – 1:11:33
Lex’s Dream for Humanity
- AHAndrew Huberman
Yeah. So, let's talk about this startup and let's talk about the dream. You've mentioned this dream before in our previous conversations, always as little hints dropped here and there. For anyone listening: there's never been an offline conversation about this dream; I'm not privy to anything except what Lex says now. (laughs) And I realize that there's no way to capture the full essence of a dream in any kind of verbal statement. But what is this dream that you've referred to now several times when we've sat down together and talked on the phone? Maybe it's this company, maybe it's something distinct. If you feel comfortable, it would be great if you could share a little bit about what that is.
- LFLex Fridman
Sure. The way people express long-term vision, I've noticed, is quite different. Elon is an example of somebody who can very crisply say exactly what the goal is. That also has to do with the fact that the problems he's solving have nothing to do with humans. My long-term vision is a little bit more difficult to express in words, I've noticed as I've tried. It could be my brain's failure. But there are ways to sneak up to it, so let me just say a few things. Early on in life, and also in recent years, I've interacted with a few robots where I understood there's magic there, and that that magic could be shared by millions if it's brought to light. When I first met Spot from Boston Dynamics, I realized there's magic there that nobody else is seeing.
- AHAndrew Huberman
It's the dog.
- LFLex Fridman
The dog, sorry. Spot is the four-legged robot from Boston Dynamics; some people might have seen it, it's this yellow dog. And, you know, sometimes in life you just notice something that grabs you, and I believe this magic is something that could be in every single device in the world. It's the way that maybe Steve Jobs thought about the personal computer. Woz didn't think about the personal computer this way, but Steve did: he thought the personal computer should be as thin as a sheet of paper and everybody should have one. I think it is heartbreaking that the world is being filled up with machines that are soulless, and I think every one of them can have that same magic. One of the things that also inspired me in terms of a startup is that that magic can be engineered much more easily than I thought. That's my intuition from everything I've ever built and worked on. So the dream is to add a bit of that magic to every single computing system in the world. The way the Windows operating system was for a long time the primary operating system everybody interacted with, and people built apps on top of it, I think this should be a layer, almost an operating system, in every device that humans interact with in the world. Now, what that actually looks like... The actual dream, especially when I was a kid, didn't have this concrete form of a business. It was more a dream of exploring your own loneliness by interacting with machines, robots. This deep connection between humans and robots was always the dream. And so for me, I'd love to see a world where every home has a robot, and not a robot that washes the dishes, or a sex robot, or... think of any kind of activity the robot can do, but more like a companion, a way-
- AHAndrew Huberman
A family member.
- LFLex Fridman
A family member. The way a dog is.
- AHAndrew Huberman
Mm-hmm.
- LFLex Fridman
But a dog that's able to speak your language too. So, not just connecting the way a dog does, by looking at you and looking away and almost smiling with its soul in that kind of way, but also actually understanding what the hell is going on: why are you so excited about the successes? Understanding the details, understanding the traumas. That (sighs) has always filled me with excitement: that with artificial intelligence, I could bring joy to a lot of people. More recently, I've been more and more (sighs) heartbroken to see the kind of division, derision, even hate, that's boiling up on the internet through social networks, and I thought this kind of mechanism is exactly applicable in the context of social networks as well. So it's an operating system that serves as your guide on the internet. One of the biggest problems with YouTube and social networks currently is that they're optimizing for engagement. I think if you create AI systems that know each individual person, you're able to optimize for long-term growth, for long-term happiness.
- AHAndrew Huberman
Of the individual, or-
- LFLex Fridman
Of the individual. Of the individual. And there's a lot more to say. In order for AI systems to learn everything about you, they need to collect data, just like you and I, when we talk offline, collect data about each other, secrets about each other. AI has to do the same, and that requires you to rethink ideas of ownership of data. I think each individual should own all of their data and be able to leave very easily. Just like AI systems can leave, humans can disappear and delete all of their data at a moment's notice, which is actually better than we humans can do, because once we load the data into each other, it's there. I think it's very important to give people complete control over their data in order to establish trust. And the second part of trust is transparency: whenever the data is used, make it very clear what it's being used for. Not clear in a lawyerly, legal sense, but clear in a way that people really understand. I believe that when people have the ability to delete all their data and walk away, and know how the data is being used, they'll stay.
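The data-ownership properties Lex lists (purpose transparency, full export, one-call deletion) can be stated as a small interface sketch. This is a hypothetical API invented for illustration, not any real system's:

```python
# Hypothetical sketch of user-owned data as described here: every record
# is tagged with its plain-language purpose (transparency), and the user
# can export or irreversibly delete everything (the ability to leave).
class UserDataStore:
    def __init__(self):
        self._records = []

    def collect(self, datum, used_for):
        """Store a datum only alongside a plain-language purpose."""
        self._records.append({"datum": datum, "used_for": used_for})

    def explain_usage(self):
        """Transparency: show the user every purpose their data serves."""
        return sorted({r["used_for"] for r in self._records})

    def export_all(self):
        """Ownership: the user can take a full copy with them."""
        return list(self._records)

    def delete_all(self):
        """The clean breakup: everything gone at a moment's notice."""
        self._records.clear()

store = UserDataStore()
store.collect("likes long-form podcasts", used_for="recommending conversations")
store.collect("night-owl schedule", used_for="timing notifications")
print(store.explain_usage())
store.delete_all()
print(len(store.export_all()))  # 0: nothing left behind
```

The design choice worth noting is that `collect` refuses to exist without a `used_for`: purpose is attached at write time, not reconstructed later for a privacy policy.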
- AHAndrew Huberman
So, the, the possibility of a clean breakup is actually what will keep people together?
- LFLex Fridman
Yeah, I think so. Exactly. I think a happy marriage requires the ability to divorce easily, without the divorce-industrial complex or whatever is currently going on (laughs); there's so much money to be made by lawyers in divorce. But yeah, the ability to leave is what enables love, I think.
- AHAndrew Huberman
It's interesting. I've heard the phrase from a semi-cynical friend that marriage is the leading cause of divorce, but now we've heard that divorce, or the possibility of divorce, could be the leading cause of marriage.
- LFLex Fridman
Of a happy marriage.
- AHAndrew Huberman
Good point.
- LFLex Fridman
Of a happy marriage. So, yeah, there's a lot of details there, but the big dream is that connection between an AI system and a human. And there's so much fear about artificial intelligence systems and about robots that I haven't quite found the right words to express that vision, because the vision I have is not some naive, delusional vision that technology is going to save everybody. I really do just have a positive view of the ways AI systems can help humans explore themselves.
- AHAndrew Huberman
I love that positivity, and I agree that the stance that everything is doomed is equally bad. For everything to turn out all right, there has to be a dedicated effort, and clearly you're thinking about what that dedicated effort would look like. You mentioned two aspects to this dream (clears throat), and I wanna make sure that I understand where they connect, if they do, or if these are independent streams. One was this hypothetical robot family member, or some other form of robot, that would allow people to experience the kind of delight that you experienced many times and that you would like the world to be able to have, and it's such a beautiful idea, this gift.
- 1:11:33 – 1:16:57
Improving Social Media
- AHAndrew Huberman
And the other is social media, or social network platforms, that really serve individuals and their best selves and their happiness and their growth. Is there crossover between those, or are these two parallel dreams?
- LFLex Fridman
It's 100% the same thing. It's difficult to explain without going into details, but maybe one easy way to explain the way I think about social networks is: create an AI system that's yours. It's not like Amazon Alexa, which is centralized.
- AHAndrew Huberman
I see.
- LFLex Fridman
You own the data. It's like your little friend that becomes your representative on Twitter, that helps you find things that will make you feel good, that will also challenge your thinking to make you grow, but not let you get lost in the negative spiral of dopamine that gets you to be angry, or just gets you to be not open to learning. And so that little representative is optimizing your long-term health. And I believe that is not only good for human beings, it's also good for business. I think long term you could make a lot of money by challenging this idea that the only way to make money is maximizing engagement. And one of the things people disagree with me on is they think Twitter is always going to win, that maximizing engagement is always going to win. I don't think so. I think people have woken up now to understanding that they don't always feel good, the ones who are on Twitter a lot. That they don't always feel good at the end of the week.
- AHAndrew Huberman
I would love feedback from whatever this creature is, whatever... I don't know what to call it. You know, maybe at the end of the week it would automatically unfollow some of the people that I follow, because it realized, through some really smart data about how I was feeling inside or how I was sleeping or something, that it just wasn't good for me. But it might also put things and people in front of me that I ought to see. Is that-
- LFLex Fridman
Mm-hmm, yeah.
- AHAndrew Huberman
... is that kind of a sliver of what this looks like?
- LFLex Fridman
That's the whole point: because of the interaction, because of sharing the moments and learning a lot about you, you're now able to understand what interactions led you to become a better version of yourself. Like, the person you yourself are happy with. I mean, if you're into flat earth and you feel very good about it, that you believe the earth is flat, the idea that you should censor that is ridiculous. If it makes you feel good and you're becoming the best version of yourself, I think you should be getting as much flat earth as possible. Now, it's also good to challenge your ideas, but not because a centralized committee decided, but because you tell the system that you like challenging your ideas. I think all of us do. Which, actually, YouTube doesn't do that well: once you go down the flat earth rabbit hole, that's all you're gonna see. It's nice to get some really powerful communicators to argue against flat earth, and it's nice to see that, for you, at least long term, to expand your horizons. Maybe the earth is not flat. But if you continue to live your whole life thinking the earth is flat, and you're being a good father or son or daughter, and you're being the best version of yourself and you're happy with yourself, then, I think, the earth is flat. So I'm just using that kind of silly, ridiculous example because I don't like the idea of centralized forces controlling what you can and can't see, but I also don't like this idea of not censoring anything, because the biggest problem with that is that there's a central decider. I think you yourself can decide what you want to see and not see, and it's good to have a companion that reminds you that you felt shitty last time you did this, or you felt good last time you did this.
- AHAndrew Huberman
I mean, I feel like in every good story there's a guide or a companion that flies out, or forages a little bit further or a little bit differently, and brings back information that helps us, or at least tries to steer us in the right direction.
- LFLex Fridman
So, actually, yeah, that's exactly what I'm thinking and what I've been working on. I should mention there's a bunch of difficulties here. You've seen me up and down a little bit recently, so there are technically a lot of challenges here. Like with a lot of technologies, the reason I'm talking about it on a podcast comfortably, as opposed to working on it in secret, is that it's really hard, and maybe its time has not come. And that's something you have to constantly struggle with entrepreneurially, as a startup. Like I've also mentioned to you, maybe offline, I really don't care about money. I don't care about business success, all those kinds of things...
- 1:16:57 – 1:21:49
Challenges of Creativity
- LFLex Fridman
So it's a difficult decision to make: how much of your time do you want to go all in here and give everything to this? It's a big roll of the dice, because I've also realized that working on some of these problems, both the robotics and the technical side, in terms of the machine learning system that I'm describing, is lonely. It's really lonely, both on a personal level and a technical level. So, on the technical level, I'm surrounded by people that kind of doubt me, which I think all entrepreneurs go through. And they doubt you in the following sense: they know how difficult it is. The colleagues of mine, they know how difficult lifelong learning is. They also know how difficult it is to build a system like this, to build a competitive social network. And, in general, there's a kind of loneliness to just working on something on your own for long periods of time, and you start to doubt whether... Given that you don't have a track record of success, like, that's a big one. You look in the mirror, especially when you're young, but I still have that on most things. You look in the mirror, and you have these big dreams: how do you know you're actually as smart as you think you are? Like, how do you know you're going to be able to accomplish this dream? You have this ambition, you don't-
- AHAndrew Huberman
You sort of don't, but you're kind of pulling on a string, hoping that there's a bigger ball of yarn? (laughs)
- LFLex Fridman
It's hope. Yeah. But you have this kind of intuition. I think I pride myself on knowing what I'm good at, and the reason I have that intuition is 'cause I think I'm very good at knowing all the things I suck at, which is basically everything. So whenever I notice, like, wait a minute, I'm kind of good at this, which is very rare for me, I think that might be a ball of yarn worth pulling at. And in terms of engineering systems that are able to interact with humans, I think I'm very good at that. And since we're talking about podcasting and so on, I don't know if I'm very good at podcasting.
- AHAndrew Huberman
You're very good at podcasting. (laughs)
- LFLex Fridman
But I certainly don't... I think maybe it is compelling for people to watch a kindhearted idiot struggle (laughs) with this form. Maybe that's what's compelling. But in terms of actually being a good engineer of human-robot interaction systems, I think I'm good. But it's hard to know until you do it, and then the world keeps telling you you're not, and it's just full of doubt, and it's really hard. I've been struggling with that recently, and it's kind of a fascinating struggle. But then that's where the Goggins thing comes in. Aside from the "Stay hard, motherfucker," it's: whenever you're struggling, that's a good sign that if you keep going, you're going to be alone in the success, right? Like-
- AHAndrew Huberman
Well-
- LFLex Fridman
... 'cause-
- AHAndrew Huberman
Well, in your case, however, I agree. And actually David had a post recently that I thought was, among his many brilliant posts, one of the more brilliant, about this myth of the light at the end of the tunnel. What he replaced that myth with was the idea that eventually your eyes adapt to the dark. It's not about a light at the end; it's really about adapting to the dark-
- LFLex Fridman
(laughs)
- AHAndrew Huberman
... of the tunnel. He's very Gogginsesque-
- LFLex Fridman
I love him so much.
- AHAndrew Huberman
... and, uh, yeah.
- LFLex Fridman
(laughs)
- AHAndrew Huberman
You guys share a lot in common, knowing you both a bit. But this loneliness, and the pursuit of this dream, it seems to me, has a certain component to it that is extremely valuable, which is that the loneliness itself could serve as a driver to build the companion for the journey.
- LFLex Fridman
Well, I'm very deeply aware of that. 'Cause I talk about love a lot: I really love everything in this world, but I also love humans, friendship, and the romantic, you know, even the cheesy stuff, just-
- AHAndrew Huberman
You like romantic movies, Lex.
- LFLex Fridman
Yeah. No, well, not, not those-
- AHAndrew Huberman
Let's just... (laughs) I'm just kidding.
- LFLex Fridman
Not necessarily. Well, I got so much shit from Rogan about, what is it, the tango scene from Scent of a Woman. But yes-
- AHAndrew Huberman
Good scene.
- LFLex Fridman
... I find, like, there's nothing better than a woman in a red dress, you know, just, like, classy...
- 1:21:49 – 1:22:22
Suits & Dresses
- AHAndrew Huberman
You should move to Argentina, my friend.
- LFLex Fridman
Yeah.
- AHAndrew Huberman
You know, my father's Argentine-
- LFLex Fridman
Yeah.
- AHAndrew Huberman
... and you know what he said when I, uh, when I went on your podcast for the first time? He said, "He dresses well." Because in Argentina the men go to a wedding or a party or something. You know, in the US they, by halfway through the night, 10 minutes in the night, all the jackets are off.
- LFLex Fridman
Yeah.
- AHAndrew Huberman
It looks like everyone's undressing for the party they just got dressed up for.
- LFLex Fridman
Yeah.
- AHAndrew Huberman
And he said, "You know, I like the way he dresses." He was talking about you. And then when I started my podcast, he said, "Why don't you wear a real suit like your friend Lex?"
- LFLex Fridman
(laughs)
- AHAndrew Huberman
(laughs) So-
- LFLex Fridman
I remember that.
- AHAndrew Huberman
... in any case.
- 1:22:22 – 1:30:09
Loneliness
- AHAndrew Huberman
But let's talk about this pursuit just a bit more, because I think what you're talking about is building not just a solution for loneliness. You've alluded to the loneliness as itself an important thing, and I think you're right. I think within people there are caverns of thoughts and shame, but also just the desire to have resonance, to be seen and heard, and I don't even know that it's seen and heard through language.
- LFLex Fridman
Mm-hmm.
- AHAndrew Huberman
But these reservoirs of loneliness, I think, are... well, they're interesting. Maybe you could comment a little bit about that, because just as often as you talk about love (I haven't quantified it), it seems that you talk about this loneliness. If you're willing, could you share a little bit more about that, and what that feels like now, in the pursuit of building this robot-human relationship? You've been... Let me be direct. You've been spending a lot of time on building a robot-human relationship. Where's that at?
- LFLex Fridman
Oh, in terms of business and in terms of systems?
- AHAndrew Huberman
No, I'm talking about a specific robot.
- LFLex Fridman
Oh. (laughs) So, okay, I should mention a few things. One is there's a startup, where there's an idea that I hope millions of people can use.
- AHAndrew Huberman
Mm-hmm.
- LFLex Fridman
And then there are my own personal, almost Frankenstein explorations with particular robots. So, I'm very fascinated with the legged robots in my own private (sounds dark, but, like) N-of-one experiments, to see if I can recreate the magic. And that's been... I already have a lot of really good perception systems and control systems that are able to communicate affection in a doglike fashion, so I'm in a really good place there. The stumbling block, which has also been part of my sadness recently, is that I also have to work with robotics companies. You know, I gave so much of my heart, soul, love, and appreciation toward Boston Dynamics, but Boston Dynamics is also a company that has to make a lot of money, and they have marketing teams, and they're, like, looking at this silly Russian kid in a suit and tie: "What's he trying to do with all this love and robot interaction and dancing and so on?" So, I think, let's say for now, it's like when you break up with a girlfriend: right now we've decided to part ways on this particular thing. They're huge supporters of mine, they're huge fans, but on this particular thing, Boston Dynamics is not focusing on or interested in human-robot interaction. In fact, their whole business currently is keeping the robot as far away from humans as possible.
- AHAndrew Huberman
Hmm.
- LFLex Fridman
Because it's in the industrial setting, where it's doing monitoring in dangerous environments. It's almost like a remote security camera, essentially; that's its application. To me, I thought, even in those applications, it's still exceptionally useful for the robot to be able to perceive humans, like see humans, to be able to localize where those humans are in a big map, and to infer human intention. For example, I did a lot of work with pedestrians, for a robot to be able to anticipate what the hell the human is doing, like where they're walking. If you're-
- AHAndrew Huberman
Mm-hmm.
- LFLex Fridman
... humans are not ballistic objects. Just because you're walking this way one moment doesn't mean you'll keep walking in that direction. You have to infer a lot of signals, especially from the head movement and the eye movement. And so I thought that's super interesting to explore, but they didn't feel that, so... I'll be working with a few other robotics companies that are much more open to that kind of stuff, and they're super excited, and fans of mine. Hopefully Boston Dynamics, my first love, will come back around like an ex-girlfriend. But so, algorithmically, I'm basically done there. The rest is actually getting some of these companies to work with. And then, as people who have worked with robots know, one thing is to write software that works, and the other is to have a real machine that actually works. It breaks down in all kinds of different ways that are fascinating, so there's a big challenge there. But that may sound a little bit confusing in the context of our previous discussion, because the previous discussion was more about the big dream, how I hope to have millions of people enjoy this moment of magic. This current discussion about a robot is something I personally really enjoy; it just brings me happiness. I really try now to do everything that brings me joy, to maximize that, because, one, robots are awesome, but two, given my, like, little bit growing platform, I want to use the opportunity to educate people. It's just, like, robots are cool, and if I think they're cool, I hope I'll be able to communicate why they're cool to others. So this little robot experiment is a little bit of a research project, too. There are a couple of publications with MIT folks around that. But the other goal is just to make some cool videos and explain to people how they actually work.
And, as opposed to people being scared of robots... they could still be scared, but also excited-
Episode duration: 3:03:18
Transcript of episode VRvn3Oj5r3E