Lex Fridman Podcast

Max Tegmark: Life 3.0 | Lex Fridman Podcast #1

Lex Fridman and Max Tegmark on Consciousness, AGI, and Humanity’s Cosmic Responsibility.

Lex Fridman (host) · Max Tegmark (guest)
Apr 19, 2018 · 1h 22m

EVERY SPOKEN WORD

  1. 0:00–15:00

    1. LF

      As part of MIT Course 6.099, Artificial General Intelligence, I've gotten a chance to sit down with Max Tegmark. He is a professor here at MIT. He's a physicist, spent a large part of his career studying the mysteries of our cosmological universe, but he's also studied and delved into the beneficial possibilities and the existential risks of artificial intelligence. Amongst many other things, he's the co-founder of the Future of Life Institute, author of two books, both of which I highly recommend. First, Our Mathematical Universe. Second is Life 3.0. He's truly an out-of-the-box thinker, and a fun personality, so I really enjoyed talking to him. If you'd like to see more of these videos in the future, please subscribe and also click the little bell icon to make sure you don't miss any videos. Also, Twitter, LinkedIn, agi.mit.edu if you wanna watch other lectures or conversations like this one. Better yet, go read Max's book, Life 3.0. Chapter 7 on goals is my favorite. It's really where philosophy and engineering come together, and it opens with a quote by Dostoevsky, "The mystery of human existence lies not in just staying alive, but in finding something to live for." Lastly, I believe that every failure rewards us with an opportunity to learn. In that sense, I've been very fortunate to fail in so many new and exciting ways, and, uh, this conversation was no different. I've learned about something called radio frequency interference, RFI. Look it up. Apparently, music and conversations from local radio stations can bleed into the audio that you're recording in such a way that it almost completely ruins that audio. It's an exceptionally difficult sound source to remove. So, I've gotten the opportunity to learn how to avoid RFI in the future during recording sessions. I've also gotten the opportunity to learn how to use Adobe Audition and iZotope RX 6 to do some noise, some audio repair. Of course, this is an exceptionally difficult noise to remove. 
I am an engineer. I'm not an audio engineer, neither is anybody else in our group, but we did our best. Nevertheless, I thank you for your patience, and I hope you're still able to enjoy this conversation. Do you think there's intelligent life out there in the universe? Let's open up with an easy question.

    2. MT

      I have a minority view here, actually. When I give public lectures, I often ask for a show of hands, who thinks there's intelligent life out there somewhere else, and almost everyone puts their hand up and when I ask why, they'll be like, "Oh, there's so many galaxies out there, there's gotta be." But, I'm a nu- numbers nerd, right? So, when you look more carefully at it, it's not so clear at all. Th- the, when we talk about our universe, first of all, we don't mean all of space. We actually mean, I don't know, you can throw me the universe if you want, it's behind you there.

    3. LF

      (laughs)

    4. MT

      It's, we simply mean the spherical region of space from which light has had time to reach us so far during the 14.8 billion year, 13.8 billion years since our Big Bang. There's more space here, but this is what we call a universe, because that's all we have access to.

    5. LF

      Mm-hmm.

    6. MT

      So, is there intelligent life here that's gotten to the point of building telescopes and, and computers? My guess is no, actually. Uh, the probability-

    7. LF

      Interesting.

    8. MT

      ... of it happening on any given planet is some number, we don't know what it is, and, um, what we do know is that, uh, the number can't be super high, 'cause there's over a billion Earth-like planets in the Milky Way galaxy alone, many, uh, which are billions of years older than Earth. And, um, aside from some, um, UFO believers-

    9. LF

      (laughs)

    10. MT

      ... you know, there isn't much evidence that any super advanced civilization has come here at all. And so, that's the famous Fermi paradox, right? And, and then if you, if you work the numbers, what you find is that, um, if- if you have no clue what the probability is of getting life on a given planet, so it could be 10 to the -10, 10 to the -20, or 10 to the -2, or any power of 10 is sort of equally likely if you wanna be really open-minded.

    11. LF

      Mm-hmm.

    12. MT

      That translates into it being equally likely that our nearest neighbor is 10 to the 16 meters away, 10 to the 17 meters away, 10 to the 18. You know, by the time you get much less than, than 10 to the 16 already, we, we pretty much know there is nothing else that close. (clears throat) And when you get beyond-

    13. LF

      Because they would've discovered us.

    14. MT

      Yeah, they would've discovered us long ago, or if they're really close, we would've probably noticed some engineering projects that they're doing.

    15. LF

      Yeah.

    16. MT

      And if it's beyond 10 to the 26 meters, that's already outside of here.
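Max's log-uniform reasoning can be sketched numerically. If each habitable planet independently hosts a civilization with probability p, the typical nearest-neighbor distance scales as (density × p)^(-1/3), so a flat prior over powers of ten in p spreads the distance evenly over powers of ten too. The planet count and universe size below are my own rough illustrative numbers, not figures from the episode:

```python
import math

# Illustrative assumptions (mine, not from the episode):
R_UNIVERSE_M = 4.4e26          # radius of the observable universe, meters
N_PLANETS = 1e21               # rough count of Earth-like planets within it

volume = (4 / 3) * math.pi * R_UNIVERSE_M ** 3
density = N_PLANETS / volume   # planets per cubic meter

def nearest_neighbor_distance(p_life: float) -> float:
    """Typical distance to the nearest other civilization if each
    planet independently hosts one with probability p_life."""
    return (density * p_life) ** (-1 / 3)

# A uniform prior over log10(p) maps to a uniform spread over
# log10(distance): every 3 orders of magnitude in p shifts the
# nearest-neighbor distance by 1 order of magnitude.
for exp in (0, -9, -18, -27):
    d = nearest_neighbor_distance(10.0 ** exp)
    print(f"p = 1e{exp:4d}  ->  nearest neighbor ~ 1e{math.log10(d):.0f} m")
```

With these assumptions the nearest neighbor lands anywhere from ~10^20 m (p near 1) to well beyond the ~10^26 m edge of the observable universe, which is the spread Max is describing.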

    17. LF

      Yeah.

    18. MT

      So, so my guess is actually that there, we are the only life in here that's gotten to the point of, uh, building advanced tech, which I think is, is very, um, puts a lot of responsibility on our shoulders to not screw up, you know? I, I think-

    19. LF

      Absolutely.

    20. MT

      ... people who take for granted that it's okay for us to screw up, have an accidental nuclear war or go extinct somehow because there's a sort of Star Trek-like situation out there where some other life forms are gonna come and bail us out, and it doesn't matter, so I, I think they're lulling us into a false sense of security.

    21. LF

      Right.

    22. MT

      I think it's much more prudent to say, you know, "Let's be really grateful for this amazing opportunity we've had," and, uh, make the best of it, just in case it is down to us.

    23. LF

      So from a physics perspective, do you think intelligent life... so it's unique from a sort of statistical view of the size of the universe, but from the basic matter of the universe, how difficult is it for intelligent life to come about, the kind of advanced tech building life? I- is it implied in your statement that it's really difficult-

    24. MT

      Mm-hmm.

    25. LF

      ... to create something like a human species?

    26. MT

      Well, I th- I think what we know is that somewhere between going from no life to having life that can do our level of tech, and going beyond that to actually settling a whole universe with life, there is some major roadblock there, which is some Great Filter, as, um, it's sometimes called, which is tough to get through. That roadblock is either behind us or in front of us.

    27. LF

      Right.

    28. MT

      I'm hoping very much that it's behind us. I'm, I'm super excited every time we get a new, uh, report from NASA saying they failed to find any life on Mars.

    29. LF

      Mm.

    30. MT

      My guess, awesome.

  2. 15:00–30:00

    1. MT

      but, but we should stop making excuses. This is a science question, uh, and we c- and I, uh... there are, there are ways we can even test, test any theory that makes predictions for this. And, uh, coming back to this helper robot, I mean, so you said you'd want your helper robot to certainly act conscious and treat you, like, and, and have conversations with you and stuff?

    2. LF

      I think so, yeah.

    3. MT

      But wouldn't you... would you feel, would you feel a little bit creeped out if you realized that it was just, like, a glossed-up tape recorder, you know, that was just a zombie and was faking emotion? Would you prefer that it actually had an experience? Or, or, or would you prefer that it's actually not experiencing anything, so you, you feel... you don't have to feel guilty about what you do to it? What would you prefer?

    4. LF

      It's such a, it, it's such a difficult question because, uh, you know, it's like when you're in a relationship and you say, "Well, I love you," and the other person says, "I love you back." It's like asking, "Well, do they really love you back or are they just saying they love you back?" Uh-

    5. MT

      (laughs)

    6. LF

      ... do you... don't you really want them to actually love you?

    7. MT

      (laughs)

    8. LF

      I... it's hard to... it's, it's hard to (laughs) really know the difference between, uh, everything seeming like there's consciousness present, there's intelligence present, there's a- affection, passion, love and, and it, it actually being there. I'm not sure. Do you ha-

    9. MT

      But like the mass gen-

    10. LF

      Do you, do-

    11. MT

      Can I ask you a question about this?

    12. LF

      Yes.

    13. MT

      Like, to make it a bit more pointed? So Mass General Hospital is right across the river, right?

    14. LF

      Yes.

    15. MT

      Suppose, suppose you're going in for a medical procedure and they're like, "You know, uh, for, for anesthesia what we're gonna do is we're gonna give you muscle relaxants so you won't be able to move, uh, and you're gonna feel excruciating pain during the whole surgery, but you won't be able to do anything about it. But then we're gonna give you this drug that erases your memory of it."

    16. LF

      Mm-hmm.

    17. MT

      Would you be cool about that?

    18. LF

      Hm.

    19. MT

      Or what, what's the difference that you're conscious about it or not if, if, if there's no behavioral change, right?

    20. LF

      Right. That's a really in- that's a really clear way to put it. That's... yeah. It feels like in that sense experiencing it is a va- a valuable quality, so actually being able to have subjective experiences, (laughs) at least in that case, is, is valuable.

    21. MT

      And I think we humans have a little bit of a bad track record also of making these self-serving arguments that other entities aren't conscious. You know, people-

    22. LF

      Right.

    23. MT

      ... often say, "Oh, these animals can't feel pain."

    24. LF

      Right.

    25. MT

      It's okay to boil lobsters because we asked them if it hurt and they didn't say anything, and, and now there was just a paper out saying lobsters did, do feel pain when you boil them-

    26. LF

      Yeah.

    27. MT

      ... and they're banning it in Switzerland. And, and, uh, and we did this with slaves too often and said, "Oh, they don't mind." Uh. (laughs) And they don't maybe ha- or aren't conscious or women don't have souls or whatever.

    28. LF

      Right.

    29. MT

      So, I'm a little bit nervous when I (laughs) hear people just take as an axiom that machines can't have experience ever. I think this is just a really fascinating science question is what it is.

    30. LF

      Yeah.

  3. 30:00–45:00

    1. MT

      see Alan Turing and others thought about it even earlier. Uh, so you asked me what exactly would I define human-level at.

    2. LF

      Yeah, human-level intelligence. Yeah.

    3. MT

      Yeah. So, this, the, the glib answer is just to say something which is better than us at all-

    4. LF

      (laughs)

    5. MT

      ... cognitive tasks, well, the c-... or better than any human at all cognitive tasks. But the really interesting bar, I think, goes a little bit lower than that, actually. It's when they can, when they're better than us at AI programming and, and, and at general learning, so that they can, can, if they want to, get better than us at anything by just studying up.

    6. LF

      So there better is a key word, and better is towards this kind of spectrum of the complexity of goals-

    7. MT

      Yeah.

    8. LF

      ... it's able to accomplish.

    9. MT

      Yeah.

    10. LF

      So another way to th- uh, so ano- an- and that's certainly a very, uh, clear definition of human level, so there's, it's almost like a sea that's rising, and you can do more and more and more things.

    11. MT

      Yeah.

    12. LF

      It's actually a graphic that you show. It's a really nice way to put it. So there's some peaks that, and there's an ocean level elevating, and you solve more and more problems. But, you know, just kind of, uh, to take a pause, and we took a bunch of questions in a lot of social networks, and a bunch of people asked a sort of, a slightly different direction on creativity. And um-

    13. MT

      Mm-hmm.

    14. LF

      ... and, and things l- that perhaps aren't a peak. Uh, uh, it, it, it's, you know, human beings are flawed, and perhaps better means having, being a, uh, having contradiction, being flawed in some way. So let me sort of-

    15. MT

      Yeah.

    16. LF

      ... start and s- start easy, first of all. Uh, so you have a lot of cool equations. Let me ask, what's your favorite equation, first of all? You, I know they're all like your children, but like-

    17. MT

      That one.

    18. LF

      (laughs) Which one is that?

    19. MT

      This is the Schrödinger equation.

    20. LF

      Okay, sure.

    21. MT

      It's the master key of quantum mechanics of the micro world. So this, with this equation, we can calculate everything to do with atoms and molecules and all the way up to...

    22. LF

      (laughs)

    23. MT

      ... stuff.
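For reference, the "master key" Max holds up is the time-dependent Schrödinger equation:

```latex
i\hbar \, \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) = \hat{H} \, \Psi(\mathbf{r}, t)
```

Here \(\Psi\) is the wave function and \(\hat{H}\) is the Hamiltonian, the total-energy operator; in principle, solving it for a given \(\hat{H}\) yields the behavior of atoms and molecules, which is the sense in which Max calls it the master key of the micro world.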

    24. LF

      Yeah, so okay. So quantum mechanics is certainly a, a beautiful, mysterious formulation of our world. So I'd like to sort of ask you, uh, just as an example, it perhaps doesn't have the same beauty as physics does, but in mathematics, uh, abstract mathematics: Andrew Wiles, who proved Fermat's Last Theorem-

    25. MT

      Yeah.

    26. LF

      So uh, he, I just saw this recently, and it, it kinda caught my eye a little bit. This is 358 years after it was c- it was conjectured. So this very simple formulation, everybody tried to prove it. Everybody failed. And so here's this guy, comes along, and eventually, f- uh, it proves it, and then, then fails to prove it, and then proves it again in '94, and he said like, the moment when everything connected into place, uh, in an interview said, "It was so indescribably beautiful," that moment when he finally realized the connecting piece, uh, of two c- conjectures. He said, "It was so indescribably beautiful. It was so simple and so elegant. I couldn't understand how I'd missed it, and I just stared at it in this disbelief for 20 minutes."

    27. MT

      (laughs)

    28. LF

      "Then, then during the day, I walked around the department, and I'd keeping, keep coming back to my desk, looking to see if it was still there. It was still there. I couldn't contain myself. I was so excited. It was the most important moment of my working life. Nothing I ever do again will mean as much." So that particular moment, and, and it kinda made me think of-

    29. MT

      Beautiful.

    30. LF

      ... what would it take... And I think you p- we have all been there at small levels. Um, maybe, let me ask, have you had a moment like that in your life, where you just had an idea that's like, "I, wow, yes"?

  4. 45:00–1:00:00

    1. LF

      the goals for this country, and a lot of people agree that those goals actually held up pretty well. And it's an interesting formulation of values and failed miserably in other ways. So, for the value alignment problem-

    2. MT

      Yeah.

    3. LF

      ... and the solution to it, we have to be able to put on paper, uh, or in, in, uh, in a program human values.

    4. MT

      Yeah.

    5. LF

      How difficult do you think that is?

    6. MT

      Very. But i- i- it's so important. We really have to give it our best. And it's difficult for two separate reasons. There's the technical value alignment problem of figuring out just how to make machines understand our goals, adopt them, and retain them, and then there's this c- the separate part of it, the philosophical part, whose values anyway? And since we, it's not like we have any great consensus on this planet on values, a- a- how, what mechanism should we create then to aggregate and decide, okay, what's a good compromise?

    7. LF

      Right.

    8. MT

      Uh, that second discussion can't just be left to tech nerds like myself, right?

    9. LF

      That's right.

    10. MT

      And, and if we refuse to talk about it, and then AGI gets built, who's gonna be actually making the decision about these values? It's gonna be a bunch of dudes in some tech company, right?

    11. LF

      Yeah. Yeah.

    12. MT

      And, uh, are, are they necessarily... (laughs) s- so representative of all humankind that we want to just entrust it to them? Are, are they even, uh, uniquely qualified-

    13. LF

      Mm-hmm.

    14. MT

      ... to speak to future human happiness just 'cause they're good at programming an AI? I'd much rather have this be a really inclusive conversation.

    15. LF

      But do you think it's possible, sort of, th- so you create a, a beautiful vision that includes, uh, sort of the diversity, cultural diversity, and f- various perspectives when discussing rights, freedoms, human dignity, but how hard is it to come to that consensus? Do you think, um, it's certainly a really important, uh, thing that we should all try to do, but do you think it's feasible?

    16. MT

      I, I think there's no better way to guarantee failure than to-

    17. LF

      Not to try.

    18. MT

      ... e- either to refuse to talk about it or, or refuse to try.

    19. LF

      Yeah.

    20. MT

      And, uh, I also think it's a really bad i- strategy to say, "Okay, let's first have a discussion for a long time, and then once we reach complete consensus, then we'll-"

    21. LF

      Right.

    22. MT

      "... try to load it into some machine." No, uh, we shouldn't let perfect be the enemy of good. Instead, we should start with the kindergarten ethics that pretty much everybody agrees on and put that into our machines now. We're not doing that even.

    23. LF

      Mm-hmm.

    24. MT

      Look at, uh, you know, uh, anyone who builds a, uh, passenger aircraft wants it to never, under any circumstances, fly into a building-

    25. LF

      Right.

    26. MT

      ... or a mountain, right? Yet the September 11 hijackers were able to do that. And even more embarrassingly, you know, uh, Andreas Lubitz, this depressed Germanwings pilot-

    27. LF

      Mm-hmm.

    28. MT

      ... when he flew his passenger jet into the Alps, killing over 100 people, he just told the autopilot to do it. He told the freaking computer-

    29. LF

      Yeah.

    30. MT

      ... to change the altitude to 100 meters.

  5. 1:00:00–1:15:00

    1. MT

      We can now, I think, quite convincingly answer that question with no. It's enough to have just one kind. If you look under the hood of AlphaZero, there's only one kind of neuron, and it's ridiculously simple-

    2. LF

      Mm-hmm.

    3. MT

      ... uh, a simple mathematical thing. So it's, it's not the... It's just like in physics, it's not the de- if you have a gas with waves in it, it's not the detailed nature of the molecules that matter, it's the collective behavior somehow. Similarly, (clears throat) it's, it's, it's the higher level of structure of the network that matters, not that you have 20 kinds of neurons. I think (clears throat) our brain is such a complicated mess, because it wasn't evolved just to be intelligent, it was evolved to also be... self-assembling-

    4. LF

      Right.

    5. MT

      ... and self-repairing, right? And, uh, evolutionarily attainable and so on, so-
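The "ridiculously simple" unit Max describes can be written in a few lines: a weighted sum pushed through a nonlinearity. This is a generic textbook sketch (names and numbers are mine, not AlphaZero's actual code):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs plus a bias,
    squashed through a sigmoid nonlinearity into the range (0, 1)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))

# Networks like AlphaZero's stack many copies of this one simple
# function; the capability lives in the wiring, not the unit.
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))
```

This is the sense in which "one kind of neuron" is enough: as with gas molecules, what matters is the collective structure, not the detailed nature of the unit.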

    6. LF

      Yeah, it patches and so on, yeah.

    7. MT

      Yeah. So I, I think it's pretty... My, my hunch is that we're gonna understand how to build AGI before we fully understand how our brains work.

    8. LF

      Mm-hmm.

    9. MT

      Just like we, uh, we understood how to build flying machines long before we were able to build a mechanical bird.

    10. LF

      Yeah, that's right. You've given the ex- (laughs) you've given the, that, uh, the example exactly of mechanical birds and airplanes.

    11. MT

      Yeah.

    12. LF

      And airplanes do a pretty good job of flying without really mimicking bird flight.

    13. MT

      And even now, after 100... It's 100 years later, did you see the TED Talk with, um, this German group-

    14. LF

      I did not.

    15. MT

      ... with the mechanical bird?

    16. LF

      I've heard you mention it. I want to see it.

    17. MT

      It's... Check it out, it's amazing. But even after that, right, we still don't fly on mechanical birds, because it turned out that the way we came up with was simpler, and it's better for our purposes. And I think it might be the same there. That's one lesson. Uh, and another lesson, which is more what the, what our paper was about... Uh, first w- we, I, I as a physicist thought it was fascinating how there's a very close mathematical relationship actually between our artificial neural networks and a lot of things that we've studied before in physics-

    18. LF

      Mm-hmm.

    19. MT

      ... that go by nerdy names like the renormalization group equation and Hamiltonians and yada, yada, yada. And, and, um, when you look a little more closely at this, you have, um... At first I was like, whoa, there's something crazy here that doesn't make sense 'cause we know that if you even wanna build a super simple neural network to tell apart cat pictures and dog pictures, right? Like, we can do that very, very well now.

    20. LF

      Mm-hmm.

    21. MT

      But if you think about it a little bit, you convince yourself it must be impossible, because if I have one megapixel, even if each pixel is just black or white, there's two to the power of one million possible images.

    22. LF

      Mm-hmm.

    23. MT

      There's way more than there are atoms in our universe, right?

    24. LF

      Yeah.

    25. MT

      So in order to s- And then for each one of those, I have to assign a number which is the probability that it's a dog.

    26. LF

      Right.

    27. MT

      So an arbitrary function of images is a list of more numbers than there are atoms in our universe, so clearly I can't store that under the hood of my, my GPU or my, my computer, yet it somehow works.
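Max's counting argument checks out with a few lines of arithmetic: the number of distinct one-megapixel black-and-white images is 2^1,000,000, a number with about 301,030 digits, while the standard rough estimate for the number of atoms in the observable universe is only around 10^80 (my figure, not from the episode):

```python
import math

N_PIXELS = 1_000_000        # one megapixel, each pixel black or white
LOG10_ATOMS = 80            # rough standard estimate: ~10^80 atoms

# Work in log10 to avoid materializing the astronomically large integer.
log10_images = N_PIXELS * math.log10(2)

print(f"2^1,000,000 has about {int(log10_images) + 1} digits")
print(f"that's ~10^{log10_images - LOG10_ATOMS:.0f} times the atom count")
```

So a lookup table assigning a dog-probability to every possible image is physically impossible to store, yet modestly sized networks classify real images well, which is the puzzle Max's paper addresses.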

    28. LF

      Mm-hmm.

    29. MT

      So what does that mean? Well, it means that the f- out of all of the problems that you could tr- all... Try to solve with a neural network, almost all of them are impossible to solve with a reasonably sized one. But then w- what we sh- showed in our paper was, was that the fractio- the kind of problems, the fraction of all the problems that you could possibly pose that the, that we actually care about, given the laws of physics-

    30. LF

      Mm-hmm.

  6. 1:15:00–1:21:38

    1. MT

    2. LF

      That's really well-put.

    3. MT

      And, and, uh, and, uh, I, I feel it's, it's very challenging to come up with a vision for the f- a future which we're, which we are unequivocally excited about. And I'm not just talking now in the vague terms, like, "Yeah, let's cure cancer." Fine.

    4. LF

      Right.

    5. MT

      I'm talking about what kind of society do we wanna create? What do we want it to mean, you know, to be human in the age of AI when we ha- in the age of AGI? So, if we can have this conversation, broad, inclusive conversation, and gradually start converging towards some, some future that... with some direction, at least, that we wanna steer towards, right? Then, then, uh, we'll be much more motivated to constructively take on the obstacles. And I think, uh, if I, uh, if I had to... if, if you make... if I try to wrap this up in a more succinct way, I think, I think we can all agree already now that we should aspire to build AGI that doesn't overpower us, but that empowers us.

    6. LF

      And think of the many various ways it can do that, whether that's from, uh, my, uh, side of the world of autonomous vehicles. I, I per-... I'm personally actually from the camp that believes there's... human-level intelligence is required to, to achieve something-

    7. MT

      Mm-hmm.

    8. LF

      ... like vehicles that would actually be something we would enjoy using-

    9. MT

      Mm-hmm.

    10. LF

      ... and being part of. So, that's the one example, and certainly there's a lot of other types of robots in medicine, and, and so on. Uh, so focusing on those, and then, and then coming up with the obstacles, coming up with the ways that that can go wrong, and solving those one at a time.

    11. MT

      And just because you can build an autonomous vehicle, e- even if you could build one that would drive just fine without you... You know, m- maybe there are some things in life that we would actually wanna do ourselves.

    12. LF

      That's right.

    13. MT

      Right? Like, for example in... if you think of our society as a whole, there's some things that w- that we find very meaningful to do, and, uh, that doesn't mean we have to stop doing them just because machines can do them better, you know? I'm not gonna stop playing tennis just the day, the day someone builds a tennis robot-

    14. LF

      Yeah.

    15. MT

      ... to beat me.

    16. LF

      People are still playing chess and even Go.

    17. MT

      Yeah, and, and, uh, I... in this, the... in the very near term even, some people are advocating basic income-

    18. LF

      Mm-hmm.

    19. MT

      ... to replace jobs. But if you... if, if the government is gonna be willing to just hand out cash to people for doing nothing, uh, then one should also seriously consider whether the government should also just hire a lot more teachers, and nurses, and the kind of jobs which people often find great fulfillment in doing, right? I get very tired of hearing politicians saying, "Oh, we can't afford hiring more teachers."

    20. LF

      Right.

    21. MT

      "But we're gonna maybe have basic income." If we can have more, more serious research and thought into what gives meaning to our lives, then jobs give so much more than income, right?

    22. LF

      Mm-hmm.

    23. MT

      And then think about, in the future, wh- what are the roles that... yeah, what are the, the roles that we wanna have people continually filling empowered by machines?

    24. LF

      And I think, sort of, um... I come from the... Russia, from the Soviet Union, and, um, I think for a lot of people in the 20th century, going to the moon, going to space was ins- an inspiring thing. I feel like the, the, uh... the, the universe of the mind, so AI, understanding, creating intelligence, is that for the 21st century. Uh, so it's really surprising, and I've heard you mention this. It's really surprising to me, both on the research funding side that it's not funded as greatly as it could be, but most importantly on the pol- politician side, that it's not part of the public discourse except in the killer bots, Terminator kind of view. That people are not yet, I think, perhaps excited by the possible positive future that we can, uh, build together, so-

    25. MT

      And we should be, because politicians usually just focus on the next election cycle, right?

    26. LF

      Right.

    27. MT

      The single most important thing I feel we humans have learned in the entire history of science is that we're the masters of underestimation. We underestimated the size of, of, of our cosmos, again and again, realizing that everything we thought existed was just a small part of something grander, right? Planet, solar system, the galaxy, you know, clusters of galaxies, the universe. And we now know that we- that the future has just so much more potential-

    28. LF

      Mm-hmm.

    29. MT

      ... than our ancestors could ever have dreamt of. Uh, this cosmos, I- I- imagine if all of Earth was completely devoid of life except for Cambridge, Massachusetts, right?

    30. LF

      Mm-hmm.

Episode duration: 1:22:57


Transcript of episode Gi8LUnhP5yU
