Max Tegmark: AI and Physics | Lex Fridman Podcast #155
- 0:00 – 2:49
Introduction
- LFLex Fridman
The following is a conversation with Max Tegmark, his second time on the podcast. In fact, the previous conversation was episode number one of this very podcast. He is a physicist, an artificial intelligence researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence. He's also the head of a bunch of other huge, fascinating projects, and has written a lot of different things that you should definitely check out. He has been one of the key humans who has been outspoken about long-term existential risks of AI, and also its exciting possibilities and solutions to real world problems, most recently at the intersection of AI and physics, and also in re-engineering the algorithms that divide us by controlling the information we see, and thereby creating bubbles and all other kinds of, uh, complex social phenomena that we see today. In general, he's one of the most passionate and brilliant people I have the fortune of knowing. I hope to talk to him many more times on this podcast in the future. Quick mention of our sponsors: The Jordan Harbinger Show, Four Sigmatic Mushroom Coffee, BetterHelp Online Therapy, and ExpressVPN. So the choice is wisdom, caffeine, sanity, or privacy. Choose wisely, my friends. And if you wish, click the sponsor links below to get a discount and to support this podcast. As a side note, let me say that many of the researchers in the machine learning and artificial intelligence communities do not spend much time thinking deeply about existential risks of AI. Because our current algorithms are seen as useful but dumb, it's difficult to imagine how they may become destructive to the fabric of human civilization in the foreseeable future. I understand this mindset, but it's very troublesome. To me, this is both a dangerous and uninspiring perspective, reminiscent of a lobster sitting in a pot of lukewarm water that a minute ago was cold. I feel a kinship with this lobster.
I believe that already the algorithms that drive our interaction on social media have an intelligence and power that far outstrip the intelligence and power of any one human being. Now really is the time to think about this, to define the trajectory of the interplay of technology and human beings in our society. I think that the future of human civilization very well may be at stake over this very question of the role of artificial intelligence in our society. If you enjoy this thing, subscribe on YouTube, review it on Apple Podcasts, follow on Spotify, support on Patreon, or connect with me on Twitter @lexfridman. And now, here's my conversation with Max Tegmark.
- 2:49 – 16:07
AI and physics
- LFLex Fridman
So people might not know this, but you were actually episode number one of this podcast just a couple of years ago, and now we're back. And it so happens that a lot of exciting things happened in both physics and artificial intelligence, both fields that you're super passionate about. Can we try to catch up to some of the exciting things happening in artificial intelligence, especially in the context of the way it's cracking open the different problems of s- the sciences?
- MTMax Tegmark
Yeah, I would love to, especially now, uh, as we start 2021 here. And it's a really fun time to think about w- what were the biggest breakthroughs in AI. Not the ones necessarily the media wrote about, but that-
- LFLex Fridman
Yeah.
- MTMax Tegmark
... really matter, and- and what do they mean for our ability to do better science, what does it mean for our ability, uh, to help people around the world, and what does it mean for new, um, t- problems that they could cause if we're not smart enough to avoid them. So, you know, what do we learn, basically, from this?
- LFLex Fridman
Yes, absolutely. So one of the amazing things you're part of is the AI Institute for Artificial Intelligence and Fundamental Interactions. What's up with this institute? What- what are you working on? What are you thinking about?
- MTMax Tegmark
The idea is something I'm very on fire with, which is basically AI meets physics, and, um, you know, it's been almost five years now since I shifted my own MIT research from physics to machine learning. And in the beginning, I noticed a lot of my colleagues, even though they were polite about it, were like kind of, "Hm. What is Max doing? What is this weird stuff?" (laughs)
- LFLex Fridman
(laughs) He's lost his mind. (laughs)
- MTMax Tegmark
Then- But then gradually, I, uh, together with some colleagues, were able to persuade more and more of the other professors in the- our physics department to get interested in this, and- and- and, um, now we got this amazing NSF center, so 20 million bucks for- for the next five years, MIT and a bunch of neighboring universities here also. And I notice now those colleagues (laughs) who were looking at me funny have stopped asking-
- LFLex Fridman
(laughs)
- MTMax Tegmark
... why I'm- what the point is of this because it's becoming more clear. And I ba- really believe that, of course, AI can help physics a lot to do better physics, but physics can also help, uh, AI a lot, both by building better hardware. My colleague Marin Soljačić, for example, is working on an optical chip for much faster machine learning, where the computation is done not by moving electrons around and- but by moving photons around. Dramatically less energy use, faster, better. Um, we all can- we can also help AI a lot, I think, by having a- a different set of tools and a different, maybe more audacious attitude. You know, AI has lo- to a significant extent been an engineering discipline, where you're just trying to make things that work-
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
... and being less int- more interested in maybe selling them than in figuring out exactly how they work... and proving theorems that they will always work, right?
- LFLex Fridman
Yeah.
- MTMax Tegmark
Contrast that with physics. You know, when Elon Musk sends a rocket to the International Space Station, they didn't just train with machine learning, "Oh, let's fire it a little bit left, more to the left, a bit more to the right. Oh, that also missed. Let's try here." No, you know, we, uh, figured out Newton's laws of gravitation and other... and got other things, and got a really deep, fundamental understanding. Uh, and that's what gives us such confidence in, in rockets. And my vision is that, in the future all machine learning systems that actually have impact on, on people's lives will be understood at a really, really deep level, right? So we trust them not 'cause some sales rep told us to, but because they've earned our trust. We can... And really safety critical things, even prove that they will always do, you know, what we expect them to do. That's very much the physics mindset. So, it, it's interesting, if, if you look at big breakthroughs that have happened in machine learning this year, you know, from dancing robots, you know... It's pretty fantastic, um, not just because it's cool, but if you just think about, not that many years ago, this YouTube video at this DARPA challenge with the MIT robot, comes out of the car and face plants.
- LFLex Fridman
Yeah (laughs) .
- MTMax Tegmark
(laughs) How far we've come-
- LFLex Fridman
Yes.
- MTMax Tegmark
... in, in just a few years. Similarly, AlphaFold2, you know, crushing the protein folding problem. We can talk more about implications for medical research-
- LFLex Fridman
Yeah.
- MTMax Tegmark
... and stuff, but hey, you know, that's huge progress. Uh, you, you can look at the GPT-3 that can spout off English te- texts, which sometimes really, really blows you away.
- LFLex Fridman
Yeah.
- MTMax Tegmark
You can look at the Go- uh, DeepMind's, uh, MuZero, which doesn't just kick our butt in, um, Go and chess and shogi, but also in all these Atari games. And you don't even have to teach it the rules now.
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
You know, what all of those have in common is, besides being powerful, is we don't fully understand how they work. And that's fine if it's just some dancing robots, and the worst thing that can happen is they face plant, right? Or if they're playing Go, and the worst thing that can happen is that they make a bad move and lose the game, right? It, it's less fine if that's what's controlling your self-driving car or your nuclear power plant. And, uh, we've seen already that even though Hollywood had all these movies where they try to make us worry about the wrong things, like machines turning evil, the actual bad things that have happened with automation have not been machines turning evil.
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
They've been caused by over-trust in things we didn't understand as well as we thought we did, right?
- LFLex Fridman
Yeah.
- MTMax Tegmark
Even very simple automated systems like what, uh, Boeing put into the 737 MAX, right?
- LFLex Fridman
Yes.
- MTMax Tegmark
Killed a lot of people. Was it that that little simple system was evil? Of course not. But we didn't understand it as well as we should have, right?
- 16:07 – 24:57
Can AI discover new laws of physics?
- MTMax Tegmark
So, for example, I'll give you one example, this AI Feynman, um, project-
- LFLex Fridman
Yes.
- MTMax Tegmark
... that we just published, right? So, we took the 100 most famous or complicated equations from one of my favorite physics textbooks, in fact, the one that got me into physics in the first place-
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
... the Feynman Lectures on Physics. And, uh, so you have a formula, you know, maybe it has... What goes into the formula is six different variables and then what comes out is one. So, then you can make like a giant Excel spreadsheet with seven columns.
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
You put in just random numbers for the six columns for those six input variables and then you calculate with a formula the seventh column-
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
... the output. So, maybe it's like the force equals in the last column some function of the others, and now the task is, okay, if I don't tell you what the formula was, can you figure that out from looking at the spreadsheet I gave you?
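The spreadsheet setup Max describes can be sketched in a few lines of Python. This is just an illustrative toy, assuming NumPy; the particular hidden formula here (an inverse-square force with unit constant, using two masses and two 2-D positions as the six inputs) is my own choice for the example, not one from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Six random input columns: two masses and two 2-D positions.
m1, m2 = rng.uniform(1, 10, n), rng.uniform(1, 10, n)
x1, y1 = rng.uniform(-5, 5, n), rng.uniform(-5, 5, n)
x2, y2 = rng.uniform(-5, 5, n), rng.uniform(-5, 5, n)

# The hidden formula computes the seventh column (inverse-square force, G = 1).
F = m1 * m2 / ((x2 - x1) ** 2 + (y2 - y1) ** 2)

# The "spreadsheet": 1000 rows, 7 columns. The symbolic-regression task is
# to recover the expression for column 7 given only this table.
table = np.column_stack([m1, m2, x1, y1, x2, y2, F])
print(table.shape)  # (1000, 7)
```

Given only `table`, a symbolic-regression system has to search the space of expressions for one that reproduces the last column from the first six.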
- LFLex Fridman
Yes.
- MTMax Tegmark
This problem is called symbolic regression.
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
If I tell you that the formula is what we call a linear formula, so it's just that the output is, uh, some sum of all the things inputted times some constants, that's a famous easy problem we can solve. We do it all the time in science and engineering. But the general one, if it's more complicated functions with logarithms or cosines or other math, is a very, very hard one and probably impossible to do fast in general just because the number of formulas with n symbols, you know, just grows exponentially-
- LFLex Fridman
Yeah.
- MTMax Tegmark
... just like the number of passwords you can make grows dramatically with length. But we had this idea that if you first have a neural network that can actually approximate the formula, you just trained it, even if you don't understand how it works-
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
... that can be the first step-
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
... towards actually understanding how it works. So, that's what we do first, and then we study that neural network and put in all sorts of other data that wasn't in the original training data and use that to discover simplifying properties of the formula... and that lets us break it apart, often into many simpler pieces, in a kind of divide and conquer approach. So we were able to solve all of those 100 formulas, discover them automatically-
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
... plus a whole bunch of other ones. And, uh, (laughs) it's, uh, it's actually kind of humbling to see that th- this code, which anyone who wants now, listening to this, can type "pip install aifeynman" on their computer-
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
... and run it, you know, it can actually do what Johannes Kepler spent four years doing-
- LFLex Fridman
(laughs)
- MTMax Tegmark
... when he stared at Mars data-
- LFLex Fridman
Yeah.
- MTMax Tegmark
... until he's like, "Finally! Eureka! This is an ellipse!"
- LFLex Fridman
Yeah.
- MTMax Tegmark
You know? (laughs) This will do it automatically for you in one hour, right? Or Max Planck, he was looking at, uh, how much radiation comes out, at different wavelengths, from a hot object and discovered the famous blackbody formula. (laughs) This discovers it automatically. Uh, I'm, I'm actually excited about th- seeing if we can discover not just old formulas again, but new formulas that no one has seen before. S-
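The "easy" linear special case Max mentions earlier — where the output is just a weighted sum of the inputs times constants — really is solvable in one shot with ordinary least squares. A minimal sketch, assuming NumPy; the coefficient values are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Six input columns, 200 rows of random data.
X = rng.normal(size=(200, 6))

# Hidden linear formula: output = 2*x0 - x1 + 0.5*x3 + 3*x5.
true_w = np.array([2.0, -1.0, 0.0, 0.5, 0.0, 3.0])
y = X @ true_w

# Ordinary least squares recovers the constants from the table alone.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(w, 3))  # recovers [2, -1, 0, 0.5, 0, 3] up to numerical precision
```

The hard case is when the hidden formula is not linear — logarithms, cosines, compositions — where no such closed-form recovery exists and the space of candidate expressions grows exponentially with formula length, which is why the neural-network-first, divide-and-conquer strategy helps.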
- LFLex Fridman
And do you like this process of using kind of a neural network to find some basic insights, and then dissecting the neural network to then gain the final... So that, in that way, you're, um... f- forcing the explainability issue, uh, you know, really trying to analyze a neural network, uh, for the things it knows in order to come up with the final, beautiful, simple theory underlying-
- 24:57 – 42:33
AI safety
- MTMax Tegmark
and there seem to be basically two strategies I see in industry now. One scares the heebie-jeebies out of me, and the other one I find much more encouraging.
- LFLex Fridman
Okay. Which one? Uh, can, can we break them apart? Which, which of the two? (laughs)
- MTMax Tegmark
The one that scares the heebie-jeebies out of me is this attitude that we're just gonna make ever bigger systems that we still don't understand-
- LFLex Fridman
Uh-huh.
- MTMax Tegmark
... until they can do ... be as smart as humans. I ... What could possibly go wrong? (laughs) Right?
- LFLex Fridman
Yeah.
- MTMax Tegmark
I, I think it's just such a reckless thing to do. And, and unfortunately ... And if we actually succeed as a species to build artificial general intelligence and we still have no clue how it works, I think at least 50% chance we're gonna be extinct before too long, and it's just gonna be an utter epic, uh, own goal, you know?
- LFLex Fridman
So it's that 44-minute s- uh, losing money problem or, like, the paperclip problem, like where we don't understand how it works and it just in a matter of seconds runs away in some kinda direction that's going to be very problematic.
- MTMax Tegmark
Even long before you have to worry about the machines themselves, uh, somehow deciding to do things to us, we have to worry about people using machines that are short of AGI in power to do bad things. I mean, just t- take a moment and if, if anyone is not worried particularly about advanced AI, just take 10 seconds and just think about your least favorite leader on the planet right now. Don't tell me who it is.
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
I wanna keep this apolitical (laughs) . But just see the face in front of you of that person for 10 seconds.
- LFLex Fridman
Yes.
- MTMax Tegmark
Now imagine that that person has this incredibly powerful AI under their control, and can use it to impose their will on the whole planet. How does that make you feel? (laughs)
- LFLex Fridman
Yeah. So the, the, the ... Can, can we break that apart just briefly? For the 50% chance that we'll run into trouble with this approach, do you see the bigger worry in that leader or humans using the system to do damage? Or are you more worried ... And I think, um, I'm in this camp, more worried about, like, accidental, unintentional destruction of everything. Sort of like humans trying to do good, and like in a way where everyone agrees it's kinda good. It's just they're trying to do good without understanding, 'cause I think every-
- MTMax Tegmark
Yeah.
- LFLex Fridman
... evil leader in history thought they're ... to some degree thought they're trying to do good.
- MTMax Tegmark
Oh yeah, I'm sure Hitler thought he was doing-
- LFLex Fridman
Doing good, yeah.
- MTMax Tegmark
... convinced himself he was trying to do good too.
- LFLex Fridman
Stal- I've been reading a lot about Stalin. I'm sure Stalin is from ... He legitimately thought that communism was good for the world and that he was doing good. Yeah.
- MTMax Tegmark
I think Mao Zedong thought what he was doing-
- LFLex Fridman
Yeah.
- MTMax Tegmark
... with the Great Leap Forward was good too. Yeah, uh, I th- I'm actually concerned about both of those. Uh, before ... I promised to answer this, uh, in detail, but before we do that I, let me finish answering the first question, 'cause I told you that there were two different routes-
- LFLex Fridman
Yes.
- MTMax Tegmark
... we could get to artificial general intelligence, and one scares the heebie-jeebies out of me.
- LFLex Fridman
(laughs)
- MTMax Tegmark
Uh, which is this one where we build something, we just say bigger neural networks, ever more hardware and-
- LFLex Fridman
Yes.
- MTMax Tegmark
... just train the heck out of it with more data, and poof, now it's very powerful. Um, that I think is the w- the most unsafe and reckless approach. The alternative to that is the intelligible intelligence approach instead, where we, um, say neural networks is just a tool, like, for the first step to get the intuition. But then we're gonna also spend serious resources on other AI techniques for demystifying this black box and figuring out what it's actually doing so we can convert it into something that's equally intelligent, but where we actually understand what it's doing.
- LFLex Fridman
Mm-hmm.
- 42:33 – 53:31
Extinction of human species
- MTMax Tegmark
better off.
- LFLex Fridman
It seems that naturally with human beings, just like you've beautifully described the history of this whole thing, uh, of it all started with the genes, and they're, they're probably pretty upset by all the unintended consequences that happened since.
- MTMax Tegmark
(laughs)
- LFLex Fridman
But the, it seems that it kind of works out. Like, it's in this collective intelligence that emerges at the different levels, it seems to find, sometimes last minute, a way to realign the values or keep the values aligned. It's almost, um, it finds a way, like, uh, different leaders, different humans pop up all over the place that, uh, reset the system. Do, do you, one, I mean, (laughs) do you have an explanation why that is? Or is that just survivor bias?
- MTMax Tegmark
Uh...
- LFLex Fridman
And also, is that different, somehow fundamentally different than with AI systems, where, um, you're no longer dealing with something that was a, a direct, maybe companies are the same, a direct byproduct of the evolutionary process?
- MTMax Tegmark
I think there is one thing which has changed. That's why I'm not (laughs) all optimistic. That's why I think there's about a 50% chance, if we take the dumb route with, um, artificial intelligence, that humanity will be extinct in this century. Uh, f- first, just the big picture, yeah, companies need to have the right incentives. Even governments, right? We used to have governments, usually there was just some king, you know, who was the king because his dad was the king, you know? And, and then there were some benefits of having this powerful kingdom, or empire of any sort, because then it could prevent a lot of local squabbles.
- LFLex Fridman
Right.
- MTMax Tegmark
So at least everybody in that region would stop warring against each other, and their incentives of different cities in the kingdom became more aligned, right? That's, that was the whole selling point.
- LFLex Fridman
(laughs) Yes.
- MTMax Tegmark
Harari.
- LFLex Fridman
Yeah.
- MTMax Tegmark
Yuval Harari has a beautiful piece on how empires were collaboration enablers, and then we also... Harari says we invented money for that reason, so we could have better alignment, and we could do trade even with people we didn't know. So, this sort of stuff has been playing out since time immemorial, right? What's changed is that it happens on ever-larger scales, right? The technology keeps getting better because science gets better. So now we can communicate over larger distances, transport things fast over larger distances, and so the entities get ever bigger, but our planet is not getting bigger anymore. So, in the past, you could have one experiment that just totally screwed up, like Easter Island, where they actually managed to have such poor alignment that when the people there went extinct, (laughs) there was no one else to come back and replace them, right?
- LFLex Fridman
Yes.
- MTMax Tegmark
If Elon Musk doesn't get us to Mars, and then we go extinct on a global scale, then we're not coming back. That's, that's the fundamental difference, and that's an exp- a mistake I would rather that we don't make for that reason. In the past, of course, history is full of fiascos, right? But it was never the whole planet.
- LFLex Fridman
Mm-hmm, right.
- MTMax Tegmark
And, and then, okay, now there's this nice uninhabited land here, some other people could move in and organize things better. This is different. The second thing, which is also different, is that technology gives us so much more empowerment, right, both to do good things and also to screw up. In the Stone Age, even if you had someone whose goals were really poorly aligned, like maybe he was really pissed up- off because his Stone Age girlfriend dumped him, and he just wanted to... If he wanted to, like, kill as many people as he could-
- LFLex Fridman
Yeah.
- MTMax Tegmark
... how many could he really take out with a rock and a stick before he was overpowered, right?
- LFLex Fridman
Right.
- MTMax Tegmark
Just a handful, right?
- LFLex Fridman
Yeah.
- MTMax Tegmark
Uh, now... with today's technology, if we have an accidental nuclear war between Russia and the US, which we've almost had about a dozen times, and then we have a nuclear winter, it could take out seven billion people, or six billion people, or we don't know. Uh, so the, so the scale of the damage that we can do is bigger, and, um, there's obviously no law of physics that says that technology will never get powerful enough that we could wipe out our species entirely. That would just be fantasy, to think that science is somehow doomed not to get more powerful than that, right? And, and it's not at all unfeasible in our lifetime that someone could design a designer pandemic which spreads as easily as COVID, but just basically kills everybody. We already had smallpox. It killed one third of everybody who got it. And, and, um...
- LFLex Fridman
What do you think of the... here's an intuition, maybe it's completely naive and this optimistic intuition I have, which it seems, and maybe it's a biased experience that I have, but it seems like the most brilliant people I've met in my life all are g- really, like, fundamentally good human beings. And not like naive good, like they really want to do good for the world in a way that, well, maybe is aligned to my sense of what good means. And so I have a sense that the, uh, the people that will be defining the very cutting edge of technology, there will be much more of the ones that are doing good versus the ones that are doing evil. So the race, I'm optimistic on the us always, like, last minute coming up with a solution. So if, if there's an engineered pan- pandemic that has the capa- uh, capability to destroy most of the human civilization, it, it feels like to me either leading up to that, before, or as it's going on, there will be, uh, we're able to rally the, the collective genius of the human species. (laughs) I could tell by your smile that you're, uh-
- MTMax Tegmark
(laughs)
- LFLex Fridman
... at least some percentage, uh, um, doubtful. But is there, could that be a fundamental law of human nature, that evolution creates, uh, like... karma is beneficial, good is beneficial, and therefore we'll be all right?
- MTMax Tegmark
I (laughs) hope you're right. I would really love it if you're right, if there's some sort of law of nature that says that we always get lucky in the last second-
- LFLex Fridman
(laughs)
- MTMax Tegmark
... because of karma. But, you know, (laughs) I prefer, I prefer, uh, not playing it so close-
- LFLex Fridman
Yes.
- 53:31 – 1:15:05
How to fix fake news and misinformation
- LFLex Fridman
So one of the observations, as one little ant/human that I am, of disappointment is the political division over information that I observed this year. It seemed, uh, the discussion was less about, um, sort of, uh, what happened and understanding what happened deeply, and more about there being different truths out there. And it's like an argument, my truth is better than your truth, and it's, it's like red versus blue, this ridiculous discourse that doesn't seem to get at any kind of notion of the tru- It's not like a, some kind of scientific process. Even science got politicized-
- MTMax Tegmark
Yeah.
- LFLex Fridman
... in ways that's very-
- MTMax Tegmark
Yeah.
- LFLex Fridman
... heartbreaking to me. Uh, you have an exciting project on the AI front, uh, of trying to rethink one of the, you mentioned corporations. There's one of the other collective intelligence systems that have emerged through all of this is social networks, and just the spread... The internet is, is the spread of information on the-... uh, the internet, our ability to share that information. There's all different kinds of news sources and so on. And so you said, like, let's, from first principles, let's rethink how we think about the news, how we think about information. Can you talk about this, uh, amazing effort that you're undertaking?
- MTMax Tegmark
Oh, I'd love to. This has been my big COVID project-
- LFLex Fridman
(laughs)
- MTMax Tegmark
... to spend nights and weekends on, ever since the lockdown. To segue into this actually, let me come back to what you said earlier, that you had this hope that, in your experience, people who you felt were very talented were often idealistic and wanted to do good. Frankly, I- I feel the same about all people, by and large. There are always exceptions. But I think the vast majority of everybody, regardless of education and whatnot, really are fundamentally good, right? So, how can it be that people still do so much nasty stuff, right?
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
I think it has everything to do with this, with the information that we're given.
- LFLex Fridman
Yes.
- MTMax Tegmark
You know? If you go into Sweden 500 years ago and you start telling all the farmers that, "Those Danes in Denmark, they're so terrible people," you know, and, "We have to invade them-"
- LFLex Fridman
Yes.
- MTMax Tegmark
... because they've done all these terrible things that you can't fact-check yourself, eh, a lot of peop- Swedes did that, right? And, um, we're seeing so much of this today in the world, both geopolitically, you know, where we are told that China is bad and Russia is bad and Venezuela is bad, and people in those countries are often told that we are bad. And we also see it at a micro level, you know, where people are told that, "Oh, those who voted for the other party are bad people." It's not just an intellectual disagreement, but they're bad people. And, um, we're getting ever more divided. And so how do you reconcile this with this intrinsic, uh, goodness in people? I, I, I think it's pretty obvious that it has, again, to do with the information that we're fed and given, right? We evolved to live in small groups where you might know 30 people in total, right? So you then had a system that was quite good-
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
... for assessing who you could trust and who you could not. And if someone told you that, you know, "Joe there is a jerk," but you had interacted with him yourself and seen him in action, you would pretty quickly realize maybe that that's actually not quite accurate, right? Uh, but now that most people on the planet are people we've never met, it's very important that we have a way of trusting the information we're given. And so, okay, so where does the news project come in? Well, throughout history, you can go read Machiavelli, you know, from the 1400s, and you'll see how already then, they were busy manipulating people with propaganda and stuff. Propaganda is not new at all.
- LFLex Fridman
(laughs)
- MTMax Tegmark
And the incentives to manipulate people are just not new at all. What is it that's new? What's new is machine learning meets propaganda. That's what's new. That's why this has gotten so much worse. You know, some people like to blame certain individuals. Like, in my liberal university bubble, many people blame Donald Trump and say it was his fault. I see it differently. I think that what ha- ... Donald Trump just had this extreme skill at playing this game in the machine learning algorithm age, a game he couldn't have played, you know, 10 years ago. So what's changed? What's changed is, well, Facebook and Google and other companies, and I- I don't want ... I'm not bad-mouthing them. I have a lot of friends who work for these companies. They're good people. They deployed machine learning algorithms just to increase their profit a little bit, to just maximize the time people spent watching ads. And they had totally underestimated how effective they were gonna be. This was, again, the black box, non-intelligible intelligence.
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
They just noticed, "Oh, we're getting more ad revenue. Great." It took a long time until they even realized why and how, and how damaging this was for society. 'Cause of course, what the machine learning figured out was that the by far most effective way of gluing you to your little rectangle was to show you things that triggered strong emotions-
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
... anger, et cetera, resentment. And, uh, if it was true or not didn't really matter. It was also easier to find stories that weren't true if you weren't limited. That's just a limitation-
- LFLex Fridman
Right. (laughs)
- MTMax Tegmark
... to show people.
- LFLex Fridman
That's a- that's a very limiting factor, yes.
- MTMax Tegmark
And before long, we've got these amazing filter bubbles on a scale we had never seen before.
- LFLex Fridman
Yeah.
- MTMax Tegmark
Couple this to the fact that also the online news media were so effective that they killed a lot of print journalism. There are less than half as many journalists now in America, I believe, as there were, you know, a generation ago. You just couldn't compete with the online advertising. So all of a sudden, most people are not even reading newspapers. They get their news from social media. And most people only get news (laughs) in their little bubble. And so along comes now someone like Donald Trump, who figured out ... among the first successful politicians to figure out how to really play this new game and become very, very influential. But I think that w- ... Donald Trump was a symp- ... Well, he, he, he took advantage of it. He didn't create ... The, the fundamental conditions were created by machine learning sort of taking over the news media. So this is what motivated my little COVID project here. So, you know, I said before, machine learning and tech in general is not evil, but it's also not good. It's just a tool-
- LFLex Fridman
Mm-hmm.
- 1:15:05 – 1:30:28
Autonomous weapons
- MTMax Tegmark
Yeah, even look at, look at just military AI now, right? In 2020, um, it was so awesome to see these dancing robots. I loved it, right?
- LFLex Fridman
(laughs)
- MTMax Tegmark
Uh, but one of the biggest growth areas in robotics now is of course autonomous weapons, right? And, and 2020 was like the best marketing, uh, year ever for autonomous weapons, because in both the Libyan Civil War and in Nagorno-Karabakh, they made the decisive difference, right? And everybody else is like watching this, "Oh yeah, we wanna build autonomous weapons too." In, in Libya, you had on one hand our ally, the United Arab Emirates, that were flying their autonomous weapons that they bought from China, bombing Libyans, and on the other side you had our other ally, Turkey, flying their drones. And they, they had no skin in the game, any of these other countries, and of course it was the Libyans who really got screwed. In Nagorno-Karabakh you had actually... Again, so now Turkey is sending drones built by this company that was actually founded by a guy who went to MIT AeroAstro. Do you know that?
- LFLex Fridman
No.
- MTMax Tegmark
Bayraktar? Yeah?
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
So MIT has a direct responsibility for, ultimately, this, and a lot of civilians were killed there, you know? And s- so because it was militarily so effective, now, now suddenly there's like a huge push, "Oh yeah, yeah, let's go build ever more autonomy into these li- these weapons and it's gonna be great." A- a- and, uh... I think, actually, people who are obsessed about some sort of future Terminator scenario right now are... Should start focusing on the fact that we have two much more urgent threats happening from machine learning. One of them is the whole destruction of democracy that we've talked about now-
- LFLex Fridman
Yes.
- MTMax Tegmark
... where, where, where our flow of information is being manipulated by machine learning, and the other one is that right now, you know, this is the year when the big arms race in... Out of control arms race in at least autonomous weapons is gonna start or it's gonna stop.
- LFLex Fridman
So you have a sense that there is, uh... Like 2020 was an instrumental catalyst for the race of... For the autonomous weapons race?
- MTMax Tegmark
Yeah, 'cause it was the first year when, when they proved decisive in the battlefield.
- LFLex Fridman
Oh, man.
- MTMax Tegmark
And, and these ones are still not fully autonomous, mostly they're remote controlled, right? But, you know, we could very quickly make things about, you know, the size and cost of a smartphone, which you just put in the GPS coordinates or the face of the one you want to kill, a skin color or whatever, and it flies away and, you know, does it, and... The, the real good reason why the US and all the other superpowers should put the kibosh on this is the same reason we decided to put the kibosh on bioweapons. So, you know, we gave the Future of Life Award, that we can talk more about later-
- LFLex Fridman
Yes.
- MTMax Tegmark
... to Matthew Meselson from Harvard, for convincing Nixon to ban bioweapons, and I asked him, "How did you do it?" (laughs) And he was like, "Well, I just said, 'Look, we don't want there to be a $500 weapon of mass destruction that all our enemies can afford, even non-state actors.'" And Nixon was like, "Good point." (laughs) Y- you know, it's in America's interest that the powerful weapons are all really expensive, so only we can afford them, or maybe some more stable adversaries, right? Uh, nuclear weapons are like that. But bioweapons were not like that. That's why we banned them, and that's why you never hear about them now. That's why we love biology.
- LFLex Fridman
So y- you have a sense that it's possible-
- MTMax Tegmark
Oh, yeah.
- LFLex Fridman
... for, for the big powerhouses, in terms of the, the big nations in the world to agree that autonomous weapons is not a race we wanna be on, that it doesn't end well?
- MTMax Tegmark
... yeah, because we, we know it's just gonna end in mass proliferation, and every terrorist everywhere is gonna have these super cheap weapons that they will use against us. And our politicians would have to constantly worry about being assassinated every time they go outdoors by some anonymous little mini drone, you know? We don't want that. And even if the US and China and everyone else could just agree that you can only build these weapons if they cost at least 10 million bucks-
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
... that would be a huge win for the superpowers and, frankly, for everybody. And people often push back and say, "Well, it's so hard to prevent cheating."
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
But hey, you could say the same about bioweapons, you know? Take any of our MIT colleagues in biology. Of course they could build some nasty bioweapon if they really wanted to. But first of all, they don't want to, 'cause they think it's disgusting, 'cause of the stigma. And second, even if one of them were some sort of nutcase and wanted to, it's very likely that some of their grad students or someone would rat them out because everyone else-
- LFLex Fridman
Yes.
- MTMax Tegmark
... thinks it's so disgusting, right?
- LFLex Fridman
Yes. Yeah.
- MTMax Tegmark
And in fact, we now know there was even a fair bit of cheating on the bioweapons ban.
- LFLex Fridman
Mm-hmm.
- MTMax Tegmark
But no countries used them, because it was so stigmatized that it just wasn't worth revealing that they had cheated.
Episode duration: 3:02:43
Transcript of episode RL4j4KPwNGM