Lex Fridman Podcast

Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9

Lex Fridman and Stuart Russell on Controlling Superhuman AI and Humanity’s Future Choices.

Lex Fridman (host) · Stuart Russell (guest)
Dec 9, 2018 · 1h 26m · Watch on YouTube ↗

TRANSCRIPT

  1. 0:00–15:00

    1. LF

      The following is a conversation with Stuart Russell. He's a professor of computer science at UC Berkeley and a co-author of a book that introduced me and millions of other people to the amazing world of AI, called Artificial Intelligence: A Modern Approach. So, it was an honor for me to have this conversation as part of the MIT course on Artificial General Intelligence and The Artificial Intelligence podcast. If you enjoy it, please subscribe on YouTube, iTunes, or your podcast provider of choice, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D. And now, here's my conversation with Stuart Russell. So you've mentioned that in 1975, in high school, you created one of your first AI programs, one that played chess.

    2. SR

      Yeah.

    3. LF

      Were you ever able to build a program that beat you at chess or another board game?

    4. SR

      Uh, so my program never beat me at chess. I actually wrote the program at Imperial College, so I used to take the bus every Wednesday with a box of cards this big, uh, and shove them into the card reader, and they gave us eight seconds of CPU time. It took about five seconds to read the cards in and compile the code, so we had three seconds of CPU time, which was enough to make one move, you know, with a not very deep search, and then we would print that move out, and then we'd have to go to the back of the queue and wait to feed the cards in again.

    5. LF

      How deep was the search?

    6. SR

      (laughs)

    7. LF

      What, are we talking about one move, two moves, three moves?

    8. SR

      Uh, so, no, I think we got, uh, we got an eight-move, uh, eight, you know, depth eight, um, with alpha-beta, and we had some tricks of our own about, um, move ordering and some pruning of the tree, and...

    9. LF

      But you were still able to beat that program?

    10. SR

      Yeah, yeah. I, I was a reasonable chess player in my youth.

    11. LF

      (laughs)

    12. SR

      I did an Othello program, uh, and a backgammon program. So when I got to Berkeley, I worked a lot on what we call meta-reasoning, which really means reasoning about reasoning. In, in the case of a game-playing program, you need to reason about what parts of the search tree you're actually going to explore, because the search tree is enormous, uh, you know, bigger than the number of atoms in the universe, and, and, uh, the way programs succeed and the way humans succeed is by only looking at a small fraction of the search tree. And if you look at the right fraction, you play really well. If you look at the wrong fraction, if you waste your time thinking about things that are never gonna happen, the moves that no one's ever gonna make, then you're gonna lose 'cause you, you won't be able to figure out the right decision. So that question of how machines can manage their own computation, how they, how they decide what to think about is the meta-reasoning question. We developed some methods for doing that, and very simply, a machine should think about whatever thoughts are going to improve its decision quality. We were able to show that both for Othello, which is a standard two-player game, and, uh, for backgammon, which includes, uh, dice rolls, so it's a two-player game with uncertainty. For both of those cases, we could come up with algorithms that were actually much more efficient than the standard alpha-beta search, uh, which chess programs at the time were using, and that, those programs could beat me. And I think you can see the same basic ideas in AlphaGo and AlphaZero today. The way they explore the tree is using a form of meta-reasoning to select what to think about based on how useful it is to think about it.
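
      A minimal sketch of the alpha-beta search with move ordering described here, in Python. The hooks `legal_moves(state)`, `apply(state, move)`, and `evaluate(state)` are hypothetical game-specific callables the reader would supply; nothing below is taken from Russell's actual programs.

      ```python
      def alphabeta(state, depth, alpha=float("-inf"), beta=float("inf"),
                    maximizing=True):
          """Minimax value of `state`, skipping branches that cannot
          change the final decision (alpha-beta pruning)."""
          moves = legal_moves(state)
          if depth == 0 or not moves:
              return evaluate(state)

          # Move ordering: examine promising moves first so the alpha/beta
          # bounds tighten early; this is where most of the pruning comes from.
          moves = sorted(moves, key=lambda m: evaluate(apply(state, m)),
                         reverse=maximizing)

          if maximizing:
              value = float("-inf")
              for move in moves:
                  value = max(value, alphabeta(apply(state, move), depth - 1,
                                               alpha, beta, False))
                  alpha = max(alpha, value)
                  if alpha >= beta:  # opponent would never allow this line:
                      break          # further thought cannot change the decision
              return value
          else:
              value = float("inf")
              for move in moves:
                  value = min(value, alphabeta(apply(state, move), depth - 1,
                                               alpha, beta, True))
                  beta = min(beta, value)
                  if alpha >= beta:
                      break
              return value
      ```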

    13. LF

      Is there any insights you can describe without Greek symbols of how do we select which paths to go down?

    14. SR

      There, there's really two kinds of learning going on. So, uh, uh, as you say, AlphaGo learns to evaluate board positions. So it can, it can look at a Go board, and it actually has probably a superhuman ability to instantly tell how promising that situation is.

    15. LF

      Mm-hmm.

    16. SR

      To me, the amazing thing about AlphaGo is not that it can beat the world champion with its hands tied behind its back, but, uh, the fact that if you stop it from searching altogether, so you say, "Okay, you're not allowed to do any thinking ahead, right? You can just consider each of your legal moves, and then look at the resulting situation and evaluate it." So what we call a depth-one search.

    17. LF

      Mm-hmm.

    18. SR

      So just the immediate outcome of your moves and decide if that's good or bad. That version of AlphaGo can still play at a professional level, right? And even professionals are sitting there for five, 10 minutes deciding what to do, and AlphaGo, in less than a second, can instantly intuit what is the right move to make based on its ability to evaluate positions. Um, and that is remarkable, um, because, you know, we don't have that level of intuition about Go. We actually have to think about the situation. So anyway, that capability that, um, AlphaGo has is one big part of why it beats humans.
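
      Once you have a strong evaluator, the "depth-one search" Russell describes is almost a one-liner. A sketch, again with hypothetical `legal_moves`, `apply`, and `value_net` callables (AlphaGo's real evaluator is a deep network trained by self-play):

      ```python
      def depth_one_move(state, legal_moves, apply, value_net):
          """Pick the move whose immediate successor position the learned
          evaluator rates highest -- no lookahead beyond one ply."""
          return max(legal_moves(state),
                     key=lambda move: value_net(apply(state, move)))
      ```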

    19. LF

      Right.

    20. SR

      The other big part is that it's able to look ahead 40, 50, 60 moves into the future.

    21. LF

      Mm-hmm.

    22. SR

      And, you know, if it was considering all possibilities 40 or 50 or 60 moves into the future, that would be, you know, 10 to the 200 possibilities, so way, way more than, you know, atoms in the universe.

    23. LF

      Mm-hmm.

    24. SR

      And, and so on. So it's very, very selective about what it looks at. So, let me try to give you an intuition about how you decide what to think about. It's a combination of two things. One is, um, how promising it is.

    25. LF

      Mm-hmm.

    26. SR

      Right? So, if you're already convinced that a move is terrible, there's no point spending a lot more time convincing yourself that it's terrible.

    27. LF

      Mm-hmm.

    28. SR

      Uh, because it's probably not gonna change your mind. So, the- the real reason you think is because there's some possibility of changing your mind about what to do.

    29. LF

      Mm-hmm.

    30. SR

      Right? And it's that changing of mind that would result then in- in a better final action in the real world. So, that's the purpose of thinking, is to improve the final action in the real world. And so, if you think about a move that is guaranteed to be terrible, you can convince yourself it's terrible, you're still not gonna change your mind.
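
      The rule Russell is gesturing at, spend computation where a move is promising and where more thought might still change your mind, shows up concretely in the PUCT selection formula used by AlphaGo-family tree search. A sketch with an illustrative constant:

      ```python
      import math

      def puct_score(q, prior, n_move, n_parent, c_puct=1.5):
          """Priority of a candidate move for further search.
          q        -- average value of simulations through this move (promise)
          prior    -- policy network's prior probability for the move
          n_move   -- times this move has already been explored
          n_parent -- total explorations of the parent position
          """
          # The exploration term shrinks as a move is visited more: once you
          # are convinced a move is terrible, extra thought won't change your
          # mind, so computation flows elsewhere.
          exploration = c_puct * prior * math.sqrt(n_parent) / (1 + n_move)
          return q + exploration
      ```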

  2. 15:00–30:00

    1. SR

      plan ahead on the timescales involving billions or trillions of steps.

    2. LF

      Mm-hmm.

    3. SR

      Um, now we don't plan those in detail, but, you know, when you choose to do a PhD at Berkeley, that's a five-year commitment and that amounts to about a trillion motor control steps that you will eventually, uh, be committed to.

    4. LF

      Including going up the stairs, opening doors, drinking water-

    5. SR

      Type, yeah, I mean every-

    6. LF

      ... typing. (laughs)

    7. SR

      ... every, every finger movement while you're typing, every character of every paper and, you know-

    8. LF

      Yeah.

    9. SR

      ... thesis and everything else. So you're not committing in advance to the specific motor control steps, but you're still reasoning on a timescale that will eventually reduce to, uh, trillions of motor control actions. And, uh, so for all of these reasons, you know, AlphaGo and, and Deep Blue and so on don't represent any kind of threat to humanity, but they are a step towards it, right?
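
      A back-of-envelope check of the "trillion motor control steps" figure; the per-second command rate below is an assumed number for illustration only:

      ```python
      # Five years of continuous behavior, at an assumed rate of roughly
      # 600 muscles receiving ~10 control updates per second.
      seconds_in_five_years = 5 * 365 * 24 * 3600      # ~1.6e8 seconds
      motor_commands_per_second = 600 * 10             # assumption
      total_steps = seconds_in_five_years * motor_commands_per_second
      print(f"{total_steps:.1e}")                      # ~9.5e11, about a trillion
      ```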

    10. LF

      Yes.

    11. SR

      That... And progress in AI occurs by essentially removing one by one-

    12. LF

      Mm-hmm.

    13. SR

      ... these assumptions that make problems easy, like the assumption of complete observability of the situation, right? If we remove that assumption, you need a much more complicated kind of computing design and you n- you need something that actually keeps track of all the things you can't see and tries to estimate what's going on, uh, and there's inevitable uncertainty, uh, in that. So it becomes a much more complicated problem. But, you know, we are removing those assumptions. We are starting to have algorithms that can cope with much longer timescales, that can cope with uncertainty, that can cope with partial observability. And so each of those steps sort of magnifies by 1,000 the range of things that we can do with AI systems.
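
      Dropping the complete-observability assumption means maintaining a belief over hidden states instead of a single known state. A minimal discrete Bayes filter sketch; the `transition` and `sensor` models are toy callables standing in for real dynamics:

      ```python
      def bayes_filter(belief, action, observation, transition, sensor):
          """One step of belief tracking over hidden states.
          belief     -- dict mapping state -> probability
          transition -- callable: P(next_state | state, action)
          sensor     -- callable: P(observation | state)
          """
          states = list(belief)
          # Predict: push the belief through the action's dynamics.
          predicted = {s2: sum(belief[s1] * transition(s2, s1, action)
                               for s1 in states)
                       for s2 in states}
          # Correct: reweight by how well each state explains what was seen.
          unnorm = {s: sensor(observation, s) * p for s, p in predicted.items()}
          total = sum(unnorm.values())
          return {s: p / total for s, p in unnorm.items()}
      ```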

    14. LF

      So the way I started in AI, I wanted to be a psychiatrist for a long time. I wanted to understand the mind in high school, and of course program and so on. And then I showed up, uh, at the University of Illinois to an AI lab and they said, "Okay, I don't have time for you, but here's a book, AI: A Modern Approach." I think it was the first edition at the time.

    15. SR

      Mm-hmm.

    16. LF

      (laughs) And, "Here, go, go, go learn this." And I remember the lay of the land was, well, it's incredible that we solved chess, but we'll never solve Go. I mean, it was pretty certain that Go in the way we thought about systems that reason was impossible to solve, and now we've solved it, so it's a very-

    17. SR

      Well, I think I, I would have said that it's unlikely we could take the kind of algorithm that was used for chess and just get it to scale up, uh, and work well for Go. And at the time what we thought was that in order to solve Go, we would have to do something similar to the way humans manage the complexity of Go, which is to break it down into kind of sub-games. So-

    18. LF

      Mm-hmm.

    19. SR

      ... when a human thinks about a Go board, they think about different parts of the board as sort of weakly connected to each other, and they think about, "Okay, within this part of the board, here's how things could go. In that part of the board, here's how things could go."

    20. LF

      Mm-hmm.

    21. SR

      And then you try to sort of couple those two analyses together... and deal with the interactions and maybe revise your views of how things are gonna go in each part, and then you've got maybe five, six, seven, 10 parts of the board.

    22. LF

      Mm-hmm.

    23. SR

      And, um, that actually resembles the real world much more than chess does-

    24. LF

      Mm-hmm.

    25. SR

      ... because in the real world, you know, we have work, we have home life, we have sport, you know, whatever... different kinds of activities, you know, shopping. These all are connected to each other but they're weakly connected. So, when I'm typing a paper, you know, I don't simultaneously have to decide which order I'm gonna get the, you know, the milk and the butter. You know, that doesn't affect the typing. But I do need to realize, "Okay, I better finish this before the shops close because I don't have anything... I don't have any food at home."

    26. LF

      Right.

    27. SR

      Right? So there's some weak connection, but not in the way that chess works where everything is tied into a single stream of thought. So the, the thought was that Go... to, to solve Go, we'd have to make progress on stuff that would be useful for the real world. And in a way, AlphaGo is a little bit disappointing-

    28. LF

      (laughs) Right.

    29. SR

      ... because the p- the program design for AlphaGo is actually not that different from, from Deep Blue or f- or even from Arthur Samuel's checker playing program from the 1950s. And in fact, the... so the two things that make AlphaGo work is one, one is its amazing ability, ability to evaluate the positions, and the other is the meta-reasoning capability which, which allows it to, to explore some paths in the tree very deeply and to abandon other paths very quickly.

    30. LF

      So th- this word meta-reasoning, uh, while technically correct, inspires perhaps the, the wrong degree of power that AlphaGo has. For example, the word reasoning is a, is a powerful word. Let me ask you sort of... So you were part of the symbolic AI world for a while, like, where AI was, uh-

  3. 30:00–45:00

    1. SR

      multiple deaths caused by poorly designed, uh, machine learning algorithms that don't really understand what they're doing.

    2. LF

      Yeah, on- on several levels. I think, uh, so on the perception side, uh, there's mistakes being made by those algorithms where the perception is very shallow. On the planning side, the lookahead like you said, and the thing that we come- come up against that's really interesting when you try to deploy systems in the real world is... you can't think of an artificial intelligence system as a thing that responds to the world always. You have to realize that it's an agent that others will respond to as well. So, in order to drive successfully, you can't just try to do obstacle avoidance.

    3. SR

      Right.

    4. LF

      You-

    5. SR

      You can't pretend that you're invisible. (laughs) Right?

    6. LF

      (laughs)

    7. SR

      You're the invisible car.

    8. LF

      Right. So-

    9. SR

      Uh, it doesn't work that way.

    10. LF

      I mean, but y- you have to assert... y- others have to be scared of you. Just, we're all s- there's this tension. There's this game... So, w- we do a lot of work with pedestrians.

    11. SR

      Mm-hmm.

    12. LF

      If you approach pedestrians as purely an obstacle avoidance, so you're, you're doing look ahead as in modeling the intent, then you're, you're- they're not going to... they're going to take advantage of you. They're not going to respect you at all. There has to be a tension, a fear, a some amount of uncertainty. That's how we have creat- we keep on-

    13. SR

      Yeah. Or, or, or at least just a kind of a, a resoluteness. (laughs)

    14. LF

      Right. Yes, a cer-

    15. SR

      Let's put it that way.

    16. LF

      ...tainty.

    17. SR

      You ha- you have to display a certain amount of resoluteness. You can't, you can't be too tentative.

    18. LF

      Yeah.

    19. SR

      And, uh, yeah. So the... right. The, uh, the solutions then become pretty complicated, right?

    20. LF

      Right.

    21. SR

      You get into game theoretic-

    22. LF

      Yes.

    23. SR

      ...analyses and... So w- you know, at Berkeley now we're working a lot on this kind of interaction between machines and humans. Uh-

    24. LF

      And that's exciting. Yep.

    25. SR

      And, uh, so my colleague, uh, Anca Dragan, actually... You know, if you, if you formulate the problem game theoretically, and you just let the system figure out the solution-

    26. LF

      Mm-hmm.

    27. SR

      ... you know, it does interesting unexpected things. Like sometimes at a stop sign, if no one is going first, right, the car will actually back up a little.

    28. LF

      (laughs)

    29. SR

      Right?

    30. LF

      Interesting.
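
      A toy version of the interaction being discussed: treat the stop-sign encounter as a two-player game and enumerate its pure equilibria. The payoff numbers are invented for illustration; the actual research systems solve much richer dynamic games:

      ```python
      import itertools

      ACTIONS = ("go", "yield")
      # PAYOFF[(car, other)] = (car_utility, other_utility), toy values.
      PAYOFF = {
          ("go", "go"):       (-10, -10),  # collision / standoff: worst case
          ("go", "yield"):    (2, 0),
          ("yield", "go"):    (0, 2),
          ("yield", "yield"): (-1, -1),    # the awkward "everyone waits" deadlock
      }

      def pure_nash_equilibria():
          """Action pairs where neither agent gains by unilaterally deviating."""
          eqs = []
          for a, b in itertools.product(ACTIONS, repeat=2):
              car_ok = all(PAYOFF[(a, b)][0] >= PAYOFF[(a2, b)][0] for a2 in ACTIONS)
              oth_ok = all(PAYOFF[(a, b)][1] >= PAYOFF[(a, b2)][1] for b2 in ACTIONS)
              if car_ok and oth_ok:
                  eqs.append((a, b))
          return eqs

      print(pure_nash_equilibria())  # [('go', 'yield'), ('yield', 'go')]
      ```

      The two equilibria are exactly the "who goes first" coordination problem; behaviors like backing up at a stop sign emerge as signals for selecting between them.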

  4. 45:00–1:00:00

    2. LF

      (laughs)

    3. SR

      Uh, and you get a more complicated problem because, because now the interaction with the human becomes part of the problem. Because the human by making choices is giving you more information about the true objective. And that information helps you achieve the objective better. And so, that really means that you're mostly dealing with game theoretic problems, where you've got the machine and the human and they're coupled together, uh, rather than a machine going off by itself with a fixed objective.
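
      The mechanism described here, human choices leaking information about the hidden objective, can be sketched as a Bayesian update under a noisily rational ("Boltzmann") choice model. Everything below, including the softmax model and the `beta` rationality parameter, is an illustrative assumption:

      ```python
      import math

      def update_objective_belief(belief, chosen, options, utility, beta=2.0):
          """belief  -- dict mapping candidate objective -> prior probability
          chosen  -- the option the human actually picked
          utility -- callable: utility(option, objective)
          beta    -- rationality; higher means the human more reliably
                     picks the option that is best under the true objective
          """
          def likelihood(obj):
              # P(human picks `chosen` | objective) under a softmax model.
              z = sum(math.exp(beta * utility(o, obj)) for o in options)
              return math.exp(beta * utility(chosen, obj)) / z

          unnorm = {obj: p * likelihood(obj) for obj, p in belief.items()}
          total = sum(unnorm.values())
          return {obj: p / total for obj, p in unnorm.items()}
      ```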

    4. LF

      Which is fascinating on the machine and the human level, that we... When you don't have an, an objective means you're together coming up with an objective. I mean, uh, there's a lot of philosophy that, you know, you could argue that life doesn't really have meaning. We, we together agree on what gives it meaning, and we kind of culturally create, uh, things that give why the heck we are on this Earth anyway. Uh, we together-

    5. SR

      Yeah.

    6. LF

      ... as a society create that meaning, and you have to learn that objective. And one of the biggest... I, I thought that's where you were gonna go for a second. Uh, one of the biggest troubles we run into outside of statistics and machine learning and AI, in just human civilization, is when, uh, you look at, uh... I came from the S- I was born in the Soviet Union. And the history of the 20th century, uh, we ran into the most trouble, us humans, when there was a, a, a h- a certainty about the objective. And you do whatever it takes to achieve that objective, whether you're talking about Germany or communist Russia.

    7. SR

      Mm-hmm.

    8. LF

      Uh, all...

    9. SR

      Yeah, I, and, and, and, and-

    10. LF

      You, you get into trouble with humans' faith-

    11. SR

      ... as I would say with, uh, you know, corporations. In fact, some people argue that, you know, we don't have to look forward to a time when AI systems take over the world. They already have, and they're called corporations. Right? The corporations happen to be, uh, using people as components right now.

    12. LF

      Mm-hmm.

    13. SR

      Um, but they are effectively algorithmic machines, and they're optimizing an objective, which is quarterly profit, that isn't aligned with overall wellbeing of the human race. And they are destroying the world. I mean-

    14. LF

      Right.

    15. SR

      ... they're, they are primarily responsible for our inability to tackle climate change.

    16. LF

      Right.

    17. SR

      So, I think that's one way of thinking about what's going on with, uh, with corporations. But, uh, I think the point you're making i- is valid, that there, there are many systems in the real world where we've sort of prematurely fixed on the objective and then decoupled the, uh, the machine from those that it's supposed to be serving. Um, and I think you see this with government. Right? Government is supposed to be a machine that serves people, but instead, it tends to be taken over by people who have their own objective-

    18. LF

      Mm-hmm.

    19. SR

      ... uh, and use government to optimize that objective regardless of what people want.

    20. LF

      Do, do you have... Do you find appealing the idea of almost, uh, arguing machines where you have multiple AI systems with a clear fixed objective? We have in government the red team and the blue team-

    21. SR

      Mm-hmm.

    22. LF

      ... that are very fixed on their objectives and they argue and it kinda... Uh, many would disagree, but it kinda seems to make it work somewhat? That (laughs), uh, the, the, the duality of it... Uh, I, th- a- I know a lot of ...

    23. SR

      Yeah. (laughs)

    24. LF

      ... we'll disagree. Okay, let's go 100 years back when there was... still was going on-

    25. SR

      Mm-hmm.

    26. LF

      ... or at the founding of this country. There was disagreements, and that disagreement is where, uh... So it was a balance between certainty and forced humility because the power was distributed.

    27. SR

      Yeah, I think that the, um, the- the nature of debate and- and disagreement argument takes, uh, as a premise, the idea that you could be wrong, right? Which means that y- y- you're not necessarily absolutely convinced that your objective is- is the correct one. Right? Um, if you were absolutely convinced, there'd be no point in having any discussion or argument because you would never change your mind-

    28. LF

      Mm-hmm.

    29. SR

      ... um, and there wouldn't be any- any sort of synthesis or- or anything like that. So- so I think you can think of argumentation as a- as an implementation-

    30. LF

      Mm-hmm.

  5. 1:00:00–1:15:00

    1. SR

      you know, we're just at the beginning, but already it's possible to make a movie of anybody saying anything-

    2. LF

      Yeah.

    3. SR

      ... uh, in ways that are pretty hard to detect. Yeah.

    4. LF

      Including yourself because you're on camera now and your voice is coming through with high resolution.

    5. SR

      Yeah, so- so you could-

    6. LF

      (laughs)

    7. SR

      ... take what I'm saying and replace it with pretty much anything else you wanted me to be saying.

    8. LF

      Yeah.

    9. SR

      Uh, and- and even it would change my lips and-

    10. LF

      Yeah.

    11. SR

      ... ex- facial expressions to fit. And, uh, there's actually not much in the way of, uh, real legal protection against that. I think in the commercial area you could say, "Yeah, that's, uh, you're using my brand," and so on. That there are, there are rules about that. But in the political sphere, I think it's, uh, at- at the moment, it's, you know, anything goes. So-

    12. LF

      Let me-

    13. SR

      ... that- that could be really, really damaging.

    14. LF

      And let me just, uh, try to make not an argument, but try to look back at history... and say something dark. In essence, while regulation, oversight, seems to be exactly the right thing to do here, it seems that human beings, what they naturally do is they wait for something to go wrong. If you're talking about nuclear weapons, you can't talk about nuclear weapons being dangerous until somebody actually drops the bomb, like the United States did, or until Chernobyl melts down. Do you think we will have to wait f- for things going wrong in a way that's obviously damaging to society? Not an existential risk, but obviously damaging, or do you have faith that...

    15. SR

      I, I hope not, but I mean it, it... I, I think we do have to look at history and when... So the two examples you gave, nuclear weapons and nuclear power, are very, very interesting because, you know, nuclear weapons, we knew in the early years of the 20th century that atoms contained a huge amount of energy, right? We had E=mc², we knew the, the mass differences between the different atoms and their components, and we knew that you might be able to make an incredibly powerful explosive. So H.G. Wells wrote a science fiction book, I think, in 1912.

    16. LF

      Mm-hmm.

    17. SR

      Frederick Soddy, who was the guy who discovered isotopes, he's a Nobel Prize winner, he gave a speech in 1915 saying that, you know, one pound of this new explosive would be the equivalent of 150 tons of dynamite, which turns out to be about right. And, uh, you know, c- this was in World War I, right? So he was imagining how much worse the World, uh, War would be, uh, if we were using that kind of explosive. But the physics establishment simply refused to believe that these things could be made.

    18. LF

      Including the people who were making it. (laughs)

    19. SR

      Well, so they were doing the nuclear physics...

    20. LF

      I mean, eventually they were the ones who made it-

    21. SR

      And, so-

    22. LF

      ... you talk about Fermi or whoever.

    23. SR

      Well, so up to, um... The, the development, uh, was, was mostly theoretical, so it was people using sort of primitive kinds of particle acceleration and, and doing experiments, uh, at the, at the level of single particles or collections of particles. They, they, they weren't yet thinking about how to actually make a bomb or anything like that, but they knew the energy was there and they figured if they understood it better, uh, it might be possible. But the physics establishment, their view, and I think because they did not want it to be true, their view was that it could not be true, uh, that this could not provide a way to make a super weapon. And, um, you know, there was this famous speech, um, ru- given by Rutherford, who was the sort of leader of nuclear physics and, um, it was on September 11th, 1933 and he, he said, you know, "Anyone who talks about the possibility of obtaining energy from transformation of atoms is talking complete moonshine." And, uh, the next, uh, the next morning Leo Szilard read about that speech and then invented the nuclear chain reaction. And so as soon as he invented... As soon as he had that idea that you could make a chain reaction with neutrons, because neutrons were not repelled by the nucleus so they could enter the nucleus and then continue the reaction, as soon as he has that idea, he instantly realized that the world was in deep doodoo.

    24. LF

      (laughs)

    25. SR

      Uh, because this is 1933, right? You know, Hitler had recently come to power in Germany, Szilard was in London, uh, and eventually became a refugee and, uh, and came to the US. And the, um... In the process of, of having the idea about the chain reaction, he figured out basically how to make a bomb and also how to make a reactor, and he patented the reactor in 1934, but because of the s- situation, the great power conflict situation that he could see happening, um, he kept that a secret. And so, um, between then and the beginning of World War II, people were working, including the Germans, on how to actually create neutron sources, right? What specific fission reactions would produce neutrons of the right energy to continue the reaction, and that was demonstrated in Germany, I think, in 1938, if I remember correctly. The first, uh, nuclear weapon patent was 1939 by the French, um, so this was actually-

    26. LF

      Interesting, isn't it?

    27. SR

      ... uh, you know, this was actually going on, you know, well before World War II really got going. And then, you know, the British probably had the most advanced capability in this area, but for safety reasons among others and plus just sort of just resources, they moved the program from Britain to the US, and then that became the Manhattan Project. Uh, so the, the, the reason why we couldn't have any kind of oversight of nuclear weapons and nuclear technology was because we were basically already in, uh, an arms race and a war, and um...

    28. LF

      But you, you mentioned then in the 20s and 30s, so what are the echoes... (laughs) The way you've described this story, I mean, there's clearly echoes. Why do you think most AI researchers... folks who are really close to the metal... really are not concerned about AI? They don't think about it, uh, whether it's they don't want to think about it, it's... But what are the... Yeah, why do you think that is? What are the echoes of the nuclear situation to the current AI situation and, uh, what can we do about it?

    29. SR

      Mm-hmm. I, I think there is a, you know, a kind of motivated cognition, which is a... a term in psychology that means you believe what you would like to be true, uh, rather than what is true. And, uh, you know, it's- it's unsettling to think that what you're working on might be the end of the human race, obviously. So you would rather instantly deny it and come up with some reason why it couldn't be true. And, uh, you know, I have... I collected a, a long list of reasons that extremely intelligent, competent AI scientists have come up with for why we shouldn't worry about this. You know, for example, calculators are superhuman at arithmetic and they haven't taken over the world, so there's nothing to worry about.

    30. LF

      (laughs)

  6. 1:15:00–1:26:05

    1. SR

      systems that are pursuing incorrect objectives, and because the AI system believes it knows what the objective is, it has no incentive to listen to us anymore, so to speak, right? It, it's just carrying out the, the strategy that it, it has computed as being the optimal solution. And, uh, you know, it may be that, in the process, it needs to acquire more resources to increase the possibility of success or, you know, prevent various failure modes by defending itself against interference. And so that collection of problems, I think, is something we can address.
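
      One well-known formalization of this point from Russell's group is the off-switch game (Hadfield-Menell et al., 2017): an agent uncertain about its objective gains by deferring to a human who can switch it off, while an agent certain of its objective gains nothing by listening. A toy Monte Carlo sketch, assuming a Gaussian belief over the action's true utility:

      ```python
      import random

      random.seed(0)
      # Agent's uncertainty about the true utility u of its planned action.
      samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

      # Acting regardless yields u; deferring yields u only when the human
      # approves (here, a rational human blocks exactly the cases with u < 0).
      act_blindly = sum(samples) / len(samples)                       # E[u] ~ 0
      defer       = sum(max(u, 0.0) for u in samples) / len(samples)  # E[max(u, 0)]
      print(f"act blindly: {act_blindly:+.3f}  defer: {defer:+.3f}")  # defer wins

      # If the agent were *certain* that u > 0, deferring would add nothing,
      # and disabling interference would look like part of the optimal plan.
      ```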

    2. LF

      Yes.

    3. SR

      Uh, the other problems, uh, are, roughly speaking, you know, misuse, right? So even if we solve the control problem, we make perfectly safe, controllable AI systems, well, why, you know, why is Dr. Evil going to use those, right? He wants to just take over the world, and he'll make unsafe AI systems that, that then get out of control. So that's one problem which is sort of a, you know, uh, partly a policing problem, partly a, a sort of a cultural problem for the profession of how we teach people, uh, what kinds of AI systems are safe.

    4. LF

      You talk about autonomous weapons systems and how pretty much everybody agrees that there's too many ways that that can go horribly wrong, and this great, uh, Slaughterbots movie that kinda illustrates that beautifully.

    5. SR

      (laughs)

    6. LF

      And it-

    7. SR

      Well, I want to talk a- that, that's another, there's another topic I, I'm happy to talk about. The... Just wanna mention the th- what I see as the third major failure mode, which is overuse. Not so much misuse, but overuse of AI, that we become overly dependent. So I call this the WALL-E problem. So if you've seen WALL-E-

    8. LF

      Yeah.

    9. SR

      ... the movie, all right, all the humans are on the spaceship, and the machines look after everything for them, and they just watch TV and drink Big Gulps. And, uh, they're all sort of obese and stupid and, and they've sort of totally lost any notion of human autonomy. And, um, you know, so i- i- in effect, right, this would happen like the slow-boiling frog, right? We would gradually turn over more and more of the management of our civilization to machines, as we are already doing.

    10. LF

      Mm-hmm.

    11. SR

      And this, you know, this, if this process continues, you know, we, we sort of gradually switch from sort of being the masters of technology to just being the guests. Right? So, so we become guests on a cruise ship, you know, which is fine for a week, but not f- not for the rest of eternity.

    12. LF

      Right.

    13. SR

      You know, and it's almost irreversible, right? Once you, once you lose the incentive to, for example, you know, learn to be an engineer or a doctor or a sanitation, uh, operative or, or any other of the, uh, uh, the infinitely many ways that we maintain and propagate our civilization. You know, if you, if you don't have the incentive to do any of that, you won't, and then it's really hard to recover.

    14. LF

      And of course, AI is just one of the technologies that could... that could result in that third failure mode. There's probably others... Technology in general detaches us from, um...

    15. SR

      It does a bit, but the, the, the-

    16. LF

      ...

    17. SR

      (...) independence. The difference is that in terms of the knowledge to, to run our civilization, you know-

    18. LF

      Ah.

    19. SR

      ...up to now, we've had no alternative but to put it into people's heads.

    20. LF

      Right.

    21. SR

      Right? And if you, if you-

    22. LF

      But with software, with Google, I mean, so software in general, so AI broadly defined-

    23. SR

      So, so compute- computers in general-

    24. LF

      Yeah.

    25. SR

      ...but, but the, you know, the knowledge o- of how, you know, how a sanitation system works, you know, that's an... The AI has to understand that. It's no good putting it into Google. So, I mean, we've, we've always put knowledge in- on paper, but paper doesn't run our civilization. It only runs when it goes from the paper into people's heads again. Right? So we've always propagated civilization-... through human minds.

    26. LF

      Mm-hmm.

    27. SR

      And we've spent about a trillion person years doing that.

    28. LF

      (laughs)

    29. SR

      I, I... Literally, right?

    30. LF

      Trillion.

Episode duration: 1:26:20

Transcript of episode KsZI5oXBC0k
