The Joe Rogan Experience

Joe Rogan Experience #1350 - Nick Bostrom

Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test.

Joe Rogan (host) · Nick Bostrom (guest)
Sep 12, 2019 · 2h 32m · Watch on YouTube ↗


  1. 0:00–15:00

    1. JR

      And here we go. All right, Nick. This is, uh, one of the things that scares people more than anything, is the idea that we're creating something, or someone's gonna create something, that's gonna be smarter than us, that's gonna replace us. Is that something we should really be concerned about?

    2. NB

      I presume you're referring to babies.

    3. JR

      (laughs)

    4. NB

      (laughs)

    5. JR

      I'm referring to artificial intelligence.

    6. NB

      Ah, yes.

    7. JR

      Ugh.

    8. NB

      Well, it's the, the big fear and the big hope, I think.

    9. JR

      Both?

    10. NB

      At the same time, yeah.

    11. JR

      How is it the big hope?

    12. NB

      Well, there are a lot of things wrong with the world as it is now.

    13. JR

      I'm trying to pull this up to your face, if you would.

    14. NB

      Um, all, all the problems we have, uh, most of them could be solved if we were smarter or if we had somebody on our side who are a lot smarter with better technology and so forth. Um, also, I think if we wanna imagine some really grand future where humanity or our descendants one day go out and colonize the universe, I think that's likely to happen, if it's gonna happen at all, after we have superintelligence that then develops the technology to make that possible.

    15. JR

      The real question is whether or not we would be able to harness this intelligence, or whether it would dominate.

    16. NB

      Yeah, that certainly is one question. Um, not the only. You could imagine that we harness it, but then use it for bad purposes as we have a lot of other technologies through history.

    17. JR

      Yeah.

    18. NB

      So I think there are really two challenges we need to meet. One, one is to make sure we can align it with human values, and then make sure that we, together, do something better with it than fighting wars or oppressing one another.

    19. JR

      I think... Well, what I'm worried about more than anything is that human beings are gonna become obsolete, that we're going to invent something that's the next stage of evolution. I'm, I'm really concerned with that. I'm really concerned with if we look back on ancient hominids, uh, Australopithecus, just think of some primitive ancestor of man, we don't wanna go back to that. Like, that, that's a terrible way to live. I'm worried that what we're creating is the next thing.

    20. NB

      I think we don't necessarily want, or at least I wouldn't be totally thrilled with, with a future where humanity as it is now was, was the last and final word, the pa- like, ultimate version beyond.

    21. JR

      Right.

    22. NB

      I, I think there's a lot of room for improvement.

    23. JR

      Sure.

    24. NB

      But not anything that is different is an improvement.

    25. JR

      Right.

    26. NB

      So, so the key would be, I think, to find some path forward where the best in us, uh, can continue to exist and develop, uh, to even greater levels. And maybe at the end of that path, it looks nothing like we do now. Maybe it's not two-legged, two-armed creatures running around with three pounds of thinking matter, right? It might be something quite different. But as long as it... what, what we value is, is present there, and ideally in a much higher degree than in the current world, then that could count as a success.

    27. JR

      Yeah, the idea that we're in a state of evolution, that we are... just like we look at ancient hominids, that we are eventually going to become something more advanced or at least more complicated than we are now. But what I'm worried is that biological life itself has so many limitations. When we look at the evolution of technology, if you look at Moore's law or if you just look at new cellphones, like, they just released a new iPhone yesterday and they talked about all these incremental increases in the ability to take photographs and wide-angle lenses and night mode and a new chip that works even faster, these things, uh... there's not... the, the word evolution's incorrect, but the innovation of technology is so much more rapid than anything we could ever even imagine, biologically. Like, if we had a thing that we'd create, if we created, um... instead of artificial intelligence in terms of, like, some- something in a chip or a computer, if we created a life form, a biological life form, but this biological life form was improving radically every year, like, it didn't even exist, like a... the iPhone existed in 2007, that's when it was invented. If we had something that was 12 years old, but all of a sudden was infinitely faster and better and smarter and wiser than it was 12 years ago and the newest version of it, version X1, we would, we would start going, "Whoa, whoa, whoa! P- hit the brakes on this thing, man." How, how many more generations before this thing's way smarter than us? How many more generations before this thing thinks that human beings are obsolete?
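
      A minimal sketch of the compounding being gestured at here, assuming the classic Moore's-law doubling period of roughly two years (illustrative figures, not numbers from the episode):

          # Toy Moore's-law arithmetic: capability doubling every ~2 years
          # means a 12-year-old design (iPhone, 2007 to 2019) is outpaced
          # by a factor of about 2**(12/2) = 64. Figures are assumptions.
          years = 12
          doubling_period = 2.0  # assumed doubling time in years
          factor = 2 ** (years / doubling_period)
          print(f"~{factor:.0f}x improvement over {years} years")  # ~64x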

    28. NB

      Yeah, it's coming, coming at us fast it feels like.

    29. JR

      Yes.

    30. NB

      I mean, uh... and, but some, some people think, "Oh, it's, uh, slowing down now." Um...

  2. 15:00–30:00

    2. JR

      One of the things that scares me the most is the idea that if we do create artificial intelligence, then it will improve upon our design and create far more sophisticated versions of itself, and that it'll continue to do that until it's unrecognizable, until it reaches literally a godlike potential.

    3. NB

      Mm-hmm.

    4. JR

      That su- I mean, I forget what the real numbers were, maybe you could tell us, but someone had calculated, some reputable source had calculated the amount of improvement that sentient artificial intelligence would be able to create inside of a small window of time. Like, if it was allowed to innovate and then make better versions of itself, and those better versions of itself were allowed to innovate and make better versions of itself, you're talking about not an exponential increase of intelligence, but an explosion.
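
      A toy numerical contrast between the two regimes named here: plain exponential growth versus a genuine finite-time "explosion," in which each gain also raises the rate of further gains. The growth laws and constants are arbitrary illustrations, not anyone's forecast:

          # Euler-integrate dI/dt = rate(I) and report when capability I diverges.
          # All constants are arbitrary illustrations.
          def simulate(rate, i0=1.0, dt=0.01, t_max=10.0, cap=1e12):
              i, t = i0, 0.0
              while t < t_max and i < cap:
                  i += rate(i) * dt
                  t += dt
              return t, i

          # Exponential: improvement rate proportional to current capability.
          t_exp, i_exp = simulate(lambda i: i)
          # "Explosion": each gain also makes further gains easier (rate ~ I**2);
          # the analytic solution I(t) = 1/(1 - t) blows up in finite time.
          t_boom, i_boom = simulate(lambda i: i * i)

          print(f"exponential:  I = {i_exp:.3g} at t = {t_exp:.1f}")  # ~2e4 at t = 10
          print(f"super-linear: I > 1e12 already by t = {t_boom:.2f}")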

    5. NB

      Well, well, we don't know. So it, it's hard to forecast the pace at which we will make advances in AI, b- because we just don't know how hard the problems are that we haven't yet solved.

    6. JR

      Right.

    7. NB

      And, you know, once you get to human level or a little bit above, I mean, who, who knows? It could be that there is some level where to get further you would need, like, to put in a lot of thinking time to kind of get there. Now, what is easier to, to estimate is if, if you just look at the speed, 'cause that's just a function of the hardware that you're running it on, right? So, so there we know that there is a lot of room in principle. If, if you look at the physics of computation and you look at what would an optimally arranged physical system be that was optimized for computation, that would be like many, many orders of magnitude above what, what we can do now. Um, and then you could have arbitrarily large systems like that. So, um, from, from that point of view, we, we know that there could be things that would be like a million times faster than the human brain and, and with a lot more memory and stuff like that.
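
      A back-of-envelope version of the speed gap Bostrom alludes to, using rough figures commonly quoted in this literature (assumed values, not measurements cited in the episode):

          # Biological neurons spike at ~200 Hz at most, while commodity
          # transistors switch at ~2 GHz: a raw timing gap of ~10 million x.
          # Both numbers are rough, commonly cited assumptions.
          neuron_peak_hz = 200.0
          transistor_hz = 2e9
          print(f"raw switching-speed gap: ~{transistor_hz / neuron_peak_hz:,.0f}x")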

    8. JR

      That... And then something... If it did have a million times more power than the human brain, it could create something with a million times more comput- (clears throat) computational power than itself.

    9. NB

      Well-

    10. JR

      It could make better versions. It could continue to innovate. Like, if we-

    11. NB

      Let me-

    12. JR

      ... create something that we, and we say, "You are..." I mean, "It is sentient. It is artificial intelligence. Now, please go innovate. Please go follow the same directive and improve upon your design."

    13. NB

      Yeah. Well, but we don't know how, how long that would take then-

    14. JR

      Right.

    15. NB

      ... to get to... So, I mean, we already have sort of millions of times more thinking capacity than a human has. I mean, we have millions of humans.

    16. JR

      Right.

    17. NB

      Um, so if you, if you kind of break it down, you think there's like one milestone when you have maybe an AI that could do what one human can do, but then that might still be quite a lot of orders of magnitude, uh, you know, until it would be equivalent of the whole human species. Um, and maybe during that time other things happen. Maybe we upgrade, you know, our, our own abilities in some way. So there, there are some scenarios where it's so hard to get even to one human baseline that, that we kind of use this massive amount of resources just to barely create kind of, you know, a village idiot.
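
      The "orders of magnitude" gap can be made concrete with one line of arithmetic, under the simplistic assumption that species-level thinking capacity scales linearly with head-count:

          import math

          humans = 7.7e9  # rough world population circa 2019 (assumption)
          # Going from one human-equivalent AI to whole-species capacity:
          print(f"~{math.log10(humans):.1f} orders of magnitude")  # ~9.9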

    18. JR

      Yes.

    19. NB

      Uh, using billions of dollars of compute, right?

    20. JR

      (laughs)

    21. NB

      So if, if that's the way we get there, then, I mean, it might take quite a while... because you can't easily scale something that you've already spent billions of dollars building.

    22. JR

      Yeah, some people think the whole thing is blown out of proportion, that we're so far away from creating artificial general intelligence that resembles human beings, that it's all just vaporware.

    23. NB

      Mm-hmm.

    24. JR

      What do you say to those people?

    25. NB

      Uh, well, I mean, uh, uh, well, one, one would be that I would wanna be more precise about just how far away does it have to be in order for us, uh, to be rational to ignore it.

    26. JR

      Right.

    27. NB

      And it, it might be that if something is sufficiently important and high stakes, then even if it's not gonna happen in the next five, 10, 20, 30 years, it might still be wise for, you know, our pool of seven billion plus people to have some people actually thinking about this ahead of time. Um-

    28. JR

      Yeah, for sure.

    29. NB

      So, so, so some of these disagreements, I guess this is my point, are, are more apparent than real. Like, there's some people say it's gonna happen soon, and some other people say, "No, it's not gonna happen for a long time." And then, (laughs) you know, one, one person means by soon five years, and another person means by a long time five years. And, uh, you know, it's more of different attitudes rather than different specific beliefs. So, so I would first want to make sure that there actually is a disagreement.

    30. JR

      Mm-hmm.

  3. 30:00–45:00

    1. NB

      most desirable attributes.

    2. JR

      And so this could be a trend in terms of how human beings reproduce, that we instead of just randomly having sex, woman gets pregnant, gives birth to a child, we don't know what it's gonna be, what's, what's gonna happen, we just hope that it's a good kid. Instead of that, you start looking at the... all the various components that we can measure-

    3. NB

      Yeah. Uh, and so, I mean, to some extent, we already do this. There is a lot of, um, testing done, um, for various chromosomal ab- abnormalities that you can already check for. But, but our ability to, uh, to look beyond clear, stark diseases, that is, where one gene is wrong-

    4. JR

      Right.

    5. NB

      ... like the, like the, to look at more complex traits is, is, is increasing rapidly. Um, so obviously, there are a lot of ethical issues and-

    6. JR

      Yeah.

    7. NB

      ... different things that come into that.

    8. JR

      That's what I was gonna get to.

    9. NB

      But if I, if we're just talking what is technologically feasible-

    10. JR

      Mm-hmm.

    11. NB

      ... I, I think that, that ... I mean, already you could do a very limited amount of that today, and maybe you'd get, you know, two or three IQ points in expectation more if you selected using current technology based on 10 embryos, let us say, so very small. But, but as genomics, uh, gets better at deciphering the genetic architecture of complex traits, like whether it's intelligence or, or personality attributes, then, then you, you would have more selection power and you could do more. A- and then there is a number of other technologies we don't yet have, but which if you did, would then kind of stack with that and, and enable much more powerful forms of, of enhancement. Um, so, so, so there, uh, yeah, I don't think there are any major technological hurdles really in, in the way, just some small amount of incremental further improvement.
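
      A Monte-Carlo sketch of the selection arithmetic described here. The predictor strength is an assumed figure, chosen so the output lands near the "two or three IQ points from 10 embryos" estimate; it illustrates the method, it is not a citation:

          import random, statistics

          SD_IQ = 15.0   # population IQ standard deviation
          R2 = 0.01      # assumed within-family variance captured by the predictor
          N_EMBRYOS, TRIALS = 10, 20_000

          def selected_gain():
              # Each embryo: a predicted genetic score plus the unpredicted remainder.
              embryos = [(random.gauss(0, SD_IQ * R2 ** 0.5),
                          random.gauss(0, SD_IQ * (1 - R2) ** 0.5))
                         for _ in range(N_EMBRYOS)]
              best = max(embryos)        # pick the embryo with the top predicted score
              return best[0] + best[1]   # its true score relative to the average

          gains = [selected_gain() for _ in range(TRIALS)]
          print(f"expected gain: ~{statistics.mean(gains):.1f} IQ points")  # ~2-3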

    12. JR

      That's wh- when you talk about doing something with genetics and human beings and selecting, selecting for the superior versions, and then if everybody starts doing that, the ethical concerns when you start discussing that, people get very nervous-

    13. NB

      Mm-hmm.

    14. JR

      ... 'cause they start to look at their own genetic defects and they go, "Oh my God, what if I didn't make the cut?"

    15. NB

      Yeah.

    16. JR

      Like, "I wouldn't be here."

    17. NB

      Yeah.

    18. JR

      And then you start thinking about all the imperfect people that have actually contributed in some pretty spectacular ways-

    19. NB

      Yeah.

    20. JR

      ... to what our culture is. And like, w- what if everybody has perfect genes, would all these things even take place? Like, what are we doing really if we're bypassing nature and we're choosing to select for the traits and the attributes that we find to be the most positive and attractive? Like, what are ... like, that gets slippery.

    21. NB

      I, uh, and, and you, you think what, what would happen if, say, people at some earlier age had had this ab- ability to kind of lock in their-

    22. JR

      Yes.

    23. NB

      ... you know, their, their, their prejudices or, um, if the Victorians had had this.

    24. JR

      Sure.

    25. NB

      Maybe we would all be, uh, whatever, pious and patriotic now or something.

    26. JR

      Yeah. Who knows? The Nazis.

    27. NB

      Uh, or any other, yeah. So, um, so, so in general, with all of these powerful technologies we, we are developing, there, there is ... I- I think the ideal course would be that we would first gain a bit more wisdom, and then we would get all of these powerful tools. Um, but it looks like we're getting the powerful tools before we have really a- achieved a very high level of wisdom.

    28. JR

      Yeah.

    29. NB

      And so-

    30. JR

      But we haven't earned them. The people that are using them are sort of, uh, we're, we haven't ... Like, think about the th- the technology that all of us use. How many, how many pieces of technology do you use in a day and how much do you actually understand any of those? Most people have very little understanding of how any of the things they use work. They put no effort at all into creating those things, but yet they've inherited the responsibility of the power that those things possess.

  4. 45:00–1:00:00

    1. NB

      Um, so I think we could have gotten to all the, the good uses of nu- nuclear technology that we have today w- without having had, had kind of the nuclear bomb developed.

    2. JR

      Now, you pay attention to, like, Boston Dynamics and all these, uh, all these different robotic creations that they've made?

    3. NB

      Well, they seem to have a penchant for doing really sinister-looking, um, bots.

    4. JR

      (laughs) I think all robots that are... uh, you know, anything that looks autonomous is kind of sinister-looking, if it could do backflips.

    5. NB

      Well, I mean, you see the Japan... Yeah, I mean, th- like, the Japanese have these, like, big eyes, sort of rounded.

    6. JR

      Yeah.

    7. NB

      So it's a different type.

    8. JR

      They're trying to trick us.

    9. NB

      Boston Dynamics is-

    10. JR

      Yes.

    11. NB

      ... I guess, they want the Pentagon to, uh, give them funding or something.

    12. JR

      Right, DARPA. Yeah, they, they look like they're developing terminators.

    13. NB

      Yeah.

    14. JR

      Yeah. But what, what I was thinking is, if we do eventually come to a time where those things are going to war for us instead of us, like, if we get involved in robot wars, our robots versus their robots-

    15. NB

      Yeah.

    16. JR

      ... and this becomes the next motivation for increased technological innovation to try to deal with superior robots by the Soviet Union or by China, like these, these are more things that could be threats that could push people to some crazy level of technological innovation.

    17. NB

      Yeah, it, it, it could. I mean, I think there are other drivers for technological innovation as well, um, that, that seems, uh, plenty, um, strong, um-

    18. JR

      Sure.

    19. NB

      ... like com- commercial drivers, let us say, um, that we wouldn't have to rely on, on war or the, the threat of war to, to kind of stay innovative. Um, and I mean, there has been this effort to try to see if it would be possible to, uh, have some kind of ban on lethal autonomous weapons.

    20. JR

      Mm-hmm.

    21. NB

      Um, just as... I mean, there are, there are a few-

    22. JR

      Drone.

    23. NB

      There are a few te- technologies that we have, like there is, has been a relatively successful ban on, on chemical and biological weapons, um, which have, by and large, been, you know, uh, honored and upheld. Um, there, there are kind of treaties on, on nuclear weapons, which has limited proliferation. Yes, there are now maybe, I don't know, a, a dozen. I don't know the exact number. But it, it's certainly a lot better than 50 or 100 countries.

    24. JR

      Yes.

    25. NB

      Um, and some other weapons as well, uh, uh, blinding lasers, um, landmines, cluster munitions.

    26. JR

      Mm-hmm.

    27. NB

      And so, so, so some people think may- maybe we could do something like this with, um, lethal autonomous weapons, killer bots that-

    28. JR

      Yeah.

    29. NB

      ... you know, do we... is that really what humanity needs m- most now? Like, another arms race to develop, like, killer bots? It seems arguably the answer to that is no. Um, I've, I've, I've kind of... there's a lot of my friends who are, who are supportive. I, I kind of stood a little bit on the sidelines on that particular campaign, being a little unsure, um, exactly what it is that... Well, I mean, certainly, I think it'd be better if we refrain from having some arms race to develop these than not. But if, if you start to look in more detail, what, what precisely is the thing that you're hoping to ban? So if the idea is the autonomous bit, like, the robot should not be able to make its own firing decision, well-

    30. JR

      Right.

  5. 1:00:00–1:15:00

    1. JR

      beings. If you develop an autonomous robot that's really autonomous, it has no need for other people, that's where we get weirded out. Like it, it doesn't need us.

    2. NB

      Right. Yeah. I mean, I, I think the same would hold even if it were not a robot but just a program in- inside a computer.

    3. JR

      Sure.

    4. NB

      Um, but, but yeah. Yeah. And the, and the idea that you could have something that is strategic and deceptive and so forth.

    5. JR

      Yeah.

    6. NB

      So that, that... I mean, but then other elements of the movie, of course, and in general, w- a reason why it's bad to get your kind of map of the future from, from Hollywood is if it... So if, if you think, so it's this one guy, presumably some genius living out in the middle of nowhere and kind of inventing this whole system. Like in reality-

    7. JR

      Yeah.

    8. NB

      ... um, it's like anything else. There are a lot, like hundreds of people programming away on their computers, writing on whiteboards, and sharing ideas with other people across the world. Uh, it doesn't look like a human. Um, s- s- and, and there would often be some economic reason for doing it in the first place. Like not just, "Oh, we have this Promethean attitude that we want to-"

    9. JR

      Yeah.

    10. NB

      "... kind of bring..." And I like... So-Um, so all of those things don't make for such good plot lines, so they just get removed. But then-

    11. JR

      Mm-hmm.

    12. NB

      ... I wonder if people actually think th- of the future in terms of some kind of super villain and some hero and it's gonna come down to these two people and they're gonna wrestle and-

    13. JR

      Yeah.

    14. NB

      ... you know? Um, and it's gonna be very personalized and concrete and localized, whereas a lot of things that determine what happens in the world are very spread out and bureaucracies churning away and...

    15. JR

      Sure.

    16. NB

      Um...

    17. JR

      Yeah, that was a big problem that a lot of people had with the movie, was the idea that this one man could innovate at such a high level and be so far beyond everyone else is ridiculous. That he's just doing it by himself on-

    18. NB

      Yeah.

    19. JR

      ... this weird compound somewhere.

    20. NB

      Yeah.

    21. JR

      Come on.

    22. NB

      Yeah.

    23. JR

      That's, that... But that makes a great movie, right?

    24. NB

      Yeah.

    25. JR

      Fly in in the helicopter, drop you off in a remote location.

    26. NB

      Yeah.

    27. JR

      This guy shows you something he's created that is gonna change the whole world.

    28. NB

      And it looked beautiful. I mean, I-

    29. JR

      Yeah.

    30. NB

      ... can imagine doing some writer's-

  6. 1:15:00–1:18:12

    1. JR

      slaves.

    2. NB

      Right.

    3. JR

      They used to think it was slaves, but now because of the bones or the food they were eating really well, and they think that, well ... And also, the, the level of sophistication involved, this is not something you just get kind of slaves to do. You, this seems to be that there was a population of structural engineers, that there was a, a population of skilled construction people, and that they tried to, you know, utilize all of these great minds that they had back then-

    4. NB

      Mm.

    5. JR

      ... to put this thing together. But it's still a mystery. I think that's the spot that I would go to because I think it would be amazing to see so many different innovative times. I mean, it would amaze ... It'd be amazing to, to be, uh, alive during the time of Genghis Khan or, you know, to be alive during some of the, some of the wars of 1,000, 2,000 years ago, just to see what it was like on the ... But the pyramids would be the big one. But I think if I was in the future, some weird dystopian future where artificial intelligence runs everything and, and human beings are, you know, linked to some sort of neurological implant that connects us all together and we long for the days of biological independence and we would like to see, what, what was it like when they first-

    6. NB

      Mm-hmm.

    7. JR

      ... started inventing phones? What was it like when the internet was first opened up for people? What was it like when people saw ... When, when, when someone had someone like you on a podcast and was talking about potential artificial intelligence and where it could lead us and what it could do?

    8. NB

      It's the most interesting time.

    9. JR

      It is the most interesting time.

    10. NB

      It is now.

    11. JR

      Yeah. That's what's cool about it to me is that we seem to be in this, this really Goldilocks period of great change, where we're still human but we're worried about privacy, we, we're concerned our phones are listening to us, we're concerned about surveillance states and, you know, p- people put little stickers over their laptop camera. W- we see it coming-

    12. NB

      Yeah.

    13. JR

      ... but it hasn't quite hit us yet. We're just seeing the problems that are associated with this increased level of technology in our lives.

    14. NB

      Which is, yeah, that, that is a strange thing if you add up all these pieces. It does-

    15. JR

      Yeah.

    16. NB

      ... put us in this very weirdly special position.

    17. JR

      Yeah.

    18. NB

      And you wonder, hmm, it's a little bit too much of a coincidence. I mean, it might be the case, but yeah, it, it does put some strain on it.

    19. JR

      When you say a little too much of a coincidence, how so?

    20. NB

      Well, so, um, I mean, I guess the intuitive way of thinking about it, like what way, like what, what are the chances that-

    21. JR

      Right.

    22. NB

      ... just by chance you would happen to be, uh, living in the most interesting time in history-

    23. JR

      Yeah.

    24. NB

      ... being like a celebrity, like whatever, like what, well, like that's pretty low prior probability. Like most people-

    25. JR

      Like you mean like for me?

    26. NB

      Well, for you. Or I mean, for, for ... But for all of us, really.

    27. JR

      For all of us.

    28. NB

      Um, um, and so, that, that, that could just be. I mean, uh, uh, if there's a lottery, somebody's got to have the ticket, right?
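
      The prior-probability intuition behind "too much of a coincidence" can be sketched with self-sampling-style arithmetic; the demographic figures are rough, commonly cited estimates, not numbers from the episode:

          # If you were a randomly sampled human out of everyone ever born,
          # how likely would you be to find yourself alive right now?
          humans_ever_born = 110e9  # rough estimate of all Homo sapiens ever
          alive_in_2019 = 7.7e9
          p = alive_in_2019 / humans_ever_born
          print(f"P(observing the present era) ~ {p:.0%}")  # ~7%, before even
          # conditioning on living at an apparent technological hinge point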

    29. JR

      Yeah. But, um-

    30. NB

      Or.

Episode duration: 2:32:58
