Nick Bostrom: Simulation and Superintelligence | Lex Fridman Podcast #83
EVERY SPOKEN WORD
150 min read · 30,181 words
- 0:00 – 2:48
Introduction
- LFLex Fridman
The following is a conversation with Nick Bostrom, a philosopher at University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, simulation hypothesis, human enhancement ethics, and the risks of super intelligent AI systems, including in his book, Superintelligence. I can see talking to Nick multiple times in this podcast, many hours each time, because he has done some incredible work in artificial intelligence, in technology space, science, and really philosophy in general. But we have to start somewhere. This conversation was recorded before the outbreak of the coronavirus pandemic, that both Nick and I, I'm sure, will have a lot to say about next time we speak. And perhaps that is for the best, because the deepest lessons can be learned only in retrospect, when the storm has passed. I do recommend you read many of his papers on the topic of existential risk, including the technical report titled Global Catastrophic Risks Survey that he co-authored with Anders Sandberg. For everyone feeling the medical, psychological, and financial burden of this crisis, I'm sending love your way. Stay strong. We're in this together. We'll beat this thing. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple podcast, support on Patreon, or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy bitcoin, and invest in the stock market with as little as $1. 
Since Cash App does fractional share trading, let me mention that the order execution algorithm that works behind the scenes to create the abstraction of fractional orders is an algorithmic marvel. So big props to the Cash App engineers for solving a hard problem that, in the end, provides an easy interface that takes a step up to the next layer of abstraction over the stock market, making trading more accessible for new investors and diversification much easier. So again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you get $10 and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Nick Bostrom.
- 2:48 – 12:17
Simulation hypothesis and simulation argument
- LFLex Fridman
At the risk of asking the Beatles to play Yesterday or the Rolling Stones to play Satisfaction, let me ask you the basics. What is the simulation hypothesis?
- NBNick Bostrom
That we are living in a computer simulation.
- LFLex Fridman
What is a computer simulation? How are we supposed to even think about that?
- NBNick Bostrom
Well, so the hypothesis is meant to be understood in a literal sense. Not that we can kind of metaphorically view the universe as an information processing physical system, but that there is some advanced civilization who built a lot of computers, and that what we experience is an effect of what's going on inside one of those computers. So that the- the world around us, uh, our own brains, everything we see and perceive and think and feel would exist because this computer, uh, is, uh, running certain programs.
- LFLex Fridman
So- so do you think of this computer as something similar to the computers of today, these deterministic sort of Turing machine-type things? Is that what we're supposed to imagine, or are we supposed to think of something more like, um, like a- like a quantum mechanical system? Something much bigger, something much more complicated, something much more mysterious from our current perspectives? Or do-
- NBNick Bostrom
Uh, the- the ones we have today would do fine. I mean, bigger, certainly. You'd need more- ... memory and more processing power. I don't think anything else would be required. Now, it might well be that they do have addition- may- maybe they have quantum computers and other things that would give them even more umph. It seems kind of plausible, but I don't think it's a- a necessary assumption in order to, uh, get to the conclusion that a ma- a technologically mature civilization would be able to create these kinds of computer simulations with conscious beings inside them.
- LFLex Fridman
So do you think the simulation hypothesis is an idea that's most useful in philosophy, computer science, physics? Sort of where do you see it, um, having valuable kind of star- starting point in terms of a thought experiment of it?
- NBNick Bostrom
Is it useful? I- I guess it's more-
- LFLex Fridman
(laughs)
- NBNick Bostrom
... uh, in- in- in- in informative and interesting and maybe important-
- LFLex Fridman
But-
- NBNick Bostrom
... but it's not designed to be useful for something else. Um...
- LFLex Fridman
Well, okay, interesting, sure. But is it philosophically interesting or d- is there some kind of implications to computer science and physics?
- NBNick Bostrom
I think not so much for computer science or- or physics per se. Certainly it would be of interest in philosophy. I think also, um, to say cosmology or physics inasmuch as you are interested in the fundamental building blocks of the world and the rules that govern it. Um, if we are in a simulation, there is then the possibility that, say, physics at the level of the computer running the simulation, um, could- could be different from the physics governing phenomena in the simulation. So I- I think it might be interesting from the- ... the point of view of r- religion, or just for, for kind of trying to figure out what- what the heck is going on. Um, so we mentioned the simulation hypothesis so far. Now, there- there is also the simulation argument, which I- I- I- I tend to make a distinction. So, simulation hypothesis, we are living in a computer simulation. Simulation argument, this argument that, um, tries to show that one of three propositions is true, one of which is the simulation hypothesis, but there are two alternatives, um, in the original simulation argument, whi- which we can get to.
- LFLex Fridman
Yeah, let's- let's go there. By the way, confusing terms because-
- NBNick Bostrom
Yeah.
- LFLex Fridman
... uh, people will, I think, probably naturally think simulation argument equals simulation hypothesis.
- NBNick Bostrom
Yeah.
- LFLex Fridman
Uh, just terminology-wise. But let- let's go there. So, simulation hypothesis means that we are living in a simulation. It's the hypothesis that we're living in a simulation, and the simulation argument has these three complete possibilities that cover all possibilities. So, what are they?
- NBNick Bostrom
Yeah. So, it's like a disjunction. It says at least one of these three is true.
- LFLex Fridman
Yes.
- NBNick Bostrom
Although it doesn't, on its own, tell us which one. Um, so the first one is that almost all, uh, civilizations at our current stage of technological development, um, go extinct before they reach technological maturity. So, there is some, you know, Great Filter, um, that makes it so that basically none of the civilizations throughout, you know, maybe vast cosmos, uh, will ever get to realize the full potential of technological development.
- LFLex Fridman
And this could be... theoretically speaking, this could be because most civilizations kill themselves too eagerly, or destroy-
- NBNick Bostrom
Yeah.
- LFLex Fridman
... themselves too eagerly, or it might be super difficult to build a simulation so that the span of time-
- NBNick Bostrom
Theoretically, it could be both. Now, I think it looks like we would technologically be able to get there in- in a time span that is short compared to, say, the lifetime of planets and other sort of astronomical processes.
- LFLex Fridman
So, your intuition is, uh, to build a simulation is not...
- NBNick Bostrom
Well, so there's this interesting concept of technological maturity. Um, it's kind of an interesting concept for other purposes as well. We can see, even based on our current limited understanding, what some lower bound would be on the capabilities that you could realize by just developing technologies that we already see are possible. So, for example, one- one of my research fellows here, Eric Drexler, back in- in the '80s, um, studied, uh, molecular manufacturing. That is, you could, um, analyze, using theoretical tools and computer modeling, the performance of various molecularly precise structures that we didn't then, and still don't today, have the ability to actually, uh, fabricate.
- LFLex Fridman
Mm-hmm.
- NBNick Bostrom
But you could say that, "Well, if we could put these atoms together in this way, then the system would be stable and it would, you know, rotate with- at this speed and have all these computational, uh, characteristics." And he also outlined some pathways that would enable us to get to this kind of molecular manufacturing in- in the fullness of time. And you could do other- other studies we've done. You could look at the speed at which, say, it would be possible to colonize the galaxy if you had mature technology. We- we have an upper limit, which is the speed of light. We have sort of a lower current limit, which is how fast current rockets go. We- we know we can go faster than that by just, you know, making them bigger and have more fuel and stuff. And- and you can then start to, um, describe the technological affordances that would exist once a civilization has had enough time to develop, even- at least those technologies we already know are possible. Then maybe they would discover other new physical phenomena as well that we haven't realized that would enable them to do even more. But- but at least there is this kind of basic set of capabilities.
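The bracketing Bostrom describes (current rockets as a lower current limit, the speed of light as an upper limit) can be made concrete with rough arithmetic. This is an illustrative sketch; the galaxy diameter and the Voyager-class rocket speed are ballpark assumptions, not figures from the conversation:

```python
# Rough, illustrative timescales for crossing the Milky Way (~100,000 light-years),
# bracketed between two bounds: current rocket speeds and the speed of light.
# All figures are ballpark assumptions for illustration only.

GALAXY_DIAMETER_LY = 100_000   # light-years, approximate
C_KM_S = 299_792               # speed of light, km/s
ROCKET_KM_S = 17               # roughly Voyager 1's speed leaving the solar system

def crossing_time_years(speed_km_s: float) -> float:
    """Years to traverse the galaxy at a constant speed (no acceleration phase)."""
    fraction_of_c = speed_km_s / C_KM_S
    return GALAXY_DIAMETER_LY / fraction_of_c

for label, v in [("current rocket", ROCKET_KM_S),
                 ("1% of c", 0.01 * C_KM_S),
                 ("speed of light", C_KM_S)]:
    print(f"{label:>15}: ~{crossing_time_years(v):,.0f} years")
```

At Voyager-class speeds a galaxy crossing takes on the order of a billion years; at 1% of light speed it drops to about ten million. Either way the time scales, while huge, are short compared to the lifetimes of planets, which is the point Bostrom makes above.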
- 12:17 – 15:30
Technologically mature civilizations
- LFLex Fridman
So, uh, sorry, just to define some terms. So technologically mature civilization is one that took that piece of technology to its, to its lower bound? What is a technologically mature civilization?
- NBNick Bostrom
Well, okay, so that- I mean, it's a stronger concept than we really, really need for the simulation hypothesis. I just think it's interesting in its own right.
- LFLex Fridman
Definitely.
- NBNick Bostrom
Um, so it would be the idea that there is, um, a- a- some stage of technological development where you've basically maxed out, that you developed all those general purpose, widely useful technologies that could be developed, um, or at least kind of come very close to the 99.9% there or something. Um, so that's- that's- that's an independent question. You can think either that there is such a ceiling, or you might think it just goes... the technology tree just goes on forever.
- LFLex Fridman
Where- where- where does your sense fall? Like-
- NBNick Bostrom
I would guess that there is a- a- a maximum that you would start to asymptote towards.
- LFLex Fridman
So new things won't keep springing up?
- NBNick Bostrom
I don't-
- LFLex Fridman
New- new ceilings.
- NBNick Bostrom
Uh, in terms of basic technological capabilities, I- I think that, yeah, there is like a finite set of those that can exist in this universe. Um, more of a... I mean, I- I- I- I wouldn't be that surprised if we actually reached close to that level fairly shortly after we have, say, machine superintelligence. So I don't- I don't think it would take millions of years, uh, for a human originating civilization to- to- to begin to do this. It- it- it... I think it's, like, more- more likely to happen on historical time scales. But that- that's- that's an independent speculation from-
- LFLex Fridman
(laughs)
- NBNick Bostrom
... the simulation argument. I mean, for the purpose of the simulation argument, uh, it doesn't really matter whether it goes indefinitely far up or whether there is a ceiling, as- as long as we know we could at least get to a certain level. And- and it also doesn't matter whether that's gonna happen in 100 years or- or 5,000 years or 50 million years. Like the time scales really don't make any difference for the simulation.
- LFLex Fridman
C- can you linger on that a little bit? Like, uh, there's a big difference between 100 years and, uh, 10 million years.
- NBNick Bostrom
Yeah.
- LFLex Fridman
So wh- does it really not matter... like, 'cause you just said... does it matter if we jump scales to, uh, beyond historical scales? So w- we described that. So for the simulation argument, sort of doesn't it matter that we, um... if it takes 10 million years, it gives us a lot more opportunity to destroy civilization in the meantime?
- NBNick Bostrom
Yeah, well, so it would shift around the probabilities between these three alternatives.
- LFLex Fridman
Okay.
- NBNick Bostrom
That is, if- if we are very, very far away from being able to create these simulations, if it's like say billions of years into the future, then it's more likely that we will fail ever to get there. There's more time for us to kind of, you know, go- go extinct along the way, and so similarly for other civilizations.
- LFLex Fridman
So it is important to think about how hard it is to build a simulation in terms of-
- NBNick Bostrom
For- for- i- in terms of, yeah, figuring out which of the, uh, disjuncts.
- LFLex Fridman
Yeah.
- NBNick Bostrom
Uh, but for the simulation argument itself, which is agnostic as to which of these three alternatives is true.
- LFLex Fridman
Yeah, yeah, okay, okay. (laughs)
- NBNick Bostrom
Uh, we're... is... like, you don't have to assu-... like the simulation argument would be true whether or not we thought this could be done in 500 years or it would take 500 million years.
- LFLex Fridman
No,
- 15:30 – 19:08
Case 1: if something kills all possible civilizations
- LFLex Fridman
for sure. The simulation argument stands. I mean, I'm sure there might be some people who oppose it, but, uh, it doesn't matter. I think it's- it's very nice those three cases cover it, but n- the fun part is at least not saying what the probabilities are, but kind of thinking about kind of intuiting reasoning about like what's more likely, what, uh, what are the kind of things that would make some of the arguments less and more so like... but let's actually... I don't- w- I don't think we went through them. So number one is we destroy ourselves before we ever create simulate- uh...
- NBNick Bostrom
Right. So that's kind of sad, but we have to think not just what- what might destroy us. I mean, so there- there could be some whatever disasters or meteor slamming the earth, uh, a few years from now that- that could destroy us, right? But, um, you'd have to postulate in order for this first disjunct to be true that almost all civilizations throughout the cosmos also, um, failed to reach technological maturity.
- LFLex Fridman
And the underlying assumption there is that there's likely a very large number of other intelligent civilizations.
- NBNick Bostrom
Well, if- if there are, yeah, uh, then- then they would virtually all have to succumb in the same way. I mean then that- that leads off another... I guess there are a lot of little digressions that are interesting.
- LFLex Fridman
Definitely, let's go there. Let's go there.
- NBNick Bostrom
So, yeah. I mean-
- LFLex Fridman
You're keeping dragging us back. (laughs)
- NBNick Bostrom
Well there are these... there is a set of basic questions that always come up, uh, in conversations with interesting people.
- LFLex Fridman
Yeah.
- NBNick Bostrom
Like the Fermi paradox, like there's like... (laughs) you could almost define whether a person is interesting, whe- whether at some point the question of the Fermi paradox comes up, like, um...
- LFLex Fridman
(laughs)
- NBNick Bostrom
Well, so for what it's worth, it looks to me that the universe is very big, uh, meaning in fact, according to m- the most popular current cosmological theories, infinitely big. Um, and so then it- it would follow pretty trivially that- that it would contain a lot of other civilizations. Uh, in fact, infinitely many, um, if you have some local stochasticity and infinitely many... it's like, you know, infinitely many lumps of matter one next to another, there's kind of random stuff in each one, then you're gonna get all possible outcomes with probability one, uh, infinitely repeated. Um, so- so then- then certainly there would be a lot of extraterrestrials out there. Um, I- m- even- even short of that, if the universe is very big, there might be a finite but large number. Um, if- if we were literally the only one, yeah, then- then of course, uh, if we went extinct, then all of civilizations at our current stage would have gone extinct before becoming technologically mature. So then it kind of becomes trivially true that-... a very high fraction of those went extinct. Um, but if we think there are many, I mean, it's interesting because there are certain things that plausibly could kill us, like a cer- if you look at existential risks. Um, and it might be a different, like that- that- that the best answer to what would be most likely to kill us might be a different answer than the best answer to the question, um, "If there is something that kills almost everyone, what would that be?" 'Cause that would have to be some risk factor that was kind of uniform-
- LFLex Fridman
Mm-hmm.
- NBNick Bostrom
... uh, over all possible civilizations.
- LFLex Fridman
Yeah. So w- in this, for the- for the sake of this argument, you have to think about not just us, but, like, every civilization dies out before they create the simulation.
- NBNick Bostrom
Yeah. Or- or something very close to-
- LFLex Fridman
To- to everybody.
- NBNick Bostrom
... uh, everybody.
- LFLex Fridman
Okay.
- 19:08 – 22:03
Case 2: if we lose interest in creating simulations
- LFLex Fridman
So, what's number two in the simulation?
- NBNick Bostrom
Well, so number two is the convergence hypothesis that is- that maybe, like, a lot of- some of these civilizations do make it through to technological maturity, but out of those who do get there, they, um, all lose interest in creating these simulations. So, they just- they have the capability of doing it, but they choose not to. And-
- LFLex Fridman
Yeah.
- NBNick Bostrom
... not just a few of them decide not to, but, you know, uh, you know, out of a million, you know, maybe not even a single one of them would do it.
- LFLex Fridman
And I- I think when you say, "lose interest," that sounds, like, unlikely because it's like, uh, they get bored or whatever, but it could be so many possibilities-
- NBNick Bostrom
Yeah.
- LFLex Fridman
... within that. E- th- I mean, losing interest could be, um, it could be a- anything from it being exceptionally difficult to do, uh, to fundamentally changing the sort of the fabric of reality if you do it, eh, ethical concerns. All those kinds of things could be exceptionally strong pressures.
- NBNick Bostrom
Well, certainly, I mean, yeah, uh, ethical concerns. I mean, not really too difficult to do. I mean, in a sense, that's the first assumption that you- you get to technological maturity, where you would have the ability using only a tiny fraction of your resources, uh, to create many, many simulations. So, it wouldn't be the case that they would need to spend half of their GDP forever in order to create one simulation, and they had this, like, difficult debate about whether they should, you know, invest half of their GDP for this. It would more be like, well, if any little fraction of the civilization feels like doing this at any point during maybe their, you know, millions of years of existence, then (blows raspberry) there would be millions of simulations. Um, but- but certainly there could be m- many conceivable reasons for why there would be this conver- ma- ma- many possible reasons for not running ancestor simulations or other computer simulations, even if you could do so cheaply.
- LFLex Fridman
By the way, what's an ancestor simulation?
- NBNick Bostrom
Well, that would be a type of computer simulation that would contain people like those we think have lived on- on our planet in the past and like ourselves in terms of the types of experiences they have, and- and where those simulated people are conscious. So like, not just simulated in the same sense that a- a- a- a- a- a non-player character would be simulated in the current computer game, where it- it kind of has like an avatar body, and then a very simple mechanism that moves it forward or backwards or... But- but something where the- the- the simulated, uh, being has a brain, let's say, that's simulated at a sufficient level of granularity that, um, that it would have the same subjective experiences as we
- 22:03 – 26:27
Consciousness
- NBNick Bostrom
have.
- LFLex Fridman
So, where does consciousness fit into this? Do you think simulation... Like is there different ways to think about how this can be simulated, just like you're talking about now? Do we have to simulate each brain within the larger simulation? Is it enough to simulate just the brain, just the minds, and not the simulation, like not the big u- universe itself? Like is there different ways to think about this?
- NBNick Bostrom
Yeah. I guess there is a kind of premise in the simulation argument rolled in from philosophy of mind that is that it would be possible to create a- a- a conscious mind in a computer, and that what determines whether some system is conscious or not is- is not, like, whether it's built from or- organic biological neurons, but maybe something like what the structure of the computation is that it implements.
- LFLex Fridman
Right.
- NBNick Bostrom
Then we can discuss that if we want. But I think, to put forward my- my view, it would be sufficient, say, um, if you had a computation that was, um, identical to the computation in the human brain down to the level of neurons. So if you- if you had a simulation with 100 billion neurons connected in the same way as the human brain, and you then roll that forward with- with the same kind of synaptic weights and so forth, so you actually had the same behavior coming out of this as a human w- with that brain would have, then- then I think that would be conscious. Now, it's possible you could also generate consciousness, uh, without having that detailed a simulation. There I'm getting more uncertain exactly how much you could simplify or abstract away.
- LFLex Fridman
Can you linger on that? Wha- what do you mean? Is that... I- I missed where you're placing consciousness in this second.
- NBNick Bostrom
Well, so the- so if you are a computationalist, you think that what creates consciousness is the, uh, implementation of a computation.
- LFLex Fridman
So some property, emergent property of the computation itself-
- NBNick Bostrom
Yeah.
- LFLex Fridman
... is the idea.
- NBNick Bostrom
Yeah. You could say that. But then the question is which- wh- wha- what's the class of computations such that when they are run, consciousness emerges? So if you just have, like, something that adds one plus one plus one plus one, like a simple computation, you think maybe that's not gonna have any consciousness. If, if on the other hand, the computation is one, uh, like our human brains are performing, where, uh, as part of the computation there is like, you know, a, a global workspace, a sophisticated attention mechanism. There is like s- self-representations of other cognitive processes, and a whole lot of other things, that plausibly would be conscious. And in fact, if it's exactly like ours, I think definitely it would. But exactly how much less than the full computation that the human brain is performing would be required, uh, is a little bit, I think, of an open question. Um, and you asked another, uh, interesting question as well, which is, would it be sufficient to just have, say, the brain or would you need the environment-
- LFLex Fridman
Right. That's a nice way of putting it.
- NBNick Bostrom
... in order to generate the same kind of experiences that we have? And there is a bunch of stuff we don't know. I mean, if you look at, say, current virtual reality environments, one thing that's clear is that w- we don't have to simulate all details of them all the time in order for, say, the- the human player to have the perception that there is-
- LFLex Fridman
Yeah.
- NBNick Bostrom
... a full reality in there. You can have, say, procedurally generated environments, which might only render a scene when it's actually within the view of the player character. Um, and so similarly, if this, if this, if this environment that, that we perceive is simulated, it might be that only the parts that come into our view are rendered at any given time. And a lot of aspects that never come into view, say, the- the- the details of this microphone I'm talking into, exactly what each atom is doing at any given point in time, mi- might not be part of the simulation, only a more coarse-grained representation.
- 26:27 – 28:50
Immersive worlds
- LFLex Fridman
So that- that to me is actually from an e- engineering perspective why the simulation hypothesis is really interesting to think about-
- NBNick Bostrom
Right.
- LFLex Fridman
... is how much, how difficult is it to fake sort of in a virtual reality context, I don't know if fake is the right word, but to construct a reality that is sufficiently real to us to be, um, to be immersive-
- NBNick Bostrom
Yeah.
- LFLex Fridman
... in the way that the physical world is? I think that's just, that's actually probably an answerable question of psychology, of computer science, of how, how... where's the line where it becomes so immersive that you don't want to leave that world?
- NBNick Bostrom
Yeah, or that you don't realize while you're in it that it is a virtual world.
- LFLex Fridman
Yeah, those are two actually questions... yours is the more sort of the good question about the realism, but mine, from my perspective, what's interesting is, it doesn't have to be real, but it... um, how c- how can we construct a world that we wouldn't want to leave?
- NBNick Bostrom
Ah, yeah. I mean, I think that might be too low a bar. I mean, if you think, say, when- when people first had Pong or something like that, that I'm sure there were people who wanted to keep playing it for a long time, um, 'cause it was fun and they wanted to be in this little world. M- I'm not sure we would say it's immersive. I mean, I- I guess in some sense it is, but like an absorbing activity doesn't even have to be.
- LFLex Fridman
But they left that world though. That's the th-... so like I think that bar is, um, deceivingly high. So they eventually le-... so they... you can play Pong or StarCraft or whatever more sophisticated games for hours, for- for months, you know? World of Warcraft could be a big addiction, but eventually they escape that.
- NBNick Bostrom
Ah, so you mean when it's a- um, absorbing enough that you would spend your entire... you would-
- LFLex Fridman
Yeah.
- NBNick Bostrom
... choose to spend your entire life in there?
- LFLex Fridman
And then thereby changing the concept of what reality is.
- NBNick Bostrom
Right.
- LFLex Fridman
Because your reality... your- your- your reality becomes the- the game, not because you're fooled, but because you've made that choice.
- NBNick Bostrom
Yeah, and it may be different. People might have different preferences regarding that. Some- some might, uh, ev- even if you had any perfect virtual reality, uh, might still prefer not to spend the rest of their lives there. Um,
- 28:50 – 41:10
Experience machine
- NBNick Bostrom
I mean, in philosophy, there's this experience machine thought experiment. Have- have you come across this? So Robert Nozick had this thought experiment where you imagine some crazy, uh, super-duper neuroscientists of the future have created a machine that could give you any experience you want if you step in there. Um, and for the rest of your life, you can kind of pre-program it in different ways. Um, so y- y- your- your- your fondest dreams could come true. You could... hmm, whatever you- you dream you want to be, a- a great artist, a great lover, like have a wonderful life, all of these things. If you step into the experience machine, your experiences would be constantly happy. Um, but you- you would kind of disconnect from the rest of reality and you would float there in a tank. Um, and so Nozick thought that most people would, uh, choose not to enter the experience machine. I mean, men- many might want to go there for a holiday, but they wouldn't want to check out of existence permanently. And so he thought that was an argument against, um, certain views of value according to which what we... what we value is a function of what we experience, because in the experience machine, you can have any experience you want, and yet many people would think that would not be of much value. So therefore, what we value depends on other things than what we, um, experience. So-
- LFLex Fridman
Okay, can you... can you take that argument further? I mean, wh- what about the fact that maybe what we value is the up and down of life? So-
- NBNick Bostrom
You could have up and downs in the experience machine-
- LFLex Fridman
Right.
- NBNick Bostrom
But what can't you have in the experience machine? Well, I mean-
- LFLex Fridman
Right.
- NBNick Bostrom
... that then becomes an interesting question to explore. But, for example, real connection with other people, if the experience machine is a solo machine, where it's only you, like, that's something you wouldn't have there. You would have this subjective experience of what would be, like, fake people.
- LFLex Fridman
Yeah.
- NBNick Bostrom
Um, but, y- y- when, if you gave somebody flowers, there wouldn't be anybody there who actually got happy. It would just be a little simulation of somebody smiling. But the simulation would not be the kind of simulation I'm talking about in the simulation argument, where the simulated creature is conscious. It would just be a kind of, uh, smiley face that would look perfectly real to you. Um-
- LFLex Fridman
So, uh, we're now drawing a distinction between appear to be perfectly real and actually being real?
- NBNick Bostrom
Yeah. Um, so that could be one thing. I mean, like, a big impact on history maybe is also something you won't have if you check into this experience machine. So, some people might actually feel the life I wanna have for me is one where I have a- a- a- a big, positive impact on, uh, how history unfolds. So, y- uh, so you could kind of explore these different possible explanations for why it is you wouldn't want to go into the experience machine, if- if that's- if that's what you feel. And o- one- one interesting observation regarding this Nozick thought experiment and the conclusions he wanted to draw from it is, how much is a kind of a status quo effect? So, a lot of people might not wanna jettison current reality to plug into this dream machine.
- LFLex Fridman
Mm-hmm.
- NBNick Bostrom
But, if they instead were told, "Well, what you've experienced up to this point was a dream. Now, uh, do you wanna disconnect from this and enter the real world?" When you have no idea maybe what the real world is. Or- or maybe it could say, "Well, you're actually a farmer in Peru, uh, growing, you know, uh, peanuts, and you could live for the rest of your life in this." W- where... Or- or would you wanna cons- continue your- your- your dream life as Lex Fridman, going around the world, making podcasts and doing research?
- LFLex Fridman
Yeah.
- NBNick Bostrom
S- s- so, if- if the status quo was that the- that- that they were actually in the experience machine, I think a lot of people might then prefer to live the life that they are familiar with, rather than sort of bail out into... So...
- LFLex Fridman
That's interesting. The change itself, the leap, whatever.
- NBNick Bostrom
Yeah. So, it might not be so much the- the reality itself that we're after, but it's more that we are maybe involved in certain projects and relationships, and we have, you know, a self-identity and- and these things. That's... our values are kind of connected with carrying that forward. And then whether it's inside, uh, a tank or outside a tank in Peru, or whether inside a computer or outside a computer, that's kind of less important to what- what we ultimately care about.
- LFLex Fridman
Yeah. But, s- still, so just to linger on it, it is interesting. I find may- maybe people are different, but I find myself quite willing to take the leap to the farmer in P- uh, in Peru, uh, especially as the virtual reality systems become more realistic. Uh, I- I find that possibility, and I think more people would take that leap.
- NBNick Bostrom
But so in this, in this thought experiment, just to make sure we are on the sa- so, in this case, the- the farmer in Peru would not be a virtual reality. That would be the real-
- LFLex Fridman
The real.
- NBNick Bostrom
... the- the real, your life, uh, like before, uh, this whole experience machine started.
- LFLex Fridman
Well, I- I kind of assumed from that description, uh, you're being very specific, but that kind of idea just, like, w- washes away the concept of what's real. I mean, I- I'm- I'm still a little hesitant about your kind of distinction between real and illusion. Because when you can have an illusion that feels... I mean, that looks real, I mean, what... I- I don't know how you can definitively say something is real or not. Like, what's- what's a good way to prove that something is real in that context?
- NBNick Bostrom
Well, so I- I guess, in this case, it's more a stipulation. In one case, you're floating in a tank with these wires by the super-duper neuroscientist plugging into your head, um, giving you Lex Fridman experiences.
- LFLex Fridman
Yeah.
- NBNick Bostrom
In- in the other, you're actually tilling the soil in Peru, growing peanuts, and then those peanuts are being eaten by other people all around the world who buy the exports, and-
- LFLex Fridman
So, that's- that's real.
- NBNick Bostrom
... so there's two different possible situations in- in the one and the same real world that- that you could choose to occupy.
- LFLex Fridman
But just to be clear, when you're in a vat with wires and the neuroscientists, you can still go farming in Peru, right?
- NBNick Bostrom
Hmm.
- LFLex Fridman
But like-
- 41:10 – 48:58
Intelligence and consciousness
- NBNick Bostrom
mind.
- LFLex Fridman
Yeah, I'm with you on the intelligence part, but there's something about me that says consciousness is easier to fake. Like I- I've recently gotten my hands on a lot of Roombas. Don't ask me why or how, but, uh, and I've made them, uh, it's just a nice robotic mobile platform for experiments, and I made them scream and/or moan in pain and so on, just to see when they're responding to me.
- NBNick Bostrom
Uh-huh.
- LFLex Fridman
And it's ju- just a sort of psychological experiment on my- on myself, and uh, I think they appear conscious to me pretty quickly.
- NBNick Bostrom
Mm-hmm.
- LFLex Fridman
Like I... to me, at least my brain can be tricked quite easily.
- NBNick Bostrom
Right.
- LFLex Fridman
So if- if I introspect and they... it's harder for me to be tricked that something is intelligent. So I- I just have this feeling that inside this experience machine, just saying that you're conscious and having certain qualities of the interaction, like being able to suffer, like being able to hurt, um, like being able to wonder about the essence of your own existence, not actually... I mean, you know, the creating the illusion that you're wondering about it-
- NBNick Bostrom
Mm-hmm.
- LFLex Fridman
... is enough to create the feeling of consciousness and be... cre- um, the illusion of consciousness, and because of that create a really immersive experience to where you feel like that is the real world.
- NBNick Bostrom
So you think there's a big gap between, uh, appearing conscious and being conscious? Or is it-
- LFLex Fridman
N-
- NBNick Bostrom
... that you think it's very easy to be conscious?
- LFLex Fridman
I'm not actually sure what it means to be conscious. All I'm saying is, uh, the illusion of consciousness...... is enough for this- to, to create a social interaction that's as good as if the thing was conscious, meaning I'm making it about myself. Uh-
- NBNick Bostrom
Right, yeah.
- LFLex Fridman
... but-
- NBNick Bostrom
I mean, I guess there are a few different takes. One is how good the interaction is, which might, I mean, if you don't really care about, like, probing hard for whether the thing is conscious, may- may- may- m- maybe it would be a satisfactory interaction (clears throat) , whe- whether or not you really thought it was conscious. Uh, now, if, if you really care about it being conscious in, in, like, inside this experience machine-
- LFLex Fridman
Yes.
- NBNick Bostrom
... um, how easy would it be to fake it? And you say, "It sounds-"
- LFLex Fridman
Easy.
- NBNick Bostrom
"... fairly easy."
- LFLex Fridman
Yeah.
- NBNick Bostrom
But then the question is would that also mean it's very easy to instantiate consciousness?
- LFLex Fridman
That's-
- NBNick Bostrom
Like, it's much more widely spread in the world than we have thought. It doesn't require a big human brain with 100 billion neurons. All you need is some system that exhibits basic intentionality and can respond, and you already have consciousness. Like, in, in that case, I guess you still have a close coupling. The, the, the-
- LFLex Fridman
Right.
- NBNick Bostrom
... the, the, the, I guess, the... A, a data case would be where-
- LFLex Fridman
Yeah.
- NBNick Bostrom
... they can come apart, where, where you could create the appearance of there being a conscious mind without there actually being another conscious mind. I'm... Yeah, I'm somewhat agnostic exactly where these lines go. I think one, one observation that makes it plausible, that you could have very realistic appearances relatively simply, (laughs) um, which also is relevant for the simulation argument and in terms of thinking about how realistic would a virtual reality, uh, model have to be in order for the simulated creature not to notice that anything was awry. Well, um, just think of, uh, our own humble brains during the wee hours of the night when we are dreaming.
- LFLex Fridman
Mm-hmm.
- 48:58 – 1:01:43
Weighing probabilities of the simulation argument
- LFLex Fridman
So we talked about the th-... the three parts of the simulation argument. One, we destroy ourselves before we ever create the simulation. Uh, two, we somehow, everybody somehow, loses interest in creating the simulation. And three, we're living in a simulation. So, you've kind of, um, I don't know if your thinking has evolved on this point, but you kinda said that we know so little that these three cases might as well be equally probable. So, probabilistically speaking, where, where do you stand on this? So...
- NBNick Bostrom
Yeah. I no- I mean, I don't think equal necessarily would be the most, um, supported probability assignment.
- LFLex Fridman
So how would you, without assigning actual numbers, what- what- what's more or less likely in your, in your view?
- NBNick Bostrom
Well, I mean, I've historically tended to punt on the, the question of like, us between these three. And-
- LFLex Fridman
So, maybe actually another way is, which kind of things would make it, each of these more or less likely? What, what kind-
- NBNick Bostrom
Right.
- LFLex Fridman
... of, yeah intuitively?
- NBNick Bostrom
I mean, I mean certainly in general terms, if you, if you take anything that say, r- r- increases or reduces the probability of one of these, would tend to, uh, slosh probability around on the other. So, if, if one becomes less probable, like the other would have to... 'cause it's gotta add up to one.
- LFLex Fridman
Yes.
- NBNick Bostrom
So, if we consider the first hypothes- the first alternative that, that there's this filter that makes it so that virtually no civilization reaches technological maturity, um, in particular our own civilization. If, if that's true, then it's like very unlikely that we would reach technological maturity, just because if almost no civilization at our stage does it, then it's unlikely that we do it. So, hence-
- LFLex Fridman
I'm sorry, can you linger on that for a second? Or, or what-
- NBNick Bostrom
Well, so if it's the case that almost all civilizations at our current stage of technological maturity fail, so, uh, fail, at their current stage of technological development, fail to reach maturity-
- LFLex Fridman
Mm-hmm.
- NBNick Bostrom
... um, that would give us very strong reason for thinking we will fail to reach technological maturity.
- LFLex Fridman
Oh, and also sort of the flip side of that is the fact that we've reached it, means that many other civilizations have reached this point.
- NBNick Bostrom
Yeah. So, that, that means if we get closer and closer to actually reaching technological maturity, uh, there's less and less distance left where we could go extinct before we are there. And therefore, the probability that we will reach increases as we get closer, uh, and that would make it less likely to be true that almost all civilizations at our current stage failed to get there. Like, with the one case we'd studied, ourselves, we'd be very close to getting there. That would be strong evidence that it is not so hard to get to technological maturity. So, to the extent that we, you know, feel we are moving nearer to technological maturity, that, that would tend to reduce the probability of the first alternative and increase the probability of the other two. Uh, it, it doesn't need to be a monotonic change, like if every once in a while some new threat comes into view, some bad new thing you could do with some novel technology, for example, you know, that, that could change our probabilities in the other direction.
- LFLex Fridman
But that, that technology again, you have to think about as, that technology has to be able to equally, in an even way, affect every, uh, civilization out there.
- NBNick Bostrom
Yeah, pretty much. I mean, that str- strictly speaking, it's not true. I mean, there could, there could be two different existential risks and every civilization, you know, uh-
- LFLex Fridman
Is in one of them.
- NBNick Bostrom
... die from one or the other. Like, but none of them kills more than 50%. Like it, it-
- LFLex Fridman
Yes. Gotcha.
- NBNick Bostrom
But, um, uh, incidentally, so in some of my other work, I mean, on machine superintelligence, I've pointed to some existential risks related to sort of superintelligent AI and how we must make sure, you know, to handle that, uh, wisely and carefully. Uh, it, it's not the right kind of, um, existential catastrophe to make the first alternative true though. Like, it might be bad for us if the future lost a lot of value as a result of it being shaped by some process that optimized for some completely non-human value. Uh, but even if we got killed by machine superintelligences, that machine superintelligence might still attain technological maturity. So-
- LFLex Fridman
Oh, I see. So, you're not very, you're not human exclusive. This could be any intelligent species that achieves... Like, it's all about the technological maturity, it's not that the humans have to, uh, uh, attain it.
- NBNick Bostrom
Right.
- LFLex Fridman
So like, superintelligence could replace us, and that's just as well for the simulation argument.
- NBNick Bostrom
And then it still... Yeah, yeah. I mean, it, it, it could interact with the second hypo- my alternative, like if, if the thing that replaced us was either more likely or less likely than we would be to have an interest in creating ancestor simulations, you know, that, that could affect probabilities. But yeah, to a first order, um, like if, if we all just die, then yeah, we won't produce, uh, any simulations 'cause we are dead. But if we all die and get replaced by some other intelligent thing that then gets to technological maturity, the question remains, of course, might not that thing then use some of its resources to, to do this stuff.
- LFLex Fridman
So, can you reason about this stuff? So, given how little we know about the universe, is it, um, reasonable to, uh, to reason about these probabilities? So like, how little... Well, maybe you can disagree, but, uh, th- to me it's not trivial to figure out how difficult it is to build a simulation. We kind of talked about it a little bit. We also don't know...... like, as we try to start building it, like cr- start creating virtual worlds and so on, how that changes the fabric of society. Like, there's all these things along the way that can fundamentally change just so many-
- NBNick Bostrom
Mm-hmm.
- LFLex Fridman
... aspects of our society, about our existence, that we don't know anything about. Like, the kind of things b- we might discover when we understand to a greater degree the fundamental, the physics. Like, the, the theory, if we br- have a breakthrough, have a theory and everything, how that changes stuff, how that changes deep space exploration and so on.
- 1:01:43 – 1:05:53
Elaborating on Joe Rogan conversation
- LFLex Fridman
So, part three of the argument says that, um, so that leads us to a place where eventually somebody creates a simulation. That, I think you- you had a conversation with Joe Rogan. I think there is some aspect here where you got stuck a little bit. Um, how does that lead to we're likely living in a simulation? So, this kind of probability argument-
- NBNick Bostrom
(clears throat)
- LFLex Fridman
... if somebody eventually creates a simulation, why does that mean that we're now in a simulation?
- NBNick Bostrom
What- what you get to, if you accept alternative three first, is that there would be more simulated people with our kinds of experiences than non-simulated ones. Like if ... in- in- in kind of ... if you look at the world as a whole, by the end of time, as it were, you just count it up, there would be more simulated ones than non-simulated ones. Then there is a- an extra step to get from that. If you assume that, suppose for the sake of the argument, that that's true-
- LFLex Fridman
Mm-hmm.
- NBNick Bostrom
... um, how do you get from that to, uh, the statement we are probably in a simulation? Um, eh, so- so here you are introducing an indexical statement. Like, it's, um, that this person, uh, right now is in a simulation. There are all these other people, you know, that are in simulations, there's some that are not in a simulation. Um, but what probability should you have that y- you yourself is one of the, uh, simulated ones-
- LFLex Fridman
Right, in this-
- NBNick Bostrom
... given that setup? So, so yeah, so I- I call it the bland principle of indifference.
- LFLex Fridman
(laughs) .
- NBNick Bostrom
Which is that, um, in- in cases like this when you have two, I guess, sets of observers, um, one of which is much larger than the other, and you can't, from any internal evidence you have, tell which set you belong to, you, uh, should assign a probability, uh, that's proportional to the size of these sets. So that if there are 10 times more simulated people with your kinds of experiences, you would be 10 times more likely to be one of those.
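The proportionality rule Bostrom states here can be sketched in a few lines; the 10-to-1 ratio is his illustrative number, and the function name is just for this sketch:

```python
# Bland principle of indifference: when you can't tell from the inside
# which set of observers you belong to, assign a credence proportional
# to the sizes of the sets. The counts below are purely illustrative.

def credence_simulated(n_simulated: float, n_unsimulated: float) -> float:
    """Credence that you belong to the simulated set."""
    return n_simulated / (n_simulated + n_unsimulated)

# If simulated people with your kinds of experiences outnumber
# non-simulated ones 10 to 1:
p = credence_simulated(10, 1)  # 10/11, i.e. 10 times more likely
```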
- LFLex Fridman
Is that as intuitive as it sounds? I mean, uh, so that- that seems kinda ... if you don't have enough information, you should, uh, rationally just assign the same probability as- as- as the-
- NBNick Bostrom
Yeah, I'd say-
- LFLex Fridman
... as the size of the set.
- NBNick Bostrom
It seems ... it seems pretty sp- um, plausible to me.
- LFLex Fridman
Where are the holes in this? Is it at the- at the very beginning, the assumption that everything stretches sort of, you have infinite time essentially?
- NBNick Bostrom
You don't need infinite time.
- LFLex Fridman
You just need ... what ... how long does the time need to be?
- NBNick Bostrom
Well, however long it takes, I guess, for a universe to produce an intelligent civilization-
- LFLex Fridman
I see.
- NBNick Bostrom
... that then attains the technology to run some ancestor simulations.
- LFLex Fridman
Gotcha. At some point ... like, when the first simulation is created, that stretch of time just a little longer, then they'll all start creating simulations? Kinda like quarter managing-
- NBNick Bostrom
Yeah. Well, I mean, there might at different ... it might diff- if- if you think of there being a lot of different planets, and a- some subset of them have life, and then some subset of those get intelligent life, and some of those maybe eventually start creating s- simulations, they might get started at quite different times. Like, maybe on- on some planet, it takes a billion years longer before you get like, um, monkeys or before you get even bacteria than on another planet. Uh, so that like ... the- the- this might happen kind of at different cosmological epochs.
- LFLex Fridman
Is there a connection here to the doomsday argument in that sampling there of ...
- NBNick Bostrom
Yeah, there is a connection, um, in that they both involve an application of anthropic reasoning. That is, reasoning about these kind of indexical propositions.
- LFLex Fridman
Yeah.
- NBNick Bostrom
But the assumption you need, uh, in the case of the simulation argument, um, is much weaker than the simula- the- the- the assumption you need to, uh, uh, make the doomsday argument
- 1:05:53 – 1:23:02
Doomsday argument and anthropic reasoning
- NBNick Bostrom
go through.
- LFLex Fridman
What is the doomsday argument? And maybe you can speak to the anthropic reasoning in- in more general
- NBNick Bostrom
Yeah.
- LFLex Fridman
... in this way of thinking.
- NBNick Bostrom
That's- that's a big and interesting topic in its own right, anthropics. But the doomsday argument was really first, uh, discovered by Brandon Carter, who was a theoretical physicist, and then developed by, um, philosopher John Leslie. Um, I think it might have been discovered initially in the '70s or '80s, and Leslie wrote this book, I think, in '96. And there are some other versions as well, um, by Richard Gott, who's a physicist, but let's focus on the Carter-Leslie version, where, um ... it's an argument, um, that we have systematically underestimated the probability that humanity will go extinct soon. Um, now I should say, most people probably think at the end of the day there is something wrong with this doomsday argument, that it doesn't really hold. It's like there's something wrong with it. But it- it's proved hard to say exactly what is wrong with it, uh, and different people have different accounts. Uh, my own view is it's pr-... seems inconclusive. But, um, and I can say what the argument is, and then-
- LFLex Fridman
Yeah, that would be good. Yeah.
- NBNick Bostrom
Yeah. So, maybe it's easiest to explain via, uh, an a- analogy to, um, sampling from urns. Uh, so you imagine you have a big, um... Imagine you have two urns in front of you and they have balls in them that have numbers. So there is the, the, the two urns look the same, but inside one there are 10 balls. Ball number 1, 2, 3, up to ball number 10. And then in the other urn, you have a million balls, uh, numbered one to a million.
- LFLex Fridman
Mm-hmm.
- NBNick Bostrom
And now somebody puts one of these urns in front of you and asks you to guess what's the chance it's the 10 ball urn. And you say, "Oh, 50/50 that, you know, I can't tell which urn it is." Um, but then you're allowed to reach in and pick a ball at random from the urn. And let's suppose you find that it's ball number seven. So that's strong evidence for the 10 ball hypothesis. Like it's a lot more likely that you would get such a low numbered ball if there are only 10 balls in the urn. Like it's in fact 10%, right?
- LFLex Fridman
Mm-hmm.
- NBNick Bostrom
Um, than if there are a million balls. It would be very unlikely you would get number seven. So you perform a Bayesian update, and if your prior was 50/50 that it was the 10 ball urn, you become virtually certain after finding, uh, the random sample was seven, that it only has 10 balls in it.
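The two-urn update Bostrom describes is elementary Bayes; worked through with exactly the quantities he states, it looks like this (a sketch, with variable names chosen for this example):

```python
# Two urns, prior 50/50. You draw ball number 7 at random.
# Likelihood of drawing any particular number: 1/10 from the 10-ball
# urn, 1/1,000,000 from the million-ball urn.

prior_ten = 0.5
likelihood_ten = 1 / 10              # P(ball 7 | 10-ball urn)
likelihood_million = 1 / 1_000_000   # P(ball 7 | million-ball urn)

# Bayes' rule: posterior is proportional to prior times likelihood.
posterior_ten = (prior_ten * likelihood_ten) / (
    prior_ten * likelihood_ten + (1 - prior_ten) * likelihood_million
)
# posterior_ten ≈ 0.99999: "virtually certain" it's the 10-ball urn
```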
- LFLex Fridman
Mm-hmm.
- NBNick Bostrom
So in the case of the urns this is uncontroversial, just elementary probability theory. The doomsday argument says that you should reason in a similar way with respect to different hypotheses about how many, um, how many balls there will be in the urn of humanity, as it were. How many humans there will ever have been-
- LFLex Fridman
Humans, yeah.
- NBNick Bostrom
... by the time we go extinct. Um, so to simplify, let's suppose we only consider two hypotheses. Either maybe 200 billion humans in total, or 200 trillion humans in total. Like it, it, it does, you could fill in more, more hypotheses, but it doesn't change the principle here, so it's easiest to see if we just consider these two. So you start with some prior based on ordinary empirical ideas about threats to civilization and so forth, and may- maybe you say it's a 5% chance that we will go extinct by the time there will have been 200 billion only. You're kind of optimistic, let's say. You think probably we'll make it through, colonize the universe, and... Um, but then according to this doomsday argument, you should think of your own birth rank, um, as a random sample. So your birth rank is your position in the sequence of all humans that have ever existed.
- LFLex Fridman
(laughs)
- NBNick Bostrom
And it turns out you're about human number 100 billion, you know, give or take.
- LFLex Fridman
100 billion, yeah.
- NBNick Bostrom
That's like roughly how many people have been born before you.
- LFLex Fridman
That's fascinating, 'cause I, I probably... Yeah, we each have a number. We each have a-
- NBNick Bostrom
We, we, we would each have a number in this. I mean, obviously the exact number would depend on where you started counting, like which ancestor start- was human enough to count as human. But those, those are not really important. There are relatively few of them.
- LFLex Fridman
Yeah.
- NBNick Bostrom
So, um, yeah, so you're roughly 100 billion. Now, if there are only gonna be 200 billion in total, that's a perfectly unremarkable number. You're somewhere in the middle, right? Just run of the mill human. Completely unsurprising.
- LFLex Fridman
Yes.
- NBNick Bostrom
Now, if they're gonna be 200 trillion, you would be remarkably early. Like you, like what are the chances out of these 200 trillion humans that you should be human number 100 billion? That seems it would have a much lower conditional probability. Um, and so analogously to how in the urn case you, uh, thought after finding this low numbered random sample, you updated in favor of the urn having few balls. Similarly, in this case, you should update in favor of the human species having a lower total number of members. That is, doom soon. And look-
- LFLex Fridman
Sorry, you said doom soon?
- NBNick Bostrom
Yeah.
- LFLex Fridman
That's the (laughs) -
- NBNick Bostrom
Well, that, that would be the hypothesis in this case, that it will end after 200 billion.
- LFLex Fridman
I just like, I just like that term for that hypothesis, yeah.
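The same update applied to birth ranks, using the illustrative numbers from the conversation (a 5% prior on "doom soon" and a birth rank of about 100 billion), shows how strongly the Carter-Leslie argument shifts the odds; this is a sketch of the reasoning, not an endorsement of it:

```python
# Doomsday update: treat your birth rank (~100 billion) as a random
# draw from all humans who will ever exist. Two hypotheses, as in the
# conversation: 200 billion total ("doom soon") vs. 200 trillion.

prior_doom = 0.05                  # optimistic prior on "doom soon"
rank_likelihood_doom = 1 / 200e9   # P(rank ~100 billion | 200 billion total)
rank_likelihood_late = 1 / 200e12  # P(rank ~100 billion | 200 trillion total)

posterior_doom = (prior_doom * rank_likelihood_doom) / (
    prior_doom * rank_likelihood_doom
    + (1 - prior_doom) * rank_likelihood_late
)
# posterior_doom ≈ 0.98: the birth-rank evidence swamps the 5% prior
```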
- 1:23:02 – 1:25:26
Elon Musk
- LFLex Fridman
reality. Let me ask, one of the popularizers, you said there's many through this... when you look at sort of the last few years of the simulation hypothesis, just like you said, it comes up every once in a while, some new community discovers it and so on. But I would say one of the biggest popularizers of this idea is Elon Musk. Um, do you have any kind of intuition about what Elon thinks about when he thinks about simulation? Why is this of such interest? Is it all the things we've talked about, or is there some special kind of intuition about simulation that he has?
- NBNick Bostrom
I mean, you might have a better mo- I think... I mean, why it's of interest, I think it's, it's like seems pretty obvious why if it... to the extent that one thinks the argument is credible, why it would be of interest. It would, if, if it's correct, tell us something very important about the world in one way or the other, whichever of the three alternatives is true. That seems like... arguably one of the most fundamental discoveries, right? Now interestingly in the case of someone like Elon, so there is like the standard arguments for why you might want to take the simulation hypothesis seriously, the simulation argument, right? In the case that... i- if you are actually Elon Musk, let us say, um, there's a kind of an additional reason in that what are the chances you would be Elon Musk?
- LFLex Fridman
(laughs)
- NBNick Bostrom
Like, it seems like maybe there would be more interest in simulating the lives of very unusual and remarkable people. So if you consider not just simulations where, um, all of human history or the whole of human civilization are simulated, but also other kinds of simulations which only, um, include some subset of people.
- LFLex Fridman
(laughs)
- NBNick Bostrom
Uh, like in the, in the se- in, in those simulations that only include a subset it might be more likely that that would include subsets of people with unusually interesting or consequential lives.
- LFLex Fridman
So if you're Elon Musk-
- NBNick Bostrom
You gotta wonder, right?
- LFLex Fridman
... it's more likely that you're in a simulation.
- NBNick Bostrom
Yeah. Or if you are... like if you are Donald Trump or if you are Bill Gates or you're like some particularly, uh, like distinctive character.
- LFLex Fridman
Yeah.
- NBNick Bostrom
Y- you might think that that, uh... I mean, if you just put yourself into the shoes, right, it's got to be like an extra reason to think, huh, that's kind of...
- LFLex Fridman
So interesting.
- NBNick Bostrom
Right. So-
- LFLex Fridman
On a, on a scale of like farmer in Peru to Elon Musk, the more you get towards the Elon Musk, the higher the probability that-
- NBNick Bostrom
Yeah, I'd imagine there would be some extra boost from that.
- LFLex Fridman
(laughs) There's an extra boost.
- 1:25:26 – 1:29:52
What's outside the simulation?
- LFLex Fridman
So, he also asked the question of, um, what he would ask an AGI saying, the question being, what's outside the simulation? Do you think about the answer to this question, if we are living in a simulation, what is outside the simulation? So, the programmer of the simulation.
- NBNick Bostrom
Um, yeah, I mean, I think it connects to the question of what's inside the simulation, in that, uh, if- if you had views about, uh, the- the- the creators of the simulation, it might help you make predictions about what kind of simulation it is, wha- what might, what might happen, what, you know, happens after the simulation, if there is some after, but also like the kind of setup. So, these- these two questions would be quite, uh, closely intertwined.
- LFLex Fridman
But do you think it would be very surprising to, like, i- is the stuff inside the simulation, is it possible for it to be fundamentally different than the stuff outside?
- NBNick Bostrom
Yeah.
- LFLex Fridman
Like, like a- a- another way to put it, can the creatures inside the simulation be smart enough to even understand or have the cognitive capabilities or any kind of information processing capabilities enough to understand the mechanism that's created them?
- NBNick Bostrom
They might understand some aspects of it. Um, I mean, it's a level of, it's kind of there are l- l- levels of explanation, like degrees to which you can understand. So, does your dog understand what it is to be human? Well, it's got some idea, like humans are these physical objects that move around and do things. And like, a normal human would have a- a deeper understanding of what it is to be a human. And maybe some very experienced psych- psychologist or great novelist might understand a little bit more about what it is to be human, and maybe superintelligence could see right through your soul. Um, so si- similarly, uh, I- I- I do think, um, that, that we are quite limited in our ability to understand all of the relevant aspects of the larger context that we exist in. Um-
Episode duration: 1:56:38
Transcript of episode rfKiTGj-zeQ