Dwarkesh Podcast

David Deutsch - AI, America, Fun, & Bayes

David Deutsch is the founder of the field of quantum computing and the author of The Beginning of Infinity and The Fabric of Reality.

Read my Contra David, on AI: https://dwarkeshpatel.com/universal-explainers/
Buy The Beginning of Infinity: https://www.amazon.com/dp/0143121359/
Episode website + Transcript: https://www.dwarkeshpatel.com/p/david-deutsch
Apple Podcasts: https://apple.co/3PZXA1j
Spotify: https://spoti.fi/3Q4fmjS
Follow me on Twitter to be notified of future content: https://twitter.com/dwarkesh_sp
Follow David on Twitter: https://twitter.com/DavidDeutschOxf

Timestamps:
0:00:00 Will AIs be smarter than humans?
0:06:30 Are intelligence differences immutable/heritable?
0:20:08 IQ correlation of twins separated at birth
0:27:08 Do animals have bounded creativity?
0:33:28 How powerful can narrow AIs be?
0:36:55 Could you implant thoughts in VR?
0:38:45 Can you simulate the whole universe?
0:41:19 Are some interesting problems insoluble?
0:44:55 Does America fail Popper's Criterion?
0:49:57 Does finite matter mean there's no beginning of infinity?
0:53:12 The Great Stagnation
0:55:30 Changes in epistemic status is Popperianism
0:59:25 Open-ended science vs gain of function
1:02:51 Contra Tyler Cowen on civilizational lifespan
1:07:16 Fun criterion
1:14:12 Does AGI through evolution require suffering?
1:17:57 Would David enter the Experience Machine?
1:20:05 (Against) Advice for young people

Dwarkesh Patel (host), David Deutsch (guest)
Jan 31, 2022 · 1h 24m

EVERY SPOKEN WORD

  1. 0:00–6:30

    Will AIs be smarter than humans?

    1. DP

      Okay. Today I'm speaking with David Deutsch. This is a conversation that I've been eagerly wanting to have for years, so this is very exciting for me. So first, let's talk about AI. Can you briefly explain why you anticipate that AIs will be no more fundamentally intelligent than humans?

    2. DD

      Uh, I, I suppose you mean AGIs.

    3. DP

      Yes.

    4. DD

      Um, and, uh, by fundamentally intelligent, I suppose you mean capable of all the same types of cognition as humans are, in principle.

    5. DP

      Yes.

    6. DD

      So that would include, you know, doing science and doing art, and in principle also falling in love, and being good and being evil, and all that. The reason is twofold: one half is about computation hardware, and the other is about software. So if we take the hardware, we know that our brains are Turing-complete bits of hardware, and therefore can exhibit the functionality of running a program for any computable function. Now, when I say any, I don't really mean any, because you and I sitting here, we're having a conversation, and we could say we could have any conversation. Well, we can assume that maybe in 100 years' time we'll both be dead, and therefore the number of conversations we could have is strictly limited. Also, some conversations depend on speed of computation: if we're going to be solving the traveling salesman problem, then there are many traveling salesman problems that we wouldn't be able to solve in the age of the universe. So when I say any, what I mean is that we're not limited in the programs we can run except by speed and memory capacity. All hardware limitations on us boil down to speed and memory capacity, and both of those can be augmented to the level of any other entity that is in the universe. Because, you know, if somebody builds a computer that can think faster than the brain, then we can use that very computer, or that very technology, to make our thinking go just as fast. So that's the hardware. As far as explanations go: can we reach the same kind of explanations as any other entity?
Usually this is said not in terms of AGIs but in terms of extraterrestrial intelligences, though it's said about AGIs too: what if they are to us as we are to ants? And so on. Well, part of that is just hardware, which is easily fixable by adding more hardware, so let's forget about that. So really the idea is: are there concepts that we are inherently incapable of comprehending? I think Martin Rees believes this. He thinks that, you know, we can comprehend quantum mechanics and apes can't, and maybe the extraterrestrials can comprehend something beyond quantum mechanics which we can't, and no amount of brain add-ons with extra hardware can give us that, because they have hardware that is adapted to having these concepts, which we haven't. The same kind of thing is said about certain qualia: maybe we can experience love and an AGI couldn't, because it has to do with our hardware. Not just memory and speed, but specialized hardware. And I think that falls victim to the same argument. The thing is, this specialized hardware can't be anything except a computer. And if there's hardware that is needed for love, let's say that somebody is born without that hardware, then that bit of the brain that does love, or that does mathematical insight or whatever, is just a bit of the brain, and it's connected to the rest of the brain in the same way that any other part of the brain is connected to it: namely by neurons passing electrical signals, and by chemicals whose concentrations are altered, and so on.
So therefore an artificial device that computed which signals were to be sent and which chemicals were to be adjusted could do the same job, and it would be indistinguishable. And therefore a person augmented with one of those, who couldn't feel love before, could feel love after that augmentation. And I think those two things are the only relevant ones. So that's why I think that AGIs and humans have the same range, in the sense I've defined.
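Deutsch's traveling-salesman remark rests on factorial growth: the number of distinct tours through n cities under brute force is (n-1)!/2, so exhaustive search outruns the age of the universe long before n gets large. A minimal sketch; the 10^9 tours-per-second checking rate is an assumed figure for illustration, not from the conversation:

```python
import math

def num_tours(n: int) -> int:
    """Distinct round trips through n cities: fix a start city,
    halve for direction, giving (n-1)!/2."""
    return math.factorial(n - 1) // 2

AGE_OF_UNIVERSE_S = 4.3e17   # ~13.8 billion years in seconds
RATE = 1e9                   # assumed: tours checked per second

for n in (10, 20, 30):
    seconds = num_tours(n) / RATE
    print(f"{n} cities: {num_tours(n):.3e} tours, "
          f"{seconds / AGE_OF_UNIVERSE_S:.3e} ages of the universe")
```

At 30 cities the brute-force search already needs on the order of ten thousand ages of the universe at this rate, which is the sense in which "any program" is limited only by speed and memory.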

    7. DP

      Okay, interesting. I think the software question is more immediately interesting than the hardware one. But I do want to take issue with the idea that the memory and speed of human brains can be arbitrarily and easily expanded. We can get into that later.

  2. 6:30–20:08

    Are intelligence differences immutable/heritable?

    1. DP

      We can just start with this question: can all humans explain everything that even the smartest humans can explain? So if I took the village idiot and asked him to, you know, create the theory of quantum computing, should I anticipate that if he wanted to, he could do this? And just for frame of reference, about 21% to 24% of Americans on the National Adult Literacy Survey fall in level one, which means that they can't even perform basic tasks like identifying the expiry date of a driver's license or totaling a bank deposit slip. So are these humans capable of explaining quantum computing or creating the Deutsch-Jozsa algorithm? And if they're not capable of doing this, doesn't that mean that the theory of universal explainers falls apart?

    2. DD

      Well, the tasks that you're talking about are tasks that no ape could do. However, there are humans who are brain-damaged to the extent that they can't even do tasks that an ape can do. And there comes a point where installing the program that would, you know, be able to read a driver's license or whatever would require augmenting their hardware as well as their software. We don't know enough about the brain yet. But if it's 24% of the population, then it's definitely not hardware. So I would say that for those people, it's definitely software. If it were hardware, then getting them to do this would be a matter of repairing the imperfect hardware. If it's software, it is not just a matter of them wanting to, or wanting to be taught, or whatever. It is a matter of whether the existing software is... what word can I use instead of "wants to"? ...is conceptually ready to do that. For example, Brett Hall has often said that he would like to speak Mandarin Chinese. So he wants to, but he will never be able to speak Mandarin Chinese, because he's never going to want it (laughs) enough to go through the process of acquiring that program. But there is nothing about his hardware that prevents him learning Mandarin Chinese, and there's nothing about his software either, except... well, what word can we use to say that he doesn't want to go through that process? I mean, he does want to learn it, but he doesn't want to go through the process of being programmed with that program. But if his circumstances changed, he might well want to.
So for example, many of my relatives a couple of generations ago were forced to migrate to very alien places, where they had to learn languages that they never thought they would speak and never wanted to speak, and yet very quickly they did speak those languages. Again, was it because what they wanted changed? In the big picture, perhaps you could say what they wanted changed. So if your driving-license-blind people wanted to be educated to read driving licenses in the sense that my ancestors wanted to learn languages, then yes, they could learn that. There is a level of dysfunction below which they couldn't, and I think those are hardware limitations. On the borderline between those two, there's not that much difference. It's like the question of: could apes be programmed with a fully human intellect? I think the answer to that is yes. Although programming them would not require hardware surgery in the sense of repairing a defect, it would require intricate changes at the neuron level to transfer the program from a human mind into the ape's mind. I would guess that that is possible, because although the ape has far less memory space than humans do, and also doesn't have certain specialized modules that humans have, neither of those is a thing that we use to the full anyway. I mean, when I'm speaking to you now, there's a lot of knowledge in my brain that I'm not referring to at all. Like the fact that I can play the piano, or drive a car, is not being used in this conversation. So I don't think the fact that we have such a large memory capacity would affect this project. Though the project would be highly immoral-

    3. DP

      Mm-hmm.

    4. DD

      ... because you'd be, uh, intentionally creating a, a, a person inside a deficient, um, brain hardware.

    5. DP

      Mm-hmm. So suppose there's hardware differences that distinguish, you know, um, uh, different humans in terms of, uh, their intelligence, if it were just up to the people who are not even functionally literate, right? So these are again, people who-

    6. DD

      Wait, wait, wait. Uh, I, I said that it could only be hardware at the low level, uh, uh... Well, either at the level of brain defects or at the level of using up the whole of our allocation of memory or speed or whatever. So-

    7. DP

      Right, but if it-

    8. DD

      Apart from that, I don't think it can be hardware.

    9. DP

      By the way, is hardware synonymous with genetic influences for you, or can software be genetic too?

    10. DD

      Um, s- software can be genetic too, though, though, uh, that doesn't mean it's immutable.

    11. DP

      Mm-hmm.

    12. DD

      It just means it's there, there at the beginning.

    13. DP

      Okay. The reason I suspect it's not software is because these people also happen to be the same people who... Let's suppose it was software, something they chose to do or something they could change. It's mysterious to me why these people would also choose to accept jobs that have lower pay but are less cognitively demanding, or why they would choose to do worse on academic tests or IQ tests. Why would they choose to do exactly the sort of thing somebody who's less cognitively powerful would do? It seems the more parsimonious explanation is just that they are cognitively less powerful.

    14. DD

      (laughs) No, not at all. Why would someone choose not to go to school, for instance, if they were given a choice, and not to have any lessons? Well, there are many reasons why they might choose that, some of them good, some of them bad. And calling some jobs cognitively demanding is already begging the question, because you're just referring to a choice that people make, which I think is a software choice, as being, by definition, forced on them by hardware. It's not cognitive deficiency, it's just that they don't want to do it. The same way, you know-

    15. DP

      So-

    16. DD

      ... if there were a culture that required Brett Hall to be able to speak fluent Mandarin Chinese in order to do a wide range of tasks, and if he didn't know Mandarin Chinese he'd be relegated to low-level tasks, then he would be, quote, "choosing the low-level tasks" rather than the, quote, "cognitively demanding tasks." But it's only culture that makes that a cognitively demanding task... that assigns a hardware interpretation to the difficulty of doing that task.

    17. DP

      Right. I mean, it doesn't seem that arbitrary to say that the kind of jobs you can do sitting down at a laptop probably require more cognition than the ones you can do on a construction site. And if it's not cognition that distinguishes them, if there's not something like intelligence or cognition or whatever you want to call it that is measured by both these literacy tests and by what you're doing at your job, then what is the explanation for such a high correlation... or I guess an anti-correlation between people who are not functionally literate and, let's say, programmers? Like, I guarantee you the people working at Apple are all above level one on this literacy survey.

    18. DD

      Well-

    19. DP

      Why, why do they just happen to make the same choices? Why is that their correlation?

    20. DD

      Well, there are correlations everywhere, and culture is built in order to make use of certain abilities that people have. So if you're setting up a company that is going to employ 10,000 employees, then it's best to make the way the company works... you know, it's best, for example, to make the signs above the doors, or the signs on the doors, or the numbers on the dials, all be ones that highly educated people in that culture can read. You could, in principle, make each label on each door a different language. There are thousands of human languages. Let's say there are 5,000 languages and 5,000 doors in the company. You could, keeping the same meaning, make them all different languages. The reason they're all the same language, and what's more, not just any old language but a language that many educated people know fluently, that's why. And then you can misinterpret that as saying, "Oh, there is some hardware reason why everybody speaks the same language." Well, no, there isn't. It's a cultural reason.

    21. DP

      Okay. So if the culture were different somehow, maybe if there were some other way of communicating ideas, do you think the people who are currently designated as not functionally literate could be in a position to learn about quantum computing, for example? If they-

    22. DD

      Um-

    23. DP

      And if they made the right choices... or not the right choices, but the choices that could lead to them understanding quantum computing.

    24. DD

      Well, I don't want to evade the question. The answer is yes, but the way you put it again rather begs the question. It's not only language that is like this, it's all knowledge. So, quantum computing is a field in which English is the standard language. It used to be German, now it's English. Now, someone who doesn't know English is at a disadvantage learning about quantum computers, but not only because of their deficiency in language. If they come from a culture in which the culture of physics and of mathematics and of logic and so on is equivalent, and only the language is different, then if they just learn the language, they will find it as easy as anyone else. But if a whole load of things are different, if a person doesn't think in terms of, for example, logic, but thinks in terms of pride and manliness and fear and, you know, all sorts of concepts that fill the lives of, let's say, prehistoric people or pre-Enlightenment people, then to be able to understand quantum computers they would have to learn a lot more than just the language of the civilization. They'd have to learn a range of other features of the civilization. And on that basis, the people who can't read driving licenses are similarly in a different culture, which they would also have to learn if they are to increase their IQ, i.e. their ability to function at a high level in the intellectual culture of our civilization.

    25. DP

      Okay. So-

    26. DD

      But if they did, they would be able to.

  3. 20:08–27:08

    IQ correlation of twins separated at birth

    2. DP

      Okay. So if it's those kinds of differences, then how do you explain the fact that identical twins separated at birth and adopted by different families tend to have... you know, most of the variance that does exist between humans in terms of IQ doesn't exist between identical twins. In fact, the correlation is 0.8, which is the correlation you would get just from taking the test on different days, depending on how good a day you were having. And these are people who are adopted by families who have different cultures, who are often in different countries. The hardware theory explains very well why they would have similar scores on IQ tests, which are themselves correlated with literacy and-

    3. DD

      Oh, uh-

    4. DP

      ... job performance and so on. Whereas, I don't know how software would explain why being adopted by different families-

    5. DD

      Yes. Well, (laughs) the hardware theory explains it only in the sense that it might be true that it's hardware. It doesn't have an explanation beyond that, and nor-

    6. DP

      Well, uh-

    7. DD

      ... does the software theory. Um, sorry, go on.
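The 0.8 figure Patel cites has a simple statistical reading that either side of this argument can use: if each twin's score is a shared latent factor plus independent noise, and the shared factor accounts for 80% of the variance, the twin-twin correlation comes out near 0.8. A stdlib-only simulation sketch; the variance split is an assumption for illustration, and the model says nothing about whether the shared factor is hardware or software:

```python
import math
import random

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

random.seed(0)
r = 0.8  # assumed: fraction of variance shared within a twin pair
twin_a, twin_b = [], []
for _ in range(20000):
    g = random.gauss(0, 1)  # latent factor shared by both twins
    twin_a.append(math.sqrt(r) * g + math.sqrt(1 - r) * random.gauss(0, 1))
    twin_b.append(math.sqrt(r) * g + math.sqrt(1 - r) * random.gauss(0, 1))

print(f"simulated twin correlation ≈ {pearson(twin_a, twin_b):.2f}")
```

With 20,000 simulated pairs the sample correlation lands very close to the 0.8 built into the model, which is all a correlation of that size establishes on its own: a shared factor, not what the factor is.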

    8. DP

      I mean, there are actually differences at the level of the brain that are correlated with IQ, right? Actual skull size has something like a 0.3 correlation with IQ. There are a few more like this. They don't explain the entire variance in human intelligence, or the entire genetic variance in human intelligence, but we do have-

    9. DD

      Yes.

    10. DP

      ... we have identified a few actual hardware differences that correlate with IQ.

    11. DD

      Well, suppose on the contrary that the results of these experiments had been different. Suppose that the result was that people who are brought up in the same family and differ only in the amount of hair they have, or in their appearance in any other way... that none of those differences make any difference to their IQ. Only who their parents were makes a difference. Now, wouldn't that be surprising? Wouldn't it be surprising that there's nothing else correlated with IQ other than who your parents are?

    12. DP

      Yes.

    13. DD

      Now, how much correlation should we expect? There are correlations everywhere. You know, there are these things on the internet, (laughs) joke memes or whatever you call them, but they make a serious point, where they correlate things like how many adventure movies have been made in a given year with the GNP per capita... that's a bad example because there's no obvious relation, but you know what I mean. The number of films made by a particular actor against the number of outbreaks of bird flu, or the like. Part of being surprised by randomness is the fact that correlations are everywhere. It's not just that correlation isn't causation, it's that correlations are everywhere. It's not a rare event to get a correlation between two things, and the more things you ask about, the more correlations you are going to get. So what is surprising is when the things that are correlated are things that you expected to be correlated, and measured. For example, when they do these twin studies and measure the IQ, they control for certain things. And like you said, identical twins reared together, they've got to be reared together.

    14. DP

      Or apart.

    15. DD

      Or, or apart, yes.

    16. DP

      Yeah, yeah.

    17. DD

      But there are infinitely more things that they don't control for. So it could be that the real determinant of IQ is, for example, how well a child is treated between the ages of three and a half and four and a half, where "well" is defined by something that we don't know yet. Something like that. Then you would expect that thing, which we don't know about and which nobody has bothered to control for in these experiments, to be correlated with IQ. But unfortunately, that thing is also correlated with whether someone's an identical twin or not.
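Deutsch's "correlations are everywhere" point is easy to demonstrate numerically: generate mutually independent random walks, series with no causal connection at all, and the strongest pairwise correlation among them is typically large, simply because there are so many pairs to choose from. A small illustrative sketch; all parameters are arbitrary:

```python
import math
import random
from itertools import combinations

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def walk(steps=100):
    """One random walk: cumulative sum of independent Gaussian steps."""
    pos, path = 0.0, []
    for _ in range(steps):
        pos += random.gauss(0, 1)
        path.append(pos)
    return path

random.seed(1)
# 40 walks, none causally related to any other: 780 pairs to compare.
walks = [walk() for _ in range(40)]
corrs = [abs(pearson(a, b)) for a, b in combinations(walks, 2)]
print(f"{len(corrs)} pairs, strongest |correlation| = {max(corrs):.2f}")
```

The strongest pair looks impressively "related" despite being noise by construction, which is the sense in which finding *some* correlation is cheap; the surprise has to come from correlations you predicted in advance.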

    18. DP

      Mm-hmm.

    19. DD

      So it's, it's not the identical twin-ness that is causing the similarity, it's this other thing.

    20. DP

      Right.

    21. DD

      Say an aspect of appearance or something. And if you knew what this thing was and surgically changed a person accordingly, you would be able to have the same effect as making an identical twin would have.

    22. DP

      Right. But I mean, as you say, in science, or to explain any phenomenon, there's an infinite amount of possible explanations, right? You gotta pick the best one. So it could-

    23. DD

      Yes.

    24. DP

      ... be that there's some unknown trait which is so obvious to adoptive parents, different adoptive parents, so they can use it as a basis for, um, discrimination or for different treatment, but that is-

    25. DD

      Well, I, I mean, I would assume they don't know what it is.

    26. DP

      Uh, but then aren't, aren't they using it as a basis to treat kids differently at the age of three, for example?

    27. DD

      They are, but not, not by, not by consciously identifying it. It's, it's, it's like it would be something like getting the idea that this child is really smart.

    28. DP

      Sure.

    29. DD

      But I'm just trying to show you that it could be something that the parents are not aware of. If you ask parents to list the traits in their children that cause them to behave differently towards their children, they might list, like, 10 traits, but then there are another thousand traits that they're not aware of which also affect their behavior.

    30. DP

      Mm-hmm. So we'd first need an explanation for what this trait is that researchers have not been able to identify, but is so, um, obvious that even unconsciously, parents are able to reliably use it as a way to, um, treat-

  4. 27:08–33:28

    Do animals have bounded creativity?

    1. DP

      If creativity is something that doesn't exist in increments... or, you know, the capacity to create explanations... we can use a simple example. Go on YouTube and look up "cat opening a door," right? You'll see, for example, a cat develops a theory that applying torque to this handle, this metal thing, will open a door. What it'll do is climb onto a countertop and jump on top of that door handle. It hasn't seen another cat do it. It hasn't seen a human get on a countertop and try to open the door that way, but it conjectures that this is a way, given its morphology, that it can access the door. So that's its theory, and then the experiment is: will the door open? This seems like a classic cycle of conjecture and refutation. Is this compatible with the cat not being... at least having some bounded form of creativity?

    2. DD

      I think it's perfectly compatible. (laughs) Animals are amazing things, and instinctive animal knowledge is designed to make animals easily capable of thriving in environments that they've never seen before. In fact, if you go down to the level of detail, animals have never seen the environment before. Maybe a goldfish in a goldfish bowl might have, but when a wolf runs through the forest, it sees a pattern of trees that it has never seen before, and it has to create strategies for avoiding each tree, and not only that, for actually catching the rabbit that it's running after as well, in a way that has never been done before. Now, this is because of a vast amount of knowledge that is in the wolf's genes. What kind of knowledge is this? Well, it's not the kind of knowledge that says "first turn left, then turn right, then jump" and so on. It's not that kind of instruction. It's an instruction that takes input from the outside and then generates a behavior that is relevant to that input. It doesn't involve creativity, but it involves a degree of sophistication in the program that human robotics has not yet come anywhere near. And by the way, when the wolf sees a wolf of the opposite sex, it may decide to leave the rabbit and go and have sex instead. And a program for a robot to locate another robot of the right species and then have sex with it is, again, I think beyond present-day robotics. But it will be done, and it clearly does not require creativity, because that same program will lead the next wolf to do the same thing in the same circumstances. The fact that the circumstances are ones it has never seen before, and it can still function, is a testimony to the incredible sophistication of that program, but it has nothing to do with creativity. Humans do tasks that require much, much less programming sophistication than that, such as sitting around a campfire telling each other a scary story about a wolf that almost ate them. Now, animals can do the wolf running-away thing; they can enact a story that's more complicated even than the one the human is telling, but they can't tell a story. They don't tell a story. Telling a story is a typical creative activity. It's the same kind of activity as forming an explanation.

    3. DP

      Okay.

    4. DD

      So I don't think it's at all surprising that cats can jump on handles, because I can easily imagine that the same amazingly sophisticated program that lets a cat jump on a branch so that the branch will get out of its way, in some sense, will also function in this new environment that it's never seen before.

    5. DP

      Corre-

    6. DD

      But there are all sorts of other things that it can't do.

    7. DP

      Oh, that's certainly true, which was my point: it has a bounded form of creativity, and if bounded forms of creativity can exist, then humans could be in one such. But... I'm having a hard time imagining the ancestral circumstance in which a cat would gain the genetic knowledge that jumping on a metal rod would get a wooden plank to open and give it access to the other side.

    8. DD

      Well, I thought I just gave an example. I mean, we don't know, at least I don't know, what kind of environment the ancestor of the domestic cat lived in. But if it contained undergrowth, for example, then dealing with undergrowth requires some very sophisticated programs. Otherwise you will just get stuck somewhere and starve to death. Now, I think a dog, if it gets stuck in a bush, has no program to get out other than just shaking itself about until it gets out. It doesn't have the concept of doing something which temporarily makes matters worse and then allows you to get out.

    9. DP

      Mm-hmm.

    10. DD

      I think dogs can't do that. And it's not because that's a particularly complicated thing, it's just that their programming doesn't have it. But an animal's programming easily could, if it lived in an environment in which that happened a lot.

  5. 33:28–36:55

    How powerful can narrow AIs be?

    2. DP

      Is your theory of AI compatible with AIs that have narrow objective functions, but functions which, if fulfilled, would give the creator of the AI a lot of power? So suppose, for example, I wrote a deep learning program, I trained it on financial history, and I asked it, "Make me a trillion dollars on the stock market." Do you think that this would be impossible? And if you think this would be possible, then it's not an AGI, but it seems like a very powerful AI, right? So it seems like AI is getting somewhere.

    3. DD

      Yeah. Well, if you want to be powerful, uh, y- you might do better inventing a weapon or something.

    4. DP

      (laughs)

    5. DD

      Um, but, but... or, or a better mousetrap is even better because it's, it's non-violent. So you, you can invent a paperclip (laughs), to, to use an example that's often used in this context.

    6. DP

      Right.

    7. DD

      You can invent... if paperclips hadn't been invented, you can invent a paperclip and make a fortune. And that's an idea which is... but it's, it's not an A- AI, because it's not the paperclip that's going out there, it's, it's really your idea in the first place that, that has caused the whole value of the paperclip. And similarly, if you invent a dumb arbitrage, uh, machine which seeks out complicated trades to make, whi- which, which are more complicated than anyone else is trying to do, and that makes you a fortune, well, the thing that made you a fortune was not the arbitrage machine. It was your idea for how, how to search for arbitrage opportunities that no one else sees.

    8. DP

      Right. That, that-

    9. DD

      That, that is what was valuable. And that's the usual way of making money in the economy. You have an idea and then you implement it.

    10. DP

      Right.

    11. DD

      Th- that, that it was an AI is beside the point. It could have been a paperclip.

    12. DP

      But, but the thing is, um, so the models that are used nowadays are, um, are not expert systems like the chess engines of the '90s. They're, you know, something like AlphaZero or AlphaGo. This is just like a, almost a blank neural net, um, and they were able to, you know, have it win at Go, or, uh... Um, so if such a, if such a neural network that was kind of blank, and if you just arbitrarily throw financial history at it, wouldn't it be fair to say that the AI actually figured out what the right trades were?

    13. DD

      Uh, no.

    14. DP

      Even though it's not a general intelligence?

    15. DD

      Y- uh, well, I think it, it, it's possible in chess and, but not in the economy, because the value in the economy is being created by creativity. And, uh, most, you know, arbitrage is one thing. That can sort of skim value off the top by taking opportunities that were too expensive for other people to take. So you can, you know, you can make money, you can make a lot of money that way if you, if you know... if you have a good idea about how to do it. But most of the value in the economy is created by... the creation of knowledge, somebody has the idea that a smartphone would be good to have, even though, uh, most people think that that's not gonna work. And that idea cannot be anticipated by anything less than an AGI. An AGI could have that idea, but no AI could.

    16. DP

      Mm-hmm. Okay. Um, so I, there's definitely other topics I wanna get to. So, let- let's talk about virtual reality. So, in The

  6. 36:55 – 38:45

    Could you implant thoughts in VR?

    1. DP

      Fabric of Reality, um, you discuss the possibility that virtual reality generators could plug in directly into our nervous system and give us sense data that way. Now, uh, as you might know, many meditators, you know, people like Sam Harris, speak of, um, both thoughts and senses as intrusions into consciousness that have a s- sort of similar, uh, they can be welcome, welcome intrusions, but they are both things that come into consciousness. So, um, do you think g- uh, that a virtual reality generator could also place thoughts as well as sense data into the mind?

    2. DD

      Uh, yes, but that's only because I think that this model, uh, is, is wrong. It's, it's basically the, the Cartesian theater as Daniel Dennett puts it, uh, w- with the stage cleared (laughs) of all the characters.

    3. DP

      Uh-huh.

    4. DD

      So, you, uh, that- that's the, that's, that's conscious, pure consciousness without content as, as Sam Harris envisages it. But I think that all that is happening there is that you are conscious of this theater, uh, and you're, you're, uh, en- envisaging it as having certain properties, which by the way, it doesn't have, but that doesn't matter. We can imagine lots of things that don't happen, um, you know. In, in fact, you know, that's... in a way, it characterizes what we do all the time. Um, s- uh, so, uh, one can interpret one's thoughts about this empty stage as being thoughts about nothing. One can, uh, interpret the actual hardware of the stage that one is imagining as being cons- pure conscious, contentless consciousness, but it's not. It has the content of a stage or a, a, a, a space, or, you know, however you want to envisage it.

    5. DP

      Mm-hmm.

  7. 38:45 – 41:19

    Can you simulate the whole universe?

    1. DP

      Okay. And then let's talk about the Turing principle. So, this is a term you coined, um, it's otherwise been called the Church-Turing-Deutsch principle, um, th- would this principle imply that you could run... so, by the way, it states that, "Any... a universal computer can simulate any physical process." Would this principle imply that you could simulate the whole of the universe, for example, in a compact, efficient computer that was smaller than the universe itself? Or is it, is it constrained to physical processes of a certain size?

    2. DD

      Uh, again, uh, it... no, it couldn't, uh, it couldn't simulate the whole universe. Uh, that would be an example of a task where it was, uh, computationally able to do it, but it wouldn't have enough memory or time.

    3. DP

      Mm-hmm.

    4. DD

      So, the more memory and time you gave it, the more closely it could simulate the whole universe. But it couldn't ever simulate the whole universe or anything near the whole simul- universe probably because it, um, uh... well, if you want it to simulate itself as well, then there are logical reasons why there are limits to that. But even if you want it to simulate the whole universe apart from itself, just the sheer size of the universe, uh, makes that, that, uh, impossible. E- even if we discovered ways of encoding information extremely densely like some people have said maybe quantum gravity would allow, uh, you know, totally amazing, um, uh, density of information, it still couldn't, uh, simulate the universe, because that would mean because of the universality of the laws of physics, that would mean the rest of the universe also was that complex because quantum gravity applies to the whole uni- rest of the universe as well. So, so but, but I think it's significant to, to separate being limited by the available time and memory from being limited by, uh, computational capacity. Because it's only when you separate those that you realize what computational universality is. And, and I think that's d- universality, like Turing or quantum universality, is the most important thing, uh, i- in the theory of computation 'cause, uh, computation doesn't even make sense unless you have a concept of a, of a universal computer.

    5. DP

      Mm-hmm. Mm-hmm.

  8. 41:19 – 44:55

    Are some interesting problems insoluble?

    1. DP

      Um, what could falsify your theory that, uh, all interesting problems are soluble? So, I ask this because, um, as, as I'm sure you know, there are people who have tried offering explanations for why certain pro- uh, problems or questions like, why is there something rather than nothing? Or how could mere physical interactions explain consciousness? They've offered explanations for why, why these problems are in, in principle insoluble. Now, I'm not convinced they're right, but do you have a strong reason for, in principle, believing that they're wrong?

    2. DD

      Uh, no. Um, so this, this is a philosophical theory and could not be proved, uh, wrong by experiment. However, uh, I think I have, um, a, a good argument for why they aren't, namely that, eh, e- each individual case of this is, is a bad explanation. So, le- let, let's, uh, say that, that, uh, some, some people say, for example, that, uh, simulating a human brain is, is impossible. Now, I can't prove that it's possible. Nobody can prove that it's possible until they actually do it, or unless they have a design for it which they prove will work. So, um... Pending that, there is, there is no way of proving that, that, uh, it's not true that this is a fundamental limitation. But the, the trouble is, with that w- idea that, that it is a fundamental limitation, the, the trouble with that is that it could be applied to anything. For example, um, it, it could be applied to the theory that you have recently, just a minute ago, been replaced by a, a humanoid robot, uh, which, which has got, is going to say for the next few minutes, just a prearranged set of things, and you're no longer a person.

    3. DP

      I can't believe you figured it out.

    4. DD

      (laughs) Yeah. Well, that's the first thing you'd say.

    5. DP

      (laughs)

    6. DD

      So, there is no way to, uh, refute that by experiment, eh, short of actually doing it, short of actually talking to you and, and so on. So, it's the same with all these other things. In order for it to make sense to have a theory that something is impossible, you have to have an explanation for why it is impossible. So we know that, for example, almost all mathematical propositions are undecidable. So, that's not because somebody has said, "Oh, maybe, maybe we can't decide everything because, uh, thinking we could decide everything is hubris." That's not an argument. You, you need an, an actual functional argument to prove that, that that is so. And then, uh, it being a functional argument, in, in which the steps of the argument make sense and relate to other things and so on, you can then say, "Well, what does this actually mean? Does this mean that maybe, uh, we w- can never understand the laws of physics?" Uh, well, it doesn't. Because if the laws of physics included an undecidable function, then we would simply write, you know, F of X and F of X is an undecidable function. We couldn't evaluate F of X. It would limit our ability to make predictions, but then (laughs)

    7. NA

      (laughs)

    8. DD

      Lots... You know, our ability to make predictions is totally limited anyway. But it would not affect our ability unders- to understand the properties of the function F, and therefore the properties of the, of the physical world.
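DD's point that an undecidable function would "limit our ability to make predictions" without limiting our understanding can be loosely illustrated with a semi-decider: a procedure that can confirm that a process halts (by running it), but can never, within any finite budget, certify that it runs forever. This is purely an illustrative sketch, not anything from the conversation; the Collatz map is chosen as a hypothetical stand-in because its universal termination is a famous open question.

```python
# A semi-decider is one-sided: "True" answers are definitive, but a
# budget-limited run can only ever say "unknown", never "runs forever".

def collatz_halts_within(n, budget):
    """Return True if the Collatz trajectory from n reaches 1 within
    `budget` steps; None means 'unknown' (not refuted, not confirmed)."""
    for _ in range(budget):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return None  # never a definitive "never halts"

print(collatz_halts_within(27, 1000))  # True: 27 reaches 1 in 111 steps
print(collatz_halts_within(27, 10))    # None: budget too small to tell
```

We understand this function's properties perfectly well, even though no finite budget can evaluate the "never halts" side of it.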

  9. 44:55 – 49:57

    Does America fail Popper's Criterion?

    1. DD

    2. DP

      Mm-hmm. Okay. Is a system of government like America's, which has distributed powers and checks and balances, is that incompatible with Popper's criterion? So the reason I ask is, the last administration had a theory that if- if you build a wall, there will be positive consequences. Um, and, uh, you know, that theory could have been tested, and then the person could have been evaluated on whether that theory succeeded. But because our system of government has distributed powers, you know, Congress opposed, uh, the testing of that theory, and so it was never tested. Um, so if our- uh, the American government wanted to fulfill Popper's criterion, would we need to give the president more power, for example?

    3. DD

      Um, it's not as simple as that. Uh, so I agree that this is, this is a big defect in the American system of government. No country has a system of government that, that perfectly fulfills Popper's criterion, uh, criterion. Um, and we can always improve. I, I think the British one is actually the best, uh, in, in the world, and it's far from (laughs) far from optimal. Uh, n- making a single change like that is not going to be the answer. The, the constitution of a polity is a very complicated thing, much of which is inexplicit. So, um, w- what the, um, the founding fathers, uh, uh, the American founding fathers realized they had a tremendous problem, which is that what they wanted to do, what they thought of themselves as doing, was to implement the British constitution. In fact, they thought they were the defenders of the British constitution and that the, uh, British King had violated it and, and was, was bringing it down. They wanted to retain it. The trouble is that they, they all, in order to do this, to gain the independence to do this, they had to get rid of the king, and then they, they wondered whether they should get an alternative king. Whichever way they did it, there were problems. The way they decided to do it, I think, made for a system that was inherently much worse than the one they were, were replacing, but they had no choice. If they wanted to get rid of a king, they had to have a, a different system for having a head of state. Therefore, they had to have, uh, a- an- it- they wanted to be democratic. They, uh- that meant that the, the president had a legitimacy in legislation that the king never had. Or- or sorry, never had it, but the king did use to have it in medieval times, but the king by the, by the time of the, of the enlightenment and so on, uh, no longer had l- uh, full legitimacy to legislate.
So, they, they had to implement a system where him seizing power was, was prevented by something other than tradition, and then so they instituted these checks and balances. Checks and- so the whole thing that they instituted was immensely sophisticated. It's an amazing intellectual achievement, and that it works as well as it does is something, some- something of a miracle. But the, uh, inherent flaws are there, and one of them is this, the fact that there are checks and balances means that responsibility is dissipated, and nobody is ever to blame for anything in the American system, and which, which is terrible. In, in the British system, blame is absolutely focused. You know, ev- everything is sacrificed to the, to the, um, end... of focusing blame and responsibility down to the mid, to, to the government. You know, past, past the, the law courts, uh, past the parliament, right to the government. That's, that's, that's where it's all focused into. Um, and, uh, th- there are no systems that do that better, but, uh, a- as you well know, the, the British system also has, uh, flaws and we, we recently saw with the, with the sequence of events with, with, um, Brexit referendum and then parliament balking at implementing a, a, some, some laws they didn't agree with. And then, uh, th- th- that being referred to the courts. And so there was the courts and the parliament and the government and the prime minister all blaming each other, and there was a sort of mini constitutional crisis, uh, which could only be resolved by having an election and then having a majority government, which is by the mathematics of how the government works, that's how it usually is in Britain. Although, uh, you know, we have been unlucky, uh, several times recently i- in not having a, a majority government.

  10. 49:57 – 53:12

    Does finite matter mean there's no beginning of infinity?

    1. DD

    2. DP

      Mm-hmm. Um, okay, so th-this could be wrong, but it seems to me in, uh, an expanding universe, our... there will be, like, a finite amount of total matter that will ever exist in our light cone, right? The, there, there's a limit. And that means that th- there's a limit on the amount of, uh, computation that, uh, th- this matter can, uh, you know, execute, the amount of energy it can provide, uh, perhaps even the amount of economic value it can sustain, right? So it would be, uh, it would be weird if the GDP per atom could be arbitrarily large. Um, so does this impose some sort of limit on your concept of the beginning of infinity?

    3. DD

      Uh, th- so what you've just recounted is, is a cosmological theory. Um, th- this, this, th- the universe could be like that, but, uh, w- we know very little about cosmology. We know very little about the universe in the large. Like, theories of cosmology are changing on a timescale of about a decade. So it, it doesn't make all that much sense to speculate about what the ultimate asymptotic form of cosmological theories will be. At the same time, um, we, we don't have a good i- idea about the asymptotic form of very small things. Like, we know that our conception of physical processes must break down somehow at the level of quantum gravity, uh, like 10 to the minus 42 seconds and, uh, th- that kind of thing. But, but, uh, we have no idea what happens below that. Some people say it's gotta stop below that, but there's no, there's no argument for that at all. It, it's just that we don't know what happens beyond that. Now, what happens beyond that may be a finite limit, similarly to the way what happens on a large scale may impose a finite limit, in which case computation is bounded by a finite limit imposed by the cosmological initial conditions of this universe, which is still different from its being imposed by inherent, um, uh, hardware limitations. For example, if there's a finite amount of, um, GNP, uh, available in the distant future, then it's still up to us whether we spend that on, um, mathematics or music or, or political systems or, or any of the thousands of even more worthwhile things that have yet to be invented. Uh, so it's, it's up to us which ideas we fill the 10 to the 10 to the 10 to the 10 bits with. Now, uh, I... my guess is that there are no such limits, but my worldview is not affected by whether there are such limits, um, because e- as I said, it's, it's still up to us what to fill them with. And then if we get chopped off at some point in the future, then everything will have been worthwhile up to then.

    4. DP

      Mm-hmm. Gotcha. Um,

  11. 53:12 – 55:30

    The Great Stagnation

    1. DP

      okay. So I, th- the way I understand, uh, your concept of the beginning of infinity, it seems to me that the more knowledge we gain, um, the more knowledge we're in a position to gain, so there should be, like, an exponential growth of knowledge. But if we look at the last 50 years, it seems that there's been a slowdown in, um... or a decrease in research productivity, economic growth, productivity growth. And this seems compatible with the story that, you know, that there's a limited amount of fruit on the tree, that we pick the low-hanging fruit, and now there's, uh, l- less and less fruit and harder and harder fruit to pick, um, and, you know, eventually w- well, the orchard will be empty. Um, eh, so do you have an alternative explanation for what's going on in the last 50 years?

    2. DD

      Yes, uh, I, I think it's very simple. It, th- there are sociological factors, uh, in, in academic life which have, uh, stultified, um, the, the culture. And not, not, not totally and not everywhere, but th- that, that has been a tendency in what has happened, and it has resulted in a, a, uh, loss of productivity in many sectors in many ways. But not in every sector, not in every way. And, and, um, uh, the, the, uh... for example, I, I think there was a... I've, I've often said there was a stultification in, uh, theoretical physics, um, starting in, let's say, the 1920s, and it, and it still hasn't fully dissipated. If it wasn't for that, quantum computers would have been invented in the 1930s and built in the 1960s. Uh, so th- that is just an accidental fact, but it- it's- it's- uh, it just goes to show that there are no guarantees. The fact that- that- that our horizons are unlimited does not guarantee that we will get anywhere, and that- that we won't start declining tomorrow. I don't think we are currently declining. I think the- the- these declines that we see are parochial effects caused by specific mistakes that- that, uh, have been made and which can be undone.

    3. DP

      Mm-hmm.

  12. 55:30 – 59:25

    Changes in epistemic status is Popperianism

    1. DP

      Um, okay, so, uh, l- I- I wanna ask you a question about Bayesianism versus, uh, Popperianism. So one reason why people prefer, uh, Bayes is because there seems to be a way of describing changes in e- changes in epistemic status when the relative status of a theory hasn't changed. So I'll give you an example. Um, currently, the many-worlds inter-, uh, explanation is the best way to explain quantum mechanics, right? But suppose we, in the future-

    2. DD

      It's the only way.

    3. DP

      Yeah, okay. Uh, but suppose in the future, we, um, we were able to build, uh, an AGI on a quantum computer, and we were able to design some clever, um, interference experiment, as you suggest, to have it be able to report back being in a superposition across many worlds. Now it seems that, um, even though many-worlds remains the best or the only explanation, somehow, its epistemic status has changed, um, as a result of the experiment, um, and in the Bayesian terms, you could say the credence of this theory has increased. How would you describe these sorts of changes in a Popperian view?

    4. DD

      So what- what- what has happened there is that, uh, at the moment, we have only one explanation that can't be immediately knocked down. If we had, if- if we did that thought experiment, we w- we might well decide that this will provide the ammunition to knock down even ideas for alternative explanations that have not been thought of yet. I mean, obviously, it wouldn't be enough to knock down every possible explanation, because for a start, we know that quantum theory is false. We don't know for sure that the next theory will have many-worlds in it. I mean, I think it will, but- but, you know, we're th- the- we can't prove anything like that. But, uh, I- I would replace the idea of increased credence with, uh, the theory that- that the- the, uh, experiment will provide, uh, a- a quiver full of arrows or a- a- a, um, a repertoire of arguments that goes beyond, um, the- the known arguments, the known bad arguments, and, uh, um, will reach into other types of arguments because... The- the reason I- I- I would say that is that some of the existing misconceptions about quantum theory reside in misconceptions about, uh, th- the- the methodology of science. Now I've written a paper about what I think is the right methodology of science, where that does- doesn't, uh, uh, apply, but- but, uh, m- many physicists and many philosophers would disagree with that, and they would, um, uh, advocate a methodology of science that's more, um, uh, based on empiricism. Uh, uh, of course, I think that empiricism is a mistake and can be knocked down in its own terms, so, eh, we shouldn't, but- but that, not everybody thinks that. Now, once we have an experiment such as my- my thought experiment, if- if that was actually done, then people could not use the- their arguments based on, uh, the fallacious idea of empiricism (laughs) because their theory would have been refuted even by the standards of empiricism.

    5. DP

      Mm-hmm. Mm-hmm.

    6. DD

      Which shouldn't have been n- needed in the first place, but, you know, so that's why I think... Th- that's the way I would express that the repertoire of arguments would become more powerful if that experiment were done successfully.
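For readers unfamiliar with the Bayesian bookkeeping DP is referring to, credence in a theory rises when an observed outcome is likelier under it than under its rivals; this is exactly the machinery the Popperian "repertoire of arguments" view declines to use. A minimal sketch, with purely illustrative numbers (nothing here comes from the conversation):

```python
# Bayes' theorem: P(H|E) = P(H) * P(E|H) / P(E),
# with P(E) expanded over H and not-H.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) for a hypothesis H given evidence E."""
    numerator = prior * p_evidence_given_h
    marginal = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / marginal

# Illustrative: credence 0.5 in a theory, and an experimental outcome
# that is ten times likelier if the theory is true than if it is false.
posterior = bayes_update(0.5, 0.9, 0.09)
print(round(posterior, 3))  # 0.909
```

DD's reply replaces this single credence number with a qualitative change: the experiment enlarges the stock of arguments available against rival explanations.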

    7. DP

      Um, the- the next

  13. 59:25 – 1:02:51

    Open-ended science vs gain of function

    1. DP

      question I have is, how far do you take the principle that open-ended scientific progress is the best way to deal with existential dangers? To give one example, um, many people have suggested... Um, so for the, you have something like gain-of-function research, right? And it's conceivable that it could lead to more knowledge and how to stop dangerous pathogens. But, um, I guess, at- at least in Bayesian terms, you could say it seems even more likely, uh, that it can or has led to the- the spread of a manmade pathogen that would have not otherwise been, um, naturally developed. So what do- would your belief in open-ended scientific progress allow us to say, "Okay, let's stop gain-of-function research"?

    2. DD

      No. I- it wouldn't allow us to say, "Let's stop it." It might, um, make it reasonable to say, "Let us do research into how to make laboratories more secure before we do gain-of-function research." It's really part of the same thing. It's- it's- it's like saying, uh, let's do research into how- how to make the, uh, plastic hoses through which the reagents pass, uh, more impermeable before we actually do the experiments with the reagents. So it's all part of the same experiment. I wouldn't want to stop something just because new knowledge might be discovered. That- that's- that's the no-no in- in my view, but- but which knowledge we need to... uh, discover first. That's a problem of scheduling, which is a non-trivial, non-trivial part of any research and of any learning.

    3. DP

      Mm-hmm. But g- would it be conceivable for you to say that until we figure out how to make sure these laboratories are, um, a- as safe to a certain standard, um, we will stop, uh, the research as it exists now? And then, uh, meanwhile we will, uh, uh, meanwhile we'll focus on doing the other r- kind of research so gain-of-function can restart, but until then it's not allowed?

    4. DD

      Uh, y- y- yes, in principle that would be reasonable. I don't know enough about the actual situation to have a view.

    5. DP

      Mm-hmm.

    6. DD

      Uh, you know, I do- I don't know how these labs work. I, I don't know what the, what the, um, what the precautions consist of. And w- wha- when I hear people talking about, for example, lab leak, uh, I, I think, well, the most likely lab leak is that one of the people who works there walks out of the front door. Uh, so the, the leak is not a leak to, from the lab to the outside. The, the, the, the leak is from the test tube to the person, and then from the person walking out the door. Uh, and, uh, I don't know enough about what these precautions are or what, what the state of the art is to know to what extent the risk is actually minimized. It could be that the, the culture of these labs is not good enough. In which case it would be part of the next experiment (laughs) to improve the culture in the labs. But I, I, I am very suspicious of saying that all labs have to stop and, and meet a criterion because I'm sure that the s- the... Well, I suspect that the s- the stopping wouldn't be necessary and the criterion wouldn't be appropriate. The, the, a- again, the, the which criterion to use depends on the actual research

  14. 1:02:51 – 1:07:16

    Contra Tyler Cowen on civilizational lifespan

    1. DD

      being done.

    2. DP

      When I had Tyler Cowen on my podcast, um, I asked him, um, why he thinks... So he thinks that, uh, human civilization is only gonna be around for 700 more years, and then so I asked him, I gave him, you know, your rebuttal, or what I understand to be your rebuttal, that, um, you know, creative, uh, optimistic societies will innovate ways of, uh, you know, safety technologies faster than totalitarian static societies can innovate, uh, destructive technologies. And he responded, you know, "Maybe, but the cost of destruction is just so much lower than, uh, the cost of building, um, and, you know, that trend has been going on for a while now. Uh, what happens when a nuke costs $60,000? Um, or what happens if there's a mistake like, uh, the kinds that, you know, we saw many times over in the Cold War?" How would you respond to that?

    3. DD

      First of all, I think we've been getting safer and safer throughout the entire history of civilization. Um, the, you know, there were these plagues that, that wiped out a, a, a h- third of the population of, of the, of the world, or half, uh, and it could've been 99% or 100%. Uh, we, we went through some kind of, uh, bottleneck 70,000 years ago, I understand, uh, which they can tell from, like, from, from genetics. All our cousin species have been wiped out, so, so, uh, we were, we were much less safe then than now. Al- also if, um, if an asteroid, a 10-kilometer asteroid, had been on target with the Earth at any time in the, in the past two-million-year, or whatever it is, history of, of the genus Homo, that would've been the end of it. Whereas now, it, it, it'll just mean higher taxation for a while. Uh, you know, that, that's the-

    4. DP

      (laughs)

    5. DD

      ... that's how much amazingly safer we are, uh, now. Uh, I am, I would never say that it's impossible that we'll destroy ourselves. That, that would be, uh, contrary to the universality of, of the human mind. We can ch- make wrong choices. We can make so many wrong choices that we'll destroy ourselves. Um, uh, a- and the, the, uh, o- on the other hand, the, the atomic bomb accident sort of thing would have had no, zero chance of destroying civilization. All they would have done is cause a vast amount of suffering. Uh, and, uh, but they, I, I don't think we have the technology to end civilization even if we wanted to. I, I think w- uh, all we would do if we just deliberately unleashed hell all over the world is we would cause a vast amount of suffering. But there would be survivors, and they, they would resolve never to do that again.

    6. DP

      Mm-hmm.

    7. DD

      Um, so, uh, I, I don't think we're even able to, let alone, uh, that, that we would do it accidentally. But, uh, as for the bad guys, well, I think we are doing the wrong thing largely in regard to both, uh, external and internal threats. But, uh, I don't think we're doing the wrong thing to an existential risk level. And over the next 700 years or whatever it is, well, I don't want to prophesy 'cause I, I don't know most (laughs) most of the advances that are gonna be made in that time. But, uh, I see no reason why if we are solving problems we won't solve problems. Uh, I, I, I don't think this, this, uh, forget... So to take another metaphor, uh, Nick Bostrom's, um, jar with white balls and there's one black ball and you, you take out a white ball, a white ball, and a white ball, and then you hit the black ball and that's the end of you, I don't think it's like that, because every white ball you take out and have reduces the number of black balls in the jar. So, uh, again, I'm not saying that's a law of nature. It could be that the very next ball we take out will be the black one and that'll be the end of us. It could be. But I think all arguments that it will be
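DD's amendment to Bostrom's urn can be made quantitative with a toy Monte Carlo: in the static urn the risk of drawing a black ball is the same on every draw, while in DD's version each white ball drawn (each problem solved) removes a black ball from the jar. All parameters below are illustrative assumptions, not figures from the conversation:

```python
import random

# Compare survival odds under the two urn models DD contrasts.

def survives(draws, white, black, dd_model, rng):
    """One trial: True if no black ball is drawn in `draws` draws."""
    for _ in range(draws):
        if black <= 0:
            return True  # all dangers already removed
        if rng.random() < black / (white + black):
            return False  # drew a black ball: catastrophe
        if dd_model:
            black -= 1  # a white ball (new knowledge) removes a danger
    return True

def survival_rate(dd_model, trials=20000, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    return sum(survives(100, white=95, black=5, dd_model=dd_model, rng=rng)
               for _ in range(trials)) / trials

print(survival_rate(dd_model=False))  # static urn: risk on every draw
print(survival_rate(dd_model=True))   # DD's model: risk shrinks quickly
```

With these illustrative numbers the static urn almost always ends in catastrophe over 100 draws, while DD's model survives the large majority of trials, which is the shape of his argument rather than a claim about the real world.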

  15. 1:07:16 – 1:14:12

    Fun criterion

    1. DD

      are fallacious.

    2. DP

      I do wanna talk about the fun criterion. D- is your definition of fun different from how other people define other positive emotions, like, um, eudaimonia or well-being or satisfaction? Is, is fun a different emotion?

    3. DD

      Uh, I, I don't think it's an emotion. And, and I, I... All these things are not very well defined. Uh, they can't possibly be very well defined until we have (laughs) a, a, a, a satisfactory theory of qualia at least, and probably more, a satisfactory theory of creativity, how creativity works and so on. Uh, I think that, um, the choice of the word fun for the thing that I, I, uh, explain more precisely, but still not very precisely as, as, um, uh, a, a creation of knowledge without, uh, uh... wh- where, where the different kinds of knowledge, inexplicit, unconscious, conscious, explicit are, are, uh, all, um, in harmony with each other. Uh, I think that is actually, um... The only way in which the everyday usage of the word fun differs from that is that fun is considered frivolous or, uh, seeking fun is considered, uh, as seeking frivolity. But I, I think that isn't so much a different use of the word. It's, it's, it's just a different pejorative theory about wh- whether this is a good or a bad thing. But, but nevertheless, I can't define it precisely. Th- the important thing is that there is a thing which has these, this property of fun that, that you can't, you can't compulsorily enact it. So in, in, i- in, in, um, i- in some views, you know, no pain, no gain. Well, then you, you can find out mechanically whether the thing is causing pain and whether it's doing it according to the theory that s- that says that you will have gain if you have that pain and so on. So that can all be done mechanically, and therefore it is subject to the criticism and the... Another way of looking at the fun theory is that it's a mode of criticism. Um, it's subject to the criticism that this isn't fun, i.e., this is making and, uh, privileging one kind of knowledge arbitrarily over another rather than being rational and, uh, letting content decide.

    4. DP

      Mm-hmm. Is this placing a limitation on universal explainers, then, if they can't create some sort of theory about why a thing could or should be fun, why anything could be fun? And it seems to me that sometimes we actually can make things fun that aren't. Take exercise: no pain, no gain. When you first go, it's not fun. But once you start going, you understand the mechanics. You develop a theory-

    5. DD

      Yes.

    6. DP

      ...for why it can and should be fun.

    7. DD

      Yes. Yes. Well, that's quite a good example, because there you see that fun cannot be defined as the absence of pain. You can be having fun while experiencing physical pain, and that physical pain is not sparking suffering, but joy. However, there is such a thing as physical pain not sparking joy, as Marie Kondo would say. And that's important, because if you are dogmatically or uncritically implementing in your life a theory of the good that involves pain, and which excludes the criticisms that maybe this can't be fun, or maybe this isn't yet fun, or maybe I should make it fun and if I can't, that's a reason to stop... If all those things are excluded because by definition the thing is good and your pain, your suffering, doesn't matter, then that opens the door not only to suffering, but to stasis. You won't be able to get to a better theory.

    8. DP

      Mm-hmm. And then why is fun central to this instead of another emotion? For example, Aristotle thought that a sort of widely defined sense of happiness should be the goal of our endeavors.

    9. DD

      Mm-hmm.

    10. DP

      Why fun instead of something like that?

    11. DD

      Well, that's defining it vaguely enough that what you said might very well be fun. The underlying thing is, going one level below (where really, to understand it, we'd need to go about seven levels below that, which we can't do yet), that there are several kinds of knowledge in our brains. The one that is written down in the exercise book, which says you should do this number of reps, you should power through this, it doesn't matter if you feel that, and so on, is an explicit theory. It contains some knowledge, but it also contains error. All our knowledge is like that. We also have knowledge which is contained in our biology, in our genes. We have knowledge that is inexplicit: our knowledge of grammar is always my favorite example. We know which sentences are acceptable and which are unacceptable, but we can't state explicitly in every case why. So there is explicit and inexplicit knowledge, and there is conscious and unconscious knowledge. All those are bits of program in the brain; they're ideas. If you define knowledge as information with causal power, they are all information with causal power. They all contain truth and they all contain error, and it's always a mistake to shield one of them from criticism or replacement. Not doing that is what I call the fun criterion. Now you might say that's a bad name but, you know, it's the best I can find.

    12. DP

      (laughs)

  16. 1:14:121:17:57

    Does AGI through evolution require suffering?

    1. DP

      So why would creating an AGI through evolution necessarily entail suffering? Because the way I see it, your theory is that you need to be a general intelligence in order to feel suffering. But by the point an evolved, simulated being is a general intelligence, we can just stop the simulation. So where's the suffering coming from?

    2. DD

      Okay. So the kind of simulation by evolution that I'm thinking of... there may be several kinds, but the kind that I'm thinking of, and which I said would be the greatest crime in history, is the kind that just simulates the actual evolution of humans from pre-humans that weren't people. So you have a population of non-people, which in this simulation would be some kind of NPCs, and they would just evolve. We don't know what the criterion would be; we'd just have an artificial universe which simulated the surface of the earth, and they'd be walking around, and some of them might or might not become people. Now, the thing is, when you're part of the way there, what is happening? The only way that I can imagine the evolution of personhood, of creative, explanatory creativity, having happened is that the hardware needed for it was first needed for something else. I have proposed that it was needed to transmit memes. So there'd be beings who were transmitting memes creatively, but the creativity was running out of resources, though not before it managed to increase their stock of memes. So in every generation there was a stock of memes being passed down to the next generation, and once the memes got beyond a certain complexity, they had to be passed down by the use of creativity by the recipient. So there may well have been a time, and as I say I can't think of any other way it could have been, when there was genuine creativity being used, but it ran out of resources very quickly. Not so quickly, though, that it didn't increase the meme bandwidth. Then in the next generation there was more meme bandwidth, there were...
      And then after a certain number of generations, there would have been some opportunity to use this hardware, or firmware I expect, for something other than just blindly transmitting memes. Or rather, creatively transmitting memes, but memes that were blind. So in that time it would have been very unpleasant to be alive. Sorry: it was very unpleasant to be alive even when we did have enough resources to think as well as transmit the memes. But I don't think there would have been a moment at which you would say, "Yes, now the suffering begins to matter, because it's not just blind memes." I think the people were already suffering at the time when they were blindly transmitting memes-

    3. DP

      Would-

    4. DD

      ...'cause they were using genuine creativity. They were just not using it to any good effect.

    5. DP

      Mm-hmm. Gotcha.

  17. 1:17:571:20:05

    Would David enter the Experience Machine?

    1. DP

      Would being in the Experience Machine be compatible with the fun criterion? You're not aware that you're in the Experience Machine; it's all virtual reality, but you're still doing the things that would make you have fun, in fact more so than in the real world. So would you be tempted to get into the Experience Machine, and would it be compatible with the fun criterion? I guess those are different questions, but...

    2. DD

      I'm not sure what the Experience Machine is. I mean, if-

    3. DP

      Oh, sorry.

    4. DD

      ...it's just... is it just a virtual reality world in which things work better than in the real world, or something?

    5. DP

      Yeah. It's a thought experiment by Robert Nozick, and the idea is that you would enter this world but forget that you're in virtual reality. The world would be, not perfect, but better in every possible way it could be better, yet you would think the relationships you have there are real, the knowledge you're discovering there is novel, and so on. Would you be tempted to enter such a world?

    6. DD

      Well... no. I certainly wouldn't want to enter any world which involves erasing the memory that I have come from this world.

    7. DP

      Mm-hmm.

    8. DD

      Related to that is the fact that the laws of physics in this virtual world couldn't be the true ones, because the true ones aren't yet known. So I'd be in a world in which I was trying to learn laws of physics which aren't the actual laws.

    9. DP

      Mm-hmm.

    10. DD

      And they would have been designed by somebody, for some purpose, to manipulate me, as it were. Maybe it would be designed to be a puzzle that takes 50 years to solve. But it would have to be, by definition, a finite puzzle, and it wouldn't be the actual world. Meanwhile, in the actual world, things are going wrong and I don't know about it, and eventually they go so wrong that my computer runs out of power. And then where will I be?

  18. 1:20:051:24:06

    (Against) Advice for young people

    2. DP

      (laughs) The final question I always like to ask people I interview is: what advice would you give to young people? Somebody in their 20s, say. Is there some advice you would give them?

    3. DD

      Well, I try very hard not to give advice.

    4. DP

      Mm-hmm.

    5. DD

      Because giving somebody advice is not a good relationship to be in with them. I can have opinions about things. For example, I may have the opinion that it's dangerous to condition your short-term goals by reference to some long-term goal. And I have what I think is a good epistemological reason for that: namely, that if your short-term goals are subordinate to your long-term goal, and your long-term goal is wrong or deficient in some way, you won't find out until you're dead. It's a bad idea because it subordinates the things that you could error-correct now, or in six months' time, or in a year's time, to something that you could only error-correct on a 50-year timescale, and then it'll be too late. So I'm suspicious of advice of the form "Set your goal...", and even more suspicious of "Make your goal be so-and-so."

    6. DP

      Huh. Interesting.

    7. DD

      So that's an example-

    8. DP

      But, but-

    9. DD

      ... of an example of advice that isn't advice.

    10. DP

      But why do you think the relationship between advisee and advice-giver is dangerous?

    11. DD

      Oh, well, because it's one of authority. Again, I tried to make the example of "advice" that I just gave non-authoritative: I just gave an argument for why certain other arguments are bad. But if it's advice of the form "A healthy mind in a healthy body," or "Don't drink coffee before 12:00," or something like that, it's a non-argument. If I have an argument, I can give the argument and not tell the person what to do. And who knows what somebody might do with an argument? They might change it into a better argument which actually implies different behavior. I can contribute arguments to the world, make arguments as best I can. I don't claim that they are privileged over other arguments. I just put them out because I think this argument works, and I expect other people not to think that it works. We've just done this in this very podcast: I put out an argument about AI and that kind of thing, and you criticized it. Now, if I were in the position of making that argument and saying that therefore you should do so-and-so, that would be a relationship of authority, which I think is immoral to have.

    12. DP

      Well, David, thanks so much for coming on the podcast, and thanks so much for giving me so much of your time.

    13. DD

      Well, it's been fascinating. Yeah, fascinating. Thank you for inviting me. (instrumental music)

Episode duration: 1:24:06

Transcript of episode EVwjofV5TgU