Dwarkesh Podcast: George Hotz vs Eliezer Yudkowsky
EVERY SPOKEN WORD
150 min read · 30,065 words
- 0:00 – 15:00
- DPDwarkesh Patel
Okay. We are gathered here to witness George Hotz and Eliezer Yudkowsky debate and discuss, live on Twitter and YouTube, AI safety and related topics. You guys already know who George and Eliezer are, so I, I don't feel like introduction is necessary. I'm Dwarkesh. I'll be moderating. I'll mostly stay out of the way, um, except to kick things off by letting George explain his basic position. And we'll take things from there. George, I'll kick it off to you.
- GHGeorge Hotz
Sure. Um, so I took an existentialism class in high school, and you'd read about these people, Sartre, Kierkegaard, Nietzsche, and you wonder, "Who are these people alive today?" And I think I'm sitting across from one of them now. Um, rationality and the Sequences, uh, this whole field, the whole Less Wrong cinematic universe, uh, have impacted so many people's lives in, I think, a very positive way, including mine. Um, not only are you a philosopher, you're also a, a great storyteller. Um, there are two books that I've picked up and, you know, it was like crack. I couldn't put them down. Uh, one was Atlas Shrugged and the other one was Harry Potter and the Methods of Rationality. Um, it's a great book. Now, those are fictional stories. Um, you've also told some stories pertaining to the real world. Um, one was a story you told when you were younger. I remember the day I found Staring into the Singularity, when I was 15. And it starts talking about Moore's law and how Moore's law is fundamentally a human law that says humans double the power of processors every two years. So once computers are doing it, it's going to be two years, but then next time it'll be one year, and then six months, and then three months, and then 1.5, and so on. And this is a hyperbolic sequence. Um, this is a singularity, and that's why it's called Staring into the Singularity. Then this document said that we were gonna... you know, the AI was gonna do wonderful things for us, we were gonna go colonize the universe, we were gonna go, you know, go forth and do all things till the end of all ages. Um, then you changed your views: superintelligence does not imply supermorality. The orthogonality thesis, I'm not going to challenge it. It is obviously a true statement. Then you kept the basic premise of the story, the recursively self-improving, foom, criticality AI. But instead of saving us, it was gonna kill us. 
I don't think either of these stories is right, and I don't think either of these stories is right for the same reason. I don't think AI can foom. I don't think AI can go critical. I don't think intelligence can go critical. I think this is an absolutely extraordinary claim. I'm not saying that recursive self-improvement is impossible. Recursive self-improvement is of course possible, humanity has done it. Every time you have used a tool to make a better tool, you have recursively self-improved. What I don't believe in is the AI that's sitting in a basement somewhere running on a thousand GPUs that is suddenly gonna crack the secret to thinking, recursively self-improve overnight, and then flood the world with diamond nanobots. This is an extraordinary claim and it requires extraordinary evidence, and I hand it over to you to deliver that evidence.
- EYEliezer Yudkowsky
Heh. Well, first, let me say that I don't think that the scenario of us all perishing to non-supermoral superintelligence requires that particularly rapid rate of ascent. It requires a large enough gap to open up with humanity that hasn't followed along in time. And why be- be... is this a crux? Be- be- before we, we start arguing about whether like self-improvement of things on the large internet-connected server clusters rather than basements that now prevail, um, before we start arguing about that part, let's first check where the disagreement lies. So from my perspective, if you've got a trillion beings that are, you know, sufficiently intelligent and smarter than us and not supermoral, I think that's kind of game over for us. It... even if you got there via a slow 10-year process instead of a 10-hour process or a 10-week, 10-day process or whatever, if you are at the end point where there's this like large mass of intelligence that doesn't care about you, I think that we are, we are dead. And I worry that our s- and, and more importantly, I worry that our successors will go on to do nothing very much worthwhile with the galaxies. So presumably you think that if things don't go quickly, then we're safe. I dispute that, and maybe that's the part we need to talk about.
- GHGeorge Hotz
Sure. Um, well, let's start with, let's give an approximate timeline. We don't, we don't need an exact timeline, but you seem to think this is gonna happen in your lifetime?
- EYEliezer Yudkowsky
That's my wild guess. It is far easier to predict the end point than all the details of the process that takes us, that take us there. Timing is one of those details. Timing is really, really hard. In 2004, I made a prediction that superintelligence would eventually be able to solve the, a special case of the protein folding problem, which is you get to choose the DNA sequence, but you wanna choose a DNA sequence that folds into a shape with a chemical property... and so I predicted that superintelligence would en- eventually be able to solve this easy special case of protein folding. Now, in reality, protein folding, for the much harder general case of biology, was cracked by AI around 2020 or so: AlphaFold 2. Um, there was no way I could've made the timing. I could not even have been confident that the bio- biological case of protein folding was going to be crackable by something so far short of superintelligence. A- of course, people at the time said it wasn't possible, you know, "the AI can't do this," like, how do you know this problem was even solvable, et cetera, et cetera. And, you know, I could try to explain how I knew, but that would be a technical story. I would point to the fact that a much easier s- sp- um, pardon me, that a much harder general case of the problem I pointed to was solved by a non-superintelligence not all that far in the future as, as proof that I, like, was making a prediction with a lot of safety margin. But in, in 2004, that would've been pretty hard to convince you of 'cause there wouldn't have actually been an AI solving the harder general case of protein folding, a- and the timing, you know, and this particular form of AI that did it, that's, like, incredibly hard. So do, do I, nonetheless, taking a wild guess, expect this to happen in my lifetime? Yeah. My, my wild guess is that I'm very confident of that, if I don't get run over by a truck.
- GHGeorge Hotz
Okay. Um, let's talk about AlphaFold. So I think the form does matter. I think the form is very important. Uh, when you were maybe talking about this in 2005, when I read all the Sequences Less Wrong stuff, 2010, you were thinking about Bayesian AIs that were going to figure out the world from first principles. Now, maybe not exactly that, but that's kind of where we were. But it's important how AlphaFold did it. AlphaFold did not start with the basic laws of physics and then figure out how proteins will fold. AlphaFold was trained on a huge amount of experimental data to extrapolate from that data. I don't doubt that these systems are going to get better. I don't doubt that they're eventually going to surpass us. I do doubt that they are going to have magical or godlike properties like solving the protein structure prediction problem from, you know, the, the, from quantum field theory, right? I, I-
- EYEliezer Yudkowsky
They don't have to.
- GHGeorge Hotz
Well-
- EYEliezer Yudkowsky
Right? Like, why, why do, what, they, they don't need to. There's protein-
- GHGeorge Hotz
Right.
- EYEliezer Yudkowsky
... structure data to learn from. They don't need to do it-
- GHGeorge Hotz
Yes.
- EYEliezer Yudkowsky
... from quantum field theory. Something can be not godlike and still more powerful than you, right? Like, like, like you look at the world, world chess champion Magnus Carlsen, who by objective measurements, by which I mean AI measurements, is probably the strongest human player who ever lived.
- GHGeorge Hotz
Sure.
- EYEliezer Yudkowsky
He's not God. He's not infinitely smart. He starts off on a chessboard with no more resources than you have, and he predictably wipes the board with you 'cause he doesn't have to be godlike to defeat you, or me, to be clear. I also can be defeated by beings short of godhood.
- GHGeorge Hotz
Um, Magnus Carlsen can't make diamond nanobots. Do we agree on that statement?
- EYEliezer Yudkowsky
I, uh, well, we, we haven't ac- well, not quickly. I'm not sure what happens if you give him a million-
- GHGeorge Hotz
(laughs)
- EYEliezer Yudkowsky
... if you give him a million years to work on it, then, then I'm not sure what happens. Like, I, I agree that, that he probably can't do it quickly.
- GHGeorge Hotz
Okay. Um, so let's talk about timing, because timing, uh, sort of matters a lot.
- EYEliezer Yudkowsky
Why?
- GHGeorge Hotz
Well, because it depends when we should shut it down, right? Well, it definitely does.
- EYEliezer Yudkowsky
I mean, if there's like a predictive, or if there, if there's some kind of predictable phenomenon where you, you can, like, dance around the bullets and know that, like, like, things will become dangerous at, like, this time, but, like, no earlier than that, and we're like, okay, if we put the following, like, precautions into place at this future time, which is not now, we're sure we're going to do it later, 'cause people sure do talk a lot of crap about stuff that they claim will be done later and that never gets done. But-
- GHGeorge Hotz
Sure.
- EYEliezer Yudkowsky
... so, so, you know, like, there's, there's this possibility that we could, like, be clever and dance around bullets if we knew exactly where the bullets were and we could actually coordinate on clever future strategies like that, which I don't think we can.
- GHGeorge Hotz
Okay.
- EYEliezer Yudkowsky
So that said, why do, why does timing matter?
- GHGeorge Hotz
Well, let's, let's start with the basic, and this is related to your question of why timing matters. Um, do you accept that it will not be hyperbolic, right? Staring into the Singularity talks about a hyperbolic sequence, a sequence that has a singularity, that has a finite-
- EYEliezer Yudkowsky
Important context, I wrote this when I was 16 years old.
- GHGeorge Hotz
Okay, so you-
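The hyperbolic sequence George is describing can be checked numerically: if the first doubling takes two years and each subsequent doubling takes half as long, the total time to arbitrarily many doublings converges, which is why the original essay's growth curve has a finite-time singularity. (The two-year starting interval is just the Moore's-law figure quoted in the transcript.)

```python
# The doubling-time series from "Staring into the Singularity":
# each doubling of capability takes half as long as the one before.
first_doubling_years = 2.0

total_years = 0.0
interval = first_doubling_years
for _ in range(50):  # 50 doublings is plenty to see the convergence
    total_years += interval
    interval /= 2.0

# Geometric series: 2 + 1 + 0.5 + ... converges to 4 years.
print(round(total_years, 6))  # -> 4.0
```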
- 15:00 – 30:00
- GHGeorge Hotz
we can talk about, but the center of international control. So I think there actually is potentially a bad scenario with AI, and I'll talk about what my bad scenario is. Um, if aliens were to show up here, we're dead, right? For the most part.
- EYEliezer Yudkowsky
That depends on the aliens.
- GHGeorge Hotz
But you-
- EYEliezer Yudkowsky
Um, I, I... If, if I know nothing else about the aliens, I might give them something like a 5% chance of, of being nice.
- GHGeorge Hotz
But they have the ability to kill us. I mean, they got here, right? They, they, they came-
- EYEliezer Yudkowsky
Oh, they absolutely have the ability to... Yeah. Anything that can cross interstellar distances can-
- GHGeorge Hotz
Yeah.
- EYEliezer Yudkowsky
... run you over without noticing.
- GHGeorge Hotz
Right. I, I didn't expect-
- EYEliezer Yudkowsky
Well, they, they would notice, but they wouldn't, you know, wouldn't be sleeping.
- GHGeorge Hotz
I, I, I didn't expect this to be a controversial point, but I agree-
- EYEliezer Yudkowsky
Yeah.
- GHGeorge Hotz
... with you that if you're talking about intelligences that are on the scale of billions of times smarter than humanity, yeah, we're in trouble, right?
- EYEliezer Yudkowsky
It's just not that hard to be billions of times smarter than humanity. I, I don't, I don't-
- GHGeorge Hotz
Oh, I very much disagree with this. Well, so I also... I somewhat object to the line between humanity and the machines, right? A lot of our intelligence is externalized. Um-
- EYEliezer Yudkowsky
Um, I, I mean, that's the way it is when you've got an intelligence over here that's using a bunch of responsive tools out there. There's, there's no que- there's only one center of gravity there. It, it, it's like looking at a star system and be- and being like, "Well, there's no point in drawing a firm boundary between the sun and the planets. They're all just in space." And, you know, like they're all just oc-... and, you know, sure, they're all ultimately just like objects in space, but one of them is far more massive than the others, and that's humans with the tools we have now.
- GHGeorge Hotz
Is your concern the bandwidth of the link? Is that what you're saying? Like, I'm not one with my tools because of the bandwidth of the link?
- EYEliezer Yudkowsky
Um-
- GHGeorge Hotz
Why are me and... Why am... Why are me and my computer not, like, a shared intelligence?
- EYEliezer Yudkowsky
Well, because there's one thi-... Because your brain is much more powerful than the computer at present. Like, not in terms of operations per second, but in terms of what you can do.
- GHGeorge Hotz
I'm not that sure about that. I think GPT-4 is... I'm a bit smarter than it, but not that-
- EYEliezer Yudkowsky
It's-
- GHGeorge Hotz
It's getting there.
- EYEliezer Yudkowsky
It's, it's, it's a little, but it-
- GHGeorge Hotz
Particularly, yeah.
- EYEliezer Yudkowsky
It's, it's not its own center of gravity. It's, it's like Jupiter to, like, the, the, the Mars of GPT-3 or something.
- GHGeorge Hotz
Yeah. I mean-
- EYEliezer Yudkowsky
But, you know, it's nowhere, nowhere near the sun.
- GHGeorge Hotz
An- another thing also is that, like, I don't think that capabilities... I don't think that intelligence falls on a nice line, right? Computers have been superhuman at adding for a long, long time. Computers are still far subhuman at plumbing, all right? And somewhere in the middle, we have things like chess and Go. Um, so when I mean that, like, like the tools that I use, the information-age tools make me way smarter, all right? And you can use the, the, like, operant definition of intelligence as being able to, like, what I could affect in the world, right? Like, again, it's not instantaneous. Your intelligence ain't gonna save you against a bear. But if you asked me to, like, with my modern stuff on my computer, understand the operation of an 1800s-era, like, Dutch East India Trading Company. Oh, I think I could understand their operations super well. I have spreadsheets, I can start to put things in. I can forecast trend lines. So my point is, it is a form of intelligence that's far beyond human intelligence, a human plus a computer.
- EYEliezer Yudkowsky
Um, a human and a chess engine is, like, no better than a, a modern chess engine alone. The era of centaur chess is-
- 30:00 – 45:00
- EYEliezer Yudkowsky
whoa, whoa, whoa, whoa, you, you have AI, or does-
- GHGeorge Hotz
Yeah.
- EYEliezer Yudkowsky
... the AI have you?
- GHGeorge Hotz
Ah.
- EYEliezer Yudkowsky
Are you... are, are these... is this one of the little moon AIs orbiting you, and you're gonna go up against the sun? Or do you think you have the sun, Mr. Planet?
- GHGeorge Hotz
Um, I see a large diversity of AIs. Um, so maybe I'll give some arguments for, like, why I think that AI is inherently going to be at least... I don't, I can't postulate anything about an intelligence that is 1e9, 1e12 times smarter than humanity, right? But we agree that those things aren't coming any time soon.
- EYEliezer Yudkowsky
Uh, we don't agree on that.
- GHGeorge Hotz
Oh, okay.
- EYEliezer Yudkowsky
And we also don't agree with you that, that you cannot, like, say anything about it. There's like, the, like instrumental convergence I think was one of the things you agreed upon.
- GHGeorge Hotz
Sure.
- EYEliezer Yudkowsky
We can agree that, you know, not just that it obeys the laws of physics, but also that if, um, like, hmm, how, how to put it? Like there, there is a certain, like, argument, there are cer- cer- certain, like, premise conclusion thing going on here, where like the premise is like, like you do need some amount of like ability to choose actions that lead to results in order to get the instrumental convergence thing going on. But things that are like super effective at choosing actions that lead to results will tend to want to preserve their goals and-
- GHGeorge Hotz
Okay.
- EYEliezer Yudkowsky
... acquire more resources-
- GHGeorge Hotz
Okay.
- EYEliezer Yudkowsky
... and that sort of thing.
- GHGeorge Hotz
Let's, let's, yeah. Again, where it becomes blurry to me, and this is also why timelines matter. So like where we are right now, um, there's about two zettaFLOPS of compute in the world, and if you think that humans have about 20 petaFLOPS-
- EYEliezer Yudkowsky
Too much. There'll be less. Sorry.
- GHGeorge Hotz
Ah, okay, okay, okay. Well, hey, how about this? How about this? Humans, if you think 20 petaFLOPS is an est- is an appropriate estimate, have 160,000 zettaFLOPS. Should there be less of those too?
- EYEliezer Yudkowsky
No, more, a lot more of those. More of those included.
- GHGeorge Hotz
Okay. All right, all right. Uh, more humans, less computers. All right, all right. I, I see. At least, at least it's consistent. (laughs) Um, but okay, so, so where we are right now is that there's 80,000 times more human compute in the world than silicon compute.
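The compute comparison in this exchange is straightforward arithmetic, and it does come out to George's figures given his stated assumptions (20 petaFLOPS per human brain and 2 zettaFLOPS of worldwide silicon are his estimates, not established numbers; 8 billion is the rough world population):

```python
# George's back-of-envelope compute comparison, under his own assumptions.
PETA = 1e15
ZETTA = 1e21

human_brain_flops = 20 * PETA   # assumed per-human estimate
world_population = 8e9          # rough world population
silicon_flops = 2 * ZETTA       # assumed worldwide silicon total

total_human_flops = world_population * human_brain_flops
print(total_human_flops / ZETTA)          # ~160,000 zettaFLOPS of human compute
print(total_human_flops / silicon_flops)  # ~80,000x more human than silicon compute
```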
- EYEliezer Yudkowsky
It's a, it's a misleading figure because of how poorly we aggregate. If you can, like, make one large thing, that potentially beats eight billion small things, even if the small things collectively have a larger mass, just like Kasparov versus The World.
- GHGeorge Hotz
Um, GPT-4 is a mixture of experts. GPT-4 is eight small things, not one big thing.
- EYEliezer Yudkowsky
It'll be interesting to see if that trend continues.
- GHGeorge Hotz
(laughs)
- EYEliezer Yudkowsky
I, I, I sure don't believe it holds in the limit.
- GHGeorge Hotz
So I'm not sure. And actually, this is another like... Let's talk about, let's talk about AIs rewriting their own source code. This is a common thing you bring up, right?
- EYEliezer Yudkowsky
I mean, I do talk a bit less about it nowadays, but I used to talk about it a lot, yeah.
- GHGeorge Hotz
Um, do you talk less about it now because you see how expensive and long the training runs are?
- EYEliezer Yudkowsky
Uh, that's not why I talk less about it now. (laughs)
- GHGeorge Hotz
Oh. Why, why do you talk less about it now?
- 45:00 – 1:00:00
- EYEliezer Yudkowsky
has calculated that they can do it, and-
- GHGeorge Hotz
But why?
- EYEliezer Yudkowsky
... waiting for the shallow moment.
- GHGeorge Hotz
Wh- what... Again, like I, I think... Okay, how about this? If I was an AI that just transcended, and I don't want to anthropomorphize the AI, but my first thought wouldn't be, "Take the atoms from the humans." Right?
- EYEliezer Yudkowsky
So the actual first thought is more along something... Is more along the lines of, "If I let the humans keep running, they will build other superintelligences that are competitors." And that's where you lose large sections of galaxy. And th- and that's why it doesn't want you doing that part.
- GHGeorge Hotz
Yeah, but what if... Okay. See, you know, I have a threat model. I'm, I'm, I'm on the line of, of, of doomer and not doomer about AI. But my threat model from AI looks so much less like it's gonna kill us, and a lot more like it's gonna give us everything we ever wanted.
- EYEliezer Yudkowsky
Um, you know, th- uh, even if you have derived some worrisome thing from that scenario, well, ev- you know, first of all, wants are infinite, resources are finite, et cetera, et cetera, but, um, leaving that aside, um-
- GHGeorge Hotz
You don't get a real castle, you get a virtual castle, but you also get told it's real.
- EYEliezer Yudkowsky
This, this, th- w- we're, we are, we are not... Like, we are... I, I would, I would hope to snap people out of the frame of mind of playing pr- pretend in a schoolyard where you get to decide what game you're going to play, and talk about what reality we live in. So, like, you don't get to say, "I would rather worry about this thing than the other thing," 'cause reality is not put together in a way where it can only throw one thing at you at a time. Like, if the doctor tells you you've got cancer, you don't get to say, "I'd rather worry about my stuffy nose." So if there are problems that result from moon-sized AIs giving us, the planets, a bunch of stuff that we want, that does not prevent the sun-sized AIs from crushing us later.
- GHGeorge Hotz
I agree that after the AIs have taken all the matter in the solar system and built a Dyson sphere around the sun, okay, now I'm a little worried they're gonna come back and try to take my atoms. But until that happens, like again, I'm not the easy target, right? I don't have to run faster than the bear. I gotta run faster than the slowest guy running from the bear. And it turns out the slowest guy running from the bear is Jupiter.
- EYEliezer Yudkowsky
It's at least... Well, it's at least going to take your GPUs, so you can't build a super intelligence that competes with it for the rest of that solar system.
- GHGeorge Hotz
But, but now that sounds a whole lot more like AIs are gonna fight with other AIs to take their GPUs. Now this I believe.
- EYEliezer Yudkowsky
Not if they're... Not if everyone involved is smart. Somebody has to be stupid for there to be a war that isn't just like a war of extermination. Like, anytime you have a combat, that's like playing defect-defect in the prisoner's dilemma. There's a, there's a... It's not on the Pareto frontier. There's an outcome that both sides would prefer to the combat. And humans are not at a level where they can predict the other mind predicting them and do a logical handshake and s- and say, like, "Let's move to the Pareto fr- frontier and divide the gains." Humans are not on a level where they can negotiate the- with each other. Sufficiently smart things are on a level where, um, I basically don't expect them to fight. Some- sometimes they might exterminate one another if the other one cannot offer any defense. If, like, the extermination outcome is on the Pareto frontier, in the sense that it would not be any better for the conquering party if the, like, defending party put up zero resistance instead of some resistance, then the defending party has nothing to offer; they just get eaten. But things that can damage each other in combat, I think, will typically choose not to fight and will instead, like, divide the gains from not fighting, if they're smart enough. Humans are not that smart.
- GHGeorge Hotz
I'm so glad you brought up the prisoner's dilemma thing. You know, I actually came to MIRI, um, in 2014. And, uh, I worked on exactly that problem. I didn't make any progress, I didn't do anything. I read the papers and thought it was cool, um, about two systems being able to assuredly cooperate by exchanging each other's source code, and it is a very cool theoretical problem. Now, what I think is gonna happen in practice is your two systems are both gonna be large, inscrutable matrices.
- EYEliezer Yudkowsky
(laughs)
- GHGeorge Hotz
How it is possi- ... Well, but this is exactly-
- EYEliezer Yudkowsky
Oh, oh no. W- well, I, I think large, inscrutable matrices are, y- you know, I- they're neither black-
- GHGeorge Hotz
And I'm gonna send him my source, source code so he can exploit me? No way.
- EYEliezer Yudkowsky
No, no. The, the, the, the, the, the superintelligences are not large, inscrutable matrices.
- GHGeorge Hotz
Oh.
- EYEliezer Yudkowsky
Y- y- y- you don't wanna run yourself on that crap. Oh, du- ... That, that, that-
- GHGeorge Hotz
Wait, yeah, but you have to.
- EYEliezer Yudkowsky
That's the kind of horror of what, you know, like, like who wants to be-
- GHGeorge Hotz
Wow.
- EYEliezer Yudkowsky
... built out of disintegrating goo? Who wants to be built out of a giant, inscrutable matrix either?
- GHGeorge Hotz
Wa- ... I'm built out of giant, inscrutable matrices (laughs) .
- EYEliezer Yudkowsky
No, you're not. You're built out of gooey neurons, though it's also a horror story. No superintelligence wants to be built out of that stuff either.
- GHGeorge Hotz
I think I could be modeled as giant, inscrutable matrices too.
- EYEliezer Yudkowsky
I mean, anything can be modeled out of giant, inscrutable matrices, and the-
- GHGeorge Hotz
Yeah.
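The source-code-exchange idea George brought up earlier (the MIRI line of work on programs that can assuredly cooperate) can be caricatured in a few lines. The "CliqueBot" below is the standard toy illustration, not the actual proof-based construction from the papers: each agent is shown the other's source, and a program that cooperates exactly with copies of itself gets mutual cooperation against its twin while still defecting against a defector.

```python
# Toy "program equilibrium" sketch: agents are (source, policy) pairs,
# and each policy sees the opponent's source text before choosing a move.
CLIQUE_SRC = "cooperate iff opponent's source equals my source"
DEFECT_SRC = "always defect"

def clique_bot(opponent_src: str) -> str:
    # Cooperate only with an exact copy of itself ("CliqueBot").
    return "C" if opponent_src == CLIQUE_SRC else "D"

def defect_bot(opponent_src: str) -> str:
    # Ignores the opponent's source entirely.
    return "D"

print(clique_bot(CLIQUE_SRC))  # -> C: two CliqueBots cooperate
print(clique_bot(DEFECT_SRC))  # -> D: CliqueBot defects against DefectBot
print(defect_bot(CLIQUE_SRC))  # -> D
```

The real results use logical reasoning about the opponent's program rather than exact string matching, which is what makes them robust; the string-equality trick here only conveys the flavor of the setup.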
- 1:00:00 – 1:15:00
- EYEliezer Yudkowsky
the kind of, this, uh, you know, like, just doing gradient descent at getting better at whatever job will tend to grind out all the cases of it stepping on its own feet.
- GHGeorge Hotz
Yeah, but this violates orthogonality, right? Like, you're gonna have an AI that, like, not all AIs are gonna be, like, the only way you're gonna get AIs where they're all brutally optimal is if they fight each other in some terrible competition, right? And that's-
- EYEliezer Yudkowsky
How will that help anything?
- GHGeorge Hotz
Well, because you're gonna get AIs randomly all over the space, right? And some of them are not gonna be optimal. Some of them are gonna be completely irrational idiots.
- EYEliezer Yudkowsky
Like GPT-4?
- GHGeorge Hotz
Okay. Sure, right? I mean-
- EYEliezer Yudkowsky
Yeah, GPT-4 is not very powerful.
- GHGeorge Hotz
Well, yeah, but, uh, what I'm not seeing is this, like, when all the AIs are going to converge and suddenly become hyper-rational, when we move away from weight matrices and when we move toward Bayesian updates and we-
- EYEliezer Yudkowsky
I, I-
- GHGeorge Hotz
I just don't-
- EYEliezer Yudkowsky
... I don't, I, to be clear, I don't presently model that anybody's gonna get away from giant matrices before the end of the world.
- GHGeorge Hotz
... okay. So, let, let's talk about, then, this. So, you don't think the giant matrix thing can end the world, right? You think that the giant matrix-
- EYEliezer Yudkowsky
No, I think a giant matrix thing that's smart enough-
- GHGeorge Hotz
... can invent the next thing. Okay.
- EYEliezer Yudkowsky
Or, or possibly do it directly.
- GHGeorge Hotz
Okay. Well, I mean, let's also, like, let's really drill down on what these end of world scenarios are. Do you want to posit, like, protein synthesis and diamond nanobots?
- EYEliezer Yudkowsky
I mean, if I'm going to lose a bunch of viewers that way, I might have to pick some, you know, like, easier to understand process. Like, we're talking about like-
- GHGeorge Hotz
Okay.
- EYEliezer Yudkowsky
... 1823 versus 2023. You know, if you, if you're trying to explain it to 1823, maybe you just talk about, like, the powerful explosive artillery shells, and you don't mention the nuclear weapons.
- GHGeorge Hotz
Sure.
- EYEliezer Yudkowsky
'Cause they don't get that part. So, similarly, you know, like, if we don't wanna start diving into this book over here, then maybe, may- maybe we want to talk about something like, you know, like standard biological weapons or something.
- GHGeorge Hotz
(laughs) .
- EYEliezer Yudkowsky
But, you know, but in, in real life, sure. In r- in real life, it, it, you know, doesn't-
- GHGeorge Hotz
Well-
- EYEliezer Yudkowsky
... use the squishy stuff.
- GHGeorge Hotz
No, I'm not trying to, I'm not trying to say that, that nanobots are impossible. What I'm trying to say is that nanobots are extremely, extremely hard, right?
- EYEliezer Yudkowsky
Why?
- GHGeorge Hotz
And... To figure out?
- EYEliezer Yudkowsky
Why?
- GHGeorge Hotz
Well, 'cause it, 'cause it's a really hard search problem, right?
- 1:15:00 – 1:20:58
- EYEliezer Yudkowsky
then we've got a bunch of water in our oceans that can be turned into fusion energy. And the main limit on that is how fast Earth can radiate heat once you've used all the existing stuff as a heat sink. That's not very survivable. Like that, that kills us off as a side effect.
- GHGeorge Hotz
This is, again, assuming that this thing is a god, not kinda close to humans, but a bit smarter. And yes, it might get to a god, but the timing matters. It's not 10 years.
- EYEliezer Yudkowsky
Humans-
- GHGeorge Hotz
It's 10,000.
- EYEliezer Yudkowsky
Humans are, humans are a little tiny bit smarter than chimpanzees. And we have nuclear weapons, and they don't. The amount-
- GHGeorge Hotz
Humans are more-
- EYEliezer Yudkowsky
... of godhood, the amount of godhood you get per increment of brain power looks like six times the prefrontal cortex on humans versus chimpanzees, and they got sticks and we got nuclear weapons.
- GHGeorge Hotz
Humans are-
- EYEliezer Yudkowsky
Like, that's how much godhood you get per factor increase of prefrontal cortex, keeping the same architecture.
- GHGeorge Hotz
Humans are general purpose. Chimpanzees are not. You can make, you can take Deep Blue, the chess playing computer, and scale that up to the size of the machine that trained GPT-4. And yes, you'll get a better chess playing machine, but it's not gonna be able to understand whether a picture has a cat in it or not. The training algorithm definitely matters, right? So does-
- EYEliezer Yudkowsky
Yeah. I, I, I... Humans are more general than chimps. And yet, when we encounter new problems, we can't just like rewrite our own code to handle those. You can see, you can see how there can be possible minds with much stronger sparks of generality than what we have.
- GHGeorge Hotz
Yes. I, I was gonna-
- EYEliezer Yudkowsky
More creative, less time on... More able to think outside the box.
- GHGeorge Hotz
Yes.
- EYEliezer Yudkowsky
And plausibly just able to do a bunch of thinking very quickly, and also-
- GHGeorge Hotz
Able to boil the oceans overnight for fusion? No. Able to build diamond nanobots? No. Able to outthink us, beat us at chess?
- EYEliezer Yudkowsky
Build... Building diamond nanobots gets you to, to, gets you to, to self-replicating fusion factories pretty quickly.
- GHGeorge Hotz
Well, yeah. But you can't build... Can you build diamond nanobots? You wanna start a diamond nanobot company? (laughs)
- EYEliezer Yudkowsky
I can't, I... No, but I can't solve protein folding either inside my own head.
- GHGeorge Hotz
All right.
- EYEliezer Yudkowsky
This, this problem is predictably solvable in the same way that, in 2004, I called that a special case of the protein folding problem would eventually be solvable by superintelligence. That was the heart-
- GHGeorge Hotz
Using a lot of, using a lot of experiment, using the entire historical corpus of human experiment. Maybe it can build nanobots. You know, who knows?
- EYEliezer Yudkowsky
Using a bunch of, yeah, yeah.
- GHGeorge Hotz
Can it one-shot them?
- EYEliezer Yudkowsky
It's, it's using a bunch of past survey data, and no experiments. (laughs) No new experiments, just a bunch of sur- You know, no causal experiments, just a bunch of past survey data.
- GHGeorge Hotz
It, it'll
- DPDwarkesh Patel
George, may I?
- GHGeorge Hotz
Yeah.
- DPDwarkesh Patel
C- can I ask real quick? So you s- maybe this was already implied, but you said, well, you know, once they build the Dyson spheres, then it would potentially be worth it to come back for the atoms that humans contained. Before that-
- GHGeorge Hotz
Yeah.
Episode duration: 1:34:29
Transcript of episode 6yQEA18C-XI