Dwarkesh Podcast

George Hotz vs Eliezer Yudkowsky

George Hotz and Eliezer Yudkowsky will hash out their positions on AI safety, acceleration, and related topics. You can watch live on Twitter as well: https://twitter.com/i/broadcasts/1nAJErpDYgRxL

Dwarkesh Patel (host) · George Hotz (guest) · Eliezer Yudkowsky (guest)
Aug 15, 2023 · 1h 34m · Watch on YouTube ↗


  1. 0:00–15:00

    Okay. We are gathered…

    1. DP

      Okay. We are gathered here to witness George Hotz and Eliezer Yudkowsky debate and discuss, live on Twitter and YouTube, AI safety and related topics. You guys already know who George and Eliezer are, so I, I don't feel like introduction is necessary. I'm Dwarkesh. I'll be moderating. I'll mostly stay out of the way, um, except to kick things off by letting George explain his basic position. And we'll take things from there. George, I'll kick it off to you.

    2. GH

      Sure. Um, so I took an existentialism class in high school, and you'd read about these people, Sartre, Kierkegaard, Nietzsche, and you wonder, "Who were these people alive today?" And I think I'm sitting across from one of them now. Um, rationality and the Sequences, uh, this whole field, the whole Less Wrong cinematic universe, uh, have impacted so many people's lives in, I think, a very positive way, including mine. Um, not only are you a philosopher, you're also a, a great storyteller. Um, there's two books that I've picked up and, you know, it was like crack. I couldn't put them down. Uh, one was Atlas Shrugged and the other one was Harry Potter and the Methods of Rationality. Um, it's a great book. Now, those are fictional stories. Um, you've also told some stories pertaining to the real world. Um, one was a story you told when you were younger. I remember the day I found Staring into the Singularity when I was 15. And it starts talking about Moore's law and how Moore's law is fundamentally a human law that says humans double the power of processors every two years. So once computers are doing it, it's going to be two years, but then next time it'll be one year, and then six months, and then three months, and then 1.5 and so on. And this is a hyperbolic sequence. Um, this is a singularity, and that's why it's called Staring into the Singularity. Then this document said that we were gonna... you know, the AI was gonna do wonderful things for us, we were gonna go colonize the universe, we were gonna go, you know, go forth and do all things till the end of all ages. Um, then you changed your views: super intelligence does not imply super morality. The orthogonality thesis, I'm not going to challenge it. It is obviously a true statement. Then you kept the basic premise of the story, the recursively self-improving, foom, criticality AI. But instead of saving us, it was gonna kill us. I don't think either of these stories is right, and I don't think either of these stories is right for the same reason. I don't think AI can foom. I don't think AI can go critical. I don't think intelligence can go critical. I think this is an absolutely extraordinary claim. I'm not saying that recursive self-improvement is impossible. Recursive self-improvement is of course possible; humanity has done it. Every time you have used a tool to make a better tool, you have recursively self-improved. What I don't believe in is the AI that's sitting in a basement somewhere running on a thousand GPUs that is suddenly gonna crack the secret to thinking, recursively self-improve overnight, and then flood the world with diamond nanobots. This is an extraordinary claim and it requires extraordinary evidence, and I hand it over to you to deliver that evidence.
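
      The "hyperbolic sequence" being described is just the arithmetic of doubling intervals that keep halving: 2 years + 1 year + 6 months + 3 months + ... sums to a finite 4 years, so infinitely many doublings would pile up before a fixed date, which is the finite-time "singularity" of the essay's title. A minimal sketch of that arithmetic (the 2-year starting interval is the figure quoted above; everything else is illustrative):

      ```python
      # Sketch: ever-halving doubling times imply a finite-time "singularity".
      # Assumes the schedule quoted in the conversation: the first doubling takes
      # 2 years and each later doubling takes half as long as the one before it.

      def years_for_n_doublings(n: int, first_interval: float = 2.0) -> float:
          """Total years elapsed after n successive doublings of compute."""
          return sum(first_interval / 2**k for k in range(n))

      for n in (1, 5, 10, 20, 40):
          print(f"{n:>3} doublings -> {years_for_n_doublings(n):.6f} years")

      # The partial sums 2 + 1 + 0.5 + ... approach 4 and never exceed it, so on
      # this schedule every doubling there will ever be happens within 4 years.
      ```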

    3. EY

      Heh. Well, first, let me say that I don't think that the scenario of us all perishing to non-super moral super intelligence requires that particularly rapid rate of ascent. It requires a large enough gap to open up with a humanity that hasn't followed along in time. And why be- be... is this a crux? Be- be- before we, we start arguing about whether like self-improvement of things on the large internet connected server clusters rather than basements that now prevail, um, before we start arguing about that part, let's first check where the disagreement lies. So from my perspective, if you've got a trillion beings that are, you know, sufficiently intelligent and smarter than us and not super moral, I think that's kind of game over for us. It... even if you got there via a slow 10 year process instead of a 10 hour process or a 10 week or 10 day process or whatever, if you are at the end point where there's this like large mass of intelligence that doesn't care about you, I think that we are, we are dead. And I worry that our s- and, and more importantly, I worry that our successors will go on to do nothing very much worthwhile with the galaxies. So presumably you think that if things don't go quickly, then we're safe. I dispute that, and maybe that's the part we need to talk about.

    4. GH

      Sure. Um, well, let's start with, let's give an approximate timeline. We don't, we don't need an exact timeline, but you seem to think this is gonna happen in your lifetime?

    5. EY

      That's my wild guess. It is far easier to predict the end point than all the details of the process that takes us, that take us there. Timing is one of those details. Timing is really, really hard. In 2004, I made a prediction that super intelligence would eventually be able to solve the, a special case of the protein folding problem, which is you get to choose the DNA sequence, but you wanna choose a DNA sequence that folds into a shape with a chemical property.... and so I predicted that super intelligence would en- eventually be able to solve this easy special case of protein folding. Now, in reality, protein folding was cracked f- for the much harder general case of biology, was cracked by AI come about 2020 or so, AlphaFold 2. Um, there was no way I could've made the timing. I could not even have been confident that the bio- biological case of protein folding was going to be crackable by something so much shorter of super intelligence. A- of course, people at the time said it wasn't possible, you know, "the AI can't do this," like, "how do you know this problem was even solvable," et cetera, et cetera. And, you know, I could try to explain how I knew, but that would be a technical story. I would point to the fact that a much easier s- sp- um, pardon me, that a much harder general case of the problem I pointed to was solved by a non-super intelligence not all that far in the future as, as proof that I, like, was making a prediction with a lot of safety margin. But in, in 2004, that would've been pretty hard to convince you of 'cause there wouldn't have actually been an AI solving the harder general case of protein folding, a- and the timing, you know, the, the, or, and this particular form of AI that did it, that's, like, incredibly hard. So do, do I, nonetheless, taking a wild guess, expect this to happen in my lifetime? Yeah. My, my wild guess is that I'm very confident of that, if I don't get run over by a truck.

    6. GH

      Okay. Um, let's talk about AlphaFold. So I think the form does matter. I think the form is very important. Uh, when you were maybe talking about this in 2005, when I read all the Sequences Less Wrong stuff, 2010, you were thinking about Bayesian AIs that were going to figure out the world from first principles. Now, maybe not exactly that, but that's kind of where we were. But it's important how AlphaFold did it. AlphaFold did not start with the basic laws of physics and then figure out how proteins will fold. AlphaFold was trained on a huge amount of experimental data to extrapolate from that data. I don't doubt that these systems are going to get better. I don't doubt that they're eventually going to surpass us. I do doubt that they are going to have magical or godlike properties like solving the protein structure prediction problem from, you know, the, the, from quantum field theory, right? I, I-

    7. EY

      They don't have to.

    8. GH

      Well-

    9. EY

      Right? Like, why, why do, what, they, they don't need to. There's protein-

    10. GH

      Right.

    11. EY

      ... structure data to learn from. They don't need to do it-

    12. GH

      Yes.

    13. EY

      ... from quantum field theory. Something can be not godlike and still more powerful than you, right? Like, like, like you look at the world, world chess champion Magnus Carlsen, who by objective, by which I mean AI measurements is probably the strongest human player who ever lived.

    14. GH

      Sure.

    15. EY

      He's not God. He's not infinitely smart. He starts off on a chessboard with no more resources than you have, and he predictably wipes the board with you 'cause he doesn't have to be godlike to defeat you or me, to be clear. I also can be defeated by something short of godhood.

    16. GH

      Um, Magnus Carlsen can't make diamond nanobots. Do we agree on that statement?

    17. EY

      I, uh, well, we, we haven't ac- well, not quickly. I'm not sure what happens if you give him a million-

    18. GH

      (laughs)

    19. EY

      ... if you give him a million years to work on it, then, then I'm not sure what happens. Like, I, I agree that, that he probably can't do it quickly.

    20. GH

      Okay. Um, so let's talk about timing, because timing, uh, sort of matters a lot.

    21. EY

      Why?

    22. GH

      Well, because it depends when we should shut it down, right? Well, it definitely does.

    23. EY

      I mean, if there's like a predictive, or if there, if there's some kind of predictable phenomenon where you, you can, like, dance around the bullets and know that, like, like, things will become dangerous at, like, this time, but, like, no earlier than that, and we're like, okay, if we put the following, like, precautions into place at this future time, which is not now, we're sure we're going to do it later, 'cause people sure do talk a lot of crap about stuff that they claim will be done later and that never gets done. But-

    24. GH

      Sure.

    25. EY

      ... so, so, you know, like, there's, there's this possibility that we could, like, be clever and dance around bullets if we knew exactly where the bullets were and we could actually coordinate on clever future strategies like that, which I don't think we can.

    26. GH

      Okay.

    27. EY

      So that said, why do, why does timing matter?

    28. GH

      Well, let's, let's start with the basics, and this is related to your question of why timing matters. Um, do you accept that it will not be hyperbolic, right? Staring into the Singularity talks about a hyperbolic sequence, a sequence that has a singularity, that has a finite-

    29. EY

      Important context, I wrote this when I was 16 years old.

    30. GH

      Okay, so you-

  2. 15:00–30:00

    That depends on the…

    1. GH

      we can talk about, but the center of international control. So I think there actually is potentially a bad scenario with AI, and I'll talk about what my bad scenario is. Um, if aliens were to show up here, we're dead, right? For the most part.

    2. EY

      That depends on the aliens.

    3. GH

      But you-

    4. EY

      Um, I, I... If, if I know nothing else about the aliens, I might give them something like a 5% chance of, of being nice.

    5. GH

      But they have the ability to kill us. I mean, they got here, right? They, they, they came-

    6. EY

      Oh, they absolutely have the ability to... Yeah. Anything that can cross interstellar distances can-

    7. GH

      Yeah.

    8. EY

      ... run you over without noticing.

    9. GH

      Right. I, I didn't expect-

    10. EY

      Well, they, they would notice, but they wouldn't, you know, wouldn't be sleeping.

    11. GH

      I, I, I didn't expect this to be a controversial point, but I agree-

    12. EY

      Yeah.

    13. GH

      ... with you that if you're talking about intelligences that are on the scale of billions of times smarter than humanity, yeah, we're in trouble, right?

    14. EY

      It's just not that hard to be billions of times smarter than humanity. I, I don't, I don't-

    15. GH

      Oh, I very much disagree with this. Well, so I also... I somewhat object to the line between humanity and the machines, right? A lot of our intelligence is externalized. Um-

    16. EY

      Um, I, I mean, that's the way it is when you've got an intelligence over here that's using a bunch of responsive tools out there. There's, there's no que- there's only one center of gravity there. It, it, it's like looking at a star system and be- and being like, "Well, there's no point in drawing a firm boundary between the sun and the planets. They're all just in space." And, you know, like they're all just oc-... and, you know, sure, they're all ultimately just like objects in space, but one of them is far more massive than the others, and that's humans with the tools we have now.

    17. GH

      Is your concern the bandwidth of the link? Is that what you're saying? Like, I'm not one with my tools because of the bandwidth of the link?

    18. EY

      Um-

    19. GH

      Why are me and... Why am... Why are me and my computer not, like, a shared intelligence?

    20. EY

      Well, because there's one thi-... Because your brain is much more powerful than the computer at present. Like, not in terms of operations per second, but in terms of what you can do.

    21. GH

      I'm not that sure about that. I think GPT-4 is... I'm a bit smarter than it, but not that-

    22. EY

      It's-

    23. GH

      It's getting there.

    24. EY

      It's, it's, it's a little, but it-

    25. GH

      Particularly, yeah.

    26. EY

      It's, it's not its own center of gravity. It's, it's like Jupiter to, like, the, the, the Mars of GPT-3 or something.

    27. GH

      Yeah. I mean-

    28. EY

      But, you know, it's nowhere, nowhere near the sun.

    29. GH

      An- another thing also is that, like, I don't think that capabilities... I don't think that intelligence falls on a nice line, right? Computers have been superhuman at adding for a long, long time. Computers are still far subhuman at plumbing, all right? And somewhere in the middle, we have things like chess and Go. Um, so when I mean that, like, like the tools that I use, the information age tools make me way smarter, all right? And you can use the, the, like, operant definition of intelligence and being able to, like, what I could affect in the world, right? Like, again, it's not instantaneous. Your intelligence ain't gonna save you against a bear. But if you asked me to, like, with my modern stuff on my computer, understand the operation of a-... 1800s era, like, Dutch India Trading Company. Oh, I think I could understand their operations super well. I have spreadsheets, I can start to put things in. I can forecast trend lines. So my point is, it is a form of intelligence that's far beyond human intelligence, a human plus a computer.

    30. EY

      Um, a human and a chess engine is, like, a, a modern chess engine. The era of centaur chess is-

  3. 30:00–45:00

    Yeah. …

    1. EY

      whoa, whoa, whoa, whoa, you, you have AI, or does-

    2. GH

      Yeah.

    3. EY

      ... the AI have you?

    4. GH

      Ah.

    5. EY

      Are you... are, are these... is this one of the little moon AIs orbiting you, and you're gonna go up against the sun? Or do you think you have the sun, Mr. Planet?

    6. GH

      Um, I see a large diversity of AIs. Um, so maybe I'll give some arguments for, like, why I think that AI is inherently going to be at least...... I don't, I can't postulate anything about an intelligence that is 1e9, 1e12 smarter than humanity, right? But we agree that those things aren't coming any time soon.

    7. EY

      Uh, we don't agree on that.

    8. GH

      Oh, okay.

    9. EY

      And we also don't agree with you that, that you cannot, like, say anything about it. There's like, the, like instrumental convergence I think was one of the things you agreed upon.

    10. GH

      Sure.

    11. EY

      We can agree that, you know, not just that it obeys the laws of physics, but also that if, um, like, hmm, how, how to put it? Like there, there is a certain, like, argument, there are cer- cer- certain, like, premise conclusion thing going on here, where like the premise is like, like you do need some amount of like ability to choose actions that lead to results in order to get the instrumental convergence thing going on. But things that are like super effective at choosing actions that lead to results will tend to want to preserve their goals and-

    12. GH

      Okay.

    13. EY

      ... acquire more resources-

    14. GH

      Okay.

    15. EY

      ... and that sort of thing.

    16. GH

      Let's, let's, yeah. Again, where it becomes blurry to me, and this is also why timelines matter. So like where we are right now, um, there's about two zettaFLOPS of compute in the world, and if you think that humans have about 20 petaFLOPS-

    17. EY

      Too much. There'll be less. Sorry.

    18. GH

      Ah, okay, okay, okay. Well, hey, how about this? How about this? Humans, if you think 20 petaFLOPS is an est- is an appropriate estimate, have 160,000 zettaFLOPS. Should there be less of those too?

    19. EY

      No, more, a lot more of those. More of those included.

    20. GH

      Okay. All right, all right. Uh, more humans, less computers. All right, all right. I, I see. At least, at least it's consistent. (laughs) Um, but okay, so, so where we are right now is that there's 80,000 times more human compute in the world than silicon compute.
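
      The 160,000-zettaFLOPS and 80,000x figures follow directly from the estimates quoted in this exchange. A quick sketch of the arithmetic (the 20 petaFLOPS-per-human and 2-zettaFLOPS-of-silicon numbers are the ones stated above; the 8 billion population figure is an added assumption):

      ```python
      # Back-of-envelope check of the figures in this exchange. The per-human and
      # silicon estimates are the ones quoted above; the population is an assumption.

      PFLOPS = 1e15
      ZFLOPS = 1e21

      human_flops = 20 * PFLOPS        # quoted estimate of one human brain's compute
      population = 8e9                 # assumed world population
      silicon_flops = 2 * ZFLOPS       # quoted estimate of all silicon compute

      total_human = human_flops * population
      print(total_human / ZFLOPS)        # ~160,000 zettaFLOPS of "human compute"
      print(total_human / silicon_flops) # ~80,000x more human compute than silicon
      ```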

    21. EY

      It's a, it's a misleading figure because of how poorly we aggregate. The, you can like, if you can like make one large thing, that potentially beats eight billion small things, even if the small things collectively have a larger mass, just like Kasparov versus The World.

    22. GH

      Um, GPT-4 is a mixture of experts. GPT-4 is eight small things, not one big thing.
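
      A brief aside on the term for readers who haven't met it: a "mixture of experts" model splits its feed-forward capacity into several smaller expert networks and uses a learned router to send each input to only a few of them, so most parameters sit idle on any one token. The toy layer below only illustrates that routing idea; the sizes, the NumPy implementation, and the details of GPT-4 itself (which are unconfirmed rumors) are all assumptions:

      ```python
      # Toy top-k mixture-of-experts layer, to illustrate what "eight small things"
      # routing means. All sizes and weights here are illustrative, not GPT-4's.
      import numpy as np

      rng = np.random.default_rng(0)
      D, N_EXPERTS, TOP_K = 16, 8, 2   # toy dimensions; "8 experts" echoes the claim above

      experts = [rng.normal(size=(D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
      router = rng.normal(size=(D, N_EXPERTS)) / np.sqrt(D)

      def moe_layer(x: np.ndarray) -> np.ndarray:
          """Send one token vector to its top-k experts and mix their outputs."""
          logits = x @ router                              # router score per expert
          top = np.argsort(logits)[-TOP_K:]                # indices of the chosen experts
          weights = np.exp(logits[top] - logits[top].max())
          weights /= weights.sum()                         # softmax over the chosen experts
          return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

      out = moe_layer(rng.normal(size=D))
      print(out.shape)   # (16,) -- only 2 of the 8 experts touched this token
      ```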

    23. EY

      It'll be interesting to see if that trend continues.

    24. GH

      (laughs)

    25. EY

      I, I, I sure don't believe it holds in the limit.

    26. GH

      So I'm not sure. And actually, this is another like... Let's talk about, let's talk about AIs rewriting their own source code. This is a common thing you bring up, right?

    27. EY

      I mean, I do talk a bit less about it nowadays, but I used to talk about it a lot, yeah.

    28. GH

      Um, do you talk less about it now because you see how expensive and long the training runs are?

    29. EY

      Uh, that's not why I talk less about it now. (laughs)

    30. GH

      Oh. Why, why do you talk less about it now?

  4. 45:00–1:00:00

    But why? …

    1. EY

      has calculated that they can do it, and-

    2. GH

      But why?

    3. EY

      ... waiting for the shallow moment.

    4. GH

      Wh- what... Again, like I, I think... Okay, how about this? If I was an AI that just transcended, and I don't have to anthropomorphize the AI, but my first thought wouldn't be, "Take the atoms from the humans." Right?

    5. EY

      So the actual first thought is more along something... Is more along the lines of, "If I let the, the humans keep running, they will build other super intelligences that are competitors." And that's where you lose large sections of the galaxy. And th- and that's why it doesn't want you doing that part.

    6. GH

      Yeah, but what if... Okay. See, you know, I have a threat model. I'm, I'm, I'm on the line of, of, of doomer and not doomer about AI. But my threat model from AI looks so much less like it's gonna kill us, and a lot more like it's gonna give us everything we ever wanted.

    7. EY

      Um, you know, th- uh, even if you have derived some worrisome thing from that scenario, well, ev- you know, first of all, wants are infinite, resources are finite, et cetera, et cetera, but, um, leaving that aside, um-

    8. GH

      You don't get a real castle, you get a virtual castle, but you also get told it's real.

    9. EY

      This, this, th- w- we're, we are, we are not... Like, we are... I, I would, I would hope to snap people out of the frame of mind of playing pr- pretend in a schoolyard where you get to decide what game you're going to play and talk about what reality we live in. So, like, you don't get to say like, "I would rather worry about this thing than the other thing," 'cause reality is not put together in a way where it can only throw one thing at you at a time. Like, the doctor tells you you've got cancer, you don't get to say, "I'd rather worry about my stuffy nose." So if there are problems that result from moon-sized AIs giving us, the planets, a bunch of stuff that we want, that does not prevent the sun-sized AIs from crushing us later.

    10. GH

      I agree that after the AIs have taken all the matter in the solar system and built a Dyson sphere around the sun, okay, now I'm a little worried they're gonna come back and try to take my atoms. But until that happens, like again, I'm not the easy target, right? I don't have to run faster than the bear. I gotta run faster than the slowest guy running from the bear. And it turns out the slowest guy running from the bear is Jupiter.

    11. EY

      It's at least... Well, it's at least going to take your GPUs, so you can't build a super intelligence that competes with it for the rest of that solar system.

    12. GH

      But, but now that sounds a whole lot more like AIs are gonna fight with other AIs to take their GPUs. Now this I believe.

    13. EY

      Not if they're... Not if everyone involved is smart. Somebody has to be stupid for there to be a war that isn't just like a war of extermination. Like, anytime you have a combat, that's like playing defect-defect in the prisoner's dilemma. There's a, there's a... It's not in the Pareto frontier. There's an outcome that both sides would prefer to the combat. And humans are not at a level where they can predict the other mind predicting them and do a logical handshake and s- and say, like, "Let's move to the Pareto fr- frontier and divide the gains." Humans are not on a level where they can negotiate the- with each other. Sufficiently smart things are on a level where, um, I basically don't expect them to fight. Some- sometimes they might exterminate one another if the other one cannot offer any defense. If like the extermination outcome is on the Pareto frontier in the sense that it would, it would not be any better for the conquering party if the, like, defending party put up zero resistance instead of some resistance, then the defending party has nothing to offer, they just get eaten. But things that can damage each other in combat, I think will typically choose not to fight and will instead, like, divide the gains from not fighting, if they're smart enough. Humans are not that smart.

    14. GH

      I'm so glad you brought up the prisoner's dilemma thing. You know, I actually came to MIRI, um, in 2014. And, uh, I worked on exactly that problem. I didn't make any progress, I didn't do anything. I read the papers and thought it was cool, um, about two systems being able to assuredly cooperate by exchanging each other's source code, and it is a very cool theoretical problem. Now, what I think is gonna happen in practice is your two systems are both gonna be large, inscrutable matrices.
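
      The papers being referred to here are most likely MIRI's program-equilibrium / "robust cooperation" work, where each agent conditions its move on the other agent's program text rather than on its past behavior. The real results use provability logic; the snippet below is only the simplest toy variant of the idea (a "CliqueBot" that cooperates exactly with literal copies of itself), included to make the "exchanging source code" move concrete. Everything in it is an illustrative assumption, not the construction from those papers:

      ```python
      # Toy "cooperate by inspecting source code" agent (CliqueBot-style).
      # Far simpler than the provability-based cooperation in the papers mentioned,
      # but it shows the basic move: condition your action on the *program text*
      # of the opponent rather than on its observed behavior.

      import inspect

      def clique_bot(opponent_source: str) -> str:
          """Cooperate iff the opponent is an exact copy of this agent."""
          my_source = inspect.getsource(clique_bot)
          return "C" if opponent_source == my_source else "D"

      def defect_bot(opponent_source: str) -> str:
          """Always defect, regardless of the opponent's source."""
          return "D"

      def play(agent_a, agent_b):
          """One round of the game: each agent sees the other's source code."""
          a_move = agent_a(inspect.getsource(agent_b))
          b_move = agent_b(inspect.getsource(agent_a))
          return a_move, b_move

      if __name__ == "__main__":
          print(play(clique_bot, clique_bot))   # ('C', 'C'): copies recognize each other
          print(play(clique_bot, defect_bot))   # ('D', 'D'): no cooperation with a defector
      ```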

    15. EY

      (laughs)

    16. GH

      How it is possi- ... Well, but this is exactly-

    17. EY

      Oh, oh no. W- well, I, I think large, inscrutable matrices are, y- you know, I- they're neither black-

    18. GH

      And I'm gonna send him my source, source code so he can exploit me? No way.

    19. EY

      No, no. The, the, the, the, the, the superintelligences are not large, inscrutable matrices.

    20. GH

      Oh.

    21. EY

      Y- y- y- you don't wanna run yourself on that crap. Oh, du- ... That, that, that-

    22. GH

      Wait, yeah, but you have to.

    23. EY

      That's the kind of horror of what, you know, like, like who wants to be-

    24. GH

      Wow.

    25. EY

      ... built out of disintegrating goo? Who wants to be built out of a giant, inscrutable matrix either?

    26. GH

      Wa- ... I'm built out of giant, inscrutable matrices (laughs) .

    27. EY

      No, you're not. You're built out of gooey neurons, though it's also a horror story. No superintelligence wants to be built out of that stuff either.

    28. GH

      I think I could be modeled as giant, inscrutable matrices too.

    29. EY

      I mean, anything can be modeled out of giant, inscrutable matrices, and the-

    30. GH

      Yeah.

  5. 1:00:00–1:15:00

    Yeah, but this violates…

    1. EY

      the kind of, this, uh, you know, like, just doing gradient descent at getting better at whatever job will tend to grind out all the cases of it stepping on its own feet.

    2. GH

      Yeah, but this violates orthogonality, right? Like, you're gonna have an AI that, like, not all AIs are gonna be, like, the only way you're gonna get AIs where they're all brutally optimal is if they fight each other in some terrible competition, right? And that's-

    3. EY

      How will that help anything?

    4. GH

      Well, because you're gonna get AIs randomly all over the space, right? And some of them are not gonna be optimal. Some of them are gonna be completely irrational idiots.

    5. EY

      Like GPT-4?

    6. GH

      Okay. Sure, right? I mean-

    7. EY

      Yeah, GPT-4 is not very powerful.

    8. GH

      Well, yeah, but, uh, what I'm not seeing is this, like, when all the AIs are going to converge and suddenly become hyper-rational, when we move away from weight matrices and when we move toward Bayesian updates and we-

    9. EY

      I, I-

    10. GH

      I just don't-

    11. EY

      ... I don't, I, to be clear, I don't presently model that anybody's gonna get away from giant matrices before the end of the world.

    12. GH

      ... okay. So, let, let's talk about, then, this. So, you don't think the giant matrix thing can end the world, right? You think that the giant matrix-

    13. EY

      No, I think giant matrix thinking that's smart enough-

    14. GH

      ... can invent the next thing. Okay.

    15. EY

      Or, or possibly do it directly.

    16. GH

      Okay. Well, I mean, let's also, like, let's really drill down on what these end of world scenarios are. Do you want to posit, like, protein synthesis and diamond nanobots?

    17. EY

      I mean, if I'm going to lose a bunch of viewers that way, I might have to pick some, you know, like, easier to understand process. Like, we're talking about like-

    18. GH

      Okay.

    19. EY

      ... 1823 versus 2023. You know, if you, if you're trying to explain it to 1823, maybe you just talk about, like, the powerful explosive artillery shells, and you don't mention the nuclear weapons.

    20. GH

      Sure.

    21. EY

      'Cause they don't get that part. So, similarly, you know, like, if we don't wanna start diving into this book over here, then maybe, may- maybe we want to talk about something like, you know, like standard biological weapons or something.

    22. GH

      (laughs) .

    23. EY

      But, you know, but in, in real life, sure. In r- in real life, it, it, you know, doesn't-

    24. GH

      Well-

    25. EY

      ... use the squishy stuff.

    26. GH

      No, I'm not trying to, I'm not trying to say that, that nanobots are impossible. What I'm trying to say is that nanobots are extremely, extremely hard, right?

    27. EY

      Why?

    28. GH

      And... To figure out?

    29. EY

      Why?

    30. GH

      Well, 'cause it, 'cause it's a really hard search problem, right?

  6. 1:15:00–1:20:58

    This is, again, assuming…

    1. EY

      then we've got a bunch of water in our oceans that can be turned into fusion energy. And the main limit on that is how fast Earth can radiate heat once you've used all the existing stuff as a heat sink. That's not very survivable. Like that, that kills us off as a side effect.

    2. GH

      This is, again, assuming that this thing is a god, not kinda close to humans, but a bit smarter. And yes, might it get to a god, but the timing matters. It's not 10 years.

    3. EY

      Humans-

    4. GH

      It's 10,000.

    5. EY

      Humans are, humans are a little tiny bit smarter than chimpanzees. And we have nuclear weapons, and they don't. The amount-

    6. GH

      Humans are more-

    7. EY

      ... of godhood, the amount of godhood you get per increment of brain power looks like six times the prefrontal cortex on humans versus chimpanzees, and they got sticks and we got nuclear weapons.

    8. GH

      Humans are-

    9. EY

      Like, that's how much godhood per increase, factor increase, of prefrontal cortex, keeping the same architecture.

    10. GH

      Humans are general purpose. Chimpanzees are not. You can make, you can take Deep Blue, the chess playing computer, and scale that up to the size of the machine that trained GPT-4. And yes, you'll get a better chess playing machine, but it's not gonna be able to understand whether a picture has a cat in it or not. The training algorithm definitely matters, right? So does-

    11. EY

      Yeah. I, I, I... Humans are more general than chimps. And yet, when we encounter new problems, we can't just like rewrite our own code to handle those. You can see, you can see how there can be possible minds with much stronger sparks of generality than what we have.

    12. GH

      Yes. I, I was gonna-

    13. EY

      More creative, less time on... More able to think outside the box.

    14. GH

      Yes.

    15. EY

      And plausibly just able to do a bunch of thinking very quickly, and also-

    16. GH

      Able to boil the oceans overnight for fusion? No. Able to build diamond nanobots? No. Able to outthink us, beat us at chess?

    17. EY

      Build... Building diamond nanobots gets you to, to, gets you to, to self-replicating fusion factories pretty quickly.

    18. GH

      Well, yeah. But you can't build... Can you build diamond nanobots? You wanna start a diamond nanobot company? (laughs)

    19. EY

      I can't, I... No, but I can't solve protein folding either inside my own head.

    20. GH

      All right.

    21. EY

      This, this problem is predictably solvable in the same way that in 2004 I called it: a special case of the protein folding problem would eventually be solvable by superintelligence. That was the heart-

    22. GH

      Using a lot of, using a lot of experiment, using the entire historical corpus of human experiment. Maybe it can build nanobots. You know, who knows?

    23. EY

      Using a bunch of, yeah, yeah.

    24. GH

      Can it one-shot them?

    25. EY

      It's, it's using a bunch of past survey data, and no experiments. (laughs) No new experiments, just a bunch of sur- You know, no causal experiments, just a bunch of past survey data.

    26. GH

      It, it'll

    27. DP

      George, may I?

    28. GH

      Yeah.

    29. DP

      C- can I ask real quick? So you s- maybe this was already implied, but you said, well, you know, once they build the Dyson spheres, then it would potentially be worth it to come back for the atoms that humans contained. Before that-

    30. GH

      Yeah.

Episode duration: 1:34:29
