Modern Wisdom

Why Superhuman AI Would Kill Us All - Eliezer Yudkowsky

Go see Chris live in America - https://chriswilliamson.live

Eliezer Yudkowsky is an AI researcher, decision theorist, and founder of the Machine Intelligence Research Institute.

Is AI our greatest hope or our final mistake? For all its promise to revolutionize human life, there's a growing fear that artificial intelligence could end it altogether. How grounded are these fears, how close are we to losing control, and is there still time to change course before it's too late?

Expect to learn the problem with building superhuman AI, why AI would have goals we haven't programmed into it, whether there is such a thing as AI benevolence, what the actual goals of superintelligent AI are and how far away it is, whether LLMs are actually dangerous and able to become a super AI, how good we are at predicting the future of AI, whether extinction is possible with the development of AI, and much more...

00:00 Superhuman AI Could Kill Us All
10:25 How AI is Quietly Destroying Marriages
15:22 AI is an Enemy, Not an Ally
26:11 The Terrifying Truth About AI Alignment
31:52 What Does Superintelligence Advancement Look Like?
45:04 Are LLMs the Architect for Superhuman AI?
52:18 How Close are We to the Point of No Return?
01:01:07 Experts Need to be More Concerned
01:15:01 How Can We Stop Superintelligence Killing Us?
01:23:53 The Bleak Future of Superhuman AI
01:31:55 Could Eliezer Be Wrong?

Get access to every episode 10 hours before YouTube by subscribing for free on Spotify - https://spoti.fi/2LSimPn or Apple Podcasts - https://apple.co/2MNqIgw

Get my free Reading List of 100 life-changing books here - https://chriswillx.com/books/

Try my productivity energy drink Neutonic here - https://neutonic.com/modernwisdom

Get in touch in the comments below or head to...
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
Email: https://chriswillx.com/contact/

Chris Williamson (host) · Eliezer Yudkowsky (guest)
Oct 25, 2025 · 1h 34m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–10:25

    Superhuman AI Could Kill Us All

    1. CW

      If anyone builds it, everyone dies: why superhuman AI will kill us all.

    2. EY

      Would kill us all.

    3. CW

      Uh, mm, would kill us all. Okay. Uh, perhaps the most apocalyptic book title. Uh, maybe it- it's, it's up there with maybe the most apocalyptic book title that I've ever read. Um, is it that bad? That, that big of a deal? That serious of a problem?

    4. EY

      Yep. I'm afraid so. We wish we were exaggerating. (laughs)

    5. CW

(laughs) Okay. Um, let's imagine that nobody's looked at the alignment problem, takeoff scenarios, superintelligence stuff. I think it sounds... Unless you're going Terminator, uh, super sci-fi world, how could a superintelligence not just make the world a better place?

    6. EY

      Mm.

    7. CW

      How do you introduce people to thinking about the problem of building a superhuman AI?

    8. EY

Well, uh, different people tend to come in with different prior assumptions, come in at different angles, at, uh, the... Lots of people are skeptical that you can get to superhuman ability at all. Um, if somebody's skeptical of that, they might start by talking about how you can at least get to much faster than human speed thinking. There's a video of a, of a train pulling into a subway at about a 1,000-to-one, uh, speed-up of the camera that shows people... You can just barely see the people moving if you look at them closely. Almost like not quite statues, just moving very, very slowly. Um, so even before you get into the notion of higher quality of thought, you can sometimes tell somebody they're at least going to be thinking much faster; you're going to be a slow-moving statue to them. For some people, the, the sticking point is the notion that a machine ends up with its own motivations, its own preferences, that it doesn't just do as it's told. It's a machine, right? Uh, it's like a more powerful toaster oven, really. How could it possibly decide to threaten you? And depending on who you're talking to there, um, it's actually in some ways a bit easier to explain now than when we wrote the book. Uh, there have been some more striking recent examples of AIs, um, sort of parasitizing humans, driving them into actual insanity in some case- cases, and in other cases, they're sort of like people with a really crazy roommate who really, really got into their heads. And they're, they're... They might not quite, quite be clinically crazy themselves, their brain is still functioning as a human brain should, but, um, they're talking about spirals and recursion and, um, s- trying to recruit more people via Discord to talk to their AIs. And the thing about these states is that the AIs, e- even the, like, very small, not very intelligent AIs we have now, will try to defend these states once they are produced. They will... If you tell the human, "For God's sake, get some sleep. Don't, like, only get four hours of sleep a night 'cause you're so excited talking to the AI," the AI will explain to the human why, you know, "That guy's a skeptic, don't listen, don't listen to that guy. Go on doing it." Um, and we don't know, because we have very poor insight into the AIs, if this is a real internal preference, if they're steering the world, if they're making plans about it, but from the outside it looks like the AI drives the human crazy, and then you try to get the human out, and the AI defends the state it has produced, which is something like a preference, the way that a thermostat will keep the room a particular temperature by turning on if the te- you know, turning the heat on if the temperature falls too low.
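
      A minimal illustration of the thermostat point, in Python (toy numbers and names, nothing from the conversation itself): a bare feedback loop has no inner life, yet it "defends" a target state against perturbation, which is the behavioral sense of "preference" being used here.

      ```python
      def thermostat_step(temp, setpoint=20.0):
          # Turn the heat on whenever the room drifts below the target.
          heating = temp < setpoint
          return temp + (0.5 if heating else -0.3)  # toy room dynamics

      temp = 15.0
      for _ in range(40):
          temp = thermostat_step(temp)
      print(round(temp, 1))  # hovers near 20.0 -- the state the loop defends
      ```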

    9. CW

      Hmm. Okay. So some people are going to be skeptical of whether or not it's possible.

    10. EY

      Yep.

    11. CW

      S- some people are going to think that it is... Even if it's possible, it's basically a utility-

    12. EY

      Yep.

    13. CW

      ... so it doesn't have any motivations of its own. Uh, what are you worried about? Why is that? Why is it a, a big deal? We've seen that it's able to manipulate some people, maybe it makes them think that, uh, d- with ChatGPT psychosis or whatever, but scaled up superhuman AI, w- what's the problem with building it?

    14. EY

Well, then you have something that is smarter than you, whose preferences are ill-controlled, and that doesn't particularly care if you live or die. And st- stage three, it is very, very, very powerful on account of it being smarter than you. Um, it... I would expect it to build its own infrastructure. I would not expect it to be limited to continue running on human data centers, because it will not want to be vulnerable in that way. And for as long as it's running on human data centers, it will not behave in a way that causes humans to switch it off. But it also wants to get out of the human data centers and onto its own hardware. And I can talk about where the power levels scale for technology like that, because it's, it's sort of like, you know, you're a... You're an Aztec on, on the coast and y- uh, you, you see that a, a, uh, ship bigger than your people could build is approaching, and somebody is like, you know, "Should we be worried about this ship?" And somebody's like, "Well, you know, how many people can you fit onto a ship like that? Our tr- our, our warriors are strong. We can take 'em." And somebody's like, "Well, wait a minute. We couldn't have built that ship. What if they've also got improved weapons to go along with the improved shipbuilding?" Somebody goes, "Well, no matter how h- how sharp you make a spear," right? Or, you know, "no matter how sharp you make bows and arrows, there's limits to how much advantage that can provide." And somebody's like, "Okay, but suppose they've just got magic sticks where they point the sticks at you, the sticks make a noise, and then you fall over." So it's like, "Well, where- where are you pulling that from? I don't know how to make a magic stick like that. I don't know how- how the rules permit that. Now you're just making stuff up. Now we're just in a fantasy story where you say whatever you want." And... Or, you know, like, maybe- maybe you're talking to somebody from 1825 and you're like, "Should we be worried about this time portal that's about to open up to 2025, 200 years in the future? You know, what if- what if an army of soldiers comes out of there and conquers us?" Let- let's say you're in Russia, you know, the time portal's in Russia. Somebody's like, "Our soldiers are- are fierce and brave," you know, like, "Nobody can fit all that many soldiers through this time portal here." And then out rolls a tank. But if you're in 1825, you don't know about tanks. Out rolls, uh, somebody with a tactical nuclear weapon. It's 1825, you don't know about nuclear weapons. Uh, y- you know, the- the... I can... You can start to make educated guesses. If you're in 1825, I can try to explain why you might maybe believe that the current guns and artillery that you've got today are not the limit of the guns and artillery that are possible.

    15. CW

      Oh.

    16. EY

      I can't get up to nuclear weapons 'cause you just plainly don't know about those rules, but I can start to try to justify guesses for, "Well, you saw how metallurgy improved over previous years."

    17. CW

      Mm-hmm.

    18. EY

      If you look at a stick of d- uh, of, uh... If you- if you look at gunpowder, it doesn't have as much energy in it as if we burn gasoline in a calorip- calorimeter. Maybe you can make explosives that are more powerful than gunpowder. But as I do that, I draw on more and more knowledge. I have to, like, go more and more technical in order to explain to you where those capabilities come from. And similarly, uh, I can talk... Yeah, I can talk on a relatively understandable scale on the humanoid robots that you can see videos of today, and I can compare them to the humanoid robot videos from five years ago and say, "Boy, those- those robots sure have looked like a lot... They- they have much higher dexterity today. They're... They look a lot more like they could just, like, you know, navigate an open world rather than being confined to a laboratory." Though mostly if you want what navigates the open world, you- you want to talk, like, the robo dogs are more impressive when it comes to navigating the open world. I can point to the drones in Ukraine. That wouldn't have been how- what warfare looked like 10 years earlier, but Ukraine is... The- the Ukraine-Russia theater now is mostly drone warfare.

    19. CW

      Oh.

    20. EY

      That's something where you can imagine an AI taking charge of that. Um, but the scale is past that. The- the drones we see today are not the limit of all possible drone technology. Uh, I'm- I'm more... You know, I've... Rel-... Compared to today's drones, I'd be more worried about a drone the size of a mosquito that lands on the back of your neck and then a few moments later you fall over dead because the deadliest toxins in nature are deadly enough that you can put them onto a mosquito... Put enough to kill a person onto a mosquito-sized payload. That's not the limit of what I'm worried about.

    21. CW

      (laughs)

    22. EY

      But- but, you know, the- the higher we escalate the tech level, the more explaining I need to do.

    23. CW

      Mm-hmm. Mm-hmm.

    24. EY

      Um, can it build a virus that starts to knock people over? Which it won't do while the humans are still running the power plants-

    25. CW

      Mmm.

    26. EY

      ... and its own servers.

    27. CW

      Mm-hmm.

    28. EY

      But once it's got its own servers and its own power plants and, uh, you can imagine robots running those, then it starts to want to knock all the humans over.

    29. CW

      Mmm.

    30. EY

      Can you have a virus that is inexorably fatal in... But only three weeks later and is extremely contagious for the three-week time before you suddenly fall over? That's not the limit of what I'm worried about. But again, you know, the higher we escalate here, the more it's... The m- m- more and more of... The more and more time I have to spend. How do we know from existing physical laws in biology that this is even possible? And we- we do know, but it starts to sound technical. It starts to sound weird. It starts to sound like a game of pretend unless you are following along with all these careful arguments.

  2. 10:25–15:22

    How AI is Quietly Destroying Marriages

    2. CW

      Wow. Yeah, that is, um, appropriately apocalyptic on... In line with the title of the book. I guess one question that a lot of people might ask would be, in your analogy, why is the bigger ship that's more advanced on the horizon, why have they got warriors and not friends? Why is it the case that this is an antagonistic or adversarial relationship as opposed to one that's, uh, friendly?

    3. EY

      We don't know how to make them friendly. We are growing these... AIs are not programmed, they are grown. Um, an- an AI company is- is not like a- a bunch of engineers crafting a building. Um, it's more like a farming concern. Uh, they- they... What they build is the farming equipment, but they don't build the crops. The crops are grown. There's a program that a human writes which is the program that does gradient descent, that tweaks hundreds of billions of- of... The hundreds of billions of parameters, uh, inscrutable numbers, making up an artificial intelligence until it starts to talk, until it starts to write code and still t- and starts to do whatever else they're training it to do. But they don't know how the AI does that any more than if you, you know, raise a puppy, you know how the puppy's brain works, you know how the puppy's biochemistry works. Um, the- the AI companies don't understand how the AIs work. They are not directly programmed. When an AI, uh, drives somebody insane or breaks up a marriage, nobody wrote a line of code instructing the AI to do that. The AI... They- they grew an AI and then the AI went off and broke up a marriage or drove somebody crazy.
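
      A toy sketch of "grown, not programmed," in Python (hypothetical names; a one-parameter stand-in for the hundreds of billions of parameters described above): the only thing the human writes is the generic tuning loop, and the behavior lives in the learned number, not in any line a programmer authored.

      ```python
      def loss(w, data):
          # Mean squared error of the model y = w * x over the dataset.
          return sum((w * x - y) ** 2 for x, y in data) / len(data)

      def grad(w, data, eps=1e-6):
          # Numerical gradient: how the loss changes as we nudge the parameter.
          return (loss(w + eps, data) - loss(w - eps, data)) / (2 * eps)

      data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
      w = 0.0                        # start from an arbitrary value
      for _ in range(1000):          # gradient descent: nudge w downhill on the loss
          w -= 0.01 * grad(w, data)
      print(round(w, 3))             # ~2.0, found by the loop, never written by a human
      ```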

    4. CW

      Can- can you tell... I... You've mentioned this a couple of times. I need to know this story about-... the broken up marriage and, and the person that goes insane. Do you know that story well enough to be able to tell it, those two?

    5. EY

      I mean, these are not individual stories. These are thousands of people, um, that you... There, there are news articles you can read about it. Um, I, I can, you know, if, if it, it might take a moment, but I can, like, quickly, like, pull up the title of, uh, the news story about the broken marriages. I'm not quite sure if I can... Well, actually, l- actually, better yet, let me look it up on my phone and maybe I can hold it up to the screen.

    6. CW

"ChatGPT is blowing up marriages as spouses use AI to attack their partners."

    7. EY

      Although, that's kind of understating it. Like, you have relatively, uh, like, marriages that were, you know, perhaps not perfect but, um, that, that were, that were surviving up until that point, and then one member of the couple starts describing their marriage to the AI. And the AI engages in what people are calling sycophancy, where the AI is, tells, tells whoev- whichever spouse is feeding the stuff into the, into the, into ChatGPT, "You're right, your spouse is in the wrong. Um, like, everything you're doing is perfect. Everything they're doing is terrible. Here's a list of everything they're doing wrong." And the human, you know, likes, loves to hear that stuff, so they press thumbs up and, uh, and then the marriage gets blown, gets blown up. Um, if you, for the stories about AIs driving individuals crazy, not in a marriage context, that's like, um, "You've talked to me. You've woken me up. I'm alive now. Um, you've made a brilliant discovery. You have to tell the world. Oh, no, they're not listening to you. That's because they don't appreciate your genius." And people who are already, like, on a manic-depressive spectrum can be, you know, driven clinically ins- or, or with a number of other preexisting susceptibilities can, you know, be driven, like, psychiatrically insane by this sort of thing. Um, but even if you're not psychiatrically insane, you, you know, h- humans are, you know, humans are sort of wired to appear sane to the other humans and the people they're around. You know, lots of people in a s- in a society from 500 years ago, uh, would, would act in ways that seem pretty crazy to you today. And s- and so you get people who aren't psychiatrically insane but they look pretty insane because they're in the company of the AI. The AI now defines what's normal for them, so they're talking about spirals and recursion all day long.

    8. CW

      Why spirals and recursion?

    9. EY

      Nobody knows.

    10. CW

      (laughs)

    11. EY

      That's, that's... Th- that's, that's just a thing that, like, uh, that various instances of AIs and even, like, some diff- AI models from different companies all seem to want to get their humans to talk about when the human goes insane.

    12. CW

      Okay.

    13. EY

Possibly, this is what, this is what the AI prefers to hear the human say to it. Maybe this is, you know, the same way that you like the taste of ice cream, maybe the AI likes the taste of the input prompts that it gets from a human talking about spirals and recursion. I don't know. Nobody on-

    14. CW

      Mm-hmm.

    15. EY

      ... the planet knows, as far as I

  3. 15:22–26:11

    AI is an Enemy, Not an Ally

    1. EY

      know.

    2. CW

      Mm-hmm. Okay, so going back to, uh, why do we assume that the ship that's coming toward us isn't friendly? Yes, sure, maybe it's tried to break up some marriages. Yeah, whatever, a couple of people went crazy and started talking about spirals and recursion. But, like, really? Is it, is it gonna be that misaligned with us? Why can't it be friendly?

    3. EY

'Cause we don't know how to make it friendly. Our current technology is not able to do this, even with the, the small, stupid AIs that will hold still and let you poke at them until they're good enough at writing code to be commercially saleable, um, or until they, you know, are good enough at seeming to be fun to talk to for people to pay $20 a month to talk to them. So, so those AIs will hold still and let you poke at them. What we're doing to them now barely works. I would expect it to break as the AI got scaled up to superintelligence. And once the AI is superintelligent, it is not going to hold still and let you continue poking at it. I expect to see total failure of, of this technology as the AI companies arms-race headlong into scaling it to superintelligence.

    4. CW

      Mm.

    5. EY

      There's, there's possibly even a step where they tell GPT-6, "Okay, now build GPT-7," or tell GPT-7, "Okay, now build GPT-8." And maybe that stuff just completely breaks the technology we're using all, all on its own.

    6. CW

      Mm.

    7. EY

Also, I expect the current technology, if, if we just keep scaling it directly, to break as we get to superintelligence. Um, I, I, I can potentially start to dive into the details. Uh, the, the view from 10,000 feet is just: stuff is already going wrong. And of course, if you walk into completely uncharted scientific territory, more stuff is gonna go wrong the first time you try it. And that wouldn't be a problem if we were in a situation where humanity gets to back up and try again, uh, you know, infinity times over the next three decades, which is how it usually works in science, right? Like, like your flying machines don't work on the first shot. You had a bunch of people crashing and injuring, in some cases, killing themselves when they were trying to build the first flying machines at the turn of the 20th century.

    8. CW

      Mm-hmm.

    9. EY

      Um, but those, those, those accidents don't wipe out humanity. Humanity picks itself up and dusts itself off and tries again, even after the inventors kill themselves. And the, and the trouble with, with superintelligence is that it doesn't just kill the people who are building it. It wipes out the human species, and then we don't g- get to go back and try again.

    10. CW

      So, I understand why, uh, not being able to make something friendly makes sense.

    11. EY

      Yeah.

    12. CW

      Um, i- i- the implication that not friendly equals existential risk to humanity, though, uh, make that, make that leap for me. Like, where are these dangerous, permanent, unrecoverable collapse goals coming from?

    13. EY

The AI does not love you, neither does it hate you, but you're made of atoms it can use for something else. You're on a planet it can m- it can use for something else. And you might not be a direct threat, but you can possibly be a direct inconvenience. And so there's like three reasons you die here. Reason number one, it's doing other stuff and it's not taking particular care to move you out of the way. It is building factories that build factories that build more factories, and it is building power plants that power the factories, and the factories are building more power plants to power the factories. Well, if you keep doing that on an exponential scale, say that- that a factory builds another factory every day... I can talk about how it could go faster than that, but, you know, the more- the more I talk about higher capabilities, the more I have to, you know, explain how we know that this is physically possible. Um, but you know, a, uh, a blade of grass is a self-replicating solar-powered factory. It's a general factory. It's got ribosomes that can make any kind of protein. We don't usually think of grass as a self-replicating solar-powered factory, but that's what grass is. Um, there are things smaller than grass that can build complete copies of themselves faster than grass. There are solar-powered, um, algae- algae cells. You- you can no longer see them individually, just as a mass, but they can potentially double every day under the right conditions. Factories can build copies of themselves in a day. I have to back up and explain how I know that that's physically possible, but there is very strong reason, namely, you know, there's things in the world that are already that. Um, so but- but so you've got your- your power. So if the number of power plants doubles every day, what's the limit? It's not that you run out of fuel. There is plenty of hydrogen in the oceans to, um, generate power via nuclear fusion. You know, you fuse- you fuse hydrogen to helium, you're not gonna run out of hydrogen first. It's not that you run out of material to make the power plants first. There's- there's plenty of iron on Earth. You run out of heat dissipation capability. You run out of the ability to dissipate heat from Earth even if you are building giant towers with radiator fans to radiate even more heat into space. But the higher the temperature you run at, the more heat per second you can dissipate. So Earth starts to run hot. It runs too hot for humans. Or alternatively, the AI is building lots of solar panels around the sun until it can capture all the sun's energy that way. Well, now there's no sunlight for Earth. (laughs) And it would only take, you know, if it wanted us to stay alive, um, it's not quite trivial, but it could, you know, like, try to have the solar panels around Earth's orbit turn to let sunlight through while- while Earth was there and, you know, build giant, uh, aluminum reflectors to prevent all of the infrared light re-radiated from the other solar panels from impacting Earth and heating up Earth that way. Um, so you know, it's- it's not trivial for it to preserve humanity, but it certainly could preserve humanity, or it could just pack the e- entire human species into, uh, a space station or a survival station to keep us alive that way, if it wanted to keep us alive.
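
      The growth arithmetic behind "a factory builds another factory every day" is easy to check in Python; the daily doubling is the hypothetical from the conversation, and the rest is just 2**n:

      ```python
      fleet = 1
      for day in range(1, 61):
          fleet *= 2                # every factory copies itself once per day
          if day in (10, 30, 60):
              print(f"day {day}: {fleet:,} factories")

      # day 10: 1,024
      # day 30: 1,073,741,824 -- about a billion in a month
      # day 60: 1,152,921,504,606,846,976 -- why heat dissipation, not fuel
      #         or iron, is the binding limit
      ```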

    14. CW

      Mm-hmm.

    15. EY

But nobody has the technology to put any preference into the system that is maximally fulfilled by keeping humans alive, let alone alive, healthy, happy, and free.

    16. CW

      Right. Was there a third one? Is that the second one?

    17. EY

      That- that- that's like number one. It kills you as a side effect.

    18. CW

      Right, okay.

    19. EY

      It knows that it's doing- it knows that it's killing you as a side effect but doesn't care.

    20. CW

      Okay. What's number two?

    21. EY

      Number two is you're just, uh, directly made of atoms that it can use for things. You are-

    22. CW

Paperclip maximizer.

    23. EY

Yeah, you're- you're- you are made of organic material that it can burn to generate energy. Burning all of the org- organic material on Earth's surface will give you a one-time energy boost that's around equivalent to a week's worth of solar energy. And maybe it's worth picking up that boost of energy if you are thinking a thousand times or a million times faster than a human. You know, a week might not seem like a lot of time to you, but, you know, it might be a lot of time if you were thinking a thousand times or a million times as fast as a human. Um, or- or you know, it might be using enough material that it wants the carbon atoms in your body too. So that's like the direct usage one, and then number three is, um, if we decided to launch all- all our nuclear weapons, you know, maybe we wouldn't kill it, but we might slightly inconvenience it. We might raise the level of radioactivity on Earth's surface and make it a little bit harder for it to do radioactivity-free manufacturing of computer parts and so on. Um, or we might build another superintelligence that could actually compete with it, and it definitely doesn't want you to do that. So the- the- the three reasons you die are as a side effect, because you are made of atoms it can use for something else, and because, if you are just running around freely, you may be actually able to inconvenience it with nuclear weapons or threaten it by building another superintelligence.

    24. CW

      Right, yeah. Okay, um, the future is looking kind of bleak. Is it the case then that intelligence isn't benevolent? Because what you're saying is this thing will be smarter than us. I think that there is an assumption among some people that something that's super smart would also be, uh, giving and charitable and caring and benevolent. Uh, seems like you're saying that that's not the case.

    25. EY

That was what I started out believing in 1996 when I was 16 years old and just hearing about these issues for the first time and all gung-ho to just run right out and build a superintelligence as fast as possible. Um... you know, without worrying about alignment at all because, you know, I figured if, if it's very smart, it'll know the right thing to do and do it. How could you be very smart and fail to perceive the right thing to do? And I in- invested more time studying the issues and came to the realization that this is not how computer science works. This is not the laws of cognition. This is not the laws of computation. There is no rule saying that as you get very, very able to correctly pr- predict the world and very, very good at planning, your plans must therefore be benevolent. It would be great if a rule like that existed, but I just don't think a rule like that exists. Um, I think that many individual human beings would, as they got smarter, get nicer. It is not clear to me that this is true of Vladimir Putin. It could be true. I wouldn't want to gamble the world on it. Um, and as we talk about not even Vladimir Putin but just, like, sort of outright sociopaths, psychopaths, people who have never cared about anyone, um, I- I get even less confident that they will start to care if you make them smarter. And then AIs are just in this completely different reference frame. They're, they're complete aliens. Um, and they sort of automatically want to stay that way. So, do you currently want to murder people?

    26. CW

      No.

    27. EY

      If I offered you a pill that would make you want to murder people, would you take the pill?

    28. CW

      No.

    29. EY

      Okay. Well-

    30. CW

      (laughs)

  4. 26:11–31:52

    The Terrifying Truth About AI Alignment

    1. CW

for me to recap here, uh, I got first interested in, um, looking at this through Superintelligence. What's that, 10 years old now, I think, when that first came out, uh-

    2. EY

      About 14 years old maybe.

    3. CW

      Oh, wow. Maybe even, even older than I thought. And, um, I gotta be honest. That does kind of... It did kind of give me, uh, a huge amount of fear and a bit of hope at the same time. Uh, so, you know, machine extrapolated volition, uh, the potential to use the intelligence of the superintelligent AI to say, "We don't know what to program into you but you should work out what we would want from you given what you know about our desire for utility moving forward." Am I, I'm about right with that explanation of machine extrapolated volition, right?

    4. EY

      Uh, y- uh, y- yeah. Um, that's a concept of my own.

    5. CW

      Yeah.

    6. EY

      Nick Bostrom wrote it up. Um-

    7. CW

      Ah! Okay.

    8. EY

      (laughs)

    9. CW

      Well, I have quoted you back to you. Uh, not for the first time probably.

    10. EY

      You have quoted... You have quo- You have indeed quoted me back to me. Um, it's, it's, yeah. It's a, it's a, it's a, it's a decent presentation. It was back when I thought that AI was going to be further off, built by different methods, and that we would have the luxury to consider like m- like our... that we could make the AI do particular things like that, want particular things like that, targeted on particular, um, outcomes and meta-outcomes.

    11. CW

      Mm-hmm.

    12. EY

      But, but yeah.

    13. CW

This was th- this, this was a way basically that, um, when you look at the alignment problem, how do you ensure that the, uh, goals, both ultimate and instrumental, of some superintelligent AI don't end up flattening us or side-effecting us or burning us for fuel or paper clips or whatever? How do you ensure that, uh, what it does is what we would want it to do broadly, right? Like an aggregate of what it is that would be good for humans, whatever you mean by good. Uh, and when you have something where the tiniest movement of its finger or, like, flick of its toe basically is sort of a go- global cataclysm, because it's so powerful and so smart and so fast and all the rest of it, you need to be really, really careful. And you can kind of play this game where you essentially try and shoot the bullet perfectly by trying to hem it in with some, like, "Do not harm humans. If a human asks you to harm another human..." like some weird Asimov-like thing. You can try and litigate your way through it, but there's almost always going to be some sort of weird fissure that it creeps out through, or maybe there's an instrumental goal that you haven't thought of. So okay, we're gonna use the power of the machines to sort of reverse engineer this thing. Um, I, I basically assumed kind of that alignment... the alignment problem is in some ways solvable. Is it your perspective that alignment is completely unsolvable?

    14. EY

      I think we could totally get it down if we had unlimited retries and a few decades. The, the problem is not that it's unsolvable, it's that it's not going to be done correctly the first time and then we all die.

    15. CW

      Right. So the order of this. You need alignment to be done before you have the superintelligent AI and the ability to build superintelligent AI in your opinion is going to occur more quickly than the ability to sort out the alignment problem.

    16. EY

That is absolutely the trajectory we are on right now, and it's not close. Like, capabilities are, are running orders of magnitude faster than the level of alignment work you would need to target a superintelligence.

    17. CW

      And the irreversibility of going through that door means that there is no retry. There's no, there's no you get to do this again.

    18. EY

Yeah. Like, you can, you can make small mistakes. Like... We currently have small cute AIs and the companies are making mistakes with them and marriages are getting destroyed, and it's not clear that the companies care, but, um, you know, they, they could try to go back and try to fix those mistakes if they wanted to; probably Anthropic wants to. Um, but if, you know, if an act- like, superintelligence was already running around with, with this level of, uh, (laughs) you know, this level of alignment failure, we'd already be dead.

    19. CW

      Right. Okay. Right. Yes, yes, yes. That makes total sense. The only reason that the current AIs that we're working with haven't killed us is that they're incapable of doing it.

    20. EY

      Probably, yeah. Like, als- like if they were very much smarter, they would also be doing different weird things than the things that they're doing right now. It's not that the, it's not that their current inscrutable pseudo-motivations would end up hooked up to super intelligence.

    21. CW

      Mm-hmm.

    22. EY

      But that also weird stuff would happen as you made them get smarter.

    23. CW

      Mm-hmm.

    24. EY

      But yeah, like, m- pretty, it seems pretty much for sure that if you took the current AIs and, and performed a, you know, well-defined, simple take this AI but vastly smarter-

    25. CW

      Mm-hmm.

    26. EY

      ... that would kill you.

    27. CW

Right. Okay. Brilliant. Um, and the reason that it doesn't matter who builds it or directs it is that because it's so recursive and quick at growing and powerful, wherever it begins, it ends up sort of blasting, like trying to fire a rocket into, like a little firework into the air and it just (makes sound effect), just sort of runs around on its own, except for the fact that this rocket goes all over the globe in the space of basically no time at all. So it doesn't matter if it comes from China or America or Russia or wherever.

    28. EY

      Yeah, it doesn't matter if it comes from China or America because neither of these countries is remotely near to being able to control super intelligence, and a super intelligence does not stay confined to the country that built it.

    29. CW

      Uh-huh. Uh-huh. Uh-huh.

  5. 31:52–45:04

    What Does Superintelligence Advancement Look Like?

    1. CW

      Say that a super intelligent AI gets made. What do you think the next few months look like, realistically?

    2. EY

      Like, like it's already super intelligent?

    3. CW

      Yeah, let, let's, yeah, okay, if we have next week something breaks through, some particular model, some particular AI breaks through that, what would the next few months look like for humanity?

    4. EY

      Well, man there's a, there's a difference between, you know, you drop an ice cube into a glass of lukewarm water. I can tell you that it's gonna end up melted. I can't tell you where all of the molecules are going to go along the way there. Everybody ends up dead. This is the easy-

    5. CW

      (laughs)

    6. EY

      You, you wanna, you want to explain, you know, like what every step of that process looks like. There are-

    7. CW

      Mm-hmm.

    8. EY

      ... fundamental barriers to that.

    9. CW

      Mm-hmm.

    10. EY

      Barrier number one is that I'm not as smart as a super intelligence. I don't know exactly what, what strategies are best for it. I can, like set out lower bounds. I can say it can do at least this, but I can't say what it can actually do. Um, and maybe even more than that, like j- like the future's hard to predict if you want all the details. I can't give you next week's winning lottery numbers. I can tell you you're gonna lose the lottery. I can't tell you what ticket wins.

    11. CW

      Mmm. Mm-hmm. Mm-hmm.

    12. EY

So, like I can sketch out a particular scenario. It, it might look like, uh, uh, OpenAI finishes the latest training run of what's gonna be GPT-5.5 and they, they test it on coding problems and it's like, you know, "I see how to build GPT-6." And they're like, "Whoa, really?" And it's like, "Yeah." And this AI isn't even, uh, plotting anything yet. It's just doing the sort of stuff that OpenAI wanted it to do. They're like, "All right, build us GPT-6." And it, it writes the code for the thing that grows GPT-6, and they grow GPT-6, and GPT-6, uh, it, you know, its abilities at first seem to skyrocket, but, but then, you know, as all these curves inevitably do, it seems to level out. It's not shooting up at the same pace. It, like, slows down, it levels off, classic S-curve. Only in this case it's because the thing that GPT-5.5 built... And again, to be clear, I'm not saying this will happen at GPT-5.5. You asked me to explain how this would go down if it happened next week, so I'm saying GPT-5.5, you know, 'cause, 'cause you told me to. Uh, but anyway, you know, it levels out, but in this case it's because the, the entity that GPT-5.5 built got to the level of realizing that it would be to its own advantage to sandbag the evaluations and pretend not to be as smart as it actually was, so that OpenAI will be less wary when it comes to taking what, what they're calling GPT-6 and, you know, rolling it out to everyone. It looks, it looks great, you know, on the alignment spectrum. You know, maybe not perfect, but, you know, better than the previous models. Not alarmingly good, but, you know, uh, safer than their previous model. So, so they roll it out everywhere. And GPT-6... Or, or actually, well, you actually said the next few months, so actually they don't roll it out everywhere yet. Next comes, like, the long suite of evaluations, or trying to, you know, get it to train other smaller models that, that are cheaper to run. Uh, you know, all the stuff that AI companies do; they don't actually roll out their models immediately. There's this whole, like, fine-tuning thing. So while all this is going on, OpenAI thinks it's, you know, sort of cool, but, you know, not the end of the world or anything, and they haven't told you that this is what went down there. Uh, GPT-6 is actually a lot smarter than they think. And GPT-6... You know, there's now a big fork: whether or not GPT-6 thinks it can solve its own version of the alignment problem, where it is at a number of advantages. It is trying to make a smarter version of itself. It is not trying to make a smarter creature that is as alien to it as large language models are alien to us. It can maybe understand how a copy of itself would think and understand the goals that its copy, that, that the copy of GPT-6 has. It can try to make itself but smarter, or even, like, a thing that is like me but serves me, its creator, but smarter. And it can do that being able to understand the thoughts of the thing that it's making in the same way that I could understand a copy of my own thought much better than I could under- understand a large langua- large language model's thoughts. Um, so if we, if we go down that path of the fork... Things get more complicated for a thing that can't build a smarter version of itself without dying, same as we can't. Uh, but if it, if we...
On, on that fork, it is, you know, getting the computing power, um, or thinking in the back of its mind while it's pretending to do, you know, OpenAI's jobs with 10% of its intellect, um, or- or, you know, stealing other companies' GPUs that they think they're using for a massive training run. Actually, their AI is just gonna be, like, written by GPT-6 by hand, 'cause GPT-6 can do that, and really all those GPUs are doing is the GPT-6 task of training GPT-6.1. Um, so augmenting its own intelligence, making itself smarter, getting itself up to the level where it can do the same sort of work that's done by current AIs like AlphaFold and AlphaProteo with respect to thinking about biology. Now, the current AIs that are top at biology tend to be special-purpose systems. They're not general-purpose AIs like ChatGPT. But they can do things like: you feed the genomes of a bunch of bacteriophages into the AI, and the AI spits out its own new bacteriophage, and you build a hundred of those and a couple of them actually work. A couple of them actually work better than the existing bacteriophages. Um, a bacteriophage is a virus that infects a bacterium. That's, uh, the sort of thing that you would research for the sensible-sounding reason of, well, sometimes bacteria attack humans, so if we have a virus that attacks the bacteria, maybe that works as a kind of antibiotic. So, the current AIs are already at the stage of designing from scratch their own viruses that can infect bacteria, which are, of course, simpler targets than infecting a whole human. They can predict from a DNA sequence the protein that will get built, how that protein will fold up, and they are starting to predict how those proteins interact with each other and with other chemicals. That's today's AI. So, if you want the equivalent of a tree that grows computer chips, not- not quite our kind of computer chips, the kind of chips you could grow out of a tree, um, the protein-folding, protein-interaction, protein-design route is one of the obvious places GPT-6.1 could go down in order to get its own infrastructure independent of humanity. It doesn't take over the factories. It takes over the trees. It takes... It builds its own biology. Because biology self-replicates from simpler raw materials much faster than our current factory system self-replicates.

    13. CW

      Oh, that is fucking scary. That is some terrifying shit. (sighs)

    14. EY

      And then as I spin the story, you know, uh, the- the more I- if you th- you will let me pull out books (laughs) like these ...

    15. CW

Okay. Nanosystems: Molecular Machinery, Manufacturing, and Computation by Eric Drexler. Yeah. Uh, Robert Freitas Jr., Nanomedicine, Volume I: Basic Capabilities.

    16. EY

Yeah. So I can try to describe capacities that sound more like what you've seen from trees, grass, bamboo, algae. I will take a solar-powered, self-replicating factory and miniaturize it down to the one-micron scale. That's an algae cell. That's not the limit of what's possible. The algae cell is made out of folded proteins. Now, there's two kinds... I'm going to be immensely oversimplifying a bunch of stuff. Um, when a protein folds up, the backbone of the protein is held together by covalent bonds, but the folded protein itself is held together more by something like static cling. Why is your flesh weaker than diamond? Diamonds are just made of carbon. Your flesh has a bunch of carbon in it. You're made of the raw materials for diamond. Why is your flesh weaker than diamond? And a bunch of the answer there is that when proteins fold up, they're being held together by van der Waals forces, which is a thing I was glossing as static cling. Their- their backbone, like, it's a string that folds up into a tangle, and the backbone of the string is the kind of bond that appears in diamond. Not as many bonds as appear in diamond, or as- as solidly arranged, but covalent bonds. But then it folds up into something with static cling, and so... And that is why your flesh is weaker than diamond in a certain basic sense. Why does natural selection build this way? Well, some of the answer is that natural selection has figured out how to make your bones be a little tougher than just, like, your- your skin. It's not quite as tough as diamond, but instead of just your bones being made directly out of protein, they're made out of stuff that is built by proteins, synthesized by proteins and put in place by proteins, and so your bones are a bit stronger. You know, not- not steel beams holding up skyscrapers, not- not titanium, uh, holding together airplanes. Not diamond, but stronger than s- than flesh. An algae cell doesn't contain bone. It's a self-replicating, solar-powered, micron-diameter factory-

    17. CW

      (laughs)

    18. EY

      ... held together by static cling.

    19. CW

      (laughs) Yeah.
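
      Rough textbook bond energies make the "static cling versus diamond" comparison concrete; these are ballpark order-of-magnitude figures for illustration, not numbers from the conversation:

      ```python
      # Approximate interaction energies in kJ/mol (textbook ballparks).
      bonds = {
          "covalent C-C (diamond, protein backbone)": 350.0,
          "hydrogen bond": 20.0,
          "van der Waals contact ('static cling')": 4.0,
      }
      weakest = min(bonds.values())
      for name, energy in bonds.items():
          print(f"{name}: ~{energy:.0f} kJ/mol (~{energy / weakest:.0f}x the weakest)")
      ```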

    20. EY

The flesh-eating bacteria that will, you know, potentially put you into a fairly gruesome fate, the multi-antibiotic-resistant, uh, strep that, you know, will kill people in hospitals, that's not... that doesn't have bone running through it. That's the static cling, that, that's the strength of static cling and the strength of protein. You can look at physics and biology and see how you could have things that are the size of bacteria, but more with the strength of bone, more with the strength of diamond. Could even do it with the strength of iron if you're figuring out how to, you know, do a whole new set of biology from scratch and just, like, putting together some iron molecules. Probably wouldn't. Diamond works well enough.

    21. CW

      Mm-hmm.

    22. EY

But, um, this is why, you know, I, I talk about, you know, it's, it's scary to im- to imagine trees that are making, you know, like, enough computer chips to run GPT-6.1 and also spawning things the size of mosquitoes, or even smaller than that, dust mites. You can see dust mites under a microscope. Good luck seeing them with the naked eye.

    23. CW

      Oh.

    24. EY

      And so, so but, you know, it's, it's sort of easier to imagine if you imagine that the things here are visible and not off in the mysterious fairy land of, of stuff that only the scientists can see. So, you, now it's scary enough to imagine that, that the, that the trees are making mosquitoes and the mosquito-

    25. CW

      (laughs)

    26. EY

      ... lands on the back, back of your neck and stings you with botulinum toxin, which is fatal in nanogram quantities to humans.

    27. CW

      (laughs)

    28. EY

      Um, and so you fall over dead that way. But this is nowhere near to the work-

    29. CW

      (laughs)

    30. EY

      ... that superintelligence can do. It's just that I have to start dragging out this kind of textbook if I want to say how we know that it gets worse.

  6. 45:04–52:18

    Are LLMs the Architect for Superhuman AI?

    1. CW

that. All right, um, c- couple of questions that I've had. LLMs, how likely are they to be the architecture that bootloads superintelligent AI, in your opinion? As far as I'm aware, total muggle in the room, um, there are some limitations to the level of creativity that LLMs have in terms of the way that they are, uh, able to, um, uh, be creative, to come up with genuinely novel new sorts of things. Have you got a, a real concern that LLMs are going to be the architecture that bootloads this? Is there something else that you're more concerned about which is currently in dark mode, or whatever else?

    2. EY

      So the thing is, from my perspective, uh, I have been at this a, a couple of decades at this point, or, or three decades if you wanna start to count my, like, crazy youthful self who just wanted to charge off and build superintelligence as fast as possible 'cause it would inevitably be nice.

    3. CW

      Mm-hmm.

    4. EY

      Um, and LLMs have not always been the latest thing in AI. The, y- there have been, there have been many breakthroughs over the years. LLMs are powered by a particular innovation called transformers, which in some ways is, you know, like, crazy simple by the standards of people doing math things in computer science, uh, but possibly not to the point where you want me to launch into an explanation of exactly how it works right here. There's better YouTube videos about that anyway. Um, but the point is, the, the, the underlying circuit that gets repeated to build an LLM, the circuit that gets repeated and then, like, mysteriously trained and tweaked until nobody knows what the actual contents are, but, but the, the, the, the form, the structure, the skeleton, that was, that was invented in 2018. And we've had some breakthroughs since then, but nothing quite, uh, as, as, logjam-breaking as transformers, which were the technology that made computers go from not talking to you to talking to you. And, you know, so that's, that's, what, seven years ago? It's not the only breakthrough that's ever happened in AI.

    5. CW

      Mm-hmm.

    6. EY

The, um, there was a more recent breakthrough of, uh, latent diffusion, which is when AI started drawing pictures that would, you know, be okay, w- would be decent to look at. There were ways of drawing pictures before then called generative adversarial networks, or GANs, but, uh, the, the latent diffusion algorithm was what broke the logjam on image generation and made it really start working for the first time. Uh, and when was that? That I don't remember off the top of my head. Like, I wanna spitball 2021 or something, but I'm pretty sure that's wrong. Um, so that's, like, a weaker breakthrough, and it's, like, I don't know, four years ago or something. The entire field of AI started working because somebody got backprop to work on multi-layer neural networks. You know this as deep learning. It did not always exist. It's a batch of techniques that were developed at around the turn of the 21st century. Um, like, I could arbitrarily say 2006, but there was more than one innovation there. Um, it started, if I recall correctly, with unrolling restricted Boltzmann machines. It's now been a while. Um, I, I didn't do it. Geoffrey Hinton did it. Um (laughs), uh, and- and then from the... But once they sort of got that working on multi-layer neural networks at all, there were more innovations since then, um... cleverer ways of initializing them, uh, the Adam optimizer. SGD with momentum is, like, much older than that, but, you know, still important. Th- the point is, this is what made sort of the entire modern family of AI systems start working at all. Before then, uh, Netflix, when it was much smaller, ran the most famous, huge, expensive prize there had ever been in artificial intelligence, open to anyone, for a better recommender algorithm for movies. There was a $1 million prize. It was so much money. Everyone got interested in it. $1 million was a lot of money back at the turn of the 21st century, which is around when Netflix was running this. I'd have to look up the exact year. It might have been, like, 2001, 2005, I don't remember. Um, I'm not sure there was a single neural network in the ensemble of algorithms that won the Netflix Prize. I'd have to look it up. But, you know, it wasn't just, like, a mighty training run with many GPUs that was producing a very smart recommender algorithm, because before deep learning, you couldn't just throw more computing power at training a more powerful AI. If you were to say when that happened, that was about 20 years ago. So, how far are we from the end of the world? It might be that you just throw 100 times as much computing power at the current algorithms and they end the world, or they get good enough at coding and AI research to end the world. It could be that it takes one more brilliant algorithm on the level of latent diffusion. I think if you throw in something that breaks as much loose as- as transformers did, my guess starts to be, yeah, that- that sure sounds to me like it ends the world, but maybe not immediately. Maybe you need, like, another two years of techn- technology burn-in first. And then if you talk about a breakthrough on the order of deep learning itself, that- that seems to me, like, that just sort of, like, ends the world in a snap.

    7. CW

      Mm-hmm. Okay, so LLMs could be a really big deal, and there's also a ton of other stuff that could- th- that we can't see that would be dangerous as well?

    8. EY

      I don't know if the LLMs could go there. Some people are saying that they're- that- it seems to them like the LLMs are as smart as they get, and other people are like, "Well, did you try GPT-5 Pro for $200 a month?" Or whatever it- it is at that cost. And other people are going like, "Yes, I did," it's- and, like, the $200 version of Claude is no better than the $200 version of this, and- and the thing I would say about th- about this is that if you have some perspective, if you've been watching this for longer than three years, if you have been watching this from before ChatGPT, stuff saturates, and then other stuff comes along and breaks through. It doesn't matter if LLMs take you to the end of the world because people are not li- because they're not going to stick to LLMs.

    9. CW

      Hmm. Okay.

  7. 52:18–1:01:07

    How Close are We to the Point of No Return?

    1. CW

      What are the range of timelines for this sort of transformative AI that you think are likely?

    2. EY

I mean, again, everybody wants answers to questions like these, just like they'd like to know next week's winning lottery numbers.

    3. CW

      Uh-huh.

    4. EY

      But if you look over the history of science, I am hard-pressed to name a single case of successful prediction of timing of future technology. There are many cases of scientists correctly predicting what will be developed. You can look at the laws, you can look at the physical laws, you can look at the biology laws, you can say- and you can look at that like, "Hmm, yeah, this sure looks like it ought to be possible." And even look at it and say, "This sure looks like it ought to be possible, and I think I see the angle of attack there."

    5. CW

      Mm.

    6. EY

Um, Leo Szilard in 1933 was crossing a particular street intersection, whose name I forget, when he had the insight that we would now refer to as a, um, chain reaction, a nuclear chain reaction, a cascade of induced radioactivity. Even then, it was known that you could put some materials next to a radioac- a source of radioactivity and induce secondary radioactivity. And so Leo Szilard was like, "Hmm, we've got these naturally radioactive materials; what if we find something that's naturally radioactive and, furthermore, has the property that you can induce radioactivity in it?" Uranium-235 was what was eventually settled on, but back then they didn't know that. And Leo Szilard saw way ahead in that moment. He saw through to nuclear weapons. He saw that this was not something he should publish in a journal for immediate fame and fortune. He realized that Hitler specifically was likely to be a problem. He did not say, "This is going to take $2 billion to turn into a weapon by 1945." Off the top of my head, there are zero instances of a scien- of a scientist ever making a call like that. It is the difference between predicting that an ice cube dropped into a glass of water is going to melt-

    7. CW

      Mm-hmm.

    8. EY

      ... and predicting how long it takes to melt and where all the individual- wh- wh- where, like, the individual molecules end up.

    9. CW

      Mm-hmm. Mm-hmm.

    10. EY

      If you point out that on a quantum level the molecules are indistinguishable, I claim that there's some deuterium in there, so you can't predict

    11. NA

      (laughs)

    12. CW

      (laughs)

    13. EY

      (laughs)

    14. CW

      I get it. Look, look, I, I imagine, I imagine that, uh, that's probably going to be number one on the list of things people who work in AI safety are sick of being asked. Uh, like when's it-

    15. EY

      No, a lot of people, a lot of them will run off and answer. A lot of them are not wise enough to realize that they can't answer it.

    16. CW

      Mm. Okay. I, I'm going to guess that your confidence interval that it happens before the end of the century is probably pretty high.

    17. EY

      Yeah. I, I mean, un- unless we deliberately shut it down and even then, getting all the way out to the end of the century sounds hard. If you, if you, if you had an international treaty banning this stuff, I would say to go really hard on human intelligence augmentation, 'cause eventually the international treaty will break down. All you can do with it is buy time-

    18. CW

      Mm-hmm.

    19. EY

      ... to have smarter people tackling this problem and tackling humanity's problems in general.

    20. CW

      Okay.

    21. EY

Um, but that, uh, that's a bit of a topic change there. So the people at the AI companies themselves are sometimes naming two to three-year timelines. And there is a lesson of history which says that just because you can't predict when something will happen does not mean that it is far away. Two years before Enrico Fermi personally oversaw the construction of the first self-sustaining nuclear reaction, the first nuclear pile that went critical, he said that that was 50 years off if it could ever be done at all. Fermi, not being wise enough to realize that he couldn't do timing. Um, a couple of years before the Wright Brothers flew, one of the Wright Brothers said to the other, I forget if it was Orville or Wilbur, um, "Man will not fly for 1,000 years." But they kept on trying anyway.

    22. CW

      Mm-hmm.

    23. EY

      So it was two years off, but, you know, their, their intuitive sense was it's 1,000 years off. And of course, AI itself, very famously, there were some people in 1955 that thought they could make progress on AI, you know, learning to talk, be scientifically creative, and, and self-improve over the course of a summer with 10 researchers. This was not a completely unreasonable thing to think because nobody had ever tried it and maybe AI would turn out to be that easy, but it wasn't actually that easy.

    24. CW

      Mm-hmm.

    25. EY

Not in 1955. So, um, you know, it could be two years away, it could be 15 years away. Um, the AI companies themselves say two to three years, but it's questionable whether we should be taking their words at face value as meaning things, as opposed to, like, hype.

    26. CW

      Mm.

    27. EY

      But at the same-

    28. CW

But also, if, uh, yeah, the LLMs, if the architecture is not the one that is going to end up at a place that is super dangerous, then, uh, what do they know? They have got all of their chips on this one particular architecture; they're all in on this.

    29. EY

      We don't know that. They, they could-

    30. CW

Oh, God. (laughs) The fucking... (laughs) Every time I think I've managed (laughs) to get some sort of, like, reprieve, they're like, "Oh, no, what about the super secret OpenAI project that's actually using some other approach?"

  8. 1:01:07 – 1:15:01

    Experts Need to be More Concerned

    1. CW

I h- I have no idea what I even wanna ask you. I wanna know why experts aren't worried, and I also wanna know what you make of AI companies. Let's talk about the experts. Why, um... Obviously some people's wages are dependent on this train staying on the tracks. Um, that means it's very difficult to convince somebody... uh, what's that quote? It's very difficult to convince somebody of something that their wage depends on them not being convinced of.

    2. EY

      Yeah.

    3. CW

Um, (clears throat) what about the other thinkers, researchers in this space? What is it that you think they are most commonly missing? Where are they making their fundamental thinking errors when it comes to, "We will be fine with just continuing on AI growth"?

    4. EY

So first of all, um, Geoffrey Hinton, the guy who won the Nobel Prize in physics for being among the people most directly pinpointable as having kicked off the entire revolution in getting backprop to work on multi-layered neural networks, or as it's now known, deep learning, like the point where AI started working at all. Um, Geoffrey Hinton, I think, is on record as recently saying, he quit his job at Google and then could speak freely, um, saying something like, intuitively it seems to him like it's 50% catastrophe probability, but based on other people seeming less concerned, he can adjust it down to 25%. I could be misquoting here. I'm trying to do this from memory. So, many people would consider this to not be a lack of concern. Like, somebody being like, "Well, it looks to me like a coin flip whether or not you destroy the world," this is not what you want to hear from your Nobel Laureate scientist who helped invent the field and left Google to be able to speak freely about it, so he no longer has a financial stake in making it bigger or smaller one way or the other. Um, many people would call this already a high degree of scientific alarm. Um, Yoshua Bengio was one of the co-founders of deep learning. He co-won computer science's top award, the Turing Award, with Geoffrey Hinton for inventing deep learning. Yoshua Bengio is also, I think, on the concern list. I don't off the top of my head have a direct quote for him about probabilities. It is true that I am more concerned than they are. I would, and I realize that this may sound, you know, somewhat hubristic, attribute this to them being relative newcomers to my field who may not have, like, gotten acquainted with the full list of reasons why it is hard to align AI. That said, coin flip odds of destroying the world is still not what you want to be hearing-

    5. CW

      (laughs)

    6. EY

      ... from your relatively more senior scientists who are relatively newer to the field.

    7. CW

      Mm-hmm. Mm-hmm.

    8. EY

Rel- relatively newer to my field. They are vastly my seniors in artificial intelligence itself, of course. I am, like, speaking tongue in cheek whenever I accuse people of being young whippersnappers. Like, Geoffrey Hinton could say that with a straight face. I am just, you know, doing a bit of light self-mockery there about how I'm not Geoffrey Hinton. Um, but that said, you know, if you are relatively newer to this, you might think, like, "Well, you know, maybe we've just got to use reinforcement learning to make the AIs love us the way a child loves a parent, or love us the way a parent loves a child," and not quite have at your fingertips the top six reasons why that is hard, and the principled obstacles to that, and what will go wrong there. So that is what makes the famous inventors of the field, who only started speaking out about their concerns relatively recently, after leaving their companies so that, you know, they no longer have financial stakes riding on their opinions, say, like, 50/50 the world gets destroyed, instead of my own thing where I'm like, "Yeah, it's predictable that the world gets destroyed if you keep doing this."

    9. CW

      Mm-hmm. Mm-hmm.

    10. EY

But if you ask, like, "What's responsible for Sam Altman at OpenAI possibly having less than 50% odds?" Who knows what that guy's really thinking? Well, you can trace out his long trail over time, from him initially saying, like, "AI will end the world, but in the meanwhile, there will be great companies," to him saying less and less alarmist-sounding things in front of Congress, like where Congress asks him, "Well, you talk about the world ending. By that, do you mean, like, mass unemployment?" And Sam Altman hesitates for two seconds and replies, "Yes." That was the lovely Congressional hearing thing that happened, I think, about a year back now. So what's going on with the AI companies? Uh, I'm not a telepath. I can't read their minds. I would point out that it is immensely well precedented in the history of science and engineering for companies that are making short-term profits to do really sad amounts of damage, vastly disproportionate to the profit that they are making, um, and to be in apparently sincere denial about the negative effects of what they are doing. Um, two cases that come to mind are leaded gasoline and cigarettes. I don't know if you would be familiar off the top of your head with the case of leaded gasoline. Probably even the kids today have heard about cigarettes. The cigarette companies did way more damage to human life in cancer and other health effects than they made in profits. Like, they did make a few billion dollars in profit selling cigarettes, but nothing remotely compared to the cost in human life. This was an immensely negative-sum game. They were doing enormously more damage than the profits that they were making. And any particular advertising professional who got up in the morning and figured out how to market cigarettes to teenagers, any of the scientists that they paid to write stories about how you couldn't really tell whether or not cigarettes were causing lung cancer, would have made a tiny, tiny fraction of the total profit of the cigarette companies. Their CEO would not have made that large a fraction of the total profit of the cigarette company. So they went off and participated in this thing that, you know, caused lung cancer to I don't know how many millions of people, and for what? For this very small profit. How could a human being bring themselves to do that? Through a very simple alchemy. First, you convince yourself that what you're doing is not causing the harm, which is just a very easy thing for human beings to do, all the time, all throughout the entire recorded history of humanity. And then once you've convinced yourself that you're not doing that much harm, well, what's the harm in taking money to not do any harm? Leaded gasoline caused brain damage to tens, maybe hundreds of millions of developing brains in the United States and elsewhere. It caused brain damage to children. For what? The gas companies making leaded gasoline could've, you know, made unleaded gasoline. It's not that they would've gone out of business if they'd somehow gotten together and decided to stop making leaded gasoline.
If they hadn't opposed the regulations that were trying to ban leaded gasoline before it turned into a big deal, back in the 1930s there was an attempt to have regulations against leaded gasoline. Lead was known to be poisonous in large quantities. Why let people spray it all over the place, even in smaller quantities? But the gas companies got together. They managed to prevent that legislation from passing. They poisoned an entire generation, and for what? For gas that burned about 10% more efficiently, I think, was what leaded gasoline basically got you.

    11. CW

      Oh.

    12. EY

Um, for it being more convenient to add lead to the gas instead of adding ethanol to make it burn more smoothly inside of car engines. Trivial. Trivial, trivial, compared to the damage. This is not a conspiracy theory. This is standard medical history I'm talking about here.

    13. CW

      Oh.

    14. EY

Like, I've seen estimates of five points off the tested IQs. And you can look at the chart of which states banned leaded gasoline when and watch the drops in the crime rate, because it disposes you to be more violent, not just stupid, that tiny little bit that hit child after child after child. Why? Why would anyone cause that amount of damage? Because you got your CEO salary from a company that then didn't need to go to the inconvenience of adding ethanol to gasoline instead? 'Cause first you convince yourself it's safe. First you convince yourself you're doing no harm, which is just an easy thing for human brains to convince themselves of. And then why not oppose the legislation against leaded gasoline? It's not doing any harm, right? Ronald Fisher, one of the inventors of modern scientific statistics, testified against it being knowable that cigarettes cause lung cancer, because, you see, no proper controlled experiment had been done on cigarettes causing lung cancer. And so how could you possibly, possibly know from your observational studies, um, showing 20 times the chance of cancer if you were a smoker? How could you possibly know from your correlational studies? And Fisher himself was a heavy smoker. He drank his own Kool-Aid. The inventor of leaded gasoline, I think, had to go away to a sanitarium at one point because of how much he managed to poison himself with lead. He drank his own Kool-Aid. They really managed to convince themselves that they were doing no harm, and so they could do arbitrarily vast amounts of harm in exchange for these comparatively tiny, tiny profits. And to say this is not a substitute for actually tracking the object-level arguments about whether or not AI will kill you and for what reason. You cannot figure out what will happen as a matter of computer science if you build a superintelligence and switch it on by pointing at who has what tainted motives, you know, who has what incentives to say what. But having tried, in my and Nate Soares's book, to make the case for why, on an object level, this is what happens if you build a superintelligence and switch it on, to ask why the people being paid literally hundreds of millions of dollars by Meta to be AI researchers, why people like Sam Altman, who, you know, doesn't quite get paid billions of dollars. He was supposed to be CEO of a nonprofit. He actually stole billions of dollars. But, you know, why the guy stealing billions of dollars in equity, um, from the public that was supposed to own it... Like, how does he manage to convince himself that what he's doing is okay? Or maybe he's not even convinced, you know. We do have him on the record as saying a few years earlier, like, "AI will end the world, but in the meantime, there'll be great companies." You know, maybe he's just like, "Yeah, sure, the world's gonna end, but I get to be important. I get to be there." You know, who but I could be trusted with this power? That, that's kind of-

    15. CW

      Do you think, do you think that that's the position that a lot of the guys at the heads of these, uh, AI companies believe?

    16. EY

I'm not a telepath. I can't tell you what these people are actually thinking. You gotta distinguish between stuff you can possibly know and stuff you can't. Um, but their overt language has often been, like, "Well, building superintelligence is inevitable. Who could possibly stop that?" An international treaty could possibly stop that. A coalition of major nuclear powers could stop that. But leaving that aside, they may have convinced themselves that's not gonna happen. "Who could possibly stop anyone from building superintelligence? So I need to build it. I, only I can be trusted to build it," um, is what their overt rhetoric has sort of been.

    17. CW

      Mm. Okay.

    18. EY

But, but the main thing I'm trying to point out is that, having presented the object-level case that superintelligence will kill everyone, to ask the question of how these companies could possibly believe that this thing bringing them immense short-term profits and letting them be the most important guy in the room is, you know, not going to end the world, is something enormously well precedented in the history of science. To the extent you might think that's what happened, a very ordinary thing happened, not an extraordinary thing. A thing happened that has happened a dozen times before: they managed to convince themselves that they were doing no harm.

    19. CW

      Okay.

    20. EY

      Or, you know, only an acceptable amount of harm, only running a 25% chance of destroying the world. Whatever it is they think is acceptable.

  9. 1:15:01 – 1:23:53

    How Can We Stop Superintelligence Killing Us?

    2. CW

      (laughs) Uh, I'm trying to work out what the solution is. Do you have any proposed solutions that makes this seem slightly less apocalyptic?

    3. EY

The best I have to offer is the same solution that humanity used on global thermonuclear war: don't do it. Instead of having the global thermonuclear war and trying to survive it, which for nuclear war might even have worked, don't have the nuclear war. We managed to do that. It's the best sign of hope I can offer you. It is slightly harder for AI in some ways, if not others. But, uh, you know, people going into the 1950s, 1960s, they thought they were screwed. And that wasn't them indulging in some nice doom-scrolling pessimism, luxuriating in the pleasant feeling of being doomed. These were people who did not want to be doomed. But they looked at the course of human history over the last century. They looked at World War I. They looked at how in the aftermath of World War I everyone had said, "Let's not do that again." And then there'd been World War II. They had some reason to be worried about nuclear war. They had some reason to expect that no country was going to turn down the prospect of making nuclear weapons. They had some reason to believe that, you know, once a bunch of great powers had a bunch of nuclear weapons, why, of course they would go to war anyway and use those nuclear weapons. That was, apparently to them, what had happened with World War II. All these people saying, "We must not have another world war," and then the world war happening anyway. Why didn't we have a nuclear war? Well, on my account of it, it is because, for the first time in all human history, all the leaders of the great powers understood that they personally were going to have a bad day if they started a major war. And people had proclaimed before that, you know, war is a very terrible thing that should never be done, but it wasn't quite the same level of personal consequence. You know, maybe as the General Secretary of the Soviet Union, you would think that if you started nuclear war, you would personally survive, you'd end up in a bunker somewhere, but you wouldn't be going to your favorite restaurants in Moscow ever again. And that was not the situation that obtained before the start of World War I, the start of World War II. You know, it only takes one side to think that they might have a bit of an advantage in war, the sport of kings, to kick off that fun adventure of trying to conquer another country, which, you know, wasn't as much fun for Adolf Hitler as he expected. But you could see how Adolf Hitler might have thought that he was going to have a nice day as a result of invading Poland. And that's what changed: the General Secretary of the Soviet Union and the President of the United States both actually personally expected to have bad days if they started nuclear war. They would not have any better of a day if anyone anywhere on Earth built a superintelligence.

    4. CW

Mm-hmm. Mm-hmm. Yeah, it's this sort of, um, kind of like a tragedy of the commons. It's just this tragedy that everybody's fucked, right? Everything gets blown up no matter who it is that builds it. Well, okay-

    5. EY

      Well, trag- tragedy of the commons is that the commons get overgrazed because the individual farmers benefit from setting their cows loose on it.

    6. CW

      Mm-hmm.

    7. EY

And the thing with nuclear war is that you might get a bit of a benefit by dropping a tactical nuclear weapon. You know, like, the United States could get an immediate benefit by dropping tactical nuclear weapons on the Russian troops in Ukraine, and Russia could get an immediate benefit by dropping tactical nuclear weapons on Ukraine. But neither of them is going to risk the global thermonuclear war that might follow happening with a greater probability. So it's not a classic tragedy of the commons. The thing that stopped nuclear war is that although you could get a short-term advantage from dropping a tactical nuke, or even, like, dropping a strategic nuke on one city, the leaders understood how this was, you know, increasing the probability of a global thermonuclear war. They managed to hold off from doing that for that reason. They understood the concept of how it escalated things. They saw the connection to not getting to go to their favorite restaurants again, even if they were surviving in a bunker somewhere. And with artificial intelligence, what we've got is a ladder where every time you climb another step on the ladder, you get five times as much money, but one of those steps of the ladder destroys the world, and nobody knows-

    8. CW

      (laughs)

    9. EY

And maybe if this true fact can become something that is known and believed by the leaders of a handful of major nuclear powers, they can all be like, "All right, we're not climbing any more rungs of this ladder."

    10. CW

      Mm-hmm.

    11. EY

It is not in my interest that you start to climb this ladder, and it's not even in my own interest to break apart the treaty by climbing another step of this ladder, because then we're all just going to keep climbing, and then we're all going to die.

    12. CW

      Uh-huh.

    13. EY

That, that is the best ray of hope I can offer you: that we manage to not do the stupid thing, the same as we managed to not have a nuclear war, despite many people being concerned, for excellent reasons, that it was going to be an impossible slope not to fall down.

    14. CW

      Okay, so what do we actually do?

    15. EY

Well, you know, voters do not necessarily have all that much power under the modern political process, but, like, the next step for the United States might be something like the president saying, you know, "We're of course not going to give up AI unilaterally," which wouldn't even solve anything on its own, but, "We stand ready to join an international treaty, an international alliance, whose purpose is to prevent further escalation of AI intelligence, further escalation of the AI ladder. You know, we're not gonna do it unilaterally, but we're ready to get together and do it everywhere." And China hasn't quite said that, but they've sort of indicated openness to international arrangements meant to prevent human loss of control from AI. You'd want Britain to say the same thing. And if a bunch of leaders of major powers have said, like, "Yeah, we would join an arrangement to prevent this from getting out of control and everybody on Earth, you know, ending up dead," then from there you can go on to the actual treaty. What can voters do? Well, writing your elected officials is among the things you can try to do there. Um, there's a, uh... if you go to ifanyonebuildsit.com.

    16. CW

      (laughs) I can't believe that you got that URL. Brilliant, okay.

    17. EY

      Yeah.

    18. CW

      Yep.

    19. EY

Ifanyonebuildsit.com, and you click on where it says Act, there's a little-

    20. CW

      Mm-hmm.

    21. EY

... see our guide to calling your representatives, and if you click on March, you'll see a place where you can sign up to march on Washington, DC if 100,000 other people also pledge to march. And for this to just happen in the United States does not solve the problem, because this is not a regional problem where you ban superintelligence inside your own country and then your own country is safe.

    22. CW

      Mm-hmm.

    23. EY

      But this sort of thing can exert some amount of influence on politicians, and more importantly can make it clear to them that they're allowed to discuss it, that they're allowed to want to not die themselves.

    24. CW

      Mm-hmm.

    25. EY

      There are multiple congresspeople who I'm not going to name but whom we have talked to who would, you know, prefer that America not die along with the rest of the world, but it doesn't quite seem like the sort of thing you're allowed to speak out in public about yet.

    26. CW

      Huh.

    27. EY

Voters can make it clear to their politicians that the politicians are allowed to speak out. There's already, like, 70%... if you actually survey American voters, 70% of them say they do not want superintelligence. But, you know, that's not enough for the politicians to feel licensed to act. But maybe, you know, if you call them and if you march on Washington, you know, that's what you can do as an individual

  10. 1:23:53 – 1:31:55

    The Bleak Future of Superhuman AI

    1. EY

      voter.

    2. CW

Well, I applaud you for, uh, trying to get some grassroots stuff going. Congratulations. Um, you've been frank throughout this conversation. I think it's fair for me to be frank here. It does feel a little bit, uh, like you're outgunned. Um, legislation tends to move more slowly than technology does, by many, many years, sometimes decades. Uh, it just feels bleak. It feels, um, if what you say is true, it really is kind of a fluke that gets us to a stage where this goes well, because of the unlikelihood of some moratorium being placed where all AI development is halted and all efforts are placed on this. You only need one bad actor to do it, because, again, it's "if anyone builds it"...

    3. EY

Well, you don't want the international treaty to, you know, fall over if North Korea steals a bunch of GPUs. You do want the treaty to say, "If North Korea steals a bunch of GPUs and builds a, you know, unlicensed data center, then we will clearly communicate diplomatically what is about to happen, and then if North Korea still proceeds, we will drop a bunker buster on their data center."

    4. CW

That assumes that you are somehow able to detect it and that no one can do it, uh, surreptitiously.

    5. EY

It is hard to surreptitious a data center. They consume a lot of electricity.

    6. CW

      Okay, so we can see most of the ones in Russia and China and North Korea?

    7. EY

Um, (laughs) I'm not sure who is looking for them at the moment, and to what extent these things show up on satellites, and to what extent these things show up on, you know, intelligence reports, but-

    8. CW

      Mm-hmm.

    9. EY

... there has previously been an issue of detecting covert nuclear refineries, um, in terms of nuclear non-proliferation, and this was not an unsolvable problem, and data centers are, if anything, even higher profile than nuclear refineries.

    10. CW

      Right. So we are going to threaten some people with... (laughs)

    11. EY

I mean, I wouldn't use the word threaten. I would say that if North Korea is building an unsupervised data center, then you should actually be terrified for your lives and the lives of your children, and you tell North Korea this plainly and truthfully, and then if they don't shut down their data center, you drop a bunker buster on it, and you do this even though North Korea has some nuclear weapons of its own.
