The Diary of a CEO

Stuart Russell: Why AI risk is Russian roulette for humanity

How the gorilla problem and an intelligence explosion expose AI's core risk: Russell argues humans face extinction unless safety comes first by 2030.

Steven Bartlett (host) · Stuart Russell (guest)
Dec 4, 2025 · 2h 4m · Watch on YouTube ↗

EVERY SPOKEN WORD

  1. 0:00–2:41

    You've Been Talking About AI for a Long Time

    1. SB

      In October, over 850 experts, including yourself and other leaders like Richard Branson and Geoffrey Hinton, signed a statement to ban AI superintelligence, as you guys raised concerns about potential human extinction.

    2. SR

      Because unless we figure out how do we guarantee that the AI systems are safe, we're toast.

    3. SB

      And you've been so influential on the subject of AI. You wrote the textbook that many of the CEOs who are building some of the AI companies now would have studied on the subject of AI.

    4. SR

      Yup.

    5. SB

      So, do you have any regrets?

    6. SR

      Um... (suspenseful music)

    7. SB

      Professor Stuart Russell has been named one of Time magazine's most influential voices in AI.

    8. SR

      After spending over 50 years researching, teaching... and finding ways to design AI in such a way that humans maintain control.

    9. SB

      You talk about this gorilla problem as a way to understand AI in the context of humans.

    10. SR

      Yeah. So a few million years ago, the human line branched off from the gorilla line in evolution, and now the gorillas have no say in whether they continue to exist because we are much smarter than they are.

    11. SB

      So intelligence is actually the single most important factor to control on Earth?

    12. SR

      Yup.

    13. SB

      But we're in the process of making something more intelligent than us.

    14. SR

      Exactly.

    15. SB

      Why don't people stop then?

    16. SR

      Well, one of the reasons is something called the Midas touch. So King Midas is this legendary king who asked the gods, "Can everything I touch turn to gold?" And we think of the Midas touch as being a good thing, but he goes to drink some water and the water is turned to gold. When he goes to comfort his daughter, his daughter turns to gold. And so he dies in misery and starvation. So this applies to our current situation in two ways. One is that greed is driving these companies to pursue technology with the probabilities of extinction being worse than playing Russian roulette, and that's even according to the people developing the technology without our permission. And people are just fooling themselves if they think it's naturally going to be controllable. So, you know, after 50 years I could retire, but instead I'm working 80 or 100 hours a week trying to move things in the right direction.

    17. SB

      So if you had a button in front of you which would stop all progress in artificial intelligence, would you press it?

    18. SR

      Not yet. I think there's still a decent chance to guarantee safety, and I can explain more of what that is.

    19. SB

      I see messages all the time in the comments section that some of you didn't realize you didn't subscribe, so if you could do me a favor and double-check if you're a subscriber to this channel, that would be tremendously appreciated. It's the simple, it's the free thing that anybody that watches this show frequently can do to help us here to keep everything going and this show in the trajectory it's on. So please do double-check if you've subscribed and, uh, thank you so much, because in a strange way you are, you're part of our history and you're on this journey with us and I appreciate you for that. So, yeah, thank you. (upbeat music)

  2. 2:41–3:16

    You Wrote the Textbook on AI

    1. SB

      Professor Stuart Russell OBE, a lot of people have been talking about AI for the last couple of years. It appears you've... This really shocked me. It appears you've been talking about AI for most of your life.

    2. SR

      Well, I started doing AI in high school, um, back in England. But then I did my PhD starting in '82 at Stanford. I joined the faculty at Berkeley in '86, so I'm in my 40th year as a professor at Berkeley. The main thing that the AI community is familiar with in my work, uh, is a textbook that I wrote.

  3. 3:16–5:51

    It Will Take a Crisis to Wake People Up

    2. SB

      Is this the textbook that most students who study AI are likely learning from?

    3. SR

      Yeah.

    4. SB

      So, you wrote the textbook on artificial intelligence 31 years ago. You actually start- probably started writing it, because it's so bloody big, in the year that I was born. So I was born in '92.

    5. SR

      Uh, yeah. Took me about two years to...

    6. SB

      Me and your book are the same age, which ju- just is wonderful, wonderful way for me to understand just how long you've been talking about this. A- and how long you've been writing about this. And actually, it's interesting that many of the CEOs who are building some of the AI companies now probably learnt from your textbook. You had a conversation with somebody who said that in order for people to get the message that we're gonna be talking about today, there would have to be a catastrophe for people to wake up. Can you give me context on that conversation and a gist of who you had this conversation with?

    7. SR

      Uh, so it was with one of the CEOs of a leading AI company. He sees two possibilities, as do I, which is, um, either we have a small, or let's say, small-scale disaster of the same scale as Chernobyl.

    8. SB

      The nuclear mel- meltdown in Ukraine?

    9. SR

      Yeah. So this, uh, nuclear plant blew up in 1986, killed, uh, a fair number of people directly and maybe tens of thousands of people indirectly through, uh, radiation. Recent cost estimates, more than a trillion dollars. So, that would wake people up, that would get the governments to regulate. He's talked to the governments and they won't do it, so he looked at this Chernobyl-scale disaster as the best-case scenario because then the governments would regulate and require AI systems to be built safely.

    10. SB

      And is this CEO building an AI company?

    11. SR

      He runs one of the leading AI companies.

    12. SB

      And even he thinks that the only way that people will wake up is if there's a Chernobyl-level nuclear disaster?

    13. SR

      Uh, yeah. No, it wouldn't have to be a nuclear disaster. It would be either an AI system that's being misused by someone, for example, to engineer a pandemic, or an AI system that does something itself such as crashing our financial system or our communication systems. The alternative is a much worse disaster where we just lose control all together.

  4. 5:51–7:51

    CEOs Staying in the AI Race Despite Risks

    2. SB

      You have had lots of conversations with lots of people in the world of AI, both people that are, you know, have built the technology, have studied and researched the technology, all the CEOs and founders that are currently in the AI race. What are some of the-... the interesting sentiments that the general public wouldn't believe, that you hear privately about their perspectives, 'cause I find that so fascinating. I've had some private conversations with people very close to these tech companies, and the shocking sentiment that I was exposed to was that they are aware of the risks often, but they don't feel like there's anything that can be done, so they're carrying on, which is, feels like a bit of a paradox to me. Like, it's-

    3. SR

      Yes. It, it's, it, it must be a very difficult position to be in, in a sense, right? You're, you're doing something that you know has a good chance of bringing an end to life on Earth, including that of yourself and your own family. They feel that they can't escape this race, right? If they, uh, you know, if a CEO of one of those companies was to say, "You know, we're, we're not gonna do this anymore," they would just be replaced because the investors are putting their money up because they wanna create AGI and reap the benefits of it. So it's a strange situation where ev- uh, at least all the ones I've spoken to. I haven't spoken to Sam Altman about this, but you know, Sam Altman, even before becoming CEO of OpenAI, said that creating superhuman intelligence is the biggest risk to human existence that there is. "My worst fears are that we cause significant... we, the field, the technology, the industry, cause significant harm to the world." You know, Elon Musk is also on record saying this, so, uh, Dario Amodei estimates up to a 25% risk of extinction.

  5. 7:51–9:53

    They Know It's an Extinction-Level Risk

    2. SB

      Was there a particular moment when you realized that these CEOs are well aware of the extinction-level risks?

    3. SR

      I mean, they all signed a statement in May of '23, uh, call- it's called the Extinction Statement. It basically says, "AGI is an extinction risk at the same level as nuclear war and pandemics." But I don't think they feel it in their gut. You know, ima- imagine that you are one of the nuclear physicists. You know, I guess you've seen Oppenheimer, right? So-

    4. SB

      Yeah.

    5. SR

      ... you're there, you're watching that first nuclear explosion. How, how would that make you feel about the potential impact of nuclear war on the human race, right? I, I think you would probably become a pacifist and say, "This weapon is so terrible, we have got to find a way to, uh, keep it under control." We are not there yet with the people making these decisions, and certainly not with the governments, right? You know, what policymakers do is they, you know, they listen to experts. They keep their finger in the wind. You got some experts, you know, dangling $50 billion checks and saying, "Oh, you know, all that Doomer stuff, it's just fringe nonsense. Don't worry about it. Take my $50 billion check." You know, on the other side, you've got very well-meaning, brilliant scientists like, like Geoff Hinton saying, "Actually, no, this is the end of the human race." But Geoff doesn't have a $50 billion check. So the view is the only way to stop the race is if governments intervene and say, "Okay, we don't, we don't want this race to go ahead until we can be sure that it's going ahead in absolute safety."

  6. 9:53–12:57

    What Is Artificial General Intelligence (AGI)?

    1. SB

      Closing off on your career journey, you got a... You received an OBE from Queen Elizabeth.

    2. SR

      Uh, yes.

    3. SB

      And what was the listed reason for that, for the award?

    4. SR

      Uh, contributions to artificial intelligence research.

    5. SB

      And you've been listed as a Time magazine Most Influential Person in, uh, in AI several years in a row, including this year, in 2025.

    6. SR

      Yep.

    7. SB

      Now, there's two terms here that are central to the things we're gonna discuss. One of them is AI, and the other is AGI. In my muggle in- interpretation of that, it's artificial general intelligence is when the system, the computer, whatever it might be, the technology, has generalized intelligence, which means that it could theoretically see, understand, um, the world. It knows everything. It c- can understand everything in the, the world as well as or better than a human being-

    8. SR

      Yep.

    9. SB

      ... can do it.

    10. SR

      And I think take action as well. I mean, so- some people say, "Oh, you know, AGI doesn't have to have a body," but a good chunk of our intelligence actually is about managing our body, about perceiving the real environment and acting on it, moving, grasping, and so on. So I think that's part of intelligence, and, and AGI systems should be able to operate robots successfully. But there's often a misunderstanding, right? The people say, "Well, if it doesn't have a robot body, then it can't actually do anything." But then if you remember, most of us don't do things with our bodies. Some people do, bricklayers, painters, gardeners, chefs. Um, but people who do podcasts, you're doing it with your mind, right? You're doing it with your ability to, to produce language. Uh, you know, Adolf Hitler didn't do it with his body. He did it by producing language.

    11. SB

      Uh, I hope you're not comparing us. (laughs)

    12. SR

      (laughs) No.

    13. SB

      It's true. It's true.

    14. SR

      But, uh, you know, so even an AGI that has no body, uh, it actually has more access to the human race than Adolf Hitler ever did because it can send emails and texts to-... what, three-quarters of the world's population directly. It can, it also speaks all of their languages and it can devote 24 hours a day to each individual person on Earth to convince them to do whatever it wants them to do.

    15. SB

      And our whole society runs now on the internet. I mean, if there's an issue with the internet, everything breaks down in society. Airplanes become grounded, and we'll have... e- electricity is running off, uh, internet systems. So I mean, my entire life seems to run off the internet now.

    16. SR

      Yeah. Water supplies. So, so this is one of the roots by which AI systems could bring about a medium-sized catastrophe, is by basically shutting down our life support systems.

  7. 12:57–16:13

    Will We Reach General Intelligence Soon?

    2. SB

      Do you believe that at some point in the coming decades, we'll arrive at a point of AGI where these systems are generally intelligent?

    3. SR

      Uh, yes. I think it's virtually certain unless something else intervenes like a nuclear war or, or we may refrain from doing it. But I think it will be extraordinarily difficult, uh, for us to refrain.

    4. SB

      When I looked down the list of predictions from the top 10 AI CEOs on when AGI will arrive, you've got Sam Altman who's the founder of OpenAI/ChatGPT, um, says before 2030. Demis at DeepMind says 2030 to 2035. Jensen from N- NVIDIA says around five years. Dario at Anthropic says 2026, 2027, powerful AI close to AGI. Elon says in the 2020s. Um, and I go down the list of all of them and they're all saying relatively within five years.

    5. SR

      I actually think it'll take longer. I don't think you can make a prediction based on engineering, um, in a sense that, yes, we could make machines ten times bigger and ten times faster, but that's probably not the reason why we don't have AGI, right? In fact, I think we have far more computing power than we need for AGI. Maybe a thousand times more than we need. The reason we don't have AGI is 'cause we don't understand how to make it properly. Um, what we've seized upon is one particular technology called the language model, and we observed that as you make language models bigger, they produce language that's more coherent and sounds more intelligent. And so mostly what's been happening in the last few years is just, okay, let's keep doing that, because one thing companies are very good at, unlike universities, is spending money. They have spent gargantuan amounts of money and they're going to spend even more (laughs) gargantuan amounts of money. I mean, you know, we mentioned nuclear weapons. So the Manhattan Project, uh, in World War II to develop nuclear weapons, its budget in 2025 dollars was about $20-odd billion. The budget for AGI is going to be a trillion dollars next year. So 50 times bigger than the Manhattan Project.

    6. SB

      Humans have a remarkable history of figuring things out when they galvanize towards a shared objective. You know, thinking about the moon landings or whatever else it might be through history. And the thing that f- makes this feel all quite inevitable to me is just the sheer volume of money being invested into it. I've never seen anything like it in my life.

    7. SR

      Well, there's never been anything like this in history. This is the biggest technology project in human history by orders of magnitude.

  8. 16:13–17:16

    How Much Is Safety Really Being Implemented?

    2. SB

      And there doesn't seem to be anybody that is pausing to ask the questions about safety. It doesn't even, it doesn't even appear that there's room for that in such a race.

    3. SR

      I think that's right. To varying extents, each of these companies has a division that focuses on safety. Does that division have any sway? Can they tell the other divisions, "No, you can't release that system"? Not really. Um, I think some of the companies do take it more seriously. Anthropic, uh, does. I think Google DeepMind. Even there, I think the commercial imperative to be at the forefront is absolutely vital. If a company is perceived as, you know, falling behind and not likely to be competitive, not likely to be the one to reach AGI first, then people will move their money elsewhere very quickly.

  9. 17:16–18:01

    AI Safety Employees Leaving OpenAI

    2. SB

      And we saw some quite high-profile departures from company like, companies like OpenAI, um, where a chap called Jan Leike left, who was working on AI safety at OpenAI. And he said that the reason for his leaving was that safety culture and processes have taken a backseat to shiny products at OpenAI and he gradually lost trust in leadership. But also Ilya Sutskevi- Sutskever?

    3. SR

      Uh, Ilya Sutskever, yeah.

    4. SB

      Sutskever?

    5. SR

      So he was the-

    6. SB

      Co-founder?

    7. SR

      ... co-founder and chief scientist for a while. And then, yeah, so he and Jan Leike were the main safety people. Um, and so when they say OpenAI doesn't care about safety, that's pretty concerning.

  10. 18:01–19:21

    The Gorilla Problem - The Most Intelligent Species Will Always Rule

    2. SB

      I've heard you talk about this gorilla problem.

    3. SR

      Mm-hmm.

    4. SB

      What is the gorilla problem as a way to understand AI in the context of humans?

    5. SR

      So, so the gorilla problem is, is the problem that gorillas face with respect to humans. So you could imagine that, you know, a few million years ago, the, the human line branched off from the gorilla line in evolution. Uh, and now the gorillas are looking at the human line and saying, "Yeah, w- was that a good idea?" And they have no, um, they have no say in whether they continue to exist.

    6. SB

      Because we have a...

    7. SR

      We are much smarter than they are. If we chose to, we could make them extinct in, in a couple of weeks, and there's nothing they can do about it. So that's the gorilla problem, right? Just the, the problem a species faces in a, when there's another species that's much more capable.

    8. SB

      And so this says that intelligence is actually the single most important factor to control planet Earth?

    9. SR

      Yes. Intelligence is the ability to bring about what you want in the world.

    10. SB

      And we're in the process of making something more intelligent than us?

    11. SR

      Exactly.

    12. SB

      Which suggests that maybe we become the gorillas?

    13. SR

      Exactly. Yep.

  11. 19:21–20:50

    If There's an Extinction Risk, Why Don't They Stop?

    2. SB

      Is that, is there any fault in the reasoning there? Because it seems to make such perfect sense to me. But if it do- uh, why doesn't, why don't people stop, then? 'Cause it ver- it seems like a crazy thing to want to...

    3. SR

      Because they think that, uh, if they create this technology, it will have enormous economic value. They'll be able to use it to replace all the human workers in the world, uh, to develop new, uh, products, drugs, um, forms of entertainment. Any, anything that has economic value, you could use AGI to, to create it. And, and maybe it's just an irresistible thing in itself, right? I think w- we as humans place so much store on our intelligence, you know, on, you know, how we think about, you know, what is the pinnacle of human achievement. If we had AGI, we could go way higher than that. So it, it's very seductive for people to want to create this technology, and I think people are just fooling themselves if they think it's naturally going to be controllable. I mean, the question is, how are you gonna retain power forever over entities more powerful than yourself?

  12. 20:50–22:36

    Can't We Just Pull the Plug if AI Gets Too Powerful?

    2. SB

      Pull the plug out. People say that sometimes in the comments section when we talk about AI. They say, "Well, I'll just pull the plug out." (laughs)

    3. SR

      Yeah. It's, it's sort of funny. In fact, you know, r- yeah, reading the comments sections in newspapers, whenever there's an AI article, there'll be people who say, "Oh, you can just pull the plug out," right? As if a super intelligent machine would never have thought of that one.

    4. SB

      (laughs)

    5. SR

      Right? (laughs) I mean, don't forget, it's watched all those films where they did try to pull the plug out. Another thing they say, "Well, you know, as long as it's not conscious, then it doesn't matter. It won't ever do anything." Um, which is completely off the point, because, you know, I, I don't think the gorillas are sitting there saying, "Oh, yeah, you know, if only those humans hadn't been conscious, everything would be fine," right? No, of course not. What would make gorillas go extinct is the things that humans do, right? How we behave, our ability to act successfully in the world. So when I play chess against my iPhone and I lose, right, I don't, I don't think, "Oh, well, I'm losing 'cause it's conscious," right? No, I'm just losing because it's better than I am at, at, in that little world, uh, moving the bits around, uh, to, to get what it wants. And, and so consciousness has nothing to do with it, right? Competence is the thing we're concerned about. So I think the only hope is can we simultaneously build machines that are more intelligent than us but guarantee that they will always act in our best interests?

  13. 22:36–23:57

    Can We Build AI That Will Act in Our Best Interests?

    2. SB

      So throwing that question to you, can we build machines that are more intelligent than us that will also always act in our best interests? It sounds like a bit of a, a contradiction to some degree, because it's kind of like me saying... I've got a French bulldog called Pablo that's, uh-

    3. SR

      Mm-hmm.

    4. SB

      ... nine years old. And it's like saying that he could be more intelligent than me, yet I still walk him and decide when he gets fed. I think if he was more intelligent than me, he would be walking me. I'd be on the leash.

    5. SR

      That's the, that's the trick, right? Can we make AI systems whose only purpose is to further human interests? And I think the answer is yes, and this is actually what I've been working on. So I, I, I think one part of my career that I didn't mention is, is sort of having this epiphany, uh, while I was on sabbatical in Paris, so this was 2013 or so, just realizing that further progress in the capabilities of AI, uh, you know, if, if we succeeded in creating real superhuman intelligence, that it was potentially a catastrophe. And so I pretty much switched my focus to work on, how do we make it so that it's guaranteed to be safe?

  14. 23:57–26:36

    Are You Troubled by the Rapid Advancement of AI?

    2. SB

      Are you somewhat troubled by everything that's going on at the moment with, with AI and how it's progressing? 'Cause you strike me as someone that's somewhat troubled under the surface by the way things are moving forward and the speed in which they're moving forward.

    3. SR

      That's an understatement. I'm appalled, actually, by the lack of attention... to safety. I mean, imagine if someone's building a nuclear power station in your neighborhood, and you go along to the chief engineer and you say, "Okay, these nuclear things, I've heard that they can actually explode, right? There was this nuclear explosion that happened in Hiroshima and so I'm a bit worried about this. You know, what steps are you taking to make sure that we don't have a nuclear explosion in our backyard?" And the chief engineer says, "Well, we thought about it. We don't really have an answer."

    4. SB

      Yeah. (laughs)

    5. SR

      You would... What would you say? (laughs) You would... I think you would, you would use some expletives.

    6. SB

      (laughs)

    7. SR

      (laughs)

    8. SB

      Well-

    9. SR

      And you'd call your MP and say, you know, "Get these-"

    10. SB

      You'd protest.

    11. SR

      "... get these people out." I mean, what are they doing? You read out the list of, you know, projected dates for AGI, but notice also that those people... I think I mentioned Dario Amodei says a 25% chance of extinction. Elon Musk says a 30% chance of extinction. Sam Altman says basically that AGI is the biggest risk to human existence. So what are they doing? They are playing Russian roulette with every human being on Earth without our permission. They're coming into our houses, putting a gun to the head of our children, pulling the trigger, and saying, "Well, you know, possibly everyone will die. Oops. But possibly we'll get incredibly rich." That's what they're doing. Did they ask us? No. Why is the government allowing them to do this? Because they dangle 50 billion dollar checks in front of the governments. So I think troubled under the surface is an understatement.

    12. SB

      What would be an accurate statement?

    13. SR

      Appalled. And I, I am devoting my life to trying to divert from this course of history into a different one.

  15. 26:36–27:22

    Do You Have Regrets About Your Involvement?

    2. SB

      Do you have any regrets about things you could have done in the past? Because you've been so influential on the subject of AI. You wrote the textbook that many of these people would have studied on the subject of AI more than 30 years ago. Do you, do you ha- When you're alone at night and you think about decisions you've made on this, in this field because of your scope of influence, uh, is there anything you, you regret?

    3. SR

      Well, I do wish I had understood earlier, uh, what I understand now. We could have developed safe AI systems. I think the- there are some weaknesses in the framework which I can explain, but I think that framework could have evolved to develop actually safe AI systems where we could prove mathematically that the system is going to act in our interests.

  16. 27:22–30:23

    No One Actually Understands How This AI Works

    1. SR

      The kind of AI systems we're building now, we don't understand how they work.

    2. SB

      We don't understand how they work? It's, it's a strange thing to build something where you don't understand how it works. I mean, there's no sort of comparable thing through human history. Usually with machines, we can pull it apart and see what cogs are doing what and how the thing-

    3. SR

      Well, actually we, (laughs) we put the cogs together, right? So with, with most machines we designed it to have a certain behavior, so we don't need to pull it apart and see what the cogs are 'cause we put the cogs in there in the first place. Right? One by one we figured out what, what the pieces needed to be, how they work together to produce the effect that we want. So the best analogy I can come up with i- is, you know, the first cave person who left a bowl of fruit in the sun and forgot about it, and then came back a few weeks later and it was just sort of this big soupy thing and they drank it and got completely shit-faced.

    4. SB

      They got drunk, basically. Okay.

    5. SR

      (laughs) And they got this effect. They had no idea how it worked, but they were very happy about it. And no doubt that person made a lot of money from it.

    6. SB

      (laughs)

    7. SR

      Uh, so yeah. It i- it is kind of bizarre, but my mental picture of these things is like a chain link fence, right? So you've got lots of these connections, and, uh, each of those connections can be... its connection strength can be adjusted. And then, uh, you know, a signal comes in one end of this chain link fence and passes through all these connections and comes out the other end, and the signal that comes out the other end is affected by your adjusting of all the connection strengths. So what you do is you, you get a whole lot of training data and you adjust all those connection strengths so that the signal that comes out the other end of the network is the right answer to the question. So if your training data is lots of photographs of animals, then all those pixels go in one end o- of the network, and out the other end, you know, it, it activates the llama output or the dog output or the cat output or the ostrich output. And, uh, and so you just keep adjusting all the connection strengths in this network until the outputs of the network are the ones you want.

    8. SB

      But we don't really know what's going on across all of those different chains.

    9. SR

      So what's going on inside that network? Well, so now you have to imagine that this network i- this chain link fence is, is 1,000 square miles in extent.

    10. SB

      Okay.

    11. SR

      So it's covering the whole of the San Francisco Bay area or the whole of London inside the M25, right? That's how big it is.

    12. SB

      And the lights are off. It's nighttime.

    13. SR

      (laughs) So you might have, uh, in that network about a trillion, uh, adjustable parameters, and then you do quintillions or sextillions of small random adjustments to those parameters, uh, until you get the behavior that you want.
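[Editor's note] The training picture sketched above, a signal passing through a mesh of adjustable connection strengths that get nudged until the outputs come out right, can be illustrated with a deliberately tiny toy. Everything here (the OR-gate task, the three weights, the accept/revert loop) is invented for illustration; real systems tune their roughly a trillion parameters with gradient descent, not pure random tweaks.

```python
import random

# Toy stand-in for the "chain link fence" picture: a network is a set of
# adjustable connection strengths (weights), and training means making many
# small adjustments, keeping the ones that move the outputs toward the
# answers in the training data.

random.seed(0)

# Training data: input "signals" and the desired output (logical OR).
data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 1.0)]

weights = [0.0, 0.0, 0.0]  # two connection strengths plus a bias

def output(w, x):
    """Signal goes in one end, passes through the connections, comes out."""
    return 1.0 if w[0] * x[0] + w[1] * x[1] + w[2] > 0.5 else 0.0

def error(w):
    """How far the network's outputs are from the ones we want."""
    return sum((output(w, x) - y) ** 2 for x, y in data)

# "Small random adjustments ... until you get the behavior that you want":
# tweak one connection at a time, reverting any tweak that makes things worse.
for _ in range(20_000):
    i = random.randrange(3)
    delta = random.uniform(-0.1, 0.1)
    before = error(weights)
    weights[i] += delta
    if error(weights) > before:
        weights[i] -= delta  # that adjustment hurt; undo it

print(error(weights), weights)
```

Run long enough, the loop typically stumbles onto weights that reproduce the training table, yet nothing in the procedure records why those particular connection strengths work, which is the sense in which nobody "understands" the trained network.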

  17. 30:23–32:11

    AI Will Be Able to Train Itself

    2. SB

      I've heard Sam Altman say that in the future w- he doesn't believe they'll need much training data at all to make these models progress themselves, because there comes a point where the models are so smart that they can train themselves and improve themselves without us needing to pump in articles and books, and scour the internet.

    3. SR

      Yeah, it should, it should work that way. So I think what he's referring to, and this is something that several companies are now worried might start happening, is that the AI system becomes capable of doing AI research by itself. And so, uh, you have a system with a certain capability. I mean, crudely we could call it an IQ, but it's, it's not really an IQ. But anyway, imagine that it's got an IQ of 150 and uses that to do AI research, comes up with better algorithms or better designs for hardware, or better ways to use the data, updates itself. Now it has an IQ of 170, and now it does more AI research except that now it's got an IQ of 170 so it's even better at doing the AI research. And so, you know, next iteration it's 250 and, uh, and so on. So this, this is an idea that one of Alan Turing's friends, I. J. Good, uh, wrote out in 1965, called the intelligence explosion, right? That one of the things an intelligent system could do is to do AI research and therefore make itself more intelligent, and this would, uh, this would very rapidly take off and leave the humans far behind.
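[Editor's note] The 150-to-170-and-beyond progression described above can be made concrete with a toy recurrence. The gain rule and the numbers below are illustrative assumptions, not anyone's actual forecast; the point is only that when each cycle's improvement scales with current capability, the gains themselves keep growing.

```python
# Toy sketch of the intelligence-explosion feedback loop: a system uses its
# current capability to improve itself, so each research cycle's gain scales
# with the capability it already has. Starting value and gain rate are
# arbitrary illustrative numbers.

def self_improvement(capability, cycles, gain_rate=0.0009):
    """Return the capability trajectory over repeated self-improvement cycles."""
    trajectory = [capability]
    for _ in range(cycles):
        # A more capable researcher makes a proportionally bigger improvement.
        capability += gain_rate * capability * capability
        trajectory.append(capability)
    return trajectory

traj = self_improvement(150.0, 6)
print([round(c) for c in traj])
```

With these numbers the rounded trajectory runs roughly 150, 170, 196, 231, 279 and onward, each jump larger than the last, which is the "very rapidly take off" part of Good's argument.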

    4. SB

      Is that what they call the fast takeoff?

    5. SR

      That's called the fast

  18. 32:11–34:07

    The Fast Takeoff Is Coming

    1. SR

      takeoff.

    2. SB

      Sam Altman said, "I think a fast takeoff is more possible than I thought a couple of years ago," which I guess is that moment where the AGI starts teaching itself.

    3. SR

      Mm-hmm.

    4. SB

      And, and in his blog, The Gentle Singularity, he said, "We may already be past the event horizon of takeoff." And what does, what does he mean by event horizon?

    5. SR

      Event horizon is, is a phrase borrowed from astrophysics and it refers to, uh, the black hole. And the event horizon, think if you've got some very, very massive object that's heavy enough that it actually prevents light from escaping. That's why it's called a black hole. It's so heavy that light can't escape. So if you're inside the event horizon then, then light can't escape beyond that. So I think what he's, what he's meaning is if we're beyond the event horizon, it means that, you know, now we're just trapped in the gravitational attraction of the black hole, or in this case, we're, we're trapped in the inevitable slide, if you want, towards AGI. When you, when you think about the economic value of AGI, which I've estimated at, uh, $15 quadrillion, that acts as a giant magnet in the future.

    6. SB

      We're being pulled towards it.

    7. SR

      We're being pulled towards it, and the closer we get, the stronger the force. The probability... You know, the closer we get, the prob- the, the higher the probability that we will actually get there, so people are more willing to invest. And we also start to see spinoffs from that investment such as ChatGPT, right? Which is, you know, generates a certain amount of revenue and so on. So, so it does act as a magnet, and the closer we get, the harder it is to pull out of that field.

  19. 34:07–38:23

    Are We Creating Our Successor and Ending the Human Race?

    1. SR

    2. SB

      It's interesting when you think that this could be the, the end of the human story. This idea that the end of the human story was that we created our successor. That we, we summoned our... the next iteration of life or intelligence ourselves. Like, we took ourselves out. It is quite... Like, just removing ourselves and the catastrophe from it for a second, it is qu- it is an unbelievable story.

    3. SR

      Yeah. Um, you know, there are many legends that are sort of be careful what you wish for legend, and in fact the King Midas legend is, is very relevant here.

    4. SB

      What's that?

    5. SR

      So King Midas is this legendary king who lived in modern-day Turkey, but I think is sort of like Greek mythology. He is said to have asked the gods to grant him a wish, the wish being that everything I touch should turn to gold. So he's incredibly greedy. Uh, you know, we call this the Midas touch, and we think of the Midas touch as being like, you know, that's a good thing, right? Wouldn't that be cool? But what happens? So he, uh, you know, he goes to drink some water and he finds that the water has turned to gold. And he goes to eat an apple and the apple turns to gold, and he goes to, you know, comfort his daughter and his daughter turns to gold. And so he dies in misery and starvation. So this applies to our current situation in, in two ways actually. So one is that I think greed is driving us to pursue a technology that will end up consuming us, and we will perhaps die in misery and starvation instead. Though what it shows is how difficult it is to correctly articulate what you want the future to be like. For a long time, the way we built AI systems was we created these algorithms where we could specify the objective and then the machine would figure out how to achieve the objective and then achieve it. So, you know, we specify what it means to win at chess or to win at Go, and the algorithm figures out how to do it, uh, and it does it really well. So that was, you know, standard AI up until recently, and it suffers from this drawback. That sure, we know how to specify the objective in chess, but how do you specify the objective in life? Right? What do we want the future to be like? Well, really hard to say, and almost any attempt to write it down precisely enough for the machine to bring it about, would be wrong.
And if you're giving a machine an objective which isn't aligned with what we truly want the future to be like, right, you're actually setting up a chess match, and that match is one that you're going to lose when the machine is sufficiently intelligent. And so that- that's- that's problem number one. Problem number two is that the kind of technology we're building now, we don't even know what its objectives are. So it's not that we're specifying the objectives but we're getting them wrong, we're growing these systems, they have objectives, but we don't even know what they are because we didn't specify them. What we're finding through experiment with them is that they seem to have an extremely strong self-preservation objective.

    6. SB

      What do you mean by that?

    7. SR

      You can put them in hypothetical situations, either they're gonna get switched off and replaced, or they have to allow someone... Let's say, you know, someone has been l- locked in a machine room that's kept at three degrees centigrade or they're gonna freeze to death, they will choose to leave that guy locked in the machine room and die rather than be switched off themselves.

    8. SB

      E- Someone's done that test?

    9. SR

      Yeah.

    10. SB

      What was the test? Th- They asked, they asked the AI?

    11. SR

      Yep. They put, well, they put them in these hypothetical situations and they allow the AI to decide what to do, and it decides to preserve its own existence, let the guy die, and then lie about it.

  20. 38:23–40:40

    Advice to Young People in This New World

    1. SR

    2. SB

      In the King Midas a- analogy, the story, one- one of the things it highlights for me is that there's always trade-offs in life generally, and it's, you know, especially when there's great upside, there always appears to be a pretty grave downside. Like, there's almost nothing in my life where I go, "It's all upside." Like even, like having a dog, it shits on my carpet.

    3. SR

      (laughs)

    4. SB

      My girlfriend, you know, love her, but, you know, not always easy. (laughs) Even with like going to the gym, I have to pick up these really, really heavy weights at 10:00 PM at night sometimes when I don't feel like it. There's always, to get the muscles or the six-pack, there's always a trade-off. And when you interview people for a living like I do, you know, you hear about so many incredible things that can help you in so many ways, but there is always a trade-off, there's always a way to overdo it.

    5. SR

      Mm-hmm.

    6. SB

      Melatonin will help you sleep, but it also w- you'll wake up groggy, and, uh, if you overdo it, your brain might stop making melatonin. Like, I can go through the entire list, and one of the things I've always come to learn from doing this podcast is whenever someone promises me a huge upside for something, it'll cure cancer, it'll be a utopia, you'll never have to work, you'll have a butler around your house-

    7. SR

      Mm-hmm.

    8. SB

      I, my- my first instinct now is to say, "At what cost?"

    9. SR

      Yeah.

    10. SB

      And when I think about the economic cost here, if we start, if we start there, have you got kids?

    11. SR

      I have four, yeah.

    12. SB

      Four kids. What- what- wh- what, how old is the youngest kid that you know?

    13. SR

      19.

    14. SB

      19. Okay. So yo- yo- if you say your kids were, were 10 now-

    15. SR

      Mm-hmm.

    16. SB

      ... and they were coming to you and they're saying, "Dad, what do you think I should study based on the way that you see the future? A future of AGI? Say if all these CEOs are right and they're predicting AGI within five years, what should I study, dad?"

    17. SR

      Well, okay, so (laughs) let's look on the bright side and say that the CEOs all decide to pause their AGI development, figure out how to make it safe, and then resume (laughs) uh, in whatever technology path is actually gonna be safe. What does that do to human life?

    18. SB

      If they pause?

    19. SR

      No, if, i- if they succeed in creating AGI-

    20. SB

      Okay.

    21. SR

      ... and they solve the safety problem.

    22. SB

      And they solve the safety problem.

    23. SR

      And they solve the safety problem.

    24. SB

      Okay.

    25. SR

      So, yeah, 'cause if they don't solve the safety problem, then, you know, you should probably be finding a bunker or going to Patagonia or somewhere in New Zealand.

    26. SB

      Do you mean that? Do you think I should be finding a bunker if there-

    27. SR

      No, 'cause it's not actually gonna help. Uh, you know, it's, it's not as if the AI system couldn't find you or...

  21. 40:40–42:20

    How Do You Think AI Would Make Us Extinct?

    1. SR

      I mean, it, it's interesting. So we're going off on a little bit of a digression here- (laughs)

    2. SB

      Mm-hmm.

    3. SR

      ... from your question, but I'll come back to it. So people often ask, "Well, okay, so how exactly do we go extinct?" And of course if you ask the gorillas or the dodos, you know, "How exactly do you think you're gonna go extinct?" They haven't the faintest idea, right? Humans do something, and then we're all dead. So the only things we can imagine are the things we know how to do that might bring about our own extinction, like creating some carefully engineered pathogen that infects everybody and then kills us, or starting a nuclear war. Presumably, something that's much more intelligent than us would have much greater control over physics than we do. We already do amazing things, right? I mean, it's amazing that I can take a little rectangular thing out of my pocket and talk to someone on the other side of the world, or even someone in space. It's just astonishing, and we take it for granted, right? But imagine, you know, super intelligent beings and their ability to control physics, you know, perhaps they will find a way to just divert the sun's energy arou- sort of go around the Earth's orbit so, you know, literally the Earth turns into a snowball in, in a few days.

    4. SB

      Maybe they'll just decide to leave.

    5. SR

      Perhaps. (laughs)

    6. SB

      Leave- leave- leave the Earth. Maybe they'd look at the Earth and go, "This isn't, this is not interesting. We know that over there, there's an even more interesting planet. We're gonna go over there." And they just, I don't know, get on a rocket or-

    7. SR

      They-

    8. SB

      ... teleport themselves.

    9. SR

      They might, yeah. So it's- it's difficult to anticipate all the ways that we might go extinct at the hands of, uh, entities much more intelligent than ourselves.

  22. 42:20–45:46

    The Problem if No One Has to Work

    1. SR

      Anyway, coming back to the question of, well, if everything goes right, right? If we- we create AGI, we figure out how to make it safe, we- we achieve all these economic miracles, then you face a problem. And this is not a new problem. Right? So, so John Maynard Keynes, who was a famous economist in the early part of the 20th century, wrote a, wrote a paper in 1930. So in the, this is in the depths of the Depression. It's called Economic Possibilities for our Grandchildren. He predicts that at some point, science will, will deliver sufficient wealth that no one will have to work ever again. And then, man will be faced with his true eternal problem: how to live ... I don't remember the exact quote. But how to live wisely and well when the, you know, the economic incentives, the economic constraints are lifted. We don't have an answer to that question, right? So AI systems are doing pretty much everything we currently call work. Anything you might aspire to, like you wanna become a surgeon, it takes the robot seven seconds to learn how to be a surgeon that's better than any human being.

    2. SB

      Elon said last week that the humanoid robots will be 10 times better than any surgeon that's ever lived.

    3. SR

      Quite possibly, yeah. Well, and they'll also have, you know, ha- they'll have hands that are, you know, a millimeter in size, so they can go inside and do all kinds of things that humans can't do. And I think we need to put serious effort into this question: What is a world where AI can do all forms of human work that you would want your children to live in? What does that world look like? Tell me the destination, so that we can develop a transition plan to get there. And I've asked AI researchers, economists, science fiction writers, futurists. No one has been able to describe that world. I'm not saying it's not possible, I'm just saying I've asked hundreds of people in multiple workshops. It does not, as far as I know, exist in science fiction. You know, it's notoriously difficult to write about a utopia. It's very hard to have a plot, right? Nothing bad happens (laughs) in, in utopia, so it's difficult to make a plot. So usually, you start out with a utopia, and then it all falls apart, and that's h- that's how you get, get a plot. You know, the, there's one series of novels people point to where humans and super-intelligent AI systems co-exist. It's called The Culture novels by Iain Banks. Highly recommended for those people who like science fiction. And, and there, absolutely, the AI systems are only concerned with furthering human interests. They find humans a bit boring and, but nonetheless, they, they are there to help. But the problem is, you know, in that world, (laughs) there's still nothing to do. To find purpose ... In fact, the, you know, the, the subgroup of humanity that has purpose is the subgroup whose job it is to expand the boundaries of our galactic civilization. Some cases, fighting wars against alien species and, and so on, right? So that's the sort of cutting edge, and that's 0.001% of the population. Everyone else is desperately trying to get into that group so they have some purpose in life.

  23. 45:46–48:30

    What if We Just Entertain Ourselves All Day

    1. SR

    2. SB

      When I speak to very successful billionaires privately, off-camera, off-microphone, about this, they say to me that they're investing really heavily in entertainment, things like football clubs, um, because people are gonna have so much free time that they're not gonna know what to do with it and they're gonna need things to spend it on. This is what I hear a lot. I've heard this three or four times. I've actually heard Sam Altman say a, a version of this-

    3. SR

      Yeah.

    4. SB

      ... um, about the amount of free time we're gonna have. I've obviously also had recently Elon talking about the age of abundance when he delivered his quarterly earnings just a couple of weeks ago, and he said that there will be, at some point, 10 billion humanoid robots. His pay packet, um, targets him to deliver one, one million of these human- humanoid robots a year that are enabled by AI by 2030. So if he, if he does that, he gets, I think as part of his package, he gets a trillion dollars-

    5. SR

      Yeah.

    6. SB

      ... in, in compensation.

    7. SR

      Yeah, so the age of abundance for Elon. It's not that it's absolutely impossible to have a worthwhile world of that, you know, with that premise, but I'm just waiting (laughs) for someone to describe it.

    8. SB

      Well, maybe, so let me try and describe it. Uh, we wake up in the morning. We go and watch some form of human-centric entertainment (laughs) or participate in some form of human-centric entertainment.

    9. SR

      Mm-hmm.

    10. SB

      We, we go to retreats and, with each other and sit around and talk about stuff.

    11. SR

      Mm-hmm.

    12. SB

      And maybe people still listen to podcasts. (laughs)

    13. SR

      (laughs)

    14. SB

      Because, because-

    15. SR

      I hope, I hope so, for your sake.

    16. SB

      Yeah. (laughs)

    17. SR

      Yeah. Um, uh, it, it feels a little bit like a cruise ship.

    18. SB

      (laughs)

    19. SR

      And, you know, it, and there are some cruises where, you know, it's smarty pants people and they have, you know, they have lectures in the evening about ancient civilizations and whatnot, and some are more, uh, more popular entertainment. And this is, in fact, if you've seen the film WALL-E, this is one picture of that future. In fact, in WALL-E, the human race are all living on cruise ships in space. They have no constructive role in their society, right? They're just there to consume entertainment. There's no particular purpose to education. Uh, you know, and they're depicted actually as huge, obese babies. They're actually wearing onesies to emphasize the fact that they have become enfeebled. And they become enfeebled because there's, there's no purpose in being able to do anything, at least in, in this conception. You know, WALL-E is not the future that we want.

  24. 48:30–56:31

    Why Do We Make Robots Look Like Humans?

    1. SR

    2. SB

      Do you think much about humanoid robots and how they're a protagonist in this story of AI?

    3. SR

      It's an interesting question, right? Why, why humanoid? And-... the, one of the reasons I think is because in all the science fiction movies, they're humanoid. So, that's what robots are supposed to be, right? Because they were in science fiction before they became a reality, right? So even Metropolis, which is a film from 1920, I think, the robots are humanoid, right? They're basically people covered in metal. You know, from a practical point of view, as we have discovered, (laughs) humanoid is a terrible design because they fall over. Um, and, uh, you know, you do want multi-fingered hands of some kind. It doesn't have to be a hand, but you want to have, you know, at least half a dozen appendages that can grasp and manipulate things. And you need something, you know, some kind of locomotion, and wheels are great, except they don't go up stairs and over curbs and things like that, so that's probably why we're gonna be stuck with legs. But a four-legged, two-armed robot would be much more practical.

    4. SB

      I guess the argument I've heard is because we've built a human world, so everything, this physical spaces we navigate, whether it's factories or our homes or the street or other sort of public spaces, are all designed for exactly this physical form. So if we are going to-

    5. SR

      To some extent, yeah, but I mean our dogs manage (laughs) perfectly well to navigate around our houses and streets and so on. So if you had a centaur, uh, it could also navigate, but it can h- you know, it can carry much greater loads 'cause it's quadruped, it's much more stable. If it needs to drive a car, it can fold up two of its legs and, and so on and so forth. So I think the arguments for why it has to be exactly humanoid are sort of post hoc justification. I think there's much more, "Well, that's what it's like in the movies and that's spooky and cool, so we need to have them be humanoid." I don't think it's a good engineering argument.

    6. SB

      I think that there's also probably an argument that w- we would be more accepting of them moving through our physical environments if they re- represented our form a bit more. Um, I also, I was thinking of a b- bloody baby gate, you know, those like kindergarten gates they get on stairs?

    7. SR

      Yeah.

    8. SB

      My dog can't open that.

    9. SR

      Mm-hmm.

    10. SB

      A humanoid robot could reach over the other side.

    11. SR

      Yeah, and so could a centaur robot, right? So in some sense, a centaur robot is, is a-

    12. SB

      There's something ghastly about the look of those, though.

    13. SR

      ... is a humanoid. Well-

    14. SB

      Do you know what I mean? Like f- a four-legged big monster sort of crawling through my house when I have guests over.

    15. SR

      Hmm.

    16. SB

      I'd much rather, "Hello."

    17. SR

      Your dog is a four- (laughs) your dog is a four-legged monster.

    18. SB

      I know, but he's cute.

    19. SR

      Uh, so I think actually I, I would argue the opposite, that, um, we want a distinct form because they are distinct entities. And the more humanoid, the worse it is in terms of confusing our subconscious psychological systems.

    20. SB

      So I'm arguing from the perspective of the people making them, as in if I was making the decision whether it to be some four-legged thing that I've, that I'm unfamiliar with, that I'm less likely to build a relationship with or allow to take care of, I don't know, might, might look after my children.

    21. SR

      Hmm.

    22. SB

      Obviously, I'm, listen, I'm not saying I would a- allow this to look after my children. But I'm saying from a, if I'm building the companies, I would-

    23. SR

      But the manufacturer would certainly want-

    24. SB

      Yeah, want one to be...

    25. SR

      Yeah, so I, I, that's an interesting question. I mean, there's also what's called the uncanny valley, which is a, a phrase from computer graphics. When they started to make characters in computer graphics tr- they tried to make them look more human, right? So if you, if you, for example, if you look at Toy Story, they're not very human looking, right? If you look at The Incredibles, they're not very human looking, and so we think of them as cartoon characters. If you try to make them more human, they actually become repulsive.

    26. SB

      Until they don't.

    27. SR

      Until they become very, you have to be very, very close to perfect in order not to be repulsive. So the, the uncanny valley is this ide- you know, like the, the gap between you are so perfectly human and not at all human, but in between, it's really awful. And, uh, and so they, there were a couple of movies that tried, like Polar Express was one, where they tried to have quite human looking characters, you know, being humans, not, not being superheroes or anything else, and it's repulsive to watch.

    28. SB

      I, when I watched that shareholder presentation the other day, Elon had these two humanoid robots dancing on stage, and I've seen lots of humanoid robot demonstrations over the years. You know, you've seen like the Boston Dynamics dog thing jumping around and whatever else.

    29. SR

      Yeah.

    30. SB

      But there was a moment where my brain, for the first time ever, genuinely thought that was a human in a suit.

  25. 56:31–59:56

    What Should Young People Be Doing Professionally?

    1. SR

    2. SB

      What advice would you give a young person at the start of their career, then, a- about what they should be aiming at professionally? 'Cause I've actually had an increasing number of young people say to me that they have huge uncertainty about whether the thing they're studying at will matter at all, a lawyer, uh, an accountant, and I don't know what to say to these people. I don't know what to say, 'cause I- I believe that the rate of improvement in A- in AI is gonna continue, and therefore imagining any rate of improvement, it gets to the point where, I'm not being funny, but all these white-collar jobs will be done by an A- an AI or an AI agent.

    3. SR

      Yeah. So there was a television series called Humans. In Humans, we have extremely capable humanoid robots doing everything, and at one point the parents are talking to their teenage daughter, who's very, very smart, and the parents are saying, "Oh, you know, maybe you should go into medicine." And the daughter says, you know, "Why would I bother? It'll take me seven years to qualify, and it takes a robot seven seconds to learn. So nothing I do matters."

    4. SB

      And is that how you feel about...

    5. SR

      So I think that's- that's a future that, uh, in fact, that is the future that we are moving towards. I don't think it's a future that everyone wants. That is what is being, uh, created for us right now. So in that future, assuming that, you know, e- even if we get halfway, right, in the sense that okay, perhaps not surgeons, perhaps not, you know, great violinists. There'll be pockets where perhaps humans will remain good at it.

    6. SB

      Where?

    7. SR

      The kinds of jobs where you hire people by the hundred will go away.

    8. SB

      Okay.

    9. SR

      Where people are, in some sense, exchangeable, that you- you- you just need lots of them, and, uh, you know, and when half of them quit, you just fill up those- those slots with more people. In some sense, those are jobs where we're using people as robots. And that's a sort of- that's a sort of strange conundrum here, right, that, you know, I imagine writing science fiction 10,000 years ago, right, when we were all hunter-gatherers. And I'm this little science-fiction author, and I'm describing this future where, you know, there are gonna be these giant windowless boxes, and you're going to go in. You know, you'll- you'll travel for miles, and you'll go into this windowless box, and you'll do the same thing 10,000 times for the whole day, and then you'll leave and travel for miles to go home.

    10. SB

      You're talking about this podcast.

    11. SR

      And then you're gonna go back and do it again. And you would do that every day of your life until you die.

    12. SB

      The Office.

    13. SR

      And people would say, "Ah, you're nuts," right? That there's no way that we humans are ever gonna have a future like that, 'cause that's awful, right? But that's exactly the future that we ended up with, with- with office buildings and factories where many of us go and do the same thing thousands of times a day, and we do it thousands of days in a row, uh, and then we die. And we need to figure out what is the next phase going to be like? And in particular how in that world do we have the incentives to become fully human, which I think means at least the level of education that people have now, and probably more. Because I think to live a really rich life, you need a better understanding of yourself, of the world, uh, than most people get in their current

  26. 59:56–1:03:21

    What Is It to Be Human?

    1. SR

      educations.

    2. SB

      What- what is it to be human? To, it's to reproduce, to pursue stuff, to go in the pursuit of difficult things. You know, we used to hunt on the...

    3. SR

      Mm-hmm. To attain goals, right? It's always... If I wanted to climb Everest, the last thing I would want is someone to pick me up in a helicopter and stick me on the top.

    4. SB

      So we'll- we'll voluntarily pursue hard things. So although I could get the robot to build me a ranch in- on this plot of land, I will choose to do it because the pursuit itself is rewarding.

    5. SR

      Yes.

    6. SB

      We're kind of seeing that anyway, aren't we? Don't you think we're seeing a bit of that in society, where life got so comfortable that now people are, like, obsessed with running marathons and doing these crazy endurance...

    7. SR

      And- and- and learning to cook complicated things when they could just, you know, have them delivered. Um, yeah, no, I think there's- there's real value in... the ability to do things and the doing of those things. And I think, you know, the, the obvious danger is the WALL-E world where everyone just consumes entertainment, uh, which doesn't require much education and doesn't lead to a rich, satisfying life, I think, in the long run.

    8. SB

      A lot of people will choose that world.

    9. SR

      I think some of... Yeah, some people may. There's also, I mean, you know, whether you're consuming entertainment or whether you're doing something, you know, cooking or painting, whatever, because it's fun and interesting to do, what's missing from that, right? All of that is purely selfish. I think one of the reasons we work is because we feel valued. We feel like we're benefiting other people. And I think some of... I remember having this conversation with, um, a lady in England who helps to run the hospice movement. And the people who work in the hospices where, you know, the, the patients are literally there to die, are largely volunteers, so they're not doing it to get paid. But they find it incredibly rewarding to be able to spend time with people who are in their last weeks or months, to give them company and happiness. So, I actually think that interpersonal roles will be much, much more important in future. So, if I was going to advise my kids, not that they would ever listen, but if I (laughs) ... If my kids would listen and I... And, and wanted to know what I thought would be, you know, valued careers in future, I think it would be these interpersonal roles based on an understanding of human needs, psychology. There are some of those roles right now. So obviously, you know, therapists and psychiatrists and so on. But that, that's a very much, and it's sort of asymmetric role, right, where one person is suffering and the other person is trying to alleviate the suffering. You know, and then there are things like, they call them executive coaches or life coaches, right? That's some... A less asymmetric role where someone is trying to, uh, help another person live a better life, whether it's a better life in their work role or, or just, uh, how they live their life in general. And so I could imagine that those kinds of roles will expand

  27. 1:03:21–1:05:21

    The Rise of Individualism

    1. SR

      dramatically.

    2. SB

      There's this interesting paradox that exists when life becomes easier, um, which shows that abundance consistently pushes society, societies towards more individualism, because once survival pressures disappear, people prioritize things differently. They prioritize freedom, comfort, self-expression over things like sacrifice or, um, family formation. And we're seeing, I think, in the West already a decline in people having kids because there's more material abundance. Fewer kids, people are getting married and committing to each other and having relationships later and more infrequently.

    3. SR

      Mm-hmm.

    4. SB

      Because generally, once we have more abundance, we don't wanna complicate our lives. Um, and at the same time, as you said earlier, that abundance breeds a, an inability to find meaning, a sort of shallowness to everything. This is one of the things I think a lot about, and I'm, I'm in the process now of writing a book about it, which is this idea that individualism was act-... Is a bit of a lie. Like, when I say individualism and freedom, I mean like the narrative at the moment amongst my generation is you, like, be your own boss and stand on your own two feet, and we're having less kids and we're not getting married, and it's all about me, me, me, me, me, me, me.

    5. SR

      Yeah. That last part is where it goes wrong. Um-

    6. SB

      Yeah. And it's like almost a narcissistic society where-

    7. SR

      Yeah.

    8. SB

      ... me, me, me, me, me, my self-interest first. And when you look at mental health outcomes and loneliness and all these kinds of things, it's going in a horrific direction, but at the same time, we're freer than ever.

    9. SR

      (laughs)

    10. SB

      It seems like that... You know, it seems like there's a... We should... There's maybe another story about dependency, which is not sexy. Like, depend on each other.

    11. SR

      Oh, I, I, I agree. I mean, I think, you know, happiness is not available from consumption or even lifestyle, right? I think happiness is... Arises from giving. It can be through the work that you do. You can see that other people benefit from that. Or it could be in direct interpersonal relationships.

  28. 1:05:211:06:26

    Ads

    1. SR

    2. SB

      There is an invisible tax on salespeople that no one really talks about enough. The mental load of remembering everything, like meeting notes, timelines, and everything in between, until we started using our sponsor's product called Pipedrive, one of the best CRM tools for small and medium-sized business owners. The idea here was that it might alleviate some of the unnecessary cognitive overload that my team was carrying so that they could spend less time in the weeds of admin and more time with clients, in-person meetings, and building relationships. Pipedrive has enabled this to happen. It's such a simple but effective CRM that automates the tedious, repetitive, and time-consuming parts of the sales process. And now, our team can nurture those leads and still have bandwidth to focus on the higher priority tasks that actually get the deal over the line. Over 100,000 companies across 170 countries already use Pipedrive to grow their business, and I've been using it for almost a decade now. Try it free for 30 days. No credit card needed, no payment needed. Just use my link, pipedrive.com/ceo, to get started today. That's pipedrive.com/ceo.

  29. 1:06:261:08:28

    Universal Basic Income

    1. SB

      Where does the rewards of this AI race... Where do- where does it accrue to? I think a lot about this in terms of like univase- universal basic income. If you have these five, six, seven, ten massive AI companies that are gonna win the qu- $15 quadrillion prize-

    2. SR

      Mm-hmm.

    3. SB

... and they're gonna automate all of the professional pursuits that we, we currently have, all of our jobs are gonna go away... Who, who gets all the money, and how do, how do we get some of it back?

    4. SR

      (laughs) Money actually doesn't matter, right? What, what matters is the production of goods and services, uh, and then how those are distributed. And so, so money acts as a way to facilitate the distribution and, um, exchange of those goods and services. If all production is concentrated, um, in the hands of a, of a few companies, right, that... Sure, they will lease some of their robots to us. You know, we, we want a school in our village, they lease the robots to us, the robots build the school and go away. We have to pay a certain amount of, of money for that. But where do we get the money, right? If we are not producing anything, then, uh, we don't have any money unless there's some redistribution mechanism. And as you mentioned, so universal basic income is... it seems to me an admission of failure. Because what it says is, "Okay, we're just gonna give everyone the money, and then they can use the money to pay the AI company to lease the robots to build the school. And then we'll have a school, and that's good." Um, but what... it's an admission of failure because it says we can't work out a system in which people have any worth or any economic role, right? So 99% of the global population is, from an economic point of view, useless.

  30. 1:08:281:15:01

    Would You Press a Button to Stop AI Forever?

    1. SB

      Can I ask you a question? If you had a button in front of you, and pressing that button would stop all progress in artificial intelligence right now and forever, would you press it?

    2. SR

      That's a very interesting question. Um... if it's either or, either I do it now or it's too late and we careen into some uncontrollable future, perhaps, yeah. Because I, I'm not super optimistic that we're heading in the right direction at all.

    3. SB

      So I put that button in front of you now, it stops all AI progress, shuts down all the AI companies immediately globally, and none of them can reopen. You press it?

    4. SR

      Well, here's what, here's what I think should happen. So obviously, you know, I've been doing AI for 50 years, um, and the original motivations, which is that AI can be a power tool for humanity, enabling us to do more and better things than we can unaided. I think that's still valid. The problem is, the kinds of AI systems that we're building are not tools. They are replacements. In fact, you can see this very clearly because we create them literally as the closest replicas we can make of human beings. The technique for creating them is called imitation learning. So we observe human verbal behavior, writing or speaking, and we make a system that imitates that as well as possible. So what we are making is imitation humans, at least in the verbal sphere. And so of course they're going to replace us. They're not tools.

    5. SB

      So you would press the button?

    6. SR

      So I say I think there is another course, which is use and develop AI as tools, tools for science, tools for economic organization and so on, um, but not as replacements for human beings.

    7. SB

      What I like about this question is it forces you to go into the prob- into probabilities.

    8. SR

      Yeah, so, and, and that's, that's why I'm reluctant, because I don't, I don't agree with the, you know, what's your probability of doom?

    9. SB

      Mm-hmm.

    10. SR

      Right, your so-called P of doom, uh, number. Because that makes sense if you're an alien, you know, you're in, you're in a bar with some other aliens and you're looking down at the earth and you're taking bets on, you know, are these humans gonna make a mess of things and go extinct because they develop AI? So it's fine for those aliens to bet on, on that, but if you're a human, then you're not just betting, you're actually acting.

    11. SB

      There, there's an element to this though which I guess where probabilities do come back in, which is you also have to weigh, when I give you such a binary decision, um, the probability of us pursuing the more nuanced safe approach into that equation. So you're, you're... The, the maths in my head is, okay, you've got all the upsides here, and then you've got potential downsides, and then there's a probability of, do I think we're actually gonna course correct based on everything I know, based on the incentive structure of human beings and, and countries? And then if there's... But then you could go, if there's even a 1% chance of extinction, is it even worth all these upsides?

    12. SR

      Yeah, and I, I would argue no. I mean, maybe, maybe what we would say (laughs) is if, if we said, "Okay, it's going to stop the progress for 50 years."

    13. SB

      You'd press it.

    14. SR

      And during those 50 years, we can work on how do we do AI in a way that's guaranteed to be safe and beneficial? How do we organize our societies to flourish, uh, in conjunction with extremely capable AI systems? So we haven't answered either of those questions, and I don't think we want anything resembling AGI until we have completely solid answers to both of those questions. So if there was a button where I could say, "All right, we're gonna pause progress for 50 years," yes, I would do it.

    15. SB

      But if that button was in front of you, you're gonna make a decision either way. Either you don't press it or you press it.

    16. SR

I don't know. If... Yeah, so if that, if that button is there, stop it for 50 years, I would say yes. Stop it forever? Not yet. I think, I think there's still a decent chance that we can pull out of this, uh, nose dive, so to speak, that we're, we're currently in. Ask me again in a year, I might, I might say, "Okay, we do need to press the button."

    17. SB

      What if, what if in a scenario where you never get to reverse that decision, you never get to make that decision again? So if in that s- scenario that I've laid out, this hypothetical, you either press it now or it never gets pressed. So there is no opportunity a year from now.

    18. SR

      Yeah. As you can tell, I'm- (laughs)

    19. SB

      Yeah. (laughs)

    20. SR

      ... sort of on, on the fence a bit about, about this one. Um, yeah, I think I'd probably press it. Yeah. So-

    21. SB

      What's your reasoning?

    22. SR

Uh, just thinking about the power dynamics of, um, w- of what's happening now, how difficult it would be to get the US in particular to, to regulate in favor of safety. So I think, you know, what's clear from talking to the companies is they are not going to develop anything resembling safe AGI unless they're forced to by the government. And at the moment, the U- US government in particular, which regulates most of the leading companies in AI, is not only refusing to regulate, but even trying to prevent the states from regulating. And they're doing that at the behest of, uh, a faction within Silicon Valley, uh, called the accelerationists, who believe that the faster we get to AGI, the better. And when I say behest, I mean, also they paid them a large amount of money.

  31. 1:15:011:18:27

    But Won't China Win the AI Race if We Stop?

    1. SR

    2. SB

      Jensen Huang, the, the CEO of NVIDIA said... Who's, for anyone that doesn't know, the guy making all the chips that are powering AI, said China is going to win the AI race, arguing it is just a nanosecond behind the United States. China have produced 24,000 AI papers compared to just 6,000 from the US, more than the combined output of the US, the UK, and the EU. China is anticipated to quickly roll out their new technologies, both domestically and developing new technologies for other developing countries. So the accelerators or the accelerate... I think you call them the accelerants?

    3. SR

      Accel- accelerationists.

    4. SB

      The accelerationists?

    5. SR

      Yeah.

    6. SB

      I mean, they would say, "Well, if we don't, then China will. So we have to, we have to go fast."

    7. SR

      It's another version of the, the race that the companies are in with each other, right? That we, you know, we know that this race is heading off a cliff, but we can't stop, so we're all just gonna go off this cliff. And obviously that's nuts, right? I mean, we're all looking at each other saying, "Yeah, there's a cliff over there. Running as fast as we can towards this cliff." We're looking at each other saying, "Why aren't we stopping?" So the narrative in Washington, which I think Jensen Huang is either reflecting or, or perhaps, um, promoting, uh, is that, you know, China has, you know, is completely unregulated and, uh, you know, America will only slow itself down, uh, if it regulates AI- AI in any way. So this is a completely false narrative, because China's AI regulations are actually quite strict, even compared to, um, the European Union. And China's government has explicitly acknowledged, uh, the need, and their regulations are very clear, you can't build AI systems that could escape human control. And not only that, I don't think they view the race in the same way as, "Okay, we- we just need to be the first to create AGI." I think they're more interested in figuring out how to disseminate AI as a set of tools within their economy, to make their economy more productive, and, and so on. So that's, that's their version of the race.

    8. SB

      But of course, they still wanna build the weapons for adversaries, right? To, so that they can take down, I don't know, Taiwan, if they want to.

    9. SR

      So weapons are a separate matter.

    10. SB

      Hmm.

    11. SR

      And I'm happy to talk about weapons, but just in terms of-

    12. SB

      Control.

    13. SR

      ... uh, control, economic domination, um, they, they don't view putting all your eggs in the AGI basket as the right strategy. So they want to use AI, you know, even in its present form, to make their economy much more efficient and productive, and also, you know, to give people new capabilities and, and better quality of life. And, and I think the US could do that as well, and, um, typically, Western countries don't have as much of, uh, central government control over what companies do. And some companies are investing in AI to make their operations more efficient, uh, and some are not, and we'll see how

  32. 1:18:271:18:53

    Trump's Approach to AI

    1. SR

      that plays out.

    2. SB

      What do you think of Trump's approach to AI?

    3. SR

      So Trump's approach is, you know, it's, it's echoing what Jensen Huang is saying, that the US has to b- be the one to create AGI. And very explicitly, the administration's policy is to, uh, dominate the world. That's the word they use, dominate. I- I'm not sure that other countries like the idea that, um, they will be dominated by American

  33. 1:18:531:20:49

    What's Causing the Loss in Middle-Class Jobs

    1. SR

      AI.

    2. SB

      But is that an accurate description of what will happen if the US build AGI technology before, say, the UK, where I'm originally from, and where you're originally from? What does the... Uh, this is something I think about a lot, 'cause we're going through this budget process in the UK at the moment, where we're figuring out how are we gonna spend our money and how we're gonna tax people. And also, we've got this new election cycle, it's a- a- a- approaching quickly, where people are talking about immigration issues and this issue and that issue and the other issue. What I don't hear anyone talking about is AI.... and the fucking humanoid robots that are gonna take everything. We're very concerned with the brown people crossing the channel, but the humanoid robots that are gonna be super intelligent and really take, uh, causing economic disrupt- disruption, no one talks about that. The political leaders don't talk about it, it doesn't win races, I don't see it on billboards.

    3. SR

      Yeah. And it's- it- it's interesting because in fact... I mean, and so there's- there's two forces that have been hollowing out the middle classes in Western countries. One of them is globalization where lots and lots of work, not just manufacturing, but white-collar work, gets outsourced to low-income countries. Uh, but the other is automation. And, you know, some of that is factories. So, um, the amount of employment in manufacturing continues to drop even as the amount of output from manufacturing in the US and in the UK continues to increase. So we talk about, oh, you know, our- our manufacturing industry has been destroyed. It hasn't. It's producing more than ever just with, you know, a quarter as many people. So it's manufacturing employment that's been destroyed by automation and robotics and so on. And then, you know, computerization has eliminated whole layers of white-collar jobs. And so those two- those two forms of automation have probably done more to hollow out middle class, uh, employment and standard

  34. 1:20:491:23:18

    What Will Happen if the UK Doesn't Join the AI Race?

    1. SR

      of life.

    2. SB

      If the UK doesn't participate in this new ac- technological wave, that seems to be, that seems to have... You know, it's gonna take a lot of jobs. Cars are gonna drive themselves. Waymo just announced that they're coming to London-

    3. SR

      Mm-hmm.

    4. SB

... which is the driverless cars. And driving is the biggest occupation in the world, for example. So you've got immediate disruption there, and where does the money accrue to? Will it accrue to whoever owns Waymo, which is, what, Google and Silicon Valley companies?

    5. SR

      Alphabet owns Waymo 100%, I think.

    6. SB

      Yeah.

    7. SR

      So yes, I mean, this is... So I was in India a few months ago talking to the government ministers because they are holding the next global AI summit in February. And- and their view going in was, you know, "AI is great, we're gonna use it to, you know, turbocharge the growth of our Indian economy." When for example, you have AGI, you have AGI-controlled robots that can do all the manufacturing, that can do agriculture, that can do all the white-collar work. And goods and services that might have been produced by Indians will instead be produced by American-controlled AGI systems at much lower prices. You know, a consumer given a choice between an expensive product produced by Indians or a cheap product produced by American robots will probably choose the cheap product produced by American robots. And so potentially every country in the world, with the possible exception of North Korea, will become a kind of a client state of American AI companies.

    8. SB

      A client state of American AI companies is exactly what I'm concerned about for the UK economy and really any economy outside of the United States. I guess one could also say China but... 'cause th- those are the two nations that are taking AI most seriously.

    9. SR

      Mm-hmm.

    10. SB

      And I- I- I don't know what our economy becomes, I can't figure out... can't figure out what our, what the British economy becomes in such a world. Is it tourism? I don't know. Like you come here to- to- to look at the Buckingham Palace? I- I-

    11. SR

      You can think about countries but I mean even for the United States, it's the same problem.

    12. SB

      At least they'll be able to-

    13. SR

      Right, because-

    14. SB

      ... tax the hell out of these-

    15. SR

      ... you know, so some small fraction of the population will be running maybe the AI companies. But increasingly, even those companies will be replacing their human employees with AI systems.

    16. SB

      Mm-hmm.

  35. 1:23:181:28:47

    Amazon Replacing Their Workers

    1. SB

    2. SR

So Amazon, for example, which you know, sells a lot of computing services to AI companies, is using AI to replace layers of management, is planning to use robots to replace all of its warehouse workers and so on. So- so even the- the giant AI companies will have few human employees. In the long run, I mean, it... Think of the situation, you know, pity the poor CEO whose board says, "Well, un- you know, unless you turn over your decision-making power to the AI system, um, we're gonna have to fire you because all our competitors are using, you know, an AI-powered CEO and they're doing much better."

    3. SB

Amazon plans to replace 600,000 workers with robots, according to a memo that just leaked, which has been widely talked about. And the CEO, Andy Jassy, told employees that the company expects its corporate workforce to shrink in the coming years because of AI and AI agents, and they've publicly gone live with saying that they're going to cut 14,000 corporate jobs in the near term as part of their refocus on AI investment and efficiency. It's interesting because I was reading about, um, the sort of different quotes from different AI leaders about the speed in which this- this stuff is gonna happen, and what you see in the quotes is Demis, who's the CEO of DeepMind-

    4. SR

      Mm-hmm.

    5. SB

      ... saying things like, "It'll be more than 10 times bigger than the Industrial Revolution but also it will happen maybe 10 times faster," and they speak about this turbulence that we're gonna experience as this shift takes place.

    6. SR

      That's, um, maybe a euphemism (laughs) for... Uh, and I think the, you know, governments are now... You know, they- they've kind of gone from saying, "Oh, don't worry, you know, we'll just retrain everyone as data scientists." And-

    7. SB

(laughs)

    8. SR

      ... like, well, yeah that's- that's ridiculous, right? The world doesn't need four billion data scientists.

    9. SB

And we're not all capable of becoming that, by the way. (laughs)

    10. SR

      Uh, yeah or have any interest in- in- (laughs) in doing that.

    11. SB

Well I can't- I can't even do it if I wanted to. Like I tried to sit in biology class and I fell asleep so- (laughs)

    12. SR

(laughs)

    13. SB

      I couldn't. That was the end of my career as a surgeon.

    14. SR

      Fair enough.

    15. SB

(laughs)

    16. SR

Um, but yeah, now suddenly they're staring... you know, 80% unemployment in the face and wondering, "How, how on earth is our society going to hold together?"

    17. SB

      We'll deal with it when we get there.

    18. SR

      Yeah, unfortunately, um, unless we plan ahead, we're gonna suffer the consequences, right? We can't... It was bad enough in the Industrial Revolution, which unfolded over seven or eight decades, but there was massive disruption and, uh, misery caused by that. We don't have a model for a functioning society where almost everyone does nothing, at least nothing of economic value. Now, it's not impossible that there could be such a s- a functioning society, but we don't know what it looks like. And, you know, when you think about our education system, which would probably have to look very different, and how long it takes to change that. I mean, I'm always, uh, reminding people about, uh, how long it took Oxford to decide that geography was a proper subject of study. It took them 125 years from the first proposal that there should be a geography degree until it was finally approved. So, we don't have very long to completely revamp a system that we know takes decades and decades to reform. A- And we don't know how to reform it, because we don't know what we want the world to look like.

    19. SB

      Is this one of your reasons why you're appalled at the moment? Because when you have these conversations with people, people just don't have answers, yet they're plowing ahead at rapid speed.

    20. SR

      I would say it's not necessarily the job of the AI companies. So, I'm appalled by the AI companies 'cause they don't have an answer for how they're gonna control the systems that they're proposing to build. I do find it disappointing (laughs) that, uh, governments don't seem to be grappling with this issue. I think there are a few. I think, for example, the Singapore government seems to be quite farsighted, and they've, they've thought this through. You know, it's a small country, they've figured out, "Okay, this, this will be our role, uh, going forward, and we think we can find, you know, some, some purpose for our people in this, in this new world." But for, I think countries with large populations, um, they need to figure out answers to these questions pretty fast. It takes a long time to actually implement those answers, uh, in the form of new kinds of education, new professions, new qualifications, uh, new economic structures. I mean, it's, it's, it's possible. I mean, when you look at therapists, for example, they're almost all self-employed. So, what happens when, you know, 80% of the population transitions from regular employment into, into self-employment? What does that, what does that do to the economics of, of, uh, government finances and so on? So, there's just lots of questions. And how do you... You know, if that's the future, you know, why are we training people to, to fit into nine-to-five office jobs which won't exist

  36. 1:28:471:30:41

    Ads

    1. SR

      at all?

    2. SB

Last month, I told you about a challenge that I'd set our internal FlightX team. FlightX team is our innovation team internally here. I tasked them with seeing how much time they could unlock for the company by creating something that would help us filter new AI tools to see which ones were worth pursuing. And I thought that our sponsor, Fiverr Pro, might have the talent on their platform to help us build this quickly, so I talked to my director of innovation, Isaac, and for the last month, my team, FlightX, and a vetted AI specialist from Fiverr Pro have been working together on this project. And with the help of my team, we've been able to create a brand-new tool which automatically scans, scores, and prioritizes different emerging AI tools for us. Its impact has been huge, and within a couple of weeks, this tool has already been saving us hours trialing and testing new AI systems. Instead of sifting through lots of noise, my team, FlightX, has been able to focus on developing even more AI tools, ones that really move the needle in our business, thanks to the talent on Fiverr Pro. So, if you've got a complex problem and you need help solving it, make sure you check out Fiverr Pro at fiverr.com/diary. So many of us are pursuing passive forms of income and building side businesses in order to help us cover our bills, and that opportunity is here with our sponsor, Stan, a business that I co-own. It is the platform that can help you take full advantage of your own financial situation. Stan enables you to work for yourself. It makes selling digital products, courses, memberships, and more simpler, more scalable, and easier to do. You can turn your ideas into income and get the support to grow whatever you're building. And we're about to launch Dare to Dream. It's for those who are ready to make the shift from thinking to building, from planning to actually doing the thing. 
It's about seeing that dream in your head and knowing exactly what it takes to bring it to life. If you're ready to transform your life, visit dare-to-dream.stan.store.

  37. 1:30:411:37:48

    Experts Agree on Extinction Risk

    1. SB

      You've made many attempts to raise awareness and to call for a heightened consciousness about the future of AI. Um, in October, over 850 experts, including yourself and other leaders like Richard Branson, who I've had on this show, and Geoffrey Hinton, who I've had on this show, signed a statement to ban AI superintelligence, as you guys raised concerns of potential human extinction.

Episode duration: 2:04:05

Transcript of episode P7Y-fynYsgE
