The Diary of a CEO – CEO of Microsoft AI: AI Is Becoming More Dangerous And Threatening! - Mustafa Suleyman
EVERY SPOKEN WORD
150 min read · 30,079 words
- 0:00 – 2:11
Intro
- Steven Bartlett
Are you uncomfortable talking about this?
- Mustafa Suleyman
Yeah. I mean, it's pretty wild, right? Mustafa Suleyman-
- Steven Bartlett
The billionaire founder of Google's AI technology. He's played a key role in the development of AI from its first critical steps.
- Mustafa Suleyman
2020, I moved to work on Google's chat bot. It was the ultimate technology. We can use them to turbocharge our knowledge unlike anything else.
- Steven Bartlett
Why didn't they release it?
- Mustafa Suleyman
We were nervous. We were nervous. Every organization is gonna race to get their hands on intelligence, and that's gonna be incredibly disruptive. This technology can be used to identify cancerous tumors as it can to identify a target on the battlefield. A tiny group of people who wish to cause harm are gonna have access to tools that can instantly destabilize our world. That's the challenge, how to stop something that can cause harm or potentially kill. That's where we need containment.
- Steven Bartlett
Do you think that it is containable?
- Mustafa Suleyman
It has to be possible.
- Steven Bartlett
Why?
- Mustafa Suleyman
It must be possible.
- Steven Bartlett
Why must it be?
- Mustafa Suleyman
Because otherwise, it contains us.
- Steven Bartlett
Yet you chose to build a company in this space. Why did you do that?
- Mustafa Suleyman
Because I want to design an AI that's on your side. I honestly think that if we succeed, everything is a lot cheaper. It's gonna power new forms of transportation, reduce the cost of healthcare.
- Steven Bartlett
But what if we fail?
- Mustafa Suleyman
The really painful answer to that question is that...
- Steven Bartlett
Do you ever get sad about it?
- Mustafa Suleyman
Yeah. It's intense.
- Steven Bartlett
I think this is fascinating. I looked at the back end of our YouTube channel, and it says that since this channel started, 69.9% of you that watch it frequently haven't yet hit the subscribe button. So, I have a favor to ask you. If you've ever watched this channel and enjoyed the content, if you're enjoying this episode right now, please, could I ask a small favor? Please hit the subscribe button. Helps this channel more than I can explain, and I promise, if you do that, to return the favor, we will make this show better, and better, and better, and better, and better. That's a promise I'm willing to make if you hit the subscribe button. Do we have a deal? Everything
- 2:11 – 9:17
How do you feel emotionally about what's going on with AI?
- Steven Bartlett
that's going on with artificial intelligence now, and, um, this new wave and all these terms like AGI and I saw another term in your, your, your book called ACI, the first time I'd heard that term. How do you feel about it emotionally? If you had to encapsulate how you feel emotionally about what's going on in this moment, how would you d- what words would you use?
- Mustafa Suleyman
I would say in the past, it would've been petrified. And I think that over time, as you really think through the consequences and the pros and cons and the trajectory that we're on, you adapt and you understand that actually there is something incredibly inevitable about this trajectory, and that we have to wrap our arms around it and guide it and control it as a collective species. As a, as humanity. And I think the more you realize how much influence we collectively can have over this outcome, the more empowering it is. Because on the face of it, this is really gonna be the tool that helps us tackle all the challenges that we're facing as a species, right? We need to fix water desalination. We need to grow food 100X cheaper than we currently do. We need renewable energy to be, you know, ubiquitous and everywhere in our lives. We need to adapt to climate change. Everywhere you look, in the next 50 years, we have to do more with less. And there are very, very few proposals, let alone practical solutions, for how we get there. Training machines to help us as aides, scientific research partners, inventors, creators, is absolutely essential. And so the upside is phenomenal. It's enormous. But AI isn't just a thing. It's not an inevitable whole. Its form isn't inevitable, right? Its form, the exact way that it manifests and appears in our everyday lives, and the way that it's governed and who it's owned by and how it's trained, that is a question that is up to us collectively as a species to figure out over the next decade. Because if we don't embrace that challenge, then it happens to us. And that's really what I'm, I have been wrestling with for 15 years of my career, is how to intervene in a way that this really does benefit everybody, and those benefits far, far outweigh the potential risks.
- Steven Bartlett
At what stage were you petrified?
- Mustafa Suleyman
So I founded DeepMind in 2010. And, you know, over the course of the first few years, our progress was fairly modest. But quite quickly in sort of 2013, as the deep learning revolution began to take off, I could see glimmers of very early versions of AIs learning to do really clever things. So for example, one of our big initial achievements was to teach an AI to play the Atari games. So remember Space Invaders and, and Pong where you bat a ball from left to right? And we trained this initial AI to purely look at the raw pixels screen by screen, flickering or moving in front of the AI, and then control the actions up, down, left, right, shoot or not. And it got so good at learning to play this simple game, simply through attaching a value between the reward, like it was, it was getting score, and taking an action-... that it learned some really clever strategies, uh, to play the game really well, that us games players and humans hadn't really even noticed. At least people in the office hadn't noticed it. Some professionals did. Um, and that was amazing to me because I was like, wow, this simple system that learns through a set of stimuli plus a reward to take some actions can actually discover many strategies, clever tricks to play the game well, that us humans... hadn't occurred to us, right? And that, to me, is both thrilling because it presents the opportunity to invent new knowledge and advance our civilization, but of course, in the same measure, is also petrifying.
- Steven Bartlett
Mm-hmm. Was there a particular moment when you were at, you were at DeepMind where you go, where y- you had that k- kind of eureka moment, like a day, when something happened and, and it caused that, that epiphany, I guess? Was it-
- Mustafa Suleyman
Yeah. It, it, it was actually a moment even before 2013 where I remember standing in the office and watching a very early prototype of one of these image recognition, image generation models that ha- um, was trained to generate new handwritten black and white digits. So imagine zero to one, two, three, four, five, six, seven, eight, nine all in a different style of handwriting on a tiny grid of, like, 300 pixels by 300 pixels in black and white. And we were trying to train the AI to generate a new version of one of those digits, a number seven in a new handwriting. Sounds so simplistic today given the incredible photorealistic images that are being generated, right? Um, and I just remember so clearly it, it took sort of 10 or 15 seconds and it just resolved. It... The, the number appeared. It went from complete black to, like, slowly gray and then suddenly these, like, white pixels appeared out of the, the black darkness and it revealed a number seven. And that sounds so simplistic in hindsight, but it was amazing. I was like, wow, the model kinda understands the representation of a seven well enough to generate a new example of a number seven, an image of a number seven. You know, and you roll forward 10 years and our predictions were correct. In fact, it was quite predictable, in hindsight, the trajectory that we were on. More compute plus vast amounts of data has enabled us within a decade to go from predicting black and white digits, generating new versions of those images, to now generating unbelievable photorealistic not just images but videos, novel videos with a simple natural language instruction or a prompt.
- Steven Bartlett
What has
- 9:17 – 12:51
What's surprised you most about the last decade?
- Steven Bartlett
surprised you? You s- you've referred to that as predictable, but what has surprised you about what's happened over the last decade?
- Mustafa Suleyman
So I think what was predictable to me back then was the generation of images and of audio, um, because the structure of an image is locally contained. So pixels that are near one another create straight lines and edges and corners, and then eventually they create eyebrows and noses and eyes and faces and entire scenes. And I could... Just intuitively, in a very simplistic way, I could get my head around the fact that, okay, while we're predicting these number sevens, you can imagine how you then can expand that out to entire images, maybe even to videos, maybe, you know, to audio too. You know, what I said, you know, a couple sen- seconds ago is connected in phoneme space in the spectrogram. But what was much more surprising to me was that those same methods for generation applied in the space of language. You know, language seems like such a different abstract space of ideas. When I say, like, the cat sat on the... most people would probably predict mat, right? But it could be table, car, chair, tree. It could be mountain, cloud. I mean, there's g- a gazillion possible next word predictions. And so the space is so much larger, the ideas are so much more abstract, I- I just couldn't wrap my intuition around the idea that we would be able to create the incredible large language models that you see today.
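That "the cat sat on the ..." intuition can be made concrete with a toy next-word predictor (my own illustration, nothing like a production LLM): count which words follow each context word in a tiny corpus, then rank them. Even at this scale, "the" has several plausible continuations, which is exactly the branching problem being described.

```python
from collections import Counter, defaultdict

# Tiny corpus, invented purely for illustration.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the chair . "
    "the cat slept on the mat ."
).split()

# Count next-word frequencies for each single-word context (a bigram model).
# Real LLMs condition on far longer contexts with learned weights, not raw counts.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# "the" is followed by: cat x2, mat x2, dog x1, chair x1 -> many live options.
print(following["the"].most_common())
print(following["the"].most_common(1)[0][0])  # the single most likely next word
```

Every extra context word multiplies the number of distinct histories to track, which is why language prediction looked so much harder than images before large models proved otherwise.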
- Steven Bartlett
Your ChatGPTs.
- Mustafa Suleyman
ChatGPT.
- Steven Bartlett
Google Bard.
- Mustafa Suleyman
Google's Bard. Inflection, my new company, has an AI called Pi, PI.AI, which stands for Personal Intelligence, and it's as good as ChatGPT but much more emotional, empathetic, and kind. So it's just super surprising to me that just growing the size of these large language models, as we have done, by 10X every single year for the last 10 years, we've been able to produce this, and that, that, that's just an amazingly large number. If you just kind of pause for a moment to grapple with the numbers here: in 2013, when we trained the Atari AI that I mentioned to you at DeepMind, that used two petaflops of computation. So peta, P-E-T-A, stands for a million billion calculations. A flop is a calculation, so two million billion, right? Which is already an insane number of calculations.
- Steven Bartlett
Yeah. Lost me at two.
- Mustafa Suleyman
It's totally crazy.
- Steven Bartlett
(laughs)
- Mustafa Suleyman
Yeah. Just two of these units that are already really large. And every year since then, we've 10Xed the number of calculations that can be done, such that today, the biggest language model that we train at Inflection uses 10 billion petaflops. So, 10 billion million billion calculations. I mean, it's just an unfathomably large number. And what we've really observed is that scaling these models by 10X every single year produces this magical experience of talking to an AI that feels like you're talking to a human that is super knowledgeable and super smart.
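As a sanity check on those figures (my own back-of-envelope arithmetic, not part of the conversation): starting from 2 petaflops in 2013 and multiplying by 10 each year lands in the same ballpark as the figure quoted for today.

```python
# Back-of-envelope check of the compute figures quoted above.
# Assumptions (mine): 2 petaflops of training compute in 2013,
# growing 10x per year through 2023.
PETA = 10**15                 # "peta" = a million billion

start_flops = 2 * PETA        # the 2013 Atari run: two petaflops
growth_per_year = 10
years = 2023 - 2013

total_flops = start_flops * growth_per_year**years

print(total_flops // PETA)    # ~20 billion petaflops
```

That comes out at about 20 billion petaflops, the same order of magnitude as the "10 billion petaflops" quoted, so the 10X-per-year framing and the headline number hang together.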
- Steven Bartlett
There's
- 12:51 – 16:04
I'm scared of this coming wave.
- Steven Bartlett
so much that's happened in public conversation around AI, um, and there's so many questions that I have. I've, I've been speaking to a few people about artificial intelligence, trying to understand it, and I'm, I think where I am right now, is I feel quite scared. Um, but when I get scared, I don't get, it's not the type of scared that makes me anxious. It's not like an emotional scared. It's a very logical scared. It's my very logical brain hasn't been able to figure out how the inevitable outcome that I've arrived at, which is that humans become the less dominant species on this planet, um, how that is to be avoided in any way. The first chapter of your book, The Coming Wave, is, is, is, is titled appropriately to how I feel, Containment is Not Possible. You, you say in that chapter, "The widespread emotional reaction I, I was observing is something I've come to call the pessimism aversion trap."
- Mustafa Suleyman
Correct.
- Steven Bartlett
What is the pessimism aversion trap?
- Mustafa Suleyman
Well, so all of us, me included, feel what you just described when you first get to grips with the idea of this new coming wave. It's scary, it's petrifying, it's threatening. Is it gonna take my job? Is my daughter or son gonna fall in love with it? You know, what does this mean? What does it mean to be human in a world where there's these other human-like things that aren't human? How do I make sense of that? It's super scary, and a lot of people over the last few years, I think things have changed in the last six months, I have to say, but o- over the last few years, I would say the default reaction has been to avoid the pessimism and the fear, right, to just kind of recoil from it and pretend that it's, like, either not happening or that it's all gonna work out to be rosy, it's gonna be fine, we don't have to worry about it. People often say, "Well, we've always created new jobs. We've never permanently displaced jobs. We've only ever seen new jobs be created. Unemployment is at an all-time low." Right? So there's this default optimism bias that we have, and I think it's less about a need for optimism and more about a fear of pep- pessimism. And so that trap, particularly in elite circles, means that often we aren't having the tough conversations that we need to have in order to respond to the coming wave.
- Steven Bartlett
Are you scared in part about having those tough conversations because of how it might be received?
- Mustafa Suleyman
Um, not so much anymore. So I've spent most of my career trying to put those tough questions on the policy table. Right? I've been raising these questions, the ethics of AI, safety, and questions of containment for as long as I can remember with governments and civil societies and all the rest of it. And so I've become used to talking about that and, you know, I think it's essential that we have the honest conversation, because we can't let it happen to us. We have to openly talk about it.
- Steven Bartlett
Is
- 16:04 – 23:53
Is containment possible?
- Steven Bartlett
... I mean, this is a, this is a big, a big question, but as you sit here now, do you think that it is containable? Because I, I, I can't see how. I can't see how it can be contained. Chapter Three is The Containment Problem, where you give a, give the example of how technologies are often invented for good reasons and for certain use cases, like the hammer, you know, which is used, you know, maybe to build something, but then it also can be used to kill people. Um, and you say in, in history we haven't been able to ban a technology ever really. It has always found a way into society, um, because other societies have an incentive to have it even if we don't, and then we need, we need it, like the nuclear bomb, because if they have it and we don't, then we're at a disadvantage. So, are you optimistic? Honestly.
- Mustafa Suleyman
I don't think an optimism or a pessimism frame is the right one, 'cause the e- both are equally biased in ways that I think distract us. As I say in the book, on the face of it, it does look like containment isn't possible. We haven't contained or permanently banned a technology of this type in the past. There are some that we have done. Right? So we banned CFCs for example 'cause they were producing a hole in the ozone layer. We've banned certain weapons, chemical and biological weapons, for example, or blinding lasers, believe it or not. There are such things as lasers that will instantly blind you. You know, so we have stepped back from the frontier in some cases, but that's largely where there's either cheaper or, you know, equally effective alternatives that are quickly adopted. In this case, these technologies are omni-use, so the same core technology can be used to identify, you know, cancerous tumors in chest X-rays as it can to identify a target on the battlefield for an aerial strike. So that mixed use or omni-use is gonna drive the proliferation, because there's huge commercial incentives because it's going to deliver a huge benefit and do a lot of good. And that's the challenge that we have to figure out, is how to stop something which on the face of it is so good, but at the same time can be used in really bad ways too.
- Steven Bartlett
Do you think we will?
- Mustafa Suleyman
I do think we will. So I think that nation states remain the backbone of our civilization. We have chosen to concentrate power in a single authority, the nation state, and we pay our taxes, and we've given the nation state a monopoly over the use of violence, and now the nation state is going to have to update itself quickly to be able to contain this technology. Because without that kind of essentially oversight, both of those of us who are making it, but also crucially of the open source, then it will proliferate, and it will spread. But regulation is still a real tool, and we can use it, and we must.
- Steven Bartlett
What does, what does the world look like in, um, let's say 30 years if that doesn't happen, in your view? People, because people, the average person can't really grap- grapple their head around artificial intelligence. When they think of it, they think of like these large lang- large language models that you can chat to and ask it about your homework. That's like the average person's understanding of artificial intelligence because that's all they've ever been exposed to of it. You have a different view because of the work you've spent the last decade doing. So to try and give Dave, who's, I don't know, an Uber driver in Birmingham, an idea, who's listening to this right now, what artificial intelligence- el- intelligence is and its potential capabilities if, you know, there's no, there's no containment, what does it, what does the world look like in 30 years?
- Mustafa Suleyman
So I think it's gonna feel largely like another human. So think about the things that you can do, not again in the physical world, but in the digital world.
- Steven Bartlett
2050 I'm thinking of. I'm in 2050.
- Mustafa Suleyman
(laughs) 2050, we will have robots. 2050, we will definitely have robots. I mean, more than that, 2050, we will have new biological beings as well. Because the same trajectory that we've been on with hardware and software is also gonna apply to the platform of biology.
- Steven Bartlett
Are you uncomfortable talking about this?
- Mustafa Suleyman
Yeah. I mean, it's pretty wild, right?
- Steven Bartlett
I noticed you crossed your arms (laughs).
- Mustafa Suleyman
(laughs)
- Steven Bartlett
No, I always, I always look, I always, I always-
- Mustafa Suleyman
(coughs)
- Steven Bartlett
(coughs) ... use that as a cue for someone when, when a subject matter is uncomfortable. And it's interesting because I know you know so much more than me and, about this, and I know you've spent way more hours thinking off into the future about the consequences of this. I mean, you've written a book about it, so bloody hell. Like you spent 10 years at the very, DeepMind is one of the, the pinnacle companies, the pioneers in this whole space. So you know, you know some stuff. And it's funny because when I was, I watched an interview with Elon Musk, and he was asked a question similar to this. I know he speaks in certain, certain tone of voice, but he said that he's, he's almost, he's gotten to the point where he thinks he's living in suspended disbelief.
- Mustafa Suleyman
Hmm.
- Steven Bartlett
Where he thinks that if he spent too long thinking about it, he wouldn't understand the purpose of what he's doing right now.
- Mustafa Suleyman
Hmm.
- Steven Bartlett
And he wri- he says that, uh, it's more dangerous than nuclear weapons, um, and that it's too late, too late to stop it.
- Mustafa Suleyman
Hmm.
- Steven Bartlett
There, there's this one interview that's chilling, and I was filming Dragon's Den the other day, and I showed the Dragons the clip. I was like, "Look what Elon Musk said when he was asked about what his chi- what advice he should give to his children in a world of, in an- an inevitable world of artificial intelligence." It's the first time I've seen Elon Musk stop for like 20 seconds and not know what to say. Stumble, stumble, stumble, stumble, stumble, uh, uh, uh, uh, uh, and then conclude that he's living in suspended disbelief.
- Mustafa Suleyman
Hmm. Yeah, I mean it, I think it's a great phrase. That is the moment we're in. We have to, so what I said, too, about the pessimism aversion trap, we have to confront the probability of seriously dark outcomes, and we have to spend time really thinking about those consequences, because the competitive nature of companies and of nation states is gonna mean that every organization is gonna race to get their hands on intelligence. Intelligence is gonna be a new form of, of capital, right? Just as there was a grab for land or there's a grab for oil, there's a grab for anything that enables you to do more with less, faster, better, smarter, right? And we can clearly see the predictable trajectory of the exponential improvements in these technologies. And so we should expect that wherever there is power, there's now a new tool to amplify that power, accelerate that power, turbocharge it, right? And, you know, in 2050, if you ask me to look out there, I mean, o- of course it makes me grimace. That's why I was like, "Oh my God."
- Steven Bartlett
Mm.
- Mustafa Suleyman
It's, it really does feel like a new species, and, and that has to be brought under control. We cannot allow ourselves to be dislodged from our position as the dominant species on this planet. We cannot allow that.
- Steven Bartlett
You mentioned robots.
- 23:53 – 27:08
What will these AI biological beings look like?
- Steven Bartlett
So these are sort of adjacent technologies that are rising with artificial intelligence. Robots. You mentioned, um, biological, new biological species. Give me some light on what you mean by that.
- Mustafa Suleyman
Well, so, so far the dream of robotics hasn't really come to fruition, right? I mean, we're, we still have, the most we have now are sort of drones and a little bit of self-driving cars. But that is broadly on the same trajectory as these other technologies, and I think that over the next 30 years, you know, we are gonna have humanoid robotics. We're gonna have, um, you know, physical tools within our everyday system that we can rely on, that will be pretty good, that will be pretty good to do many of the physical tasks. And that's a little bit further out, because I think it, you know, there's a lot of tough problems there. But it's still coming in the same way. And likewise with biology. You know, we can now sequence a genome for a millionth of the cost of the first genome, which took place in 2000. So, 20-ish years ago. The cost has come down by a million times, and we can now increasingly synthesize, that is create or manufacture, new bits of DNA, which obviously give rise to life in every possible form. And we're starting to engineer that DNA to either remove traits, uh, or capabilities that we don't like, or indeed to add new things that we want it to do. We want, you know, fruit to last longer, or we want meat to have higher protein, et cetera, et cetera, synthetic meat to have higher protein levels.
- Steven Bartlett
And what's the implications of that?
- Mustafa Suleyman
Well-
- Steven Bartlett
The potential implications.
- Mustafa Suleyman
I think that the darkest scenario there is that people will experiment with pathogens, engineered, you know, synthetic pathogens that might end up accidentally or intentionally being more transmissible, i.e. they, they're, they can spread faster, um, or more lethal, i.e., you know, they cause more harm or potentially kill.
- Steven Bartlett
Like a pandemic.
- Mustafa Suleyman
Like a pandemic. Um, and that's where we need containment, right? We have to limit access to the tools and the knowhow to carry out that kind of experimentation. So, one framework of thinking about this with respect to making containment possible is that we really are experimenting with dangerous materials, and anthrax is not something that can be bought over the internet that can be freely experimented with. And likewise, the very best of these tools in a few years' time are gonna be capable of creating, you know, new synthetic, um, pandemic pathogens. And so we have to restrict access to those things. That means restricting access to
- 27:08 – 33:10
Would we be able to regulate AI?
- Mustafa Suleyman
the compute. It means restricting access to the software that runs the models, to the cloud environments that provide APIs, provide you access to experiment with those things. Um, and of course, on the biology side, it means restricting access to some of the substances. And people aren't gonna like this. People are not gonna like that claim, because it means that those who wanna do good with those tools, those who wanna create a startup, the small guy, the little developer that struggles to comply with all the regulations, they're gonna be pissed off, understandably, right? But that is the age we're in. Deal with it. Like, we have to confront that reality. That means that we have to approach this with the precautionary principle, right? Never before in the invention of a technology or in the creation of a regulation have we proactively said, "We need to go slowly. We need to make sure that this first does no harm." The precautionary principle. And that is just an unprecedented moment. No other technology's done that, right? Because I think we collectively in the industry, those of us who are closest to the work, can see a place in five years or 10 years where it could get out of control, and we have to get on top of it now, and it's better to forgo, like that is give up some of those potential upsides or benefits until we can be more sure that it can be contained, that it can be controlled, that it always serves our collective interests.
- Steven Bartlett
And I, I think about that. So I think about what you've just said there about being able to create these pathogens, these diseases, and viruses, et cetera, that, you know, could become weapons or whatever else. But with artificial intelligence and the power of that intelligence, with these, um, pathogens, you could theoretically ask one of these systems to create a virus that, a very deadly virus. Um, you could ask the artificial intelligence to create a very deadly virus that has certain properties, um, maybe even that mutates over time in a certain way so it only kills a certain amount of people. Kind of like a nuclear bomb of- of viruses that you could just pop, hit an enemy with. Now, if I'm, if I hear that and I go, "Okay, that's powerful. I would like one of those," uh, you know, there might be an adversary out there that goes, "I would like one of those just in case America get out of hand."
- Mustafa Suleyman
Exactly.
- Steven Bartlett
And America's thinking, you know, "I want one of those in case Russia gets out of hand." And so okay, you might t- take a precautionary approach in the United States, but that's only gonna put you on the back foot when China or Russia or one of your adversaries accelerates forward in that- in that path. And this was same with the- the nuclear bomb, and, you know?
- Mustafa Suleyman
You nailed it. I mean, that is the race condition. We refer to that as the race condition, the idea that if I don't do it, the other party is gonna do it, and therefore, I must do it. But the problem with that is that it creates a self-fulfilling prophecy, so the default there is that we all end up doing it. And that can't be right, because there is an opportunity for massive cooperation here. There's a shared, that is between us and China and every other, quote unquote, them or they, or enemy that we want to create. We've all got a shared interest in advancing the collective health and well-being of humans and humanity.
- Steven Bartlett
How well have we done at promoting shared interest-
- Mustafa Suleyman
Well-
- Steven Bartlett
... in the development of technologies over the years? Even at, like, a corporate level? Even, you know...
- Mustafa Suleyman
(laughs) You know, the Nuclear Non-Proliferation Treaty has been reasonably successful. There's only nine nuclear states in the world today. We've stopped man- like three countries actually gave up nuclear weapons because we incentivized them with sanctions and threats and economic rewards. Um, small groups have tried to get access to nuclear weapons and so far have largely failed.
- Steven Bartlett
It's expensive though, right? And hard to... Like, uranium as a, as a chemical to keep it stable and to, to buy it and to house it. I mean, I couldn't just put it in the shed.
- Mustafa Suleyman
You certainly couldn't put it in a shed. You can't download uranium-235 off the internet. It's not available open source. That is totally true. So, it's got different characteristics for sure.
- Steven Bartlett
But a kid in Russia could, you know, in his bedroom could download something onto his computer that's incredibly harmful in the, in artificial intelligence department, right?
- Mustafa Suleyman
I think that that will be possible at some point in the next five years, because there's a weird trend that's going on here. On the one hand, you've got the cutting edge AI models that are built by Google and OpenAI and my company, Inflection, and they cost hundreds of millions of dollars, and there's only a few of them. But on the other hand, what was cutting edge a few years ago is now open source today.
- Steven Bartlett
Hmm.
- Mustafa Suleyman
So, GPT-3, which came out in the summer of 2020, is now reproduced as an open source model. So, the code and the weights of the model, the design of the model, and the actual implementation code is completely freely available on the web. And it's tiny. It's like 60 times or s- 60, 70 times smaller than the original model, which means that it's cheaper to use and cheaper to run. And that's as, as, you know, we've said earlier, like, that's the natural trajectory of technologies that become useful. They get more efficient, they get cheaper, and they spread further. And so, that's the containment challenge. That's really the essence of what I'm sort of trying to raise in my book, is to frame the challenge of the next 30 to 50 years as around containment, um, and around confronting proliferation.
- 33:10 – 35:43
In 30 years' time, do you think we would have contained AI?
- Steven Bartlett
Do you believe... 'Cause we're both gonna be alive unless this, you know, unless some robot kills us. But we're both gonna be alive in 30 years time.
- Mustafa Suleyman
I hope so.
- Steven Bartlett
Maybe the podcast will still be going unless AI is, is now taken my job.
- Mustafa Suleyman
(laughs)
- Steven Bartlett
(laughs) It's very possible.
- Mustafa Suleyman
There's-
- Steven Bartlett
So, I'm gonna s- I'm gonna sit you here in, you know, when you're, I mean, you'll, you'll be what? 60, 68 years old? I'll be 60.
- Mustafa Suleyman
(laughs)
- Steven Bartlett
Um, and I'll say, at that point when we have that conversation, do you think we would have been successful in containment on a global level?
- Mustafa Suleyman
I think we have to be. I can't even-
- Steven Bartlett
It's interesting.
- Mustafa Suleyman
... think that we're not.
- Steven Bartlett
Why?
- Mustafa Suleyman
(sighs) Because I'm fundamentally a humanist, and I think that we have to make a choice to put our species first. And I think that that's what we have to be defending for the next 50 years. That's what we have to defend because look, it's certainly possible that we invent these AGIs in such a way that they are always going to be provably, um, subservient to humans and take instructions, you know, from their human controller every single time. But enough of us think that we can't be sure about that (laughs) that I don't think we should take the gamble, basically. So that's why I think that we should focus on containment and non-proliferation, because some people, if they do have access to the technology, will want to take those risks and they will just want to see, like, what's on the other side of the door, you know, and they might end up opening Pandora's box. And that's a decision that affects all of us, and that's the challenge of the networked age. You know, we live in this globalized world and we use these words like globalization, and you sort of forget what globalization means. This is what globalization is. This is what a networked world is. It means that someone taking one small action can suddenly spread everywhere instantly.
- SBSteven Bartlett
Regardless of their intentions when they took the action.
- MSMustafa Suleyman
It may be, you know, unintentional, like you say. It may be that they're never, they weren't ever meaning to do harm.
- SBSteven Bartlett
When
- 35:43 – 46:35
Why would such a being want to interact with us?
- SBSteven Bartlett
I think I asked you, when I said that in 30 years' time, you said that there will be, like, human-level intelligences, this new species, interacting with us. But for me to think the species will want to interact with me feels like wishful thinking-
- MSMustafa Suleyman
(laughs)
- SBSteven Bartlett
... (laughs) because what will I be to them? You know, like, I've got a French Bulldog, Pablo, and I can't imagine our IQ is that far apart.
- MSMustafa Suleyman
(laughs)
- SBSteven Bartlett
Like, it's (laughs) like, you know, in relative terms, the IQ between me and my dog, Pablo, I can't imagine it's that far apart. Even when I think about, is it like the orangutan, where we only have, like, a 1% difference in DNA or something crazy? And yet they throw their poop around and I'm sat here broadcasting-
- MSMustafa Suleyman
(laughs)
- SBSteven Bartlett
... around the world. There's quite a difference in that 1%, you know. And then I think about this new species, where, as you write in your book in chapter four, there seems to be no upper limit to AI's potential intelligence. Why would such an intelligence want to interact with me?
- MSMustafa Suleyman
Well, it depends how you design it. So, I think that our goal, one of the challenges of containment, is to design AIs that we want to interact with, that wanna interact with us, right? If you set an objective function for an AI, a goal for an AI, by its design, which, you know, inherently disregards or disrespects you as a human and your goals, then it's gonna wander off and do a lot of strange things.
- SBSteven Bartlett
What if it has kids, and the kids are-
- MSMustafa Suleyman
(laughs)
- SBSteven Bartlett
You know what I mean? What if it replicates in a way where... 'Cause I've heard this conversation around, like, it depends how we design it, but, you know, (sighs) I think about... It's kind of like, if I have a kid, and the kid grows up to be 1,000 times more intelligent than me. To think that I could have any influence on it when it's a thinking, sentient, developing species again feels like I'm overestimating my version of intelligence and importance and significance in the face of something that is incomprehensibly, like, even 100 times more intelligent than me. And the speed of its computation is 1,000 times what the meat in my skull can do.
- MSMustafa Suleyman
Yeah.
- SBSteven Bartlett
Like, how, how m- how is it gonna... How are, how do I know it's gonna respect me-
- MSMustafa Suleyman
Yeah.
- SBSteven Bartlett
... or care about me, or understand w- you know, that I may... you know?
- MSMustafa Suleyman
I think that comes back down to the containment challenge. I think that if we can't be confident that it's going to respect you and understand you, and work for you and us as a species overall, then that's where we have to adopt the precautionary principle. I don't think we should be taking those kinds of risks in experimentation and design. Now, I'm not saying it's possible to design an AI that doesn't have those self-improvement capabilities in the limit, in like 30 or 50 years. That's kind of what I was saying: it seems likely that if you have one like that, it's gonna take advantage of infinite amounts of data and infinite amounts of computation, and it's gonna kind of outstrip our ability to act. And so I think we have to step back from that precipice. That's what the containment problem is: it's actually saying no sometimes. It's saying no. And that's a different sort of muscle that we've (laughs) never really exercised as a civilization. And that's obviously why containment appears not to be possible, because-
- SBSteven Bartlett
We've never done it before, wow.
- MSMustafa Suleyman
... we've never done it before. (laughs) And every inch of our, you know, commerce, and politics, and our war, and our, all of our instincts are just like, "Clash, compete. Clash, compete."
- SBSteven Bartlett
Profit.
- MSMustafa Suleyman
Profit.
- SBSteven Bartlett
Grow. Beat. Yeah.
- MSMustafa Suleyman
Exactly. Dominate. You know, fear them, be paranoid. Like now, all this nonsense about China being this new evil. How does that slip into our culture? How have we suddenly all shifted from thinking it's the Muslim terrorists about to blow us all up, to now it's the Chinese who are about to, you know, blow up Kansas? It's just like, what are we talking about? We really have to pare back the paranoia and the fear and the othering, um, because those are the incentive dynamics that are gonna drive us to, you know, cause self-harm to humanity.
- SBSteven Bartlett
Mm-hmm. Thinking the worst of each other. There's a couple of key moments when, in my understanding of artificial intelligence, there have been kind of paradigm shifts for me. 'Cause I think, like many people, I thought of artificial intelligence as, you know, like a child I was raising. And I would program it, I would code it, to do certain things. So, I would code it to play chess, and I would tell it the moves that are conducive to being successful in chess. And then I remember watching that, like, AlphaGo documentary-
- MSMustafa Suleyman
Right.
- SBSteven Bartlett
... which I think was Deep, DeepMind, wasn't it?
- MSMustafa Suleyman
That was us, yeah.
- SBSteven Bartlett
You guys. So, you programmed this, um, artificial intelligence to play the game Go, which is kind of like... just think of it kind of like chess, or backgammon, or whatever. And it eventually just beats the best player in the world of all time. And the way it learnt how to beat the best player in the world of all time, the world champion, who was, by the way, depressed when he got beat, um, was just by playing itself, right? And then there's this moment, I think, in... uh, is it game four or something, where-
- MSMustafa Suleyman
Right.
- SBSteven Bartlett
... it does this move that no one could have predicted.
- MSMustafa Suleyman
Right.
- 46:35 – 57:04
Quantum computers & their potential
- MSMustafa Suleyman
in 2200?
- SBSteven Bartlett
You tell me. You're smarter than me.
- MSMustafa Suleyman
(laughs) I mean, it's mind-blowing. It's mind-blowing.
- SBSteven Bartlett
What is the answer?
- MSMustafa Suleyman
We'll have quantum computers by then.
- SBSteven Bartlett
What's a quantum computer?
- MSMustafa Suleyman
A quantum computer is a completely different type of computing architecture which, in simple terms, basically allows you to process those calculations that I described at the beginning, billions and billions of flops, in a single computation. So everything that you see in the digital world today relies on computers processing information, and the speed of that processing is a friction. It kind of slows things down, right? Uh, you remember, back in the day, old-school modems, the 56K modem, the dial-up sound-
- SBSteven Bartlett
Mm-hmm.
- MSMustafa Suleyman
... and the image loading, like, pixel by pixel, that was because the computers were slow. And we're getting to a point now where the computers are getting faster and faster, and quantum computing is like a whole new leap, way, way beyond where we currently are. And so-
- SBSteven Bartlett
By analogy how would I understand that? So, like, if my, I've got my dial-up modem over here-
- MSMustafa Suleyman
(laughs)
- SBSteven Bartlett
... and then quantum computing over here.
- MSMustafa Suleyman
Right.
- SBSteven Bartlett
What's the, how do I... (laughs) What's the difference?
- MSMustafa Suleyman
Well, what, I don't know, what, it's really difficult to explain.
- SBSteven Bartlett
Is it like a billion times faster?
- MSMustafa Suleyman
Oh, it's, it's, it's like, it's like billions of billions times faster. It's, it's, it's much more than that. I mean, one way of thinking about it is like a floppy disk, which I guess most people remember-
- SBSteven Bartlett
Mm-hmm.
- MSMustafa Suleyman
... 1.4 megabytes, a physical thing back in the day. In 1960 or so, that much storage was basically an entire pallet's worth of computer that was moved around by a forklift truck, right? Which is insane. Today, you know, you have billions and billions of times that floppy disk in the smartphone in your pocket. Tomorrow, you're gonna have billions and billions of smartphones in minuscule wearable devices. There'll be cheap fridge magnets that, you know, are constantly on everywhere, sensing all the time, monitoring, processing, analyzing, improving, optimizing, you know, and they'll be super cheap. So, it's super unclear what you do with all of that knowledge and information. I mean, ultimately, knowledge creates value. When you know the relationship between things, you can improve them, you know, make them more efficient. And so, more data is what has enabled us to build all the value, you know, online in the last 25 years. And so, what does that look like in 150 years? I can't really even imagine, to be honest with you. It's very hard to say. I don't think everybody is gonna be working.
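The floppy-disk-to-smartphone trajectory described here is compounding growth, and a tiny sketch shows how quickly doubling runs away. The figures below are illustrative round numbers, not exact product specs, and the two-year doubling period is an assumption in the spirit of Moore's law, not a claim from the conversation.

```python
# Illustrative compounding sketch; figures are round numbers, not specs.

floppy_mb = 1.44                # the floppy disk mentioned above
smartphone_mb = 256 * 1024      # a 256 GB phone, as an example figure

print(f"one phone holds about {smartphone_mb / floppy_mb:,.0f} floppy disks")

# If capacity doubles roughly every two years, n doublings means 2**n growth:
for years in (30, 50):
    print(f"{years} years -> about {2 ** (years // 2):,}x today's capacity")
```

Even with a conservative doubling period, 30 years compounds to tens of thousands of times today's capacity, which is the shape of the trend behind "billions of smartphones in wearable devices".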
- SBSteven Bartlett
Why would we... yeah, well- well-
- MSMustafa Suleyman
We wouldn't be working in that kind of environment. I mean, the other trajectory to add to this is the cost of energy production. You know, if AI really helps us solve battery storage, which is the missing piece, I think, to really tackle climate change, then we will be able to basically source and store infinite energy from the sun. And I think in 20 or 30 years' time, that is gonna be a cheap and widely available, if not completely free, resource. And if you think about it, everything in life has the cost of energy built into its production value.
- SBSteven Bartlett
Mm-hmm.
- MSMustafa Suleyman
And so if you strip that out, everything is likely to get a lot cheaper. We'll be able to desalinate water. We'll be able to grow crops much, much cheaper. We'll be able to grow much higher-quality food, right? It's gonna power new forms of transportation. It's gonna reduce the cost of drug production and healthcare, right? So all of those gains, obviously there'll be a huge commercial incentive to drive the production of those gains, but the cost of producing them is gonna go through the floor. I think that's one key thing that a lot of people don't realize that is a reason to be hugely hopeful and optimistic about the future. Everything is gonna get radically cheaper in 30 to 50 years.
- SBSteven Bartlett
Hmm. So, in 200 years' time, we have no idea what the world looks like. This, uh, this goes back to the point about being... Is it, did you say, transhumanist?
- MSMustafa Suleyman
Right.
- SBSteven Bartlett
What does that mean?
- MSMustafa Suleyman
Transhumanism, I mean, i- it's a group of people who basically believe that you, that, that humans and our soul and our being will one day transcend or move beyond our biological substrate.
- SBSteven Bartlett
Ah, okay.
- MSMustafa Suleyman
So our physical body, our brain, our biology is just an enabler for your intelligence and who you are as a person. And there's a group of kind of crackpots, basically, I think (laughs), who think that we're gonna be able to upload ourselves to a silicon substrate, right, a computer, that can hold the essence of what it means to be Steven. So you, in 2200, could well still be you, by their reasoning. But you'll live on a server somewhere.
- SBSteven Bartlett
Why are they wrong? I think about all these adjacent technologies like biological, um, biological advancements. Did you call it like biosynthesis or something? Was it-
- 57:04 – 1:03:38
Cybersecurity
- MSMustafa Suleyman
the thing that we focus on, because otherwise it contains us. (laughs)
- SBSteven Bartlett
I've been thinking a lot recently about cybersecurity as well, just broadly, on an individual level. In a world where there are these kinds of tools, which seems to be quite close, um, large language models, it brings up this whole new question about cybersecurity and cyber safety. And, you know, in a world where there's the ability to generate audio and language and videos that seem to be real, um, what can we trust? And, you know, I was watching a video of a young girl whose grandmother was called up by a voice that was made to sound like her son, saying he'd been in a car accident and asking for money, and her nearly sending the money. You know, this really brings into focus that our lives are built on trust: trusting the things we see, hear, and watch. And now, we're at what feels like a moment where we're no longer gonna be able to trust what we see-
- MSMustafa Suleyman
Mm.
- SBSteven Bartlett
... on the internet, on the phone.
- MSMustafa Suleyman
Mm.
- SBSteven Bartlett
What advice do you have for people who are worried about this?
- MSMustafa Suleyman
Mm. Mm. So, skepticism, I think, is healthy and necessary, and I think that we're gonna need it, um, even more than we ever did, right? And so if you think about how we've adapted to the first wave of this, which was spammy email scams, um, everybody got them. And over time, people learned to identify them and be skeptical of them and reject them. Likewise, you know, I'm sure many of us get, like, text messages. I certainly get loads (laughs) of text messages trying to phish me and ask me to meet up or do this, that, and the other. And we've adapted, right? Now, I think we should all know and expect that criminals will use these tools to manipulate us, just as you described. I mean, you know, the voice is gonna be humanlike. The deepfake is gonna be super convincing. And there are actually ways around those things. So for example, the reason why the banks invented OTPs, um, one-time passwords, where they send you a text message with a special code, um, is precisely for this reason, so that you have 2FA, two-factor authentication. Increasingly, we will have three- or four-factor authentication, where you have to triangulate between multiple separate, independent sources. And it won't just be like, "Call your bank manager and release the funds," right? So, this is where we need the creativity and energy and attention of everybody, because the defensive measures have to evolve as quickly as the potential offensive measures, the attacks that are coming.
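The one-time passwords mentioned here work by having the bank and your device share a secret and derive a short-lived code from the current time window, so a stolen code expires within seconds. A minimal sketch of the standard scheme (time-based OTP in the style of RFC 6238; the shared secret below is made up for illustration):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Time-based one-time password, RFC 6238-style sketch.

    The server and your phone share `secret`; both derive the same
    short-lived code from the current 30-second window.
    """
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Two parties with the same secret and clock agree on the code:
shared = b"bank-and-customer-shared-secret"        # illustrative secret
assert totp(shared, now=1_700_000_000) == totp(shared, now=1_700_000_000)
```

The "triangulate between multiple independent sources" point is just this idea layered: something you know (password), something you have (the OTP device), and so on, each verified separately.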
- SBSteven Bartlett
I heard you say this: that you think, for many of these problems, we're gonna need to develop AIs to defend us from the AIs.
- MSMustafa Suleyman
Right. We kind of already have that, right? So we have automated ways of detecting spam online these days. You know, most of the time, there are machine learning systems which are trying to identify when your credit card is used in a fraudulent way. That's not a human sitting there looking at patterns of spending traffic in real time. That's an AI that is, like, flagging that something looks off. Um, likewise with data centers or security cameras. A lot of those security cameras these days, you know, have tracking algorithms that look for surprising sounds; like, if a glass window is smashed, that will often be detected by an AI that is, you know, listening on the security camera. So, you know, that's kind of what I mean by that: increasingly those AIs will get more capable and we'll want to use them for defensive purposes, and that's exactly what it looks like to have good, healthy, well-functioning, controlled AIs that serve us.
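The credit-card flagging described here is, at its simplest, anomaly detection: score each new transaction against the account's history and flag outliers. A toy sketch, with invented figures; real fraud systems use far richer features than the amount alone:

```python
# Toy anomaly detector in the spirit of automated fraud flagging:
# flag a transaction far outside the account's usual spending pattern.

from statistics import mean, stdev

def is_suspicious(history: list, amount: float, threshold: float = 3.0) -> bool:
    """Flag an amount more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

past = [12.5, 40.0, 22.0, 35.0, 18.0, 27.5, 31.0, 24.0]  # invented history
print(is_suspicious(past, 29.0))    # in pattern
print(is_suspicious(past, 900.0))   # wildly out of pattern
```

The same shape of system, watching a statistic and alerting on deviation, is what sits behind the spam filters and "glass smashed" audio detectors mentioned in the same breath.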
- SBSteven Bartlett
I went on one of these large language models and said to it, "Give me an example where an artificial intelligence takes over the world, or whatever, and it results in the destruction of humanity, and then tell me what we'd need to do to prevent it." And it gave me this wonderful example of this AI called Cynthia that threatens to destroy the world, and it said, "The way to defend against that would be a different AI-"
- MSMustafa Suleyman
(laughs)
- SBSteven Bartlett
... which had a different name. And it said that this one would be acting in human interests, and we'd basically be fighting one AI with another AI. And of course, of course, at that level, if Cynthia started to wreak havoc on the world and take control of the nuclear weapons and infrastructure and all that, we would need an equally intelligent weapon to fight it.
- MSMustafa Suleyman
Although one of the interesting things that we've found, um, over the last few decades is that so far it has tended to be the AI plus the human that is still dominant. That's the case in chess, uh, in Go, in other games, um, that-
- SBSteven Bartlett
In Go it's still...
- MSMustafa Suleyman
Yeah. So there was a paper that came out a few months ago, two months ago, that showed that a human was actually able to beat a cutting-edge Go program, um, even one that was better than AlphaGo, with a new strategy that they had discovered. Um, you know, so obviously it's not just a sort of game-over environment where the AI just arrives and it gets better. Like, humans also adapt. They get super smart. They, like I say, get more cynical, get more skeptical, ask, you know, good questions, invent their own things, use their own AIs to adapt. And that's the evolutionary nature of what it means to have a technology, right? I mean, everything is a technology. Like, your pair of glasses made you smarter in a way. Before there were glasses, people who got bad eyesight weren't able to read; you know, suddenly those who did adopt those technologies were able to read for longer in their lives, or under low-light conditions, and they were able to consume more information and got smarter. And so that is the trajectory of technology. It's this iterative interplay between, you know, human and machine that makes us better over time.
- SBSteven Bartlett
You
- 1:03:38 – 1:05:55
Why did you build a company in this space knowing the problems?
- SBSteven Bartlett
know the potential, um, consequences if, if we don't, uh, reach a point of containment, yet you chose to build a company in this space.
- MSMustafa Suleyman
Yeah.
- SBSteven Bartlett
Why, why that? Why did you do that?
- MSMustafa Suleyman
Because I believe that the best way to, uh, demonstrate how to build safe and contained AI is to actually experiment with it in practice. And I think that if we are just skeptics or critics and we stand back from the cutting edge, then we give up that opportunity to shape outcomes to, you know, all of those other actors that we referred to, whether it's, like, China and the US going at each other's throats, uh, you know, or other big companies that are purely pursuing profit at all costs. And so it doesn't solve all the problems, of course. It's super hard. And again, it's full of contradictions, but I honestly think it's the right way for everybody to proceed. You know, if you're-
- SBSteven Bartlett
To experiment at the front.
- MSMustafa Suleyman
Yeah. If you're afraid-
- SBSteven Bartlett
China, Russia, Putin.
- MSMustafa Suleyman
... understand, right? What reduces fear is deep understanding. Spend time playing with these models. Look at their weaknesses. They're not superhumans yet. They make tons of mistakes. They're crappy in lots of ways. They're actually not that hard to make.
- SBSteven Bartlett
The more you've experimented, has there, has that correlated with a reduction in fear?
- MSMustafa Suleyman
(laughs)
- SBSteven Bartlett
(laughs)
- MSMustafa Suleyman
Cheeky question.
- SBSteven Bartlett
No, but that's what you just said.
- MSMustafa Suleyman
Uh, yes and no. You're totally right. Yes, it has in the sense that, you know, the problem is the more you learn-
- SBSteven Bartlett
(laughs)
- MSMustafa Suleyman
... the more you realize-
- SBSteven Bartlett
Yeah, that's what I'm saying (laughs) . I was fine before I started talking about AI.
- MSMustafa Suleyman
(laughs)
- SBSteven Bartlett
Now the more I've talked about it... (laughs)
- MSMustafa Suleyman
It's true. It's true. It's, it's sort of pulling on a thread which-
- SBSteven Bartlett
(laughs) Yeah.
- MSMustafa Suleyman
... (laughs) is a crazy spiral. Um, yeah, I mean, I think in the short term it's made me way less afraid, because I don't see that kind of existential harm that we've been talking about in the next decade or two. But longer term, that's where I struggle to wrap my head around how things play out in 30 years.
- SBSteven Bartlett
Some
- 1:05:55 – 1:15:29
Will governments help us regulate it?
- SBSteven Bartlett
people say government regulation will sort it out. You discuss this in chapter 13 of your book, which is titled Containment Must Be Possible (laughs). I love how you didn't say "is".
- MSMustafa Suleyman
Yeah.
- SBSteven Bartlett
Containment must be-
- MSMustafa Suleyman
(laughs)
- SBSteven Bartlett
(laughs) Containment Must Be Possible. Um, what do you say to people that say government regulation will sort it out? I heard Rishi Sunak did some announcement and he's got a COBRA committee coming together. They'll handle it.
- MSMustafa Suleyman
That's right. And the EU have a huge piece of regulation called the EU AI Act. Um, you know, President Joe Biden has got his own set of proposals, and, um, you know, we've been working with both Rishi Sunak and Biden, trying to contribute and shape it in the best way that we can. Look, it isn't gonna happen without regulation. So regulation is essential. It's critical. Um, again, going back to the precautionary principle. But at the same time, regulation isn't enough. You know, I often hear people say, "Well, we'll just regulate it. We'll just stop. We'll slow down." Um, and the problem with that is that it kind of ignores the fact that the people who are putting together the regulation don't really (laughs) understand enough about the detail today. You know, in their defense, they're rapidly trying to wrap their heads around it, especially in the last six months, and that's a great relief to me, 'cause I feel the burden is now increasingly shared. And, you know, just from a personal perspective, I feel like I've been saying this for about a decade, and just in the last six months, now everyone's coming at me and saying, like, you know, "What's going on?" I'm like, "Great. This is the conversation we need to be having," because everybody can start to see the glimmers of the future, like what will happen if a ChatGPT-like product or a Pi-like product really does improve over the next 10 years. And so, when I say, you know, regulation is not enough, what I mean is, it needs movement, it needs culture, it needs people who are actually building and making, you know, in, like, modern, creative, critical ways, not just giving it up to, you know, companies or small groups of people, right? We need lots of different people experimenting with strategies for containment.
- SBSteven Bartlett
Isn't it predicted that this industry's a $15 trillion industry or something like that?
- MSMustafa Suleyman
Yeah, I've heard that. It is-
- SBSteven Bartlett
So if-
- MSMustafa Suleyman
... a lot.
- SBSteven Bartlett
So if I'm Rishi (Rishi's the Prime Minister of the UK), and I know that I'm going to be chucked out of office in two years unless this economy gets good, I don't wanna do anything to slow down that $15 trillion bag that I could be on the receiving end of. I would definitely not wanna slow that $15 trillion bag down and hand it to, like, America or Canada or some other country. I'd want that $15 trillion windfall to land on my country.
- MSMustafa Suleyman
Right.
- SBSteven Bartlett
So I have, I have no... Other than the long-term, you know, health and success of humanity, in my four-year election window, I've got to do everything I can to boost these numbers-
- MSMustafa Suleyman
Right.
- SBSteven Bartlett
... and get us looking good. So I could give you lip service, but listen, I'm not gonna be here unless these numbers look good.
- MSMustafa Suleyman
Right. Exactly. That's another one of the problems. Short-termism is everywhere. Who is responsible for thinking about the 20-year future?
- SBSteven Bartlett
Who is it?
- MSMustafa Suleyman
I mean, that's a deep question, right? The world is happening to us on a decade-by-decade timescale. It's also happening hour by hour. So change is just ripping through us, and this arbitrary window of governance, like a four-year election cycle, where actually it's not even four years, 'cause by the time you've got in, you do some stuff for six months, and then by month 12 or 18 you're starting to think about the next cycle and are you gonna pull... You know, the short-termism is killing us, right? And we don't have an institutional body whose responsibility is stability. You could think of it as, like, a global technology stability function. What is the global strategy for containment that has the ability to introduce friction when necessary, to implement the precautionary principle, and to basically keep the peace? That, I think, is the missing governance piece which we have to invent in the next 20 years, and it's insane, because I'm basically describing the UN Security Council plus the World Trade Organization. All these huge, you know, global institutions which formed after the horrors of the Second World War have actually been incredible. They've created interdependence and alignment and stability, right? Obviously, there's been a lot of bumps along the way in the last 70 years, but broadly speaking, it's been an unprecedented period of peace, and when there's peace, we can create prosperity. And that's actually what we're lacking at the moment: we don't have an international mechanism for coordinating among competing nations, competing corporations, um, to drive the peace. In fact, we're actually going in kind of the opposite direction. We're resorting to the old-school language of a clash of civilizations, with, like, "China is the new enemy. They're gonna come to dominate us. We have to dominate them. It's a battle between two poles. China's taking over Africa, China's taking over the Middle East. We have to count..." I mean, that can only lead to conflict. It just assumes that conflict is inevitable. And so when I say regulation is not enough: no amount of good regulation in the UK or in Europe or in the US is gonna deal with that clash-of-civilizations language which we seem to have become addicted to.
- SBSteven Bartlett
If we need that global collaboration to be successful here, are you optimistic that we'll get it? Because the same incentives are at play with climate change as with AI. You know, why would I want to reduce my carbon emissions when it's making me loads of money? Why would I want to reduce my AI development when it's gonna make us $15 trillion?
- MSMustafa Suleyman
Yeah. So the really painful answer to that question is that we've only really ever driven extreme compromise and consensus in two scenarios. One, off the back of unimaginable catastrophe and suffering. You know, Hiroshima, Nagasaki, the Holocaust, and World War II, which drove 10 years of consensus and new political structures, right? And then the second is, um-
- SBSteven Bartlett
We did fire the bullet, though, didn't we? We fired a couple of those nuclear bombs.
- MSMustafa Suleyman
Exactly. And that's why I'm saying the brutal truth of that is that it takes a catastrophe to trigger... the need for alignment, right? So that's one. The second is where there is an obvious mutually-assured-destruction dynamic, um, you know, where both parties are afraid that this would trigger nuclear meltdown, right? And that means suicide.
- SBSteven Bartlett
And when there were few parties.
- MSMustafa Suleyman
Exactly. (laughs)
- SBSteven Bartlett
When there were just nine people.
- MSMustafa Suleyman
Exactly.
- SBSteven Bartlett
You could get all nine, but when we're talking about artificial intelligence, there are gonna be more than nine people, right, that have access to the full power of that technology for nefarious reasons.
- MSMustafa Suleyman
I don't think it has to be like that. I think that's the challenge of containment, is to reduce the number of actors that have access to the existential threat technologies to an absolute minimum, and then use the existing military and economic incentives which have driven world order and peace so far, um, to s- to prevent the proliferation of access to these superintelligences or these AGIs.
- SBSteven Bartlett
A quick word on Huel. As you know, they're a sponsor of this podcast, and I'm an investor in the company. And I have to say, it's moments like this in my life, where I'm extremely busy and I'm flying all over the place and I'm recording TV shows and I'm recording shows in America and here in the UK, that Huel is a necessity in my life. I'm someone that, regardless of external circumstances or professional demands, wants to stay healthy and nutritionally complete. And that's exactly where Huel fits in my life. It's enabled me to get all of the vitamins and minerals and nutrients that I need in my diet to be aligned with my health goals, while also not dropping the ball on my professional goals, because it's convenient and because I can get it online, in Tesco, in supermarkets all over the country. If you're one of those people that hasn't yet tried Huel, or you have before but for whatever reason you're not a Huel consumer right now, I would highly recommend giving Huel a go. And Tesco have now increased their listings with Huel, so you can now get the RTD, ready to drink, in Tesco Expresses all across the UK. Ten areas of focus for containment. You're the first person I've met that's really hazarded, laid out, a blueprint for the things that need to be done, um, cohesively, to try and reach this point of containment, so
- 1:15:29 – 1:30:10
What do we need to do to contain it?
- SBSteven Bartlett
I'm super excited to talk to you about these. The first one is about safety, um, and you mentioned there, that's kind of what we talked about a little bit, there being AIs that are currently being developed to help contain other AIs. Two, audits. Um, which is being able to, f- uh, from what I understand, being able to audit what's being built in the, these open source models. Three, chokepoints. What's that?
- MSMustafa Suleyman
Yeah. So chokepoints refers to points in the supply chain where you can throttle who has access to what.
- SBSteven Bartlett
Okay.
- MSMustafa Suleyman
So on the internet today, everyone thinks of the internet as an idea, this kind of abstract cloud thing that hovers around (laughs) above our heads. But really, the internet is a bunch of cables.
- SBSteven Bartlett
Mm-hmm.
- MSMustafa Suleyman
Those cables, you know, are physical things that transmit information, you know, under the sea, and you know, the, those points, the endpoints can be stopped, and you can monitor traffic, you can control basically what traffic moves back and forth. And then the second chokepoint is access to chips, so the GPUs, graphics processing units, which are used to train these super large clusters.
- SBSteven Bartlett
Mm-hmm.
- MSMustafa Suleyman
I mean, we now have the second-largest supercomputer in the world. Uh, at least, you know, just for the next six months we will. Other people will catch up soon, but we're ahead of the curve, we're very lucky. Cost a billion dollars. And those chips are really the raw commodity that we use to build these large language models, and access to those chips is something that governments can, should, and are, um, you know, restricting. That's a chokepoint.
- SBSteven Bartlett
You spent a billion dollars on a computer?
- MSMustafa Suleyman
We did, yeah. (laughs) It's a bit more than that, actually, about 1.3. (laughs)
- SBSteven Bartlett
(laughs) In a couple of years' time, that'll be the price of an iPhone. (laughs)
- MSMustafa Suleyman
(laughs) That's the problem, everyone's gonna have it. (laughs)
- SBSteven Bartlett
Number six is quite curious. You say that, um, the need for governments to put increased taxation on AI companies to be able to fi- um, fund the massive changes in society, such as paying for reskilling and education.
- MSMustafa Suleyman
Yeah.
- SBSteven Bartlett
Um, you put massive tax on it over here, I'm gonna go over here.
- MSMustafa Suleyman
(laughs)
- SBSteven Bartlett
If you tax it, if I'm an AI company and you're taxing me heavily over here, I'm going to Dubai.
- MSMustafa Suleyman
Yep.
- SBSteven Bartlett
Or Portugal.
- MSMustafa Suleyman
Yep. So-
- SBSteven Bartlett
If it's that much of a competitive disadvantage, I will not build my company where the taxation's high.
- MSMustafa Suleyman
Right. Right. So the way to think about this is what are the strategies for containment? If we're agreed that long-term we want to contain, that is close down, slow down, control both the proliferation of these technologies and the way the really big AIs are used, then the way to do that is to tax things.
- SBSteven Bartlett
Mm-hmm.
- MSMustafa Suleyman
Tax things, taxing things slows them down, and that's what you're looking for, provided you can coordinate internationally. So you're totally right, that, you know, some people will move to Singapore or to Abu Dhabi or Dubai or whatever. The reality is that at least for the next, you know, sort of period, I would say 10 years or so, the concentrations of intellectual, you know, horsepower, will remain in the big mega-cities, right? You know, I- I moved from London in 2020 to go to Silicon Valley and I started my new company in Silicon Valley because the concentration of talent there is overwhelming. All the very best people are there in- in AI and software engineering. So I think it's quite likely that that's gonna remain the case for the foreseeable future. But in the long term, you're totally right. How do you ... It's another coordination problem. How do we get nation states to collectively agree that we want to try and contain, that we want to slow down? Because, as we've discussed with the proliferation of dangerous materials or on the military side, there's no use one person doing it, or one country doing it, if others race ahead. And that's the conundrum that we face.
- SBSteven Bartlett
I, um, I don't consider myself to be a pessimist in my life. I consider myself to be an optimist, generally. I think, and I always, I think that, uh, uh, as you've said, I think we have no choice but to be optimistic. And I have faith in humanity. We've done so much, so many incredible things and so, overcome so many things. And I also think I'm really logical, as in, I'm the type of person that needs evidence to change my beliefs, either way. Um, when I look at all of the whole picture, having spoken to you and s- several others on this subject matter, I see more reasons why we won't be able to contain than reasons why we will, especially when I dig into those incentives. Um, you talk about incentives at length in your book, um, at different, different points, and it's clear that all the incentives are pushing towards a lack of containment, especially in the short and mid-term, which tends to happen with new technologies. In the short and mid-term, it's like a land grab. The gold is in the stream. We all rush to get the, the shovels and the, the, you know, the sieves and stuff, and then we realize the unintended consequences of that, hopefully bef- not before it's too late. In chapter eight, you talk about unstoppable incentives at play here. "The coming wave represents the greatest economic prize in history, and scientists and technologists are all too human. They crave status, success and legacy, and they wanna be recognized as the first and the best. They're competitive and clever with a carefully nurtured sense of their place in the world and in history."
- MSMustafa Suleyman
Right.
- SBSteven Bartlett
I look at you, I look at people like Sam, um, from OpenAI, Elon, you're all humans with the same understanding of your place in history and status and success. You all want that, right?
- MSMustafa Suleyman
Right.
- SBSteven Bartlett
There's a lot of people that maybe aren't as, don't have, uh, as good a track record as you at doing the right thing, which you certainly have, that will just want the status and the success and the money. Incredibly strong incentives. I always think about incentives as being the thing that you look at-
- MSMustafa Suleyman
Exactly.
- 1:30:10 – 1:34:04
Do you feel sad about all of this?
- MSMustafa Suleyman
It's a lot to take in. This is- it's a- it's a very real reality.
- SBSteven Bartlett
Does that weigh on you?
- MSMustafa Suleyman
Yeah, it does. I mean, every day. Every day. I mean, I've- I've been working on this for many years now, and it's, uh, you know, it's- it's emotionally a lot to take in. It's- it's- it's hard to think about the far out future and how your actions today, our actions collectively, our weaknesses, our failures, that, you know, that irritation that I have that we can't learn the lessons from the pandemic, right? Like, all of those moments where you feel the frustration of governments not working properly or corporations not listening or some of the obsessions that we have in culture where we're debating, like, small things, you know? (laughs) And you're just like, "Whoa, we need to focus on the big picture here."
Episode duration: 1:46:04
Transcript of episode CTxnLsYHWuI