CEO Of Microsoft AI: AI Is Becoming More Dangerous And Threatening! - Mustafa Suleyman

The Diary of a CEO · Sep 4, 2023 · 1h 46m

Steven Bartlett (host), Mustafa Suleyman (guest)

Topics covered:
AI containment and the ‘coming wave’ of transformative technologies
Scaling of computation, large language models, and emergent capabilities
Existential risks: synthetic biology, engineered pathogens, autonomous systems
Incentives, race dynamics, and geopolitical competition around AI
Regulation, chokepoints, and new global governance structures
Societal impacts: jobs, meaning, abundance, and transhumanism
Personal ethics, responsibility, and emotional toll on AI leaders

In this episode of The Diary of a CEO, Steven Bartlett interviews Mustafa Suleyman about the growing dangers of AI and why it must be contained.

Mustafa Suleyman Warns: Contain AI Now Or Be Contained Later

Mustafa Suleyman, co‑founder of DeepMind and CEO of Inflection AI, explains why modern AI is both the greatest tool humanity has ever built and a potentially destabilizing, existential threat. He argues that the central challenge of the 21st century is “containment”: limiting the proliferation and autonomy of powerful AI and adjacent technologies such as synthetic biology and robotics.

Suleyman details how rapidly scaling computation, open‑source diffusion, and geopolitical competition create a “race condition” that pushes toward ever-more capable systems, while incentives, short-term politics, and optimism bias make us reluctant to confront worst‑case scenarios. He insists that containment has “never really been done before” at this scale, yet “must be possible,” or humanity risks being displaced as the dominant species.

He outlines a 10‑point containment agenda—spanning safety research, audits, chokepoints like chips and cloud APIs, and heavy taxation of frontier AI—to slow and shape development, coupled with new global governance beyond current election cycles and nation‑state rivalry. Despite admitting the odds of success are low, he maintains that if we do succeed, AI could drive radical abundance, near‑zero‑cost energy and food, and a more meritocratic, creative civilization.

The conversation oscillates between exhilaration and exhaustion: Suleyman is emotionally candid about fear, responsibility, and sadness, yet calls for widespread public engagement. He urges individuals to understand and use AI tools, push for precautionary regulation, and participate in a cultural shift that prioritizes species‑level survival over short‑term profit and national competition.

Key Takeaways

Containment—not raw innovation—is the defining AI problem of this century.

Suleyman argues that the core challenge of the next 30–50 years is containing the proliferation and misuse of highly capable AI and related technologies. ...

Exponential scaling of compute is driving qualitatively new AI behavior.

DeepMind’s early Atari experiments used about 2 petaflops; a decade later, Inflection’s largest model uses about 10 billion petaflops. ...

Open-source diffusion and cheapening of past frontier models make containment harder over time.

Today’s cutting‑edge models are centralized, expensive, and somewhat regulatable. ...

Synthetic biology plus AI could enable engineered pandemics, demanding strict access controls.

Rapidly falling costs of genome sequencing and DNA synthesis, combined with powerful AI models, create a scenario where systems can help design pathogens that are more transmissible or lethal. ...

Incentives at every level currently push against containment.

Scientists want status and legacy; companies seek profit and market dominance; politicians chase GDP growth within 4‑year election cycles; nation‑states fear being outpaced by rivals. ...

Chokepoints, taxation, and audits are concrete levers to slow and shape AI.

Suleyman outlines a containment toolkit: (1) safety research and AI that defends against AI, (2) independent audits of powerful models, (3) chokepoints at undersea cables, GPUs, and cloud APIs to control who can train and deploy frontier systems, and (4) significantly higher taxes on AI firms to fund reskilling, education, and social adaptation and to introduce economic ‘friction’. ...

If containment works, AI could drive ‘radical abundance’ and transform human work.

On the upside, Suleyman envisions AI solving battery storage, enabling near‑free renewable energy, ultracheap desalination, and dramatically lower costs for food, transport, drugs, and healthcare. ...

Notable Quotes

This really is the tool that helps us tackle all the challenges that we’re facing as a species… and yet, if we don’t shape it, it happens to us.

Mustafa Suleyman

On the face of it, it does look like containment isn’t possible… The last chapter of my book is called ‘Containment Must Be Possible.’

Mustafa Suleyman

It really does feel like a new species, and that has to be brought under control. We cannot allow ourselves to be dislodged as the dominant species on this planet.

Mustafa Suleyman

A tiny group of people who wish to deliberately cause harm are gonna have access to tools that can instantly destabilize our world.

Mustafa Suleyman

I wanna know: will we contain it? I think the odds are low.

Mustafa Suleyman

Questions Answered in This Episode

You distinguish between not believing containment *is* possible and insisting it *must* be possible; what concrete empirical indicator, if any, would make you change your mind and say containment has definitively failed?

Mustafa Suleyman, co‑founder of DeepMind and CEO of Inflection AI, explains why modern AI is both the greatest tool humanity has ever built and a potentially destabilizing, existential threat. ...

You argue that certain forms of autonomy (like battlefield robots or self-improving online agents) should be off-limits—what specific technical red lines would you write into international law to operationalize that ban?

Suleyman details how rapidly scaling computation, open‑source diffusion, and geopolitical competition create a “race condition” that pushes toward ever-more capable systems, while incentives, short-term politics, and optimism bias make us reluctant to confront worst‑case scenarios. ...

Given your admission that incentives make the odds of voluntary containment ‘low,’ what kind of catalyzing event—short of a full-scale catastrophe—might realistically be strong enough to shift global politics toward serious cooperation?

He outlines a 10‑point containment agenda—spanning safety research, audits, chokepoints like chips and cloud APIs, and heavy taxation of frontier AI—to slow and shape development, coupled with new global governance beyond current election cycles and nation‑state rivalry. ...

You’re skeptical of transhumanist uploading but bullish on radical abundance and new forms of meaning; how should education systems change now to prepare children for a world where traditional jobs may largely disappear?

The conversation oscillates between exhilaration and exhaustion: Suleyman is emotionally candid about fear, responsibility, and sadness, yet calls for widespread public engagement. ...

As a founder benefiting from the AI boom while warning about its dangers, how do you personally decide when a commercial opportunity at Inflection should be declined or slowed for safety reasons, and can you share a concrete example where you have already done so?

Transcript Preview

Steven Bartlett

Are you uncomfortable talking about this?

Mustafa Suleyman

Yeah. I mean, it's pretty wild, right?

Steven Bartlett

Mustafa Suleyman, the billionaire founder of Google's AI technology. He's played a key role in the development of AI from its first critical steps.

Mustafa Suleyman

2020, I moved to work on Google's chat bot. It was the ultimate technology. We can use them to turbocharge our knowledge unlike anything else.

Steven Bartlett

Why didn't they release it?

Mustafa Suleyman

We were nervous. We were nervous. Every organization is gonna race to get their hands on intelligence, and that's gonna be incredibly disruptive. This technology can be used to identify cancerous tumors as it can to identify a target on the battlefield. A tiny group of people who wish to cause harm are gonna have access to tools that can instantly destabilize our world. That's the challenge, how to stop something that can cause harm or potentially kill. That's where we need containment.

Steven Bartlett

Do you think that it is containable?

Mustafa Suleyman

It has to be possible.

Steven Bartlett

Why?

Mustafa Suleyman

It must be possible.

Steven Bartlett

Why must it be?

Mustafa Suleyman

Because otherwise, it contains us.

Steven Bartlett

Yet you chose to build a company in this space. Why did you do that?

Mustafa Suleyman

Because I want to design an AI that's on your side. I honestly think that if we succeed, everything is a lot cheaper. It's gonna power new forms of transportation, reduce the cost of healthcare.

Steven Bartlett

But what if we fail?

Mustafa Suleyman

The really painful answer to that question is that...

Steven Bartlett

Do you ever get sad about it?

Mustafa Suleyman

Yeah. It's intense.

Steven Bartlett

I think this is fascinating. I looked at the back end of our YouTube channel, and it says that since this channel started, 69.9% of you that watch it frequently haven't yet hit the subscribe button. So, I have a favor to ask you. If you've ever watched this channel and enjoyed the content, if you're enjoying this episode right now, please, could I ask a small favor? Please hit the subscribe button. Helps this channel more than I can explain, and I promise, if you do that, to return the favor, we will make this show better, and better, and better, and better, and better. That's a promise I'm willing to make if you hit the subscribe button. Do we have a deal? Everything that's going on with artificial intelligence now, and, um, this new wave and all these terms like AGI and I saw another term in your, your, your book called ACI, the first time I'd heard that term. How do you feel about it emotionally? If you had to encapsulate how you feel emotionally about what's going on in this moment, how would you d- what words would you use?

Mustafa Suleyman

I would say in the past, it would've been petrified. And I think that over time, as you really think through the consequences and the pros and cons and the trajectory that we're on, you adapt and you understand that actually there is something incredibly inevitable about this trajectory, and that we have to wrap our arms around it and guide it and control it as a collective species. As a, as humanity. And I think the more you realize how much influence we collectively can have over this outcome, the more empowering it is. Because on the face of it, this is really gonna be the tool that helps us tackle all the challenges that we're facing as a species, right? We need to fix water desalination. We need to grow food 100X cheaper than we currently do. We need renewable energy to be, you know, ubiquitous and everywhere in our lives. We need to adapt to climate change. Everywhere you look, in the next 50 years, we have to do more with less. And there are very, very few proposals, let alone practical solutions, for how we get there. Training machines to help us as aides, scientific research partners, inventors, creators, is absolutely essential. And so the upside is phenomenal. It's enormous. But AI isn't just a thing. It's not an inevitable whole. Its form isn't inevitable, right? Its form, the exact way that it manifests and appears in our everyday lives, and the way that it's governed and who it's owned by and how it's trained, that is a question that is up to us collectively as a species to figure out over the next decade. Because if we don't embrace that challenge, then it happens to us. And that's really what I'm, I have been wrestling with for 15 years of my career, is how to intervene in a way that this really does benefit everybody, and those benefits far, far outweigh the potential risks.
