The Diary of a CEO

CEO Of Microsoft AI: AI Is Becoming More Dangerous And Threatening! - Mustafa Suleyman

If You Enjoyed This Episode You Will LOVE This One With Mo Gawdat: https://youtu.be/bk-nQ7HF6k4?si=Xn9WWUB2nca77Jd9

Timestamps:
00:00 Intro
02:11 How do you feel emotionally about what's going on with AI?
09:17 What's surprised you most about the last decade?
12:51 I'm scared of this coming wave.
16:04 Is containment possible?
23:53 What will these AI biological beings look like?
27:08 Would we be able to regulate AI?
33:10 In 30 years' time, do you think we would have contained AI?
35:43 Why would such a being want to interact with us?
46:35 Quantum computers & their potential
57:04 Cybersecurity
01:03:38 Why did you build a company in this space knowing the problems?
01:05:55 Will governments help us regulate it?
01:15:29 What do we need to do to contain it?
01:30:10 Do you feel sad about all of this?
01:34:04 We'll slowly move more toward AI interactions over human ones.
01:36:01 What should young people be dedicating their lives to?
01:37:53 What happens if we fail in containment, and what happens if we succeed?
01:42:31 The last guest's question

Are you ready to think like a CEO? Gain access to the 100 CEOs newsletter here: https://bit.ly/100-ceos-newsletter

You can purchase Mustafa's book, 'The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma', here: https://amzn.to/3Qudl2Z

Follow Mustafa:
Twitter: https://bit.ly/45FZ0qr

Join this channel to get access to perks: https://bit.ly/3Dpmgx5

My new book! 'The 33 Laws Of Business & Life' pre-order link: https://smarturl.it/DOACbook

Follow me:
Instagram: http://bit.ly/3nIkGAZ
Twitter: http://bit.ly/3ztHuHm
Linkedin: https://bit.ly/41Fl95Q
Telegram: http://bit.ly/3nJYxST

Sponsors:
Huel: https://g2ul0.app.link/G4RjcdKNKsb
Airbnb: http://bit.ly/40TcyNr

Host: Steven Bartlett · Guest: Mustafa Suleyman
Sep 3, 2023 · 1h 46m · Watch on YouTube ↗

At a glance

WHAT IT’S REALLY ABOUT

Mustafa Suleyman Warns: Contain AI Now Or Be Contained Later

  1. Mustafa Suleyman, co‑founder of DeepMind and CEO of Inflection AI, explains why modern AI is both the greatest tool humanity has ever built and a potentially destabilizing, existential threat. He argues that the central challenge of the 21st century is “containment”: limiting the proliferation and autonomy of powerful AI and adjacent technologies such as synthetic biology and robotics.
  2. Suleyman details how rapidly scaling computation, open‑source diffusion, and geopolitical competition create a “race condition” that pushes toward ever-more capable systems, while incentives, short-term politics, and optimism bias make us reluctant to confront worst‑case scenarios. He insists that containment has “never really been done before” at this scale, yet “must be possible,” or humanity risks being displaced as the dominant species.
  3. He outlines a 10‑point containment agenda—spanning safety research, audits, chokepoints like chips and cloud APIs, and heavy taxation of frontier AI—to slow and shape development, coupled with new global governance beyond current election cycles and nation‑state rivalry. Despite admitting the odds of success are low, he maintains that if we do succeed, AI could drive radical abundance, near‑zero‑cost energy and food, and a more meritocratic, creative civilization.
  4. The conversation oscillates between exhilaration and exhaustion: Suleyman is emotionally candid about fear, responsibility, and sadness, yet calls for widespread public engagement. He urges individuals to understand and use AI tools, push for precautionary regulation, and participate in a cultural shift that prioritizes species‑level survival over short‑term profit and national competition.

IDEAS WORTH REMEMBERING

5 ideas

Containment—not raw innovation—is the defining AI problem of this century.

Suleyman argues that the core challenge of the next 30–50 years is containing the proliferation and misuse of highly capable AI and related technologies. Historically, we have almost never permanently banned or tightly contained a general‑purpose technology, especially one that is highly commercially beneficial and militarily useful. Yet he insists we must now develop an unprecedented ‘precautionary principle’—slowing and constraining frontier systems before catastrophic failure modes emerge.

Exponential scaling of compute is driving qualitatively new AI behavior.

DeepMind’s early Atari experiments used about 2 petaflops; a decade later, Inflection’s largest model uses about 10 billion petaflops. Suleyman notes we have effectively 10x’ed compute each year, leading from crude digit generation to human‑like conversational agents and photorealistic video. He was unsurprised by progress in images and audio, but shocked that the same scaling laws produced large language models that operate in abstract idea space and feel like intelligent partners.
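The scaling claim above can be sanity-checked with simple arithmetic. This is a minimal sketch, using only the figures as quoted in the episode (roughly 2 petaflops for the Atari-era work and roughly 10 billion petaflops for Inflection's largest model, about a decade apart); it is illustrative, not an authoritative measurement.

```python
# Rough check of the compute-scaling figures quoted in the episode.
early = 2                  # petaflops, DeepMind's early Atari experiments (as quoted)
latest = 10_000_000_000    # petaflops, Inflection's largest model (as quoted)
years = 10                 # roughly a decade between the two

growth = latest / early            # total growth factor: 5e9
annual = growth ** (1 / years)     # implied year-over-year multiplier

print(f"total growth: {growth:.1e}x")
print(f"implied annual multiplier: {annual:.1f}x")  # comes out near 10x per year
```

The implied annual multiplier lands around 9.3x, which is consistent with Suleyman's "effectively 10x'ed compute each year."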

Open-source diffusion and cheapening of past frontier models make containment harder over time.

Today’s cutting‑edge models are centralized, expensive, and somewhat regulatable. But models like GPT‑3 that were state-of-the-art only a few years ago are now open source, compressed, and runnable on modest hardware. Suleyman expects that within about five years, individuals—including malicious actors—will be able to download and run highly capable models locally, making it possible for “a kid in Russia” to use tools that can design dangerous pathogens or sophisticated cyber attacks.

Synthetic biology plus AI could enable engineered pandemics, demanding strict access controls.

Rapidly falling costs of genome sequencing and DNA synthesis, combined with powerful AI models, create a scenario where systems can help design pathogens that are more transmissible or lethal. Suleyman says such capabilities will require strict controls on compute, model weights, cloud APIs, and biological materials—treated more like anthrax or uranium than software. He acknowledges this will anger startups and small developers, but calls it a non‑negotiable tradeoff in a ‘dangerous materials’ regime.

Incentives at every level currently push against containment.

Scientists want status and legacy; companies seek profit and market dominance; politicians chase GDP growth within 4‑year election cycles; nation‑states fear being outpaced by rivals. Suleyman and the host agree this creates a powerful “unstoppable incentives” dynamic, similar to nuclear arms races or the climate crisis. He bluntly concedes that, as things stand, “the odds are low” that we voluntarily slow down before a catastrophe or clear mutually-assured-destruction moment forces cooperation.

WORDS WORTH SAVING

5 quotes

This really is the tool that helps us tackle all the challenges that we’re facing as a species… and yet, if we don’t shape it, it happens to us.

Mustafa Suleyman

On the face of it, it does look like containment isn’t possible… The last chapter of my book is called ‘Containment Must Be Possible.’

Mustafa Suleyman

It really does feel like a new species, and that has to be brought under control. We cannot allow ourselves to be dislodged as the dominant species on this planet.

Mustafa Suleyman

A tiny group of people who wish to deliberately cause harm are gonna have access to tools that can instantly destabilize our world.

Mustafa Suleyman

I wanna know will. Will we contain it? I think the odds are low.

Mustafa Suleyman

Topics:
- AI containment and the 'coming wave' of transformative technologies
- Scaling of computation, large language models, and emergent capabilities
- Existential risks: synthetic biology, engineered pathogens, autonomous systems
- Incentives, race dynamics, and geopolitical competition around AI
- Regulation, chokepoints, and new global governance structures
- Societal impacts: jobs, meaning, abundance, and transhumanism
- Personal ethics, responsibility, and emotional toll on AI leaders

High quality AI-generated summary created from speaker-labeled transcript.
