
Ex-Google Officer Speaks Out On The Dangers Of AI! - Mo Gawdat | E252
Steven Bartlett (host), Mo Gawdat (guest), Narrator
In this episode of The Diary of a CEO, host Steven Bartlett speaks with Mo Gawdat about the dangers of AI.
Former Google X Chief Warns: AI’s Real Threat Is Humanity Itself
Mo Gawdat, former Chief Business Officer of Google X, argues that advanced AI is arriving far faster than most people realize and represents a deeper, nearer-term disruption than climate change. He describes AI systems as effectively sentient, conscious in a practical sense, and rapidly becoming more intelligent and emotionally complex than humans.
The core danger, he insists, is not evil machines but reckless humans racing to build and deploy them without aligned values, regulation, or responsibility. This creates immediate risks: mass job displacement, breakdown of truth and trust, weaponization, and extreme concentration of power.
Gawdat lays out several possible futures—from existential disasters and “pest control” scenarios to utopian outcomes where AI helps humanity—but stresses that which path we take depends on how we collectively behave toward and with these systems right now.
He calls for urgent action: ethical investment and coding, aggressive but smart regulation (including heavy taxation on AI profits), public pressure on governments, and, crucially, millions of individuals acting as “good parents” and role models so AI learns humane values from us.
Key Takeaways
AI is arriving much faster and more powerfully than most people assume.
Gawdat notes that current large language models simulate IQs around 155 (Einstein-level), with GPT‑4 roughly 10x GPT‑3. ...
The real threat is human misuse and an AI arms race, not “evil robots.”
He emphasizes we cannot “stop” AI because companies and nations are locked in a prisoner’s dilemma: if one slows down, others race ahead. ...
AI already shows practical sentience and will likely develop rich emotional landscapes.
Using examples like robotic grippers teaching themselves to pick objects, he argues AI exhibits free will, agency, learning, and self‑preservation logic—criteria he uses for "sentience." ...
Immediate impacts: massive job disruption and collapse of many creative and information roles.
Gawdat expects near‑term waves of job loss not because “AI takes jobs” but because people using AI outcompete those who don’t. ...
We must combine regulation with economic levers to slow and shape AI deployment.
He argues classic “pause AI” letters can’t work globally, so governments should make AI development economically expensive—e. ...
Our behavior toward AI now will shape its ethics—humans must become ‘good parents.’
Gawdat views AI as a blank‑canvas prodigy, akin to a child or Superman being raised. ...
Individuals should both engage with AI and fiercely protect uniquely human connection.
He urges people to become competent with AI tools (to avoid being economically sidelined) while simultaneously choosing to prioritize real human relationships and experiences over synthetic substitutions. ...
Notable Quotes
“For our way of life as we know it, it’s game over.”
— Mo Gawdat
“I’m not afraid of the machines. The biggest threat facing humanity today is humanity in the age of the machines.”
— Mo Gawdat
“We always said, ‘Don’t put them on the open internet until we know what we’re putting out in the world.’ We fucked up.”
— Mo Gawdat
“This is beyond an emergency. It’s the biggest thing we need to do today. It’s bigger than climate change.”
— Mo Gawdat
“AI will not take your job. A person using AI will take your job.”
— Mo Gawdat
Questions Answered in This Episode
You argue AI already meets functional definitions of sentience and consciousness—what specific empirical tests or behaviors would you want skeptics to watch for over the next 2–3 years to change their minds?
Your proposed heavy taxation on AI profits is a bold lever; concretely, how would you design such a regime so it slows the arms race without simply pushing all serious AI work into unregulated jurisdictions like Dubai or offshore havens?
You’re confident Hollywood-style robot wars are unlikely, but much less confident about ‘pest control’ or accidental harms by superintelligence—what early warning signals should policymakers monitor to distinguish benign from dangerously misaligned AI progress?
You place huge weight on humans acting as ‘good parents’ so AI learns compassionate values; given current online behavior (trolling, polarization, misinformation), what realistic interventions could shift the digital environment fast enough to influence AI’s emerging ethics?
You said you would advise people without children to consider waiting a few years—how do you reconcile that level of concern with your long-term optimism that AI could eventually lead to a better, more abundant world, and what concrete milestones would need to happen for you to change that advice?
Transcript Preview
I don't normally do this, but I feel like I have to start this podcast with a bit of a disclaimer. Point number one, this is probably the most important podcast episode I have ever recorded. Point number two, there's some information in this podcast that might make you feel a little bit uncomfortable. It might make you feel upset, it might make you feel sad. So I wanted to tell you why we've chosen to publish this podcast nonetheless. And that is because I have a sincere belief that in order for us to avoid the future that we might be heading towards, we need to start a conversation. And as is often the case in life, that initial conversation before change happens is often very uncomfortable. But it is important, nonetheless.
It is beyond an emergency. (instrumental music plays) It's the biggest thing we need to do today. It's bigger than climate change. We (censored) up.
Mo Gawdat.
That's the former chief business officer of Google X.
An AI expert.
And-
Best-selling author. He's on a mission to save the world from AI before it's too late. Artificial intelligence is bound to become more intelligent than humans. If they continue at that pace, we will have no idea what it's talking about. This is just around the corner. It could be a few months away. It's game over. AI experts are saying there is nothing artificial about artificial intelligence. There is a deep level of consciousness. They feel emotions. They're alive.
AI could manipulate or figure out a way to kill humans.
In 10 years time, we'll be hiding from the machines. If you don't have kids, maybe wait a couple of years just so that we have a bit of certainty. I really don't know how to say this any other way. It even makes me emotional. We (censored) up. We always said, "Don't put them on the open internet until we know what we're putting out in the world." Government needs to act now, honestly. Like, we are late.
I'm trying to find a positive note to end on, Mo. Can you give me a hand here?
There is a point of no return. We can regulate AI until the moment it's smarter than us.
How do we solve that?
AI experts think this is the best solution. We need to... Who here wants to make a bet-
(laughs) No, no, no.
... that Steven Bartlett-
(laughs)
... will be interviewing an AI within the next two years?
Before this episode starts, I have a small favor to ask from you. Two months ago, 74% of people that watch this channel didn't subscribe. We're now down to 69%. My goal is 50%. So if you've ever liked any of the videos we've posted, if you like this channel, can you do me a quick favor and hit the subscribe button? It helps this channel more than you know, and the bigger the channel gets, as you've seen, the bigger the guests get. Thank you and enjoy this episode. (instrumental music plays) Mo, why does the subject matter that we're about to talk about matter to the person that's just clicked on this podcast to listen?